# PolyLM: Learning about Polysemy through Language Modeling

Alan Ansell1,2, Felipe Bravo-Marquez3, Bernhard Pfahringer2

1Language Technology Lab, University of Cambridge
2Department of Computer Science, University of Waikato
3Department of Computer Science, University of Chile & IMFD

###### Abstract

To avoid the “meaning conflation deficiency” of word embeddings, a number of models have aimed to embed individual word senses. These methods at one time performed well on tasks such as word sense induction (WSI), but they have since been overtaken by task-specific techniques which exploit contextualized embeddings. However, sense embeddings and contextualization need not be mutually exclusive. We introduce PolyLM, a method which formulates the task of learning sense embeddings as a language modeling problem, allowing contextualization techniques to be applied. PolyLM is based on two underlying assumptions about word senses: firstly, that the probability of a word occurring in a given context is equal to the sum of the probabilities of its individual senses occurring; and secondly, that for a given occurrence of a word, one of its senses tends to be much more plausible in the context than the others. We evaluate PolyLM on WSI, showing that it performs considerably better than previous sense embedding techniques, and matches the current state-of-the-art specialized WSI method despite having six times fewer parameters. Code and pre-trained models are available at https://github.com/AlanAnsell/PolyLM.

## 1 Introduction

Much work in NLP has been dedicated to vector representations of words, but it has been recognized since as early as Schütze (1998) that such representations fail to capture the polysemous nature of many words, conflating their multiple senses into a single point in semantic space.
There have been several attempts at embedding individual word senses to avoid this issue, termed the “meaning conflation deficiency” by Camacho-Collados and Pilehvar (2018) in their survey of the area. We propose PolyLM, an unsupervised sense embedding model which is effective and easy to apply to downstream tasks. PolyLM can be thought of as both a (masked) language model and a sense model, as it calculates a probability distribution over both words and word senses at masked positions. The formulation is derived from two observations about word senses: firstly, that the probability of a word occurring in a given context is equal to the sum of the probabilities of its individual senses occurring; and secondly, that for a given occurrence of a word, one of its senses tends to be much more plausible in the context than the others.

There are several reasons for the interest in sense representations. The first is the downsides associated with the meaning conflation deficiency. Word embedding models can have difficulty distinguishing which sense of an ambiguous word applies in a given context Yaghoobzadeh and Schütze (2016). Additionally, homonymy and polysemy cause distortion in word embeddings: for instance, we would find the unrelated words left and wrong unreasonably close in the vector space due to their similarity to two different senses of the word right, an effect noted by Neelakantan et al. (2014) and illustrated in Figure 1. Intuitively, we would expect sense embedding models to gain superior semantic understanding by avoiding these problems.

Figure 1: (a) Word embeddings; (b) sense embeddings. An illustration of the meaning conflation deficiency, showing selected word and sense embeddings learned by PolyLM visualized using t-SNE Maaten and Hinton (2008) and adjustText Flyamer (2017). Sense embeddings were learned by training PolyLM${}_{\text{SMALL}}$ with the standard 8 senses per word; word embeddings were learned by training PolyLM${}_{\text{SMALL}}$ with a single sense per word. Note that both models were trained on unlemmatized data, unlike those used in the WSI experiments. The occurrence of closely related polysemous words nearby in the word embedding space (i.e. left and right) causes unrelated words to be closer together (e.g. left and wrong) and related words to be further apart (e.g. right and east) than they otherwise would be. The use of sense embeddings avoids such distortion. PolyLM is capable of detecting comparatively rare word senses, such as the political senses of left and right, and the use of smith and mason to refer to tradespeople.

In addition to well-established applications for sense representations such as word sense disambiguation (WSD) and induction (WSI), another interesting use case is the automatic construction of lexical resources Neale (2018). While there are existing human-curated word sense inventories for English such as WordNet Miller (1995), these are expensive to create and are unavailable for most languages. Panchenko (2016) showed that sense embeddings learned using the model of Bartunov et al. (2016) could be linked with word senses contained in BabelNet Navigli and Ponzetto (2012) with a reasonable degree of precision, although the mapping struggled with recall. PolyLM represents a significant advance over Bartunov et al.’s model in terms of WSI performance, so it seems reasonable to imagine that this approach to lexical resource construction might now be more feasible.

The emergence of contextualized models such as ELMo Peters et al. (2018) and BERT Devlin et al. (2019) has had a tremendous impact on the area of semantic representation.
Rather than representing words using a single embedding, or even a set of sense embeddings, these models allow words to be represented using an infinite set of possible embeddings depending on the context. This approach has been very effective across NLP, and many state-of-the-art systems incorporate contextualized models, including systems for WSD and WSI.

The success of contextualized models raises the question of whether there is still value in learning discrete sense representations. However, contextualized models still rely on word embeddings, and are therefore subject to the meaning conflation deficiency. Furthermore, it could be argued that it is inefficient to have the same representation size for all words regardless of how diverse their range of senses is. Another drawback is that before they can be applied to word sense-related tasks, an adaptation step such as clustering to induce discrete senses or fine-tuning is generally required, which is often expensive in terms of both research and compute time.

The contributions of this paper can be summarized as follows:

* We propose PolyLM, an end-to-end, unsupervised neural sense embedding model derived from two simple assumptions about word senses. We demonstrate that PolyLM learns senses which correspond well to human notions by showing that it performs well at WSI.
* PolyLM is flexible in that it can use any “contextualizer” (a useful term coined by Liu et al. (2019)), so it will remain relevant as contextualization techniques improve.
* We reduce the effect of the meaning conflation deficiency by disambiguating word senses at the input with a neural “disambiguation layer.” We show that good performance on WSI can be achieved using the output of this layer alone, suggesting that it could be a useful component in many neural networks for language understanding.
## 2 Related Work

One of the first works in unsupervised learning of sense representations was by Schütze (1998), who proposed a two-step process, where vector representations are first derived for each context containing an ambiguous word, and these are then clustered into a pre-defined number of groups. Huang et al. (2012) added a third step, where after sense-labeling each word according to its context cluster, sense representations are learned through neural language modeling.

A number of later approaches employed a joint training approach, where sense labeling and sense representation learning happen in parallel. Neelakantan et al. (2014), Li and Jurafsky (2015) and Bartunov et al. (2016) each proposed multi-sense variants of the Skip-Gram model Mikolov et al. (2013). Various approaches were tried for determining the number of senses per word: for instance, Li and Jurafsky and Bartunov et al. used Chinese Restaurant Processes and Dirichlet Processes respectively to automatically learn an appropriate number of senses for each word. Many joint training approaches have the disadvantage that they create ambiguity in the context representation by representing context words with word embeddings in order to avoid considering the exponential number of possible sense labelings for the context. Qiu et al. (2016) and Lee and Chen (2017) propose purely sense-based approaches which can sense-label the input efficiently.

Arora et al. (2018) took a novel approach to the problem of learning word senses, demonstrating that the embedding learned by traditional techniques for an ambiguous word tends to be very close to a linear combination of the hypothetical vectors corresponding to its individual senses. They proposed a method for recovering the underlying sense vectors and coefficients, and evaluated their system on WSI.
Since the emergence of contextualized models, there have been a number of other systems which have exploited their powerful semantic representations for specific tasks such as word sense disambiguation Huang et al. (2019); Vial et al. (2019) and induction Amrami and Goldberg (2018, 2019); however, none of these methods creates explicit sense embeddings.

## 3 PolyLM

### 3.1 Overview

Consider a typical neural language model. Each word $w$ in a vocabulary $V$ is assigned a single embedding, resulting in an embedding matrix $M\in\mathbb{R}^{|V|\times d}$, where $d$ is the embedding dimensionality. The probability of $w$ occurring in a context $c$ is estimated as

$\displaystyle\mathbb{P}(w\ |\ c)=\big[\text{softmax}\big(M\bm{y}(c)+\bm{a}\big)\big]_{w},$ (1)

where $\bm{y}(c)\in\mathbb{R}^{d}$ is a vector representation of $c$ and $\bm{a}\in\mathbb{R}^{|V|}$ is a trainable bias vector. In BERT Devlin et al. (2019), for instance, $\bm{y}(c)$ corresponds to the final output of multiple Transformer encoder layers Vaswani et al. (2017).

Now suppose that for each $w\in V$, there is a corresponding set $S_{w}$ of sememes, or senses which $w$ can have. For instance, intuitively we might have $S_{\text{rock}}=\{\text{rock:stone},\ \text{rock:musical genre},\ \text{rock:shake}\}$. We assume that the $S_{w}$ are disjoint, i.e. $S_{w}\cap S_{w^{\prime}}=\emptyset$ whenever $w\neq w^{\prime}$, and we define the full sense inventory $S=\bigcup_{w\in V}S_{w}$.

Context induces specific senses for the words it contains. Thus a passage of text can be thought of as a sequence of sememes as well as a sequence of words. The first observation underlying PolyLM is that the probability of a word $w$ occurring in a context $c$ is equal to the sum of the probabilities of $w$’s component sememes occurring in the context, i.e.
$\displaystyle\mathbb{P}(w\ |\ c)=\sum_{s\in S_{w}}\mathbb{P}(s\ |\ c).$ (2)

We wish to learn representations for individual senses, and so we assign an embedding to each sememe in our sense inventory, resulting in a matrix $E$ with dimension $|S|\times d$ and bias vector $\bm{b}$ of dimension $|S|$. Note that this assumes that we know the number of senses of each word a priori, an assumption whose consequences we discuss later. Following Eq. 1, we define the vector $\bm{p}(c)\in\mathbb{R}^{|S|}$ of sememe probabilities in a context $c$ as

$\displaystyle\bm{p}(c)=\text{softmax}\big(E\bm{y}(c)+\bm{b}\big).$ (3)

Considering Eq. 2, we have

$\displaystyle\mathbb{P}(w\ |\ c)=\sum_{s\in S_{w}}p(c)_{s},$ (4)

allowing us to formulate the problem of learning sense representations with a language modeling objective.

PolyLM is constructed from three components: the input layer, which represents the input tokens as aggregates of their sense embeddings; the disambiguation layer, which attempts to determine the contextually appropriate sense embeddings for the input; and the prediction layer, which implements the language modeling objective.

We adopt the masked language modeling (MLM) task used for training BERT. When training, we select a subset $T\subset\{1,2,...,n\}$ of the tokens in the input sequence as targets for prediction, and produce a masked version $c^{\prime}=w^{\prime}_{1},w^{\prime}_{2},...,w^{\prime}_{n}$ of the original sequence $c=w_{1},w_{2},...,w_{n}$ as follows: 15% of tokens are chosen at random as targets, of which 80% are replaced with a special [MASK] token, 10% are replaced with a random token, and 10% are left unchanged.
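The decomposition in Eqs. 3 and 4 can be illustrated with a minimal NumPy sketch; the toy vocabulary, sense inventory, dimensions and random initializations below are all invented for illustration and are not the trained model:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical toy inventory: each word owns a disjoint set of sememe indices.
senses_of = {"rock": [0, 1, 2], "jazz": [3], "shake": [4, 5]}
num_senses = 6

rng = np.random.default_rng(0)
E = rng.normal(size=(num_senses, 8))   # sense embedding matrix (|S| x d)
b = np.zeros(num_senses)               # sense bias vector
y = rng.normal(size=8)                 # a context representation

p = softmax(E @ y + b)                 # Eq. 3: distribution over all sememes

# Eq. 4: P(w | c) is the sum of the probabilities of w's sememes.
p_word = {w: p[s].sum() for w, s in senses_of.items()}

# Because the S_w partition S, the word probabilities still form a
# valid distribution over the vocabulary.
assert abs(sum(p_word.values()) - 1.0) < 1e-9
```

The key point is that a single softmax over the sense inventory induces a valid distribution over words for free, which is what lets sense learning be driven by a language modeling objective.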
Figure 2: Architecture diagram for PolyLM when training, illustrated on the sentence “I like apple pie.”, where the word “apple” is chosen as a target and masked (note that “apple” is ambiguous when tokens are
lower-cased, as it may refer to a fruit or a technology company). At inference time, the bottom components (up to and including $\bm{q}^{D}(c)$) do not need to be evaluated, and the sequence may not be masked at the input.

### 3.2 Input Layer

We define a contextualizer to be a function which maps a sequence of input representations $\bm{x}_{1},\bm{x}_{2},...,\bm{x}_{n}\in\mathbb{R}^{d}$ to a corresponding sequence of output representations $\bm{y}_{1},\bm{y}_{2},...,\bm{y}_{n}\in\mathbb{R}^{d}$. Recurrent Neural Networks and Transformer architectures are both commonly used as contextualizers for language modeling. Typically the input representations are drawn from an embedding matrix $I\in\mathbb{R}^{|V|\times d}$. It has become common (e.g. in BERT) to set $I$ equal to $O$, the embedding matrix used at the language modeling output, as recommended by Press and Wolf (2017), and thus have a single embedding matrix $E$.

The issue of input representation poses a problem for our model. Our output embeddings $E\in\mathbb{R}^{|S|\times d}$ correspond to sememes. We cannot straightforwardly tie our input and output embeddings as Press and Wolf suggest, because we receive words rather than sememes as input. We solve this problem by setting the input representation of a word to be a convex combination of the representations of its sememes, i.e.

$\displaystyle\bm{x}(w)=\sum_{s\in S_{w}}\lambda_{ws}\bm{e}_{s},$ (5)

where $\bm{e}_{s}$ is the row of $E$ corresponding to sememe $s$, and $\bm{\lambda}_{w}$ is a learnable weight vector with the properties that $\sum_{s\in S_{w}}\lambda_{ws}=1$ and $\bm{\lambda}_{w}\geq\bm{0}$ (in practice, $\bm{\lambda}_{w}$ is the softmax of an underlying, unconstrained variable vector).

### 3.3 Disambiguation Layer

The disambiguation layer attempts to infer the contextually appropriate sememe embeddings for the input based on the conflated representations from the input layer.
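The convex-combination input representation of Eq. 5 might look like the following sketch, where the sizes and initializations are illustrative rather than taken from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
d, k = 8, 3                      # embedding size, number of senses of word w
E_w = rng.normal(size=(k, d))    # rows of E for w's sememes

# lambda_w is the softmax of an unconstrained trainable vector, so it is
# automatically non-negative and sums to one: a convex combination (Eq. 5).
u_w = rng.normal(size=k)         # hypothetical unconstrained variable vector
lam_w = softmax(u_w)

x_w = lam_w @ E_w                # input representation x(w), shape (d,)

assert np.isclose(lam_w.sum(), 1.0) and (lam_w >= 0).all()
```

Parameterizing the weights through a softmax means the convexity constraints never need to be enforced explicitly during optimization.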
Representations $\bm{x}(w^{\prime}_{1}),\bm{x}(w^{\prime}_{2}),...,\bm{x}(w^{\prime}_{n})$ of $c^{\prime}$, calculated according to Eq. 5, are fed into a contextualizer instance $C^{D}$, which outputs representations $\bm{y}_{1}^{D}(c^{\prime}),\bm{y}_{2}^{D}(c^{\prime}),...,\bm{y}_{n}^{D}(c^{\prime})$. We use these representations to calculate a probability distribution over each sense of the tokens in the input:

$\displaystyle\bm{q}_{i}^{D}(c^{\prime})=\text{softmax}\big(E^{(w^{\prime}_{i})}\bm{y}_{i}^{D}(c^{\prime})+\bm{b}^{(w^{\prime}_{i})}\big),$ (6)

where $E^{(w^{\prime}_{i})}$ is a submatrix of $E$ containing only the rows corresponding to senses of token $w^{\prime}_{i}$, and similarly $\bm{b}^{(w^{\prime}_{i})}$ is a subvector of a learnable bias vector $\bm{b}\in\mathbb{R}^{|S|}$. In other terms,

$\displaystyle q_{is}^{D}(c^{\prime})=\frac{e^{\bm{e}_{s}^{\top}\bm{y}_{i}^{D}(c^{\prime})+b_{s}}}{\sum\limits_{s^{\prime}\in S_{w^{\prime}_{i}}}e^{\bm{e}_{s^{\prime}}^{\top}\bm{y}_{i}^{D}(c^{\prime})+b_{s^{\prime}}}},$ (7)

where $s\in S_{w^{\prime}_{i}}$. $q_{is}^{D}(c^{\prime})$ corresponds to the probability that the $i$th token in sequence $c^{\prime}$ has sense $s$.

The disambiguated representation of a token could simply be its highest-probability sememe embedding in the context, but to allow gradients to flow through the disambiguation layer, we take the sum of the sememe embeddings weighted by their probabilities:

$\displaystyle\bm{x}_{i}^{P}(c^{\prime})=\sum_{s\in S_{w^{\prime}_{i}}}q_{is}^{D}(c^{\prime})\bm{e}_{s}.$ (8)

### 3.4 Prediction Layer

The prediction layer maps a sequence of disambiguated input representations onto a corresponding set of output representations, and from each output representation estimates the probability of every sememe in the sense inventory occurring at the corresponding position of the sequence.
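The disambiguation step of Eqs. 6–8 above amounts to a restricted softmax over the observed token's senses, followed by a probability-weighted average of its sememe embeddings. A toy sketch (all sizes and initializations invented):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(2)
d, k = 8, 3
E_w = rng.normal(size=(k, d))    # E^(w'_i): sememe embeddings of token w'_i
b_w = np.zeros(k)                # b^(w'_i): corresponding bias subvector
y_i = rng.normal(size=d)         # disambiguation output y_i^D(c')

# Eqs. 6-7: softmax restricted to the senses of the observed token.
q_i = softmax(E_w @ y_i + b_w)

# Eq. 8: soft, differentiable disambiguated representation, rather than
# a hard argmax over senses.
x_i_P = q_i @ E_w

assert np.isclose(q_i.sum(), 1.0)
```

The weighted sum rather than an argmax is what keeps the whole pipeline differentiable end to end.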
Disambiguated representations $\bm{x}_{1}^{P}(c^{\prime}),\bm{x}_{2}^{P}(c^{\prime}),...,\bm{x}_{n}^{P}(c^{\prime})$ are fed into another contextualizer instance $C^{P}$, which returns output representations $\bm{y}_{1}^{P}(c^{\prime}),\bm{y}_{2}^{P}(c^{\prime}),...,\bm{y}_{n}^{P}(c^{\prime})$. These are used to calculate a probability distribution over the entire sense inventory, as prescribed by Eq. 3:

$\displaystyle\bm{p}_{i}(c^{\prime})=\text{softmax}\big(E\bm{y}_{i}^{P}(c^{\prime})+\bm{b}\big).$ (9)

We define an additional set of probabilities $\bm{q}^{P}$ analogous to $\bm{q}^{D}$ defined in Eq. 6:

$\displaystyle\bm{q}_{i}^{P}(c^{\prime},c)=\text{softmax}\big(E^{(w_{i})}\bm{y}_{i}^{P}(c^{\prime})+\bm{b}^{(w_{i})}\big).$ (10)

$\bm{q}_{i}^{P}$ takes both $c^{\prime}$ and the unmasked sequence $c$ as arguments because we are interested in the sense probabilities of the words $w_{i}$ that actually occurred. $\bm{q}_{i}^{P}$ will be used later for defining the loss function and is useful for downstream tasks.

### 3.5 Loss Function

We seek to minimize a loss function $J$ with three components, each of which is explained below:

$\displaystyle J(c,c^{\prime},T)=J^{LM}(c,c^{\prime},T)+J^{D}(c,c^{\prime},T)+J^{M}(c,c^{\prime},T).$ (11)

#### 3.5.1 Language Modeling Loss

The language modeling loss $J^{LM}$ is defined as the mean negative log likelihood of the target tokens occurring:

$\displaystyle J^{LM}(c,c^{\prime},T)=\frac{-1}{|T|}\sum_{i\in T}\log\hat{\mathbb{P}}(w_{i}\ |\ c^{\prime})$ (12)

$\displaystyle\qquad=\frac{-1}{|T|}\sum_{i\in T}\log\sum_{s\in S_{w_{i}}}\hat{\mathbb{P}}(\text{sememe }i\text{ is }s\ |\ c^{\prime})$ (13)

$\displaystyle\qquad=\frac{-1}{|T|}\sum_{i\in T}\log\sum_{s\in S_{w_{i}}}p_{is}(c^{\prime}),$ (14)

where $\bm{p}_{i}$ is as defined in Eq. 9.

#### 3.5.2 Distinctness Loss

Recall that we assume in advance a number of senses for each word.
In practice we guess a relatively high number to avoid missing senses. When we overestimate the number of senses, we find that two different sense embeddings for a word converge to essentially the same meaning. The aim of the distinctness loss is to ensure that each sense has a distinct meaning, and to “kill off” superfluous senses by causing them to have very low probability in all contexts. The second key observation of PolyLM is that if the sememes corresponding to a word $w$ are distinct, then in contexts where $w$ occurs, we would expect one of these sememes to have a high estimated probability of occurring, and the rest to have a low probability. The distinctness loss, given by

$\displaystyle J^{D}(c,c^{\prime},T)=\frac{-1}{r|T|}\sum_{i\in T}\log\sum_{s\in S_{w_{i}}}\big(q_{is}^{P}(c^{\prime},c)\big)^{r},$ (15)

with hyperparameter $r>1$, encourages this separation to occur. A full justification is given in Appendix A.

#### 3.5.3 Match Loss

Without extra supervision, the disambiguation layer tends to very quickly allocate almost all of the probability mass for a word to a single one of its senses. This appears to be due to a “rich get richer” effect in Eq. 8, where the sense embedding with the highest weight has larger gradients associated with it. A more reliable source of sense probabilities is the output of the prediction layer, as this is more closely associated with the ground truth. Therefore we encourage the disambiguation sense probabilities $\bm{q}^{D}$ to be similar to the prediction sense probabilities $\bm{q}^{P}$ by adding a sense probability “match loss,” which is proportional to the cosine similarity between $\bm{q}^{D}$ and $\bm{q}^{P}$. Because $\bm{q}_{i}^{D}(c^{\prime})$ is meaningless when token $i$ is replaced with [MASK], when calculating the match loss we evaluate the disambiguation layer on the unmasked sequence (shown with bottom-up arrows in Figure 2), obtaining $\bm{q}_{i}^{D}(c)$.
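Returning to Eq. 15, the distinctness loss can be sketched in NumPy as follows; the toy sense distributions below are invented to show the intended effect:

```python
import numpy as np

def distinctness_loss(q_P, r=1.5):
    """Distinctness loss (Eq. 15).

    q_P: list of arrays, one per target position i, where q_P[i][s] is the
    prediction-layer probability q^P_{is} of each sense of target word w_i.
    Requires r > 1.
    """
    terms = [np.log((q ** r).sum()) for q in q_P]
    return -np.mean(terms) / r

# A peaked sense distribution incurs lower loss than a uniform one, so the
# loss rewards the model for committing to a single sense per occurrence,
# starving superfluous senses of probability mass.
peaked = [np.array([0.98, 0.01, 0.01])]
uniform = [np.array([1 / 3, 1 / 3, 1 / 3])]
assert distinctness_loss(peaked) < distinctness_loss(uniform)
```

With $r=1$ the term inside the log would always sum to $\mathbb{P}(w_i\,|\,c^{\prime})$ regardless of how the mass is split, which is why $r>1$ is required for the loss to prefer peaked distributions.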
The match loss is defined as

$\displaystyle J^{M}(c,c^{\prime},T)=\frac{-\lambda^{M}}{|T|}\sum_{i\in T}\frac{\bm{q}_{i}^{D}\cdot\bm{q}_{i}^{P}}{\|\bm{q}_{i}^{D}\|\|\bm{q}_{i}^{P}\|},$ (16)

where $\bm{q}_{i}^{D}$ and $\bm{q}_{i}^{P}$ are shorthand for $\bm{q}_{i}^{D}(c)$ and $\bm{q}_{i}^{P}(c^{\prime},c)$ respectively, and $\lambda^{M}$ is a hyperparameter. As we wish the disambiguation layer to learn from the prediction layer rather than the other way around, we do not allow gradients from the match loss to propagate through $\bm{q}_{i}^{P}$.

### 3.6 Details and Parameters

#### 3.6.1 Preprocessing

To avoid the issue of how to represent a word’s sense when it is broken into sub-word level tokens, our vocabulary consists of whole-word tokens. However, the WSI tasks on which we evaluate our model operate on the lemma level, so we lemmatize our training corpus as described in Appendix B. The vocabulary consists of the ~86K tokens appearing more than 500 times in our training corpus, which, like BERT’s, consists of English Wikipedia + BookCorpus Zhu et al. (2015). All tokens are lower-cased.

#### 3.6.2 Contextualizers

One of the advantages of PolyLM is that it can be used with any type of contextualizer. Note, however, that we must train our contextualizers together with the rest of the model rather than using pretrained contextualizer instances, because their word embedding matrix would not match our sense embedding matrix. In this paper we present results where the disambiguation and prediction contextualizers $C^{D}$ and $C^{P}$ use BERT’s implementation of the Transformer encoder architecture.

#### 3.6.3 Parameters

To keep the total number of embeddings reasonable, we allow only the ~10,000 tokens which occur more than 20,000 times in the training corpus, or appear as focuses in the evaluation datasets, to have multiple senses. Specifically, we assign these tokens a fixed number of $k=8$ embeddings, and other tokens a single embedding.
Since according to Zipf’s law Zipf (1950) it is the most frequent words which tend to have the most senses, we expect not to miss too many senses by assuming that infrequent words are monosemous. We leave the investigation of more sophisticated methods for pre-allocating or dynamically updating the number of senses for each token for future work.

We train two PolyLM models of different sizes, PolyLM${}_{\text{SMALL}}$ and PolyLM${}_{\text{BASE}}$. Due to the prohibitive computational cost of training a model of BERT${}_{\text{LARGE}}$’s size, we use significantly smaller dimensions, as shown in Table 1. Models were trained over 6,000,000 batches consisting of 32 sequences of length 128 using the Adam optimizer Kingma and Ba (2014). The learning rate was increased linearly from 0 to 3e-5 over the first 10,000 batches, and then reduced linearly back to zero over the remaining batches. The hyperparameters $\lambda^{M}$ and $r$ specific to PolyLM’s loss function were first increased linearly and then left constant, $\lambda^{M}$ from 0 to 0.1 over the first 1,000,000 batches, and $r$ from 1.0 to 1.5 over the first 2,000,000 batches. It is important for $r$ to be gradually increased in this manner because if $r$ is large initially, the effect of the distinctness loss reduces the diversity of the senses learned. On the other hand, increasing $r$ too slowly seems to be detrimental to the senses’ distinctness.

| Model | $d$ | Filter size | No. attn. heads | No. layers | Seq. len. | Vocab size | No. embeddings | Total params |
|---|---|---|---|---|---|---|---|---|
| PolyLM${}_{\text{SMALL}}$ | 128 | 512 | 8 | 4 ($C^{D}$), 8 ($C^{P}$) | 128 | 86K | 157K | 24M |
| PolyLM${}_{\text{BASE}}$ | 256 | 1024 | 8 | 4 ($C^{D}$), 12 ($C^{P}$) | 128 | 86K | 157K | 54M |
| BERT${}_{\text{LARGE}}$ | 1024 | 4096 | 16 | 24 | 512 | 30K | 30K | 340M |

Table 1: Parameters of PolyLM and BERT${}_{\text{LARGE}}$.
| System | Version | 2010 F-S | 2010 V-M | 2010 AVG | 2013 FBC | 2013 FNMI | 2013 AVG |
|---|---|---|---|---|---|---|---|
| Amrami and Goldberg (2019) | BERT${}_{\text{LARGE}}$ | 71.3 | 40.4 | 53.6 | 64.0 | 21.4 | 37.0 |
| AutoSense Amplayo et al. (2019) | | 62.9 | 10.1 | 25.2 | 61.7 | 8.0 | 22.2 |
| PolyLM† | BASE | 65.8 | 40.5 | 51.6 | 64.8 | 23.0 | 38.6 |
| PolyLM† | SMALL | 65.6 | 35.7 | 48.4 | 64.5 | 18.5 | 34.5 |
| Qiu et al. (2016)† | | - | - | - | 56.9 | 6.7 | 19.5 |
| SE-WSI-fix-cmp Song et al. (2016)† | | 54.3 | 16.3 | 29.8 | - | - | - |
| AdaGram Bartunov et al. (2016)† | | 43.9 | 20.0 | 29.6 | 13.2 | 8.9 | 10.8 |
| Arora et al. (2018)† | $k=5$ | 46.4 | 11.5 | 23.1 | - | - | - |

Table 2: Comparison of sense embedding models and WSI-specific techniques on the SemEval-2010 and SemEval-2013 WSI tasks. SE-WSI-fix-cmp is based on Neelakantan et al. (2014)’s MSSG model. † marks models which obtain explicit sense embeddings.

| Description | 2010 F-S | 2010 V-M | 2010 AVG | 2013 FBC | 2013 FNMI | 2013 AVG |
|---|---|---|---|---|---|---|
| PolyLM${}_{\text{SMALL}}$ | 65.6 | 35.7 | 48.4 | 64.5 | 18.5 | 34.5 |
| No distinctness loss | 53.5 | 33.4 | 42.3 | 57.4 | 16.3 | 30.5 |
| No disambiguation layer | 64.9 | 25.5 | 40.6 | 64.5 | 17.5 | 33.6 |
| Disambiguation layer only | 63.6 | 29.3 | 43.2 | 62.7 | 15.7 | 31.4 |

Table 3: PolyLM ablation study.

## 4 Experiments

Word sense induction (WSI) is the task of inferring the senses of a word in an unsupervised manner. This is precisely the aim of our method, and so is an ideal test task. We evaluate PolyLM on two WSI datasets, SemEval-2010 Task 14 Manandhar et al. (2010) and SemEval-2013 Task 13 Jurgens and Klapaftis (2013). Both datasets consist of passages containing one of a set of polysemous focus words. The occurrences of the focus words in the test set have been sense-labeled by human annotators according to a reference sense inventory.
In the SemEval-2010 dataset, each instance is labeled with a single sense, whereas in the SemEval-2013 dataset an instance may be labeled with several relevant senses, each with a corresponding weight denoting its degree of applicability in the context. Performance on SemEval-2010 is measured using paired F-Score (F-S) and V-Measure (V-M), and on SemEval-2013 using Fuzzy B-Cubed (FBC) and Fuzzy Normalized Mutual Information (FNMI). Overall performance on each task (AVG) is typically defined as the geometric mean of its two sub-metrics.

Currently, the best performing system on both datasets is that of Amrami and Goldberg (2019). Their system uses the idea of substitute vectors, first devised by Başkaya et al. (2013). For each instance, a set of most likely words that could have occurred instead of the focus word is obtained from the output of a language model. These sets are then clustered, and each cluster is taken to correspond to a different sense of the focus word. Amrami and Goldberg use BERT${}_{\text{LARGE}}$ as their language model.

PolyLM can be used for WSI without any further training. For the SemEval-2010 dataset, each instance $c$ is labeled with the sense of the focus word $w_{i}$ which has the highest predicted probability, i.e. $\text{argmax}_{s\in S_{w_{i}}}q^{P}_{is}(c^{\prime},c)$, where $c^{\prime}$ is formed from $c$ by replacing $w_{i}$ with [MASK]. For SemEval-2013, we consider a sense applicable if it has a predicted probability $q^{P}_{is}(c^{\prime},c)>p^{\text{thresh}}$, and the weight assigned to each applicable sense is its probability $q^{P}_{is}(c^{\prime},c)$. We arbitrarily set $p^{\text{thresh}}$ to 0.2.

Results are shown in Table 2. Both PolyLM models comprehensively outperform previous sense embedding methods. PolyLM${}_{\text{BASE}}$ and Amrami and Goldberg’s system each slightly outperform the other on one dataset, suggesting similar overall proficiency at WSI.
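The labeling procedure just described can be sketched as follows; the function names are ours, and `q_P` stands for the vector of the focus word's sense probabilities $q^{P}_{is}(c^{\prime},c)$:

```python
import numpy as np

def label_semeval2010(q_P):
    """Single-sense labeling: the argmax over the focus word's senses."""
    return int(np.argmax(q_P))

def label_semeval2013(q_P, p_thresh=0.2):
    """Graded labeling: every sense whose probability exceeds the
    threshold, weighted by that probability."""
    return {s: float(p) for s, p in enumerate(q_P) if p > p_thresh}

# Toy probabilities for a word with three senses.
q = np.array([0.05, 0.65, 0.30])
assert label_semeval2010(q) == 1
assert label_semeval2013(q) == {1: 0.65, 2: 0.30}
```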
However, it is worth noting that the BERT${}_{\text{LARGE}}$ language model used by Amrami and Goldberg has more than six times as many parameters as PolyLM${}_{\text{BASE}}$ and is much more computationally expensive to train and run. PolyLM scales well for the sizes tested, with PolyLM${}_{\text{BASE}}$ outperforming PolyLM${}_{\text{SMALL}}$ by 3.2 and 4.1 points in AVG score on the two datasets with a 2.25x increase in the number of parameters. Even if further increases in model dimensions yielded much smaller improvements in performance, it seems likely that a PolyLM model of BERT${}_{\text{LARGE}}$’s 340 million parameter size would achieve results significantly better than those of Amrami and Goldberg (2019).

### 4.1 Ablation Study

We test three alternative configurations against PolyLM${}_{\text{SMALL}}$: one where the distinctness loss term is removed from the objective (“no distinctness loss”), one where the disambiguation layer is removed (“no disambiguation layer”), and one where the disambiguation sense probabilities $\bm{q}^{D}$ are used in place of $\bm{q}^{P}$ when performing WSI (“disambiguation layer only”). Note that the first two configurations require new models to be trained, whereas the last simply uses PolyLM${}_{\text{SMALL}}$ in a different way.

Results are shown in Table 3. The use of the distinctness loss has a large impact on model performance, while the disambiguation layer is somewhat less important but still useful. The model still performs surprisingly well when the disambiguation rather than the prediction sense probabilities are used; these are the output of only four Transformer layers and hence are much cheaper to compute. This suggests that it might be practical to add the disambiguation layer at the input of various neural NLP models to improve their understanding of polysemy.
## 5 Conclusions

PolyLM is a novel model of polysemy based on two assumptions about word senses: firstly, that the probability of a word occurring in a context is equal to the sum of the probabilities of its individual senses occurring, as expressed by the language modeling loss; and secondly, that generally only one sense of a word ought to have a high probability of occurring in a given context, as expressed by the distinctness loss. PolyLM does indeed learn word senses which correspond well to human notions, as demonstrated by its performance on word sense induction, which matches that of the previous state-of-the-art system despite having six times fewer parameters. It can be easily applied to many word sense-related tasks, as it generates a probability distribution over the senses of each word in the input text. It is not specific to any one contextualizer and so can be improved as contextualizers improve.

## Acknowledgements

Felipe Bravo-Marquez was funded by ANID FONDECYT grant 11200290, U-Inicia VID Project UI-004/20 and ANID - Millennium Science Initiative Program - Code ICN17_002.

## References

* Amplayo et al. (2019) Reinald Kim Amplayo, Seung-won Hwang, and Min Song. 2019. AutoSense model for word sense induction. In _The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019)_, pages 6212–6219, Honolulu, Hawaii, USA.
* Amrami and Goldberg (2018) Asaf Amrami and Yoav Goldberg. 2018. Word sense induction with neural biLM and symmetric patterns. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 4860–4867, Brussels, Belgium. Association for Computational Linguistics.
* Amrami and Goldberg (2019) Asaf Amrami and Yoav Goldberg. 2019. Towards better substitution-based word sense induction.
* Arora et al.
(2018) Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. Linear algebraic structure of word senses, with applications to polysemy. _Transactions of the Association for Computational Linguistics_ , 6:483–495. * Bartunov et al. (2016) Sergey Bartunov, Dmitry Kondrashkin, Anton Osokin, and Dmitry Vetrov. 2016. Breaking sticks and ambiguities with adaptive skip-gram. In _Proceedings of the 19th International Conference on Artificial Intelligence and Statistics_ , volume 51 of _Proceedings of Machine Learning Research_ , pages 130–138, Cadiz, Spain. PMLR. * Başkaya et al. (2013) Osman Başkaya, Enis Sert, Volkan Cirik, and Deniz Yuret. 2013. AI-KU: Using substitute vectors and co-occurrence modeling for word sense induction and disambiguation. In _Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)_ , pages 300–306, Atlanta, Georgia, USA. Association for Computational Linguistics. * Camacho-Collados and Pilehvar (2018) Jose Camacho-Collados and Mohammad Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. _Journal of Artificial Intelligence Research_ , 63:743–788. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. * Flyamer (2017) Ilya Flyamer. 2017. adjustText. * Huang et al. (2012) Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word representations via global context and multiple word prototypes. 
In _Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 873–882, Jeju Island, Korea. Association for Computational Linguistics. * Huang et al. (2019) Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3507–3512, Hong Kong, China. Association for Computational Linguistics. * Jurgens and Klapaftis (2013) David Jurgens and Ioannis Klapaftis. 2013. SemEval-2013 task 13: Word sense induction for graded and non-graded senses. In _Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)_ , pages 290–299, Atlanta, Georgia, USA. Association for Computational Linguistics. * Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. * Lee and Chen (2017) Guang-He Lee and Yun-Nung Chen. 2017. MUSE: Modularizing unsupervised sense embeddings. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 327–337, Copenhagen, Denmark. Association for Computational Linguistics. * Li and Jurafsky (2015) Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 1722–1732, Lisbon, Portugal. Association for Computational Linguistics. * Liu et al. (2019) Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. 
In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. * Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. _Journal of Machine Learning Research_ , 9(Nov):2579–2605. * Manandhar et al. (2010) Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 task 14: Word sense induction & disambiguation. In _Proceedings of the 5th International Workshop on Semantic Evaluation_ , pages 63–68, Uppsala, Sweden. Association for Computational Linguistics. * Manning et al. (2014) Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In _Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations_ , pages 55–60, Baltimore, Maryland. Association for Computational Linguistics. * Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. * Miller (1995) George A. Miller. 1995. WordNet: A lexical database for English. _Communications of the ACM_ , 38(11):39–41. * Navigli and Ponzetto (2012) Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. _Artificial Intelligence_ , 193:217–250. * Neale (2018) Steven Neale. 2018. A survey on automatically-constructed WordNets and their evaluation: Lexical and word embedding-based approaches. In _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)_ , Miyazaki, Japan. European Languages Resources Association (ELRA). * Neelakantan et al.
(2014) Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1059–1069, Doha, Qatar. Association for Computational Linguistics. * Panchenko (2016) Alexander Panchenko. 2016. Best of both worlds: Making word sense embeddings interpretable. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)_ , pages 2649–2655, Portorož, Slovenia. European Language Resources Association (ELRA). * Peters et al. (2018) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. * Press and Wolf (2017) Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers_ , pages 157–163, Valencia, Spain. Association for Computational Linguistics. * Qiu et al. (2016) Lin Qiu, Kewei Tu, and Yong Yu. 2016. Context-dependent sense embedding. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 183–191, Austin, Texas. Association for Computational Linguistics. * Schütze (1998) Hinrich Schütze. 1998. Automatic word sense discrimination. _Computational Linguistics_ , 24(1):97–123. * Song et al. (2016) Linfeng Song, Zhiguo Wang, Haitao Mi, and Daniel Gildea. 2016. Sense embedding learning for word sense induction.
In _Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics_ , pages 85–90, Berlin, Germany. Association for Computational Linguistics. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, _Advances in Neural Information Processing Systems 30_ , pages 5998–6008. Curran Associates, Inc. * Vial et al. (2019) Loïc Vial, Benjamin Lecouteux, and Didier Schwab. 2019. Sense vocabulary compression through the semantic knowledge of WordNet for neural word sense disambiguation. In _Proceedings of the Tenth Global Wordnet Conference_ , pages 108–117, Poland. * Yaghoobzadeh and Schütze (2016) Yadollah Yaghoobzadeh and Hinrich Schütze. 2016. Intrinsic subspace evaluation of word embedding representations. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 236–246, Berlin, Germany. Association for Computational Linguistics. * Zhu et al. (2015) Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In _Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV)_ , ICCV ’15, pages 19–27, Washington, DC, USA. IEEE Computer Society. * Zipf (1950) George K. Zipf. 1950. Human behavior and the principle of least effort. _Journal of Clinical Psychology_ , 6(3):306–306. ## Appendix A Justification of the Distinctness Loss Consider the derivative of the language modeling loss for one particular target position $i\in T$ with respect to the pre-softmax scores $\bm{e}_{k}^{\top}\bm{y}_{i}^{P}+b_{k}$ of the target word $w_{i}$’s sense embeddings $k\in S_{w_{i}}$.
For brevity, we define $y_{k}=\bm{e}_{k}^{\top}\bm{y}_{i}^{P}+b_{k}$. $\displaystyle-\frac{\partial}{\partial y_{k}}J^{LM}(c,c^{\prime},\\{i\\})$ $\displaystyle\ =\frac{\partial}{\partial y_{k}}\log\sum_{s\in S_{w_{i}}}\big{[}\text{softmax}(E\bm{y}_{i}^{P}+\bm{b})\big{]}_{s}$ $\displaystyle\ =\frac{\partial}{\partial y_{k}}\log\sum_{s\in S_{w_{i}}}\frac{e^{y_{s}}}{\sum_{s^{\prime}\in S}e^{y_{s^{\prime}}}}$ $\displaystyle\ =\frac{\partial}{\partial y_{k}}\log\frac{\sum_{s\in S_{w_{i}}}e^{y_{s}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}+\sum_{s\in S\setminus S_{w_{i}}}e^{y_{s}}}$ $\displaystyle\ =\frac{\partial}{\partial y_{k}}\log\frac{\sum_{s\in S_{w_{i}}}e^{y_{s}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}+C},$ $\displaystyle\text{where }C=\sum_{s\in S\setminus S_{w_{i}}}e^{y_{s}},$ $\displaystyle\ =\frac{\partial}{\partial y_{k}}\bigg{(}\log\sum_{s\in S_{w_{i}}}e^{y_{s}}-\log\Big{(}\sum_{s\in S_{w_{i}}}e^{y_{s}}+C\Big{)}\bigg{)}$ $\displaystyle\ =\frac{\frac{\partial}{\partial y_{k}}\sum_{s\in S_{w_{i}}}e^{y_{s}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}}-\frac{\frac{\partial}{\partial y_{k}}(\sum_{s\in S_{w_{i}}}e^{y_{s}}+C)}{\sum_{s\in S_{w_{i}}}e^{y_{s}}+C}$ $\displaystyle\ =\frac{e^{y_{k}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}}-\frac{e^{y_{k}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}+C}$ $\displaystyle\ =\frac{e^{y_{k}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}}-\frac{e^{y_{k}}}{\sum_{s\in S}e^{y_{s}}}$ $\displaystyle\ =q_{ik}^{P}(c^{\prime},c)-p_{ik}(c).$ Since $q_{ik}^{P}>p_{ik}$, $\frac{\partial}{\partial y_{k}}J^{LM}(c,c^{\prime},\\{i\\})$ will always be negative, meaning that every sense embedding for the target word will always move towards the contextualized representation $\bm{y}_{i}^{P}$. This is undesirable, because it means that even senses which are irrelevant in a context will receive a positive update. 
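The closed-form gradient derived above can be checked numerically. The following sketch (my own illustration, not the paper's code; the sense counts and random scores are arbitrary assumptions) verifies by finite differences that $-\partial J^{LM}/\partial y_k = q^{P}_{ik} - p_{ik}$:

```python
import numpy as np

# Finite-difference check of the identity -dJ^LM/dy_k = q^P_k - p_k.
# Hypothetical setup: 6 senses in total (S), of which the first 3 belong
# to the target word (S_w). y holds the pre-softmax sense scores.
rng = np.random.default_rng(0)
y = rng.normal(size=6)
Sw = slice(0, 3)

def J_LM(y):
    p = np.exp(y) / np.exp(y).sum()   # softmax over all senses
    return -np.log(p[Sw].sum())       # -log P(word) = -log sum of its sense probs

p = np.exp(y) / np.exp(y).sum()
q = p[Sw] / p[Sw].sum()               # q^P: sense probs renormalized over S_w

eps = 1e-6
for k in range(3):
    y_plus, y_minus = y.copy(), y.copy()
    y_plus[k] += eps
    y_minus[k] -= eps
    num_grad = (J_LM(y_plus) - J_LM(y_minus)) / (2 * eps)
    # Negative gradient matches q^P_k - p_k, which is always positive.
    assert abs(-num_grad - (q[k] - p[k])) < 1e-6
print("gradient identity verified")
```

Because $q^{P}_{ik} > p_{ik}$ for every sense of the target word, the check also confirms numerically that each sense score receives a positive update, which is exactly the problem the distinctness loss addresses.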
Now consider the derivative of the distinctness loss: $\displaystyle-\frac{\partial}{\partial y_{k}}J^{D}(c,c^{\prime},\\{i\\})$ $\displaystyle\ =\frac{\partial}{\partial y_{k}}\frac{1}{r}\log\sum_{s\in S_{w_{i}}}\big{(}q_{is}^{P}(c^{\prime},c)\big{)}^{r}$ $\displaystyle\ =\frac{1}{r}\frac{\partial}{\partial y_{k}}\log\sum_{s\in S_{w_{i}}}\bigg{(}\frac{e^{y_{s}}}{\sum_{s^{\prime}\in S_{w_{i}}}e^{y_{s^{\prime}}}}\bigg{)}^{r}$ $\displaystyle\ =\frac{1}{r}\frac{\partial}{\partial y_{k}}\log\sum_{s\in S_{w_{i}}}\frac{e^{ry_{s}}}{(\sum_{s^{\prime}\in S_{w_{i}}}e^{y_{s^{\prime}}})^{r}}$ $\displaystyle\ =\frac{1}{r}\frac{\partial}{\partial y_{k}}\log\frac{\sum_{s\in S_{w_{i}}}e^{ry_{s}}}{(\sum_{s\in S_{w_{i}}}e^{y_{s}})^{r}}$ $\displaystyle\ =\frac{1}{r}\frac{\partial}{\partial y_{k}}\bigg{(}\log\sum_{s\in S_{w_{i}}}e^{ry_{s}}-\log\Big{(}\sum_{s\in S_{w_{i}}}e^{y_{s}}\Big{)}^{r}\bigg{)}$ $\displaystyle\ =\frac{1}{r}\bigg{(}\frac{\frac{\partial}{\partial y_{k}}\sum_{s\in S_{w_{i}}}e^{ry_{s}}}{\sum_{s\in S_{w_{i}}}e^{ry_{s}}}-\frac{r\frac{\partial}{\partial y_{k}}\sum_{s\in S_{w_{i}}}e^{y_{s}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}}\bigg{)}$ $\displaystyle\ =\frac{1}{r}\bigg{(}\frac{re^{ry_{k}}}{\sum_{s\in S_{w_{i}}}e^{ry_{s}}}-\frac{re^{y_{k}}}{\sum_{s\in S_{w_{i}}}e^{y_{s}}}\bigg{)}$ $\displaystyle\ =\frac{e^{ry_{k}}}{\sum_{s\in S_{w_{i}}}e^{ry_{s}}}-q_{ik}^{P}(c^{\prime},c).$ When $r>1$, $\frac{e^{ry_{k}}}{\sum_{s\in S_{w_{i}}}e^{ry_{s}}}$ is a “sharpened” version of $q_{ik}^{P}(c^{\prime},c)$: it is larger than $q_{ik}^{P}$ when $q_{ik}^{P}$ is large, and smaller when $q_{ik}^{P}$ is small.
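The sharpening effect of the exponent $r$ can be seen concretely in a small numerical example (my own illustration with arbitrary scores, not taken from the paper):

```python
import numpy as np

# Softmax at "temperature" 1/r with r > 1 sharpens the distribution q:
# the largest probability grows while the smaller ones shrink, so the
# distinctness gradient pushes probability mass onto the dominant sense.
y = np.array([2.0, 1.0, 0.0])   # pre-softmax scores of a word's three senses
r = 4.0

q = np.exp(y) / np.exp(y).sum()
q_sharp = np.exp(r * y) / np.exp(r * y).sum()

assert q_sharp[np.argmax(q)] > q.max()                       # dominant sense grows
assert all(q_sharp[i] < q[i] for i in range(3) if i != np.argmax(q))
print(q.round(3), q_sharp.round(3))
```

Here $q \approx (0.665, 0.245, 0.090)$ sharpens to roughly $(0.982, 0.018, 0.000)$, illustrating why all but the most applicable sense receive weak (or negative) reinforcement.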
Now we have $\displaystyle-\frac{\partial}{\partial y_{k}}\Big{(}J^{LM}(c,c^{\prime},\\{i\\})+J^{D}(c,c^{\prime},\\{i\\})\Big{)}$ $\displaystyle\ =-\frac{\partial}{\partial y_{k}}J^{LM}(c,c^{\prime},\\{i\\})-\frac{\partial}{\partial y_{k}}J^{D}(c,c^{\prime},\\{i\\})$ $\displaystyle\ =q_{ik}^{P}(c^{\prime},c)-p_{ik}(c)+\frac{e^{ry_{k}}}{\sum_{s\in S_{w_{i}}}e^{ry_{s}}}-q_{ik}^{P}(c^{\prime},c)$ $\displaystyle\ =\frac{e^{ry_{k}}}{\sum_{s\in S_{w_{i}}}e^{ry_{s}}}-p_{ik}(c)$ $\displaystyle\ =q_{ik}^{\text{sharp}}(c^{\prime},c)-p_{ik}(c).$ Thus the addition of the distinctness loss results in even stronger reinforcement for senses which are highly applicable in the context, and even weaker (possibly negative) reinforcement for senses which are inapplicable. This encourages only one sense of a word to have high probability in a given context, as desired. ## Appendix B Lemmatization The training corpus and all text used for evaluation are lemmatized as follows: first, we perform part-of-speech (POS) tagging using Stanford CoreNLP’s POS tagger Manning et al. (2014). Any token with a tag associated with inflectional morphology in English (NNS, JJR, JJS, RBR, RBS, VBD, VBG, VBP, VBZ or VBN) is split into two separate tokens, its lemmatized form and a special token. There is a unique special token for each of the above tags except the pairs JJR and RBR (comparative adjectives and adverbs) and JJS and RBS (superlative adjectives and adverbs), which share [COMP] and [SUP] tokens respectively.
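The token-splitting step of this lemmatization scheme can be sketched as follows. This is my own illustration of the scheme described above, not the paper's preprocessing code: in the paper, POS tagging and lemmatization come from Stanford CoreNLP, so here I simply assume `(token, lemma, tag)` triples are already available, and the exact special-token strings other than `[COMP]` and `[SUP]` are hypothetical.

```python
# Map each inflectional Penn Treebank tag to its special token, with
# JJR/RBR sharing [COMP] and JJS/RBS sharing [SUP] as in the appendix.
SPECIAL = {
    "NNS": "[NNS]", "VBD": "[VBD]", "VBG": "[VBG]",
    "VBN": "[VBN]", "VBP": "[VBP]", "VBZ": "[VBZ]",
    "JJR": "[COMP]", "RBR": "[COMP]",
    "JJS": "[SUP]", "RBS": "[SUP]",
}

def split_inflections(tagged):
    """Split each inflected token into its lemma plus a special token."""
    out = []
    for token, lemma, tag in tagged:
        if tag in SPECIAL:
            out.extend([lemma, SPECIAL[tag]])   # lemma + inflection marker
        else:
            out.append(token)                   # uninflected tokens pass through
    return out

sent = [("the", "the", "DT"), ("dogs", "dog", "NNS"),
        ("ran", "run", "VBD"), ("faster", "fast", "RBR")]
print(split_inflections(sent))
# ['the', 'dog', '[NNS]', 'run', '[VBD]', 'fast', '[COMP]']
```

This keeps the inflectional information recoverable while letting the model share one set of sense embeddings across a lemma's inflected forms.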
# Evolution of Small Cell from 4G to 6G: Past, Present, and Future Vanlin Sathya Department of Computer Science, University of Chicago, Illinois, USA. ###### Abstract To boost the capacity of cellular systems, operators began reusing the same licensed spectrum by deploying 4G LTE small cells (_i.e.,_ Femtocells). Over time, however, this licensed small-cell spectrum has proven insufficient for future applications such as augmented reality (AR) and virtual reality (VR). Hence, cellular operators turned to alternative unlicensed spectrum in the 5 GHz Wi-Fi band, an approach that 3GPP later standardized as LTE Licensed Assisted Access (LAA). The recent and ongoing roll-out of LAA deployments (in developed nations like the US) provides an opportunity to understand the ground truth of coexistence. This paper gives a high-level overview of my past, present, and future research on the benefits of small cells. Looking ahead, the focus shifts to the newest unlicensed band, 6 GHz, where the latest Wi-Fi version, 802.11ax, will coexist with the latest cellular technology, 5G New Radio (NR), operating in unlicensed spectrum. ###### keywords: Femtocells, Small cells, LAA, NR-U, Wi-Fi Email<EMAIL_ADDRESS>(Vanlin Sathya) ## 1 Introduction The growing penetration of high-end consumer devices (smartphones, tablets, etc.) running bandwidth-hungry applications (e.g., mobile multimedia streaming) has led to a commensurate surge in demand for mobile data (pegged to soar up to 77 exabytes by 2022). An anticipated second wave will result from the emerging Augmented/Virtual Reality (AR/VR) industry and, more broadly, the Internet-of-Things, which will connect an unprecedented number of intelligent devices to next-generation (5G and beyond) mobile networks. These networks must, therefore, greatly expand their aggregate capacity to meet this challenge.
This is achieved by combining several approaches, including multi-input, multi-output (MIMO) techniques, network densification (_i.e.,_ deploying small cells), and more efficient traffic management and radio resource allocation. At the same time, cheap, fast, and portable computing devices with ubiquitous wireless connectivity [2, 19, 33] can revolutionize the personal computing [39] landscape by creating an opportunity to design an unprecedented array of new services and applications. With the same philosophy in mind, we have focused mainly on maximizing the experienced data rate of an end-user by offering increased bandwidth through the coexistence of licensed wireless services (e.g., Long Term Evolution (LTE)) with services in an unlicensed band [81, 82, 83, 84]. To achieve the goal of throughput maximization [79], we focused on minimizing interference and handling frequent handovers. This is done by optimally placing the small cells [37, 38] (_i.e.,_ miniature base stations specifically designed to extend the data capacity, speed, and efficiency of a cellular network [40]) and controlling their emitted power in dense small cell deployment scenarios. LTE, which operates in licensed bands, and 802.11 wireless LAN (Wi-Fi) [32, 36], which operates in unlicensed bands, have some fundamental structural differences. For example, LTE is centrally controlled: a base station (BS) exclusively allocates radio resources to its users, so interference due to concurrent transmissions by the users is managed by the scheduler. Wi-Fi [66, 80], on the other hand, follows a distributed approach [63, 64] in which each user independently contends to occupy the channel, so concurrent transmissions can result in interference. The main motivation for LTE/Wi-Fi coexistence is to let LTE use the unlicensed bands when there are very few or no Wi-Fi users.
Research on the fair coexistence of LTE/Wi-Fi therefore focuses on the intelligent use of the unlicensed band by LTE users while keeping Wi-Fi users unaffected, so that the purpose for which the unlicensed band was created remains unaltered. The standards community has adopted two mechanisms for LTE's use of the unlicensed band: Licensed Assisted Access (LAA) and LTE Unlicensed (LTE-U). LAA follows the same approach of sensing the unlicensed channel, called Listen Before Talk (LBT), as Wi-Fi. LTE-U instead estimates a duty cycle for accessing the unlicensed channel based on parameters such as interference, type of traffic, and traffic load. In our recent research, we have studied both the LAA and LTE-U mechanisms. In both cases, we closely observe real-world aspects that can adversely affect Wi-Fi users and were ignored by the existing literature. Some of these observations are: a) static channel allocation to the LTE-U node in an unlicensed band; b) difficulty for Wi-Fi users in associating with a Wi-Fi access point (AP) due to a high LTE-U duty cycle (_i.e.,_ repeating ON and OFF intervals in the medium); c) a considerable reduction in the LTE-U duty cycle when several surrounding APs are considered in its estimation. We used machine learning approaches to propose solutions based on these observations. As part of my future research plan, we want to explore the research challenges in the fair coexistence of LTE and Wi-Fi on the 6 GHz band used as an unlicensed band. Beyond that, we plan to provide machine learning (ML) based solutions to some of the existing problems in LTE/Wi-Fi coexistence at 5 GHz, namely the efficient use of high bandwidth by Wi-Fi users and optimal channel selection by both LAA BSs and Wi-Fi APs in a multi-operator, multi-AP scenario. The ML algorithms closely observe the system's behavior under different conditions to make intelligent decisions.
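The LTE-U duty-cycle mechanism mentioned above can be made concrete with a toy airtime calculation. This is a deliberately simplified sketch with illustrative parameter values (not the LTE-U specification or any measurement from this work), assuming Wi-Fi can only contend for the channel during LTE-U's OFF interval:

```python
# Toy LTE-U duty-cycle airtime model: LTE-U transmits for t_on ms and
# stays silent for t_off ms in each cycle; Wi-Fi can only contend for
# the channel during the OFF interval. All values are illustrative.
def airtime_shares(t_on_ms, t_off_ms):
    cycle = t_on_ms + t_off_ms
    lte_u_share = t_on_ms / cycle
    wifi_share = t_off_ms / cycle   # upper bound on Wi-Fi airtime
    return lte_u_share, wifi_share

# A high duty cycle (e.g. 20 ms ON / 5 ms OFF) leaves Wi-Fi little
# airtime, which is why a lower duty cycle is recommended even when
# no Wi-Fi is currently detected on the channel.
high = airtime_shares(20, 5)
fair = airtime_shares(10, 10)
print(high, fair)
```

The high-duty-cycle case also hints at the association problem noted in observation b): a beacon that falls inside a long ON interval may never be heard by a joining client.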
The paper is organized as follows. Section 2 provides a brief overview of 4G and 5G Heterogeneous Networks. Section 3 describes the associated challenges and solutions for past and present small cell deployments. Section 4 focuses on the future and recent NR-U small cell in 6 GHz. Finally, conclusions and future research directions are presented in Section 5. ## 2 4G & 5G Heterogeneous Networks (HetNets) [5, 6] Presently, cellular network users are not only those who generate mostly downlink traffic (_i.e.,_ web browsing, downloading), but also include users generating symmetric (both uplink (UL) and downlink (DL)) traffic (_i.e.,_ social networking, gaming) and users generating mostly uplink traffic (_i.e.,_ M2M/IoT). To provide better connectivity and higher data rates to these users, low-power network nodes, called small cells, are being deployed. The presence of such diverse traffic-generating users, together with small cells of different transmit powers and sizes, has turned cellular networks from homogeneous to heterogeneous in nature. A heterogeneous network (HetNet) consists of a Macro cell augmented with various types of small cells to address the challenge of enhancing system capacity and coverage. Examples of small cells are micro cells, pico cells, relays, Remote Radio Heads (RRHs), Licensed Assisted Access (LAA) cells, New Radio in Unlicensed (NR-U) cells, and Femtocells [3, 4], as shown in Fig. 1. Table 1 shows the characteristics of the various types of cells, which are described as follows.
Table 1: Characteristics of heterogeneous cells in 4G and 5G

| Technology | Placement | Transmit Power | Backhaul Characteristic | Number of Users |
|---|---|---|---|---|
| Macro BS | Outdoor | 46 dBm | Dedicated wireline | 1000-2000 |
| Pico or Micro cell | Outdoor | 30 dBm | Dedicated wireline | 100-200 |
| RRH | Outdoor or Indoor | 30-35 dBm | Dedicated wireline | 100-200 |
| Relay | Outdoor or Indoor | 30-35 dBm | Wireless out-of-band or in-band | 60-100 |
| Femtocell | Indoor | 20-23 dBm | Residential or enterprise broadband | 10-30 |
| LAA [65, 67] | Outdoor or Indoor | 20-23 dBm | Residential or enterprise broadband | 10-30 |
| NR-U | Outdoor or Indoor | 20-23 dBm | Residential or enterprise broadband | 10-30 |

Figure 1: 4G and 5G Heterogeneous Network * 1. Pico or Micro BS: These are deployed in an outdoor environment to cover a radius of up to 300 m. The transmit power of a pico BS (_i.e.,_ 30 dBm) is smaller than that of a Macro BS. A pico BS has a dedicated $X2$ backhaul connection to the Macro BS for coordination. * 2. Relay: A relay BS acts as a repeater. It receives the data/signal from the Macro BS and transmits/boosts the data to the users connected to the relay. The link between the relay and the Macro BS is also wireless. Relay BSs are typically the operators' preferred choice for extending the coverage region of the Macro BS (e.g., in hilly or rural areas). * 3. RRH: Unlike a traditional BS, an RRH is a radio transceiver component that performs only the transmission and reception of In-phase Quadrature (IQ) samples. The remaining BS processing is done at a centralized cloud data center by a Baseband Unit (BBU) pool. Centralized processing brings down the cost of the BS, which directly reduces CAPEX and OPEX. * 4. LAA: Unlike traditional Femtocells, LAA operates in the 5 GHz unlicensed spectrum and implements an LBT protocol similar to that used by Wi-Fi, with different values for parameters such as the sensing threshold and transmission intervals. * 5.
NR-U: Unlike LAA cells, NR-U operates in the 6 GHz unlicensed spectrum and implements an LBT protocol to share the spectrum fairly and protect the incumbent users. The advantages of HetNets are as follows. 1. 1. Cell range expansion (CRE) [20, 26, 27]: Increasing or decreasing the transmit power (or coverage region) based on the load in small cells boosts the overall system throughput. 2. 2. Integrating Macro and small cells: Improving user throughput through dual connectivity [28] (_i.e.,_ to a Macro and one small cell). 3. 3. Self-organizing network (SON): In HetNets, small cells are deployed in huge numbers. The operator can provide each small cell with SON features, which aim to automatically configure and optimize the network, reducing human effort. SON plays a key role in improving OAM (operation, administration, management). ### 2.1 LTE Femtocell Networks The existing Macro Base Stations (MBSs) cannot satisfy mobile users because of most users' huge data demand and indoor locality. Reports by Cisco indicate that 70% of traffic is generated in indoor environments such as homes, enterprise buildings, and hotspots. Hence, mobile operators must improve the coverage and capacity [11] of indoor environments. The basic problem with existing MBSs (or small outdoor cells with shorter coverage) is that they can only boost the data rates of Outdoor User Equipments ($OUEs$); they cannot do the same for Indoor User Equipments ($IUEs$), because it is difficult for electromagnetic signals to penetrate walls and floors. Owing to the numerous obstacles in the communication path between the MBS and $IUEs$ inside a building, radio signals attenuate quickly with increasing distance. Thus, $IUEs$ receive low signal strength (_i.e.,_ Signal-to-Noise Ratio, SNR) compared to outdoor users. As a solution, Femtocells are being deployed by both operators and end customers.
A Femtocell is a low-cost, low-power cellular base station that operates only in licensed spectrum and is designed for indoor and outdoor communication. The range of a Femtocell is 100-150 meters in enterprise environments, consuming 100 mW of power. A home-based Femto (HeNB) can serve 4-5 users, whereas an office-based Femto can serve a maximum of 64 users. Each Femto requires a backhaul connection to the evolved packet core (EPC). The advantages of using Femtos are as follows: Operator advantages: 1. 1. The operator can increase network capacity. 2. 2. The operator can reduce operational expenditure (OPEX) and capital expenditure (CAPEX). 3. 3. The operator can reduce backhaul cost. 4. 4. The operator can reduce the traffic overload on MBSs. User advantages: 1. 1. Improved Quality of Experience (QoE). 2. 2. Improved energy efficiency/battery life. ### 2.2 Architecture of Indoor LTE Femto cells In the LTE HetNet architecture, Femtos are deployed inside the building and connected to a Femto Gateway (F-GW) over the S1 interface. The F-GW is mainly used to reduce the load on the MME; it acts as a virtual core network for the Femtos. The F-GW is assigned an eNB ID, and thus the F-GW is considered just another eNB by the MME. The X2 interface [89, 90, 91, 92, 93] is introduced between the Femtos of enterprise Femtocell networks to avoid inter-cell interference and to route data and signaling messages directly among Femtos, thereby reducing the load on the LTE core network and offering better coordination among Femtos. ### 2.3 Access Modes in Femto Since Femtos [35] are deployed to offer high data rates to indoor (paid) users in enterprise and residential buildings, each Femto is configured with a list of subscribers called a Subscriber Group (SG), such that only the users in the SG can access the Femto. Users not on this list are called Non-SG (NSG) users, and they may not be served by the Femto even when they are close to it.
The following access modes are defined for Femtos: * 1. Open access: The open-access mode allows all users (_i.e.,_ SG & NSG) to access the Femto without any restriction. * 2. Closed access: The closed-access mode permits only authorized users (_i.e.,_ SG) to access the Femto. * 3. Hybrid access: The hybrid access mode [24] is a combination of open and closed access. It allows all users (_i.e., SG & NSG_) while providing preferential access for SG users over NSG users. ### 2.4 LTE Licensed Assisted Access LTE-LAA, specified by 3GPP in Release 13, adopted the LBT approach for coexistence with Wi-Fi and supported only DL transmissions in the unlicensed band: a secondary cell (sCell) aggregated with a licensed primary cell (pCell). Enhanced LAA (eLAA), specified in Release 14, supports UL operation in the unlicensed band. However, the legacy LTE UL scheduling procedure continued to be used in eLAA, thus increasing the processing delay of scheduling grants due to LBT procedures. Hence, in April 2017, 3GPP started the "further eLAA" (FeLAA) working group (in Release 15) to improve LAA DL and UL performance through enhanced support for autonomous UL transmissions. In legacy eLAA, a UL transmission must receive a grant from the eNB before the transmission; the autonomous UL transmissions proposed in FeLAA remove this constraint. Most of these features were not tested in the field before deployment. As LTE-LAA deployments are being rolled out in major cities in the US, they offer an opportunity for real-world testing. ### 2.5 5G Small Cell in Unlicensed Deployment The coexistence of small-cell LTE-U [68, 69, 70, 71, 72, 73, 74, 75] and Wi-Fi networks in the unlicensed bands at 5 GHz is a topic of active interest [23, 25], primarily driven by industry groups affiliated with the two (cellular and Wi-Fi) segments.
While there is a body of analytical work [76, 77, 78, 85] exploring the coexistence of LTE-U and Wi-Fi, our focus in this project has been on real-time measurements [104, 105] and real-world deployment aspects of such coexisting networks, a topic that has seen little traction in the existing literature. Within the scope of this project, we actively design, analyze, and implement wireless network algorithms in simulation (using ns-3) and in a real-time National Instruments (NI) unlicensed-coexistence test-bed [29, 34], and we also conduct measurements and analysis on recently deployed LAA in the Chicago area. In our previous work [29, 34], we investigated various aspects of coexistence between Wi-Fi [106] and the two principal variants of LTE in the unlicensed bands, LTE-U and LAA. For LTE-U, we analyzed the effect of the LTE-U duty cycle [86, 87, 88, 1] on Wi-Fi association performance. We demonstrated [45, 46, 47, 53] that a high duty cycle adversely impacts Wi-Fi's ability to access the channel due to disruption of the Wi-Fi beacon transmission process. Therefore, we recommended using a lower duty cycle, even when no Wi-Fi is present in the channel, to enable fair access for a new Wi-Fi access point that may wish to use the channel. We also developed machine-learning-based algorithms to determine the number of Wi-Fi APs on the air in order to set an appropriate duty cycle. However, since LTE-U is not being considered for wide deployment by industry, we switched our attention to evaluating LAA coexistence. We have made substantial progress toward understanding the coexistence behavior of Wi-Fi and LAA in unlicensed bands in our previous work. Our theoretical analysis, corroborated by detailed system-level simulations using ns-3 and by the use of software-defined radios, demonstrates that coexistence is improved substantially when the two systems treat each other symmetrically.
That is, when Wi-Fi and LAA use the same detection threshold to defer to each other. In our recent work [61], we added a new dimension by performing detailed measurements of the LAA networks deployed by the three major carriers, Verizon, AT&T, and T-Mobile, in Chicago. We conducted these measurements using off-the-shelf and custom-designed apps to extract detailed network information via APIs on Android smartphones. We believe this to be the first such exercise in academia, and the measurements revealed several interesting new directions, which we will continue to research. Two such topics are: (i) though most LAA deployments are outdoors and most Wi-Fi deployments are indoors, the client devices that connect to these networks can be used outdoors or indoors; this results in hidden-node scenarios made worse by the fact that the two systems do not decode each other's signals; (ii) most academic analyses have focused on coexistence in a single 20 MHz channel, whereas our measurements reveal that LAA usually aggregates three unlicensed channels, thereby increasing the impact on Wi-Fi. These results have also been presented to industry and have been incorporated into recommendations by Cisco. ## 3 Associated Challenges and Existing Solutions for Past and Present Small Cell Deployments In this section, we discuss various issues and challenges in deploying small cells in LTE HetNets, as well as existing solutions to address them. All the challenges boil down to finding solutions that improve system performance. Some of the important challenges are discussed below: 1. 1. Small Cell Placement in 4G LTE: Due to the large-scale deployment of Femtocells in enterprise/office environments [7, 8, 9] and many practical constraints (_e.g.,_ lack of space and power), operators often resort to arbitrary deployment. Arbitrary deployment of Femtos leads to coverage holes and an increased number of Femtos (and hence increased OPEX and CAPEX).
To address these issues, placement of Femtos needs to be optimal [10, 60]. Optimal placement of Femtos [7, 41, 56] ensures good SINR and improves overall system capacity. Usually, due to physical constraints, operators may need to settle for sub-optimal or arbitrary deployment. Consequently, more Femtos are deployed than in the optimal model, to ensure there are no coverage holes inside the building. Approaches that deal with the deployment of a single Femto are not scalable to enterprise buildings. Existing placement approaches do not consider building model parameters (such as wall thickness), dynamic transmission power, or the cost of human resources and field testing for randomly placed Femtos. Femtos create coverage holes for UEs when the Femtos operate in closed access mode and non-subscriber UEs are in close vicinity. [12, 13, 14] propose to mitigate coverage holes through proper placement and power control mechanisms. In HetNet systems, the major factor that affects network throughput is interference (between Femtos, and between Macro and Femtos). Two types of interference are possible in HetNet systems:

(a) Co-tier Interference: Interference from neighboring small cells, arising from reuse of the same spectrum, is called co-tier interference [15, 17, 58]. For example, UE2 is served by a Femto BS (F2) but receives interference from the neighboring Femto BSs (F1 and F3). The traditional solution to avoid co-tier interference among BSs is Inter-Cell Interference Coordination (ICIC). In the ICIC scheme, all BSs cooperatively communicate using the X2 interface and allocate RBs efficiently to cell-edge users, but this increases the number of signaling messages.

(b) Cross-tier Interference: Interference between a Macro BS and a small cell is called cross-tier interference [16, 17, 18]. For example, UE1 is served by the Macro BS but receives interference from small cells (_i.e.,_ Femtocells F1, F3).
The traditional solution to avoid cross-tier interference is enhanced ICIC (eICIC) [21, 22, 59]. In the eICIC scheme, interference between the MBS and Femto BSs (FBSs) is avoided by muting some sub-frames (Almost Blank Sub-frames) in the MBS during FBS transmissions. In turn, this reduces interference and increases capacity in HetNet systems. Though spectrum efficiency and system capacity can increase due to spatial reuse of the same spectrum in LTE HetNets, SINR and network throughput may be degraded by cross-tier interference between MBS(s) and small cells, by co-tier interference among small cells, and by obstacles inside buildings. The signal leaks at the building's edges/corners, causing cross-tier interference and degrading the performance of outdoor UEs (OUEs) in the High Interference Zone (HIZone) around the building that are connected to one of the Macro BSs in the LTE HetNet. To the best of our knowledge, none of the existing works have addressed the cross-tier interference experienced by HIZone UEs (HIZUEs) in a dynamic fashion based on their occupancy levels in the HIZone. In this work, we propose an active power control scheme employed at the Femtos to reduce cross-tier interference to HIZUEs in the HIZone.

2. Scheduling or Radio Resource Allocation
Since Femtos are deployed to offer high-data-rate services to indoor (paid) users in enterprise and residential buildings, each Femto is configured with a list of subscribers, called the Subscriber Group (SG), that can access it [24]. Users not on this list belong to the Non-Subscriber Group (NSG) and are served by MBSs even when they are close to a Femto. This type of restricted access is called closed access. Femtos configured in open access do not distinguish between SG and NSG users, and hence they may fail to ensure QoS for SG users, especially during peak traffic loads.
Telecom operators favor hybrid access Femtocells (HAFs) as they can provide QoS for SG users by giving them preferential access to radio resources over NSG users, while also improving the capacity of the LTE HetNet by serving nearby NSG users. Rewarding mechanisms have been proposed to popularize the hybrid access mode for Femtos. The challenges here are optimal HAF deployment and efficient splitting of radio resources [95, 96, 97, 98, 99] between SG and NSG users of HAFs in indoor environments.

3. Improving Data Rates in LTE Small Cells and D2D Communication
Though the deployment of Femtocells (Femtos) improves indoor data rates, the resulting LTE Heterogeneous Network may face a host of problems [15, 43]. To name a few: co-tier and cross-tier interference (due to frequent reuse of the same spectrum in LTE) and frequent handovers due to the short coverage range of Femtos are some of the significant issues. Deployment of Femtos inside a building can lead to signal leakage at the building's edges/corners, which causes cross-tier interference and degrades outdoor UEs in the HIZone around the building that are connected to one of the Macro BSs (MBSs) in the LTE HetNet. Arbitrary placement of Femtos can lead to high co-channel interference among Femtos [54, 55] and with MBSs, and to coverage holes inside buildings. If Femtos are placed without power control, large-scale deployments suffer increased power consumption and high inter-cell interference. Our goal was to address these problems by developing an efficient LTE small cell architecture, optimal Femto placement, power control to ensure an SINR threshold in indoor environments, and Device-to-Device (D2D) based communication in which free/idle indoor UEs connected to a Femto act as UE-relays (_i.e.,_ UE-like BSs forwarding downlink data-plane traffic for some of the users connected to the MBS).

4. Asymmetric vs.
Symmetric ED Threshold in LAA and Wi-Fi
The exponential increase in the number of mobile devices in use today has led to a commensurate increase in cellular and Wi-Fi infrastructure demands, thus requiring that both licensed (cellular) and unlicensed (Wi-Fi) spectrum be utilized as efficiently as possible. The solution actively pursued by industry is for cellular systems to use the unlicensed spectrum in addition to the licensed spectrum, which requires fair coexistence with Wi-Fi in the unlicensed spectrum. As per the IEEE 802.11 standard, Wi-Fi uses an energy detection (ED) threshold of -62 dBm to detect non-Wi-Fi transmissions such as those of nearby LTE-LAA and LTE-U nodes, whereas the LTE-LAA specification recommends that LTE-LAA detect Wi-Fi at -72 dBm. In our work [25], we evaluate the effect of this asymmetry in the ED threshold on the coexistence between the two systems. We developed a coexistence simulator in ns-3, varied both the Wi-Fi and LTE energy detection thresholds, and demonstrated that lowering the Wi-Fi ED threshold from -62 dBm improves the performance of both Wi-Fi and LTE-LAA. Prior work has mostly focused on determining the ED threshold that should be used by LTE-LAA/LTE-U. As far as we are aware, this is the first result demonstrating that lowering the Wi-Fi ED threshold improves both systems' performance. The conclusion is that if Wi-Fi treats LTE-LAA/LTE-U as it would an overlapping Wi-Fi network, coexistence performance improves compared to the current situation, in which Wi-Fi treats LTE-LAA/LTE-U as noise.

5. Facilitating LAA/Wi-Fi Coexistence Using a Machine Learning Approach
The various coexistence scenarios that such deployments give rise to have been considered in a vast body of academic and industry research. However, there is very little data and analysis on how these coexisting networks behave in practice. The question of "fair coexistence" between Wi-Fi and LAA has moved from a theoretical question to reality.
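The ED-threshold asymmetry described above can be made concrete with a small sketch (illustrative values only, not the ns-3 simulator of [25]): a node defers when the sensed energy exceeds its ED threshold, so a cross-technology signal arriving at -70 dBm is ignored by Wi-Fi at -62 dBm but causes LAA at -72 dBm to back off.

```python
def defers(rx_power_dbm: float, ed_threshold_dbm: float) -> bool:
    """A node treats the channel as busy (defers) when the sensed
    energy exceeds its energy-detection (ED) threshold."""
    return rx_power_dbm > ed_threshold_dbm

# A cross-technology transmission sensed at -70 dBm:
rx = -70.0

# Asymmetric thresholds: Wi-Fi (-62 dBm) transmits over LAA,
# while LAA (-72 dBm) defers to Wi-Fi.
print(defers(rx, -62.0))  # False: Wi-Fi does not defer
print(defers(rx, -72.0))  # True:  LAA backs off

# Symmetric thresholds (-72 dBm on both sides): each system
# defers to the other, the setting shown to improve coexistence.
print(defers(rx, -72.0) and defers(rx, -72.0))  # True
```

Under the asymmetric defaults, Wi-Fi keeps transmitting over a moderately strong LAA signal while LAA stays silent, which is the one-sided behavior the symmetric-threshold recommendation removes.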
The recent roll-out of LAA deployments provides an opportunity to collect data on these networks' operations and study coexistence issues on the ground. In our recent work [61], we examined the problems raised by the coexistence of LAA and Wi-Fi in real-world deployments by various US carrier operators in downtown Chicago. We observed that secondary unlicensed component carriers are enabled depending on the nature of the traffic (_e.g.,_ data, video, live streaming) and the availability of small cell (Femto/LAA) coverage. We concluded that because an LAA BS follows a static channel allocation strategy, a particular channel remains occupied for a long time, so unlicensed Wi-Fi APs, which follow a dynamic channel allocation strategy, face a resource crunch. As a solution, we predicted the best channel assignment for the LAA BS by applying ML algorithms to the collected data, so that Wi-Fi users are not affected.

6. Association Issues in LTE-U/Wi-Fi Coexistence
In our previous work [30, 31], we addressed the issue of association fairness when Wi-Fi and LTE unlicensed (LTE-U) coexist on the same channel in the unlicensed 5 GHz band. Since beacon transmission is the first step in starting the association process in Wi-Fi, we define association fairness in terms of how fair LTE-U, operating at various duty cycles, is in allowing Wi-Fi to start transmitting beacons on a channel that it occupies. According to the LTE-U specification, if an LTE-U base station determines that a channel is vacant, it can transmit for up to 20 ms and turn OFF for only 1 ms, resulting in a duty cycle of about 95%. In an area with heavy spectrum usage, there will be cases when a Wi-Fi access point wishes to share the same channel, as it does today with other Wi-Fi networks. We study, both theoretically and experimentally, the effect that such a sizeable LTE-U duty cycle can have on the association process, specifically Wi-Fi beacon transmission and reception.
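The arithmetic behind this concern is straightforward. The sketch below uses the 20 ms ON / 1 ms OFF pattern from the specification and assumes the common 102.4 ms Wi-Fi beacon interval (an assumption for illustration, since the interval is configurable):

```python
# Maximum LTE-U duty cycle under the 20 ms ON / 1 ms OFF pattern.
on_ms, off_ms = 20.0, 1.0
duty_cycle = on_ms / (on_ms + off_ms)
print(f"duty cycle: {duty_cycle:.1%}")  # 95.2%

# Assumed (configurable) Wi-Fi beacon interval.
beacon_interval_ms = 102.4

# Number of OFF gaps per beacon interval, and the total airtime
# they leave for beacons and all other Wi-Fi traffic combined.
gaps_per_interval = beacon_interval_ms / (on_ms + off_ms)
free_ms = gaps_per_interval * off_ms
print(f"free airtime per beacon interval: {free_ms:.2f} ms")  # 4.88 ms
```

With under 5 ms of idle airtime per beacon interval shared among all Wi-Fi activity, delayed or dropped beacons are unsurprising.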
We demonstrated, via an experimental set-up using NI USRPs, that a significant percentage of Wi-Fi beacons are neither transmitted in a timely fashion nor received at the LTE-U BS, thus making it difficult for the LTE-U BS to adapt its duty cycle in response to the Wi-Fi usage. We proposed a novel Carrier Sense Adaptive Transmission (CSAT) algorithm [50, 51, 52] to address the problem of Wi-Fi client association in dense deployment scenarios and enable a fair share of spectrum access.

7. Optimal Scaling of the LTE-U Duty Cycle in LTE-U/Wi-Fi Coexistence
The application of Machine Learning (ML) techniques to complex engineering problems has proved to be an attractive and efficient solution. ML has been successfully applied to several practical tasks such as image recognition and the automation of industrial operations. The promise of ML techniques in solving non-linear problems influenced this work, which aims to apply known ML techniques, and develop new ones, for wireless spectrum sharing between Wi-Fi and LTE in the unlicensed spectrum. In this work [45, 46, 47, 53], we focus on the LTE-U specification developed by the LTE-U Forum, which uses the duty-cycle approach for fair coexistence [42, 44]. The operator can scale the LTE-U duty cycle optimally if the exact number of Wi-Fi APs is known. No prior work has proposed identifying the precise number of APs at the LTE-U BS. The specification suggests reducing the LTE-U BS's duty cycle when the number of co-channel Wi-Fi basic service sets (BSSs) increases from one to two or more. However, without decoding Wi-Fi packets, detecting the number of Wi-Fi BSSs operating on the channel in real time is challenging. This work demonstrates a novel ML-based approach that solves this problem using energy values observed during the LTE-U OFF duration.
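To illustrate why OFF-period energy alone carries information about the number of BSSs, the toy sketch below uses a synthetic occupancy model (all values and the busy-fraction feature are illustrative assumptions, not the actual features or classifier of [45, 46, 47, 53]): with more APs contending, a larger fraction of OFF-period energy samples lies above the noise floor.

```python
import random

def busy_fraction(energy_dbm, noise_floor_dbm=-92.0):
    """Fraction of OFF-period energy samples above the noise floor --
    a crude stand-in for the features a real classifier would use."""
    return sum(e > noise_floor_dbm for e in energy_dbm) / len(energy_dbm)

def synth_off_samples(n_aps, n=1000, seed=0):
    """Toy model: each AP independently occupies a sample slot with
    probability 0.3; occupied slots read -70 dBm, idle ones -95 dBm."""
    rng = random.Random(seed)
    return [-70.0 if any(rng.random() < 0.3 for _ in range(n_aps)) else -95.0
            for _ in range(n)]

# Busy fraction grows with the number of APs (roughly 1 - 0.7**n_aps),
# so even this single feature separates one BSS from many.
for n_aps in (1, 2, 4):
    print(n_aps, round(busy_fraction(synth_off_samples(n_aps)), 2))
```

A classifier trained on such statistics can distinguish "one BSS" from "two or more" without ever decoding a Wi-Fi packet, which is the core idea of the energy-based approach.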
Observing only the energy values during the LTE-U BS OFF time is relatively straightforward compared to decoding entire Wi-Fi packets, which would require a full Wi-Fi receiver at the LTE-U base station. We implement and validate the proposed ML-based approach through real-time experiments and demonstrate distinct patterns in the energy distributions of one versus many Wi-Fi AP transmissions. The presented ML-based approach achieves higher accuracy (close to 99% in all cases) than the existing auto-correlation (AC) and energy detection (ED) approaches.

## 4 Future NR-U Small Cells in 6 GHz

Our current research on spectrum sharing in 5 GHz has opened many exciting possibilities for addressing the research challenges of LTE/Wi-Fi coexistence in the 6 GHz unlicensed band [48, 49]. In the following, we outline some of our future directions for spectrum-sharing small cells.

Figure 2: Bandwidth Allocation on 6 GHz Spectrum [100]

### 4.1 Fair Coexistence of NR-U and Wi-Fi in the 6 GHz Spectrum [102]

Since licensed spectrum is a limited and expensive resource, its optimal utilization may require spectrum sharing between multiple network operators/providers of different types. Increasingly, licensed-unlicensed sharing is being contemplated to enhance network spectral efficiency beyond the more traditional unlicensed-unlicensed sharing. As the most common unlicensed incumbent, Wi-Fi is now broadly deployed in the unlicensed $5$ GHz band in North America, where approximately $500$ MHz of bandwidth is available. However, these $5$ GHz unlicensed bands are also seeing increasing deployment of cellular services such as LTE-LAA. Recently, the Federal Communications Commission (FCC) sought to open up 1.2 GHz of additional spectrum for unlicensed operation in the 6 GHz [102, 103] band through a Notice of Proposed Rule Making (NPRM) [100, 101, 57], as shown in Fig. 2.
Thus, this spectrum allocation for unlicensed operation will only accelerate the need for further coexistence solutions among heterogeneous systems. Hence it is clear that regulatory authorities worldwide are paying close attention to the 6 GHz band as the next spectrum band that will continue to enhance unlicensed services across the world. However, it is also clear that this band, like the 5 GHz band, will see both Wi-Fi and cellular systems being deployed, and hence the coexistence issues played out in the 5 GHz band will repeat in this new frequency range as well. In recognition of this, the two principal stakeholder standardization entities, IEEE and 3GPP, held a coexistence workshop in July 2019 [103] to discuss methods to address this before the next-generation standards are specified. This section discusses the recent activities around the FCC's 6 GHz NPRM and the IEEE and 3GPP efforts towards coexistence in the 6 GHz band.

### 4.2 6 GHz Coexistence: Deployment Scenarios and Channel Access

Although several industry entities were not in favor of a re-evaluation, IEEE recommended that coexistence evaluations for NR-U should include 802.11ac (in 5 GHz), 802.11ax (in 6 GHz), and 802.11ad (in 60 GHz). For the sub-7 GHz bands, coexistence evaluations will be technology-neutral (_e.g.,_ with respect to the channel access mechanism) and performed at random carrier frequencies in the 5 GHz band. These evaluations also necessitate devising suitable 11ac/ax coexistence topologies with a significant number of links below -72 dBm.

### 4.3 Future NR-U: Deployment Scenarios

The NR-U work item recently approved by 3GPP supports the existing unlicensed 5 GHz band and the new unlicensed "greenfield" 6 GHz band. Industry players such as Qualcomm expect that other unlicensed and shared spectrum bands, including mmWave, will be added to this list in future releases.
Researchers will study the following deployment scenarios to investigate the functionalities needed beyond the existing specifications for operation in unlicensed spectrum.

1. Carrier aggregation between licensed-band NR (PCell) and NR-U (SCell): (a) NR-U SCell with both DL and UL; (b) NR-U SCell with DL only.
2. Dual connectivity between licensed-band LTE (PCell) and NR-U (PSCell).
3. Stand-alone NR-U.
4. An NR cell with DL in the unlicensed band and UL in the licensed band.
5. Dual connectivity between licensed-band NR (PCell) and NR-U (PSCell).

Legacy cellular operators oppose the NR-U stand-alone scenario and want 3GPP to drop it. They fear stiff competition from new players who could use stand-alone NR-U for limited cellular operation. NR-U is likely to be a more potent competitor to 802.11 than LAA, as it will have a more flexible and efficient PHY/MAC marked by a shorter symbol duration, shorter HARQ Round Trip Time (RTT), etc. Further, NR-U can be deployed in every configuration where 802.11 currently operates if both the stand-alone and dual-connectivity modes are approved. Also, unlike 802.11, NR-U will be capable of deploying the same PHY/MAC with flexible configurations across all current and future unlicensed bands.

### 4.4 ML-Based Approaches to Solve Issues in Current and Future Spectrum Sharing

Below we list some interesting problems in LAA/Wi-Fi coexistence that can be addressed through ML-based approaches:

1. Narrowband vs. Wideband LBT in 6 GHz: The LBT mechanism is used by a device to avoid collisions by ensuring that no other transmissions are concurrently active in the channel. LTE-LAA follows CAT 4 LBT for most of its transmissions, while CAT 2 LBT is used for about 5% of DL transmissions. NR-U is likely to adopt a mechanism similar to the LAA LBT. NR-U in Release 16, like NR in Release 15, supports component carriers of up to 100 MHz bandwidth.
Besides, it supports the aggregation of several inter- and intra-band component carriers. Multi-carrier LBT channel access as defined for 5 GHz is assumed, _i.e.,_ Type A LBT in 3GPP TS 37.213, where each channel performs its own independent LBT procedure. Consequently, there is bound to be high complexity when the operating bandwidth is wide. The alternative, Type B LBT in 3GPP TS 37.213, can reduce this complexity by performing a single LBT across multiple channels. Wideband LBT could simplify the implementation of wideband operation when the channel is identified as free of narrowband interference, _e.g.,_ by limiting narrowband (20 MHz) signals to certain sub-bands, or through long/short-term measurements and LBT bandwidth adaptation. Hence, wideband LBT is beneficial for systems operating with wide bandwidth, as it simplifies the LBT implementation.

2. Intelligent Selection of the Unlicensed Channel by an LAA BS: While analyzing the data collected in real time on LAA/Wi-Fi coexistence [61], we found that in an extremely dense deployment scenario, multiple LAA operators contend for the unlicensed channel. Numerous unlicensed channels are available, but choosing a particular channel and estimating how long to occupy it, with the goal of not affecting Wi-Fi users and other LAA operators, is challenging. To solve this problem, we propose to use a Q-learning-based ML solution so that a channel and its occupancy time are decided intelligently based on parameters such as interference from other operators, load on the channels, channel activities of Wi-Fi users, etc.

3. Intelligent Channel Selection by a Wi-Fi AP in LTE/Wi-Fi Coexistence: In a multi-AP setting, an AP selects its operating channel based on the expected capacity of the existing links. The traditional way is to take an SINR-based capacity estimate into account.
However, this capacity model may sometimes fail to represent the complex interactions between the PHY and MAC layers in the presence of LAA, as it makes the scenario heterogeneous. As a result, decisions regarding channel selection may be delayed or inaccurate. To solve this problem, we propose to use supervised learning as a tool to model the complex interactions between the PHY and MAC layers based on factors such as the power and PHY rate of a neighboring Wi-Fi link.

4. Efficient Radio Resource Allocation Using Reinforcement Learning: In fifth-generation (5G) mobile broadband systems, Radio Resource Management (RRM) has reached unprecedented levels of complexity. To cope with ever more sophisticated RRM functionalities and to make the prompt decisions required in 5G, efficient radio resource scheduling will play a critical role in the RAN system. Depending on its purpose, the scheduling process is divided into three steps: prioritization, resource number determination, and resource allocation. However, to support a diverse range of applications such as ultra-reliable low-latency applications, IoT applications, V2X applications, AR/VR applications, massive multimedia applications, etc., the steps of the scheduling process become more complex. As a solution, we propose to use deep reinforcement learning tools to obtain feedback about the traffic in real time so that efficient scheduling decisions and optimal use of radio resources are achieved.

## 5 Conclusions

The deployment of small cells in 4G and 5G plays a crucial role in boosting capacity through efficient reuse of the same spectrum. This paper listed some of the open challenges and issues in current (5 GHz) and future (6 GHz) spectrum sharing. We will continue our research into coexistence by shifting focus to the newest unlicensed band: 6 GHz. The newest Wi-Fi version, 802.11ax, will coexist with the latest cellular technology, 5G NR-U.
Unlike LAA, 5G NR-U will transmit both uplink and downlink data in the unlicensed band. Unlike 802.11ac in 5 GHz, 802.11ax in 6 GHz will employ Orthogonal Frequency Division Multiple Access (OFDMA): these changes will create new coexistence scenarios.

## References

* [1] 3GPP, “TSGRAN; Study on Licensed-Assisted Access to Unlicensed Spectrum,” Tech. Rep. TR 36.889 V13.0.0, June 2015. * [2] M. K. Giluka, N. Rajoria, A. C. Kulkarni, V. Sathya, and B. R. Tamma, “Class based dynamic priority scheduling for uplink to support m2m communications in lte,” in 2014 IEEE World Forum on Internet of Things (WF-IoT), pp. 313–317, IEEE, 2014. * [3] R. Chaganti, V. Sathya, S. A. Ahammed, R. Rex, and B. R. Tamma, “Efficient son handover scheme for enterprise femtocell networks,” in 2013 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1–6, IEEE, 2013. * [4] V. Sathya, H. V. Gudivada, H. Narayanam, B. M. Krishna, and B. R. Tamma, “Enhanced distributed resource allocation and interference management in lte femtocell networks,” in 2013 IEEE 9th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 553–558, IEEE, 2013. * [5] B. M. Krishna, M. Siddula, V. Sathya, and B. R. Tamma, “A dynamic link aggregation scheme for heterogeneous networks,” 2014. * [6] S. Madhuri, V. Sathya, B. R. Tamma, et al., “A dynamic link aggregation scheme for heterogeneous wireless networks,” in 2014 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), pp. 1–6, IEEE, 2014. * [7] M. Tahalani, V. Sathya, A. Ramamurthy, U. Suhas, M. K. Giluka, and B. R. Tamma, “Optimal placement of femto base stations in enterprise femtocell networks,” in 2014 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1–6, IEEE, 2014. * [8] V. Sathya, A. Ramamurthy, and B. R.
Tamma, “Joint placement and power control of lte femto base stations in enterprise environments,” in 2015 International Conference on Computing, Networking and Communications (ICNC), pp. 1029–1033, IEEE, 2015. * [9] M. Tahalani, V. Sathya, U. Suhas, R. Chaganti, and B. R. Tamma, “Optimal femto placement in enterprise building,” in 2013 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1–3, IEEE, 2013. * [10] H. Lokhandwala, V. Sathya, and B. R. Tamma, “Phantom cell realization in lte and its performance analysis,” in 2014 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1–6, IEEE, 2014. * [11] R. V. Sathya and B. R. Tamma, “Dynamic spectrum allocation in femto based lte network,” in 2013 Fifth International Conference on Communication Systems and Networks (COMSNETS), pp. 1–2, IEEE, 2013. * [12] H. Lokhandwala, V. Sathya, and B. R. Tamma, “Phantom cell architecture for lte and its application in vehicular iot environments,” EAI Endorsed Transactions on Ubiquitous Environments, 2015. * [13] V. Sathya, K. Bala Murali Krishna, and B. R. Tamma, “Efficient interference management scheme for lte femtocell networks,” ACM Mobihoc (Poster), 2013. * [14] A. Ramamurthy, V. Sathya, V. Venkatesh, R. Ramji, and B. R. Tamma, “Energy-efficient femtocell placement in lte networks,” 2015. * [15] V. Sathya, A. Ramamourthy, S. Kumar, and B. R. Tamma, “On improving sinr in lte hetnet with d2d relays,” Elsevier Computer Communication (COMCOM), 2016. * [16] V. Sathya, A. Ramamourthy, M. Tahalani, and B. R. Tamma, “On femto placement and decoupled access for downlink and uplink in enterprise environments,” EAI Endorsed Transactions on Ubiquitous Environments (Future Internet), 2015. * [17] V. Sathya, V. Venkatesh, R. Ramji, A. Ramamourthy, and B. R. Tamma, “Handover and sinr optimized deployment of lte femto base stations in enterprise environments,” Wireless Personal Communications (WPC), pp.
1–25, 2016. * [18] V. Sathya, A. Kumar, A. Ramamourthy, S. Kumar, and B. R. Tamma, “Maximizing dual cell connectivity opportunities in lte small cells deployment,” in National Conference on Communications (NCC), IEEE, 2016. * [19] S. Dama, T. Valerrian, V. Sathya, and K. Kuchi, “A novel rach mechanism for dense cellular-iot deployments,” in Wireless Communications and Networking Conference (WCNC), IEEE, 2016. * [20] A. Ramamurthy, V. Sathya, S. Ghosh, A. Franklin, and B. R. Tamma, “On improving capacity of full-duplex small cells with d2d,” arXiv preprint arXiv:1606.07198, 2016. * [21] M. K. Giluka, M. S. A. Khan, G. Krishna, T. A. Atif, V. Sathya, and B. R. Tamma, “On handovers in uplink/downlink decoupled lte hetnets,” in 2016 IEEE Wireless Communications and Networking Conference, pp. 1–6, IEEE, 2016. * [22] M. K. Giluka, M. S. A. Khan, V. Sathya, and A. A. Franklin, “Leveraging decoupling in enabling energy aware d2d communications,” in 2016 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1–6, IEEE, 2016. * [23] A. M. Baswade, V. Sathya, B. R. Tamma, et al., “Unlicensed carrier selection and user offloading in dense lte-u networks,” in 2016 IEEE Globecom Workshops (GC Wkshps), pp. 1–6, IEEE, 2016. * [24] S. Ghosh, V. Sathya, A. Ramamurthy, B. Akilesh, and B. R. Tamma, “A novel resource allocation and power control mechanism for hybrid access femtocells,” Computer Communications, vol. 109, pp. 53–75, 2017. * [25] M. Iqbal, C. Rochman, V. Sathya, and M. Ghosh, “Impact of changing energy detection thresholds on fair coexistence of wi-fi and lte in the unlicensed spectrum,” in 2017 Wireless Telecommunications Symposium (WTS), pp. 1–9, IEEE, 2017. * [26] S. Y. Kumar, V. Sathya, and S. Ramanath, “Enhancing spectral efficiency in lte-d2d networks,” in 2017 9th International Conference on Communication Systems and Networks (COMSNETS), pp. 401–402, IEEE, 2017. * [27] B. Akilesh, V. Sathya, A. Ramamurthy, and B. R.
Tamma, “A novel scheduling algorithm to maximize the d2d spatial reuse in lte networks,” in 2016 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1–6, IEEE, 2016. * [28] D. Martolia, V. Sathya, A. K. Rangisetti, B. R. Tamma, and A. A. Franklin, “Enhancing performance of victim macro users via joint absf and dynamic power control in lte hetnets,” in 2017 Twenty-third National Conference on Communications (NCC), pp. 1–6, IEEE, 2017. * [29] M. Mehrnoush, V. Sathya, S. Roy, and M. Ghosh, “Analytical modeling of wi-fi and lte-laa coexistence: Throughput and impact of energy detection threshold,” IEEE/ACM Transactions on Networking, vol. 26, no. 4, pp. 1990–2003, 2018. * [30] V. Sathya, M. Mehrnoush, M. Ghosh, and S. Roy, “Association fairness in wi-fi and lte-u coexistence,” in 2018 IEEE Wireless Communications and Networking Conference (WCNC), pp. 1–6, IEEE, 2018. * [31] V. Sathya, M. Mehrnoush, M. Ghosh, and S. Roy, “Analysis of csat performance in wi-fi and lte-u coexistence,” in 2018 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 1–6, IEEE, 2018. * [32] S. M. Kala, V. Sathya, M. P. K. Reddy, B. Lala, and B. R. Tamma, “A socio-inspired calm approach to channel assignment performance prediction and wmn capacity estimation,” Journal of Network and Computer Applications, vol. 125, pp. 42–66, 2019. * [33] S. Dama, V. Sathya, K. Kuchi, and T. V. Pasca, “A feasible cellular internet of things: Enabling edge computing and the iot in dense futuristic cellular networks,” IEEE Consumer Electronics Magazine, vol. 6, no. 1, pp. 66–72, 2016. * [34] M. Mehrnoush, S. Roy, V. Sathya, and M. Ghosh, “On the fairness of wi-fi and lte-laa coexistence,” IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 735–748, 2018. * [35] R. Vanlin Sathya and B. R. Tamma, “Dynamic spectrum allocation in femto based lte network,” 2013. * [36] S. M. Kala, V. Sathya, M. P. K. Reddy, and B. R. 
Tamma, “icalm: A topology agnostic socio-inspired channel assignment performance prediction metric for mesh networks,” in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, pp. 702–704, 2018. * [37] A. Ramamurthy, V. Sathya, S. Ghosh, A. Franklin, and B. R. Tamma, “Dynamic power control and scheduling in full duplex cellular network with d2d,” Wireless Personal Communications, vol. 104, no. 2, pp. 695–726, 2019. * [38] S. M. Kala, V. Sathya, and B. R. Tamma, “Exploring the relationship between socio-inspired calm and network capacity through regression analysis,” in 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 2369–2374, IEEE, 2018. * [39] S. M. Kala, V. Sathya, S. S. Magdum, T. V. K. Buyakar, H. Lokhandwala, and B. R. Tamma, “Designing infrastructure-less disaster networks by leveraging the alljoyn framework,” in Proceedings of the 20th International Conference on Distributed Computing and Networking, pp. 417–420, 2019. * [40] H. Lokhandwala, V. Sathya, and B. R. Tamma, “Eai endorsed transactions preprint research article/editorial,” * [41] V. Sathya, A. Ramamurthy, and B. R. Tamma, “On placement and dynamic power control of femtocells in lte hetnets,” in 2014 IEEE Global Communications Conference, pp. 4394–4399, IEEE, 2014. * [42] V. Sathya, M. Mehrnoush, M. Ghosh, and S. Roy, “Energy detection based sensing of multiple wi-fi bsss for lte-u csat,” in 2018 IEEE Global Communications Conference (GLOBECOM), pp. 1–7, IEEE, 2018. * [43] S. M. Kala, W. K. Seah, V. Sathya, and B. Lala, “Statistical relationship between interference estimates and network capacity,” arXiv preprint arXiv:1904.12125, 2019. * [44] Y. Kumar, V. Sathya, and S. Ramanath, “Enhancing spectral utilization by maximizing the reuse in lte network,” arXiv preprint arXiv:1907.05201, 2019. * [45] V. Sathya, M. Mehrnoush, M. Ghosh, and S.
Roy, “Auto-correlation based sensing of multiple wi-fi bsss for lte-u csat,” in 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), pp. 1–7, IEEE, 2019. * [46] A. Dziedzic, V. Sathya, M. Ghosh, and S. Krishnan, “Detection of multiple wi-fi bsss for lte-u csat using machine learning approach,” 2019. * [47] V. Sathya, A. Dziedzic, M. Ghosh, and S. Krishnan, “Machine learning based detection of multiple wi-fi bsss for lte-u csat,” in 2020 International Conference on Computing, Networking and Communications (ICNC), pp. 596–601, IEEE, 2020. * [48] G. Garg, V. Reddy, V. Sathya, R. T. Bheemarjuna, et al., “An sla-aware network function selection algorithm for sfcs,” in 2019 IEEE 2nd 5G World Forum (5GWF), pp. 524–527, IEEE, 2019. * [49] S. M. Kala, V. Sathya, S. S. Magdum, and B. R. Tamma, “Odin: Enhancing resilience of disaster networks through regression inspired optimized routing,” Telecommunications Systems, 2019. * [50] V. Sathya, M. Mehrnoush, M. Ghosh, and S. Roy, “Wi-fi/lte-u coexistence: Real-time issues and solutions,” IEEE Access, vol. 8, pp. 9221–9234, 2020. * [51] S. M. Kala, V. Sathya, M. P. K. Reddy, B. Lala, and B. R. Tamma, “A socio-inspired calm approach to channel assignment performance prediction and wmn capacity estimation,” arXiv preprint, 2018. * [52] S. M. Kala, V. Sathya, S. W. KG, and B. R. Tamma, “Cirno: Leveraging capacity interference relationship for dense networks optimization,” in 2020 IEEE Wireless Communications and Networking Conference (WCNC), pp. 1–6, IEEE, 2020. * [53] A. Dziedzic, V. Sathya, M. I. Rochman, M. Ghosh, and S. Krishnan, “Machine learning enabled spectrum sharing in dense lte-u/wi-fi coexistence scenarios,” IEEE Open Journal of Vehicular Technology, vol. 1, pp. 173–189, 2020. * [54] A. K. Rangisetti and V. Sathya, “Qos aware and fault tolerant handovers in software defined lte networks,” Wireless Networks, pp. 1–19, 2020. * [55] V. Sathya, R. Madhumathi, and R.
Radhakrishnan, “Modified aco algorithm for resource allocation in cloud computing environment,” * [56] V. Sathya, S. Ghosh, A. Ramamurthy, and B. R. Tamma, “Small cell planning: Resource management and interference mitigation mechanisms in lte hetnets,” Wireless Personal Communications, vol. 115, no. 1, pp. 335–361, 2020. * [57] V. Sathya, S. M. Kala, M. I. Rochman, M. Ghosh, and S. Roy, “Standardization advances for cellular and wi-fi coexistence in the unlicensed 5 and 6 ghz bands,” GetMobile: Mobile Computing and Communications, vol. 24, no. 1, pp. 5–15, 2020. * [58] V. Sathya, S. M. Kala, S. Bhupeshraj, and B. R. Tamma, “Raptap: a socio-inspired approach to resource allocation and interference management in dense small cells,” Wireless Networks, pp. 1–24, 2020. * [59] V. Sathya, A. Ramamurthy, M. I. Rochman, and M. Ghosh, “Qos guaranteed radio resource scheduling in stand-alone unlicensed multefire,” in IEEE 5G World Forum, 2020. * [60] “Optimal femto placement in enterprise building,” in IEEE ANTS. * [61] V. Sathya, M. I. Rochman, and M. Ghosh, “Measurement-based coexistence studies of laa & wi-fi deployments in chicago,” IEEE Wireless Communication Magazine, 2020. * [62] M. Ghosh, V. Sathya, M. Iqbal, M. Mehrnoush, and S. Roy, “Coexistence of lte-laa and wi-fi: Analysis simulation and experiments,” in P802. 11 Coexistence SC Workshop, 2019. * [63] S. C. Liew, C. Kai, H. C. Leung, and P. Wong, “Back-of-the-Envelope Computation of Throughput Distributions in CSMA Wireless Networks,” IEEE Transactions on Mobile Computing, vol. 9, no. 9, pp. 1319–1331, 2010. * [64] G. Bianchi, “Performance Analysis of the IEEE 802.11 Distributed Coordination Function,” IEEE Journal on selected areas in communications, vol. 18, no. 3, pp. 535–547, 2000. * [65] Z. Jiang and S. Mao, “Harmonious Coexistence and Efficient Spectrum Sharing for LTE-U and Wi-Fi,” in Proc. of International Conference on Mobile Ad Hoc and Sensor Systems (MASS), pp. 275–283, IEEE, 2017. * [66] I. S. 
Association et al., “802.11-2012-IEEE Standard Part 11: Wireless LAN MAC and PHY Specifications,” Retrived from http://standards. ieee. org/about/get/802/802.11. html, 2012. * [67] L.-U. Forum, “LTE-U SDL Coexistence Specifications.” LTE-U Forum, http://www.lteuforum.org/documents.html, 2015. * [68] C. Chen, R. Ratasuk, and A. Ghosh, “Downlink performance analysis of LTE and WiFi coexistence in unlicensed bands with a simple listen-before-talk scheme,” in Proc. of Vehicular Technology Conference (VTC) Spring, pp. 1–5, IEEE, 2015. * [69] A. M. Baswade, T. A. Atif, B. R. Tamma, and A. A. Franklin, “On the Impact of Duty Cycled LTE-U on Wi-Fi Users: An Experimental Study,” in Proc. of International Conference on Communication Systems and Networks, pp. 196–219, Springer, 2018. * [70] Y. Gao, X. Chu, and J. Zhang, “Performance analysis of laa and wifi coexistence in unlicensed spectrum based on markov chain,” in Proc. of Global Communications Conference (GLOBECOM), pp. 1–6, IEEE, 2016. * [71] A. M. Baswade, L. Beltramelli, F. A. Antony, M. Gidlund, B. R. Tamma, and L. Guntupalli, “Modelling and analysis of wi-fi and laa coexistence with priority classes,” in Proc. of International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 1–8, IEEE, 2018\. * [72] Qualcomm, “LTE in Unlicensed Spectrum: Harmonious Coexistence with Wi-Fi.” Qualcomm White Paper, June 2014. * [73] 3GPP, “3GPP-TSG-RAN-WG1; Evolved Universal Terrestrial Radio Access (E-UTRA),” Tech. Rep. TR 36.814 V9.0.0, March 2010. * [74] J. Jeon, H. Niu, Q. C. Li, A. Papathanassiou, and G. Wu, “LTE in the Unlicensed Spectrum: Evaluating Coexistence Mechanisms,” in Proc. of Globecom Workshops (GC Wkshps), 2014, pp. 740–745, IEEE, 2014. * [75] E. Almeida, A. M. Cavalcante, R. C. Paiva, F. S. Chaves, F. M. Abinader, R. D. Vieira, S. Choudhury, E. Tuomaala, and K. Doppler, “Enabling LTE/WiFi coexistence by LTE blank subframe allocation,” in Proc. 
of International Conference on Communications (ICC), pp. 5083–5088, IEEE, 2013\. * [76] A. Abdelfattah and N. Malouch, “Modeling and Performance Analysis of Wi-Fi Networks Coexisting with LTE-U,” in Proc. of Conference on Computer Communications (INFOCOM), pp. 1–9, IEEE, 2017. * [77] P. C. Ng and S. C. Liew, “Throughput analysis of ieee802.11 multi-hop adhoc networks,” IEEE/ACM Transactions on networking, vol. 15, no. 2, pp. 309–322, 2007. * [78] Y. Gao, D.-M. Chiu, and J. Lui, “Determining the end-to-end Throughput Capacity in Multi-hop Networks: Methodology and Applications,” in ACM SIGMETRICS Performance Evaluation Review, vol. 34, pp. 39–50, ACM, 2006. * [79] Cisco, “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2017-2022.” Cisco White Paper, Feb 2019. * [80] K. Medepalli and F. A. Tobagi, “Towards Performance Modeling of IEEE 802.11 based Wireless Networks: A Unified Framework and its Applications,” in Proc. of International Conference on Computer Communications (INFOCOM), pp. 1–12, IEEE, 2006. * [81] A. M. Cavalcante, E. Almeida, R. D. Vieira, S. Choudhury, E. Tuomaala, K. Doppler, F. Chaves, R. C. Paiva, and F. Abinader, “Performance Evaluation of LTE and Wi-Fi Coexistence in Unlicensed Bands,” in Proc. of Vehicular Technology Conference (VTC Spring), pp. 1–6, IEEE, 2013. * [82] S. Sagari, I. Seskar, and D. Raychaudhuri, “Modeling the coexistence of lte and wifi heterogeneous networks in dense deployment scenarios,” in Proc. of International Conference on Communication Workshop (ICCW), pp. 2301–2306, IEEE, 2015. * [83] R. Ratasuk, M. A. Uusitalo, N. Mangalvedhe, A. Sorri, S. Iraji, C. Wijting, and A. Ghosh, “License-exempt lte deployment in heterogeneous network,” in Proc. of International Symposium on Wireless Communication Systems (ISWCS), pp. 246–250, IEEE, 2012. * [84] T. Nihtilä, V. Tykhomyrov, O. Alanen, M. A. Uusitalo, A. Sorri, M. Moisio, S. Iraji, R. Ratasuk, and N. 
Mangalvedhe, “System performance of lte and ieee 802.11 coexisting on a shared frequency band,” in Proc. of IEEE Wireless Communications and Networking Conference (WCNC), pp. 1038–1043, IEEE, 2013. * [85] S. Sagari, S. Baysting, D. Saha, I. Seskar, W. Trappe, and D. Raychaudhuri, “Coordinated Dynamic Spectrum Management of LTE-U and Wi-Fi Networks,” in Proc. of IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), pp. 209–220, September 2015. * [86] LTE-U Forum, “LTE-U Technical Report.” [Online] http://www.lteuforum.org/documents.html, 2015. * [87] Y. Li, F. Baccelli, J. G. Andrews, T. D. Novlan, and J. C. Zhang, “Modeling and analyzing the coexistence of Wi-Fi and LTE in unlicensed spectrum,” IEEE Transactions on Wireless Communications, vol. 15, no. 9, pp. 6310–6326, 2016. * [88] C. Cano and D. J. Leith, “Coexistence of WiFi and LTE in unlicensed bands: A proportional fair allocation scheme,” in Proc. of International Conference on Communication Workshop (ICCW), pp. 2288–2293, IEEE, 2015. * [89] M. Sawahashi, Y. Kishiyama, A. Morimoto, D. Nishikawa, and M. Tanno, “Coordinated multipoint transmission/reception techniques for lte-advanced,” IEEE Wireless Communications, vol. 17, no. 3, p. 26, 2010\. * [90] S. Brueck, L. Zhao, J. Giese, and M. A. Amin, “Centralized scheduling for joint transmission coordinated multi-point in lte-advanced,” in 2010 International ITG Workshop on Smart Antennas (WSA), pp. 177–184, IEEE, 2010\. * [91] D. Bladsjö, M. Hogan, and S. Ruffini, “Synchronization aspects in lte small cells,” IEEE Communications Magazine, vol. 51, no. 9, pp. 70–77, 2013. * [92] R. Mendrzik, R. A. J. Castillo, G. Bauch, and E. Seidel, “Interference coordination-based downlink scheduling for heterogeneous lte-a networks,” in 2016 IEEE Wireless Communications and Networking Conference, pp. 1–6, IEEE, 2016. * [93] M.-Y. Zhang, Y. Li, T. Zhou, Y. Yang, H. Hu, and H. 
Wang, “Coordination method between access points using unlicensed frequency band,” Apr. 16 2019. US Patent 10,264,603. * [94] V. Sathya, M. Mehrnoush, M. Ghosh, and S. Roy, “Wi-Fi/LTE-U Coexistence: Real-Time Issues and Solutions,” IEEE Access, vol. 8, pp. 9221–9234, 2020\. * [95] R. Liu, Q. Chen, G. Yu, G. Y. Li, and Z. Ding, “Resource Management in LTE-U Systems: Past, Present, and Future,” IEEE Open Journal of Vehicular Technology, vol. 1, pp. 1–17, 2019. * [96] P. Charalampou, I. Giannoulakis, E. Kafetzakis, and E. D. Sykas, “Experimenting on LTE-U and WiFi coexistence,” in 2019 4th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), pp. 1–6, IEEE, 2019. * [97] R. Biswas and J. Wu, “Co-existence of LTE-U and Wi-Fi with Direct Communication,” in ICC 2019-2019 IEEE International Conference on Communications (ICC), pp. 1–6, IEEE, 2019. * [98] S. Zinno, G. Di Stasi, S. Avallone, and G. Ventre, “On a fair coexistence of LTE and Wi-Fi in the unlicensed spectrum: A Survey,” Computer Communications, vol. 115, pp. 35–50, 2018. * [99] A. M. Voicu, L. Simić, and M. Petrova, “Survey of spectrum sharing for inter-technology coexistence,” IEEE Communications Surveys & Tutorials, vol. 21, no. 2, pp. 1112–1144, 2018. * [100] FCC, “FCC Notice of Proposed Rulemaking on Unlicensed Use of the 6 GHz Band.” [Online] https://docs.fcc.gov/public/attachments/FCC-18-147A1.pdf, 2018. * [101] S. Methley and W. Webb., “Wi-Fi Spectrum Needs Study, Final Report to Wi-Fi Alliance 3rd ed.” [Online] http://www.wi-fi.org, 2017. * [102] N. Patriciello, S. Goyaly, S. Lagen, L. Giupponi, B. Bojovic, A. Demir, and M. Beluri, “Nr-u and wigig coexistence in 60 ghz bands,” arXiv preprint arXiv:2001.04779, 2020. * [103] IEEE 802.11 coexistence workshop., “IEEE 802.11 coexistence workshop..” [Online] https://grouper.ieee.org/groups/802/11/, 2019. * [104] D. Xu, A. Zhou, X. Zhang, G. Wang, X. Liu, C. An, Y. Shi, L. Liu, and H. 
Ma, “Understanding operational 5g: A first measurement study on its coverage, performance and energy consumption,” in Proceedings of the Annual conference of the ACM Special Interest Group on Data Communication on the applications, technologies, architectures, and protocols for computer communication, pp. 479–494, 2020. * [105] A. Narayanan, E. Ramadan, R. Mehta, X. Hu, Q. Liu, R. A. Fezeu, U. K. Dayalan, S. Verma, P. Ji, T. Li, et al., “Lumos5g: Mapping and predicting commercial mmwave 5g throughput,” in Proceedings of the ACM Internet Measurement Conference, pp. 176–193, 2020. * [106] F. Wilhelmi, S. Barrachina-Muñoz, B. Bellalta, C. Cano, A. Jonsson, and G. Neu, “Potential and pitfalls of multi-armed bandits for decentralized spatial reuse in wlans,” Journal of Network and Computer Applications, vol. 127, pp. 26–42, 2019.
# Nature-Inspired Algorithms for Wireless Sensor Networks: A Comprehensive Survey
Abhilash Singh, Sandeep Sharma, Jitendra Singh
Indian Institute of Science Education and Research Bhopal, India; School of ICT, Gautam Buddha University, Greater Noida, India; Department of Electrical Engineering, Indian Institute of Technology Kanpur, India
###### Abstract
Nature-inspired algorithms are emerging as a suitable method for solving the critical issues in Wireless Sensor Networks (WSNs), where sensor lifetime is limited. Achieving optimal network coverage is one of the challenging issues that needs to be examined critically before any network setup. Optimal network coverage not only minimizes the consumption of the limited energy of battery-driven sensors but also reduces the sensing of redundant information. In this paper, we focus on nature-inspired optimization algorithms concerning optimal coverage in WSNs. In the first half of the paper, we briefly discuss the taxonomy of the optimization algorithms along with the problem domains in WSNs. In the second half, we compare the performance of two nature-inspired algorithms for achieving optimal coverage in WSNs. The first is a combined Improved Genetic Algorithm and Binary Ant Colony Algorithm (IGA-BACA), and the second is Lion Optimization (LO). The simulation results confirm that LO gives better network coverage and converges faster than IGA-BACA. Further, we observed that LO achieves optimal coverage in fewer generations than IGA-BACA. This review will help researchers explore applications in this field and beyond.
###### keywords:
Optimal Coverage, Bio-inspired Algorithm, Lion Optimization, WSNs.
††journal: Computer Science Review
## 1 Introduction
Sensors in WSNs can sense, collect and transmit information together [1].
All these tasks need to be performed efficiently in order to minimize the wastage of the sensors' limited battery lifetime. We cannot extend sensor lifetime by supplying external or additional energy, because most sensors are deployed in hard-to-reach areas [2, 3, 4, 5, 6, 7, 8, 9, 10]. Much work has been done to increase the lifetime of the sensor node. Liang et al. [11] proposed the Huang algorithm, an optimal energy clustering algorithm which ensures balanced depletion of energy over the whole network and thereby prolongs the lifetime of the system. Cardei et al. [12] proposed the TianD algorithm, which extends the operational time of the sensors by arranging them into several maximal disjoint set covers that are activated successively. However, these algorithms have limitations. The Huang algorithm is highly complex, and if the volume of data to be communicated is large, it may block the channel. The TianD algorithm has lower complexity, but it cannot identify redundant nodes, i.e., nodes sensing redundant information. In addition to the energy constraint, accurate sensing and non-redundant information are critical challenges in WSNs. To sense non-redundant information, the sensors need to be placed sufficiently far apart so that the overlap between their sensing regions is minimal. However, if the sensors are placed too far apart, uncovered areas arise, termed coverage holes or blind areas. To ensure guaranteed coverage, Wang et al. [13] proposed a Coverage Configuration Protocol (CCP), which guarantees coverage and connectivity with self-configuration for a wide range of applications. However, the CCP algorithm underperforms when the number of sensors is large.
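The trade-off between coverage holes and overlap can be quantified numerically. The sketch below is our own illustration, not taken from the works cited above (the field size, sensor positions and radii are hypothetical): it estimates the covered fraction of a square field by Monte Carlo sampling, treating each sensor's sensing region as a disk.

```python
import random

def coverage_fraction(sensors, radius, field=100.0, samples=20000, seed=0):
    """Estimate the fraction of a square field covered by disk sensors.

    sensors: list of (x, y) positions; radius: common sensing radius.
    A sampled point covered by no disk lies in a coverage hole.
    """
    rng = random.Random(seed)
    r2 = radius * radius
    covered = 0
    for _ in range(samples):
        px, py = rng.uniform(0, field), rng.uniform(0, field)
        if any((px - sx) ** 2 + (py - sy) ** 2 <= r2 for sx, sy in sensors):
            covered += 1
    return covered / samples

# A sparse placement leaves coverage holes; a denser one closes them
# at the cost of overlap (and hence redundant sensing).
sparse = [(25, 25), (75, 75)]
dense = [(25, 25), (25, 75), (75, 25), (75, 75), (50, 50)]
print(coverage_fraction(sparse, 20), coverage_fraction(dense, 30))
```

Such an estimator can serve as the coverage term of a fitness function when node placement is posed as an optimization problem, as discussed below.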
After critically analyzing the problems of energy constraint and sensor node separation (i.e., node placement), we observed that there exists a trade-off between them. In the literature, researchers have proposed solutions to the energy constraint and node placement problems individually, but not jointly. Keeping in view the limitations of these solutions, which consider each problem in isolation, we combine the two problems into a single multi-objective optimization problem. Balancing the trade-off then amounts to optimizing this multi-objective problem; after successful optimization, we can achieve optimal coverage with fewer sensor nodes. Several reviews have been published on the use of nature-inspired algorithms in WSNs [14, 15, 16, 17, 18]. However, only a few cover the optimal coverage aspect of WSNs [19, 20, 21]. In [19], the authors discussed the various issues generally encountered when using a nature-inspired optimization technique for sensor deployment leading to optimal coverage. In [20], the authors compared three algorithms, namely a standard Multi-Objective Evolutionary Algorithm (MOEA), the Non-dominated Sorting Genetic Algorithm (NSGA-II) and Indicator-Based Evolutionary Algorithms, for optimal coverage in WSNs. Recently, [21] discussed the theoretical, mathematical and practical application of nature-inspired algorithms in WSNs, covering the genetic algorithm, the differential evolution algorithm, NSGA and genetic programming in depth for routing, clustering, coverage and localization. Nevertheless, none of them provides a critical review of the problem domains in WSNs, and in particular of optimal coverage. In this paper, firstly, we briefly discuss nature-inspired algorithms and their application in WSNs.
We also discuss the advantages and disadvantages of the work done by various researchers. Later, we compare the performance of two such algorithms on the multi-objective optimization problem stated above. The first algorithm is IGA-BACA [22, 23, 24, 25], a hybrid of a modified evolutionary and a swarm-based nature-inspired algorithm. The second is LO [26], a purely swarm-based nature-inspired algorithm. The rest of the paper is organized as follows. In Section 2, we discuss the WSN problem domains, covering the critical issues of WSNs in four categories, followed by a brief discussion of the taxonomy of some prominent optimization algorithms. In Section 3, we briefly discuss the theoretical and mathematical aspects of some nature-inspired algorithms. In Section 4, we discuss solutions to the problem domains of WSNs. In Section 5, we discuss the optimal coverage aspect in detail with respect to nature-inspired algorithms. We present the system model in Section 6 and the simulation results in Section 7. Lastly, in Section 8, we present the conclusion and the future scope of the work. For better readability, the outline of the paper is shown in Fig. 1.
Figure 1: Organization of the paper.
Figure 2: Problem domains of WSNs.
## 2 WSNs and Optimizations
The critical issues in WSNs can be broadly classified into three categories, namely energy efficiency, Quality of Service (QoS) and security. There exists a trade-off between these issues: for example, if we want good QoS, we have to compromise on network lifetime, and the same holds for the security parameters. A significant amount of work has been done to address these issues individually. However, many loopholes remain when they are addressed in isolation.
So, to develop better WSNs, we need to optimize these issues simultaneously. One way of doing this is to formulate a multi-objective function and optimize it using a suitable optimizer or algorithm. The selection of a suitable algorithm depends on various factors, such as the behaviour of the algorithm, the type of problem, time constraints, resource availability, and the desired accuracy. We first discuss the problem domains in WSNs and then review the optimization techniques available to date to solve them.
Figure 3: Coverage: (a) Hexagonal shape-based, (b) Circular shape-based (radius = a), (c) Real time-based (irregular) and (d) Circular shape-based (radius $=a^{{}^{\prime}}$).
### 2.1 Problem Domains in WSNs
We have reviewed the potential of optimization with a focus on four different areas in WSNs, as shown in Fig. 2:
* 1. Optimal Coverage in WSNs
* 2. Data Aggregation in WSNs
* 3. Energy Efficient Clustering and Routing in WSNs
* 4. Sensor Localization in WSNs
We first briefly discuss each of these problem domains; the work done to solve these issues using optimization follows in Section 4.
#### 2.1.1 Optimal Coverage in WSNs
Coverage is a necessary and hence essential topic in the study of WSNs. Coverage of a given target area is defined as finding a set of sensors that covers the given area or all the target points. Optimal coverage means covering the entire area, or all the target points, with a minimum number of sensors. One of the crucial parameters in sensor coverage is the shape of the sensing area. In Fig. 3 (a)-(d), we present four two-dimensional geometric sensing shapes. In real life, the shape of the sensing area is irregular and complex due to terrain features and solid structures; Fig. 3 (c) represents a typical example of a real-life sensing shape. However, for computational and conceptual ease, we often adopt either a hexagonal or a circular shape.
The hexagonal shape is often used for analysis in WSNs because it is flexible and tiles the plane without overlap, as illustrated in Fig. 3 (a). However, because of its low complexity, the circular shape is more popular. The limitation of the circular shape is that it creates coverage holes, as illustrated in Fig. 3 (b). This limitation can be compensated for by increasing the radius of the circle, as illustrated in Fig. 3 (d); however, this gives rise to a new issue of overlapping regions. These overlapping regions lead to the sensing of redundant information and waste the sensors' limited battery. Nevertheless, if we critically compare the three idealized shapes with the real-life sensing shape, Fig. 3 (d) turns out to be the best representative of Fig. 3 (c). The key challenge in this problem domain is therefore to reduce these overlapping sensing regions while leaving no coverage hole. The greater the overlapping area, the more redundant information the sensors will sense, and hence the greater the wastage of their limited batteries. One way of minimizing this redundancy is to optimize the sensor node placement, which is a single-objective optimization problem. We can extend the single objective to a multi-objective one by considering other network parameters.
#### 2.1.2 Data Aggregation in WSNs
A second way of minimizing the sensing of redundant information is data aggregation, an energy-efficient technique in WSNs. While monitoring an area, sensors collect local information and send either completely or partially processed data to the data aggregation centre. Based on the data received, the data aggregation centre makes decisions that improve the lifetime of the sensors by eliminating the sensing of overlapping or common regions.
Figure 4: Types of data aggregation: (a) Tree-based, (b) Cluster-based, (c) Grid-based and (d) Chain-based.
We can broadly classify data aggregation techniques into four types: tree-based, cluster-based, grid-based and chain-based, as illustrated in Fig. 4 (a)-(d). The tree-based technique is built on a tree architecture in which the source node acts as coordinator and aggregation takes place at intermediate nodes, known as aggregator nodes; lower-level nodes forward their information to the upper-level nodes. The cluster-based technique is built on a clustering architecture: the network is first divided into several clusters, followed by Cluster Head (CH) selection based on sensor parameters such as residual energy. The CH first aggregates the data locally within its cluster, and the aggregated data is then sent to the sink. For each new round of data transmission, a new CH is selected to avoid excessive energy consumption at the CH. In the grid-based technique, the network is first divided into several areas, and every area reports the occurrence of any new event; aggregation takes place at the grid aggregator node, also known as the central node. In the chain-based technique, each sensor node transfers its data to a neighbouring node, and aggregation takes place at the lead node. The main challenging issues in this problem domain are:
* 1. Addressing the problem of optimal power allocation.
* 2. Finding the minimum number of aggregation points while routing data.
* 3. Ensuring consistency for large-scale and dynamic WSNs.
#### 2.1.3 Energy Efficient Clustering and Routing in WSNs
Due to the limited energy supply of sensors, energy-efficient infrastructure is of utmost importance. Most of a sensor's energy is consumed in transmitting the sensed data, and the energy required for data transmission increases exponentially with the transmission distance.
Consequently, data transmission in WSNs follows multi-hop communication. Routing in WSNs refers to the path traversed by data packets to reach the sink from the source node. First, the sensors are clustered into groups; then a CH is selected for each group, which collects all the data from the non-CH sensors. Subsequently, the collected data is transmitted to the sink using optimal routing techniques. The main challenging issues in this problem domain are:
* 1. Selection of high-energy CHs and an optimal routing path in each round.
* 2. Maximization of the data delivered and the network lifetime.
* 3. Minimization of the communication distance.
Figure 5: Working of the localization system.
#### 2.1.4 Sensor Localization in WSNs
Sensor localization is the process of calculating the location of a sensor in a network. It consists of two phases, distance estimation and position calculation, as illustrated in Fig. 5. An anchor (or beacon) node is a node whose location is known, either through the Global Positioning System (GPS) or by manual pre-programming during deployment. In the first phase, the relative distance between the anchor and the unknown node is estimated. In the second phase, the coordinates of the unknown node with respect to the anchor nodes are calculated using this gathered information. To localize the other nodes in the WSN, the available distance and position information is processed by various localization algorithms; a detailed study of such algorithms can be found in [27]. The main challenging issues in this problem domain are:
* 1. Minimization of the localization error.
* 2. Increasing the accuracy of the unknown node's location.
### 2.2 Optimization in WSNs
Optimization can be performed with a model, a simulator or an algorithm. In this paper, we evaluate the potential of optimizing the problem domains in WSNs using the algorithmic approach.
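The position-calculation phase of localization described above can be made concrete with a small example. The sketch below is our own illustration (the survey does not prescribe a particular algorithm): it linearizes the range equations $d_i^2 = (x - x_i)^2 + (y - y_i)^2$ by subtracting the first anchor's equation, then solves for the unknown node's coordinates by least squares, so it works with three or more anchors and noisy ranges alike.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position estimate from anchor positions and ranges.

    Subtracting anchor 1's range equation from each other anchor's
    cancels the quadratic terms, leaving the linear system A p = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x1, y1 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])            # rows: [2(x_i-x_1), 2(y_i-y_1)]
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - x1 ** 2 - y1 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical anchors at known positions; ranges to a node at (3, 4).
anchors = [(0, 0), (10, 0), (0, 10)]
true = np.array([3.0, 4.0])
ranges = [np.hypot(*(true - a)) for a in np.asarray(anchors, float)]
print(trilaterate(anchors, ranges))  # ≈ [3. 4.]
```

With noisy range estimates the least-squares solution minimizes the residual of the linearized system, which is why more anchors generally reduce the localization error mentioned above.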
A detailed taxonomy of the optimization algorithms frequently used in WSNs is shown in Fig. 6. More than 100 nature-inspired algorithms have appeared since 2000, so it is not possible to list all of them in one taxonomy. For example, Xing and Gao [28] list 134 such algorithms, and the online Bestiary repository lists more than 200 [29]. The most recent and complete taxonomies and databases of nature-inspired algorithms can be found in [30]. Optimization algorithms are classified into deterministic (local search) and stochastic (global search) methods. Deterministic methods come with a theoretical guarantee of reaching the global minimum, or at least a local minimum, whereas stochastic methods only provide guarantees in terms of probability. However, stochastic methods are faster than deterministic ones, and they are suitable for black-box formulations and ill-behaved functions. In contrast, deterministic methods mostly rely on theoretical assumptions about the problem formulation and on its analytical properties.
Figure 6: Taxonomy of the optimization techniques.
Figure 7: Venn diagram for broad classification of optimization algorithms.
Figure 8: Regions of application for various algorithms.
Stochastic methods are further classified into heuristic and meta-heuristic algorithms. Both types are used to speed up the search for a global optimum in cases where finding an optimal solution is difficult. Heuristic algorithms are problem-dependent: because they are adapted to the problem and greedy in nature, they are highly prone to getting stuck at local optima, and may thus fail to obtain the global optimum. In contrast, meta-heuristic algorithms are problem-independent; their non-adaptive, non-greedy nature enables their use as black boxes.
Such algorithms sometimes accept a temporary deterioration of the solution (e.g., the simulated annealing method) in order to reach the global optimum. Meta-heuristic algorithms are also known as nature-inspired algorithms or intelligent optimization algorithms [31, 32, 33], as they are formulated by drawing inspiration from nature. Nature-inspired/meta-heuristic algorithms are further classified as bio-inspired, physics-inspired, geography-inspired and human-inspired. The majority of nature-inspired algorithms are inspired by biological systems, so a large share of them are bio-inspired (Fig. 7). Bio-inspired algorithms are further classified into three groups, namely evolutionary, swarm-based and plant-based. Evolutionary algorithms are based on principles of evolution, such as Darwin's principles of selection, heredity and variation [34], whereas swarm algorithms are based on collective intelligence [35, 36]. To represent the present status of these algorithms in the context of WSNs, we have created a Venn diagram (Fig. 8) that illustrates the different regions of application, or problem domains. In Fig. 8, the regions R 1, R 2, R 3 and R 4 represent the application areas for optimal coverage, data aggregation, energy-efficient clustering and routing, and sensor localization, respectively, while the overlapping regions represent combined areas of application (e.g., a region lying in the overlap of the first two covers both optimal coverage and data aggregation). Finally, Table 1 summarizes the current status of bio-inspired algorithms in the context of WSNs. Not all bio-inspired algorithms are of potential use in WSNs; the algorithm for a specific problem in the WSN arena is selected based on the analogous parameters between the problem domain and the algorithm (e.g., Table 3) [37, 38].
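The acceptance of temporarily worse solutions mentioned above is the defining move of simulated annealing. The minimal sketch below is our own illustration (the objective function and all parameter values are hypothetical, not from the survey): a worse candidate is accepted with probability $e^{-\Delta/T}$, which shrinks as the temperature cools, letting the search escape local optima that would trap a greedy heuristic.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=20.0, cooling=0.999,
                        iters=8000, seed=0):
    """Minimize f over the reals, occasionally accepting worse moves."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)      # random neighbour
        fc = f(cand)
        # Always accept improvements; accept deteriorations with
        # probability exp(-delta / T), which vanishes as T -> 0.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# A multimodal test function with many local minima (hypothetical
# example objective); its global minimum is 0 at x = 0.
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
print(simulated_annealing(f, x0=4.0))
```

At high temperature the walk behaves almost like random search; as the temperature drops it degenerates into greedy descent, which is the problem-independent, black-box behaviour the text ascribes to meta-heuristics.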
According to the previous studies (Table 1), only three algorithms (PSO, GA, and ACO) cover all the problem domains of WSNs (i.e., lie in region R13). Hence, PSO, GA, ACO and their modifications, such as IGA, BACA and IGA-BACA (a combined meta-heuristic), are suitable for optimizing the problem domains in WSNs. In this study, we have evaluated the potential of LO for optimal coverage in WSNs. In the next section, we give an insight into all these algorithms.

Table 1: Algorithms with region of application.

Algorithm | Region of application | Main references
---|---|---
GA [39] | R13 | [40, 41, 42, 43]
Evolutionary programming [44] | R4 | [45]
Learning classifier system [46] | R14 | Not addressed
Genetic programming [47] | R10 | [48, 49]
Evolutionary strategy [50] | R10 | [51, 52]
Estimation of distribution algorithm [53, 54] | R12 | [55, 56]
Differential evolution [57] | R12 | [58, 59, 60]
Multi-factorial evolutionary algorithm [61] | R2 | [62]
Multi-tasking genetic algorithm [63] | R14 | Not addressed
ACO [64, 65, 66, 67] | R13 | [68, 69, 70, 71]
PSO [72, 73, 74] | R13 | [75, 76, 77, 78]
Bacterial foraging algorithm [79, 80, 81] | R12 | [82, 83, 84]
Artificial fish swarm optimization [85, 86] | R12 | [87, 88, 89]
Artificial bee colony [90, 91, 92, 93] | R12 | [94, 95, 96, 97]
Bee system [98, 99] | R14 | Not addressed
Bees algorithm [100, 101] | R4 | [102]
Virtual bees [103] | R14 | Not addressed
Virtual ant algorithm [104] | R14 | Not addressed
Cat swarm [105, 106] | R15 | [107, 108]
Accelerated particle swarm optimization [109] | R14 | Not addressed
Good lattice swarm optimization [110] | R14 | Not addressed
Monkey search [111] | R3 | [112]
Firefly algorithm [113, 114, 115] | R12 | [116, 117, 118]
Fast bacterial swarming algorithm [119] | R14 | Not addressed
Bee colony optimization [120] | R2 | [121]
Bee swarm optimization [122, 123] | R14 | Not addressed
Bumblebees algorithm [124] | R14 | Not addressed
Cuckoo search [125, 126, 127] | R11 | [128, 129, 130]
Hierarchical swarm model [131] | R14 | Not addressed
Consultant guided search [132, 133, 134] | R14 | Not addressed
Bat algorithm [135] | R12 | [136, 137, 138]
Wolf search [139] | R14 | Not addressed
Krill herd [140] | R15 | [141, 142]
Weightless swarm algorithm [143] | R14 | Not addressed
Eagle strategy [144] | R14 | Not addressed
Gray wolf optimizer [145, 146] | R12 | [147, 148, 149]
Ant lion optimizer [150] | R5 | [151, 152]
Dragonfly algorithm [153] | R6 | [154, 155]
Crow search algorithm [156] | R3 | [157]
LO [26] | R9 | [151, 158]
Whale optimizer algorithm [159] | R12 | [160, 161, 162]
Sperm whale algorithm [163] | R14 | Not addressed
Red deer algorithm [164] | R14 | Not addressed
Grasshopper optimization algorithm [165] | R14 | Not addressed
Spotted hyena algorithm [166] | R14 | Not addressed
Salp swarm algorithm [167] | R10 | [168, 169]
Artificial flora optimization algorithm [170] | R14 | Not addressed
Squirrel search algorithm [171] | R14 | Not addressed
Shuffled shepherd algorithm [172] | R14 | Not addressed
Group teaching algorithm [173] | R14 | Not addressed

## 3 Theoretical Background of the Leading Algorithms in WSNs Arena

### 3.1 Mathematical Foundation of the Nature-inspired Algorithms

In this subsection, we discuss the generic mathematics of nature-inspired algorithms. In computational science, any optimization algorithm can be analyzed mathematically as an iterative process. According to [174, 175], any nature-inspired algorithm with $k$ parameters, $p=(p_{1},...,p_{k})$, and $m$ random variables, $\epsilon=(\epsilon_{1},...,\epsilon_{m})$, for a single-agent, trajectory-based system can be expressed mathematically as

$x^{t+1}=\phi(x^{t},p(t),\epsilon(t))$ (1)

where $\phi$ represents the non-linear mapping from the current solution (at $t$) to the better solution (at $t+1$).
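Viewed this way, any such algorithm reduces to repeatedly applying an update map. As a hedged illustration (not from the source), here is a minimal Python sketch of the single-agent scheme of Equation 1, with a hypothetical quadratic objective and a random accept-if-better step standing in for $\phi$:

```python
import random

def minimize(phi, x0, p, steps=2000, seed=0):
    """Generic single-agent iteration x_{t+1} = phi(x_t, p, eps_t) (Equation 1)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        eps = rng.gauss(0.0, 1.0)   # random variable injecting stochasticity
        x = phi(x, p, eps)
    return x

# Hypothetical example: hill-climbing on f(x) = (x - 3)^2 with step scale p.
def f(x):
    return (x - 3.0) ** 2

def phi(x, p, eps):
    candidate = x + p * eps                          # random perturbation
    return candidate if f(candidate) < f(x) else x   # keep only improvements

x_star = minimize(phi, x0=0.0, p=0.5)
# x_star converges near the optimum at x = 3
```

Each concrete algorithm below differs only in how $\phi$ and the random variables are constructed.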
For a population-based system with a swarm of $n$ solutions, Equation 1 can be extended to

$\begin{bmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{n}\end{bmatrix}^{t+1}=\phi\Big((x^{t}_{1},x^{t}_{2},\cdots,x^{t}_{n});(p_{1},p_{2},...,p_{k});(\epsilon_{1},\epsilon_{2},...,\epsilon_{m})\Big)\begin{bmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{n}\end{bmatrix}^{t}$ (2)

where $(p_{1},p_{2},...,p_{k})$ are the algorithm-dependent parameters and $(\epsilon_{1},\epsilon_{2},...,\epsilon_{m})$ are the random variables used to incorporate randomization into the algorithm. This mathematical representation covers all the nature-inspired/meta-heuristic algorithms listed in Fig. 6.

### 3.2 Particle Swarm Optimization (PSO)

PSO was proposed by Kennedy and Eberhart in 1995 [72, 74]. The basic PSO simulates the directed, coordinated motion of a flock of flying birds. Each bird is treated as a particle which adjusts its flight using both its own and its neighbours' flying experience; in other words, PSO combines self-experience with social experience, making it a simulator of social behaviour. Later on, several revised versions of PSO emerged in which additional parameters such as confidence factors ($c_{1},c_{2}$) and an inertia weight ($w$) were added [176, 177]. A recent study on PSO and its taxonomy can be found in [178]. Each particle (i.e., bird) is identified by an index $i$ and possesses a position, defined by coordinates in $n$-dimensional space, and a velocity, which reflects its progress towards the optimal/best position. The initialization is random; afterwards, the particles are updated over several iterations using Equations 3 and 4 for position and velocity, respectively.
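The velocity and position updates stated next (Equations 3 and 4) can be sketched in Python. This is a hedged minimal implementation, assuming a standard global-best PSO with a fixed inertia weight and a hypothetical sphere objective:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over [-10, 10]^dim using the PSO updates of Equations 3 and 4."""
    rng = random.Random(seed)
    x = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]          # x^i_best: each particle's best location
    gbest = min(pbest, key=f)[:]         # x_gbest: best location in the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Equation 4: inertia + cognitive (c1) + social (c2) components
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Equation 3: move the particle
                x[i][d] += v[i][d]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Hypothetical test objective: sphere function, optimum at the origin.
sphere = lambda p: sum(t * t for t in p)
best = pso(sphere, dim=2)
```

The choice $w=0.7$, $c_1=c_2=1.5$ is a commonly used stable setting, not one prescribed by this survey.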
$x^{i}(k+1)=x^{i}(k)+v^{i}(k+1)$ (3)

$v^{i}(k+1)=w^{i}v^{i}(k)+c_{1}r_{1}(x^{i}_{best}-x^{i}(k))+c_{2}r_{2}(x_{gbest}-x^{i}(k))$ (4)

where:

- $i = 1, 2, \dots, N_{s}$, where $N_{s}$ is the size of the swarm;
- $k = 1, 2, \dots$ is the iteration index;
- $w^{i}$ is the inertia weight of particle $i$;
- $x^{i}_{best}$ is the best location found by particle $i$;
- $x_{gbest}$ is the best location found by any particle in the swarm;
- $c_{1}$ is the confidence factor representing the private thinking of the particle itself (applied to $x^{i}_{best}$);
- $c_{2}$ is the confidence factor representing the collaboration among the particles (applied to $x_{gbest}$);
- $r_{1}, r_{2}$ are random values in $[0, 1]$.

### 3.3 GA and Adaptive GA (or IGA)

John H. Holland and his collaborators proposed the genetic algorithm in the 1960s and 1970s [39], and since then it has become one of the most widely used meta-heuristic algorithms. It is an abstraction of Darwin's principle of evolution in biological systems and has three components, or genetic operators: reproduction, crossover and mutation. Every solution is encoded as a string (often decimal or binary) called a chromosome. In every iteration, the fitness function assigns a value to each solution, and these values are sorted in descending order. The solutions at the top are considered good solutions and are selected for reproduction; solutions with low fitness values are discarded. After reproduction, the selected solutions go through crossover and mutation. The role of the crossover operator is to produce crossed solutions with better fitness values by interchanging genetic material; the probability of this event is known as the crossover probability, $P_{c}$. Crossover is followed by mutation, which aims to explore unexplored genetic material with a probability known as the mutation probability, $P_{m}$. The equations for $P_{c}$ and $P_{m}$ are given by Equations 5 and 6.
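These adaptive probabilities (Equations 5 and 6, stated next) can be sketched directly in Python; the default constants below are the ones used later in this paper's simulations ($k_1=k_3=1$, $k_2=k_4=0.5$):

```python
def adaptive_pc(f_prime, f_max, f_avg, k1=1.0, k3=1.0):
    """Crossover probability of Equation 5; f_prime is the higher fitness
    of the two solutions selected for crossover."""
    if f_prime > f_avg:
        return k1 * (f_max - f_prime) / (f_max - f_avg)
    return k3

def adaptive_pm(f, f_max, f_avg, k2=0.5, k4=0.5):
    """Mutation probability of Equation 6; f is the fitness of the
    solution selected for mutation."""
    if f > f_avg:
        return k2 * (f_max - f) / (f_max - f_avg)
    return k4

# Highly fit parents are crossed over with low probability (preserving them),
# while below-average solutions are recombined and mutated aggressively.
print(adaptive_pc(f_prime=9.0, f_max=10.0, f_avg=5.0))  # 0.2
print(adaptive_pm(f=3.0, f_max=10.0, f_avg=5.0))        # 0.5
```

This fitness-dependent scaling is exactly what distinguishes the adaptive GA (AGA/IGA) from the conventional GA with fixed $P_c$ and $P_m$.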
$P_{c}=\begin{cases}\frac{k_{1}(f_{max}-f^{\prime})}{f_{max}-f_{avg}}&f^{\prime}>f_{avg}\\ k_{3}&f^{\prime}<f_{avg}\end{cases}$ (5)

$P_{m}=\begin{cases}\frac{k_{2}(f_{max}-f)}{f_{max}-f_{avg}}&f>f_{avg}\\ k_{4}&f<f_{avg}\end{cases}$ (6)

Figure 9: BACA network.

where $f_{max}$ and $f_{avg}$ represent the highest and average fitness of the population, respectively; $f^{\prime}$ represents the higher fitness of the two solutions selected for crossover, and $f$ represents the fitness of the solution selected for mutation. In order to restrict the values of $P_{c}$ and $P_{m}$ to the range [0, 1], the values of the constants $k_{1}, k_{2}, k_{3}$ and $k_{4}$ should be less than 1. Also, the constants $k_{1}$ and $k_{3}$ should be greater than $k_{2}$ and $k_{4}$. Adaptive GA (AGA), or Improved GA (IGA), is an enhanced version of the conventional GA. In AGA, $P_{c}$ and $P_{m}$ change adaptively based on the condition of each individual, which ultimately reduces the possibility of premature convergence [179].

### 3.4 ACO and BACA

ACO is based on the food-searching process of ants. During this process, ants deposit pheromone along their paths, and the remaining ants follow the paths with a high intensity of pheromones [180, 181]. ACO estimates the optimal path through the continuous accumulation and release of pheromones over several iterations. The performance of ACO depends strongly on the early-stage pheromones; a lack of sufficient pheromones may result in premature convergence (i.e., a local optimum) [182], and BACA is used to avoid this. A typical example of how BACA works is illustrated in Fig. 9. Its binary coding increases the efficiency of the algorithm [182]. Different ants search the same route and deposit pheromones on each edge; of the two binary edges at each node, each ant selects one. This process can be represented in the form of a matrix requiring only $2 \times n$ space.
We define a digraph $G=(V,R)$, with $V$ representing the node set and $R$ representing the path set:

$V=\big\{v_{0}(c_{s}),v_{1}(c^{0}_{N}),v_{2}(c^{1}_{N}),v_{3}(c^{0}_{N-1}),v_{4}(c^{1}_{N-1}),\dots,v_{2N-3}(c^{0}_{2}),v_{2N-2}(c^{1}_{2}),v_{2N-1}(c^{0}_{1}),v_{2N}(c^{1}_{1})\big\}$ (7)

In this digraph, $c_{s}$ represents the starting node, while $c^{0}_{j}$ and $c^{1}_{j}$ represent the values 0 and 1 of bit $b_{j}$ used in the binary mapping. $N$ is the binary encoding length. For each node in the node set ($j=1,2,3,...,N$), there exist only two paths (the 0 and 1 states), which point towards $c^{0}_{j-1}$ and $c^{1}_{j-1}$ respectively [25]. Initially, it is assumed that all paths carry the same pheromone information ($\tau_{i,j}(0)=C$, where $C$ is a constant, and $\Delta\tau_{i,j}(0)=0$ for $i,j=1,2,...,N$). During the path-deciding phase, the pheromones released by the $k$-th ant ($k$ = 1, 2, 3, ..., $m$, where $m$ is the number of ants) and the probability of movement decide the direction. The probability of movement, $p^{k}_{i,j}$, is defined as

$p^{k}_{i,j}=\frac{\tau^{\alpha}_{i,j}(t)\,\eta^{\beta}_{i,j}(t)}{\sum_{s\in allowed_{k}}\tau^{\alpha}_{i,s}(t)\,\eta^{\beta}_{i,s}(t)}$ (8)

as the $k$-th ant moves from point $i$ to point $j$. $\alpha$ and $\beta$ are constants, and $\tau_{i,j}$ and $\eta_{i,j}$ represent the pheromone information and the visibility, respectively, at junction $(i,j)$ at moment $t$. $Allowed_{k}=\{0,1\}$ represents the admissible next states. With time, the pheromones evaporate, resulting in loss of information; $\rho$ is the persistence factor and $(1-\rho)$ represents the information loss factor.
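The transition probability of Equation 8 amounts to a pheromone-and-visibility-weighted roulette wheel over the two binary edges at each node. A minimal sketch, with hypothetical pheromone and visibility values:

```python
import random

def baca_choose_bit(tau, eta, alpha=1.0, beta=1.0, rng=random):
    """Pick bit 0 or 1 at a node according to Equation 8.
    tau[s], eta[s]: pheromone and visibility of edge s in {0, 1}."""
    weights = [tau[s] ** alpha * eta[s] ** beta for s in (0, 1)]
    total = sum(weights)
    probs = [w / total for w in weights]          # normalized transition probabilities
    return (0 if rng.random() < probs[0] else 1), probs

# Hypothetical edge values: the 1-edge carries three times the pheromone,
# so it is chosen 75% of the time.
rng = random.Random(42)
bit, probs = baca_choose_bit(tau=[1.0, 3.0], eta=[1.0, 1.0], rng=rng)
print(probs)  # [0.25, 0.75]
```

Running one such choice per bit position yields a complete $N$-bit candidate solution per ant.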
The value of $\tau_{i,j}$ at the next moment is given by

$\tau_{i,j}(t+1)=\rho\,\tau_{i,j}(t)+\Delta\tau_{i,j}$ (9)

$\Delta\tau_{i,j}=\frac{1}{f_{best}(S)}$ (10)

where $f_{best}(S)$ is the optimal cost. In a nutshell, BACA differs from the conventional ant colony algorithm only in the way the ants select their paths.

### 3.5 IGA with BACA

The combined meta-heuristic, IGA-BACA, searches for the optimal solution by initializing the BACA network with the final result of the IGA. First, the IGA optimizes the randomly generated solutions; this optimized solution is then used to initialize the pheromone information of the BACA algorithm. The IGA-BACA algorithm terminates the loop once it meets the termination condition (i.e., optimally updated pheromones); otherwise, the complete process repeats itself.

### 3.6 LO

There are two types of lions: residents and nomads. Resident lions always live in groups called prides. In general, a pride typically consists of about five females along with their cubs of both sexes and one or more adult males. Young males are excluded from their birth pride when they become sexually mature. Nomads move either in pairs or singly; pairing occurs among related males who have been excluded from their maternal pride. A lion may switch lifestyles: a nomad may become a resident at any time, and vice versa [26]. Unlike cats, lions hunt together to catch their prey, which increases the probability of a successful hunt.
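Before stating the update formulas (Equations 11–13, which follow), here is a hedged one-dimensional Python sketch of the prey-escape and encircling moves; the two case branches of Equations 12 and 13 are collapsed by taking min/max bounds inside `rand(a, b)`:

```python
import random

def rand_between(a, b, rng):
    """rand(a, b): uniform sample between the bounds a and b (either order)."""
    lo, hi = min(a, b), max(a, b)
    return rng.uniform(lo, hi)

def prey_escape(prey, hunter, pi, rng):
    """Equation 11: the prey flees away from the attacking hunter;
    pi is the percentage improvement in the hunter's fitness."""
    return prey + rng.random() * pi * (prey - hunter)

def encircle_wing(hunter, prey, rng):
    """Equation 12 (1-D sketch): a wing hunter jumps to a point between
    its mirrored position (2*prey - hunter) and the prey."""
    opposite = 2 * prey - hunter
    return rand_between(opposite, prey, rng)

def encircle_centre(hunter, prey, rng):
    """Equation 13: a centre hunter jumps between its position and the prey."""
    return rand_between(hunter, prey, rng)

rng = random.Random(7)
prey, hunter = 5.0, 2.0
new_wing = encircle_wing(hunter, prey, rng)      # lands in [5.0, 8.0]
new_centre = encircle_centre(hunter, prey, rng)  # lands in [2.0, 5.0]
new_prey = prey_escape(prey, hunter, pi=0.1, rng=rng)  # moves away, beyond 5.0
```

This is an illustrative sketch of the hunting operators only; the full LO also includes the mating, sorting and elimination steps described in Section 5.2.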
If a prey manages to escape, then its new position, $PREY^{\prime}$, is given by

$PREY^{\prime}=PREY+rand(0,1)\cdot PI\cdot(PREY-Hunter)$ (11)

where $PREY$ represents the current position of the prey and $PI$ is the percentage improvement in the fitness of the hunter. The following formulas mimic the encircling of the prey by the hunter group. The new positions, according to the location of the prey, are generated as follows:

$Hunter^{\prime}=\begin{cases}rand((2\cdot PREY-Hunter),\,PREY)&(2\cdot PREY-Hunter)<PREY\\ rand(PREY,\,(2\cdot PREY-Hunter))&(2\cdot PREY-Hunter)>PREY\end{cases}$ (12)

where $Hunter$ is the current position of the hunter. The new position for the centre hunters is given by

$Hunter^{\prime}=\begin{cases}rand(Hunter,PREY)&Hunter<PREY\\ rand(PREY,Hunter)&Hunter>PREY\end{cases}$ (13)

In Equations 12 and 13, $rand(a,b)$ generates a random number between $a$ and $b$, where $a$ and $b$ are the lower and upper bounds respectively. A detailed description of the process involved in LO is given in the pseudo-code of [26].

Figure 10: Summary of the PSO approaches/solutions to the problem domains in WSNs.

## 4 Solution to the Problem Domains and Present Status

In this section, we present a summary of the most prominent solutions to the problem domains of WSNs based on some of the bio-inspired meta-heuristic algorithms, namely PSO, GA, and ACO.

### 4.1 Applications of PSO in WSNs

The centralized nature of PSO enables its application to the minimization of coverage holes for near-optimal coverage in WSNs [78, 183, 184, 185, 186, 187, 188, 189, 190]. Data aggregation is a repetitive process, which makes it suitable for PSO [191, 192, 193, 194, 195].
PSO is suitable for selecting cluster heads (CHs) with high energy in each round [196, 197, 198, 199, 200]. It also minimizes sensor node localization errors [76, 201, 202]. A detailed chart illustrating the PSO-based solutions is presented in Fig. 10.

#### 4.1.1 For Optimal Coverage using PSO

Various studies have been reported to improve sensor coverage using PSO. Mendis et al. [189] used the conventional PSO for optimization of the mobile sink node location in WSNs. To deal with the various complexities and challenges arising in different applications, various modified or improved versions of PSO have been proposed in the literature. Ngatchou et al. [185] used a modified version of PSO, namely sequential PSO, for distributed sonar sensor placement. Sequential PSO is generally used for high-dimensional optimization and has found application in underwater sensor deployment optimization. Further, Li et al. [186] also used a modified version of PSO, namely Particle Swarm Genetic Optimization (PSGO), for optimal sensor deployment. PSGO incorporates the selection and mutation operators of GA, which eliminates the premature convergence issue of PSO. Afterwards, Wang et al. [187] proposed a virtual force directed co-evolutionary PSO (VFC-PSO) for dynamic sensor deployment in WSNs. Ab Aziz et al. [78] proposed a novel optimization approach combining PSO and Voronoi diagrams for the sensor coverage problem in WSNs. This algorithm works efficiently for a small Region of Interest (ROI) with a high number of sensor nodes, or vice versa. Subsequently, Hong and Shiu [188] used conventional PSO to search for the near-optimal Base Station (BS) location in WSNs. Hu et al. [184] proposed a methodology for the optimal deployment of large-radius sensors, using PSO to optimize the sensor deployment in order to reduce the links in the proposed topology. Nascimento and Bastos-Filho [190] used the conventional PSO for BS positioning to avoid overlap between cells.
Overall, various modified versions of PSO, as well as the conventional algorithm, can be used to improve sensor coverage.

#### 4.1.2 For Sensor Localization using PSO

For accurate node localization, Low et al. [202] used the conventional PSO, reporting better accuracy than the Gauss-Newton algorithm. Similarly, Gopakumar et al. [76] used the same conventional PSO and reported better accuracy compared with a simulated annealing approach. Later, Kulkarni et al. [201] presented a comprehensive study on node localization, comparing the results of PSO and the bacterial foraging algorithm; they reported that node localization in WSNs is faster with PSO and more accurate with the bacterial foraging algorithm.

#### 4.1.3 For Energy Efficient Clustering and Routing using PSO

Several studies have reported the use of PSO for energy-efficient clustering and routing. Tillett et al. [198] used the conventional PSO for sensor node clustering in WSNs, reporting that PSO outperforms the simulated annealing and random search algorithms in terms of energy-efficient clustering. Afterwards, the authors of [200] proposed a divided-range PSO algorithm for network clustering, reporting that the proposed algorithm is efficient when the mobile sensors are dense. Subsequently, Guru et al. [196] proposed four variants of PSO and applied them to energy-efficient clustering; they reported that the PSO with the supervisor-student model outperforms the other three variants. Cao et al. [197] used a hybrid of graph theory and the PSO algorithm for energy-efficient clustering in multi-hop WSNs. Latiff et al. [199] used the conventional PSO for re-positioning of the BS in clustered WSNs. Overall, the use of PSO reduces energy consumption and extends the network lifetime.

#### 4.1.4 For Data Aggregation using PSO

Veeramachaneni and Osadciw [193] used the conventional PSO for optimization of accuracy and time from the data aggregation perspective.
In general, they evaluated the potential of PSO for multi-objective optimization. Further, Wimalajeewa and Jayaweera [191] used constrained PSO for optimal power allocation. Afterwards, Veeramachaneni and Osadciw [192] used a hybrid of ACO and PSO for dynamic sensor management. Guo et al. [194] proposed a multi-source temporal data aggregation algorithm (MSTDA) for data aggregation in WSNs. Subsequently, Jiang et al. [195] used constrained PSO with the penalty function concept, which increases the accuracy.

### 4.2 Applications of GA in WSNs

GA has proven to be good for random as well as deterministic deployment [38, 25, 203, 204, 205, 206]. It is also good at finding a smaller number of data aggregation points while routing the data to the base station [40, 207, 208, 209]. It is used for pre-clustering, which reduces the resulting communication distance [210, 41, 211, 212, 213, 214, 215, 216]. Besides this, the global searching capability of GA results in higher accuracy in locating the sensor nodes [43, 217, 218]. A detailed chart illustrating the GA-based solutions is presented in Fig. 11.

#### 4.2.1 For Optimal Coverage using GA

Various studies have evaluated the potential of GA for network coverage optimization. Konstantinidis et al. [204] modeled sensor deployment and power assignment as a multi-objective problem and used the conventional GA for the optimization. Poe and Schmitt [206] proposed an approach for sensor deployment over large WSNs using conventional GA; they compared and reported the pros and cons of three different types of deployment. Bhondekar et al. [205] used the conventional GA for node deployment in a fixed WSN. Jia et al. [203] proposed a novel energy-efficient network coverage approach using conventional GA, reporting that the proposed approach results in a balanced performance with a high network coverage rate. Tian et al.
[25] used a hybrid version of GA called Improved GA with Binary ACO Algorithm (IGA-BACA) for optimal coverage in WSNs and compared their results with conventional GA. They reported that IGA-BACA outperforms conventional GA, with a high coverage rate. Recently, Singh et al. [38] used the same IGA-BACA and conventional GA for optimal coverage in WSNs with reduced sensing of redundant information.

Figure 11: Summary of the GA approaches/solutions to the problem domains in WSNs.

#### 4.2.2 For Sensor Localization using GA

Jegede and Ferens [217] used the conventional GA for node localization in WSNs. Recently, Peng and Li [43] used a DV-Hop GA-based algorithm for node localization in WSNs, reporting that it outperforms previously proposed algorithms. More recently, Tan et al. [218] proposed a Distance Mapping Algorithm (DMA) and integrated it with GA for accurate node localization in WSNs; they reported that the proposed algorithm outperforms previously proposed algorithms in terms of accuracy and energy consumption.

#### 4.2.3 For Energy Efficient Clustering and Routing using GA

Jin et al. [210] proposed a sensor network optimization framework. Bari et al. [214] used the conventional GA for energy-efficient clustering and routing in a two-tier sensor network, reporting significant improvement compared to earlier proposed schemes. Seo et al. [212] proposed a hybrid GA, namely Location-Aware 2-D GA (LA2D-GA), for efficient clustering in WSNs, reporting that LA2D-GA outperforms its 1-D version. Hussain and Islam [211] proposed an energy-efficient clustering and routing scheme based on conventional GA. Further, Luo [216] proposed the first quantum GA-based QoS routing protocol for WSNs. Also, EkbataniFard et al. [215] proposed a multi-objective GA for an energy-efficient QoS routing approach in WSNs.
They reported that the proposed approach successfully reduces the average power consumption through efficient optimization of the network parameters. Norouzi et al. [213] proposed a dynamic clustering algorithm for WSNs based on conventional GA.

#### 4.2.4 For Data Aggregation using GA

Islam et al. [40] proposed an energy-efficient balanced data aggregation tree algorithm based on GA, reporting that the spanning-tree-based algorithm improves the network lifetime significantly. Al-Karaki et al. [207] proposed a grid-based data aggregation and routing scheme for WSNs, reporting that it reduces power consumption and improves network lifetime. Similar to Islam et al. [40], Dabbaghian et al. [209] proposed energy-efficient balanced data aggregation using a spanning tree and GA, also reporting an increase in the network lifetime. An improved version of the spanning-tree-based data aggregation algorithm was proposed by Norouzi et al. [208], who used the residual energy of the nodes to further improve the network lifetime.

Figure 12: Summary of the ACO approaches/solutions to the problem domains in WSNs.

### 4.3 Applications of ACO in WSNs

The distributed nature of ACO results in better dynamic deployment of the sensor nodes for near-optimal coverage [38, 25, 219, 220, 221, 222]. ACO performs better in the case of large and dynamic WSNs [223, 224, 225, 226, 227]. It also increases the network lifetime [228, 229, 230, 231, 232, 233]. Moreover, it improves the localization accuracy of unknown nodes in WSNs [69, 234, 235, 236]. A detailed chart illustrating the ACO-based solutions is presented in Fig. 12.

#### 4.3.1 For Optimal Coverage using ACO

Li et al. [219] proposed an efficient sensor deployment optimization toolbox named DT-ACO. They also proposed a real-time hardware-based application for WSNs called EasiNet. Later, in [220], Li et al. modified the previously proposed EasiNet.
This modification allows them to eliminate redundant sensors during sensor deployment. Liao et al. [221] proposed an efficient approach for sensor deployment using ACO, formulating the deployment problem as a multiple knapsack problem (MKP); they reported complete network coverage with a prolonged network lifetime. Liu [222] proposed a novel approach for sensor deployment in WSNs using ACO with a three-ant transition concept, reporting a high coverage rate. Recently, Tian et al. [25] used the hybrid version of ACO, namely IGA-BACA, reporting a high network coverage rate with a long network lifetime. More recently, Singh et al. [38] used IGA-BACA to reduce the sensing of redundant information while maintaining optimal coverage.

#### 4.3.2 For Sensor Localization using ACO

Qin et al. [69] proposed a novel node localization scheme using ACO through beacon signals, reporting high localization accuracy with low power consumption. Niranchana and Dinesh [235] proposed a node localization approach in which the positions of the nodes are predicted through interval theory and the relocation of the nodes is done through ACO; they also reported high localization accuracy. Further, Liang et al. [234] used simple ACO for node localization in WSNs, optimizing the trilateration positioning error function; they reported higher localization accuracy compared to previously proposed localization schemes. Recently, Lu and Zhang [236] proposed an ACO-based mobile anchor node localization scheme for WSNs.

#### 4.3.3 For Energy Efficient Clustering and Routing using ACO

Camilo et al. [228] proposed a new routing algorithm for WSNs based on ACO, reporting a low communication load with low energy consumption. Further, Salehpour et al. [231] also proposed a new ACO-based routing algorithm with two routing levels, reporting relatively low power consumption and better load balancing. Ziyadi et al.
[232] proposed an energy-aware clustering protocol based on ACO clustering for WSNs, reporting an increase in the network lifetime. Later, Huang et al. [230] proposed a prediction routing algorithm based on ACO, the first of its kind; they reported various advantages such as low power consumption, increased network lifetime, and high load balancing. Almshreqi et al. [229] proposed a self-optimization algorithm based on ACO for balancing energy consumption in WSNs, reporting low energy consumption with reduced packet loss. Mao et al. [233] proposed a fuzzy-based unequal clustering algorithm, using ACO for energy-aware routing; they reported that the proposed algorithm outperforms various traditional algorithms such as LEACH.

#### 4.3.4 For Data Aggregation using ACO

Ding and Liu [223] proposed an efficient self-adaptive data aggregation algorithm for WSNs based on ACO, reporting that it outperforms benchmark algorithms such as LEACH and PEGASIS in terms of prolonging the network lifetime. Further, Misra and Mandal [224] proposed an approach for efficient data aggregation in WSNs, reporting that it is energy-efficient. Han and Hong-xu [225] proposed a novel approach for multimedia data aggregation in wireless sensor and actor networks; compared with traditional methods such as MEGA, they reported improvements in stability, accuracy and network lifetime. Yang et al. [226] proposed an energy-efficient data aggregation algorithm based on ACO for WSNs, reporting an improvement in network lifetime. Similarly, Xie and Shi [227] proposed a data aggregation approach for WSNs using ACO and also reported an improvement in network lifetime.

Table 2: Summary of the present status of the bio-inspired algorithms' approaches to the problem domains of WSNs.
Problem domains of WSNs | PSO | GA | ACO | LO
---|---|---|---|---
Optimal coverage | Addressed | Addressed | Addressed | Addressed (in this paper)
Data aggregation | Addressed | Addressed | Addressed | Addressed
Energy-efficient clustering and routing | Addressed | Addressed | Addressed | Addressed
Sensor localization | Addressed | Addressed | Addressed | Not addressed

PSO, GA and ACO address all four problem domains of WSNs well, and some hybrid techniques have also emerged for the same purposes. Every new attempt has claimed improved results over the previous approaches. In continuation of this, we have introduced LO to solve these issues in WSNs. Table 2 shows the current status of all four prominent algorithms.

## 5 Optimal Coverage using IGA-BACA and LO

Obtaining optimal coverage in a WSN is a multi-objective optimization problem. The existing $N$ sensors are represented by the set $S=(s_{1},s_{2},...,s_{i},...,s_{N})$. In this optimization problem, we aim to estimate a sensor set $S^{\prime}$ which covers the monitoring area maximally with a minimum number of working sensors. The functions for maximum coverage and minimum sensors are $f_{1}(S^{\prime})$ and $f_{2}(S^{\prime})$, respectively. These two functions are conflicting in nature; combining them both into a single maximization objective $f(S^{\prime})$ gives

$f(S^{\prime})=f_{1}(S^{\prime})\cdot f_{1}(S^{\prime})/f_{2}(S^{\prime})$ (14)

The framework for obtaining optimal coverage using IGA-BACA and LO is explained in the following subsections.

### 5.1 Mapping from Solution to Coding Space

Binary coding represents the positions of the sensors in the WSN. The corresponding control vector is $L=(l_{1},l_{2},...,l_{i},...,l_{N})$, where $l_{i}$ takes the value zero or one, representing the inactive or active state of a sensor respectively.
The initialization of the nomads and pride in LO, and of the genes of the chromosomes in GA, has a one-to-one correlation with the selection of nodes. Fig. 13 shows a typical example of a control vector. The probability that a sensor is active depends on the adaptation or objective function (Equation 14): the higher the value, the larger the probability.

Figure 13: Illustration of a control vector with 9 active sensors out of 12: {1 0 1 1 1 1 0 1 1 1 1 0}. The dark circles represent the inactive sensors in the monitoring area, each represented by a binary '0' in the control vector.

### 5.2 Algorithm and Process

To optimize Equation 14, we have used IGA-BACA and LO. IGA-BACA involves four processes, namely reproduction, crossover, mutation and pheromone update. In contrast, LO has three processes, namely mating, sorting and elimination. In IGA-BACA, the reproduction process first produces new offspring with probability proportional to their fitness values. The new offspring are then sorted by their fitness values; only the offspring with high fitness values are retained, and the others are discarded. This process ensures an increase in the average fitness of the colony. Its only limitation is that genetic variety is lost; this limitation is subsequently overcome by the crossover and mutation processes. In the crossover process, a pair of offspring is selected with probability $P_{c}$; this step increases the probability that crossed solutions produce offspring with high fitness values. Afterwards, the mutation process alters an offspring with probability $P_{m}$, exploring unexplored genetic material. Lastly, the pheromone update process ($T_{g}$; the pheromone update operator) maps the updated pheromones for an optimal offspring selected by the ant sequence. The ants release pheromones along the optimal path they traverse using the Max-Min rule.
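The fitness-proportional selection used by the pheromone update operator (Equation 15, stated next) is standard roulette-wheel selection. A minimal sketch with hypothetical offspring and fitness values:

```python
import random

def roulette_select(offspring, fitness, rng=random):
    """Select one offspring with probability f(x_i) / sum_k f(x_k) (Equation 15)."""
    total = sum(fitness)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for x, f in zip(offspring, fitness):
        acc += f
        if r <= acc:
            return x
    return offspring[-1]  # guard against floating-point round-off

# Hypothetical offspring with fitness 1, 3 and 6: the fittest one ("c")
# is selected about 60% of the time.
rng = random.Random(0)
picks = [roulette_select(["a", "b", "c"], [1.0, 3.0, 6.0], rng)
         for _ in range(10000)]
print(round(picks.count("c") / len(picks), 1))  # approx 0.6
```

The same mechanism also describes the fitness-proportional node activation of Section 5.1.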
The probability of the pheromone update operator selecting solution $x_{i}$ is given by

$P\{T_{g}=x_{i}\}=\frac{f(x_{i})}{\sum^{N}_{k=1}f(x_{k})}$ (15)

where $N$ is the number of offspring. In LO, mating with the best nomads, both male and female, is done first, followed by sorting the nomad lions of both genders based on fitness value; the nomads with the lowest fitness values are then eliminated. Analogous terms between the LO parameters and WSNs are listed in Table 3. The complete methodology for LO and IGA-BACA is illustrated in Fig. 14.

Figure 14: Flowchart for IGA-BACA and LO.

## 6 System Model

As stated earlier, obtaining optimal coverage is one of the crucial problems associated with WSNs. The network should have maximum coverage with a certain level of QoS [237, 238, 239]. The presence of blind areas significantly affects the QoS threshold and the network coverage rate, ultimately affecting the network reliability. To increase network reliability, more sensors can be deployed in critical areas, but increasing the number of sensors increases the network cost. In this paper, we use bio-inspired algorithms to find the optimal node set. We assume that the total monitoring area, $A$, is a two-dimensional plane split into $m \times n$ equal grids, in which $N$ sensors are randomly distributed. Mathematically, these sensors are represented by $S=(s_{1},s_{2},...,s_{i},...,s_{N})$. All sensors have an effective (sensing) radius $r$, with sensor $s_{i}$ at coordinate $(x_{i},y_{i})$. To ensure maximum coverage, each grid cell in the monitoring area is considered a target point, represented mathematically by $A=(a_{1},a_{2},...,a_{j},...,a_{m\times n})$. A target point $a_{j}$ lies in the sensing region of sensor $s_{i}$ if the Euclidean distance between them satisfies $d(a_{j},s_{i})\leq r$ [25].
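The coverage model formalized next (Equations 16–20) can be sketched on a discrete grid. This is a hedged Python version under simplifying assumptions: grid cells are treated as integer coordinates with unit cell area, and each covered cell is counted once when computing the coverage rate:

```python
import random

def covered(x, y, sensor, r=10.0):
    """Equation 16: 1 if grid point (x, y) lies within radius r of the sensor."""
    sx, sy = sensor
    return (x - sx) ** 2 + (y - sy) ** 2 <= r ** 2

def coverage_objective(active, all_n, m=100, n=100, r=10.0):
    """Equations 17-20: coverage rate f1, node use rate f2, and the
    combined objective max(f1, 1 - f2) for a set of active sensors."""
    covered_cells = sum(
        1 for x in range(m) for y in range(n)
        if any(covered(x, y, s, r) for s in active))
    f1 = covered_cells / (m * n)      # Equation 18: coverage rate
    f2 = len(active) / all_n          # Equation 19: node use rate
    return f1, f2, max(f1, 1 - f2)   # Equation 20

# Hypothetical deployment: 100 random sensors, of which the first 42 are active
# (matching the active-set size found in the simulations of Section 7).
rng = random.Random(3)
sensors = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(100)]
f1, f2, obj = coverage_objective(sensors[:42], all_n=100)
```

An optimizer such as IGA-BACA or LO then searches over control vectors (which sensors are active) to maximize this objective.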
Table 3: Analogous mapping between the LO algorithm and WSNs.

LO algorithm | Optimal coverage problem
---|---
Solution of a food source | Node distribution
$N$ dimensions in each solution | $N$ sensor coordinates
Fitness of the solution | Coverage rate in $A$
Maximum fitness | Optimum deployment

The probability $P_{cov}(x,y,s_{i})$ that any coordinate $(x,y)$ in $A$ is sensed by a sensor $s_{i}(x_{i},y_{i})$ is given by

$$P_{cov}(x,y,s_{i})=\begin{cases}1&(x-x_{i})^{2}+(y-y_{i})^{2}\leq r^{2}\\0&\text{otherwise}\end{cases}$$ (16)

The area covered by the sensors is given by

$$A_{area}(S)=\sum_{x=1}^{m}\sum_{y=1}^{n}P_{cov}(x,y,s_{i})\,\Delta x\,\Delta y$$ (17)

If $S^{\prime}$ is the set of working (active) sensors, then the fitness or objective function for network coverage is given by

$$f_{1}(S^{\prime})=A_{area}(S^{\prime})/A_{s}$$ (18)

In contrast, the objective function for the node use rate is given by

$$f_{2}(S^{\prime})=|S^{\prime}|/N$$ (19)

where $N$ is the total number of sensor nodes. Equations 18 and 19 are combined into a multi-objective optimization coverage problem:

$$\max f(S^{\prime})=\max(f_{1}(S^{\prime}),1-f_{2}(S^{\prime}))$$ (20)

Equation 20 must be maximized to achieve maximum coverage with the minimum number of sensor nodes.

## 7 Simulation Results

The simulation parameters used in this study are given in Table 4. We selected a monitoring area ($A$) of 100 m $\times$ 100 m in which sensor nodes with a perception radius of 10 m ($r$ = 10 m) are deployed.
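A minimal sketch of the objective (Equations 16 to 20), approximating $A_{area}$ by counting covered grid points; the function names and the toy grid used below are assumptions, not the authors' MATLAB code.

```python
def coverage_rate(active, targets, r=10.0):
    """f1(S'): fraction of target points sensed by at least one
    active sensor (a discrete form of Equations 16-18)."""
    covered = sum(
        1 for (tx, ty) in targets
        if any((tx - sx) ** 2 + (ty - sy) ** 2 <= r ** 2 for (sx, sy) in active)
    )
    return covered / len(targets)

def objective(active, n_total, targets, r=10.0):
    """f(S') = max(f1(S'), 1 - f2(S'))  (Equations 19-20)."""
    f1 = coverage_rate(active, targets, r)
    f2 = len(active) / n_total  # node use rate, Equation 19
    return max(f1, 1.0 - f2)
```

With a single active sensor at the centre of a 20 m $\times$ 20 m toy area, `coverage_rate` is roughly the circle-to-square area ratio $\pi r^{2}/400 \approx 0.79$, and activating fewer of the available nodes raises the $1-f_{2}$ term.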
The constants $k_{1}$, $k_{2}$, $k_{3}$, and $k_{4}$ are set to 1, 0.5, 1, and 0.5, respectively. These values restrict $P_{c}$ to the range 0.5 $<$ $P_{c}$ $<$ 1 and $P_{m}$ to the range 0.001 $<$ $P_{m}$ $<$ 0.05. The moderately large range of $P_{c}$ allows extensive recombination of solutions, while the small range of $P_{m}$ prevents disruption of solutions; together they keep the algorithm from getting stuck in a local optimum. The constant $\alpha$ controls the importance of the pheromone, while $\beta$ controls the distance priority. In general, $\beta$ should be greater than $\alpha$ for the best results. The two parameters are interlinked: one is fixed while the other is varied (iterated) to find the optimal pair of values. In this study, we fixed $\alpha$ at 1 and found the best $\beta$ to be 6.

Table 4: Simulation parameters.

Parameter | Value
---|---
Monitoring area ($A$) | 100 m $\times$ 100 m
Perception radius ($r$) | 10 m
$N$ | 100
$k_{1}$ = $k_{3}$ | 1
$k_{2}$ = $k_{4}$ | 0.5
$\alpha$ ($\alpha\geq 1$) | 1
$\beta$ ($\beta\geq 1$) | 6

Figure 15: Optimal network coverage.

Figure 16: Random distribution of 100 nodes.

We implemented the corresponding algorithms in MATLAB® (version 2017b). We ran the IGA-BACA algorithm for 300 iterations and found that only 42 (out of 100) sensors cover the monitoring area optimally, as shown in Fig. 15. This optimal coverage is treated as a benchmark for further analysis. In contrast, randomly distributing all 100 sensors in the monitoring area yields the coverage map shown in Fig. 16. Although the monitoring area is almost completely covered, a significant number of redundant nodes sense redundant information. The uncovered parts of the target monitoring area are considered coverage holes, and in Fig.
16, we can easily detect such coverage holes or blind areas (highlighted in red boxes). Hence, random network coverage is not usually adopted.

Figure 17: Network coverage after 50 generations of IGA-BACA.
Figure 18: Network coverage after 100 generations of IGA-BACA.
Figure 19: Network coverage after 150 generations of IGA-BACA.
Figure 20: Network coverage after 200 generations of IGA-BACA.
Figure 21: Network coverage after 50 generations of LO.
Figure 22: Network coverage after 100 generations of LO.
Figure 23: Network coverage after 150 generations of LO.
Figure 24: Network coverage after 200 generations of LO.
Figure 25: Network coverage after 250 generations of LO.
Figure 26: Network coverage vs. number of sensors.
Figure 27: Network coverage vs. generation.

Table 5: Simulation results for IGA-BACA and LO.

Iterations | IGA-BACA coverage rate (%) | IGA-BACA active sensor nodes | IGA-BACA time (s, GPU) | LO coverage rate (%) | LO active sensor nodes | LO time (s, GPU)
---|---|---|---|---|---|---
50 | 93.5 | 72 | 5 | 95.9 | 67 | 4
100 | 95.0 | 65 | 11 | 96.9 | 61 | 9
150 | 96.4 | 59 | 17 | 98.7 | 55 | 15
200 | 97.5 | 53 | 22 | 98.9 | 48 | 19
250 | 97.9 | 48 | 26 | 99.1 | 42 | 23

Network coverage using the IGA-BACA algorithm after 50, 100, 150, and 200 iterations (generations) is shown in Figs. 17-20. As the number of iterations increases from 50 to 200, the network coverage tends towards the optimal coverage, and hence the number of redundant nodes decreases significantly. For comparison, network coverage using the LO algorithm after 50, 100, 150, 200, and 250 iterations is shown in Figs. 21-25, where we found a similar trend towards the optimal coverage with increasing iterations. For both algorithms, Table 5 lists the network coverage rate (%), the number of active sensors, and the GPU processing time (s) at 50, 100, 150, 200, and 250 iterations. We compared the results obtained through the combined meta-heuristic IGA-BACA with those obtained through LO.
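The convergence trend in Table 5 can be checked programmatically; the values below are copied directly from the table, and the variable names are illustrative.

```python
# Coverage rates (%) from Table 5 at 50, 100, 150, 200, 250 iterations.
iterations = [50, 100, 150, 200, 250]
iga_baca = [93.5, 95.0, 96.4, 97.5, 97.9]
lo = [95.9, 96.9, 98.7, 98.9, 99.1]

# Both curves rise monotonically toward the optimum, and LO leads
# IGA-BACA at every sampled iteration count.
assert all(a < b for a, b in zip(iga_baca, iga_baca[1:]))
assert all(a < b for a, b in zip(lo, lo[1:]))
assert all(g < l for g, l in zip(iga_baca, lo))
```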
We observed that optimal network coverage is obtained by both approaches. However, the IGA-BACA algorithm requires approximately 300 iterations to achieve optimal coverage, whereas the LO algorithm requires only 250. Moreover, in LO, optimal coverage is obtained with fewer sensors, as shown in Fig. 26. LO also converges faster, primarily due to the presence of a large number of local maxima with higher fitness function values (Table 5). In addition, we plotted the network coverage rate against the number of iterations (Fig. 27), observing that the network coverage increases as a function of the iteration count and that LO converges better than IGA-BACA.

## 8 Conclusion

The use of nature-inspired algorithms has created a new era in next-generation computing. These algorithms are well suited to solving multi-objective optimization problems: their reasonable computational time, ability to find global optima, and broad applicability make them well suited to real-world optimization problems. In contrast, traditional algorithms generally fail to provide satisfactory results, mainly because of the complexity and size of the problem structure. In this paper, we have presented a comprehensive review of such algorithms in the context of various issues related to WSNs. We evaluated the potential of two efficient meta-heuristic approaches for computing optimal coverage in WSNs, namely IGA-BACA and LO, and compared their results. We observed that as the number of iterations increases, the network coverage rate tends towards the optimal coverage. The network coverage rate also improves faster with the LO approach than with IGA-BACA, and LO achieves optimal coverage in fewer iterations than the other approaches.
This is due to the presence of a large number of local maxima with higher fitness values, which leaves hardly any chance of missing a local maximum. Although LO performs better than other optimization algorithms, there remains considerable scope to explore this algorithm further and to apply it to multi-objective problems; for instance, a machine learning approach such as an Artificial Neural Network (ANN) could take the outputs of combined heuristics such as Ant Lion Optimization (ALO), IGA-BACA, etc. as its system inputs.

## Conflict of Interest

The authors state that there is no conflict of interest.

## CRediT author statement

Abhilash Singh and Sandeep Sharma: Conceptualization, Methodology, Software. Abhilash Singh and Sandeep Sharma: Data curation, Writing - Original draft preparation, Visualization, Investigation. Sandeep Sharma: Supervision. Abhilash Singh, Sandeep Sharma and Jitendra Singh: Software, Validation. Abhilash Singh, Sandeep Sharma and Jitendra Singh: Writing - Reviewing and Editing.

## Acknowledgment

We would like to acknowledge IISER Bhopal, Gautam Buddha University Greater Noida, and IIT Kanpur for providing institutional support. We thank the editor and all the anonymous reviewers for their helpful comments and suggestions.

## References

* Sohrabi et al. [2000] K. Sohrabi, J. Gao, V. Ailawadhi, G. J. Pottie, Protocols for self-organization of a wireless sensor network, IEEE Personal Communications 7 (2000) 16–27. * Singh et al. [2020] A. Singh, V. Kotiyal, S. Sharma, J. Nagar, C. C. Lee, A machine learning approach to predict the average localisation error with applications to wireless sensor networks, IEEE Access (2020) 1–1. * Akyildiz et al. [2002] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, Wireless sensor networks: A survey, Comput. Netw. 38 (2002) 393–422. * Borges et al. [2014] L. M. Borges, F. J. Velez, A. S.
Lebres, Survey on the characterization and classification of wireless sensor network applications, IEEE Communications Surveys Tutorials 16 (2014) 1860–1890. * Lu et al. [2009] S. Lu, X. Huang, L. Cui, Z. Zhao, D. Li, Design and implementation of an asic-based sensor device for wsn applications, IEEE Transactions on Consumer Electronics 55 (2009) 1959–1967. * Sharma et al. [2017] S. Sharma, J. Singh, R. Kumar, A. Singh, Throughput-save ratio optimization in wireless powered communication systems, in: 2017 International Conference on Information, Communication, Instrumentation and Control (ICICIC), 2017, pp. 1–6. URL: https://doi.org/10.1109/ICOMICON.2017.8279031. doi:10.1109/ICOMICON.2017.8279031. * Kumar and Singh [2018] R. Kumar, A. Singh, Throughput optimization for wireless information and power transfer in communication network, in: 2018 Conference on Signal Processing And Communication Engineering Systems (SPACES), 2018, pp. 1–5. URL: https://doi.org/10.1109/SPACES.2018.8316303. doi:10.1109/SPACES.2018.8316303. * Yick et al. [2008] J. Yick, B. Mukherjee, D. Ghosal, Wireless sensor network survey, Comput. Netw. 52 (2008) 2292–2330. * Imran et al. [2012] M. Imran, H. Hasbullah, A. M. Said, Personality wireless sensor networks (pwsns), CoRR abs/1212.5543 (2012). * Sharma et al. [2020] S. Sharma, R. Kumar, A. Singh, J. Singh, Wireless information and power transfer using single and multiple path relays, International Journal of Communication Systems 33 (2020) e4464. * Liang and Yu [2005] Y. Liang, H. Yu, Energy adaptive cluster-head selection for wireless sensor networks, in: Sixth International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT’05), 2005, pp. 634–638. URL: https://doi.org/10.1109/PDCAT.2005.134. doi:10.1109/PDCAT.2005.134. * Cardei and Du [2005] M. Cardei, D.-Z. Du, Improving wireless sensor network lifetime through power aware organization, Wireless Networks 11 (2005) 333–340. * Wang et al. [2003] X. Wang, G. 
Xing, Y. Zhang, C. Lu, R. Pless, C. Gill, Integrated coverage and connectivity configuration in wireless sensor networks, in: Proceedings of the 1st International Conference on Embedded Networked Sensor Systems, SenSys ’03, ACM, New York, NY, USA, 2003, pp. 28–39. URL: http://doi.acm.org/10.1145/958491.958496. doi:10.1145/958491.958496. * Tsai et al. [2016] C.-W. Tsai, T.-P. Hong, G.-N. Shiu, Metaheuristics for the lifetime of wsn: A review, IEEE Sensors Journal 16 (2016) 2812–2831. * Nanda and Panda [2014] S. J. Nanda, G. Panda, A survey on nature inspired metaheuristic algorithms for partitional clustering, Swarm and Evolutionary computation 16 (2014) 1–18. * Iqbal et al. [2015] M. Iqbal, M. Naeem, A. Anpalagan, A. Ahmed, M. Azam, Wireless sensor network optimization: Multi-objective paradigm, Sensors 15 (2015) 17572–17620. * Demigha et al. [2012] O. Demigha, W.-K. Hidouci, T. Ahmed, On energy efficiency in collaborative target tracking in wireless sensor network: A review, IEEE Communications Surveys & Tutorials 15 (2012) 1210–1222. * Kulkarni et al. [2010] R. V. Kulkarni, A. Forster, G. K. Venayagamoorthy, Computational intelligence in wireless sensor networks: A survey, IEEE communications surveys & tutorials 13 (2010) 68–96. * Tsai et al. [2015] C.-W. Tsai, P.-W. Tsai, J.-S. Pan, H.-C. Chao, Metaheuristics for the deployment problem of wsn: A review, Microprocessors and Microsystems 39 (2015) 1305–1317. * Molina et al. [2008] G. Molina, E. Alba, E.-G. Talbi, Optimal sensor network layout using multi-objective metaheuristics., J. UCS 14 (2008) 2549–2565. * Al-Mousawi [2019] A. J. Al-Mousawi, Evolutionary intelligence in wireless sensor network: routing, clustering, localization and coverage, Wireless Networks 26 (2019) 1–27. * Grefenstette [1986] J. J. Grefenstette, Optimization of control parameters for genetic algorithms, IEEE Transactions on Systems, Man, and Cybernetics 16 (1986) 122–128. * Sun et al. [2017] Y. Sun, W. Dong, Y. 
Chen, An improved routing algorithm based on ant colony optimization in wireless sensor networks, IEEE Communications Letters 21 (2017) 1317–1320. * Mehrotra et al. [2014] A. Mehrotra, K. K. Singh, P. Khandelwal, An unsupervised change detection technique based on ant colony optimization, in: 2014 International Conference on Computing for Sustainable Global Development (INDIACom), 2014, pp. 408–411. URL: https://doi.org/10.1109/IndiaCom.2014.6828169. doi:10.1109/IndiaCom.2014.6828169. * Tian et al. [2016] J. Tian, M. Gao, G. Ge, Wireless sensor network node optimal coverage based on improved genetic algorithm and binary ant colony algorithm, EURASIP Journal on Wireless Communications and Networking 2016 (2016) 104. * Yazdani and Jolai [2016] M. Yazdani, F. Jolai, Lion optimization algorithm (loa): a nature-inspired metaheuristic algorithm, Journal of computational design and engineering 3 (2016) 24–36. * Han et al. [2013] G. Han, H. Xu, T. Q. Duong, J. Jiang, T. Hara, Localization algorithms of wireless sensor networks: a survey, Telecommunication Systems 52 (2013) 2419–2436. * Xing and Gao [2014] B. Xing, W.-J. Gao, Innovative computational intelligence: a rough guide to 134 clever algorithms, in: Intelligent Systems Reference Library, Springer, 2014, pp. 1–451. * Campelo et al. [2020] F. Campelo, C. Aranha, R. Koot, Evolutionary computation bestiary, 2019 (accessed November 1, 2020). URL: https://github.com/fcampelo/EC-Bestiary. * Tzanetos et al. [2020] A. Tzanetos, I. Fister Jr, G. Dounias, A comprehensive database of nature-inspired algorithms, Data in Brief (2020) 105792. * Tao et al. [2015] F. Tao, Y. Laili, L. Zhang, Brief history and overview of intelligent optimization algorithms, in: Configurable Intelligent Optimization Algorithm, Springer, 2015, pp. 3–33. * Pham and Karaboga [2012] D. Pham, D.
Karaboga, Intelligent optimisation techniques: genetic algorithms, tabu search, simulated annealing and neural networks, Springer Science & Business Media, 2012. * Zhang and Dong [2019] J. Zhang, Z. Dong, A general intelligent optimization algorithm combination framework with application in economic load dispatch problems, Energies 12 (2019) 2175. * Dasgupta and Michalewicz [1997] D. Dasgupta, Z. Michalewicz, Evolutionary algorithms—an overview, in: Evolutionary Algorithms in Engineering Applications, Springer, 1997, pp. 3–28. * Kennedy [2006] J. Kennedy, Swarm intelligence, in: Handbook of nature-inspired and innovative computing, Springer, 2006, pp. 187–219. * Eberhart et al. [2001] R. C. Eberhart, Y. Shi, J. Kennedy, Swarm intelligence, Elsevier, 2001. * Das et al. [2016] K. Das, D. Mishra, K. Shaw, A metaheuristic optimization framework for informative gene selection, Informatics in Medicine Unlocked 4 (2016) 10–20. * Singh et al. [2019] A. Singh, S. Sharma, J. Singh, R. Kumar, Mathematical modelling for reducing the sensing of redundant information in wsns based on biologically inspired techniques, Journal of Intelligent & Fuzzy Systems 37 (2019) 1–11. * Holland [1980] J. H. Holland, Adaptive algorithms for discovering and using general patterns in growing knowledge bases, International Journal of Policy Analysis and Information Systems 4 (1980) 245–268. * Islam et al. [2007] O. Islam, S. Hussain, H. Zhang, Genetic algorithm for data aggregation trees in wireless sensor networks, in: 2007 3rd IET International Conference on Intelligent Environments, 2007, pp. 312–316. * Hussain et al. [2007] S. Hussain, A. W. Matin, O. Islam, Genetic algorithm for energy efficient clusters in wireless sensor networks, in: Fourth International Conference on Information Technology (ITNG’07), IEEE, 2007, pp. 147–154. * Yoon and Kim [2013] Y. Yoon, Y.-H.
Kim, An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks, IEEE Transactions on Cybernetics 43 (2013) 1473–1483. * Peng and Li [2015] B. Peng, L. Li, An improved localization algorithm based on genetic algorithm in wireless sensor networks, Cognitive Neurodynamics 9 (2015) 249–256. * Yao et al. [1999] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary computation 3 (1999) 82–102. * Zhang et al. [2015] W. Zhang, X. Yang, Q. Song, Improvement of dv-hop localization based on evolutionary programming resample, Journal of Software Engineering 9 (2015) 631–640. * Lanzi [2000] P. L. Lanzi, Learning classifier systems: from foundations to applications, Springer Science & Business Media, 2000. * Koza [1997] J. R. Koza, Genetic programming, Citeseer, 1997. * Tripathi et al. [2011] A. Tripathi, P. Gupta, A. Trivedi, R. Kala, Wireless sensor node placement using hybrid genetic programming and genetic algorithms, International Journal of Intelligent Information Technologies (IJIIT) 7 (2011) 63–83. * Aziz et al. [2016] M. Aziz, M.-H. Tayarani-N, M. R. Meybodi, A two-objective memetic approach for the node localization problem in wireless sensor networks, Genetic Programming and Evolvable Machines 17 (2016) 321–358. * Bäck et al. [1997] T. Bäck, D. B. Fogel, Z. Michalewicz, Handbook of evolutionary computation, CRC Press, 1997. * Fayyazi et al. [2011] H. Fayyazi, M. Sabokrou, M. Hosseini, A. Sabokrou, Solving heterogeneous coverage problem in wireless multimedia sensor networks in a dynamic environment using evolutionary strategies, in: 2011 1st International eConference on Computer and Knowledge Engineering (ICCKE), IEEE, 2011, pp. 115–119. * Sivakumar and Venkatesan [2016] S. Sivakumar, R. Venkatesan, Performance evaluation of hybrid evolutionary algorithms in minimizing localization error for wireless sensor networks, Journal of Scientific and Industrial Research 75 (2016) 289–295. 
* Mühlenbein and Paass [1996] H. Mühlenbein, G. Paass, From recombination of genes to the estimation of distributions i. binary parameters, in: International conference on parallel problem solving from nature, Springer, 1996, pp. 178–187. * Zhang et al. [2008] Q. Zhang, A. Zhou, Y. Jin, Rm-meda: A regularity model-based multiobjective estimation of distribution algorithm, IEEE Transactions on Evolutionary Computation 12 (2008) 41–63. * Wang et al. [2012] X. Wang, H. Gao, J. Zeng, A copula-based estimation of distribution algorithms for coverage problem of wireless sensor network, Sensor Letters 10 (2012) 1892–1896. * Cequn et al. [2011] F. Cequn, W. Shulei, Z. Sheng, Algorithm of distribution estimation for node localization in wireless sensor network, in: 2011 Seventh International Conference on Computational Intelligence and Security, IEEE, 2011, pp. 219–221. * Qin et al. [2008] A. K. Qin, V. L. Huang, P. N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE transactions on Evolutionary Computation 13 (2008) 398–417. * Cui et al. [2018] L. Cui, C. Xu, G. Li, Z. Ming, Y. Feng, N. Lu, A high accurate localization algorithm with dv-hop and differential evolution for wireless sensor network, Applied Soft Computing 68 (2018) 39–52. * Maleki et al. [2013] I. Maleki, S. R. Khaze, M. M. Tabrizi, A. Bagherinia, A new approach for area coverage problem in wireless sensor networks with hybrid particle swarm optimization and differential evolution algorithms, International Journal of Mobile Network Communications and Telematics (IJMNCT) 3 (2013) 61–76. * Kuila and Jana [2014] P. Kuila, P. K. Jana, A novel differential evolution based clustering algorithm for wireless sensor networks, Applied soft computing 25 (2014) 414–425. * Gupta et al. [2015] A. Gupta, Y.-S. Ong, L. Feng, Multifactorial evolution: toward evolutionary multitasking, IEEE Transactions on Evolutionary Computation 20 (2015) 343–357. * Tam et al. 
[2020] N. T. Tam, T. Q. Tuan, H. T. T. Binh, A. Swami, Multifactorial evolutionary optimization for maximizing data aggregation tree lifetime in wireless sensor networks, in: Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, volume 11413, International Society for Optics and Photonics, 2020, p. 114130Z. * Dongrui and Xianfeng [2019] W. Dongrui, T. Xianfeng, Multi-tasking genetic algorithm (mtga) for fuzzy system optimization, arXiv (2019). * Dorigo and Birattari [2010] M. Dorigo, M. Birattari, Ant colony optimization, Springer, 2010. * Dorigo and Blum [2005] M. Dorigo, C. Blum, Ant colony optimization theory: A survey, Theoretical computer science 344 (2005) 243–278. * Socha and Dorigo [2008] K. Socha, M. Dorigo, Ant colony optimization for continuous domains, European journal of operational research 185 (2008) 1155–1173. * Blum [2005] C. Blum, Ant colony optimization: Introduction and recent trends, Physics of Life reviews 2 (2005) 353–373. * Yang et al. [2010] J. Yang, M. Xu, W. Zhao, B. Xu, A multipath routing protocol based on clustering and ant colony optimization for wireless sensor networks, Sensors 10 (2010) 4521–4540. * Qin et al. [2010] F. Qin, C. Wei, L. Kezhong, Node localization with a mobile beacon based on ant colony algorithm in wireless sensor networks, in: 2010 International conference on communications and mobile computing, volume 3, IEEE, 2010, pp. 303–307. * Liao et al. [2008] W.-H. Liao, Y. Kao, C.-M. Fan, Data aggregation in wireless sensor networks using ant colony algorithm, Journal of Network and Computer Applications 31 (2008) 387–401. * Liu and He [2014] X. Liu, D. He, Ant colony optimization with greedy migration mechanism for node deployment in wireless sensor networks, Journal of Network and Computer Applications 39 (2014) 310–318. * Eberhart and Kennedy [1995] R. Eberhart, J.
Kennedy, Particle swarm optimization, in: Proceedings of the IEEE international conference on neural networks, volume 4, Citeseer, 1995, pp. 1942–1948. * Shi and Eberhart [1999] Y. Shi, R. C. Eberhart, Empirical study of particle swarm optimization, in: Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), volume 3, IEEE, 1999, pp. 1945–1950. * Kennedy and Eberhart [1995] J. Kennedy, R. Eberhart, Particle swarm optimization (pso), in: Proc. IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1948. * Wang et al. [2017] J. Wang, Y. Cao, B. Li, H.-j. Kim, S. Lee, Particle swarm optimization based clustering algorithm with mobile sink for wsns, Future Generation Computer Systems 76 (2017) 452–457. * Gopakumar et al. [2008] A. Gopakumar, Jacob, Lillykutty, Localization in wireless sensor networks using particle swarm optimization, in: 2008 IET International Conference on Wireless, Mobile and Multimedia Networks, IET, 2008, pp. 227–230. * Lu et al. [2014] Y. Lu, J. Chen, I. Comsa, P. Kuonen, B. Hirsbrunner, Construction of data aggregation tree for multi-objectives in wireless sensor networks through jump particle swarm optimization, Procedia Computer Science 35 (2014) 73–82. * Ab Aziz et al. [2007] N. A. B. Ab Aziz, A. W. Mohemmed, B. D. Sagar, Particle swarm optimization and voronoi diagram for wireless sensor networks coverage optimization, in: 2007 International Conference on Intelligent and Advanced Systems, IEEE, 2007, pp. 961–965. * Passino [2002] K. M. Passino, Biomimicry of bacterial foraging for distributed optimization and control, IEEE control systems magazine 22 (2002) 52–67. * Das et al. [2009] S. Das, A. Biswas, S. Dasgupta, A. Abraham, Bacterial foraging optimization algorithm: theoretical foundations, analysis, and applications, in: Foundations of Computational Intelligence Volume 3, Springer, 2009, pp. 23–55. * Passino [2010] K. M. 
Passino, Bacterial foraging optimization, International Journal of Swarm Intelligence Research (IJSIR) 1 (2010) 1–16. * Sribala and Virudhunagar [2013] S. Sribala, T. Virudhunagar, Energy efficient routing in wireless sensor networks using modified bacterial foraging algorithm, International Journal of Research in Engineering & Advanced Technology 1 (2013) 1–5. * Nagchoudhury et al. [2015] P. Nagchoudhury, S. Maheshwari, K. Choudhary, Optimal sensor nodes deployment method using bacteria foraging algorithm in wireless sensor networks, in: Emerging ICT for Bridging the Future-Proceedings of the 49th Annual Convention of the Computer Society of India CSI Volume 2, Springer, 2015, pp. 221–228. * Sharma and Kumar [2018] G. Sharma, A. Kumar, Fuzzy logic based 3d localization in wireless sensor networks using invasive weed and bacterial foraging optimization, Telecommunication Systems 67 (2018) 149–162. * Li and Qian [2003] X. Li, J. Qian, Studies on artificial fish swarm optimization algorithm based on decomposition and coordination techniques, Journal of circuits and systems 1 (2003) 1–6. * Li et al. [2004] X.-l. Li, F. Lu, G.-h. Tian, J.-x. Qian, Applications of artificial fish school algorithm in combinatorial optimization problems [j], Journal of Shandong University (Engineering Science) 5 (2004) 015. * Song et al. [2010] X. Song, C. Wang, J. Wang, B. Zhang, A hierarchical routing protocol based on afso algorithm for wsn, in: 2010 International Conference On Computer Design and Applications, volume 2, IEEE, 2010, pp. V2–635. * Yang et al. [2016] X. Yang, W. Zhang, Q. Song, A novel wsns localization algorithm based on artificial fish swarm algorithm, International Journal of Online and Biomedical Engineering (iJOE) 12 (2016) 64–68. * Yiyue et al. [2012] W. Yiyue, L. Hongmei, H. 
Hengyang, Wireless sensor network deployment using an optimized artificial fish swarm algorithm, in: 2012 International Conference on Computer Science and Electronics Engineering, volume 2, IEEE, 2012, pp. 90–94. * Karaboga and Basturk [2007] D. Karaboga, B. Basturk, Artificial bee colony (abc) optimization algorithm for solving constrained optimization problems, in: International fuzzy systems association world congress, Springer, 2007, pp. 789–798. * Karaboga and Akay [2009] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Applied mathematics and computation 214 (2009) 108–132. * Karaboga and Basturk [2007] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm, Journal of global optimization 39 (2007) 459–471. * Karaboga and Ozturk [2011] D. Karaboga, C. Ozturk, A novel clustering approach: Artificial bee colony (abc) algorithm, Applied soft computing 11 (2011) 652–657. * Öztürk et al. [2012] C. Öztürk, D. Karaboğa, B. GÖRKEMLİ, Artificial bee colony algorithm for dynamic deployment of wireless sensor networks, Turkish Journal of Electrical Engineering & Computer Sciences 20 (2012) 255–262. * Karaboga and Basturk [2008] D. Karaboga, B. Basturk, On the performance of artificial bee colony (abc) algorithm, Applied soft computing 8 (2008) 687–697. * Kulkarni et al. [2016] V. R. Kulkarni, V. Desai, R. V. Kulkarni, Multistage localization in wireless sensor networks using artificial bee colony algorithm, in: 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, 2016, pp. 1–8. * Karaboga et al. [2012] D. Karaboga, S. Okdem, C. Ozturk, Cluster based wireless sensor network routing using artificial bee colony algorithm, Wireless Networks 18 (2012) 847–860. * Lucic and Teodorovic [2001] P. Lucic, D. 
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

# Frame based equipment channel access enhancements in NR unlicensed spectrum for the URLLC transmissions

Trung-Kien Le†, Umer Salim, Florian Kaltenberger†
†EURECOM, Sophia-Antipolis, France
<EMAIL_ADDRESS>

###### Abstract

Ultra-reliable low-latency communication (URLLC) in 5G New Radio was originally defined only for licensed spectrum. However, due to new use cases in Industry 4.0 scenarios, URLLC operation is currently being extended to unlicensed spectrum in the ongoing Release 17 of the 3rd Generation Partnership Project. Although in such controlled environments the absence of any other technology sharing the channel can be guaranteed on a long-term basis, the uncertainty of obtaining channel access through load based equipment (LBE) or frame based equipment (FBE) can conflict with the latency requirements of URLLC. In FBE, transmitters can be prioritized to support data with different requirements, and FBE offers lower energy consumption and latency than LBE with a large contention window size. In this paper, we analyze the performance of FBE in an unlicensed controlled environment through a Markov chain. Based on this analysis, we propose two schemes to improve URLLC performance in FBE: the first scheme allows the transmitters to use multiple fixed frame period (FFP) configurations, while the second scheme configures the FFP's starting point of each transmitter based on its priority. Simulations show the benefits of these schemes compared to the URLLC transmission of existing schemes.
###### Index Terms:

5G, URLLC, unlicensed spectrum, frame based equipment

## I Introduction

### I-A Ultra-reliable low-latency communications (URLLC)

Many emerging applications such as autonomous driving, Industry 4.0 and augmented reality require communication with high reliability and low latency. To satisfy these new requirements, the 3rd Generation Partnership Project (3GPP) included URLLC as one of the three service categories in 5G New Radio (NR). URLLC has attracted attention in physical layer design since 3GPP Release 15, the first full set of the 5G standard, due to its strict latency and reliability requirements compared to Long-Term Evolution (LTE). To satisfy the requirements of the emerging applications and set a baseline for URLLC design, the URLLC requirements are defined in [1]: "A general URLLC reliability requirement for one transmission of a packet is 10^-5 for 32 bytes with a user plane latency of 1 ms". These requirements pose a challenge to URLLC design because the URLLC reliability target of 10^-5 is much stricter than the block error rate of 10^-2 in LTE. URLLC design becomes even more challenging in Release 16, where higher requirements are specified for new use cases such as factory automation, the transport industry and electrical power distribution: "Higher reliability (up to 10^-6), higher availability, short latency in the order of 0.5 to 1 ms, depending on the use cases (factory automation, transport industry and electrical power distribution)" [2]. These requirements demand that new features be standardized to support URLLC in 5G.

### I-B URLLC physical layer's features in 3GPP Release 15 and Release 16

Several URLLC features have been specified in 3GPP Release 15 (the first full set of the 5G standard) and Release 16 (the latest finalized release) to improve NR system performance with respect to the URLLC requirements. The subcarrier spacing in 5G NR can take the values 15, 30, 60, 120 and 240 kHz, instead of the single value of 15 kHz in LTE.
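As a quick illustration (not from the paper), the slot and symbol durations implied by these subcarrier spacings follow the standard NR numerology relation, where SCS = 15 * 2^mu kHz, the slot duration is 1 ms / 2^mu, and a normal-cyclic-prefix slot carries 14 OFDM symbols:

```python
# Sketch: slot and OFDM symbol durations for the NR subcarrier spacings.
# Assumes the standard NR numerology: SCS = 15 * 2^mu kHz,
# slot duration = 1 ms / 2^mu, 14 symbols per slot (normal cyclic prefix);
# cyclic-prefix length differences are ignored for simplicity.

SYMBOLS_PER_SLOT = 14

def slot_duration_ms(scs_khz: int) -> float:
    """Slot duration in milliseconds for a given subcarrier spacing in kHz."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 1.0 / (2 ** mu)

def symbol_duration_us(scs_khz: int) -> float:
    """Approximate OFDM symbol duration in microseconds."""
    return slot_duration_ms(scs_khz) * 1000.0 / SYMBOLS_PER_SLOT

for scs in (15, 30, 60, 120, 240):
    print(f"SCS {scs:3d} kHz: slot = {slot_duration_ms(scs):.4f} ms, "
          f"symbol ~ {symbol_duration_us(scs):.2f} us")
```

Going from 15 kHz to 120 kHz, for example, shrinks the slot from 1 ms to 0.125 ms, which is what enables the shorter transmission times discussed next.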
The larger subcarrier spacings reduce the symbol length, which allows shorter transmissions in time [3]. In LTE, a transmission time interval (TTI) is one slot, and a transmission can only start at fixed symbols at the beginning of a slot. If a packet arrives after the starting symbols in a slot, it must wait until the next slot to be transmitted. This delay is harmful to URLLC with its low latency requirement. To reduce the waiting time of a packet, in 5G NR a TTI can be a sub-slot of 2, 4 or 7 symbols, and a transmission can start at the beginning of a sub-slot [3]. In NR uplink (UL) transmission, the user equipment (3GPP terminology: UE) is allowed to transmit data to the base station (3GPP terminology: gNB) in configured grant (CG) resources without a scheduling request or UL grant, which reduces latency. A UE may have several CG resource configurations from which to start a transmission, depending on the arrival time of a packet. To increase the reliability of the UL transmission, the UE may be configured to transmit repetitions of a packet in consecutive slots or sub-slots without feedback from the gNB [4]. When a UE with high-priority traffic such as URLLC coexists with a UE with low-priority traffic such as enhanced mobile broadband (eMBB), two techniques have been standardized to guarantee the performance of the URLLC UL transmission. In the first technique, the eMBB UE is ordered by a control signal from the gNB to stop its UL transmission when it overlaps with the URLLC UL transmission of another UE. The second technique allows the URLLC UE to increase the power level of its UL transmission overlapping with the UL transmission of an eMBB UE, to compensate for the impact of interference [5].

### I-C URLLC in unlicensed spectrum

The URLLC features standardized in 3GPP Release 15 and Release 16 are intended for operation in licensed spectrum.
With the new use cases in the industrial scenario, unlicensed spectrum has started to attract attention because of its low cost, high flexibility, simplicity of deployment and availability of bandwidth. URLLC operation in unlicensed spectrum is one of the main work items of 3GPP standardization starting from the ongoing Release 17. The features of New Radio-Unlicensed (NR-U) have been specified since Release 15 [6]. However, these features do not take into account the URLLC features and requirements. The latency and reliability requirements of URLLC might not be achieved in unlicensed spectrum due to listen before talk (LBT). In unlicensed spectrum, a transmitter is required to perform channel access through LBT before acquiring a channel for a certain amount of time to transmit data. There are two channel access mechanisms: load based equipment (LBE) and frame based equipment (FBE) [7]. In LBE, a transmitter can sense the channel to obtain it at any moment it has data to transmit. The impact of LBE on URLLC is analyzed in [8]. A transmitter using LBE with a small contention window size has low channel access latency, but a small contention window size causes inter-system unfairness. On the other hand, with a large contention window size, the transmitter has high channel access latency and low throughput. In FBE, if a transmitter has a packet to transmit, it can only sense the channel and start a transmission at fixed moments. FBE benefits URLLC nodes when the data rate is low and data arrival is periodic, because the receiver does not have to blindly detect the presence of a transmission at every moment. This reduces the burden and energy consumption at the receiver. Moreover, URLLC UL data is transmitted in periodic CG resources.
If the CG resources are aligned with the starting points of the FBE periods, the transmitter only needs to do LBT at the moments with transmission potential and the receiver only has to listen to the channel at those moments. Furthermore, because the transmitters sense the channel and transmit data at fixed moments, the transmitters can be prioritized based on data requirements by setting the parameters relating to these moments, such as the periodicity or starting point. In this work, the performance of the FBE channel access mechanism is analyzed to determine its impact on URLLC operation in unlicensed spectrum. The targeted scenario is the industrial scenario in factories, where the absence of any other technologies sharing the channel, such as WIFI and devices operating in LBE, is guaranteed on a long-term basis. This can be arranged by the facility owner when installing the devices, to prevent unexpected interference from other systems and radio access technologies. This environment is called an unlicensed controlled environment [9], [10]. The industrial scenario with an unlicensed controlled environment is chosen to follow the work of the ongoing 3GPP Release 17, where one of the objectives is UL enhancements for URLLC in an unlicensed controlled environment [11]. However, this does not mean that LBT is not required prior to a transmission in an unlicensed controlled environment. Although the URLLC nodes work in a controlled environment without WIFI and devices operating in LBE, the uncertainty of FBE LBT due to competition among the nodes still prevents the URLLC nodes from attaining the requirements. To further improve FBE performance for URLLC, new schemes will be studied.

### I-D Related work

[12] shows the throughput of a system using FBE on the 5 GHz unlicensed band. [13] and [14] show the performance of FBE when LTE and WIFI coexist.
The models in [12], [13] and [14] do not include the constraints of URLLC transmission, so no enhancements of FBE for URLLC operation are considered. In [15], the model analyzing the coexistence of LTE using FBE and WIFI covers only DL transmission from one base station to several UEs in unlicensed spectrum, while UL data is transmitted in licensed spectrum. [16] proposes "Enhanced FBE" and "Backoff and Idle Time Reduction FBE". In Enhanced FBE, a backoff procedure used after a clear channel assessment (CCA) increases the delay of the URLLC transmission. In Backoff and Idle Time Reduction FBE, the idle period is eliminated after an unsuccessful CCA, so the transmitters sense the channel after the channel occupancy time in the next frame instead of waiting for the entire frame. If the channel occupancy time is long, this still limits the number of channel sensing opportunities at the transmitter. Moreover, a backoff procedure is also used, which increases latency. [17] proposes that a transmitter senses the channel periodically or continuously in a subframe. The idle period is removed in this subframe compared to the original FBE frame. This method changes the design of the conventional frame, which might make it incompatible with a system in which transmitters use both the conventional frame and the proposed frame. Moreover, to increase the sensing opportunities within the URLLC latency budget, the subframe duration must be reduced, which affects the duration of a transmission. In [18] and [19], a transmitter might not transmit after a successful CCA if the channel quality is poor, to avoid a higher required transmission power. However, when a transmitter does not transmit after a successful CCA, it misses its transmission opportunity, and without transmitting data it has no chance of a successful transmission of a packet within the URLLC latency budget. Therefore, the latency constraint of URLLC limits the use of this scheme.
In [20], a transmitter performs channel sensing on several parallel channels and can switch among the channels to avoid interference and busy channels. The transmitter can also change the idle period in a frame when there is no available channel. This scheme consumes more resources per transmission because the transmitter is scheduled with multiple resources in the parallel channels instead of only one resource in a single channel as in the conventional scheme. In [21], a central entity is used in a fully coordinated FBE approach to configure a common Time Division Duplex (TDD) configuration among the nodes in the system, so that a UE's UL transmission to a gNB is not blocked by a neighboring gNB's DL transmission due to the misalignment of UL and DL slots among the gNBs. However, a common TDD configuration among the gNB nodes might not satisfy the specific requirements of each gNB network regarding the ratio of DL and UL transmissions. In summary, the shortcoming of the existing FBE analyses is that they do not take into account the URLLC requirements, so the limitations of FBE affecting the performance of URLLC transmission have not been analyzed and solved. Some schemes propose to add a backoff procedure or to refrain from transmitting on a bad channel after a successful CCA; these proposals increase transmission latency, which is harmful to URLLC. Two works propose to remove the idle period in the frame under some conditions. This allows a device to have more opportunities to sense a channel in the same interval. However, the number of channel sensing opportunities in an interval then depends on the channel occupancy time, so it cannot be adjusted flexibly based on the requirements of a transmission. One scheme uses several parallel channels to transmit data, which increases resource consumption because multiple resources are scheduled in several channels for the transmission of a single packet.
### I-E Main contributions

This paper focuses on URLLC operation in an unlicensed controlled environment where the transmitters use FBE to access a channel. Channel access for URLLC operation in FBE is analyzed through a Markov chain in Section II. Firstly, a relationship among the probability of sensing a busy channel (blocking probability), the data rate, the allowed number of channel sensing attempts and the number of transmitters is derived for all types of data (not only URLLC data). Subsequently, the analysis shows that for URLLC data, the URLLC latency constraint limits the number of channel sensing attempts and increases the probability of channel access failure. Ultimately, this degrades the reliability of URLLC transmission. Based on the analysis, we propose two schemes in Section III and Section IV to reduce the probability of channel access failure in FBE for URLLC transmitters so that they can achieve the URLLC reliability and latency requirements. In Section III, the multiple fixed frame period (FFP) configuration scheme is introduced. In this scheme, a transmitter with high priority is configured with multiple FFP configurations in FBE. These FFP configurations are in the same frequency resources but overlap in time, so they do not require additional resources. The transmitter can use these FFP configurations at the same time and switch between them to perform channel sensing and transmit data after obtaining the channel. Thanks to the use of multiple FFP configurations, the transmitter has more opportunities for channel sensing within the URLLC latency constraint than in the conventional scheme. Therefore, the probability of channel access failure decreases. In Section IV, a scheme that arranges the FFP of each transmitter based on its priority in the system is presented.
The high priority transmitters, such as URLLC transmitters, are configured with offsets for the starting points of their FFPs so that their transmissions are not blocked by the transmissions of lower priority transmitters. Thereby, the high priority transmitters achieve lower probabilities of channel access failure. Section V presents simulations to verify the analysis in the previous sections. Simulations are also carried out to compare the conventional scheme and the schemes in the references with the two proposed schemes. Based on the results, some suggestions are made about which proposed scheme to use in a specific scenario. Finally, Section VI concludes this work.

## II Analysis of FBE in unlicensed spectrum

### II-A System model

Figure 1: Fixed frame period in FBE.

Figure 2: The Markov chain for FBE channel access.

In the FBE channel access mechanism, when a transmitter has data to transmit, it senses the channel to check its availability for a transmission only once per fixed period, called the fixed frame period (FFP), with a duration of 1, 2, 2.5, 4, 5 or 10 ms. As shown in Fig. 1, a FFP consists of a channel occupancy time (COT) and an idle period, with durations of $T_{COT}$ and $T_{idle}$, respectively. The maximum duration of $T_{COT}$ is 95% of $T_{FFP}$. The duration of $T_{idle}$ is at least 5% of $T_{FFP}$ but not smaller than 100 $\mu$s. In the idle period, there is a single observation slot where a CCA of $T_{CCA}$ = 25 $\mu$s is performed. If a transmitter has data and senses an idle channel at a CCA occasion, the transmitter immediately starts a transmission at the beginning of the FFP following the end of that CCA occasion. The transmitter occupies the channel during $T_{COT}$ and stops the transmission during $T_{idle}$. The transmitter can share the channel within $T_{COT}$ with the receiver, so that the receiver is also able to use that COT to transmit data.
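The FFP timing rules just quoted (COT at most 95% of $T_{FFP}$, idle period at least 5% of $T_{FFP}$ and at least 100 $\mu$s) can be sketched numerically. This is an illustrative helper, not code from the paper:

```python
# Illustrative sketch of the FFP timing rules quoted above (not from the
# paper): the idle period is at least 5% of T_FFP and at least 100 us,
# which also bounds the maximum COT, T_COT <= T_FFP - T_idle_min.

def ffp_split_us(t_ffp_ms):
    """Return (maximum T_COT, minimum T_idle) in microseconds."""
    t_ffp = t_ffp_ms * 1000.0
    t_idle_min = max(0.05 * t_ffp, 100.0)
    # Since t_idle_min >= 5% of T_FFP, t_cot_max never exceeds 95% of T_FFP.
    t_cot_max = t_ffp - t_idle_min
    return t_cot_max, t_idle_min

for t_ffp_ms in (1, 2, 2.5, 4, 5, 10):
    print(t_ffp_ms, ffp_split_us(t_ffp_ms))
```

For the shortest FFP of 1 ms, the 100 $\mu$s floor dominates, giving a maximum COT of 900 $\mu$s, which matches the COT value used in the simulations of Section V.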
In contrast, if the transmitter senses a busy channel, it does not transmit data and has to wait until the CCA occasion in the next FFP to perform another channel sensing. In a network, both the gNB and the UE are capable of initiating their own COT. The FBE channel access mechanism is modeled by the Markov chain in Fig. 2. $p_{0}$ is the probability that a transmitter has no packet to transmit in a FFP. If the transmitter has a packet to transmit in a FFP, it jumps from State Start to State 0 with probability $1-p_{0}$ and senses the channel in a CCA occasion at the end of that frame. Ignoring the alignment time, the time consumed by the first channel sensing is $T_{CCA}$. If the channel is sensed idle, the transmitter obtains the channel to transmit at $T_{CCA}$ and jumps to State Success with probability $1-p_{c}$, then goes back to State Start, where $p_{c}$ is the probability of sensing a busy channel. State Success means that the transmitter succeeds in acquiring the channel; it does not mean that, after acquiring the channel, the transmitter transmits data and the data is decoded correctly at the receiver. If the channel is busy, the transmitter must wait until the CCA occasion in the next FFP to sense the channel again, and it jumps from State 0 to State 1 with probability $p_{c}$. If this sensing is successful, the transmitter obtains the channel at $T_{CCA}+T_{FFP}$. The process continues until the transmitter accesses the channel or gives up the sensing process for that packet due to a time constraint. In the model, State K represents the last channel sensing, in the (K+1)th frame since the first sensing frame. K can be infinite. The Markov chain in Section II is used to model the FBE channel access mechanism of the URLLC transmitters, or of other types of 5G NR transmitters coexisting with the URLLC transmitters in the 5G NR system.
The transmitters share the same frequency channel in sub-6 GHz bands, use omnidirectional sensing (omni-LBT) to sense and acquire a channel in the FBE channel access mechanism, and then transmit data using omnidirectional transmission. All transmitters can detect each other. Hidden nodes are not included because they do not affect the ability of a transmitter to access a channel, which is the main focus of this Markov chain model: a transmitter cannot sense the hidden nodes, so it is not blocked from accessing the channel by them. The receivers use omnidirectional reception to receive data. The URLLC transmitters have transmissions with strict latency and reliability requirements, so they operate in an unlicensed controlled environment to satisfy these requirements, because the unlicensed controlled environment guarantees the absence of other access technologies such as WIFI and of 5G devices operating in LBE on a long-term basis [9], [10]. The unlicensed controlled environment is also assumed in the scenarios in Section III and Section IV.

### II-B Probabilities of the states and channel access in the Markov chain for FBE channel access

We denote by $\pi_{Start},\pi_{Success},\pi_{Failure},\pi_{i}$ where $i\in[0,K]$ the stationary distribution of the Markov chain in Fig. 2. We have

$\pi_{Start}=\pi_{Success}+\pi_{Failure}.$ (1)

The probability of State 0 is

$\pi_{0}=(1-p_{0})\pi_{Start}.$ (2)

The probabilities from State 1 to State K are

$\pi_{i}=p_{c}^{i}\,\pi_{0},\quad i\in[1,K].$ (3)

The probability that a transmitter succeeds in obtaining a channel is also the probability that the transmitter sends data.
The probability of channel sensing success, $\pi_{Success}$, is

$\pi_{Success}=\pi_{0}(1-p_{c})\sum_{i=0}^{K}p_{c}^{i}=\pi_{0}(1-p_{c}^{K+1})=\pi_{Start}(1-p_{0})(1-p_{c}^{K+1}).$ (4)

The probability is calculated given that a packet has already entered the process, so $\pi_{Start}$ equals 1. In other words, the transmission probability is calculated given that $\pi_{Start}$ is 1. By substituting $\pi_{Start}=1$ into (4), we obtain the probability that a transmitter transmits data,

$P_{trans}=(1-p_{0})(1-p_{c}^{K+1}).$ (5)

When a transmitter has a packet and senses the channel to transmit that packet, the probability that the transmitter fails to obtain the channel after all allowed channel sensing attempts and must drop the packet is

$P_{failure}=p_{c}^{K+1}.$ (6)

### II-C Relation between $p_{0}$ and $p_{c}$

Figure 3: Channel sensing and data transmission in FBE.

The FFP of each transmitter can be configured with an offset so that it starts at a different time in the same frequency resources, as in Fig. 3, so a CCA occasion of one transmitter might overlap with a COT of another transmitter. If a transmitter is sending data in a COT when another transmitter senses the channel, the channel is sensed busy, as in the case of UE2 in Fig. 3. UE2 must wait for the next FFP to sense an idle channel and transmit data. This offset ensures that two UEs do not sense an idle channel and then transmit at the same time, so as to avoid interference between two simultaneous transmissions, which would degrade transmission reliability. We consider a system with $Q$ transmitters that have the same $K+1$ sensing states but different starting points of their FFPs. These transmitters belong to a 5G NR network and send data in the same frequency resource in an unlicensed controlled environment. A CCA occasion of a transmitter overlaps with the COTs of the other transmitters.
A transmitter of interest senses an idle channel in a CCA occasion if none of the other $Q-1$ UEs is transmitting at that time. Using the transmission probability of a transmitter in (5), we obtain the relation between $p_{0}$ and $p_{c}$:

$p_{c}=1-(1-(1-p_{0})(1-p_{c}^{K+1}))^{Q-1}.$ (7)

From (7), $p_{c}$ can be calculated when $p_{0}$, $K$ and $Q$ are known.

### II-D URLLC operation with FBE in unlicensed spectrum

As shown in Fig. 3, a transmitter performs channel sensing to transmit a packet at fixed moments with a gap of $T_{FFP}$ between two consecutive moments. The URLLC transmission has a latency budget of 1 ms, while the smallest duration of a fixed frame period $T_{FFP}$ is 1 ms. Therefore, within the URLLC latency budget, the transmitter can perform only one channel sensing, because the second channel sensing occurs at $T_{CCA}+T_{FFP}$, which is larger than 1 ms. If it fails in this only chance, it cannot access the channel to transmit data within the URLLC latency requirement and the URLLC packet is dropped. When a transmitter has a URLLC packet with only one chance of channel sensing, the value of $K$ in the Markov chain is 0. Substituting $K=0$ into (6), the probability that the transmitter fails to access a channel in order to transmit a URLLC packet is

$P_{failure\\_URLLC}=p_{c}.$ (8)

When $Q$ transmitters are configured to transmit URLLC packets in a controlled environment, from (7) we obtain the relation between $p_{0}$ and $p_{c}$ of the URLLC transmitters:

$p_{c}=1-(1-(1-p_{0})(1-p_{c}))^{Q-1}.$ (9)

The limited channel sensing opportunity due to the time constraint increases the probability of channel access failure. This increases the number of dropped packets and reduces the reliability of URLLC transmission. This section provided a Markov chain to model channel access in unlicensed spectrum when the transmitters in the system use FBE to acquire a channel.
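As a minimal numerical sketch (not code from the paper), equation (7) can be solved for $p_{c}$ by fixed-point iteration; setting $K=0$ then gives the URLLC channel access failure probability of (8):

```python
# Minimal sketch (assumption: plain fixed-point iteration converges here,
# since the right-hand side of (7) is strongly contracting for the values
# used): solve p_c = 1 - (1 - (1 - p0) * (1 - p_c**(K+1)))**(Q-1).

def blocking_probability(p0, K, Q, iters=1000):
    pc = 0.0
    for _ in range(iters):
        pc = 1.0 - (1.0 - (1.0 - p0) * (1.0 - pc ** (K + 1))) ** (Q - 1)
    return pc

# URLLC case of (8)-(9): the 1 ms latency budget allows a single sensing
# attempt, so K = 0 and the failure probability is simply p_c.
pc = blocking_probability(p0=0.99, K=0, Q=2)
print(f"{pc:.6f}")  # about 0.0099 for two URLLC transmitters
```

The result for two transmitters at $p_{0}=0.99$ is close to $10^{-2}$, consistent with the conventional-scheme curves discussed in Section V.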
Subsequently, the URLLC latency constraint is applied to the model to show the impact of FBE channel access on URLLC performance.

## III Multiple configurations of FFP in FBE for URLLC in unlicensed spectrum

### III-A Multiple configurations of FFP

Figure 4: Multiple configurations of FFP.

The conventional FBE scheme provides only one opportunity for channel sensing within the URLLC transmission's latency budget. To provide more opportunities for channel access within the URLLC latency budget of 1 ms and to reduce the errors due to packets dropped outside the latency budget in UL transmission, we propose a new FBE scheme in which a transmitter is configured with multiple FFP configurations. These FFP configurations are in the same frequency resources but overlap in time and have different starting points, so they do not require additional resources compared to the conventional scheme with one FFP configuration, as shown in Fig. 4. The number of FFP configurations that a transmitter can use is configured by the gNB through downlink control information (DCI) or radio resource control (RRC) signaling, based on the arrival rate of data, the probability of channel access, and the priority and requirements of the data transmission. The beginning of a FFP in a configuration is shifted by an amount of time defined as $T_{offset}$ relative to the beginning of a FFP in the previous configuration. $T_{offset}$ is configured by the gNB through DCI or RRC based on the number of configurations that a transmitter has. A transmitter with high priority data such as URLLC uses multiple FFP configurations in its attempts to access the channel and transmit a high priority packet, while a transmitter with low priority data uses only one FFP configuration in its attempts to access the channel and transmit a low priority packet.
This means that a transmitter might have multiple FFP configurations but uses only one configuration in its attempts to access the channel and transmit low priority data, as in the conventional FBE scheme. The FFP periodicity in all configurations of a transmitter is the same and is configured by the gNB through DCI or RRC. When a transmitter has high priority data such as URLLC, it senses the channel in the CCA occasions of the different FFP configurations. It starts sensing the channel from the configuration with the closest CCA occasion after the arrival time of the data, in order to reduce the waiting time. If it fails to access the channel in one configuration, it makes another attempt at the closest CCA occasion of the next configuration, instead of waiting one frame period in the same configuration as in the conventional scheme. Subsequently, if it succeeds in channel access in a configuration, it uses that configuration to start the transmission in the $T_{COT}$ of a FFP immediately after the successful CCA occasion. This COT is also shared with the receiver to transmit data in the opposite direction. As shown in Fig. 4, a UE is configured with three FFP configurations with different starting points in the same frequency resources. When data arrives, the UE starts channel sensing in a CCA occasion of the first configuration, because this configuration has the closest CCA occasion after the moment the data arrives. This reduces the waiting time before the first sensing. The UE senses a busy channel in the CCA occasion of the first configuration, so it moves to the second configuration to perform channel sensing in the next CCA occasion. The channel is still busy, so the UE moves to the third configuration. This time the channel is idle, so the UE chooses this configuration to transmit data. The data is transmitted immediately in a FFP after the successful CCA.
With the multiple FFP configuration scheme in this example, the UE has three opportunities for channel sensing in an interval of $T_{FFP}$, instead of only one opportunity in the conventional scheme.

### III-B The Markov chain of FBE channel access with multiple configurations of FFP

Figure 5: The Markov chain for multiple configurations of FFP in FBE channel access.

Fig. 5 shows the Markov chain of FBE channel access with multiple FFP configurations. $n$ is the number of FFP configurations that a transmitter uses for channel access for a packet's transmission. $T_{offset}$ equals $\frac{T_{FFP}}{n}$. The other parameters are defined in Section II-A. A transmitter has a packet and jumps from State Start to State 00 with probability $1-p_{0}$. State 00 represents the first sensing frame in the first chosen configuration. If the channel is idle, the transmitter jumps from State 00 to State Success with probability $1-p_{c}$ and then moves back to State Start. State Success means that the transmitter succeeds in acquiring the channel; it does not mean that, after acquiring the channel, the transmitter transmits data and the data is decoded correctly at the receiver. If the channel is busy, the transmitter jumps from State 00 to State 01 with probability $p_{c}$. State 01 represents the first sensing frame in the next configuration. If the channel continues to be busy, the transmitter proceeds to the next states. After going through the first sensing frame of all configurations, the transmitter comes back to the first chosen configuration and senses the channel in the second sensing frame, represented by State 10. The process continues until the channel is obtained successfully and the transmitter jumps to State Success, or all CCAs within the allowed time fail and the transmitter jumps to State Failure. Finally, the transmitter returns to State Start.
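The chain's failure behavior can be checked with a small Monte Carlo sketch. This is an illustration under the model's own assumption that successive CCA outcomes are independently busy with probability $p_{c}$, not code from the paper:

```python
# Monte Carlo sketch of the multi-configuration Markov chain (assumes, as
# the model does, that each CCA occasion is independently busy with
# probability p_c). A packet fails only if every one of its n*(K+1) CCA
# occasions is busy, i.e. with probability p_c**(n*(K+1)).

import random

def simulate_failure(pc, n, K, trials=200_000, seed=1):
    rng = random.Random(seed)
    failures = sum(
        all(rng.random() < pc for _ in range(n * (K + 1)))
        for _ in range(trials)
    )
    return failures / trials

pc, n, K = 0.3, 3, 1
print(simulate_failure(pc, n, K), pc ** (n * (K + 1)))
```

The empirical estimate and the closed form agree to within Monte Carlo noise, which supports the analytic expressions derived next.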
Following the same calculations as in Section II-B, with $\pi_{Start}$ equal to 1 for a packet that has already entered the process, the probability of transmission for a transmitter using FBE with multiple FFP configurations is

$P_{trans\\_nconfig}=(1-p_{0})(1-p_{c}^{n(K+1)}).$ (10)

The probability of channel access failure when the transmitter has a packet to transmit is

$P_{failure\\_nconfig}=p_{c}^{n(K+1)}.$ (11)

If $Q$ transmitters have $n$ configurations in a controlled environment, the relation between $p_{c}$ and $p_{0}$ is

$p_{c}=1-(1-(1-p_{0})(1-p_{c}^{n(K+1)}))^{Q-1}.$ (12)

From (12), $p_{c}$ can be calculated when $p_{0}$, $K$, $n$ and $Q$ are known. In the multiple FFP configuration scheme, a transmitter has at most $n$ attempts to access a channel from $T_{CCA}$ to $T_{CCA}+T_{FFP}$ (channel sensing at $T_{CCA}+T_{FFP}$ is not counted). Therefore, a transmitter with a URLLC packet has at most $n$ channel sensing opportunities within the URLLC latency budget of 1 ms when $T_{FFP}$ is 1 ms, instead of only one opportunity in the conventional FBE scheme. Let $m$ ($m\leq n$) be the number of FFP configurations that the transmitter can use within the URLLC latency budget of 1 ms. The probabilities of transmission and channel sensing failure of a URLLC transmitter are

$P_{trans\\_nconfig\\_URLLC}=(1-p_{0})(1-p_{c}^{m}).$ (13)

$P_{failure\\_nconfig\\_URLLC}=p_{c}^{m}.$ (14)

The relation between $p_{c}$ and $p_{0}$ for $Q$ URLLC transmitters that can each use $m$ configurations within the URLLC latency budget of 1 ms in a controlled environment is

$p_{c}=1-(1-(1-p_{0})(1-p_{c}^{m}))^{Q-1}.$ (15)

Multiple FFP configurations in FBE mitigate channel access failure in the URLLC transmission due to the time constraint. Moreover, the multiple FFP configuration scheme in FBE also reduces the alignment time compared to conventional FBE with one configuration. When a packet arrives, a transmitter must wait until the closest CCA occasion to perform channel sensing.
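Equation (15) can again be solved numerically. The sketch below (illustrative, not the paper's code) shows how the URLLC failure probability of (14) drops as the number of usable configurations $m$ grows:

```python
# Illustrative sketch: solve the fixed point of equation (15),
# p_c = 1 - (1 - (1 - p0) * (1 - p_c**m))**(Q-1), by iteration and
# report the URLLC channel access failure probability (14), p_c**m.

def urllc_failure_multi_config(p0, m, Q, iters=2000):
    pc = 0.0
    for _ in range(iters):
        pc = 1.0 - (1.0 - (1.0 - p0) * (1.0 - pc ** m)) ** (Q - 1)
    return pc ** m

# Two URLLC transmitters with p0 = 0.99: each extra configuration gains
# roughly two orders of magnitude of reliability.
for m in (1, 2, 3):
    print(m, urllc_failure_multi_config(p0=0.99, m=m, Q=2))
```

For $m=2$ the failure probability is about $10^{-4}$, consistent with the two-configuration results discussed in Section V.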
The alignment time is uniformly distributed between two consecutive CCA occasions. For the conventional FBE scheme, the average alignment time is $\frac{T_{FFP}}{2}$, while it is $\frac{T_{offset}}{2}$ for the multiple FFP configuration scheme. Because $T_{offset}$ between two configurations is smaller than $T_{FFP}$, the alignment time of the multiple FFP configuration scheme is smaller than that of the conventional scheme. This benefits the URLLC transmission with its strict latency requirement. On the other hand, in the multiple FFP configuration scheme, the receiver must blindly detect the presence of a transmission more than once in a duration of $T_{FFP}$. The receiver has to check each configuration to see whether there is a transmission, because of the uncertainty of channel sensing and of data arrival at the transmitter. The number of blind detections in a duration of $T_{FFP}$ equals the number of FFP configurations. Therefore, to reduce the burden and energy consumption at the receiver, only high priority transmitters such as the URLLC type are allowed to use multiple FFP configurations in channel sensing and transmission.

## IV FFP arrangement based on the transmitter's priority

Figure 6: FFP arrangement in FBE based on the UE's priority.

In Section II and Section III, a transmission in one COT initiated by a transmitter might block any other transmitter in the network from initiating its own COT, because a CCA occasion of one transmitter overlaps with a COT of another transmitter. This results in the same probability of sensing a busy channel for all transmitters in the network. However, when a network contains transmitters with different priorities in terms of latency and reliability, it would be better if the high priority transmitters had a higher chance of accessing the channel than the low priority transmitters. Therefore, we propose another FBE scheme in this section to support UEs with different priorities.
The FFP of each transmitter is configured with an offset so that a CCA occasion of a high priority transmitter overlaps with an idle period of a low priority transmitter. In other words, a low priority transmitter stops its transmission before a CCA occasion of a high priority transmitter, so the high priority transmitter senses an idle channel whenever it has data to transmit. Meanwhile, a CCA occasion of a low priority transmitter overlaps with a COT of a high priority transmitter, so the low priority transmitter might be blocked from initiating its own COT. In this scheme, a transmitter is only blocked from initiating its COT by COT initiators with higher priority, and not by COT initiators with lower priority. The transmitters belong to a 5G NR network and send data in the same frequency resources in an unlicensed controlled environment, without interference from WIFI and other devices operating in LBE. The priorities of the UE transmitters are defined based on the latency and reliability requirements of the data they transmit. Following the 3GPP standards, the priority of data is defined by two methods: a priority indicator field in DCI, or RRC configuration. A priority indicator field is added in the new DCI format to indicate the priority of the scheduled data on the physical uplink shared channel (PUSCH). However, for CG PUSCH, the priority of data on PUSCH is configured by RRC and is not indicated by the activation DCI. Fig. 6 shows an example of a FFP arrangement based on the UEs' priorities. UE1 has the highest priority, so its CCA occasion is configured to overlap with the idle periods of UE2 and UE3. UE1 always senses an idle channel, so it can transmit data and fulfil its requirements. UE2 has lower priority than UE1 but higher priority than UE3. Its CCA occasion is shifted by $T_{offset}$ to overlap with UE1's COT and UE3's idle period.
Due to this arrangement, UE2 might be blocked from initiating its COT by UE1 but is not affected by UE3. Finally, the lowest priority UE3 might be blocked from initiating its COT by the other UEs, because its CCA occasion overlaps with the COTs of UE1 and UE2. The system model with the parameter $p_{0}$ defined in Section II is used to calculate the channel blocking probability of each UE in the priority-based FFP arrangement scheme. We extend the system in Fig. 6 to a system with $Q$ transmitters. In this system, UE1 has absolute priority and is not blocked by the transmissions of the other UEs in the network, so the probability that UE1 senses a busy channel is $p_{c1}=0$. UE2 has lower priority than UE1, so it might be blocked by UE1's transmission. However, it has higher priority than every other UE except UE1 and is not blocked by those transmissions. The probability that UE2 senses a busy channel is

$p_{c2}=1-(1-(1-p_{0})(1-p_{c1}))=1-p_{0}.$ (16)

UE3 might be blocked by the higher priority transmissions of UE1 and UE2, so the probability that UE3 senses a busy channel is

$p_{c3}=1-(1-(1-p_{0})(1-p_{c1}))(1-(1-p_{0})(1-p_{c2}))=1-p_{0}(1-(1-p_{0})p_{0}).$ (17)

From (16) and (17), we can derive a general equation for the channel blocking probability of the $i^{th}$ UE in the priority-based FFP arrangement scheme:

$p_{ci}=1-\prod_{j=1}^{i-1}(1-(1-p_{0})(1-p_{cj})).$ (18)

The URLLC UE has only one chance to perform channel sensing, so an arrangement that yields $p_{c}=0$ for the URLLC UE guarantees the URLLC transmission within the latency budget. Therefore, this scheme benefits the URLLC UE when it coexists with other, lower priority UEs.
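The recursion (18) is straightforward to evaluate. The following sketch (illustrative, not the paper's code) reproduces the special cases (16) and (17):

```python
# Illustrative sketch: evaluate equation (18),
# p_ci = 1 - prod_{j=1}^{i-1} (1 - (1 - p0) * (1 - p_cj)),
# for Q transmitters ordered from highest (UE1) to lowest priority.

def priority_blocking_probabilities(p0, Q):
    pcs = []
    for _ in range(Q):
        prod = 1.0
        for pcj in pcs:  # only the higher-priority UEs can block this one
            prod *= 1.0 - (1.0 - p0) * (1.0 - pcj)
        pcs.append(1.0 - prod)
    return pcs

# With p0 = 0.99: UE1 is never blocked, UE2 sees 1 - p0 = 0.01, and
# UE3 sees the value given by equation (17).
print(priority_blocking_probabilities(p0=0.99, Q=3))
```

As expected, the blocking probability grows monotonically as priority decreases, while the highest priority UE always senses an idle channel.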
To arrange the CCA occasions of the high priority UEs to overlap with the idle periods of the low priority UEs, the requirement on the idle period $T_{idle}$ must be stricter than the current one, in which $T_{idle}$ is at least 5% of $T_{FFP}$ but not smaller than 100 $\mu$s. In a network with $Q$ UEs using the priority-based FFP arrangement scheme, $T_{idle}$ must satisfy

$T_{idle}>\max\\{(Q-1)T_{offset}+T_{CCA},100\mu s\\}.$ (19)

$T_{idle}$ increases as the number of UEs in the network increases. When $T_{FFP}$ does not change, this results in a decrease of the transmission time $T_{COT}$. Therefore, a UE cannot transmit a long packet but has to fragment it into small segments and transmit them in consecutive FFPs. Latency increases because there are more idle periods in the transmission of this long packet. Moreover, the UE needs to perform channel access between FFPs, which further increases latency due to the uncertainty of channel access. Therefore, the FFP arrangement that guarantees the transmission of the high priority UEs is suitable for a network with a small number of UEs having different priorities and short packets to transmit.

## V Numerical and simulation results

TABLE I: Simulation parameters for Fig. 7, Fig. 8, Fig. 9, Fig. 10, Fig. 11 and Fig. 12

| Parameters | Values |
|---|---|
| Fixed frame period | 1 ms |
| Channel occupancy time | 900 $\mu$s |
| Number of configurations per UE | 1-4 |
| $p_{0}$ | 0.99, 0.95 |
| Number of simulated frames | $10^{10}$ |
| Bandwidth | 20, 80 MHz |

In this section, the analytic results are verified by MATLAB simulations.
The performance of channel access in the three FBE schemes analyzed in Sections II, III and IV is compared: the conventional scheme, where each transmitter uses one FFP configuration to sense the channel and transmit data; the proposed scheme, where each transmitter uses multiple FFP configurations to sense the channel and transmit data; and the proposed scheme where the FFPs of the transmitters in the system are arranged based on their priorities. The parameters in Table I are used for the simulations in Fig. 7, Fig. 8, Fig. 9, Fig. 10, Fig. 11 and Fig. 12. In these simulations, UL transmission is carried out: the transmitters (UEs) transmit data to the receiver (gNB) in a single cell. The transmitters use omnidirectional sensing to sense and acquire the channel in the FBE channel access mechanism, then transmit data using omnidirectional transmission. All transmitters can detect each other. Hidden nodes are not included because they do not affect the ability of a transmitter to access the channel, which is the main focus of the proposed schemes: a transmitter cannot sense hidden nodes, so it is not blocked from accessing the channel by them. The receivers use omnidirectional reception to receive data. The FFP is set to 1 ms with a COT of 900 $\mu$s. The LBT channel access mechanism operates per 20 MHz of bandwidth, so the UEs in the simulations use the same 20 MHz channel for channel sensing and data transmission. This means that the number of UEs in the following graphs represents the number of UEs in one 20 MHz frequency resource unit. This result can be extended to systems with different bandwidths. In one system, if a 20 MHz bandwidth is divided into several interlaces and each UE transmits in one interlace, the total number of UEs in the system is the product of the number of UEs shown in Fig. 7 and Fig. 8 and the number of interlaces in the 20 MHz bandwidth.
On the other hand, if the NR-U carrier is a wide band that is a multiple of 20 MHz and each UE in the system operates in a 20 MHz channel band, the total number of UEs in the system is a multiple of the number of UEs shown in Fig. 7 and Fig. 8. The results of a wide-band system with a bandwidth of 80 MHz are shown in Fig. 9 and Fig. 10. The UEs are assumed to transmit URLLC packets with a low arrival rate, where $p_{0}$ is set to 0.99 or 0.95.

Figure 7: Probability of channel access failure in FBE in the conventional one-configuration scheme and the proposed multiple-configuration scheme with $p_{0}=0.99$ and a bandwidth of 20 MHz.

Figure 8: Probability of channel access failure in FBE in the conventional one-configuration scheme and the proposed multiple-configuration scheme with $p_{0}=0.95$ and a bandwidth of 20 MHz.

Fig. 7 and Fig. 8 show the performance of channel access with different numbers of FFP configurations per UE in a bandwidth of 20 MHz, where $p_{0}$ is set to 0.99 and 0.95, respectively. In Fig. 7, the conventional scheme, where each UE uses only one FFP configuration for channel sensing, has a high probability of channel sensing failure. This probability increases rapidly as the number of UEs in the system increases. Even with only two UEs, the blocking probability of each UE is $10^{-2}$, which is much higher than the URLLC reliability requirement. Fig. 8 shows a similar result for one configuration at $p_{0}=0.95$. Therefore, the conventional scheme is not suitable for URLLC transmission. Fig. 7 also shows the blocking probability in the multiple-FFP-configuration scheme at $p_{0}=0.99$ for comparison with the conventional one-configuration scheme. If each UE has two configurations, the blocking probability of $10^{-4}$ for each UE in a system of two UEs is much smaller than that of the conventional scheme.
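The drop from roughly $10^{-2}$ to $10^{-4}$ when going from one to two configurations is consistent with a simple back-of-the-envelope estimate: if one assumes, as a simplification, that CCA attempts in different FFP configurations fail independently, the failure probability scales as the single-attempt blocking probability raised to the number of configurations. This heuristic (not the paper's exact analysis) can be sketched as:

```python
def approx_failure_prob(p_single, n_configs):
    """Rough channel access failure probability with n_configs FFP
    configurations, under the simplifying assumption that CCA attempts
    in different configurations fail independently."""
    return p_single ** n_configs

# Single-attempt blocking probability of 1e-2 (two UEs, p0 = 0.99, Fig. 7):
for n in (1, 2, 3, 4):
    print(n, approx_failure_prob(1e-2, n))
```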
When more FFP configurations per UE are used, each UE achieves a smaller blocking probability even when there are more UEs in the system. The benefit of multiple FFP configurations is also shown in Fig. 8, although the probability of channel access failure in Fig. 8 is higher than in Fig. 7 because of the higher data arrival rate. Therefore, this scheme is suitable for a system where several URLLC UEs coexist.

Figure 9: Probability of channel access failure in FBE in the conventional one-configuration scheme and the proposed multiple-configuration scheme with $p_{0}=0.99$ and a bandwidth of 80 MHz.

Figure 10: Probability of channel access failure in FBE in the conventional one-configuration scheme and the proposed multiple-configuration scheme with $p_{0}=0.95$ and a bandwidth of 80 MHz.

Figure 11: Probability of channel access failure in the schemes of [16], [17] and the proposed multiple-configuration scheme with $p_{0}=0.99$ and a bandwidth of 20 MHz.

Figure 12: Probability of channel access failure in the schemes of [16], [17] and the proposed multiple-configuration scheme with $p_{0}=0.95$ and a bandwidth of 20 MHz.

In [16], when a UE senses a busy channel in a CCA occasion of 25 $\mu$s and cannot acquire the channel, the idle period in the following FFP is removed, so the UE can sense the channel right after the channel occupancy time instead of waiting for the entire FFP. The channel occupancy time in Table I is 900 $\mu$s, which means that after an unsuccessful CCA, the UE performs its second channel sensing after 900 $\mu$s. Therefore, within the URLLC latency budget of 1 ms, a URLLC UE has at most two sensing opportunities. Similarly, in [17], there is no idle period in a frame, so the UE also has at most two channel sensing opportunities within the 1 ms latency budget. Fig. 11 and Fig. 12 compare the performance of the schemes in [16], [17] with the proposed multiple-FFP-configuration scheme.
As can be seen in the figures, the performance of the schemes in [16], [17] is equivalent to the proposed scheme with two FFP configurations. The probability of channel access failure in these schemes is still higher than the URLLC requirements. When the number of FFP configurations increases to three and four, the multiple-FFP-configuration scheme achieves better performance, with lower probabilities of channel access failure. The multiple-FFP-configuration scheme allows the number of FFP configurations to be adjusted flexibly based on channel conditions and data requirements without affecting the transmission duration in the COT, whereas the schemes in [16], [17] must reduce the COT to increase the number of channel sensing opportunities.

TABLE II: Simulation parameters for Fig. 13

Parameters | Values
---|---
Fixed frame period | 1 ms
Channel occupancy time | 650 $\mu$s
Offset | 40 $\mu$s
$p_{0}$ | 0.99, 0.95
Number of simulated frames | $10^{10}$
Bandwidth | 20 MHz

Figure 13: Performance of channel access in the FFP priority arrangement scheme.

The parameters in Table II are used for the simulations in Fig. 13. Fig. 13 shows the channel access performance of the UEs in FBE when the FFPs are arranged based on the UEs' priorities. The first UE is assumed to have the highest priority, and the priority of each UE decreases with its ordinal number. The first UE always has a channel access failure probability of 0. In Fig. 13, each point represents the blocking probability of the $i^{th}$ UE in the system. With $p_{0}=0.99$, the second UE has a blocking probability of 0.01 in a system with at least two UEs, and the third UE has a blocking probability of 0.02 in a system with at least three UEs. The blocking probability of the $i^{th}$ UE is presented similarly. The UEs share one 20 MHz bandwidth, and the number of UEs can be extended by using interlaces or a wide band as explained above.
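The values read from Fig. 13 can be sanity-checked with a simplified Monte Carlo model of the priority arrangement, in which a UE transmits in a frame if it has a packet (probability $1-p_{0}$) and no higher-priority UE is already transmitting; the per-frame independence model and function name are ours:

```python
import random

def simulate_priority_blocking(Q, p0, frames, seed=0):
    """Estimate, for each of Q UEs in decreasing priority order, the
    fraction of frames in which it senses a busy channel, under a
    simplified per-frame model of the FFP priority arrangement scheme."""
    rng = random.Random(seed)
    busy_counts = [0] * Q
    for _ in range(frames):
        channel_busy = False
        for i in range(Q):
            if channel_busy:
                busy_counts[i] += 1
            elif rng.random() < 1.0 - p0:
                channel_busy = True  # UE i has a packet and transmits
    return [c / frames for c in busy_counts]

est = simulate_priority_blocking(3, 0.99, 200_000)
# Estimates should be close to 0, 0.01 and roughly 0.02 for the three UEs.
print(est)
```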
Figure 14: A comparison of the conventional one-configuration scheme, the multiple-configuration scheme and the FFP priority arrangement scheme at $p_{0}=0.99$.

Fig. 14 compares the channel access performance of the three analyzed schemes. The FFP priority arrangement scheme gives approximately the same probability of channel access failure as the conventional one-configuration scheme. The difference is that all UEs in the conventional scheme have the same blocking probability, whereas a UE in the FFP priority arrangement scheme has a blocking probability that depends on its priority. As can be seen in Fig. 14, for the line of the conventional FBE scheme, each point represents the total number of UEs in the system, while for the line of the proposed FFP priority arrangement scheme, each point represents the blocking probability of the $i^{th}$ UE (the ordinal number) in the system. For example, if there are three UEs, in the conventional scheme all three UEs have a failure probability of 0.02. In the FFP priority arrangement scheme, the first UE has a failure probability of 0, which is smaller than that of the conventional scheme; the second UE has a probability of 0.01, corresponding to the point (2, 0.01) in the graph, which is also smaller than that of the conventional scheme; and the third UE has a probability of 0.02, corresponding to the point (3, 0.02) in the graph. Therefore, the FFP priority scheme is suitable for a system where UEs with different priorities, including URLLC UEs, coexist. This scheme does not increase the complexity of each UE or of the network design as the multiple-FFP-configuration scheme does, while the URLLC UE is given absolute priority at the cost of the channel access probability of the other UEs.
On the other hand, the multiple-FFP-configuration scheme with four configurations in the simulation provides the best channel access performance for all UEs in the system, although it requires a more complex receiver design to detect a transmission in one of four FFP configurations.

TABLE III: Simulation parameters for Fig. 15

Parameters | Values
---|---
Fixed frame period | 1 ms
Channel occupancy time | 900 $\mu$s
$p_{0}$ of URLLC UE | 0.99
$p_{0}$ of low priority UE | 0.5
Number of URLLC UE | 1-9
Number of low priority UE | 1
Bandwidth | 20 MHz

Figure 15: A comparison of FBE performance in the multiple-configuration scheme and the FFP priority arrangement scheme in a scenario where URLLC UEs coexist with a low-priority UE.

The parameters in Table III are used to simulate a scenario where several URLLC UEs coexist with a low-priority UE such as an eMBB UE, as shown in Fig. 15. The low-priority UE has a higher data arrival rate and no latency constraint. It can continue channel sensing until it obtains the channel, so $K$ in the Markov chain goes to infinity for this low-priority UE. In the multiple-configuration scheme, each URLLC UE has four configurations to sense the channel, while the eMBB UE uses only one configuration. In the FFP priority scheme, the eMBB UE is assigned the lowest-priority FFP. The FFP priority scheme gives the high-priority UEs better channel sensing performance: the first through fifth URLLC UEs in the FFP priority scheme achieve lower channel access failure probabilities than the URLLC UEs in the multiple-configuration scheme with the same number of UEs. When the number of URLLC UEs is greater than 5, the multiple-configuration scheme has better channel access performance. The choice of scheme depends on the number of UEs and their priorities in the system.
## VI Conclusion

This paper analyzes the channel access process in an unlicensed controlled environment when the FBE channel access mechanism is used. The analysis via a Markov chain shows the limitations of FBE in supporting URLLC due to the latency constraint. To improve the channel access performance of FBE for URLLC, we propose two schemes. The first scheme allows the transmitter to use multiple FFP configurations to sense the channel and transmit data after a successful CCA; with multiple FFP configurations, the transmitter has more chances to access the channel within the URLLC latency budget. The second scheme configures the FFPs of the transmitters in a system based on their priorities so that a high-priority transmitter's transmission is not blocked by a lower-priority transmitter's transmission. By using one of the two proposed schemes, the URLLC transmitter therefore has a smaller channel blocking probability. Simulations have verified the analysis and shown the benefits of the two proposed schemes compared to the current FBE schemes.

## References

* [1] 3GPP TR 38.913 v15.0.0, “Study on scenarios and requirements for next generation access technologies”, 2018.
* [2] Huawei, HiSilicon, Nokia, Nokia Shanghai Bell, “New SID on Physical Layer Enhancements for NR URLLC”, 3GPP RP-182089, TSG-RAN#81, Gold Coast, Australia, Sept 10–13, 2018.
* [3] 3GPP TS 38.211 v16.3.0, “Physical channels and modulation”, 2020.
* [4] 3GPP TS 38.214 v16.3.0, “Physical layer procedures for data”, 2020.
* [5] 3GPP TS 38.213 v16.3.0, “Physical layer procedures for control”, 2020.
* [6] 3GPP TS 37.213 v16.3.0, “Physical layer procedures for shared spectrum channel access”, 2020.
* [7] ETSI EN 301 893, “5 GHz RLAN; Harmonised Standard covering the essential requirements of article 3.2 of Directive 2014/53/EU (v2.1.1)”, May 2017.
* [8] T. Le, U. Salim and F. Kaltenberger, “Channel Access Enhancements in unlicensed spectrum for NR URLLC transmissions”, 2020 GLOBECOM, December 2020.
* [9] Huawei, HiSilicon, “Uplink enhancements for URLLC in unlicensed controlled environments”, 3GPP R1-2007568, 3GPP TSG RAN WG1 Meeting #103-e, Oct 26 – Nov 3, 2020.
* [10] InterDigital, Inc., “Enhancements for unlicensed band URLLC/IIoT”, 3GPP R1-2101291, 3GPP TSG RAN WG1 Meeting #104-e, Jan 25 – Feb 5, 2021.
* [11] Nokia, Nokia Shanghai Bell, “Revised WID: Enhanced Industrial Internet of Things (IoT) and ultra-reliable and low latency communication (URLLC) support for NR”, 3GPP RP-201310, 3GPP TSG RAN Meeting #88e, June 29 – July 3, 2020.
* [12] J. Um, S. Park and Y. Kim, “Analysis of channel access mechanism on 5 GHz unlicensed band,” 2015 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, 2015, pp. 898-902.
* [13] A. Abdelfattah, N. Malouch and J. Ling, “Analytical Evaluation and Potentials of Frame Based Equipment for LTE-LAA/WIFI Coexistence,” 2019 IEEE Symposium on Computers and Communications (ISCC), Barcelona, Spain, 2019, pp. 1-7.
* [14] J. Li, H. Shan, A. Huang, J. Yuan and L. X. Cai, “Modelling of synchronisation and energy performance of FBE- and LBE-based standalone LTE-U networks,” in The Journal of Engineering, vol. 2017, no. 7, pp. 292-299.
* [15] Gordon J. Sutton, Ren Ping Liu, Y. Jay Guo, “Coexistence Performance and Limits of Frame-Based Listen-Before-Talk,” IEEE Transactions on Mobile Computing, pp. 1084-1095, vol. 19, May 2020.
* [16] G. P. Wijesiri and F. Y. Li, “Frame Based Equipment Medium Access in LTE-U: Mechanism Enhancements and DTMC Modeling,” GLOBECOM 2017 - 2017 IEEE Global Communications Conference, Singapore, 2017, pp. 1-6.
* [17] B. Jia and M. Tao, “A channel sensing based design for LTE in unlicensed bands,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 2332-2337.
* [18] N. Wei, X. Lin, Y. Xiong, Z. Chen and Z.
Zhang, “An Optimal Stopping Approach to Listen-Before-Talk for Frame Based Equipment in Unlicensed Spectrum,” GLOBECOM 2017 - 2017 IEEE Global Communications Conference, Singapore, 2017, pp. 1-6.
* [19] N. Wei, X. Lin, Y. Xiong, Z. Chen and Z. Zhang, “Joint Listening, Probing, and Transmission Strategies for the Frame-Based Equipment in Unlicensed Spectrum,” in IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1750-1764, Feb. 2018.
* [20] Z. Wang, H. M. Shawkat, S. Zhao and B. Shen, “An LTE-U coexistence scheme based on cognitive channel switching and adaptive muting strategy,” 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, QC, 2017, pp. 1-6.
* [21] R. Maldonado, C. Rosa and K. I. Pedersen, “A Fully Coordinated New Radio-Unlicensed System for Ultra-Reliable Low-Latency Applications,” 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Korea (South), 2020, pp. 1-6.
* [22] A. Z. Hindi, S. Elayoubi and T. Chahed, “Performance Evaluation of Ultra-Reliable Low-Latency Communication Over Unlicensed Spectrum,” ICC 2019 - 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1-7.
* [23] G. Bianchi, “Performance analysis of the IEEE 802.11 distributed coordination function,” in IEEE Journal on Selected Areas in Communications, vol. 18, no. 3, pp. 535-547, March 2000.
* [24] J. Jeon, H. Niu, Q. Li, A. Papathanassiou and G. Wu, “LTE with listen-before-talk in unlicensed spectrum,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, UK, 2015, pp. 2320-2324.
* [25] A. Mukherjee et al., “System architecture and coexistence evaluation of licensed-assisted access LTE with IEEE 802.11,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, UK, 2015, pp. 2350-2355.
* [26] R. Yin, G. Yu, A. Maaref and G. Y.
Li, “LBT-Based Adaptive Channel Access for LTE-U Systems,” in IEEE Transactions on Wireless Communications, vol. 15, no. 10, pp. 6585-6597, Oct. 2016.
* [27] S. Han, Y. C. Liang, Q. Chen, and B. H. Soong, “Licensed-assisted access for LTE in unlicensed spectrum: A MAC protocol design,” IEEE J. Sel. Areas Commun., vol. 34, no. 10, pp. 2550–2561, Oct. 2016.
* [28] Y. Gao, X. Chu and J. Zhang, “Performance Analysis of LAA and WiFi Coexistence in Unlicensed Spectrum Based on Markov Chain,” 2016 IEEE Global Communications Conference (GLOBECOM), Washington, DC, 2016, pp. 1-6.
* [29] G. J. Sutton et al., “Enabling Ultra-Reliable and Low-Latency Communications through Unlicensed Spectrum,” in IEEE Network, vol. 32, no. 2, pp. 70-77, March-April 2018.
* [30] G. J. Sutton, R. P. Liu and Y. J. Guo, “Delay and Reliability of Load-Based Listen-Before-Talk in LAA,” in IEEE Access, vol. 6, pp. 6171-6182, 2018.
* [31] H. Ko, J. Lee, and S. Pack, “Joint optimization of channel selection and frame scheduling for coexistence of LTE and WLAN,” IEEE Trans. Veh. Technol., vol. 67, no. 7, pp. 6481–6491, Jul. 2018.
* [32] R. M. Cuevas, C. Rosa, F. Frederiksen and K. I. Pedersen, “On the Impact of Listen-Before-Talk on Ultra-Reliable Low-Latency Communications,” 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 2018, pp. 1-6.
* [33] Q. Chen, G. Yu, and Z. Ding, “Enhanced LAA for unlicensed LTE deployment based on TXOP contention,” IEEE Trans. Commun., vol. 67, no. 1, pp. 417–429, Jan. 2019.
* [34] Y. Zeng, Y. Wang, S. Sun and K. Yang, “Feasibility of URLLC in Unlicensed Spectrum,” 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS), Singapore, 2019, pp. 1-5.
* [35] S. Khoshabi Nobar, M. H. Ahmed, Y. Morgan and S. Mahmoud, “Joint Channel Assignment and Occupancy Time Optimization in Frame-Based Listen-Before-Talk,” in IEEE Communications Letters, vol. 24, no. 3, pp. 695-699, March 2020.
* [36] T. -K. Le, U. Salim and F.
Kaltenberger, “An Overview of Physical Layer Design for Ultra-Reliable Low-Latency Communications in 3GPP Releases 15, 16, and 17,” in IEEE Access, vol. 9, pp. 433-444, 2021.
# Computing rational powers of monomial ideals

Pratik Dongre, Benjamin Drabkin, Josiah Lim, Ethan Partida, Ethan Roy, Dylan Ruff, Alexandra Seceleanu, Tingting Tang

Indian Institute of Information Technology, Nagpur <EMAIL_ADDRESS>; Singapore University of Technology and Design <EMAIL_ADDRESS>; Johns Hopkins University <EMAIL_ADDRESS>; Brown University <EMAIL_ADDRESS>; The University of Texas at Austin <EMAIL_ADDRESS>; University of Toronto <EMAIL_ADDRESS>; University of Nebraska–Lincoln <EMAIL_ADDRESS>; San Diego State University <EMAIL_ADDRESS>

###### Abstract.

This paper concerns fractional powers of monomial ideals. Rational powers of a monomial ideal generalize the integral closure operation as well as recover the family of symbolic powers. They also highlight many interesting connections to the theory of convex polytopes. We provide multiple algorithms for computing the rational powers of a monomial ideal. We also introduce a mild generalization allowing real powers of monomial ideals. An important result is that given any monomial ideal $I$, the function taking a real number to the corresponding real power of $I$ is a step function which is left continuous and has rational discontinuity points.

###### Key words and phrases: monomial ideals, rational powers, Newton polyhedron, computational algebra, jumping numbers

###### 2020 Mathematics Subject Classification: Primary 13F55, 13F20.

## 1\. Introduction

An ideal of the polynomial ring $R=K[x_{1},\ldots,x_{d}]$ with coefficients in a field $K$ is a monomial ideal if it is generated by monomials. In this paper, we study a notion of powers for monomial ideals where the exponents are allowed to be real numbers, as follows: for $r\in{\mathbb{R}}$, $r>0$, we define the $r$-th real power of a monomial ideal $I$, denoted $\overline{I^{r}}$, to be the monomial ideal whose exponent set consists of the (integer) lattice points in the $r$-th dilate of the Newton polyhedron of $I$; see 3.1.
We emphasize that $\overline{I^{r}}$ is an ideal of the polynomial ring $R$, and in particular the monomial generators of $\overline{I^{r}}$ have natural number exponents. Thus our notion of real powers of ideals bears no overlap with work taking place in a ring where monomials are allowed to have real number exponents. Prominent examples of work in the latter context are [ISW13, ASW15, Mil20]. Our notion of real powers is inspired by, and in fact coincides, when $r\in{\mathbb{Q}}$, with the notion of rational powers, which can be defined for arbitrary ideals and has appeared previously in the literature in [HS06, §10.5], [Knu06], [Rus07], [Ciu20], [Ciu21], [Lew20]. In these works, rational powers come up in contexts ranging from valuation theory to intersection theory and have applications to establishing the Golod property. In particular, [Lew20, Corollary 3.4] establishes a strong connection between rational powers and the widely studied family of symbolic powers of monomial ideals. These applications have motivated and inspired us to seek effective methods for handling rational powers from a computational standpoint. The focus of this paper is twofold. First, we handle the task of computing real powers of monomial ideals. One main result in this direction is 3.5, where we show that the generators of a specified real power of a monomial ideal can be confined to a bounded convex region depending only on the exponent and the Newton polytope of the ideal. We complement this theoretical insight with a series of algorithms, Algorithm 1, Algorithm 2, Algorithm 3, and Algorithm 4, which exploit different features of the problem to provide practical solutions for computing real powers of monomial ideals. Our second aim is to study continuity properties of the exponentiation function whose base is a monomial ideal. Being able to do this provides motivation for working with real powers as opposed to the more common rational powers.
We find that the exponentiation function is a step function with rational discontinuity points, which we term jumping points. This leads to the conclusion that all distinct real powers of a fixed monomial ideal are given by rational exponents. Our main results on the properties of the real exponentiation function of a monomial ideal are contained in 5.2 (existence of right limits), 5.6 (left continuity), 5.7 (step function), and 5.9 (jumping numbers). Our paper is organized as follows. After introducing the notions of Newton polyhedron and integral closure in section 2, we turn our attention to real powers of monomial ideals in section 3 and present algorithms capable of computing these ideals in section 4. We end by studying continuity properties and jumping numbers for exponentiation in section 5.

## 2\. Background on integral closure and the Newton polyhedron

Let ${\mathbb{R}}$ and ${\mathbb{R}}_{+}$ denote the real numbers and non-negative real numbers, respectively. We denote by ${\mathbb{N}}$ the set of non-negative integers. Let $R=K[x_{1},\cdots,x_{d}]$ be a polynomial ring with coefficients in a field $K$. Every monomial ideal $I$ in $R$ has a unique minimal monomial generating set denoted $G(I)$. This is a set of monomials that generates $I$ and such that no element of $G(I)$ divides another element of $G(I)$. It is customary to denote monomials in $R$ by the shorthand notation ${\mathbf{x}}^{\mathbf{a}}:=x_{1}^{a_{1}}\cdots x_{d}^{a_{d}}$, where ${\mathbf{a}}\in{\mathbb{N}}^{d}$. The bijective correspondence between monomials ${\mathbf{x}}^{\mathbf{a}}$ and lattice points ${\mathbf{a}}\in{\mathbb{N}}^{d}$ gives rise to convex geometric representations for monomial ideals, chief among which is the Newton polyhedron.

###### Definition 2.1.
For any monomial ideal $I$, denote by ${\mathcal{L}}(I)$ the set of exponent vectors of all monomials in $I$: ${\mathcal{L}}(I)=\\{{\mathbf{a}}\mid{\mathbf{x}}^{\mathbf{a}}\in I\\}.$ The Newton polyhedron of $I$, denoted $NP(I)$, is the convex hull of ${\mathcal{L}}(I)$ in ${\mathbb{R}}^{d}$: $NP(I)=\operatorname{convex\ hull}{\mathcal{L}}(I)=\operatorname{convex\ hull}(\\{{\mathbf{a}}\mid{\mathbf{x}}^{\mathbf{a}}\in I\\}).$ The Newton polytope of $I$, denoted $\operatorname{np}(I)$, is the convex hull of the exponent vectors of a minimal monomial generating set for $I$: $\operatorname{np}(I)=\operatorname{convex\ hull}(\\{{\mathbf{a}}\mid{\mathbf{x}}^{\mathbf{a}}\in G(I)\\}).$ Notice that Newton polyhedra are unbounded, while Newton polytopes are bounded convex bodies. Both are lattice polyhedra, meaning that their vertices have integer coordinates. Their relationship can be described using the notion of Minkowski sum.

###### Definition 2.2.

The Minkowski sum of subsets $A,B\subseteq{\mathbb{R}}^{n}$ is $A+B=\\{{\mathbf{a}}+{\mathbf{b}}\mid{\mathbf{a}}\in A,{\mathbf{b}}\in B\\}.$ We also write $A-B=\\{{\mathbf{a}}-{\mathbf{b}}\mid{\mathbf{a}}\in A,{\mathbf{b}}\in B\\}$. The precise relationship between the Newton polyhedron and the Newton polytope of $I$, established for example in [CEHH17, Lemma 5.2], is given by the Minkowski sum decomposition (2.1) $NP(I)=\operatorname{np}(I)+{\mathbb{R}}^{d}_{+},$ where ${\mathbb{R}}^{d}_{+}=\\{(a_{1},\ldots,a_{d})\in{\mathbb{R}}^{d}\mid a_{i}\geq 0\\}$ denotes the positive orthant in ${\mathbb{R}}^{d}$.
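For finite sets of lattice points, the Minkowski sum of Definition 2.2 can be computed directly. The following small sketch (helper name ours) illustrates the decomposition (2.1) by summing the generator exponents of $I=(x^{2},y^{3})$ with a finite truncation of the positive orthant:

```python
def minkowski_sum(A, B):
    """Minkowski sum A + B of two finite sets of integer points."""
    return {tuple(a + b for a, b in zip(p, q)) for p in A for q in B}

# Generator exponents of I = (x^2, y^3), i.e. the vertices of np(I).
np_I = {(2, 0), (0, 3)}
# Finite truncation of the orthant N^2: shifts with entries up to 2.
orthant_piece = {(i, j) for i in range(3) for j in range(3)}
points = minkowski_sum(np_I, orthant_piece)
print(sorted(points))
```

Note that (1, 1) is absent from the output, matching the fact that $xy\notin NP(I)$ for this ideal.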
By the version of Carathéodory’s theorem in [CEHH17, Theorem 5.2], any point ${\mathbf{a}}\in NP(I)$ can be written as (2.2) ${\mathbf{a}}=\lambda_{1}{\mathbf{t}}_{1}+\cdots+\lambda_{d}{\mathbf{t}}_{d}+c_{1}{\mathbf{e}}_{1}+\cdots+c_{d}{\mathbf{e}}_{d},$ with $\lambda_{i},c_{j}\geq 0,\sum_{i=1}^{d}\lambda_{i}=1$, ${\mathbf{t}}_{1},\ldots,{\mathbf{t}}_{d}\in\operatorname{np}(I)$, and ${\mathbf{e}}_{1},\ldots,{\mathbf{e}}_{d}$ standard basis vectors in ${\mathbb{R}}^{d}$. Thus one can reformulate equation (2.1) using coordinatewise inequalities as (2.3) $NP(I)=\\{{\mathbf{a}}\in{\mathbb{R}}^{d}\mid{\mathbf{a}}\geq{\mathbf{b}}\text{ for some }{\mathbf{b}}\in\operatorname{np}(I)\\}.$ While the containment ${\mathcal{L}}(I)\subseteq NP(I)\cap{\mathbb{N}}^{d}$ holds by definition, in general the sets of lattice points ${\mathcal{L}}(I)$ and $NP(I)\cap{\mathbb{N}}^{d}$ need not be equal. We recall below that the set of lattice points in $NP(I)$ is in fact given by $NP(I)\cap{\mathbb{N}}^{d}={\mathcal{L}}(\overline{I})$, where $\overline{I}$ is the integral closure of $I$.

###### Definition 2.3.

The integral closure of an ideal $I$ of a ring $R$ is the set of elements $y\in R$ that satisfy an equation of integral dependence of the form $y^{n}+m_{1}y^{n-1}+\cdots+m_{n-1}y+m_{n}=0\text{ where }m_{i}\in I^{i},n\geq 1.$ The integral closure of $I$ is denoted $\overline{I}$.

###### Remark 2.4.

It is shown in [HS06] that the description of the integral closure is significantly simpler if $I$ is a monomial ideal. In this case one can give an alternate definition for the integral closure: (2.4) $\overline{I}=\left(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{x}}^{n{\mathbf{a}}}\in I^{n}\text{ for some }n\in{\mathbb{N}}\\}\right).$ We recall below how the integral closure of a monomial ideal $I$ can be described in terms of its Newton polyhedron. We also show that the minimal generators of $\overline{I}$ lie at bounded lattice distance from the Newton polytope $\operatorname{np}(I)$.
In the following we use the notion of lattice (or taxicab) distance between points ${\mathbf{a}},{\mathbf{b}}\in{\mathbb{R}}^{d}$, defined as $\operatorname{dist}({\mathbf{a}},{\mathbf{b}})=\sum_{i=1}^{d}|a_{i}-b_{i}|$.

###### Lemma 2.5.

Let $I$ be a monomial ideal in $K[x_{1},\ldots,x_{d}]$. Then

(1) $NP(I)\cap{\mathbb{N}}^{d}={\mathcal{L}}(\overline{I})$,
(2) $NP(\overline{I})=NP(I)$,
(3) (compare [HS06, Proposition 1.4.9]) if ${\mathbf{x}}^{\mathbf{a}}\in G(\overline{I})$, then there exists ${\mathbf{b}}\in\operatorname{np}(I)$ such that ${\mathbf{a}}\geq{\mathbf{b}}$ and $\sum_{i=1}^{d}(a_{i}-b_{i})\leq d-1.$

###### Proof.

Statement (1) is well-known; see for example [HS06, Proposition 1.4.6]. (2) follows from (1) by noticing that, since $NP(I)$ is a lattice polyhedron, we have $NP(I)=\operatorname{convex\ hull}(NP(I)\cap{\mathbb{N}}^{d})=\operatorname{convex\ hull}({\mathcal{L}}(\overline{I}))=NP(\overline{I}).$ (3) If ${\mathbf{a}}\in\operatorname{np}(I)$, the choice ${\mathbf{b}}={\mathbf{a}}$ works as claimed. We may thus assume ${\mathbf{a}}\not\in\operatorname{np}(I)$. By (2.3) there is ${\mathbf{y}}\in\operatorname{np}(I)$ such that the inequality ${\mathbf{a}}\geq{\mathbf{y}}$ is satisfied coordinatewise. Since ${\mathbf{a}}\in{\mathbb{N}}^{d}$, we have that ${\mathbf{a}}\geq\lceil{\mathbf{y}}\rceil:=(\lceil y_{1}\rceil,\ldots,\lceil y_{d}\rceil)$ and since $\lceil{\mathbf{y}}\rceil\geq{\mathbf{y}}$, we have $\lceil{\mathbf{y}}\rceil\in NP(I)$. As ${\mathbf{x}}^{\mathbf{a}}$ is a minimal generator of $\overline{I}$, it follows that ${\mathbf{a}}=\lceil{\mathbf{y}}\rceil$. Denote the unit hypercube in ${\mathbb{R}}^{d}$ by $H_{d}$; it has vertices $\sum_{i\in S\subseteq[d]}{\mathbf{e}}_{i}$. Since $x^{\mathbf{a}}$ is a minimal generator of $\overline{I}$, it follows that the only vertex of ${\mathbf{a}}-H_{d}$ that is in $NP(I)$ is ${\mathbf{a}}$.
Moreover, since the only lattice points in ${\mathbf{a}}-H_{d}$ are its vertices, the only lattice point in $({\mathbf{a}}-H_{d})\cap NP(I)$ is ${\mathbf{a}}$. Finally, we have ${\mathbf{y}}\in{\mathbf{a}}-H_{d}$ because ${\mathbf{a}}=\lceil{\mathbf{y}}\rceil$. Let ${\mathbf{z}}\in{\mathbb{N}}^{d}$ be any vertex of $\operatorname{np}(I)$. From the previous considerations, we have ${\mathbf{z}}\not\in{\mathbf{a}}-H_{d}$. Since $\operatorname{np}(I)$ is convex, the line segment $[{\mathbf{y}},{\mathbf{z}}]$ is contained in $\operatorname{np}(I)$. Let ${\mathbf{b}}$ be the intersection point of this line segment with the boundary of the polytope ${\mathbf{a}}-H_{d}$. Such an intersection point exists since ${\mathbf{y}}$ is inside and ${\mathbf{z}}$ is outside ${\mathbf{a}}-H_{d}$. Then ${\mathbf{b}}\in\operatorname{np}(I)$ and $\lceil{\mathbf{b}}\rceil$ is a vertex of ${\mathbf{a}}-H_{d}$ that belongs to $NP(I)$; thus we have $\lceil{\mathbf{b}}\rceil={\mathbf{a}}$. Furthermore, since ${\mathbf{b}}\neq{\mathbf{a}}-\mathbf{1}$, and ${\mathbf{b}}$ is on the boundary of ${\mathbf{a}}-H_{d}$, it follows that for some $1\leq i\leq d$ we have $b_{i}=a_{i}$. Hence we obtain $\sum_{i=1}^{d}(a_{i}-b_{i})\leq d-1$, as claimed. ∎ ## 3\. Real powers of monomial ideals We now discuss powers of monomial ideals with real exponents, termed real powers, and their relationship to integral closure. ###### Definition 3.1. Fix a real number $r\geq 0$. We define the $r$-th real power of a monomial ideal, $I$, to be $\overline{I^{r}}=\left(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{a}}\in r\cdot NP(I)\cap{\mathbb{N}}^{d}\\}\right).$ When $r\in{\mathbb{Q}}$ we will refer to $\overline{I^{r}}$ as the $r$-th rational power of $I$. 
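Definition 3.1 together with (2.3) gives a direct membership test: ${\mathbf{x}}^{\mathbf{a}}\in\overline{I^{r}}$ if and only if ${\mathbf{a}}/r$ dominates, coordinatewise, some convex combination of the generator exponents of $I$, which is a small linear feasibility problem. A sketch using scipy (function name ours; this is not one of the paper's Algorithms 1–4):

```python
import numpy as np
from scipy.optimize import linprog

def in_real_power(gens, a, r):
    """Decide whether x^a lies in the r-th real power of the monomial
    ideal generated by the monomials with exponent vectors `gens`,
    by testing a/r in NP(I) via (2.3) as a linear feasibility problem."""
    G = np.array(gens, dtype=float)  # one generator exponent vector per row
    m = G.shape[0]
    target = np.array(a, dtype=float) / r
    # Seek lambda >= 0 with sum(lambda) = 1 and sum_i lambda_i * g_i <= target.
    res = linprog(c=np.zeros(m), A_ub=G.T, b_ub=target,
                  A_eq=np.ones((1, m)), b_eq=[1.0],
                  bounds=[(0, None)] * m, method="highs")
    return res.status == 0  # status 0 means a feasible point was found

# I = (x^2, y^3): x*y^2 lies in the integral closure (r = 1),
# while x*y lies in the 1/2 power but not in the closure itself.
print(in_real_power([(2, 0), (0, 3)], (1, 2), 1.0))
print(in_real_power([(2, 0), (0, 3)], (1, 1), 0.5))
```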
Rational powers of monomial ideals have appeared previously in the literature under the following definition and notation, see [HS06, Definition 10.5.1]: the $r$-th rational power of an arbitrary ideal $I$ of a ring $R$ for $r=\frac{p}{q}$ with $p,q\in{\mathbb{N}},q\neq 0$ is the ideal (3.1) $I_{r}:=\\{y\in R\mid y^{q}\in\overline{I^{p}}\\},$ where $\overline{I^{p}}$ denotes the integral closure of the $p$-th ordinary power of $I$, $I^{p}$. In the following we show that these two definitions agree, i.e., $I_{r}=\overline{I^{r}}$ whenever $r\in{\mathbb{Q}}$, and furthermore that for natural exponents $r\in{\mathbb{N}}$ the $r$-th real power agrees with the integral closure of the $r$-th ordinary power of $I$, $I^{r}$. Our notation for real powers deviates from that in (3.1), which is more established in the literature, in favor of consistency with the notation for integral closure, since these notions agree for $r\in{\mathbb{N}}$ as shown in the following lemma. ###### Lemma 3.2. Let $I$ be a monomial ideal. Then 1. (1) If $r\in{\mathbb{N}}$, then the $r$-th real power of $I$ is equal to the integral closure of the $r$-th ordinary power $I^{r}$. In particular, the first rational power of $I$, $\overline{I^{1}}$, is the integral closure of $I$. Moreover, the $r$-th real power of $I$ is integrally closed. 2. (2) If $r\in{\mathbb{Q}}$ then the $r$-th real power of $I$ in 3.1 agrees with the $r$-th rational power of $I$, $I_{r}$, in (3.1). ###### Proof. (1) By definition, a monomial $\mathbf{x}^{\mathbf{a}}$ is an element of the $r$-th real power of $I$ if and only if $\mathbf{a}\in r\cdot NP(I)$. Noting that $r\cdot NP(I)=NP(I^{r})$ if $r\in{\mathbb{N}}$, the latter condition is equivalent to $\mathbf{a}\in NP(I^{r})$.
Now by 2.5 (1), we have $\mathbf{a}\in NP(I^{r})\cap{\mathbb{N}}^{d}$ if and only if $\mathbf{x}^{\mathbf{a}}$ is an element of the integral closure of $I^{r}$, and by 2.5 (2) this holds if and only if $\mathbf{x}^{\mathbf{a}}$ is an element of the integral closure of $\overline{I^{r}}$; in particular $\overline{I^{r}}$ is integrally closed. (2) Let $r=\frac{p}{q}$ with $p,q\in{\mathbb{N}},q\neq 0$ and let ${\mathbf{x}}^{\mathbf{a}}$ be a monomial. By (3.1), ${\mathbf{x}}^{\mathbf{a}}\in I_{r}$ holds if and only if we have ${\mathbf{x}}^{q{\mathbf{a}}}\in\overline{I^{p}}$, equivalently $q{\mathbf{a}}\in NP(\overline{I^{p}})=NP(I^{p})=pNP(I)$. In turn, the last assertion is equivalent to ${\mathbf{a}}\in rNP(I)\cap{\mathbb{N}}^{d}$ and by 3.1 this holds if and only if ${\mathbf{x}}^{\mathbf{a}}\in\overline{I^{r}}$. ∎ Using 2.5, for $r\in{\mathbb{Q}}_{+}$ we aim to confine the minimal generators of $\overline{I^{r}}$ to a bounded convex set, which will be obtained by a Minkowski sum. In order to define this convex set we introduce the unit simplex in $d$-dimensional space, $S_{d}=\\{{\mathbf{a}}=(a_{1},\ldots,a_{d})\in{\mathbb{R}}^{d}\mid a_{1}+\cdots+a_{d}\leq 1,a_{i}\geq 0\text{ for }1\leq i\leq d\\}.$ In the metric space ${\mathbb{R}}^{d}$ endowed with the lattice distance, the unit simplex is the non-negative portion of the ball of radius one centered at the origin. Denoting the origin in ${\mathbb{R}}^{d}$ by $\bf 0$, this observation yields an alternate description $S_{d}=\\{{\mathbf{a}}\in{\mathbb{R}}^{d}\mid{\mathbf{a}}\geq\bf 0,\ \operatorname{dist}({\mathbf{a}},\bf 0)\leq 1\\}.$ ###### Remark 3.3. 2.5 (3) can be reformulated using this notation as follows: If $I$ is a monomial ideal and ${\mathbf{x}}^{\mathbf{a}}\in G(\overline{I})$, then ${\mathbf{a}}\in\operatorname{np}(I)+(d-1)\cdot S_{d}$. The following technical result shall prove very useful for our purposes. ###### Lemma 3.4. Let $\mathbf{x}^{\mathbf{a}}$ be a minimal generator of $\overline{I^{r}}$, where $r=\frac{p}{q}$ is a positive rational number.
Then there exists a minimal generator $\mathbf{x}^{\mathbf{b}}$ of $\overline{I^{p}}$ such that $q{\mathbf{a}}-{\mathbf{b}}\in d(q-1)\cdot S_{d}$. ###### Proof. By 3.2 (2), we obtain ${\mathbf{x}}^{q{\mathbf{a}}}\in\overline{I^{p}}$. Thus there exists a minimal generator ${\mathbf{x}}^{\mathbf{b}}\in G(\overline{I^{p}})$ such that ${\mathbf{x}}^{\mathbf{b}}$ divides ${\mathbf{x}}^{q{\mathbf{a}}}$. This implies ${\mathbf{b}}\leq q{\mathbf{a}}$, that is, $b_{i}\leq qa_{i}$ for all $1\leq i\leq d$. Suppose that $q{\mathbf{a}}-{\mathbf{b}}\not\in d(q-1)\cdot S_{d}$. Then the inequality $\sum_{i=1}^{d}(qa_{i}-b_{i})\geq d(q-1)+1$ follows by integrality. Applying the pigeon-hole principle, we find that there must exist $i_{0}\in\\{1,\ldots,d\\}$ such that $qa_{i_{0}}-b_{i_{0}}\geq q$. Rewriting, we get that $q(a_{i_{0}}-1)\geq b_{i_{0}}$. We can now set ${\mathbf{a}}^{\prime}={\mathbf{a}}-{\mathbf{e}}_{i_{0}}$ and with this notation we find ${\mathbf{b}}\leq q(a_{1},\ldots,a_{i_{0}-1},a_{i_{0}}-1,a_{i_{0}+1},\ldots,a_{d})=q{\mathbf{a}}^{\prime}.$ Thus ${\mathbf{x}}^{\mathbf{b}}$ divides ${\mathbf{x}}^{q{\mathbf{a}}^{\prime}}$ and ${\mathbf{x}}^{q{\mathbf{a}}^{\prime}}$ is an element of $\overline{I^{p}}$. Applying 3.2 (2) again, this yields that ${\mathbf{x}}^{{\mathbf{a}}^{\prime}}\in\overline{I^{r}}$, which contradicts that ${\mathbf{x}}^{\mathbf{a}}$ is a minimal generator of $\overline{I^{r}}$. ∎ We are now able to describe a bounded convex set which contains the minimal generators of a rational power of a monomial ideal. The following result constitutes the basis for our Minkowski algorithm described in Algorithm 1. See also 4.2 for an illustration of the convex set $\mathcal{C}(I,r)$ defined below. ###### Theorem 3.5. Let $I$ be a monomial ideal in $K[x_{1},\ldots,x_{d}]$.
If $r=\frac{p}{q}$ is a positive rational number and ${\mathbf{x}}^{\mathbf{a}}\in G(\overline{I^{r}})$, then ${\mathbf{a}}$ is in the following bounded convex set (3.2) $\mathcal{C}(I,r)=r\cdot\operatorname{np}(I)+\left(d-\frac{1}{q}\right)\cdot S_{d}.$ Moreover, if ${\mathbf{a}}\in\mathcal{C}(I,r)\cap{\mathbb{N}}^{d}$, then ${\mathbf{x}}^{\mathbf{a}}\in\overline{I^{r}}$ and thus $\overline{I^{r}}=(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{a}}\in\mathcal{C}(I,r)\cap{\mathbb{N}}^{d}\\})$. ###### Proof. By 3.4, there exists a minimal generator of $\overline{I^{p}}$, $\mathbf{x}^{\mathbf{b}}$, such that $q{\mathbf{a}}-{\mathbf{b}}\in d(q-1)\cdot S_{d}$ and from 3.3 applied to the monomial ideal $I^{p}$ we have that ${\mathbf{b}}\in\operatorname{np}(I^{p})+\left(d-1\right)\cdot S_{d}=p\cdot\operatorname{np}(I)+\left(d-1\right)\cdot S_{d}.$ Combining the displayed statements, we obtain $\displaystyle q{\mathbf{a}}$ $\displaystyle\in p\cdot\operatorname{np}(I)+(d-1)\cdot S_{d}+d(q-1)\cdot S_{d}$ $\displaystyle\iff{\mathbf{a}}$ $\displaystyle\in\frac{p}{q}\cdot\operatorname{np}(I)+\frac{d-1}{q}\cdot S_{d}+\frac{d(q-1)}{q}\cdot S_{d}$ $\displaystyle\iff{\mathbf{a}}$ $\displaystyle\in r\cdot\operatorname{np}(I)+\left(d-\frac{1}{q}\right)\cdot S_{d}.$ Finally, since $S_{d}\subseteq{\mathbb{R}}_{+}^{d}$, we have that $\mathcal{C}(I,r)\subseteq r\cdot NP(I)$ by (2.1). Thus if ${\mathbf{a}}\in\mathcal{C}(I,r)\cap{\mathbb{N}}^{d}$, then ${\mathbf{a}}\in r\cdot NP(I)$ which yields ${\mathbf{x}}^{\mathbf{a}}\in\overline{I^{r}}$ according to 3.1. The identity $\overline{I^{r}}=(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{a}}\in\mathcal{C}(I,r)\cap{\mathbb{N}}^{d}\\})$ follows from the previous assertions. ∎ ###### Remark 3.6. While the previous theorem does not require the rational number $r=\frac{p}{q}$ to have $\gcd(p,q)=1$, in applications it is desirable to work with the reduced form of $r$ in order to obtain the smallest possible region $\mathcal{C}(I,r)$. ## 4\.
Algorithms for computing real powers Several algorithms are proposed below for computing real powers of monomial ideals. Our algorithms rely on several auxiliary computational tasks, which are highly non-trivial, but can currently be performed by computer algebra systems such as [GS] or [4ti2]. Specifically, we assume that independent routines are used to compute the Newton polyhedron or polytope for a given monomial ideal. For this reason, we take these convex bodies as input for our algorithms. For Algorithm 1 we additionally assume the existence of a routine that finds all the lattice points in a bounded convex polytope. This task is discussed in detail in [DLHTY04]. ### 4.1. Minkowski Algorithm Our first algorithm uses the ideas presented in 3.5 and illustrated in 4.2 to confine the generators of a real power $\overline{I^{r}}$ within a convex region of bounded lattice distance from the Newton polytope $\operatorname{np}(I)$. Input: the Newton polytope $\operatorname{np}(I)$ of an ideal $I$, a rational number $r=\frac{p}{q}\in{\mathbb{Q}}_{+}$ Output: a list of monomial generators for the ideal $\overline{I^{r}}$ /* Scaled Newton polytope of $I$ */ 3 scalednp := $r\cdot\operatorname{np}(I)$ /* Bounded convex set, as given by 3.5 */ 5 $d:=$ dimension of the polynomial ring containing $I$ 6 simplex := $d$-dimensional simplex with vertices at $\\{\mathbf{0},(d-\frac{1}{q})\mathbf{e}_{1},\ldots,(d-\frac{1}{q})\mathbf{e}_{d}\\}$ 7 $C$ := minkowskiSum(scalednp, simplex) /* Find all lattice points and their monomial counterparts */ 9 exponentVectors := latticePoints($C$) 10 Initialize generators := $\emptyset$ 11 for ${\mathbf{b}}$ in exponentVectors do 12 generators := append(${\mathbf{x}}^{\mathbf{b}}$, generators) /* Return the possibly non-minimal monomial generators */ 13 Return generators. Algorithm 1 Minkowski Sum algorithm ###### Proposition 4.1.
If $I$ is a monomial ideal of a $d$-dimensional polynomial ring and $r\in{\mathbb{Q}}_{+}$, then Algorithm 1 returns a not necessarily minimal set of monomial generators for $\overline{I^{r}}$. ###### Proof. This follows from the assertion $\overline{I^{r}}=(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{a}}\in\mathcal{C}(I,r)\cap{\mathbb{N}}^{d}\\})$ of 3.5. In Algorithm 1 the set $\mathcal{C}(I,r)$, termed $C$, is constructed according to equation (3.2). ∎ ###### Example 4.2. Consider the ideal $I=(xy^{5},x^{2}y^{2},x^{4}y)$ and the rational number $r=\frac{4}{3}$. Then one can determine that $\overline{I^{4/3}}=(x^{2}y^{5},x^{2}y^{6},x^{2}y^{7},x^{3}y^{3},x^{3}y^{4},x^{3}y^{5},x^{3}y^{6},x^{4}y^{2},x^{4}y^{3},\newline x^{4}y^{4},x^{4}y^{5},x^{5}y^{2},x^{5}y^{3},x^{6}y^{2})$ based on identifying the lattice points in the convex region $\mathcal{C}\left(I,\frac{4}{3}\right)=\frac{4}{3}\cdot\operatorname{np}(I)+\frac{5}{3}\cdot S_{2}$ given by 3.5. Note that $\overline{I^{4/3}}$ is minimally generated by $G(\overline{I^{4/3}})=\\{x^{2}y^{5},x^{3}y^{3},x^{4}y^{2}\\}$. Thus, Algorithm 1 does not in general identify the minimal generators, but rather a possibly non-minimal set of generators for $\overline{I^{r}}$. In Figure 1, the region $\mathcal{C}(I,\frac{4}{3})$ is shaded in darker blue, while the rest of the scaled polyhedron $\frac{4}{3}\cdot NP(I)$ is shaded in lighter blue. Figure 1. Computing $\overline{(xy^{5},x^{2}y^{2},x^{4}y)^{4/3}}$ using the Minkowski algorithm. ### 4.2. Hyperrectangle Algorithm The next algorithms depend on the notion of the hyperrectangle of a scaled Newton polyhedron, which is defined below. ###### Definition 4.3.
Given a monomial ideal $I$ of a $d$-dimensional polynomial ring and $r\in{\mathbb{R}}_{+}$, define the set of scaled vertices of $I$ with respect to $r$ to be $\mathcal{V}(I,r)=\left\\{\lceil r{\mathbf{a}}\rceil:=\left(\lceil ra_{1}\rceil,\dots,\lceil ra_{d}\rceil\right)\,|\,{\mathbf{x}}^{\mathbf{a}}\in G(I)\right\\}.$ For $\alpha=(\alpha_{1},\dots,\alpha_{d})\in\mathcal{V}(I,r)$ and $1\leq i\leq d$, define (4.1) $\min(\mathcal{V}(I,r),i)=\min\limits_{\alpha\in\mathcal{V}(I,r)}\alpha_{i}\quad\text{ and }\quad\max(\mathcal{V}(I,r),i)=\max\limits_{\alpha\in\mathcal{V}(I,r)}\alpha_{i}.$ Finally, set the _hyperrectangle of $r\cdot NP(I)$_ to be the following set $\displaystyle\operatorname{hype}(I,r)$ $\displaystyle=\\{{\mathbf{c}}=(c_{1},\dots,c_{d})\,|\,c_{i}\in[\min(\mathcal{V}(I,r),i),\,\max(\mathcal{V}(I,r),i)]\\}$ $\displaystyle=\prod_{i=1}^{d}[\min(\mathcal{V}(I,r),i),\,\max(\mathcal{V}(I,r),i)].$ We now see that the generators for the $r$-th real power of $I$ are among the set of lattice points in $\operatorname{hype}(I,r)$. ###### Lemma 4.4. Let $I$ be a monomial ideal and let $r\in{\mathbb{R}}_{+}$. Denote the set of lattice points in $\operatorname{hype}(I,r)$ by $\mathcal{S}(I,r)$. Then 1. (1) $\lceil r\cdot\operatorname{np}(I)\rceil:=\\{(\lceil p_{1}\rceil,\ldots,\lceil p_{d}\rceil)\mid{\mathbf{p}}\in r\cdot\operatorname{np}(I)\\}\subseteq\mathcal{S}(I,r)$ 2. (2) $\overline{I^{r}}$ is generated by a subset of the lattice points in $\operatorname{hype}(I,r)$, more precisely $\overline{I^{r}}=(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{a}}\in r\cdot NP(I)\cap\operatorname{hype}(I,r)\cap{\mathbb{N}}^{d}\\}).$ ###### Proof. (1) Every point ${\mathbf{p}}\in r\cdot\operatorname{np}(I)$ is a convex combination of the vertices of this polytope, which are in the set $V=\\{r{\mathbf{a}}\mid{\mathbf{x}}^{\mathbf{a}}\in G(I)\\}$. Since every coordinate $p_{i}$ of ${\mathbf{p}}$ is a convex combination of $i$-th coordinates of elements in $V$, we obtain that $p_{i}\in[\min_{{\mathbf{v}}\in V}v_{i},\max_{{\mathbf{v}}\in V}v_{i}]$ for $1\leq i\leq d$.
Thus $\lceil p_{i}\rceil\in[\min(\mathcal{V},i),\,\max(\mathcal{V},i)]$, which settles the claim. (2) Temporarily denote $J:=(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{a}}\in r\cdot NP(I)\cap\mathcal{S}(I,r)\\})$. Then $J\subseteq\overline{I^{r}}$ follows from 3.1. Now let ${\mathbf{a}}\in{\mathbb{N}}^{d}$ be such that ${\mathbf{x}}^{\mathbf{a}}\in\overline{I^{r}}$ and thus ${\mathbf{a}}\in r\cdot NP(I)\cap{\mathbb{N}}^{d}$. From (2.1) we know $r\cdot NP(I)=r\cdot\operatorname{np}(I)+r\cdot{\mathbb{R}}_{+}^{d}=r\cdot\operatorname{np}(I)+{\mathbb{R}}_{+}^{d},$ thus there exists ${\mathbf{b}}\in r\cdot\operatorname{np}(I)$ such that ${\mathbf{a}}\geq{\mathbf{b}}$. Since ${\mathbf{a}}\in{\mathbb{N}}^{d}$ it follows that ${\mathbf{a}}\geq\lceil{\mathbf{b}}\rceil=(\lceil b_{1}\rceil,\ldots,\lceil b_{d}\rceil)$, where $\lceil{\mathbf{b}}\rceil\in\lceil r\cdot\operatorname{np}(I)\rceil$. From part (1) it follows that $\lceil{\mathbf{b}}\rceil\in\mathcal{S}(I,r)$ and from $\lceil{\mathbf{b}}\rceil\geq{\mathbf{b}}$ we deduce $\lceil{\mathbf{b}}\rceil\in r\cdot\operatorname{np}(I)+{\mathbb{R}}_{+}^{d}$, hence $\lceil{\mathbf{b}}\rceil\in r\cdot NP(I)$. We have thus shown that $\lceil{\mathbf{b}}\rceil\in r\cdot NP(I)\cap\mathcal{S}(I,r)$, hence ${\mathbf{x}}^{\lceil{\mathbf{b}}\rceil}\in J$ and since ${\mathbf{a}}\geq\lceil{\mathbf{b}}\rceil$ we deduce ${\mathbf{x}}^{\mathbf{a}}\in J$. Thus the containment $\overline{I^{r}}\subseteq J$ has been established. ∎ Based on the previous result we produce the following algorithm. Input: generators $G(I)$ and the Newton polyhedron $NP(I)$ of an ideal $I$, a real number $r\in{\mathbb{R}}_{+}$ Output: a list of monomial generators for the ideal $\overline{I^{r}}$ 2 $d:=$ dimension of the polynomial ring containing $I$ 3 candidates := $\operatorname{hype}(I,r)\cap{\mathbb{N}}^{d}$ 4 Initialize generators := $\emptyset$ 5 for ${\mathbf{b}}$ in candidates do 6 if ${\mathbf{b}}$ in $r\cdot NP(I)$ then 7 generators := append(${\mathbf{x}}^{\mathbf{b}}$, generators) 10 Return generators. Algorithm 2 Hyperrectangle algorithm ###### Proposition 4.5. If $I$ is a monomial ideal and $r\in{\mathbb{R}}_{+}$, then Algorithm 2 returns a not necessarily minimal set of monomial generators for $\overline{I^{r}}$. ###### Proof.
This follows from part (2) of 4.4. ∎ ###### Example 4.6. Figure 2 illustrates the set of lattice points in the hyperrectangle $\operatorname{hype}(I,\frac{4}{3})$ for the ideal $I=(xy^{5},x^{2}y^{2},x^{4}y)$. These are marked in solid yellow, solid purple and hollow black. The set of generators returned by Algorithm 2 corresponds to the yellow and purple lattice points, while the minimal generators correspond to the purple points. Figure 2. Computing $\overline{(xy^{5},x^{2}y^{2},x^{4}y)^{4/3}}$ using the Hyperrectangle algorithm (left) and Improved Hyperrectangle algorithm (right) In general, for fixed $I$ and $r$, the two convex sets $\mathcal{C}(I,r)$ and $\operatorname{hype}(I,r)$ where Algorithm 1 and Algorithm 2, respectively, look for a set of generators for $\overline{I^{r}}$ are incomparable. For an illustration consider Figure 1 in 4.2, where the set $\mathcal{C}(I,r)$ is shaded in darker blue, and Figure 2, where the set $\operatorname{hype}(I,r)$ is marked by the orange boundary. Note that there are no containments between the sets $\mathcal{C}(I,r)$ and $\operatorname{hype}(I,r)$ in this example. In general one does not expect a containment between the corresponding sets of lattice points $\mathcal{C}(I,r)\cap{\mathbb{N}}^{d}$ and $\operatorname{hype}(I,r)\cap{\mathbb{N}}^{d}$ either. However, the cardinality of the former set is typically smaller than the latter. We address this shortcoming in the next Algorithm 3. The exponent vectors for minimal generators of $\overline{I^{r}}$ are in $\mathcal{C}(I,r)\cap\operatorname{hype}(I,r)\cap{\mathbb{N}}^{d}$.
However, as illustrated by Figure 1 and Figure 2, the exponents for the minimal generators of $\overline{I^{r}}$ can form a proper subset of $\mathcal{C}(I,r)\cap\operatorname{hype}(I,r)\cap{\mathbb{N}}^{d}$. The next variant improves on the hyperrectangle algorithm by reducing some redundancies in the traversal of lattice points. Using the while-loop on the final coordinate, the improved hyperrectangle algorithm stops looking for other generators after it finds a lattice point that is inside $r\cdot NP(I)$. Note that the improved hyperrectangle algorithm optimizes traversal of the set $\operatorname{hype}(I,r)\cap{\mathbb{N}}^{d}$ only on the last coordinate, so the benefits of using this algorithm over the hyperrectangle algorithm are more apparent in low dimensional rings. Input: the Newton polyhedron $NP(I)$ of an ideal $I$, a real number $r\in{\mathbb{R}}_{+}$ Output: a list of monomial generators for the ideal $\overline{I^{r}}$ 1 $d:=$ dimension of the polynomial ring containing $I$ 2 startPoints := $\\{{\mathbf{b}}\in\operatorname{hype}(I,r)\;|\;b_{d}=\min(\mathcal{V},d)\\}$ 3 Initialize generators := $\emptyset$ 4 for ${\mathbf{b}}$ in startPoints do 5 while ${\bf not}\ {\mathbf{b}}$ in $r\cdot NP(I)\ {\bf and}\ b_{d}\leq\max(\mathcal{V},d)$ do ${\mathbf{b}}:={\mathbf{b}}+(0,\dots,0,1)$ /* ``move up'' */ 8 if ${\mathbf{b}}$ in $r\cdot NP(I)$ then 9 generators := append(${\mathbf{x}}^{\mathbf{b}}$, generators) /* Return the possibly non-minimal monomial generators */ 11 Return generators. Algorithm 3 Improved Hyperrectangle algorithm ###### Example 4.7. Figure 2 illustrates the set of generators for the ideal $\overline{(xy^{5},x^{2}y^{2},x^{4}y)^{4/3}}$ returned by the improved hyperrectangle algorithm. The set of lattice points considered by this algorithm are marked in solid yellow and purple and hollow black. The set of generators returned by Algorithm 3 corresponds to the yellow and purple lattice points, while the minimal generators correspond to the purple lattice points only.
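As a concrete companion to the pseudocode, the following Python sketch performs the improved hyperrectangle scan in two variables, again for $I=(xy^{5},x^{2}y^{2},x^{4}y)$; the hard-coded half-space description of $NP(I)$ (derived by hand) and all function names are our own assumptions, not part of the paper.

```python
import math
from fractions import Fraction as F

# Hand-derived bounding inequalities of NP(I) for I = (x*y^5, x^2*y^2, x^4*y):
#   x >= 1, y >= 1, 3x + y >= 8, x + 2y >= 6.
CONSTRAINTS = [((1, 0), 1), ((0, 1), 1), ((3, 1), 8), ((1, 2), 6)]
GENS = [(1, 5), (2, 2), (4, 1)]  # exponent vectors of G(I)

def in_scaled_np(p, r):
    """Membership test for r * NP(I) via the scaled bounding inequalities."""
    return all(cx * p[0] + cy * p[1] >= r * rhs
               for (cx, cy), rhs in CONSTRAINTS)

def improved_hyperrectangle(r):
    """Scan each column of hype(I, r) upward, stopping at the first
    lattice point that lands inside r * NP(I) (the 'move up' loop)."""
    verts = [(math.ceil(r * a), math.ceil(r * b)) for a, b in GENS]
    xmin, xmax = min(v[0] for v in verts), max(v[0] for v in verts)
    ymin, ymax = min(v[1] for v in verts), max(v[1] for v in verts)
    gens = []
    for x in range(xmin, xmax + 1):
        y = ymin
        while not in_scaled_np((x, y), r) and y <= ymax:
            y += 1                      # "move up"
        if y <= ymax and in_scaled_np((x, y), r):
            gens.append((x, y))         # possibly non-minimal generator
    return gens
```

For $r=\frac{4}{3}$ the scan visits one column for each $x\in[2,6]$ and returns the exponent vectors $(2,5),(3,3),(4,2),(5,2),(6,2)$: a generating set in which only the first three are minimal, consistent with $G(\overline{I^{4/3}})$ from 4.2.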
Compared to Figure 2, fewer non-minimal generators are returned. ### 4.3. Staircase Algorithm The algorithms presented in the previous sections (Algorithm 1, Algorithm 2, and Algorithm 3) have one common disadvantage in that they return possibly non-minimal sets of generators for the real powers of monomial ideals. The next algorithm, termed the staircase algorithm, traverses lattice points near the boundary of the Newton polyhedron. The traversal is designed so that, in the 2-dimensional case, the minimal generators are found. A benefit of the following algorithm is to improve upon the runtime of Algorithm 1 and Algorithm 3. Algorithm 1 is slow in practice because of the lattice point identification step, while Algorithm 3 may be inefficient because a large number of operations could be performed to check if lattice points are in or outside $r\cdot NP(I)$. To alleviate this issue, the staircase algorithm optimizes the traversal of lattice points on the final two coordinates. The algorithm uses the notation in equation (4.1). Input: the Newton polyhedron $NP(I)$ of an ideal $I$, a real number $r\in{\mathbb{R}}_{+}$ Output: a list of monomial generators for the real power $\overline{I^{r}}$ 1 Initialize generators := $\emptyset$ 2 $d:=$ dimension of the polynomial ring containing $I$ 3 if $d=1$ then 4 Return $\\{x_{1}^{\min(\mathcal{V},1)}\\}$ 5 else 6 startPoints := $\big\\{{\mathbf{a}}\in\operatorname{hype}(I,r)\;|\;a_{d-1}=\min(\mathcal{V},d-1),\;a_{d}=\max(\mathcal{V},d)\big\\}$ 7 for ${\mathbf{a}}$ in startPoints do 8 ${\mathbf{b}}:={\mathbf{a}}$; ${\mathbf{c}}:={\mathbf{a}}$ 9 while ${\mathbf{b}}$ in $\operatorname{hype}(I,r)$ do 10 if ${\mathbf{b}}$ in $r\cdot NP(I)$ then 11 ${\mathbf{c}}:={\mathbf{b}}$; ${\mathbf{b}}:={\mathbf{b}}-(0,\dots,0,1)$ /* ``move down'' */ 13 else 14 if ${\mathbf{c}}$ in $r\cdot NP(I)$ then 15 generators := append(${\mathbf{x}}^{\mathbf{c}}$, generators) 16 ${\mathbf{c}}:={\mathbf{b}}$; ${\mathbf{b}}:={\mathbf{b}}+(0,\dots,1,0)$ /* ``move right'' */ 20 if ${\mathbf{c}}$ in $r\cdot NP(I)$ then 21 generators := append(${\mathbf{x}}^{\mathbf{c}}$, generators) 24 Return generators. Algorithm 4 Staircase algorithm ###### Example 4.8.
Figure 3 shows the set of lattice points considered by the staircase algorithm within $\operatorname{hype}(I,\frac{4}{3})$ for the ideal $I=(xy^{5},x^{2}y^{2},x^{4}y)$. While all the lattice points along the path of the algorithm are considered, only the minimal generators corresponding to the purple lattice points are returned. Figure 3. Computing $\overline{(xy^{5},x^{2}y^{2},x^{4}y)^{4/3}}$ using the Staircase algorithm We are now ready to show the validity of Algorithm 4. We utilize terminology that is consistent with the visual descriptions in Figure 3. We call the path of the algorithm $\mathcal{P}(I,r)$ the set of values taken by the variable ${\mathbf{b}}$ in Algorithm 4 for fixed inputs $I,r$. This set is the disjoint union of two subsets: the exterior path and the interior path defined below: $\displaystyle\mathcal{P}_{ext}(I,r)$ $\displaystyle=\mathcal{P}(I,r)\setminus r\cdot NP(I)$ $\displaystyle\mathcal{P}_{int}(I,r)$ $\displaystyle=\mathcal{P}(I,r)\cap r\cdot NP(I).$ ###### Proposition 4.9. If $I$ is a monomial ideal of a $d$-dimensional polynomial ring and $d\in\\{1,2\\}$, then Algorithm 4 returns a minimal set of monomial generators for $\overline{I^{r}}$. If $d\geq 3$ then Algorithm 4 returns a not necessarily minimal set of monomial generators for $\overline{I^{r}}$. ###### Proof. In the case $d=1$, every monomial ideal $J\subseteq K[x_{1}]$ is principal, minimally generated by $x_{1}^{m}$, where $m=\min\\{a\mid x_{1}^{a}\in J\\}$. Applying this to $J=\overline{I^{r}}$, in which case $m=\min(\mathcal{V},1)$, yields $G(\overline{I^{r}})=\\{x_{1}^{\min(\mathcal{V},1)}\\}$, i.e., the output of Algorithm 4 in step 4.
For the case $d=2$, first notice that because of the succession of down moves and right moves, the interior path $\mathcal{P}_{int}(I,r)$ is a disjoint union of vertical strips of the form $s_{a,b,c}:=\\{\gamma=(\gamma_{1},\gamma_{2})\mid\gamma_{1}=a,\,\gamma_{2}\in[b,c]\cap{\mathbb{N}}\\},$ where $b=\min\\{b^{\prime}\mid(a,b^{\prime})\in r\cdot NP(I)\\}$ by the ``move down'' steps of the algorithm; see Figure 3 for an illustration. Moreover, the interior path contains one lattice point for each value of the $x_{2}$-coordinate in $[\min(\mathcal{V},2),\max(\mathcal{V},2)]$, so that in the decomposition (4.2) $\mathcal{P}_{int}(I,r)=\bigcup_{i=\min(\mathcal{V},1)}^{e}s_{i,b_{i},c_{i}}$ we must have $c_{\min(\mathcal{V},1)}=\max(\mathcal{V},2)$ and $b_{i}=c_{i+1}+1$ for each $i\leq e-1$, where $e$ is the maximum $x_{1}$ coordinate of any point on the interior path. In particular, if $i<j$ then the inequality $b_{i}>c_{j}$ holds. Let ${\mathbf{x}}^{\mathbf{a}}\in G(\overline{I^{r}})$. By 4.4 it follows that $(a_{1},a_{2})\in\operatorname{hype}(I,r)$, so $a_{2}\in[\min(\mathcal{V},2),\max(\mathcal{V},2)]$, and by the preceding remarks there exists a unique point ${\mathbf{b}}\in\mathcal{P}_{int}(I,r)$ with $b_{2}=a_{2}$. We claim that ${\mathbf{a}}={\mathbf{b}}$. If not, then $a_{1}<b_{1}$ since ${\mathbf{x}}^{\mathbf{a}}$ is a minimal generator (i.e., ${\mathbf{a}}$ lies ``left'' of ${\mathbf{b}}$), and for this reason $a_{2}=b_{2}\leq c_{b_{1}}<b_{a_{1}}$ (i.e., ${\mathbf{a}}$ lies ``below'' the strip with $x_{1}$-coordinate $a_{1}$). Since $(a_{1},a_{2})\in r\cdot NP(I)$ and $b_{a_{1}}=\min\\{b^{\prime}\mid(a_{1},b^{\prime})\in r\cdot NP(I)\\}$, this yields a contradiction. We have shown that $G(\overline{I^{r}})\subseteq\\{{\mathbf{x}}^{\mathbf{p}}\mid{\mathbf{p}}\in\mathcal{P}_{int}(I,r)\\}.$ In the notation of (4.2), the algorithm returns the set $\\{x_{1}^{i}x_{2}^{b_{i}}\mid\min(\mathcal{V},1)\leq i\leq e\\}$. None of the monomials $x_{1}^{i}x_{2}^{j}$ with $j\in(b_{i},c_{i}]\cap{\mathbb{N}}$ is in $G(\overline{I^{r}})$, since each is divisible by $x_{1}^{i}x_{2}^{b_{i}}$.
Thus $G(\overline{I^{r}})$ is contained in the returned set. Moreover, the returned set consists of minimal generators since no two of its elements are comparable under the divisibility relation. In fact, this proof shows that the case $d=2$ of the algorithm gives a minimal set of generators for the ideal generated by the monomials with exponents in a given convex set (in our application to real powers, this convex set is $r\cdot NP(I)$). We use this to approach the higher dimensional cases. The case $d>2$ is derived from the case $d=2$ by the following analysis. By virtue of 4.4 we have the identity $\displaystyle\overline{I^{r}}$ $\displaystyle=\left(\\{{\mathbf{x}}^{\mathbf{a}}\mid{\mathbf{a}}\in\operatorname{hype}(I,r)\cap r\cdot NP(I)\cap{\mathbb{N}}^{d}\\}\right)$ $\displaystyle=\left(\sum_{\gamma\in\prod_{i=1}^{d-2}[\min(\mathcal{V},i),\,\max(\mathcal{V},i)]\cap{\mathbb{N}}^{d-2}}x_{1}^{\gamma_{1}}\cdots x_{d-2}^{\gamma_{d-2}}\cdot I_{\gamma,r}\right),$ where $I_{\gamma,r}:=(\\{x_{d-1}^{a}x_{d}^{b}\mid(a,b)\in{\mathbb{N}}^{2},\ (\gamma_{1},\ldots,\gamma_{d-2},a,b)\in r\cdot NP(I)\\})$ is an ideal in a 2-dimensional polynomial ring. According to the case $d=2$, steps 7–19 of the algorithm append the set $x_{1}^{\gamma_{1}}\cdots x_{d-2}^{\gamma_{d-2}}\cdot G(I_{\gamma,r})$ to the generators list. The union of these sets generates $\overline{I^{r}}$ by the above displayed identity. ∎ ###### Example 4.10. We give a visual illustration of using Algorithm 4 to compute the integral closure of $I=(y^{3},y^{2}z^{5},x^{2}y^{2},x^{2}z^{3})$, that is, $\overline{I^{1}}$, in Figure 4. In 3-dimensional space, the path of the algorithm is a disjoint union of paths, each corresponding to an ideal in a 2-dimensional ring as shown in the proof of 4.9. Figure 4. Computing generators for $\overline{(y^{3},y^{2}z^{5},x^{2}y^{2},x^{2}z^{3})}$ using the Staircase algorithm ## 5\.
Continuity and jumping numbers for exponentiation In this section we analyze how the real powers of monomial ideals vary with the exponent. To be precise, for a fixed monomial ideal $I$ we consider continuity properties for the exponentiation function of base $I$ $\exp:{\mathbb{R}}_{+}\to\mathcal{T},\quad\exp(r)=\overline{I^{r}}$ whose domain is ${\mathbb{R}}_{+}$ with its Euclidean topology and whose codomain is the set $\mathcal{T}=\\{\overline{I^{r}}\mid r\in{\mathbb{R}}_{+}\\}$ endowed with the discrete topology. We start with two elementary properties enjoyed by the family of real powers of the fixed ideal. ###### Lemma 5.1. If $I$ is a monomial ideal and $r,s\in{\mathbb{R}}_{+}$ then 1. (1) if $s\geq r\geq 0$, then the containment $\overline{I^{s}}\subseteq\overline{I^{r}}$ holds, 2. (2) $\overline{I^{s}}\cdot\overline{I^{r}}\subseteq\overline{I^{s+r}}$. ###### Proof. Assertion (1) is clear from 3.1. For assertion (2), note that the exponent vector of a product of monomials from $\overline{I^{s}}$ and $\overline{I^{r}}$ is a lattice point in the Minkowski sum $s\cdot NP(I)+r\cdot NP(I)=(s+r)\cdot NP(I).$ ∎ Part (2) of 5.1 shows that the real powers of a fixed monomial ideal form a graded family, although this terminology is more commonly used for families indexed by a discrete set. Property (1) of 5.1 allows us to define for each $r\in{\mathbb{R}}$ the monomial ideal $\overline{I^{>r}}=\bigcup_{s>r}\overline{I^{s}}.$ We show that this ideal can be understood as a limit in $\mathcal{T}$, meaning that a sequence of real powers of $I$ where the exponents approach a real number $r$ from the right must stabilize to $\overline{I^{>r}}$. ###### Proposition 5.2. Let $I$ be a monomial ideal and let $\\{t_{n}\\}_{n\in{\mathbb{N}}}$ be a non-increasing sequence of non-negative real numbers with $t_{n}>r$ for all $n$, where $r=\lim_{n\to\infty}t_{n}$. Then $\overline{I^{t_{n}}}=\overline{I^{>r}}$ for $n$ sufficiently large. ###### Proof.
A non-increasing sequence of non-negative numbers $\\{t_{n}\\}_{n\in{\mathbb{N}}}$ gives an ascending chain of ideals $\overline{I^{t_{0}}}\subseteq\overline{I^{t_{1}}}\subseteq\cdots\subseteq\overline{I^{r}}$ by 5.1 (1). Since the polynomial ring is Noetherian, any such chain must in fact stabilize, i.e., there exists $N\gg 0$ such that $\overline{I^{t_{n}}}=\overline{I^{t_{m}}}$ for $m,n\geq N$. We show that the stable value of this chain is $\overline{I^{>r}}$. Indeed, from the definition of $\overline{I^{>r}}$ one deduces the containment $\overline{I^{t_{N}}}=\bigcup_{n=0}^{\infty}\overline{I^{t_{n}}}\subseteq\bigcup_{s>r}\overline{I^{s}}=\overline{I^{>r}}.$ Conversely, for each $s>r$, there exists $n\geq N$ such that $s>t_{n}$, hence one has the containments $\overline{I^{s}}\subseteq\overline{I^{t_{N}}}=\overline{I^{t_{n}}}$ for all $s>r$ and consequently $\overline{I^{t_{N}}}\supseteq\overline{I^{>r}}$. ∎ We single out the real numbers $r$ for which the function $\exp:{\mathbb{R}}_{+}\to\mathcal{T},\exp(r)=\overline{I^{r}}$ is right discontinuous by terming them jumping numbers. ###### Definition 5.3. A jumping number for $I$ is a real number $r\in{\mathbb{R}}_{+}$ for which the real powers of $I$ are not equal to $\overline{I^{r}}$ when we approach $r$ from the right, i.e., $\overline{I^{r}}\neq\overline{I^{>r}}.$ ###### Example 5.4. 0 is a jumping number for any proper monomial ideal since $\overline{I^{0}}=R$ but $\overline{I^{r}}$ is a proper ideal for any $r>0$. ###### Example 5.5. For $I=(x^{4},x^{2}y,xy^{3})$ we have that $\frac{1}{3}$ is not a jumping number while $\frac{1}{2}$ is a jumping number.
This is because for small values of $\varepsilon>0$ there is an equality $\frac{1}{3}\cdot NP(I)\cap{\mathbb{N}}^{2}=\left(\frac{1}{3}+\varepsilon\right)\cdot NP(I)\cap{\mathbb{N}}^{2},$ while $\frac{1}{2}\cdot NP(I)\cap{\mathbb{N}}^{2}\neq\left(\frac{1}{2}+\varepsilon\right)\cdot NP(I)\cap{\mathbb{N}}^{2}$ because the point $(2,0)$ belongs to the leftmost set but not the rightmost. In fact, for the ideal $I$ in this example, we have $(x^{2},xy)=\overline{I^{1/3}}=\overline{I^{>1/3}}=\overline{I^{1/2}}\neq\overline{I^{>1/2}}=(x^{3},xy)$. Figure 5. Comparing $\frac{1}{3}\cdot NP(I)$ and $\frac{1}{2}\cdot NP(I)$ To confirm that right continuity is the distinguishing property to study, we show that the exponentiation function is always left continuous. Towards this end recall that any polyhedron admits a description as a finite intersection of half spaces. We term the linear inequalities describing a polyhedron as an intersection of half spaces its bounding inequalities. In particular, if $I$ is a monomial ideal in a polynomial ring of dimension $d$ then $NP(I)$ is a lattice polyhedron, hence there exist an $s\times d$ matrix $A$ with entries in ${\mathbb{N}}$ and a vector ${\mathbf{c}}\in{\mathbb{N}}^{s}$ such that (5.1) $NP(I)=\\{{\mathbf{x}}\in{\mathbb{R}}_{+}^{d}\mid A{\mathbf{x}}\geq{{\mathbf{c}}}\\}.$ In (5.1), if $A=[a_{ij}]$, we will further assume that we have $\gcd(a_{i1},\ldots,a_{id},c_{i})=1$ for each $1\leq i\leq s$. Moreover, scaling the Newton polyhedron amounts to scaling the constant term of the bounding inequalities, that is, $r\cdot NP(I)=\\{{\mathbf{x}}\in{\mathbb{R}}_{+}^{d}\mid A{\mathbf{x}}\geq r\cdot{{\mathbf{c}}}\\}.$ ###### Proposition 5.6. The function $\exp:{\mathbb{R}}_{+}\to\mathcal{T},\exp(r)=\overline{I^{r}}$ is left continuous. ###### Proof. Fix $r\in{\mathbb{R}}_{+}$ and consider the set $A_{r}={\mathbb{N}}^{d}\setminus r\cdot NP(I)$.
Observe that each point ${\mathbf{p}}\in A_{r}$ lies at a positive Euclidean distance from any point in $r\cdot NP(I)$. Indeed, in the notation of (5.1), writing $\mathbf{a}_{i}$ for the $i$-th row of $A$, we have $\mathbf{a}_{i}\cdot{\mathbf{p}}<rc_{i}$ for at least one $1\leq i\leq s$, and thus the distance from ${\mathbf{p}}$ to the hyperplane with equation $\mathbf{a}_{i}\cdot{\mathbf{x}}=rc_{i}$ is $d_{i}=(rc_{i}-\mathbf{a}_{i}\cdot{\mathbf{p}})/\sqrt{\mathbf{a}_{i}\cdot\mathbf{a}_{i}}>0$. In particular, since $\mathbf{a}_{i}\cdot{\mathbf{p}}\in{\mathbb{N}}$, it follows that $d_{i}\geq\delta_{i}:=(rc_{i}-{\rm prec}(rc_{i}))/\sqrt{\mathbf{a}_{i}\cdot\mathbf{a}_{i}}>0$, where ${\rm prec}(u)$ denotes the largest integer strictly smaller than $u$. Taking $\Delta=\min_{1\leq i\leq s}\delta_{i}$, we conclude that any ${\mathbf{p}}\in A_{r}$ lies at distance at least $\Delta>0$ from any point in $r\cdot NP(I)$. Since each $\delta_{i}$ is a left continuous function of $r$, it follows that there exists $\varepsilon_{0}>0$ such that for any $0<\varepsilon<\varepsilon_{0}$ each point ${\mathbf{p}}\in A_{r}$ also lies at a positive Euclidean distance from any point in $(r-\varepsilon)\cdot NP(I)$. Equivalently, we have $A_{r}\cap(r-\varepsilon)\cdot NP(I)=\emptyset$, which yields $A_{r}=A_{r-\varepsilon}$, and thus $r\cdot NP(I)\cap{\mathbb{N}}^{d}=(r-\varepsilon)\cdot NP(I)\cap{\mathbb{N}}^{d}$ and $\overline{I^{r}}=\overline{I^{r-\varepsilon}}$ for $0<\varepsilon<\varepsilon_{0}$. ∎ We now show that the real exponentiation function of a monomial ideal is a step function. ###### Corollary 5.7. Let $j<j^{\prime}$ be two consecutive jumping numbers for $I$. Then the function $\exp:{\mathbb{R}}_{+}\to\mathcal{T},\exp(r)=\overline{I^{r}}$ is constant on $(j,j^{\prime}]$ and $\overline{I^{j}}\neq\overline{I^{j^{\prime}}}$. ###### Proof. Since $j<j^{\prime}$ are consecutive jumping numbers, there is no jumping number in $(j,j^{\prime})$, so the exponentiation function is right continuous on $(j,j^{\prime})$ by 5.2 and left continuous on $(j,j^{\prime}]$ by 5.6; hence it is continuous on $(j,j^{\prime}]$.
Since $\mathcal{T}$ carries the discrete topology, this continuity is equivalent to the function being constant on $(j,j^{\prime}]$. On the other hand, the exponentiation function is right discontinuous at $j$ by the definition of jumping number, thus $\overline{I^{j}}$ is distinct from the common value of the exponentiation function on $(j,j^{\prime}]$, that is, $\overline{I^{j}}\neq\overline{I^{j^{\prime}}}$. ∎ Our next aim is to show that the jumping numbers of a monomial ideal are rational. Utilizing the notation in (5.1) and writing $\mathbf{a}_{i}$ for the $i$-th row of the matrix $A$ therein, the facets of the Newton polyhedron are supported on hyperplanes $H_{i}$ with equation $\mathbf{a}_{i}\cdot{\mathbf{x}}=c_{i}$. Each facet $F_{i}$ of $NP(I)$ is thus cut out by a system formed by one equation and several inequalities of the form (5.2) $F_{i}=\left\{{\mathbf{x}}\mid\mathbf{a}_{i}\cdot{\mathbf{x}}=c_{i},\ \mathbf{a}_{j}\cdot{\mathbf{x}}\geq c_{j}\text{ for }1\leq j\leq s,\ j\neq i\right\}.$ ###### Proposition 5.8. Given a monomial ideal $I$ with the facets $F_{i}$, $1\leq i\leq s$, of $NP(I)$ described as in (5.2) above, the following are equivalent: (1) $r\in{\mathbb{R}}_{+}$ is a jumping number for $I$; (2) for some $1\leq i\leq s$ such that $c_{i}\neq 0$ there exists a lattice point ${\mathbf{p}}\in r\cdot F_{i}\cap{\mathbb{N}}^{d}$; (3) for some $1\leq i\leq s$ such that $c_{i}\neq 0$ there exists an integer solution to the system of equations and inequalities that describes $r\cdot F_{i}$, namely (5.3) $\begin{cases}\mathbf{a}_{i}\cdot{\mathbf{x}}=rc_{i}\\ \mathbf{a}_{j}\cdot{\mathbf{x}}\geq rc_{j}\text{ for }1\leq j\leq s,\ j\neq i.\end{cases}$ ###### Proof. $(2)\Leftrightarrow(3)$ is clear. $(1)\Rightarrow(2)$ We show the contrapositive. Assume that $r\in{\mathbb{R}}_{+}$ is such that the union of the scaled facets $r\cdot F_{i}$ of $r\cdot NP(I)$ with $c_{i}\neq 0$ contains no lattice point.
Note that $r\cdot NP(I)\setminus(r+\varepsilon)\cdot NP(I)\subseteq\{{\mathbf{x}}\mid\mathbf{a}_{i}\cdot{\mathbf{x}}\in[rc_{i},(r+\varepsilon)c_{i})\text{ for some }i\text{ with }c_{i}\neq 0\}.$ Moreover, there is at least one $1\leq i\leq s$ with $c_{i}\neq 0$ since $I\neq R$. Taking $\varepsilon<\varepsilon_{0}:=\min_{c_{i}\neq 0}\{({\rm next}(rc_{i})-rc_{i})/c_{i}\}$, where ${\rm next}(rc_{i})$ is the smallest integer strictly larger than $rc_{i}$, one can ensure that $[rc_{i},(r+\varepsilon)c_{i})\cap{\mathbb{N}}\subseteq\{rc_{i}\}$ whenever $c_{i}\neq 0$. This means that any possible lattice point ${\mathbf{t}}$ in $r\cdot NP(I)\setminus(r+\varepsilon)\cdot NP(I)$ satisfies $\mathbf{a}_{i}\cdot{\mathbf{t}}=rc_{i}$ for some $i$ with $c_{i}\neq 0$. Thus ${\mathbf{t}}$ lies on a scaled facet $r\cdot F_{i}$, which contains no lattice points by assumption, so there are no lattice points in $r\cdot NP(I)\setminus(r+\varepsilon)\cdot NP(I)$. It follows that $\overline{I^{r}}=\overline{I^{r+\varepsilon}}$ for $0<\varepsilon<\varepsilon_{0}$, and thus $r$ is not a jumping number for $I$. $(3)\Rightarrow(1)$ Let $\mathbf{p}\in{\mathbb{N}}^{d}$ be an integer solution to (5.3). Since this implies ${\mathbf{p}}\in r\cdot F_{i}\subseteq r\cdot NP(I)$, we see that ${\mathbf{x}}^{\mathbf{p}}\in\overline{I^{r}}$. Since ${\mathbf{p}}$ attains equality in the first equation of (5.3), it satisfies $\mathbf{a}_{i}\cdot{\mathbf{p}}<(r+\varepsilon)c_{i}$ for any $\varepsilon>0$ (this uses $c_{i}\neq 0$). Thus we conclude ${\mathbf{p}}\not\in(r+\varepsilon)\cdot NP(I)$ and ${\mathbf{x}}^{\mathbf{p}}\notin\overline{I^{r+\varepsilon}}$ for all $\varepsilon>0$, and therefore ${\mathbf{x}}^{\mathbf{p}}\notin\overline{I^{>r}}$. Consequently $r$ is a jumping number. ∎ From the above characterization we obtain that the jumping numbers control the behavior of all real powers of a given monomial ideal and are all rational numbers. ###### Theorem 5.9. Let $I$ be a monomial ideal.
(1) All jumping numbers for $I$ are rational. (2) All distinct real powers of $I$ are given by rational exponents, i.e., for each $r\in{\mathbb{R}}_{+}$ there exists $r^{\prime}\in{\mathbb{Q}}$ such that $\overline{I^{r}}=\overline{I^{r^{\prime}}}$. Moreover, $r^{\prime}$ can be taken to be a jumping number for $I$. (3) If $r$ is a jumping number for $I$, then $nr$ is a jumping number for all $n\in{\mathbb{N}}$. (4) If $\mathbf{v}$ is a vertex of $NP(I)$, then for all $n\in{\mathbb{N}}$ the number $r_{n}=\frac{n}{\gcd(v_{1},\ldots,v_{d})}$ is a jumping number for $I$. (5) The set of jumping numbers can be written as a finite union of scaled monoids $\mathcal{J}=\bigcup_{c_{i}\neq 0}\frac{1}{c_{i}}S_{i}.$ Here each $S_{i}$ is a submonoid of the numerical semigroup generated by the entries of the $i$-th row of the matrix $A$ in (5.1), and the $c_{i}$ are the components of the vector ${\mathbf{c}}$ in (5.1). ###### Proof. (1) follows since 5.8 (3) yields an integer solution ${\mathbf{p}}$ to an equation of the form $\mathbf{a}_{i}\cdot{\mathbf{p}}=rc_{i}$, where $\mathbf{a}_{i}$ is a row of the matrix $A$ in (5.1) and $c_{i}\neq 0$. Since the entries of $\mathbf{a}_{i}$ and ${\mathbf{p}}$ as well as $c_{i}$ are natural numbers, this gives $r\in{\mathbb{Q}}$. (2) If $r\in{\mathbb{Q}}_{+}$ is a jumping number, set $r^{\prime}=r$. If $r$ is not a jumping number, let $r^{\prime}=\inf\{u\mid u>r\text{ and }u\text{ is a jumping number for }I\}.$ Notice first that $r^{\prime}$ is in fact the minimum of the set above; equivalently, $r^{\prime}\in{\mathbb{Q}}$ is a jumping number for $I$. Indeed, if this were not the case, then there would be a sequence of pairwise distinct jumping numbers $\{u_{n}\}_{n\in{\mathbb{N}}}$ converging to $r^{\prime}$ from the right. Since we have assumed $r^{\prime}$ is not a jumping number, the exponentiation function with base $I$ is right continuous at $r^{\prime}$, thus it must be the case that $\overline{I^{u_{n}}}=\overline{I^{r^{\prime}}}$ for $n\gg 0$.
This contradicts that the numbers $u_{n}$ are distinct jumping numbers, since distinct jumping numbers yield distinct real powers by 5.7. Another application of 5.7, together with the observation that $r$ is not a jumping number, yields that the exponentiation function is constant on $[r,r^{\prime}]$; thus we conclude there is an equality $\overline{I^{r}}=\overline{I^{r^{\prime}}}$. (3) follows since the condition on integer solutions to the system (5.3) in 5.8 is preserved upon scaling the system by any natural number. (4) Each vertex ${\mathbf{v}}$ of $NP(I)$ furnishes an integer solution to the system of (in)equalities (5.3) corresponding to each facet $F_{i}$ such that ${\mathbf{v}}\in F_{i}$. Scaling by $r_{n}$, we see that $r_{n}\cdot{\mathbf{v}}\in{\mathbb{N}}^{d}$ is an integer solution to the analogous system $\begin{cases}\mathbf{a}_{i}\cdot{\mathbf{x}}=r_{n}c_{i},\\ \mathbf{a}_{j}\cdot{\mathbf{x}}\geq r_{n}c_{j}\text{ for }1\leq j\leq s,\ j\neq i,\end{cases}$ so 5.8 yields that $r_{n}$ is a jumping number for $I$. For (5), for each $1\leq i\leq s$, let $a_{ij}\in{\mathbb{N}}$ be the entries of the $i$-th row of the matrix $A$ in (5.1) and $c_{i}$ the entries of ${\mathbf{c}}$. For each $i$ with $c_{i}\neq 0$, set $S_{i}=\left\{rc_{i}\mid\exists\, x_{1},\ldots,x_{d}\in{\mathbb{N}}\cup\{0\}\text{ s.t. }\sum_{j=1}^{d}a_{ij}x_{j}=rc_{i},\ \sum_{j=1}^{d}a_{lj}x_{j}\geq rc_{l}\text{ for }l\neq i\right\}.$ It is clear that $S_{i}\subseteq{\mathbb{N}}\cup\{0\}$. Moreover, $S_{i}$ is a monoid: $0\in S_{i}$, and $rc_{i},r^{\prime}c_{i}\in S_{i}$ imply $(r+r^{\prime})c_{i}\in S_{i}$ by summing the respective (in)equalities. The existence of a non-negative integer solution to the equation $\sum_{j=1}^{d}a_{ij}x_{j}=rc_{i}$ implies that $rc_{i}$ belongs to the numerical semigroup $M_{i}$ generated by the integers $a_{ij}$ for $1\leq j\leq d$, thus $S_{i}\subseteq M_{i}$.
With this notation, 5.8 can be rephrased to say that the set of jumping numbers for $I$ is $\mathcal{J}=\bigcup_{c_{i}\neq 0}\frac{1}{c_{i}}S_{i}.$ ∎ Regarding item (1) of 5.9, we observe that every non-negative rational number is a jumping number for some monomial ideal. Indeed, if $r=\frac{p}{q}$ with $p,q\in{\mathbb{N}}$, $q\neq 0$, then $r$ is a jumping number for $I=(x_{1}^{q})$. Item (2) of 5.9 yields a new description of the image of the exponentiation function with base $I$: $\mathcal{T}=\{\overline{I^{r}}\mid r\in{\mathbb{Q}}\text{ is a jumping number for }I\}.$ Moreover, the elements of the set $\mathcal{T}$ listed above are pairwise distinct by 5.7. We end with a worked-out example which illustrates the jumping numbers and real powers of a particular monomial ideal using the criterion in 5.8. ###### Example 5.10. The monomial ideal $I=(x^{9},x^{4}y^{3},x^{2}y^{5},y^{8})$ has the Newton polyhedron depicted in Figure 6, with vertices at $(9,0),(4,3),(2,5),(0,8)$. Figure 6.
The Newton polyhedron of $(x^{9},x^{4}y^{3},x^{2}y^{5},y^{8})$ We show that the jumping numbers of $I$ are the elements of the following set (5.4) $\mathcal{J}=\left\{0,\ \frac{i}{7},\ \frac{j}{16},\ \frac{k}{27}\ \middle|\ i\geq 2;\ j\in\{2,4,6\}\text{ or }j\geq 8;\ k\in\{3,6,9,11,12,14,15\}\text{ or }k\geq 17\right\}.$ The facets $F_{1},F_{2},F_{3},F_{4},F_{5}$ of the Newton polyhedron are shown in Figure 6 together with the corresponding bounding inequalities for $NP(I)$. Putting these inequalities in the form of (5.1) yields $\begin{bmatrix}1&0\\ 3&2\\ 1&1\\ 3&5\\ 0&1\end{bmatrix}\cdot\begin{bmatrix}p\\ q\end{bmatrix}\geq\begin{bmatrix}0\\ 16\\ 7\\ 27\\ 0\end{bmatrix}.$ 5.9 (5) says that the jumping numbers depend on the following three monoids: $S_{2}=\{16r\mid\exists p,q\in{\mathbb{N}}\cup\{0\}\text{ s.t. }p\geq 0,\ 3p+2q=16r,\ p+q\geq 7r,\ 3p+5q\geq 27r,\ q\geq 0\},$ $S_{3}=\{7r\mid\exists p,q\in{\mathbb{N}}\cup\{0\}\text{ s.t. }p\geq 0,\ 3p+2q\geq 16r,\ p+q=7r,\ 3p+5q\geq 27r,\ q\geq 0\},$ $S_{4}=\{27r\mid\exists p,q\in{\mathbb{N}}\cup\{0\}\text{ s.t. }p\geq 0,\ 3p+2q\geq 16r,\ p+q\geq 7r,\ 3p+5q=27r,\ q\geq 0\}.$ Denote ${\mathbb{N}}_{0}={\mathbb{N}}\cup\{0\}$. It turns out that $S_{2}=2{\mathbb{N}}_{0}+9{\mathbb{N}}_{0}$, $S_{3}=2{\mathbb{N}}_{0}+3{\mathbb{N}}_{0}$, and $S_{4}=3{\mathbb{N}}_{0}+11{\mathbb{N}}_{0}+19{\mathbb{N}}_{0}$.
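These identifications of $S_{2}$, $S_{3}$, and $S_{4}$ with numerical monoids can be double-checked by brute force. The sketch below (not from the paper) clears denominators in the defining (in)equalities — e.g. $m=16r\in S_{2}$ iff some integers $p,q\geq 0$ satisfy $3p+2q=m$, $16(p+q)\geq 7m$, and $16(3p+5q)\geq 27m$ — and compares each $S_{i}$, up to a cutoff, with the monoid generated by the claimed generators.

```python
def brute(check, limit=60):
    """Elements of a monoid up to `limit`, found by exhaustive search."""
    return {m for m in range(limit + 1) if check(m)}

# Membership tests with denominators cleared: m stands for 16r, 7r, 27r.
def in_S2(m):
    return any(3*p + 2*q == m and 16*(p + q) >= 7*m and 16*(3*p + 5*q) >= 27*m
               for p in range(m + 1) for q in range(m + 1))

def in_S3(m):
    return any(p + q == m and 7*(3*p + 2*q) >= 16*m and 7*(3*p + 5*q) >= 27*m
               for p in range(m + 1) for q in range(m + 1))

def in_S4(m):
    return any(3*p + 5*q == m and 27*(3*p + 2*q) >= 16*m and 27*(p + q) >= 7*m
               for p in range(m + 1) for q in range(m + 1))

def monoid(gens, limit=60):
    """Submonoid of N_0 generated by `gens`, truncated at `limit`."""
    reach = {0}
    for m in range(1, limit + 1):
        if any(m >= g and (m - g) in reach for g in gens):
            reach.add(m)
    return reach

assert brute(in_S2) == monoid([2, 9])
assert brute(in_S3) == monoid([2, 3])
assert brute(in_S4) == monoid([3, 11, 19])
```

A truncated comparison like this of course proves nothing beyond the cutoff; it is only a sanity check on the stated generators.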
The set of jumping numbers is $\mathcal{J}=\frac{1}{16}S_{2}\cup\frac{1}{7}S_{3}\cup\frac{1}{27}S_{4}.$ Writing the elements of each semigroup $S_{2},S_{3},S_{4}$ explicitly yields the set displayed in equation (5.4) above. Below we list the rational powers of $I$ for exponents $r\in(0,1]$. Each change in the generating set from one interval to the next is caused by lattice points appearing on one of the scaled facets $r\cdot F_{i}$ of the Newton polyhedron in Figure 6, cf. 5.8 (2). $\overline{I^{r}}=\begin{cases}(y,x)&r\in(0,\frac{1}{9}]\\ (y,x^{2})&r\in(\frac{1}{9},\frac{1}{8}]\\ (y^{2},xy,x^{2})&r\in(\frac{1}{8},\frac{2}{9}]\\ (y^{2},xy,x^{3})&r\in(\frac{2}{9},\frac{2}{8}]\\ (y^{3},xy,x^{3})&r\in(\frac{2}{8},\frac{2}{7}]\\ (y^{3},xy^{2},x^{2}y,x^{3})&r\in(\frac{2}{7},\frac{3}{9}]\\ (y^{3},xy^{2},x^{2}y,x^{4})&r\in(\frac{3}{9},\frac{3}{8}]\\ (y^{4},xy^{2},x^{2}y,x^{4})&r\in(\frac{3}{8},\frac{11}{27}]\\ (y^{4},xy^{2},x^{3}y,x^{4})&r\in(\frac{11}{27},\frac{3}{7}]\\ (y^{4},xy^{3},x^{2}y^{2},x^{3}y,x^{4})&r\in(\frac{3}{7},\frac{4}{9}]\\ (y^{4},xy^{3},x^{2}y^{2},x^{3}y,x^{5})&r\in(\frac{4}{9},\frac{4}{8}]\\ (y^{5},xy^{3},x^{2}y^{2},x^{3}y,x^{5})&r\in(\frac{4}{8},\frac{14}{27}]\\ (y^{5},xy^{3},x^{2}y^{2},x^{4}y,x^{5})&r\in(\frac{14}{27},\frac{5}{9}]\\ (y^{5},xy^{3},x^{2}y^{2},x^{4}y,x^{6})&r\in(\frac{5}{9},\frac{9}{16}]\\ (y^{5},xy^{4},x^{2}y^{2},x^{4}y,x^{6})&r\in(\frac{9}{16},\frac{4}{7}]\\ (y^{5},xy^{4},x^{2}y^{3},x^{3}y^{2},x^{4}y,x^{6})&r\in(\frac{4}{7},\frac{5}{8}]\\ (y^{6},xy^{4},x^{2}y^{3},x^{3}y^{2},x^{4}y,x^{6})&r\in(\frac{5}{8},\frac{17}{27}]\\ (y^{6},xy^{4},x^{2}y^{3},x^{3}y^{2},x^{5}y,x^{6})&r\in(\frac{17}{27},\frac{6}{9}]\\ (y^{6},xy^{4},x^{2}y^{3},x^{3}y^{2},x^{5}y,x^{7})&r\in(\frac{6}{9},\frac{11}{16}]\\ (y^{6},xy^{5},x^{2}y^{3},x^{3}y^{2},x^{5}y,x^{7})&r\in(\frac{11}{16},\frac{19}{27}]\\ (y^{6},xy^{5},x^{2}y^{3},x^{4}y^{2},x^{5}y,x^{7})&r\in(\frac{19}{27},\frac{5}{7}]\\ (y^{6},xy^{5},x^{2}y^{4},x^{3}y^{3},x^{4}y^{2},x^{5}y,x^{7})&r\in(\frac{5}{7},\frac{20}{27}]\\ (y^{6},xy^{5},x^{2}y^{4},x^{3}y^{3},x^{4}y^{2},x^{6}y,x^{7})&r\in(\frac{20}{27},\frac{3}{4}]\\ (y^{7},xy^{5},x^{2}y^{4},x^{3}y^{3},x^{4}y^{2},x^{6}y,x^{7})&r\in(\frac{3}{4},\frac{7}{9}]\\ (y^{7},xy^{5},x^{2}y^{4},x^{3}y^{3},x^{4}y^{2},x^{6}y,x^{8})&r\in(\frac{7}{9},\frac{13}{16}]\\ (y^{7},xy^{6},x^{2}y^{4},x^{3}y^{3},x^{4}y^{2},x^{6}y,x^{8})&r\in(\frac{13}{16},\frac{22}{27}]\\ (y^{7},xy^{6},x^{2}y^{4},x^{3}y^{3},x^{5}y^{2},x^{6}y,x^{8})&r\in(\frac{22}{27},\frac{23}{27}]\\ (y^{7},xy^{6},x^{2}y^{4},x^{3}y^{3},x^{5}y^{2},x^{7}y,x^{8})&r\in(\frac{23}{27},\frac{6}{7}]\\ (y^{7},xy^{6},x^{2}y^{5},x^{3}y^{4},x^{4}y^{3},x^{5}y^{2},x^{7}y,x^{8})&r\in(\frac{6}{7},\frac{7}{8}]\\
(y^{8},xy^{6},x^{2}y^{5},x^{3}y^{4},x^{4}y^{3},x^{5}y^{2},x^{7}y,x^{8})&r\in(\frac{7}{8},\frac{8}{9}]\\ (y^{8},xy^{6},x^{2}y^{5},x^{3}y^{4},x^{4}y^{3},x^{5}y^{2},x^{7}y,x^{9})&r\in(\frac{8}{9},\frac{25}{27}]\\ (y^{8},xy^{6},x^{2}y^{5},x^{3}y^{4},x^{4}y^{3},x^{6}y^{2},x^{7}y,x^{9})&r\in(\frac{25}{27},\frac{15}{16}]\\ (y^{8},xy^{7},x^{2}y^{5},x^{3}y^{4},x^{4}y^{3},x^{6}y^{2},x^{7}y,x^{9})&r\in(\frac{15}{16},\frac{26}{27}]\\ (y^{8},xy^{7},x^{2}y^{5},x^{3}y^{4},x^{4}y^{3},x^{6}y^{2},x^{8}y,x^{9})&r\in(\frac{26}{27},1]\end{cases}$ Acknowledgement. We warmly thank the anonymous referees for providing careful and detailed comments which greatly improved the paper, and in particular for their contributions to 2.5 and 5.8. This work was completed in the framework of the 2020 Polymath Jr. program https://geometrynyc.wixsite.com/polymathreu. The second author was supported by the NSF RTG grant in algebra and combinatorics at the University of Minnesota DMS–1745638. The fourth author was supported by NSF DMS–2101225.
# Non-embeddable II1 factors resembling the hyperfinite II1 factor Isaac Goldbring Department of Mathematics, University of California, Irvine, 340 Rowland Hall (Bldg.# 400), Irvine, CA 92697-3875 http://www.math.uci.edu/~isaac ###### Abstract. We consider various statements that characterize the hyperfinite II1 factor amongst embeddable II1 factors in the non-embeddable situation. In particular, we show that “generically” a II1 factor has the Jung property (which states that every embedding of itself into its ultrapower is unitarily conjugate to the diagonal embedding) if and only if it is self-tracially stable (which says that every such embedding has an approximate lifting). We prove that the enforceable factor, should it exist, has these equivalent properties. Our techniques are model-theoretic in nature. We also show how these techniques can be used to give new proofs that the hyperfinite II1 factor has the aforementioned properties. The author was partially supported by NSF CAREER grant DMS-1349399. ## 1. Introduction In [10], Murray and von Neumann proved that there exists a unique (up to isomorphism) separable hyperfinite II1 factor. This unique factor, henceforth denoted by $\mathcal{R}$, plays a crucial role in the theory of finite von Neumann algebras. By Connes’ seminal work in [4], we know that $\mathcal{R}$ is also the unique separable II1 factor possessing any of the following properties: injectivity, semidiscreteness, and amenability. In this article, our focus will be on some statements that characterize $\mathcal{R}$ amongst the class of separable embeddable II1 factors, where a separable tracial von Neumann algebra is embeddable if it embeds into some (equivalently, any) ultrapower $\mathcal{R}^{\mathcal{U}}$ of $\mathcal{R}$ with $\mathcal{U}$ a nonprincipal ultrafilter on $\mathbb{N}$.
For example, as proven by Jung in [9], any embedding of $\mathcal{R}$ into $\mathcal{R}^{\mathcal{U}}$ is unitarily conjugate to the diagonal embedding. In [3], the authors say that a separable II1 factor $M$ has the Jung property if and only if any embedding of $M$ into $M^{\mathcal{U}}$ is unitarily conjugate to the diagonal embedding. In [2] (see also [3, Theorem 3.1.3]), the authors show that $\mathcal{R}$ is the unique separable embeddable II1 factor with the Jung property. In [1], the author defines a separable tracial von Neumann algebra $M$ to be self-tracially stable if any embedding of $M$ into $M^{\mathcal{U}}$ has an “approximate lifting.” (See the next section for a precise definition.) It is easy to see that any II1 factor with the Jung property is self-tracially stable (see [3, Proposition 3.3.14] for a proof). It follows that $\mathcal{R}$ is self-tracially stable. The fact that $\mathcal{R}$ is the unique separable embeddable self-tracially stable II1 factor is the content of [2, Theorem 2.4]. Recall that the Connes Embedding Problem (CEP) asks whether or not every separable tracial von Neumann algebra is embeddable. (Separability is not an issue here if one allows ultrafilters over larger index sets.) As announced in the recent landmark paper [8], the Connes Embedding Problem has a negative answer. It thus makes sense to ask whether or not there are separable non-embeddable II1 factors that have the Jung property or are self-tracially stable. (See [3, Question 3.3.12] for an explicit mention of the former question. We should also mention that in [3, Theorem 3.2.5] it was shown that $\mathcal{R}$ is the unique separable embeddable II1 factor with the generalized Jung property, meaning that any two embeddings of itself into its ultrapower are conjugate by some (not necessarily inner) automorphism of the ultrapower; [3, Theorem 3.3.1] shows that there are non-embeddable factors with this property.)
Our first main result is that “generically” these are the same question. To explain this, recall that a tracial von Neumann algebra $M$ is existentially closed (or e.c. for short) if: whenever $M$ is contained in the tracial von Neumann algebra $N$, then there is an embedding of $N$ into $M^{\mathcal{U}}$ that restricts to the diagonal embedding of $M$. (Again, this definition makes sense for not necessarily separable factors using ultrafilters on larger index sets. Alternatively, one can give a purely syntactic, model-theoretic definition which makes it clear that density character is irrelevant.) The notion of e.c. tracial von Neumann algebras comes from model theory and has proven useful in applications of model theory to operator algebras. Much is known about the class of e.c. tracial von Neumann algebras: they must be McDuff II1 factors with only approximately inner automorphisms (see [5] for more on this class). The generic separable tracial von Neumann algebra is e.c. in the sense that, in a natural Polish topology on the space of separable tracial von Neumann algebras, the e.c. algebras form a comeager set. The notion of e.c. factor can be relativized to the class of embeddable factors, in which case $\mathcal{R}$ is an e.c. embeddable factor. (This follows immediately from the fact that $\mathcal{R}$ has the Jung property, but we will discuss another proof at the end of this paper.) We show the following: ###### Theorem 1. If $M$ is a separable e.c. factor, then any embedding of $M$ into itself is approximately inner. From this theorem, it follows fairly quickly that if $M$ is separable, e.c., and self-tracially stable, then $M$ has the Jung property; see Proposition 6 below. We next turn to two model-theoretic characterizations of $\mathcal{R}$ amongst embeddable factors. We call an e.c. (embeddable) factor enforceable if it embeds into all other e.c.
(embeddable) factors. (This is not the original definition given in [7], but it is equivalent by the results in Section 6 of that paper.) Should the enforceable (embeddable) factor exist, it is automatically unique. In [7, Theorems 5.1 and 5.2], it is shown that $\mathcal{R}$ is the enforceable embeddable factor and that the CEP is equivalent to $\mathcal{R}$ being the enforceable factor. Due to the negative solution of the CEP, we see that $\mathcal{R}$ is not the enforceable factor. This does not, however, preclude the existence of the enforceable factor. We view the problem of the existence of the enforceable factor to be one of the central problems in the model theory of II1 factors, for if the enforceable factor exists, then it is a canonical object deserving of further study, whereas any proof that it does not exist yields a stronger refutation of CEP. In this paper, we prove: ###### Theorem 2. If the enforceable factor exists, then it has the Jung property. It is worth noting that by [6, Theorem 2.14], if $M$ is a II1 factor with the Jung property and $M$ is elementarily equivalent to $M\otimes M$, then $M\cong\mathcal{R}$. (Here, two II1 factors are elementarily equivalent if they have the same first-order theory; by the Keisler-Shelah theorem, we can equivalently say that they have isomorphic ultrapowers. The reference [6] actually only deals with strongly self-absorbing $\mathrm{C}^{*}$-algebras, but the proof there implies the result that we mention above.) Consequently, if the enforceable factor $\mathcal{E}$ exists, then we have that $\mathcal{E}$ is not elementarily equivalent to $\mathcal{E}\otimes\mathcal{E}$. (By the work in [7, Remark 5.8], the failure of CEP already implies that $\mathcal{E}$, should it exist, could not be isomorphic to $\mathcal{E}\otimes\mathcal{E}$.) Our final result concerns the finite forcing companion. A finitely generic factor is a particular kind of e.c. II1 factor with the generalized Jung property.
(See [7, Definition 5.3] for a precise definition. Alternatively, [7, Proposition 3.10] presents a more workable version of the notion.) These factors always exist and any two of them are elementarily equivalent; the common first-order theory of the finitely generic factors is known as the finite forcing companion, denoted $T^{f}$. In the embeddable situation, $\mathcal{R}$ is a finitely generic embeddable factor (see [7, Corollary 3.14]), whence the finite forcing companion is simply the complete theory of $\mathcal{R}$. Since $\mathcal{R}$ embeds in any model of its theory (due to the axiomatizability of being McDuff), the fact that $\mathcal{R}$ has the generalized Jung property implies that it is the prime model of its theory. (In general, the prime model of a theory is a model which elementarily embeds into any other model of the theory; should it exist, it is automatically unique.) It is currently unknown whether or not $T^{f}$ has a prime model. However, if it does, then it is a non-embeddable factor with the Jung property: ###### Theorem 3. If $T^{f}$ has a prime model $M$, then $M$ has the Jung property. In the final section, we revisit the embeddable situation and give model-theoretic proofs, possibly of independent interest, that $\mathcal{R}$ has the Jung property and is self-tracially stable. In order to keep this note fairly short, we will freely use model-theoretic language when necessary. The reader is advised to consult [3, Section 2] for a more thorough introduction. We would like to thank Scott Atkinson and Srivatsav Kunnawalkam Elayavalli for helpful discussions in preparing this paper. ## 2. Proofs of theorems We first prove Theorem 1. In fact, the following yields an even stronger result: ###### Theorem 4. Suppose that $M$ is an e.c. factor with subalgebra $N$. Then any embedding of $N$ in $M$ is approximately unitarily conjugate to the inclusion map. ###### Proof. Let $f:N\to M$ be an embedding.
Let $P$ be the HNN extension obtained from $M$ and $f$; we note that $P$ is finite and there is a trace on $P$ such that the inclusion $M\subseteq P$ is trace-preserving (see [11, Corollary 4.2]). In particular, there is a unitary $u\in P$ such that $uf(x)u^{*}=x$ for all $x\in N$. Since $M$ is e.c., this implies that for any finite $F\subseteq N$ and $\epsilon>0$, there is a unitary $v\in M$ such that $\|vf(x)v^{*}-x\|_{2}<\epsilon$ for all $x\in F$, as desired. ∎ ###### Remark 5 (For the model theorists). Theorem 4 implies that, in any e.c. II1 factor, the quantifier-free type of a tuple implies its complete type. It would be interesting to see if one could leverage this fact to gain any further insight into the class of e.c. factors. In connection with Theorem 1, we say that a II1 factor $M$ has the weak Jung property if every endomorphism of $M$ is approximately inner. ###### Proposition 6. A separable II1 factor has the Jung property if and only if it has the weak Jung property and is self-tracially stable. ###### Proof. First suppose that $M$ has the Jung property and that $f:M\to M$ is an endomorphism. By viewing $f$ as taking values in $M^{\mathcal{U}}$, there is a unitary $u\in M^{\mathcal{U}}$ such that $uf(x)u^{*}=x$ for all $x\in M$. In particular, given any finite $F\subseteq M$ and $\epsilon>0$, there is a unitary $v\in M$ such that $\|vf(x)v^{*}-x\|_{2}<\epsilon$ for all $x\in F$, whence $f$ is approximately inner. As mentioned in the introduction, any II1 factor with the Jung property is self-tracially stable. The converse is clear. ∎ Given the fact that $\mathcal{R}$ is the unique separable embeddable factor with either the Jung property or the property of being self-tracially stable, the following question seems natural. (On the other hand, there may be more than one non-embeddable factor with the generalized Jung property; see [3, Corollary 3.3.5].) ###### Question 7. Must there be at most one non-embeddable factor with the Jung property?
That is self-tracially stable? We now move on to Theorem 2, which will follow from an alternative characterization of the enforceable factor. First, recall from [1] that if $\mathfrak{C}$ is a class of tracial von Neumann algebras, then a tracial von Neumann algebra $M$ is said to be $\mathfrak{C}$-tracially stable if whenever $f:M\to\prod_{\mathcal{U}}N_{i}$ is an embedding with $\mathcal{U}$ a nonprincipal ultrafilter on $\mathbb{N}$ and each $N_{i}$ belongs to $\mathfrak{C}$, then there are *-homomorphisms $f_{i}:M\to N_{i}$ such that $f(x)=(f_{i}(x))_{\mathcal{U}}$ for all $x\in M$. (We refer to the sequence $(f_{i})_{i\in\mathbb{N}}$ as an “approximate lifting” of $f$.) In particular, $M$ is self-tracially stable if and only if $M$ is $\{M\}$-tracially stable. We let $\mathfrak{E}$ denote the class of e.c. factors. The following theorem immediately implies Theorem 2: ###### Theorem 8. The II1 factor $M$ is the enforceable factor if and only if it is e.c. and $\mathfrak{E}$-tracially stable. In order to prove Theorem 8, we need to recall a few model-theoretic facts from [7]. First, if $M$ is an e.c. factor and $a$ is a tuple from $M$, the existential type of $a$ in $M$, denoted $\operatorname{etp}^{M}(a)$, is the collection of existential formulae $\varphi(x)$ such that $\varphi^{M}(a)=0$. Such an existential type is called isolated if, given any $\epsilon>0$, there is an existential formula $\varphi(x)$ and $\delta>0$ such that $\varphi^{M}(a)=0$ and whenever $N$ is an e.c. factor with a tuple $b\in N$ such that $\varphi^{N}(b)<\delta$, then there is $c\in N$ such that $\|b-c\|_{2}<\epsilon$ and for which $\operatorname{etp}^{M}(a)=\operatorname{etp}^{N}(c)$. The e.c. factor $M$ is called e-atomic if the existential types of all finite tuples are isolated. It is shown in [7, Section 6] that an e.c. factor $M$ is e-atomic if and only if it is the enforceable factor (in which case it is unique). ###### Proof of Theorem 8.
First suppose that $M$ is the enforceable factor. We must show that $M$ is $\mathfrak{E}$-tracially stable. Towards this end, fix an embedding $f:M\to\prod_{\mathcal{U}}N_{i}$ with each $N_{i}$ e.c. Since $M$ is e.c., it is McDuff, whence singly generated. Fix a generator $a$ of $M$ and write $f(a)=(a_{i})_{\mathcal{U}}$. Fix $\epsilon>0$ and let $\varphi(x)$ and $\delta>0$ be as in the definition of isolated existential type for $a$ and $\epsilon$. Since $M$ is e.c., $f$ is an existential embedding, meaning that $\varphi^{\prod_{\mathcal{U}}N_{i}}(f(a))=0$ and thus $\varphi^{N_{i}}(a_{i})<\delta$ for $\mathcal{U}$-almost all $i$. For these $i$, there is $b_{i}\in N_{i}$ such that $\|a_{i}-b_{i}\|_{2}<\epsilon$ and for which the map $a\mapsto b_{i}$ extends to an isomorphism between $M$ and the subalgebra of $N_{i}$ generated by $b_{i}$. Thus, $f$ has an approximate lifting. Conversely, suppose that $M$ is e.c. and $\mathfrak{E}$-tracially stable. It follows that $M$ embeds into every e.c. II1 factor, whence $M$ is enforceable. ∎ Finally, we prove Theorem 3. ###### Proof of Theorem 3. Let $M$ be the prime model of $T^{f}$. To show that $M$ has the Jung property, we show that $M$ has the weak Jung property and is self-tracially stable. Fix a finitely generic factor $N$. Since $M$ is the prime model of $T^{f}$, we have that $M$ embeds elementarily in $N$. Thus, by [7, Corollary 3.12], $M$ itself is finitely generic. In particular, $M$ is e.c. and thus has the weak Jung property. It remains to show that $M$ is self-tracially stable. The argument for showing this is similar to that showing that the enforceable factor is self-tracially stable. Indeed, fix an embedding $f:M\to M^{\mathcal{U}}$. This time, given any $a\in M$, the complete type of $a$ in $M$, denoted $\operatorname{tp}^{M}(a)$, is isolated (this follows from the fact that prime models of theories are atomic models). Fix a generator $a$ of $M$ and write $f(a)=(a_{i})_{\mathcal{U}}$.
Given $\epsilon>0$, there is some formula $\varphi(x)$ and $\delta>0$ such that $\varphi^{M}(a)=0$ and such that, given any model $N$ of $T^{f}$ and any $b\in N$ with $\varphi^{N}(b)<\delta$, there is $c\in N$ such that $\operatorname{tp}^{M}(a)=\operatorname{tp}^{N}(c)$ and $\|b-c\|_{2}<\epsilon$. Since $M$ is finitely generic, $f$ is an elementary map, whence $\varphi^{M^{\mathcal{U}}}(f(a))=0$ and thus $\varphi^{M}(a_{i})<\delta$ for $\mathcal{U}$-almost all $i$. As before, for these $i$, this guarantees the existence of $b_{i}\in M$ such that $\|a_{i}-b_{i}\|_{2}<\epsilon$ and such that the map $a\mapsto b_{i}$ extends to an embedding of $M$ into itself. Thus, $f$ has an approximate lifting. ∎ ###### Remark 9. It is not clear if there is any relationship between the existence of the enforceable factor and the existence of the prime model of $T^{f}$. ## 3\. Revisiting the embeddable situation In this section, we show how our techniques from above can yield different proofs that $\mathcal{R}$ has the Jung property and is self-tracially stable. Recall from the introduction that $\mathcal{R}$ is the enforceable embeddable factor. Besides the model theory behind building models by games, the two main operator-algebraic ingredients in the proof are: * • Being hyperfinite is $\forall\bigvee\exists$-axiomatizable (morally speaking, one just axiomatizes the property that any finite tuple is within any positive tolerance of a copy of some matrix algebra), whence being hyperfinite is an enforceable property for embeddable factors. * • $\mathcal{R}$ is the unique separable hyperfinite factor. Noting that our proof from the previous section that the enforceable factor (should it exist) is self-tracially stable relativizes immediately to the embeddable situation, we obtain the fact that $\mathcal{R}$ is self-tracially stable, _without resorting to the fact that $\mathcal{R}$ has the Jung property_.
Unfortunately, our proof in the previous section that the enforceable factor (should it exist) has the weak Jung property does not necessarily relativize to the embeddable situation, as the following seems to be an open question: ###### Question 10. Is the class of embeddable tracial von Neumann algebras closed under HNN extensions? If the answer to the previous question is positive, then we learn that all e.c. embeddable factors (and thus, in particular, $\mathcal{R}$ itself) have the weak Jung property. Nevertheless, we can give a proof that is similar in spirit and that does relativize to the embeddable situation. Indeed, fix an endomorphism $f:\mathcal{R}\to\mathcal{R}$; we show that $f$ is approximately inner. Let $a$ be a generator of $\mathcal{R}$. Since $\mathcal{R}$ is a finitely generic embeddable factor (this follows from its being the enforceable factor), we have $\operatorname{tp}^{\mathcal{R}}(a)=\operatorname{tp}^{\mathcal{R}}(f(a))$. Consequently, there is an elementary extension $N$ of $\mathcal{R}$ and an automorphism $\sigma$ of $N$ such that $\sigma(f(a))=a$. Using that $\mathcal{R}\subseteq N\rtimes_{\sigma}\mathbb{Z}$ and the class of embeddable factors is closed under crossed products by $\mathbb{Z}$ (and, more generally, by any amenable group), we have that, given any $\epsilon>0$, there is a unitary $u\in\mathcal{R}$ such that $\|uf(a)u^{*}-a\|_{2}<\epsilon$. Consequently, $f$ is approximately inner. Combining these proofs gives a new proof that $\mathcal{R}$ has the Jung property. We end with the following natural question which, to the best of our knowledge, is open: ###### Question 11. Is $\mathcal{R}$ the unique embeddable factor with the weak Jung property? As mentioned above, if the class of embeddable tracial von Neumann algebras is closed under HNN extensions, then there is a plethora of embeddable factors with the weak Jung property. ## References * [1] S.
Atkinson, _Some results on tracial stability and graph products_, to appear in Indiana Univ. Math. J.
* [2] S. Atkinson and S. Kunnawalkam Elayavalli, _On ultraproduct embeddings and amenability for tracial von Neumann algebras_, to appear in Int. Math. Res. Not.
* [3] S. Atkinson, I. Goldbring, and S. Kunnawalkam Elayavalli, _Factorial relative commutants and the generalized Jung property for II1 factors_, preprint, arXiv:2004.02293.
* [4] A. Connes, _Classification of injective factors. Cases II$_1$, II$_\infty$, III$_\lambda$, $\lambda\not=1$_, Ann. of Math. 104 (1976), 73–115.
* [5] I. Farah, I. Goldbring, B. Hart, and D. Sherman, _Existentially closed II1 factors_, Fundamenta Mathematicae 233 (2016), 173–196.
* [6] I. Farah, B. Hart, A. Tikuisis, and M. Rørdam, _Relative commutants of strongly self-absorbing $\mathrm{C}^{*}$-algebras_, Selecta Math. 23 (2017), 363–387.
* [7] I. Goldbring, _Enforceable operator algebras_, to appear in the Journal of the Institute of Mathematics of Jussieu.
* [8] Z. Ji, A. Natarajan, T. Vidick, J. Wright, and H. Yuen, _MIP* = RE_, preprint, arXiv:2001.04383.
* [9] K. Jung, _Amenability, tubularity, and embeddings into $\mathcal{R}^{\omega}$_, Math. Ann. 338 (2007), 241–248.
* [10] F.J. Murray and J. von Neumann, _On rings of operators IV_, Ann. of Math. 44 (1943), 716–808.
* [11] Y. Ueda, _HNN extensions of von Neumann algebras_, J. Funct. Anal. 225 (2005), 383–426.
a: Department of Applied Physics, Nanjing University of Science and Technology, Nanjing 210094, People’s Republic of China
b: Center for Theoretical Physics, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea # Probing electroweak phase transition with multi-TeV muon colliders and gravitational waves Wei Liu^a and Ke-Pan Xie^b ###### Abstract We study the complementarity of the proposed multi-TeV muon colliders and the near-future gravitational wave (GW) detectors to the first-order electroweak phase transition (FOEWPT), taking the real scalar extended Standard Model as the representative model. A detailed collider simulation shows that the FOEWPT parameter space can be greatly probed via the vector boson fusion production of the singlet, and its subsequent decay to the di-Higgs or di-boson channels. Especially, almost all the parameter space yielding detectable GW signals can be probed by the muon colliders. Therefore, if we could detect stochastic GWs in the future, a muon collider could provide a promising cross-check to identify their origin. On the other hand, there is considerable parameter space that escapes GW detection but is within the reach of the muon colliders. The precision measurements of Higgs couplings could also probe the FOEWPT parameter space efficiently. ## 1 Introduction Revealing the nature of the electroweak phase transition (EWPT) is one of the most important tasks in particle physics after the discovery of the Higgs boson at the LHC Aad:2012tfa ; Chatrchyan:2012ufa . In the Standard Model (SM), lattice calculations have shown that the EWPT is a smooth crossover Kajantie:1996qd ; Rummukainen:1998as ; Laine:1998jb .
However, the EWPT could be first-order (FO) in many new physics models beyond the SM (BSM), such as the real singlet extended SM (xSM) McDonald:1993ey ; Profumo:2007wc ; Espinosa:2011ax ; Cline:2012hg ; Alanne:2014bra ; Profumo:2014opa ; Alves:2018jsw ; Vaskonen:2016yiu ; Huang:2018aja ; Cheng:2018ajh ; Alanne:2019bsm ; Gould:2019qek ; Carena:2019une ; Ghorbani:2018yfr ; Ghorbani:2017jls , the two-Higgs-doublet model Turok:1990zg ; Turok:1991uc ; Cline:2011mm ; Dorsch:2013wja ; Chao:2015uoa ; Basler:2016obg ; Haarr:2016qzq ; Dorsch:2017nza ; Andersen:2017ika ; Bernon:2017jgv ; Wang:2018hnw ; Wang:2019pet ; Kainulainen:2019kyp ; Su:2020pjw , the left-right symmetric model Brdar:2019fur ; Li:2020eun (a study of the Pati-Salam model can be found in Ref. Huang:2020bbe ), the Georgi-Machacek model Zhou:2018zli and composite Higgs models Espinosa:2011eu ; Chala:2016ykx ; Chala:2018opy ; Bruggisser:2018mus ; Bruggisser:2018mrt ; Bian:2019kmg ; DeCurtis:2019rxl ; Xie:2020bkl , etc. A FOEWPT can drive the early Universe out of thermal equilibrium, providing the essential environment for the electroweak baryogenesis (EWBG) mechanism Morrissey:2012db ; Cline:2006ts ; Trodden:1998ym , which explains the observed cosmological matter-antimatter asymmetry. The FOEWPT can manifest itself at two different kinds of experiments: the gravitational wave (GW) detectors and the high energy particle colliders. In the former case, the stochastic GWs generated during the FOEWPT are expected to be detectable at a few near-future space-based laser interferometers such as LISA Audley:2017drz , BBO Crowder:2005nr , TianQin Luo:2015ght ; Hu:2017yoc , Taiji Hu:2017mde ; Guo:2018npi and DECIGO Kawamura:2011zz ; Kawamura:2006up .
In the latter case, the BSM physics related to the FOEWPT might be probed at colliders Ramsey-Musolf:2019lsf , such as the CERN LHC and future proton-proton colliders including HE-LHC Abada:2019ono , SppC CEPC-SPPCStudyGroup:2015csa and FCC-hh Benedikt:2018csr , or future electron-positron colliders such as CEPC CEPCStudyGroup:2018ghi , ILC Djouadi:2007ik and FCC-ee Abada:2019zxq . Generally speaking, the hadron colliders have high energy reach but suffer from huge QCD backgrounds, while the electron-positron colliders are very accurate but limited by the relatively low collision energy due to the large synchrotron radiation. A muon collider might be able to offer both the high collision energy and the clean environment needed to probe the FOEWPT. On one hand, thanks to the suppressed synchrotron radiation compared to the electron, the energy of a muon collider can reach $\mathcal{O}(10)$ TeV. What’s more, the entire muon collision energy can be used to probe short-distance reactions (hard processes). In contrast, at a $pp$ collider such as the LHC, only a small fraction of the proton collision energy is available for the hard processes. On the other hand, due to the small QCD backgrounds, the muon collider is rather clean, allowing very precise measurements. The physics potential of a high energy muon collider has been discussed since the 1990s Barger:1995hr ; Barger:1996jm , and it has received renewed interest recently Han:2012rb ; Chakrabarty:2014pja ; Ruhdorfer:2019utl ; DiLuzio:2018jwd ; Delahaye:2019omf ; Long:2020wfp ; Buttazzo:2018qqp ; Costantini:2020stv ; Han:2020uid ; Capdevilla:2020qel ; Han:2020uak ; Han:2020pif ; Bartosik:2020xwr ; Chiesa:2020awd ; Yin:2020afe ; Buttazzo:2020eyl ; Lu:2020dkx ; Huang:2021nkl . In this work, we investigate the possibility of probing the FOEWPT at a multi-TeV muon collider and the complementarity with the GW experiments, taking the xSM as the benchmark model.
Although the xSM is simple, it has captured the most important features of the FOEWPT induced by a tree-level barrier via renormalizable operators Chung:2012vg , and can serve as the prototype of many BSM models that trigger the FOEWPT. For the muon collider setup, we follow Ref. Han:2020pif to consider collision energies of 3, 6, 10 and 30 TeV, with integrated luminosities of 1, 4, 10 and 90 ab$^{-1}$, respectively. This paper is organized as follows. We first introduce the xSM and derive its parameter space for FOEWPT in Section 2, where we also discuss the GW signals and their detectability at the future LISA detector. The phenomenology at high energy muon colliders is studied in Section 3, where both the direct (i.e. resonant production of the real singlet) and indirect (i.e. the Higgs coupling measurements) searches are considered. The complementarity between collider and GW experiments is also discussed. Finally, we conclude in Section 4. ## 2 FOEWPT in the xSM ### 2.1 The model Up to the renormalizable level, the scalar potential of the xSM can be generally written as $V=-\mu^{2}|H|^{2}+\lambda|H|^{4}+\frac{a_{1}}{2}|H|^{2}S+\frac{a_{2}}{2}|H|^{2}S^{2}+b_{1}S+\frac{b_{2}}{2}S^{2}+\frac{b_{3}}{3}S^{3}+\frac{b_{4}}{4}S^{4},$ (1) which has eight input parameters. However, one degree of freedom is unphysical due to the shift invariance of the potential under $S\to S+\sigma$; in addition, the measured Higgs mass $M_{h}=125.09$ GeV and vacuum expectation value (VEV) $v=246$ GeV put another two constraints, leaving us only five free physical input parameters. To remove the shift invariance, we fix $b_{1}=0$ in Eq. (1). In unitary gauge, Eq. (1) can be expanded around the VEV, i.e.
$H=\frac{1}{\sqrt{2}}\begin{pmatrix}0\\ v+h\end{pmatrix},\quad S=v_{s}+s,$ (2) and then the mass term of the two neutral scalars reads $V\supset\frac{1}{2}\begin{pmatrix}h&s\end{pmatrix}\mathcal{M}_{s}^{2}\begin{pmatrix}h\\ s\end{pmatrix};\quad\mathcal{M}_{s}^{2}=\begin{pmatrix}\frac{\partial^{2}V}{\partial h^{2}}&\frac{\partial^{2}V}{\partial h\partial s}\\ \frac{\partial^{2}V}{\partial h\partial s}&\frac{\partial^{2}V}{\partial s^{2}}\end{pmatrix}.$ (3) Diagonalizing $\mathcal{M}_{s}^{2}$ yields the mass eigenstates $h_{1}$, $h_{2}$ and the mixing angle $\theta$ between them, namely $\begin{pmatrix}h\\ s\end{pmatrix}=U\begin{pmatrix}h_{1}\\ h_{2}\end{pmatrix},\quad U=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix},$ (4) such that the mass matrix becomes $U^{\dagger}\mathcal{M}_{s}^{2}U={\rm diag}\left\{M_{h_{1}}^{2},M_{h_{2}}^{2}\right\}$. Here we assume the lighter state $h_{1}$ is the SM Higgs-like boson. The requirement that $(v,v_{s})$ is an extremum of Eq. (1) yields two relations Alves:2018jsw $\mu^{2}=\lambda v^{2}+\frac{v_{s}}{2}(a_{1}+a_{2}v_{s}),\quad b_{2}=-\frac{1}{4v_{s}}\left[v^{2}(a_{1}+2a_{2}v_{s})+4v_{s}^{2}(b_{3}+b_{4}v_{s})\right],$ (5) where the coefficients $\lambda$, $a_{1}$ and $a_{2}$ can be further expressed in terms of $M_{h_{1}}$, $M_{h_{2}}$ and $\theta$, $\begin{split}\lambda=&~\frac{M_{h_{1}}^{2}c_{\theta}^{2}+M_{h_{2}}^{2}s_{\theta}^{2}}{2v^{2}},\\ a_{1}=&~\frac{4v_{s}}{v^{2}}\left[v_{s}^{2}\left(2b_{4}+\frac{b_{3}}{v_{s}}\right)-M_{h_{1}}^{2}s_{\theta}^{2}-M_{h_{2}}^{2}c_{\theta}^{2}\right],\\ a_{2}=&~\frac{1}{2v_{s}}\left[\frac{s_{2\theta}}{v}\left(M_{h_{1}}^{2}-M_{h_{2}}^{2}\right)-a_{1}\right],\end{split}$ (6) with $c_{\theta}$ and $s_{\theta}$ being short for $\cos\theta$ and $\sin\theta$, respectively.
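For concreteness, the inversion in Eqs. (5)-(6) can be sketched in a few lines of Python. This is an illustrative re-implementation, not the authors' code; the helper `mass_eigenvalues` (obtained by substituting the extremum conditions of Eq. (5) into the mass matrix of Eq. (3)) only serves as a consistency check that the inputs $M_{h_1}$ and $M_{h_2}$ are reproduced.

```python
import math

MH1, VEV = 125.09, 246.0  # GeV: SM-like Higgs mass and EW vacuum expectation value


def derived_params(Mh2, theta, vs, b3, b4):
    """Derive the dependent xSM couplings from the five free inputs
    {Mh2, theta, vs, b3, b4} via Eqs. (5)-(6), in the b1 = 0 convention."""
    c, s = math.cos(theta), math.sin(theta)
    lam = (MH1**2 * c**2 + Mh2**2 * s**2) / (2 * VEV**2)
    a1 = 4 * vs / VEV**2 * (vs**2 * (2 * b4 + b3 / vs) - MH1**2 * s**2 - Mh2**2 * c**2)
    a2 = (math.sin(2 * theta) / VEV * (MH1**2 - Mh2**2) - a1) / (2 * vs)
    mu2 = lam * VEV**2 + 0.5 * vs * (a1 + a2 * vs)
    b2 = -(VEV**2 * (a1 + 2 * a2 * vs) + 4 * vs**2 * (b3 + b4 * vs)) / (4 * vs)
    return {"lambda": lam, "a1": a1, "a2": a2, "mu2": mu2, "b2": b2}


def mass_eigenvalues(p, vs, b3, b4):
    """Rebuild the scalar mass matrix of Eq. (3) at the vacuum (v, vs), with
    the extremum conditions already substituted, and return its eigenvalues
    (lighter first)."""
    hh = 2 * p["lambda"] * VEV**2                       # d^2 V / dh^2
    hs = VEV * (p["a1"] / 2 + p["a2"] * vs)             # d^2 V / dh ds
    ss = -VEV**2 * p["a1"] / (4 * vs) + b3 * vs + 2 * b4 * vs**2  # d^2 V / ds^2
    tr, det = hh + ss, hh * ss - hs**2
    disc = math.sqrt(tr**2 - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2
```

Feeding a benchmark point through `mass_eigenvalues` should return $(M_{h_1}^2, M_{h_2}^2)$, which is a quick sanity check on any implementation of Eqs. (5)-(6).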
Fixing $M_{h_{1}}=M_{h}=125.09$ GeV and $v=246$ GeV, we can use the following five parameters $\left\{M_{h_{2}},\theta,v_{s},b_{3},b_{4}\right\},$ (7) as input, and derive the other parameters such as $\mu^{2}$ and $\lambda$ via Eq. (5) and Eq. (6). We use the strategy described in Appendix A to obtain the parameter space that satisfies the SM constraints. The dataset is stored in the form of a list of the five input parameters in Eq. (7), and then used for the calculation of FOEWPT and GWs in the following subsection. ### 2.2 FOEWPT and GWs The scalar potential $V$ in Eq. (1) receives thermal corrections at finite temperature, becoming $\begin{split}V_{T}=&-\left(\mu^{2}-c_{H}T^{2}\right)|H|^{2}+\lambda|H|^{4}+\frac{a_{1}}{2}|H|^{2}S+\frac{a_{2}}{2}|H|^{2}S^{2}\\ &+\left(b_{1}+m_{1}T^{2}\right)S+\frac{b_{2}+c_{S}T^{2}}{2}S^{2}+\frac{b_{3}}{3}S^{3}+\frac{b_{4}}{4}S^{4},\end{split}$ (8) where we only keep the gauge-invariant $T^{2}$-order terms Dolan:1973qd ; Braaten:1989kk , and $c_{H}=\frac{3g^{2}+g^{\prime 2}}{16}+\frac{y_{t}^{2}}{4}+\frac{\lambda}{2}+\frac{a_{2}}{24},\quad c_{S}=\frac{a_{2}}{6}+\frac{b_{4}}{4},\quad m_{1}=\frac{a_{1}+b_{3}}{12}.$ (9) In our convention ($b_{1}=0$), the tadpole term for $s$ arises only at finite temperature. This term is found to be suppressed in most of the parameter space Profumo:2007wc ; Profumo:2014opa , but for completeness we also include it in the numerical study. As we will see very soon, the tadpole has a non-negligible impact on the FOEWPT pattern. Thermal corrections change the vacuum structure of the scalar potential. In suitable parameter space, there exists a critical temperature $T_{c}$ at which the potential $V_{T}$ in Eq. (8) has two degenerate vacua, one with $h=0$ (EW-symmetric) and the other with $h\neq 0$ (EW-broken). Initially, the Universe stays in the EW-symmetric vacuum.
As the Universe expands and the temperature falls below $T_{c}$, the $h\neq 0$ vacuum is energetically preferred and the Universe acquires a probability of decaying to it. The decay rate per unit volume is Linde:1981zj $\Gamma(T)\sim T^{4}\left(\frac{S_{3}(T)}{2\pi T}\right)^{3/2}e^{-S_{3}(T)/T},$ (10) where $S_{3}(T)$ is the Euclidean action of the $O(3)$-symmetric bounce solution. FOEWPT occurs when the decay rate per Hubble volume reaches $\mathcal{O}(1)$ and hence the EW-broken vacuum bubbles start to nucleate. This defines the nucleation temperature $T_{n}$, which satisfies $\Gamma(T_{n})=H^{4}(T_{n})$, with $H(T)$ being the Hubble constant at temperature $T$. For a radiation-dominated Universe and a phase transition at the EW scale, $T_{n}$ can be solved for via the approximate relation Quiros:1999jp $S_{3}(T_{n})/T_{n}\approx 140,$ (11) which we take as the criterion for a FOEWPT. For each data point derived in the last subsection, we calculate $T_{n}$ by solving Eq. (11) with the Python package CosmoTransitions Wainwright:2011kj . Around $10\%$ of the data can trigger a FOEWPT. The left panel of Fig. 1 shows the collection of FOEWPT data points by plotting the initial and final states of the vacuum decay $(0,v_{s}^{i})\to(v^{f},v_{s}^{f})$. For a successful EWBG, the phase transition should be strong Moore:1998swa ; Zhou:2019uzq $v^{f}/T_{n}\gtrsim 1,$ (12) such that the EW sphaleron process in the EW-broken vacuum is suppressed. Hereafter we will focus on data points satisfying Eq. (12). We found that many data points yield a decay pattern of $(0,0)\to(v^{f},v_{s}^{f})$, but there is also a considerable fraction of data with $v_{s}^{i}\neq 0$. Quantitatively, if we use $|v_{s}^{i}/v_{s}^{f}|\lesssim 0.01$ as the criterion of a $(0,0)\to(v^{f},v_{s}^{f})$ FOEWPT, then the fraction of data points falling in this pattern is around $8.8\%$, while in Ref. Alves:2018jsw the corresponding fraction is $99\%$.
We have checked that the difference comes from the treatment of the thermal tadpole term in Eq. (8): we keep this term, while Ref. Alves:2018jsw drops it. Therefore, the tadpole term actually has a considerable impact on the FOEWPT pattern. Figure 1: Left: the collection of vacuum decay initial states $(0,v_{s}^{i})$ (red) and final states $(v^{f},v_{s}^{f})$ (blue). Right: the SNRs for the SFOEWPT data projected onto the $\alpha$-$\beta/H$ plane. A FOEWPT generates stochastic GWs mainly through three sources: bubble collisions, sound waves in the plasma and the magneto-hydrodynamic turbulence Mazumdar:2018dfl . After cosmological redshift, those GWs today typically peak at $f\sim{\rm mHz}$ Grojean:2006bp , which is the sensitive region of a few next-generation space-based interferometers mentioned in the introduction. To obtain the GW spectrum today, we derive the following two parameters for each FOEWPT data point $\alpha=\frac{1}{g_{*}\pi^{2}T_{n}^{4}/30}\left(T\frac{\partial\Delta V_{T}}{\partial T}-\Delta V_{T}\right)\Big{|}_{T_{n}};\quad\beta/H=T_{n}\frac{d(S_{3}/T)}{dT}\Big{|}_{T_{n}},$ (13) where $\Delta V_{T}=V_{T}|_{T_{n},(v^{f},v_{s}^{f})}-V_{T}|_{T_{n},(0,v_{s}^{i})}$ is the effective potential difference between the true and false vacua, and $g_{*}\sim 100$ is the number of relativistic degrees of freedom. In other words, $\alpha$ is the transition latent heat over the radiation energy, while $\beta/H$ is the Universe expansion time scale over the phase transition duration. Refs. Grojean:2006bp ; Caprini:2015zlo ; Caprini:2019egz point out that the GW spectrum $\Omega_{\rm GW}(f)$ of a FOEWPT can be expressed as numerical functions of $(\alpha,\beta/H,v_{b})$, where $v_{b}$ is the bubble expansion velocity.
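A minimal sketch of evaluating Eq. (13), assuming the bounce action $S_3(T)/T$ and the potential difference $\Delta V_T$ (with its $T$-derivative) are already available, e.g. from a CosmoTransitions run; the function signature and the finite-difference step below are our illustrative choices, not from the paper.

```python
import math


def transition_params(S3_over_T, dV, dV_dT, Tn, g_star=100.0):
    """Evaluate alpha and beta/H of Eq. (13) at the nucleation temperature Tn.

    S3_over_T : callable T -> S3(T)/T (e.g. from a bounce-action solver)
    dV, dV_dT : Delta V_T (true minus false vacuum) and its T-derivative at Tn
    """
    # alpha: latent heat normalized to the radiation energy density
    radiation = g_star * math.pi**2 * Tn**4 / 30.0
    alpha = (Tn * dV_dT - dV) / radiation
    # beta/H = Tn * d(S3/T)/dT, here via a central finite difference
    h = 1e-3 * Tn
    beta_over_H = Tn * (S3_over_T(Tn + h) - S3_over_T(Tn - h)) / (2.0 * h)
    return alpha, beta_over_H
```

Larger released energy (larger $\alpha$) and slower transitions (smaller $\beta/H$) then correspond to stronger GW signals, as in the right panel of Fig. 1.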
Taking $v_{b}=0.6$ as a benchmark, we are now able to calculate $\Omega_{\rm GW}(f)$ for each FOEWPT data point. Note that $v_{b}$ is the bubble velocity with respect to the plasma at finite distance, while the velocity relevant for the EWBG calculation is actually $v_{w}$, which is defined as the relative velocity to the plasma just in front of the wall. The relation between $v_{b}$ and $v_{w}$ can be solved using hydrodynamics Espinosa:2010hh ; No:2011fi , and it is possible to have a high $v_{b}$ (good for GW signals) and a low $v_{w}$ (good for EWBG) simultaneously No:2011fi ; Alves:2018oct ; Alves:2018jsw ; Alves:2019igs ; Alves:2020bpi . The suppression factor coming from the short duration of the sound wave period has been taken into account Ellis:2018mja ; Guo:2020grp . The signal-to-noise ratio (SNR) characterizes the detectability of GW signals at an interferometer. Taking the LISA detector as an example, we calculate the SNR as ${\rm SNR}=\sqrt{\mathcal{T}\int_{f_{\rm min}}^{f_{\rm max}}df\left(\frac{\Omega_{\rm GW}(f)}{\Omega_{\rm LISA}(f)}\right)^{2}},$ (14) where $\Omega_{\rm LISA}$ is the sensitivity curve of the LISA detector Caprini:2015zlo , and $\mathcal{T}=9.46\times 10^{7}$ s is the data-taking duration (around four years) Caprini:2019egz . According to Ref. Caprini:2015zlo , we use ${\rm SNR}>10~(50)$ as the detection threshold for a six-link (four-link) configuration of LISA. In the right panel of Fig. 1 we plot the $(\alpha,\beta/H)$ distribution as well as the SNRs of our data points. As shown in the figure, the data with larger $\alpha$ (meaning more energy released in the transition) and smaller $\beta/H$ (meaning a longer duration of the transition) have larger SNRs. There are some data points with $\alpha\gtrsim 1$, implying a strong supercooling.
In this case, it is suggested that it is the percolation temperature $T_{p}$ rather than the nucleation temperature $T_{n}$ that should be used to calculate $\alpha$ and $\beta/H$ Megevand:2016lpr ; Kobakhidze:2017mru ; Ellis:2018mja ; Ellis:2020awk ; Wang:2020jrd . Since most of our data lie in the $\alpha\lesssim 1$ region, we adopt the approximation $T_{n}\approx T_{p}$, and leave a more detailed treatment for future work. Also note that we are using the traditional approach to derive FOEWPT profiles and calculate the GWs. It has been shown that the alternative dimensional reduction approach can reduce the theoretical uncertainties significantly Croon:2020cgk . ## 3 Phenomenology at high energy muon colliders Besides the GWs, the FOEWPT parameter space of the xSM can lead to signals of a resonantly produced heavy scalar (direct search), and corrections to the SM Higgs couplings (indirect search) at the colliders. The corresponding phenomenology has been studied at the LHC and the proposed $pp$ or $e^{+}e^{-}$ colliders Profumo:2014opa ; Cao:2017oez ; Alves:2018oct ; Zhou:2020idp ; Alves:2018jsw ; Chen:2019ebq ; Huang:2016cjm ; Alves:2019igs ; Kozaczuk:2019pet ; Papaefstathiou:2020iag ; Alves:2020bpi ; No:2018fev ; Xie:2020wzn . Typically, the direct search is implemented at $pp$ colliders due to their high energy reach, while the indirect approach is preferred by the $e^{+}e^{-}$ colliders because of the high accuracy. In this section we will demonstrate that, with the clean background and sufficient collision energy, a multi-TeV muon collider is able to perform both the direct and indirect searches, exhibiting a great potential to test the FOEWPT. ### 3.1 Production and decays of the heavy scalar The heavy scalar $h_{2}$ can be produced at a lepton collider via the $Zh_{2}$ associated production or the vector boson fusion (VBF) process.
At a collider with center-of-mass energy as high as a few TeV, the dominant channel is VBF (the $\gamma h_{2}$ associated production, the so-called “radiative return”, can be comparable to the $Zh_{2}$ production at a low-energy muon collider Chakrabarty:2014pja ): $\begin{split}\mu^{+}\mu^{-}\rightarrow&~h_{2}\nu_{\mu}\bar{\nu}_{\mu}\quad\text{($W^{+}W^{-}$ fusion)},\\ \mu^{+}\mu^{-}\rightarrow&~h_{2}\mu^{+}\mu^{-}\quad\text{($ZZ$ fusion)},\end{split}$ (15) and the production rate is $\sigma_{h_{2}}=s_{\theta}^{2}\times\sigma^{\text{SM}}_{h_{2}},$ (16) where the SM-like production rate $\sigma^{\text{SM}}_{h_{2}}$ is the SM Higgs VBF production cross section evaluated by replacing the Higgs mass with $M_{h_{2}}$. This is because the coupling of $h_{2}$ to the SM gauge bosons comes from mixing, see Eq. (4). In the left panel of Fig. 2, we have explicitly shown $\sigma^{\text{SM}}_{h_{2}}$ at the muon collider with different benchmark collision energies $\sqrt{s}$. (The cross sections are calculated with the MadGraph5_aMC@NLO event generator Alwall:2014hca using a model file written with FeynRules Alloul:2013bka . The collider simulations in the next two subsections are also implemented with those two packages.) From the figure, it is clear that we can easily obtain hundreds of fb of cross section for $h_{2}$ with $\mathcal{O}({\rm TeV})$ mass. For a given $\sqrt{s}$, the $W^{+}W^{-}$ fusion contributes $\sim 90\%$ of the total cross section. Figure 2: Left: The SM-like production cross section of $h_{2}$ at muon colliders with different collision energies. Right: The scatter plots of $\text{Br}(h_{2}\rightarrow VV)$ and $\text{Br}(h_{2}\rightarrow h_{1}h_{1})$ corresponding to the FOEWPT data points, with the value of $M_{h_{2}}$ shown in color. The produced $h_{2}$ will subsequently decay to multiple final states, such as di-Higgs ($h_{1}h_{1}$), di-boson ($W^{+}W^{-}$ and $ZZ$) and di-fermion (e.g. $t\bar{t}$).
For the di-boson and di-fermion channels, $\Gamma_{h_{2}\rightarrow XX}=s_{\theta}^{2}\times\Gamma^{\text{SM}}_{h_{2}\rightarrow XX},$ (17) where $X$ denotes the SM vector boson or fermion, and $\Gamma^{\text{SM}}_{h_{2}\rightarrow XX}$ is the decay width of the SM Higgs calculated with the Higgs mass set to $M_{h_{2}}$. For the di-Higgs channel, $\Gamma_{h_{2}\to h_{1}h_{1}}=\frac{\lambda_{h_{2}h_{1}h_{1}}^{2}}{32\pi M_{h_{2}}}\sqrt{1-\frac{4M^{2}_{h}}{M^{2}_{h_{2}}}},$ (18) where the $h_{2}h_{1}h_{1}$ coupling is defined by $\mathcal{L}_{\rm xSM}\supset\frac{1}{2!}\lambda_{h_{2}h_{1}h_{1}}h_{2}h_{1}^{2},$ (19) and at tree level Huang:2016cjm $\lambda_{h_{2}h_{1}h_{1}}=\left(\frac{1}{2}a_{1}+a_{2}v_{s}\right)c_{\theta}^{3}+(2a_{2}v-6\lambda v)s_{\theta}c_{\theta}^{2}+\left(6b_{4}v_{s}+2b_{3}-2a_{2}v_{s}-a_{1}\right)s_{\theta}^{2}c_{\theta}-a_{2}vs_{\theta}^{3}.$ (20) The branching ratios are $\text{Br}(h_{2}\rightarrow XX)=\frac{s_{\theta}^{2}\times\Gamma^{\text{SM}}_{h_{2}\rightarrow XX}}{s_{\theta}^{2}\times\sum_{X^{\prime}}\Gamma^{\text{SM}}_{h_{2}\rightarrow X^{\prime}X^{\prime}}+\Gamma_{h_{2}\to h_{1}h_{1}}},$ (21) $\text{Br}(h_{2}\rightarrow h_{1}h_{1})=\frac{\Gamma_{h_{2}\rightarrow h_{1}h_{1}}}{s_{\theta}^{2}\times\sum_{X^{\prime}}\Gamma^{\text{SM}}_{h_{2}\rightarrow X^{\prime}X^{\prime}}+\Gamma_{h_{2}\to h_{1}h_{1}}}.$ (22) The branching ratios of the FOEWPT data points are projected onto the $\text{Br}(h_{2}\to h_{1}h_{1})$-$\text{Br}(h_{2}\to VV)$ plane in the right panel of Fig. 2, where $V=W^{\pm}$, $Z$. We see that the di-Higgs branching ratio can reach $\sim 80\%$, while the $VV$ branching ratio dominates for large $M_{h_{2}}$. In general, all data points satisfy $\text{Br}(h_{2}\to h_{1}h_{1})+\text{Br}(h_{2}\to VV)+\text{Br}(h_{2}\to t\bar{t})\approx 100\%,$ (23) and $\text{Br}(h_{2}\to t\bar{t})\lesssim 20\%$.
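Eqs. (17)-(22) amount to rescaling the SM-like partial widths by $s_\theta^2$ and adding the extra di-Higgs mode; a hedged sketch follows, where the SM widths evaluated at mass $M_{h_2}$ are assumed to be supplied externally (e.g. from standard tabulations) rather than computed here.

```python
import math


def width_h2_to_h1h1(lam211, Mh2, Mh=125.09):
    """Tree-level h2 -> h1 h1 partial width, Eq. (18).
    lam211 is the trilinear coupling of Eq. (20), in GeV."""
    if Mh2 <= 2 * Mh:
        return 0.0  # below the di-Higgs threshold
    return lam211**2 / (32 * math.pi * Mh2) * math.sqrt(1 - 4 * Mh**2 / Mh2**2)


def branching_ratios(s_theta, sm_widths, gamma_hh):
    """Eqs. (21)-(22): sm_widths maps channel -> Gamma^SM(h2 -> XX) at mass Mh2;
    every SM-like channel is rescaled by s_theta^2, then h1h1 is appended."""
    total = s_theta**2 * sum(sm_widths.values()) + gamma_hh
    br = {ch: s_theta**2 * g / total for ch, g in sm_widths.items()}
    br["h1h1"] = gamma_hh / total
    return br
```

By construction the returned branching ratios sum to one, mirroring Eq. (23) once the $VV$, $t\bar t$ and $h_1h_1$ channels are included.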
In the following two subsections, we choose $h_{2}\to h_{1}h_{1}\to b\bar{b}b\bar{b}$ and $h_{2}\to ZZ\to\ell^{+}\ell^{-}\ell^{+}\ell^{-}$ as two complementary channels for collider simulations. ### 3.2 Direct search: the $h_{2}\to h_{1}h_{1}\to b\bar{b}b\bar{b}$ channel Directly characterizing the portal coupling between the singlet and the Higgs boson, the $h_{2}h_{1}h_{1}$ coupling is of primary interest. The signal of such a coupling is a resonant di-Higgs production at the muon collider, ${\rm VBF}\to h_{2}\rightarrow h_{1}h_{1}$. As the SM Higgs dominantly decays into $b\bar{b}$ pairs, the major final state of the signal consists of four $b$-jets, which can be reconstructed into two $h_{1}$'s and then one $h_{2}$. For the $ZZ$ fusion production channel, the final state contains two additional forward muons. Here we focus on the so-called inclusive channel by including both the $W^{+}W^{-}$ fusion and $ZZ$ fusion events without detecting the additional muons. In this case, the main backgrounds are the SM VBF $h_{1}h_{1}$ and $ZZ$ production, with $h_{1}\to b\bar{b}$ and $Z\to b\bar{b}$. (There are also QCD backgrounds such as $4b2\nu_{\mu}$, but they turn out to be negligible after the Higgs candidate selection No:2018fev ; Han:2020pif , and thus will not be considered here.)

Cross sections [ab] | $\sigma^{300}_{S}$ | $\sigma^{600}_{S}$ | $\sigma^{900}_{S}$ | $\sigma_{B}^{ZZ}$ | $\sigma_{B}^{h_{1}h_{1}}$
---|---|---|---|---|---
No Cut | 360 | 198 | 155 | 1080 | 567
Cut I | 123 | 81.8 | 84.0 | 273 | 96.0
Cut II | 104 | 68.1 | 69.7 | 5.42 | 80.0
Cut III, 300 | 102 | | | 2.43 | 4.83
Cut III, 600 | | 50.1 | | $\mathcal{O}(10^{-2})$ | 5.96
Cut III, 900 | | | 35.7 | $\mathcal{O}(10^{-2})$ | 2.96

Table 1: Cut flows at a 10 TeV muon collider for the signals with $M_{h_{2}}=$ 300, 600, 900 GeV and the backgrounds. For the signals, we have assumed $s_{\theta}=0.1$ and $\text{Br}(h_{2}\rightarrow h_{1}h_{1})=25\%$.
The signal and background events are generated at parton level. We smear the jet four-momentum according to a jet energy resolution of $\Delta E/E=10\%$, and assume a conservative $b$-tagging efficiency of 70%. The events are required to have exactly four $b$-jets satisfying the following basic acceptance cuts, $p_{T}^{j}>30~{}{\rm GeV},\quad|\eta_{j}|<2.43,\quad M_{\text{recoil}}>200~{}\text{GeV},\quad\text{(Cut I)}$ (24) where the pseudo-rapidity cut is based on a detector angular coverage of $10^{\circ}<\theta<170^{\circ}$, and the recoil mass is defined as $M_{\rm recoil}=\sqrt{\left(p_{\mu^{+}}+p_{\mu^{-}}-p_{j_{1}}-p_{j_{2}}-p_{j_{3}}-p_{j_{4}}\right)^{2}}.$ (25) Next, we pair the four $b$-jets by minimizing $\chi_{j}^{2}=(M_{j_{1}j_{2}}-M_{h})^{2}+(M_{j_{3}j_{4}}-M_{h})^{2}.$ (26) The pairs $(j_{1},j_{2})$ and $(j_{3},j_{4})$ are then identified as the Higgs candidates, where the harder pair is defined as $(j_{1},j_{2})$. As shown in blue in the left panel of Fig. 3, $M_{j_{1}j_{2}}$ and $M_{j_{3}j_{4}}$ peak around $M_{h}$ for the signal, while they peak around $M_{Z}=91.188$ GeV for the $ZZ$ background. Therefore, an invariant mass cut $|M_{j_{1}j_{2}}-M_{h}|<15~{}{\rm GeV},\quad|M_{j_{3}j_{4}}-M_{h}|<15~{}{\rm GeV},\quad\text{(Cut II)}$ (27) can remove most of the $ZZ$ background. While most SM $h_{1}h_{1}$ events survive this cut, this background can be greatly reduced by a cut on the four-jet system, $|M_{4j}-M_{h_{2}}|<30~{}{\rm GeV},\quad\text{(Cut III)}$ (28) as illustrated in orange in the left panel of Fig. 3. The cut flows for three chosen signal benchmarks at a 10 TeV muon collider are shown in Table 1, indicating that Cut III is quite effective at improving the signal-to-background ratio.

Figure 3: Left: after the basic acceptance cuts, the invariant mass distributions of the jet pairs and the four-jet system for the signal and main backgrounds at the 10 TeV muon collider. Here we select $M_{h_{2}}=$ 600 GeV as the signal benchmark.
Right: the expected probe limits on $s_{\theta}^{2}\times\text{Br}(h_{2}\rightarrow h_{1}h_{1})$ for different muon collider setups. The scatter points are the FOEWPT data, in which red, green and blue colors represent ${\rm SNR}\in[50,+\infty)$, $[10,50)$ and $[0,10)$, respectively. The limit from ATLAS at the 13 TeV LHC with $\mathcal{L}=$ 36.1 fb-1 Aad:2019uzh and its extrapolation to the HL-LHC Alves:2018jsw are also shown for comparison.

Given the collision energy $\sqrt{s}$ and the integrated luminosity $\mathcal{L}$, the signal and background event numbers are $\begin{split}S=&~{}\sigma_{S}\times\epsilon_{S}\times\mathcal{L}=\sigma^{\rm SM}_{h_{2}}\times s_{\theta}^{2}\times\text{Br}(h_{2}\to h_{1}h_{1})\times\epsilon_{S}\times\mathcal{L},\\ B=&~{}\sigma_{B}\times\epsilon_{B}\times\mathcal{L},\end{split}$ (29) where $\sigma_{S,B}$ are the signal and background production rates, and $\epsilon_{S,B}$ are the corresponding cut efficiencies, respectively. Note that $\sigma_{B}$ is already fixed, while $\sigma^{\rm SM}_{h_{2}}$ as well as $\epsilon_{S,B}$ depend only on $M_{h_{2}}$. This implies that we can generate events for several $M_{h_{2}}$ benchmarks, derive the collider probe limits on $s_{\theta}^{2}\times\text{Br}(h_{2}\to h_{1}h_{1})$, and then interpolate to obtain the $s_{\theta}^{2}\times\text{Br}(h_{2}\to h_{1}h_{1})$ reach as a function of $M_{h_{2}}$. For the probe limits, we use the Poisson likelihood function $\displaystyle L(S)=e^{-(S+B)}\frac{(S+B)^{n}}{n!}$ (30) with the number of observed events ($n$) taken to be equal to the number of background events ($n=B$). To get the 95% confidence level exclusion limits, we use the test statistic $Q_{S}$, requiring $\displaystyle Q_{S}\equiv-2\ln\left[\frac{L(S)}{L(0)}\right]=3.84.$ (31) When $B\gg S$, the above procedure reduces to the well-known $S/\sqrt{B}=1.96$ criterion. The sensitivity of the muon collider to the FOEWPT can then be obtained by projecting the FOEWPT parameter space onto the $s_{\theta}^{2}\times\text{Br}(h_{2}\to h_{1}h_{1})$ vs. $M_{h_{2}}$ plane.
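The exclusion procedure of Eqs. (29)–(31) amounts to solving $Q_{S}=3.84$ for $S$ at fixed $B$. A minimal numerical sketch of our own follows; the Poisson likelihood carries the normalization $e^{-(S+B)}$, and the $\ln n!$ term cancels in the likelihood ratio.

```python
import math

# Sketch (ours) of the exclusion procedure in Eqs. (29)-(31): with n = B
# observed events, find S such that Q_S = -2 ln[L(S)/L(0)] = 3.84.

def q_stat(S, B):
    """Q_S = 2 [S - B ln(1 + S/B)], monotonically increasing in S."""
    n = B
    lnL_S = -(S + B) + n * math.log(S + B)  # ln(n!) dropped: cancels in the ratio
    lnL_0 = -B + n * math.log(B)
    return -2.0 * (lnL_S - lnL_0)

def limit_95(B):
    """Bisection solve of Q_S = 3.84 for the 95% CL upper limit on S."""
    lo, hi = 0.0, 10.0 * math.sqrt(B) + 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if q_stat(mid, B) < 3.84:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $B\gg S$ the solution approaches $1.96\sqrt{B}$, reproducing the Gaussian criterion quoted above.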
This is done in the right panel of Fig. 3, in which the reach of different collider setups is plotted as differently colored solid lines; the FOEWPT data points lying above a specific line can be probed by the corresponding muon collider. Note that our projections are derived with a rather conservative $b$-tagging efficiency of 70%. A more optimistic efficiency such as 90% would improve the results by a factor of 2, while an analysis without $b$-tagging would weaken the limits by a factor of 2 or 3, since in that case non-$b$ jets (such as $W^{\pm}/Z\to jj$) also contribute to the backgrounds. However, either case has little visual effect on the logarithmic scale. The right panel of Fig. 3 demonstrates that the FOEWPT parameter space can be probed to a large extent by the muon colliders, and higher energy colliders (with correspondingly higher integrated luminosities) give better reach. The current and projected LHC reach is shown as black lines for comparison. Because of the high accuracy in the multi-jet final state, even a 3 TeV muon collider (1 ab-1) has a sensitivity more than one order of magnitude better than the HL-LHC (13 TeV, 3 ab-1), and a 30 TeV muon collider (90 ab-1) is able to probe $s_{\theta}^{2}\times\text{Br}(h_{2}\to h_{1}h_{1})$ down to $10^{-5}$, covering almost all of the FOEWPT parameter space. To manifest the complementarity with the GW experiments, we use different colors to mark the FOEWPT points with different SNRs: red, green and blue for ${\rm SNR}\in[50,+\infty)$, $[10,50)$ and $[0,10)$, respectively. Treating ${\rm SNR}=10$ as the detection threshold, we see that the points detectable by LISA mostly lie within the reach of the muon colliders, especially for the $\sqrt{s}\geqslant 6$ TeV setups. This offers a great opportunity to identify the origin of stochastic GWs, should they be detected in the future. On the other hand, the muon colliders also have significant sensitivity to the blue data points, which are not detectable at LISA.
For muon colliders with $\sqrt{s}\leqslant 10$ TeV, there is still an appreciable number of points that cannot be reached, due to the tiny $\text{Br}(h_{2}\rightarrow h_{1}h_{1})$ at those points. Hence, in the next subsection, we change our strategy and look for a complementary decay channel, $h_{2}\to ZZ$, to cover those points.

### 3.3 Direct search: the $h_{2}\to ZZ\to\ell^{+}\ell^{-}\ell^{+}\ell^{-}$ channel

The FOEWPT data points with tiny $\text{Br}(h_{2}\rightarrow h_{1}h_{1})$ might be probed via the $h_{2}\rightarrow W^{+}W^{-}$ or $h_{2}\to ZZ$ channels. To obtain better accuracy we focus on the leptonic decays of the gauge bosons. Although the $W^{+}W^{-}$ channel has a larger branching ratio, the neutrinos in the final state make it more challenging. In this subsection we therefore focus on the $ZZ$ channel with $Z\to\ell^{+}\ell^{-}$, where $\ell=e,\mu$, leading to a four-lepton final state.

Figure 4: Left: after the basic acceptance cuts, the invariant mass distributions of the lepton pairs and the four-lepton system for the signal and main backgrounds at the 10 TeV muon collider. Here we select $M_{h_{2}}=$ 600 GeV as the signal benchmark. Right: the expected probe limits on $s_{\theta}^{2}\times\text{Br}(h_{2}\rightarrow ZZ)$ for different muon collider setups. The scatter points are the FOEWPT data, in which red, green and blue colors represent ${\rm SNR}\in[50,+\infty)$, $[10,50)$ and $[0,10)$, respectively. The limit from ATLAS at the 13 TeV LHC with $\mathcal{L}=$ 36.1 fb-1 Aaboud:2018bun and its extrapolation to the HL-LHC Alves:2018jsw are also shown for comparison.

Similar to the treatment of the di-Higgs channel in the previous subsection, three cuts are applied to the events.
We first require the events to have exactly four charged leptons with zero total charge, satisfying the acceptance cuts $p_{T}^{\ell}>30~{}{\rm GeV},\quad|\eta_{\ell}|<2.43,\quad M_{\text{recoil}}>200~{}\text{GeV},\quad\text{(Cut I)}$ (32) where $M_{\rm recoil}$ is defined similarly to Eq. (25). We then pair the opposite-sign leptons by minimizing\footnote{We do not distinguish the lepton flavors here. We have checked that a pairing method which distinguishes the flavors by classifying the same-flavor opposite-charge leptons into a pair [such as $(e^{+}e^{-})(\mu^{+}\mu^{-})$] gives almost the same $M_{\ell^{+}\ell^{-}}$ distributions.} $\chi_{\ell}^{2}=(M_{\ell^{+}_{1}\ell^{-}_{1}}-M_{Z})^{2}+(M_{\ell^{+}_{2}\ell^{-}_{2}}-M_{Z})^{2},$ (33) and apply a second cut, $|M_{\ell^{+}_{1}\ell^{-}_{1}}-M_{Z}|<10~{}{\rm GeV},\quad|M_{\ell^{+}_{2}\ell^{-}_{2}}-M_{Z}|<10~{}{\rm GeV},\quad\text{(Cut II)}$ (34) to the events. Note that this cut selects both the signal and the background $ZZ$ events. Finally, a cut on the four-lepton system, $|M_{4\ell}-M_{h_{2}}|<20~{}{\rm GeV},\quad\text{(Cut III)}$ (35) removes the background significantly, as illustrated in the left panel of Fig. 4. The collider reach in the $ZZ$ channel is shown as $s_{\theta}^{2}\times\text{Br}(h_{2}\rightarrow ZZ)$ limits in the right panel of Fig. 4, in which differently colored lines denote different muon collider setups, and the current and future LHC limits are also shown for comparison. Different from the case of the $h_{1}h_{1}$ channel, the reach of the HL-LHC is comparable to that of a 6 TeV muon collider (4 ab-1) for $M_{h_{2}}\lesssim 650$ GeV, thanks to the cleanness of the four-lepton final state. Nevertheless, better sensitivities can still be obtained for muon colliders with $\sqrt{s}\geqslant 10$ TeV, especially for a 30 TeV (90 ab-1) muon collider, at which almost all the data points can be probed.
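The $\chi^{2}$-based pairing of Eq. (33) (and, with $M_{h}$ as the target mass, its jet analogue in Eq. (26)) can be sketched as follows. This is an illustrative reimplementation of ours, not the authors' analysis code; the four-momentum convention and function names are our own.

```python
import itertools, math

# Illustrative sketch (ours) of the flavor-agnostic pairing in Eq. (33):
# split four charged leptons into two opposite-sign pairs minimizing chi_l^2.
# Leptons are (E, px, py, pz, charge) tuples in GeV; M_Z as used in the text.

M_Z = 91.188

def pair_mass(p1, p2):
    E, px, py, pz = (p1[i] + p2[i] for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def pair_leptons(leptons):
    """Return the two dilepton masses of the chi^2-minimizing pairing."""
    plus = [l for l in leptons if l[4] > 0]
    minus = [l for l in leptons if l[4] < 0]
    assert len(plus) == 2 and len(minus) == 2, "need four leptons, zero net charge"
    best = None
    for perm in itertools.permutations(minus):  # two possible matchings
        masses = (pair_mass(plus[0], perm[0]), pair_mass(plus[1], perm[1]))
        chi2 = sum((m - M_Z)**2 for m in masses)
        if best is None or chi2 < best[0]:
            best = (chi2, masses)
    return best[1]
```

With only two possible opposite-sign matchings, brute-force minimization is trivial; the same loop over pairings works for the four $b$-jets of the di-Higgs channel (three pairings in that case).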
We have checked that the FOEWPT data points escaping the $h_{1}h_{1}$ channel search can generally be reached in the $ZZ$ channel; as a result, the combination of the $h_{1}h_{1}$ and $ZZ$ channels can cover almost the entire FOEWPT parameter space.\footnote{Fig. 4 shows that the 3 TeV muon collider (1 ab-1) fails to probe any FOEWPT data points. This is because we only consider parameter space with a mixing angle $\theta\leqslant 0.15$. For larger values of $\theta$ (such as 0.35 in Ref. Alves:2018jsw ), there is reachable parameter space for the 3 TeV muon collider.} The complementarity with GW experiments is also shown in the figure.

### 3.4 Indirect search: the Higgs coupling deviations

Besides the direct detection of the heavy scalar $h_{2}$, measuring the couplings of the Higgs-like boson $h_{1}$ also gives hints of the FOEWPT, as those couplings usually deviate from their SM values in the FOEWPT parameter space. For example, expanding the xSM Lagrangian as $\mathcal{L}_{\rm xSM}\supset\kappa_{V}\left(M_{W}^{2}W_{\mu}^{+}W^{-\mu}+\frac{1}{2}M_{Z}^{2}Z_{\mu}Z^{\mu}\right)\frac{2h_{1}}{v}-\kappa_{3}\frac{M_{h}^{2}}{2v}h_{1}^{3},$ (36) at tree level we obtain $\kappa_{V}=\kappa_{3}=1$ for the SM, while $\kappa_{V}=c_{\theta},\quad\kappa_{3}=\frac{2v}{M_{h}^{2}}\left[\lambda vc_{\theta}^{3}+\frac{1}{4}c_{\theta}^{2}s_{\theta}\left(2a_{2}v_{s}+a_{1}\right)+\frac{1}{2}a_{2}vc_{\theta}s_{\theta}^{2}+\frac{1}{3}s_{\theta}^{3}\left(3b_{4}v_{s}+b_{3}\right)\right],$ (37) for the xSM. Defining the deviations as $\delta\kappa_{V}=1-\kappa_{V},\quad\delta\kappa_{3}=\kappa_{3}-1,$ (38) we project the FOEWPT data points onto the $\delta\kappa_{3}$-$\delta\kappa_{V}$ plane in Fig. 5. One finds that $\delta\kappa_{3}$ is always positive (and $\lesssim 0.8$).
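At tree level the deviations of Eq. (38) follow directly from Eq. (37). A small sketch of our own, with $v$ and $M_{h}$ fixed to their usual values and the xSM couplings as free inputs:

```python
import math

# Sketch (ours) of the tree-level coupling modifiers in Eqs. (36)-(38).
# All numerical inputs are illustrative, not scan values from the paper.

M_h, v = 125.0, 246.0

def delta_kappa_V(theta):
    """Eq. (38) with kappa_V = cos(theta); ~ theta^2/2 for small mixing."""
    return 1.0 - math.cos(theta)

def kappa_3(theta, lam, a1, a2, b3, b4, v_s):
    """Eq. (37): tree-level triple Higgs coupling modifier in the xSM."""
    c, s = math.cos(theta), math.sin(theta)
    return (2.0 * v / M_h**2) * (
        lam * v * c**3
        + 0.25 * c**2 * s * (2.0 * a2 * v_s + a1)
        + 0.5 * a2 * v * c * s**2
        + (s**3 / 3.0) * (3.0 * b4 * v_s + b3)
    )
```

For the scan boundary $\theta=0.15$, $\delta\kappa_{V}=1-\cos\theta\approx\theta^{2}/2\approx 0.011$, and in the SM limit ($\theta\to 0$, $\lambda=M_{h}^{2}/2v^{2}$) one recovers $\kappa_{3}=1$.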
This can be understood by expanding the deviation at small mixing angle Alves:2018jsw $\delta\kappa_{3}=\theta^{2}\left(-\frac{3}{2}+\frac{2M_{h_{2}}^{2}-2b_{3}v_{s}-4b_{4}v_{s}^{2}}{M_{h}^{2}}\right)+\mathcal{O}(\theta^{3}),$ (39) where the $M_{h_{2}}^{2}/M_{h}^{2}$ term dominates the bracket, implying an enhanced Higgs triple coupling. Since we set $\theta\leqslant 0.15$ when scanning over the parameter space (see Appendix A), the $\delta\kappa_{V}$ distribution has a sharp edge at around $0.15^{2}/2\approx 0.01$.

Figure 5: Indirect limits from the measurements of the Higgs couplings. The scatter points are the FOEWPT data, in which red, green and blue colors represent ${\rm SNR}\in[50,+\infty)$, $[10,50)$ and $[0,10)$, respectively. The colored vertical and horizontal lines are the projections for different setups of muon colliders. The projections for CEPC ($\sqrt{s}=250$ GeV) are also shown as dashed lines for comparison.

Also shown in Fig. 5 are the projected reaches for different setups of muon colliders. The corresponding probe limits are adopted from Ref. Han:2020pif , which uses VBF single Higgs production to study the $h_{1}VV$ coupling and vector boson scattering di-Higgs production to study the triple Higgs coupling. It is clear that the FOEWPT parameter space can be probed very efficiently via such an indirect approach. A 3 TeV muon collider is already able to cover most of the data points, and a 30 TeV muon collider could test almost the whole parameter space.

## 4 Conclusion

FOEWPT is an important BSM phenomenon that might have occurred in the early Universe, and it can be probed by today's collider and GW experiments. In this article, we perform a complementarity study of the proposed high-energy muon colliders and the near-future space-based GW detectors in probing the FOEWPT.
Choosing the xSM as the benchmark model, we first derive the FOEWPT parameter space, and then test the possibility of detecting it via GW signals or muon collider experiments. In the calculation of the FOEWPT, we have included the thermal tadpole term, which is dropped in a few previous references. We show that the inclusion of the tadpole term reduces the fraction of "$(0,0)\to(v^{f},v_{s}^{f})$ pattern" FOEWPT data points from 99% to 8.8%. A considerable fraction of the FOEWPT parameter space yields GW signals with ${\rm SNR}\geqslant 10$ at the LISA detector, and thus might be probed. Since the TianQin and Taiji projects in China have projected sensitivities similar to LISA's, we expect the FOEWPT parameter space could be probed and cross-checked by those detectors in the near future. For the muon colliders, we consider center-of-mass energies $\sqrt{s}=3$, 6, 10 and 30 TeV, with corresponding integrated luminosities $\mathcal{L}=1$, 4, 10 and 90 ab-1, respectively. A detailed parton-level collider simulation is performed for the VBF production of the heavy real singlet and its subsequent decays to the di-Higgs ($h_{2}\to h_{1}h_{1}\to b\bar{b}b\bar{b}$) and di-boson ($h_{2}\to ZZ\to\ell^{+}\ell^{-}\ell^{+}\ell^{-}$) channels. The results in Figs. 3 and 4 show that muon colliders offer a great opportunity to probe the FOEWPT parameter space. In the di-Higgs channel, all muon collider setups have a much higher reach than the HL-LHC; in the di-boson channel, the reach of the HL-LHC is comparable with that of a 6 TeV muon collider, but muon colliders with $\sqrt{s}\geqslant 10$ TeV still perform much better. Combining the di-Higgs and di-boson channels allows us to probe almost the whole parameter space. In addition, given the high accuracy of the muon colliders, precision measurements of the Higgs gauge and triple couplings also help to test the FOEWPT.
As for the complementarity with the GW experiments, a remarkable result is that almost all of the parameter space yielding detectable GWs is within the reach of the muon colliders. This implies that if some signals are indeed detected at a GW detector in the future, a muon collider could provide a very useful crosscheck to identify their origin. We also find that there is a large region of parameter space that is not detectable via GWs, but can be probed at the muon colliders.

###### Acknowledgements. We are grateful to Huai-Ke Guo and Ligong Bian for very useful discussions and for sharing their codes. KPX is supported by the Grant Korea NRF-2019R1C1C1010050.

## Appendix A Deriving the phenomenologically allowed potential

This appendix demonstrates how to derive the parameter space of an xSM scalar potential that satisfies current phenomenological bounds. In summary, our strategy contains two steps:

1. Construct a potential as in Eq. (1), and make sure it has a VEV at $(h,s)=(v,0)$ and a Higgs mass $M_{h_{1}}=M_{h}$. In this case, generally $b_{1}\neq 0$.
2. Shift the $s$ field such that $b_{1}=0$, and match the new coefficients to the ones described in Eq. (7) of Section 2.1. In this case, generally $v_{s}\neq 0$.

For the first step, an extremum at $(v,0)$ requires Lewis:2017dme $\mu^{2}=\lambda v^{2},\quad b_{1}=-\frac{v^{2}}{4}a_{1};$ (40) and the other coefficients can be expressed in terms of the mass eigenvalues and the mixing angle, $a_{1}=\frac{s_{2\theta}}{v}\left(M_{h_{1}}^{2}-M_{h_{2}}^{2}\right),\quad b_{2}+\frac{a_{2}}{2}v^{2}=M_{h_{1}}^{2}s_{\theta}^{2}+M_{h_{2}}^{2}c_{\theta}^{2},\quad\lambda=\frac{1}{2v^{2}}\left(M_{h_{1}}^{2}c_{\theta}^{2}+M_{h_{2}}^{2}s_{\theta}^{2}\right).$ (41) Therefore, we can use $\left\{M_{h_{2}},\theta,a_{2},b_{3},b_{4}\right\}$ (42) as input and derive the other coefficients via Eq. (40) and Eq. (41).
We randomly generate the input parameters in the following ranges:\footnote{If $h_{2}$ is too heavy, its thermal effect will be suppressed by the Boltzmann factor and the high-temperature expansion does not apply. Therefore we require $M_{h_{2}}\leqslant 1$ TeV.} $\begin{split}&M_{h_{2}}\in[250,1000]~{}{\rm GeV},\quad\theta\in[0,0.15],\\ &b_{4}\in[0,4\pi/3],\quad a_{2}\in[-2\sqrt{\lambda b_{4}},4\pi],\quad|b_{3}|\in[0,4\pi v],\end{split}$ (43) where the upper limits on $a_{2}$, $|b_{3}|$ and $b_{4}$ come from the unitarity bound Lewis:2017dme ; No:2018fev , while the lower limit on $a_{2}$ is required for a bounded-from-below potential Lewis:2017dme . Note that Eq. (40) and Eq. (41) only guarantee that $(v,0)$ is a local minimum, and there might be another, deeper minimum. For a given set of inputs in Eq. (42), one therefore needs to check explicitly whether $(v,0)$ is the vacuum (i.e. the global minimum). We found that $\sim 38\%$ of the sampled points yield $(v,0)$ as the vacuum; such points are then phenomenologically allowed. For the second step, we shift $s\to s+\sigma$ to get the following redefinitions of the coefficients in Eq. (1) Espinosa:2011ax , $\begin{split}\mu^{2}\to&~{}\mu^{2}-\frac{1}{2}a_{2}\sigma^{2}-\frac{1}{2}a_{1}\sigma,\\ a_{1}\to&~{}a_{1}+2a_{2}\sigma,\\ b_{1}\to&~{}b_{1}+b_{4}\sigma^{3}+b_{3}\sigma^{2}+b_{2}\sigma,\\ b_{2}\to&~{}b_{2}+3b_{4}\sigma^{2}+2b_{3}\sigma,\\ b_{3}\to&~{}b_{3}+3b_{4}\sigma,\end{split}$ (44) and choose $\sigma$ such that the new $b_{1}=0$. As a result, the shifted $s$ has a VEV $v_{s}=-\sigma$. If a negative $v_{s}$ is obtained, a $\mathbb{Z}_{2}$ transformation $s\to-s$, $a_{1}\to-a_{1}$ and $b_{3}\to-b_{3}$ is further performed so that $v_{s}$ becomes positive. Now the new coefficients, combined with $v_{s}$, can be matched to the input parameters in Eq. (7) for the calculation of the FOEWPT and GWs. To check the consistency of our treatment, we have verified that the parameters after the shift satisfy the constraints in Eq. (5) and Eq. (6).
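The two-step construction above can be sketched numerically: derive the remaining coefficients from the inputs of Eq. (42) via Eqs. (40)–(41), then solve the cubic condition from Eq. (44) for the shift $\sigma$ that sets the new $b_{1}$ to zero. This is an illustrative sketch of ours (simple bisection on a wide bracket; the benchmark numbers are not scan points from the paper).

```python
import math

# Sketch (ours) of Appendix A: Eqs. (40)-(41) fix the potential at the
# vacuum (v, 0); the shift s -> s + sigma of Eq. (44) is chosen so that
# the shifted b1 vanishes, giving v_s = -sigma.

v, M_h1 = 246.0, 125.0

def coeffs_from_inputs(M_h2, theta, a2, b3, b4):
    """Eqs. (40)-(41): derive (lam, a1, b2, b1, mu2) from the inputs of Eq. (42)."""
    st, ct = math.sin(theta), math.cos(theta)
    a1 = math.sin(2.0 * theta) / v * (M_h1**2 - M_h2**2)
    lam = (M_h1**2 * ct**2 + M_h2**2 * st**2) / (2.0 * v**2)
    b2 = M_h1**2 * st**2 + M_h2**2 * ct**2 - 0.5 * a2 * v**2
    b1 = -(v**2 / 4.0) * a1
    mu2 = lam * v**2
    return lam, a1, b2, b1, mu2

def b1_after_shift(b1, b2, b3, b4, sigma):
    """Shifted b1 from Eq. (44)."""
    return b1 + b4 * sigma**3 + b3 * sigma**2 + b2 * sigma

def solve_sigma(b1, b2, b3, b4):
    """Bisection for the sigma with vanishing shifted b1 (assumes b4 > 0)."""
    lo, hi = -1e4, 1e4
    f = lambda x: b1_after_shift(b1, b2, b3, b4, x)
    assert f(lo) * f(hi) < 0, "no sign change; widen the bracket"
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

In a full scan one would, between these two steps, also verify numerically that $(v,0)$ is the global minimum, as described above.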
## References * (1) ATLAS collaboration, G. Aad et al., _Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC_ , _Phys. Lett. B_ 716 (2012) 1–29, [1207.7214]. * (2) CMS collaboration, S. Chatrchyan et al., _Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC_ , _Phys. Lett. B_ 716 (2012) 30–61, [1207.7235]. * (3) K. Kajantie, M. Laine, K. Rummukainen and M. E. Shaposhnikov, _A Nonperturbative analysis of the finite T phase transition in SU(2) x U(1) electroweak theory_ , _Nucl. Phys. B_ 493 (1997) 413–438, [hep-lat/9612006]. * (4) K. Rummukainen, M. Tsypin, K. Kajantie, M. Laine and M. E. Shaposhnikov, _The Universality class of the electroweak theory_ , _Nucl. Phys. B_ 532 (1998) 283–314, [hep-lat/9805013]. * (5) M. Laine and K. Rummukainen, _What’s new with the electroweak phase transition?_ , _Nucl. Phys. B Proc. Suppl._ 73 (1999) 180–185, [hep-lat/9809045]. * (6) J. McDonald, _Electroweak baryogenesis and dark matter via a gauge singlet scalar_ , _Phys. Lett. B_ 323 (1994) 339–346. * (7) S. Profumo, M. J. Ramsey-Musolf and G. Shaughnessy, _Singlet Higgs phenomenology and the electroweak phase transition_ , _JHEP_ 08 (2007) 010, [0705.2425]. * (8) J. R. Espinosa, T. Konstandin and F. Riva, _Strong Electroweak Phase Transitions in the Standard Model with a Singlet_ , _Nucl. Phys. B_ 854 (2012) 592–630, [1107.5441]. * (9) J. M. Cline and K. Kainulainen, _Electroweak baryogenesis and dark matter from a singlet Higgs_ , _JCAP_ 01 (2013) 012, [1210.4196]. * (10) T. Alanne, K. Tuominen and V. Vaskonen, _Strong phase transition, dark matter and vacuum stability from simple hidden sectors_ , _Nucl. Phys. B_ 889 (2014) 692–711, [1407.0688]. * (11) S. Profumo, M. J. Ramsey-Musolf, C. L. Wainwright and P. Winslow, _Singlet-catalyzed electroweak phase transitions and precision Higgs boson studies_ , _Phys. Rev. D_ 91 (2015) 035018, [1407.5342]. * (12) A. Alves, T. Ghosh, H.-K. 
Guo, K. Sinha and D. Vagie, _Collider and Gravitational Wave Complementarity in Exploring the Singlet Extension of the Standard Model_ , _JHEP_ 04 (2019) 052, [1812.09333]. * (13) V. Vaskonen, _Electroweak baryogenesis and gravitational waves from a real scalar singlet_ , _Phys. Rev. D_ 95 (2017) 123515, [1611.02073]. * (14) F. P. Huang, Z. Qian and M. Zhang, _Exploring dynamical CP violation induced baryogenesis by gravitational waves and colliders_ , _Phys. Rev. D_ 98 (2018) 015014, [1804.06813]. * (15) W. Cheng and L. Bian, _From inflation to cosmological electroweak phase transition with a complex scalar singlet_ , _Phys. Rev. D_ 98 (2018) 023524, [1801.00662]. * (16) T. Alanne, T. Hugle, M. Platscher and K. Schmitz, _A fresh look at the gravitational-wave signal from cosmological phase transitions_ , _JHEP_ 03 (2020) 004, [1909.11356]. * (17) O. Gould, J. Kozaczuk, L. Niemi, M. J. Ramsey-Musolf, T. V. Tenkanen and D. J. Weir, _Nonperturbative analysis of the gravitational waves from a first-order electroweak phase transition_ , _Phys. Rev. D_ 100 (2019) 115024, [1903.11604]. * (18) M. Carena, Z. Liu and Y. Wang, _Electroweak phase transition with spontaneous Z 2-breaking_, _JHEP_ 08 (2020) 107, [1911.10206]. * (19) K. Ghorbani and P. H. Ghorbani, _Strongly First-Order Phase Transition in Real Singlet Scalar Dark Matter Model_ , _J. Phys. G_ 47 (2020) 015201, [1804.05798]. * (20) P. H. Ghorbani, _Electroweak Baryogenesis and Dark Matter via a Pseudoscalar vs. Scalar_ , _JHEP_ 08 (2017) 058, [1703.06506]. * (21) N. Turok and J. Zadrozny, _Electroweak baryogenesis in the two doublet model_ , _Nucl. Phys. B_ 358 (1991) 471–493. * (22) N. Turok and J. Zadrozny, _Phase transitions in the two doublet model_ , _Nucl. Phys. B_ 369 (1992) 729–742. * (23) J. M. Cline, K. Kainulainen and M. Trott, _Electroweak Baryogenesis in Two Higgs Doublet Models and B meson anomalies_ , _JHEP_ 11 (2011) 089, [1107.3559]. * (24) G. Dorsch, S. Huber and J. 
No, _A strong electroweak phase transition in the 2HDM after LHC8_ , _JHEP_ 10 (2013) 029, [1305.6610]. * (25) W. Chao and M. J. Ramsey-Musolf, _Catalysis of Electroweak Baryogenesis via Fermionic Higgs Portal Dark Matter_ , 1503.00028. * (26) P. Basler, M. Krause, M. Muhlleitner, J. Wittbrodt and A. Wlotzka, _Strong First Order Electroweak Phase Transition in the CP-Conserving 2HDM Revisited_ , _JHEP_ 02 (2017) 121, [1612.04086]. * (27) A. Haarr, A. Kvellestad and T. C. Petersen, _Disfavouring Electroweak Baryogenesis and a hidden Higgs in a CP-violating Two-Higgs-Doublet Model_ , 1611.05757. * (28) G. Dorsch, S. Huber, K. Mimasu and J. No, _The Higgs Vacuum Uplifted: Revisiting the Electroweak Phase Transition with a Second Higgs Doublet_ , _JHEP_ 12 (2017) 086, [1705.09186]. * (29) J. O. Andersen, T. Gorda, A. Helset, L. Niemi, T. V. I. Tenkanen, A. Tranberg et al., _Nonperturbative Analysis of the Electroweak Phase Transition in the Two Higgs Doublet Model_ , _Phys. Rev. Lett._ 121 (2018) 191802, [1711.09849]. * (30) J. Bernon, L. Bian and Y. Jiang, _A new insight into the phase transition in the early Universe with two Higgs doublets_ , _JHEP_ 05 (2018) 151, [1712.08430]. * (31) L. Wang, J. M. Yang, M. Zhang and Y. Zhang, _Revisiting lepton-specific 2HDM in light of muon $g-2$ anomaly_, _Phys. Lett. B_ 788 (2019) 519–529, [1809.05857]. * (32) X. Wang, F. P. Huang and X. Zhang, _Gravitational wave and collider signals in complex two-Higgs doublet model with dynamical CP-violation at finite temperature_ , _Phys. Rev. D_ 101 (2020) 015015, [1909.02978]. * (33) K. Kainulainen, V. Keus, L. Niemi, K. Rummukainen, T. V. Tenkanen and V. Vaskonen, _On the validity of perturbative studies of the electroweak phase transition in the Two Higgs Doublet model_ , _JHEP_ 06 (2019) 075, [1904.01329]. * (34) W. Su, A. G. Williams and M. Zhang, _Strong first order electroweak phase transition in 2HDM confronting future Z & Higgs factories_, 2011.04540. * (35) V. Brdar, L. 
Graf, A. J. Helmboldt and X.-J. Xu, _Gravitational Waves as a Probe of Left-Right Symmetry Breaking_ , _JCAP_ 12 (2019) 027, [1909.02018]. * (36) M. Li, Q.-S. Yan, Y. Zhang and Z. Zhao, _Prospects of gravitational waves in the minimal left-right symmetric model_ , 2012.13686. * (37) W.-C. Huang, F. Sannino and Z.-W. Wang, _Gravitational Waves from Pati-Salam Dynamics_ , _Phys. Rev. D_ 102 (2020) 095025, [2004.02332]. * (38) R. Zhou, W. Cheng, X. Deng, L. Bian and Y. Wu, _Electroweak phase transition and Higgs phenomenology in the Georgi-Machacek model_ , _JHEP_ 01 (2019) 216, [1812.06217]. * (39) J. R. Espinosa, B. Gripaios, T. Konstandin and F. Riva, _Electroweak Baryogenesis in Non-minimal Composite Higgs Models_ , _JCAP_ 01 (2012) 012, [1110.2876]. * (40) M. Chala, G. Nardini and I. Sobolev, _Unified explanation for dark matter and electroweak baryogenesis with direct detection and gravitational wave signatures_ , _Phys. Rev. D_ 94 (2016) 055006, [1605.08663]. * (41) M. Chala, M. Ramos and M. Spannowsky, _Gravitational wave and collider probes of a triplet Higgs sector with a low cutoff_ , _Eur. Phys. J. C_ 79 (2019) 156, [1812.01901]. * (42) S. Bruggisser, B. Von Harling, O. Matsedonskyi and G. Servant, _Baryon Asymmetry from a Composite Higgs Boson_ , _Phys. Rev. Lett._ 121 (2018) 131801, [1803.08546]. * (43) S. Bruggisser, B. Von Harling, O. Matsedonskyi and G. Servant, _Electroweak Phase Transition and Baryogenesis in Composite Higgs Models_ , _JHEP_ 12 (2018) 099, [1804.07314]. * (44) L. Bian, Y. Wu and K.-P. Xie, _Electroweak phase transition with composite Higgs models: calculability, gravitational waves and collider searches_ , _JHEP_ 12 (2019) 028, [1909.02014]. * (45) S. De Curtis, L. Delle Rose and G. Panico, _Composite Dynamics in the Early Universe_ , _JHEP_ 12 (2019) 149, [1909.07894]. * (46) K.-P. Xie, L. Bian and Y. 
Wu, _Electroweak baryogenesis and gravitational waves in a composite Higgs model with high dimensional fermion representations_ , _JHEP_ 12 (2020) 047, [2005.13552]. * (47) D. E. Morrissey and M. J. Ramsey-Musolf, _Electroweak baryogenesis_ , _New J. Phys._ 14 (2012) 125003, [1206.2942]. * (48) J. M. Cline, _Baryogenesis_ , in _Les Houches Summer School - Session 86: Particle Physics and Cosmology: The Fabric of Spacetime_ , 9, 2006\. hep-ph/0609145. * (49) M. Trodden, _Electroweak baryogenesis_ , _Rev. Mod. Phys._ 71 (1999) 1463–1500, [hep-ph/9803479]. * (50) LISA collaboration, P. Amaro-Seoane et al., _Laser Interferometer Space Antenna_ , 1702.00786. * (51) J. Crowder and N. J. Cornish, _Beyond LISA: Exploring future gravitational wave missions_ , _Phys. Rev. D_ 72 (2005) 083005, [gr-qc/0506015]. * (52) TianQin collaboration, J. Luo et al., _TianQin: a space-borne gravitational wave detector_ , _Class. Quant. Grav._ 33 (2016) 035010, [1512.02076]. * (53) Y.-M. Hu, J. Mei and J. Luo, _Science prospects for space-borne gravitational-wave missions_ , _Natl. Sci. Rev._ 4 (2017) 683–684. * (54) W.-R. Hu and Y.-L. Wu, _The Taiji Program in Space for gravitational wave physics and the nature of gravity_ , _Natl. Sci. Rev._ 4 (2017) 685–686. * (55) W.-H. Ruan, Z.-K. Guo, R.-G. Cai and Y.-Z. Zhang, _Taiji program: Gravitational-wave sources_ , _Int. J. Mod. Phys. A_ 35 (2020) 2050075, [1807.09495]. * (56) S. Kawamura et al., _The Japanese space gravitational wave antenna: DECIGO_ , _Class. Quant. Grav._ 28 (2011) 094011. * (57) S. Kawamura et al., _The Japanese space gravitational wave antenna DECIGO_ , _Class. Quant. Grav._ 23 (2006) S125–S132. * (58) M. J. Ramsey-Musolf, _The electroweak phase transition: a collider target_ , _JHEP_ 09 (2020) 179, [1912.07189]. * (59) FCC collaboration, A. Abada et al., _HE-LHC: The High-Energy Large Hadron Collider: Future Circular Collider Conceptual Design Report Volume 4_ , _Eur. Phys. J. ST_ 228 (2019) 1109–1382. * (60) M. 
Further author information: (Send correspondence to Ali Vosoughi) Ali Vosoughi: E-mail<EMAIL_ADDRESS>

# Classification of Schizophrenia from Functional MRI Using Large-scale Extended Granger Causality

Axel Wismüller: Department of Electrical and Computer Engineering, University of Rochester, NY, USA; Department of Imaging Sciences, University of Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, NY, USA; Faculty of Medicine and Institute of Clinical Radiology, Ludwig Maximilian University, Munich, Germany

M. Ali Vosoughi: Department of Electrical and Computer Engineering, University of Rochester, NY, USA

###### Abstract
The literature indicates that schizophrenia is associated with alterations in brain network connectivity. We investigate whether large-scale Extended Granger Causality (lsXGC) can capture such alterations using resting-state fMRI data. Our method combines dimension reduction with augmentation of the source time-series in a predictive time-series model for estimating directed causal relationships among fMRI time-series. lsXGC is a multivariate approach, since it identifies the relationship of the underlying dynamic system in the presence of all other time-series. Here, lsXGC serves as a biomarker for classifying schizophrenia patients from typical controls using a subset of 62 subjects from the Centers of Biomedical Research Excellence (COBRE) data repository. We use brain connections estimated by lsXGC as features for classification. After feature extraction, we perform feature selection with Kendall’s tau rank correlation coefficient, followed by classification using a support vector machine.
As a reference method, we compare our results with cross-correlation, which is typically used in the literature as a standard measure of functional connectivity. We cross-validate over 100 different training/test (90%/10%) data splits to obtain the mean accuracy and the mean Area Under the receiver operating characteristic Curve (AUC) across all tested numbers of features for lsXGC. Our results demonstrate a mean accuracy range of [0.767, 0.940] and a mean AUC range of [0.861, 0.983] for lsXGC. These results are significantly higher than those obtained with cross-correlation, namely a mean accuracy of [0.721, 0.751] and a mean AUC of [0.744, 0.860]. Our results suggest the applicability of lsXGC as a potential biomarker for schizophrenia.

###### keywords: machine learning, resting-state fMRI, Granger causality, functional connectivity, feature space, schizophrenia disorder

## 1 INTRODUCTION

Schizophrenia is a psychiatric disorder characterized by thoughts or experiences that are out of touch with reality, decreased participation in daily activities, and disorganized speech or behavior; difficulty with concentration and memory may also be present. Schizophrenia is currently diagnosed through clinical evaluation of symptoms and behaviors; nevertheless, measurable biomarkers would be beneficial. Recent studies on brain imaging data have shown that such information can be extracted non-invasively from brain activity. Despite these studies’ promising results, there is still scope for improvement, especially through more meaningful connectivity analysis approaches [1]. Extensive evidence has demonstrated that schizophrenia affects the brain’s connectivity [2]. Biomarkers for schizophrenia can be derived from resting-state functional MRI (rs-fMRI) using Multi-Voxel Pattern Analysis (MVPA) techniques [3].
MVPA is a pattern-recognition framework that extracts differences in brain connectivity patterns between healthy individuals and individuals with neurological disease. Cross-correlation is commonly used in most MVPA studies to obtain a functional connectivity profile. For instance, one such study obtained an accuracy of 0.79 on the slow frequency bands (0.01-0.1 Hz) [4]. One can therefore argue that connectivity analysis of fMRI data can be used to learn meaningful information. However, cross-correlation cannot provide directed measures of connectivity. Therefore, there may be relevant information in the fMRI data that is not being captured by cross-correlation. Several methods have been proposed to capture directional relations in multivariate time-series data, e.g., transfer entropy [5] and mutual information [6]. However, as the dimension of the multivariate problem increases, the computation of the density function becomes computationally expensive [7, 8]. Under the Gaussian assumption, transfer entropy is equivalent to Granger causality [9]. However, the computation of multivariate Granger causality for short time-series in large-scale problems is challenging [10, 11]. Large-scale Extended Granger Causality (lsXGC) is a recently proposed method for estimating directed causal relationships among fMRI time-series that combines dimension reduction with source time-series augmentation and uses predictive time-series modeling [12]. In this work, we investigate whether alterations in directed connectivity are evident in individuals with schizophrenia and whether such directed measures enhance our ability to discriminate between schizophrenia patients and healthy controls. To this end, we apply lsXGC within the MVPA framework to estimate a measure of directed causal interdependence between fMRI time-series.
This work is embedded in our group’s endeavor to expedite artificial intelligence in biomedical imaging by means of advanced pattern recognition and machine learning methods for computational radiology and radiomics, e.g., [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70].

## 2 DATA

### 2.1 Participants

The Centers of Biomedical Research Excellence (COBRE) data repository contains raw anatomical and functional MR data from 72 patients with schizophrenia and 74 healthy controls (ages ranging from 18 to 65 in each group). Subjects were screened and excluded if they had a history of mental retardation, a history of neurological disorder, a history of severe head trauma with more than 5 minutes of loss of consciousness, or a history of substance dependence or abuse within the last 12 months [2]. Diagnostic information was collected using the Structured Clinical Interview for DSM Disorders (SCID) [2].

### 2.2 Resting-state fMRI data

A multi-echo MPRAGE (MEMPR) sequence was used with the following parameters: TR/TE/TI = 2530/[1.64, 3.5, 5.36, 7.22, 9.08]/900 ms, flip angle = $7^{\circ}$, FOV = 256 x 256 mm$^{2}$, slab thickness = 176 mm, matrix = 256 x 256 x 176, voxel size = 1 x 1 x 1 mm$^{3}$, number of echoes = 5, pixel bandwidth = 650 Hz, total scan time = 6 min. With 5 echoes, the TR, TI, and time to encode partitions for the MEMPR are similar to those of a conventional MPRAGE, resulting in similar GM/WM/CSF contrast. Resting-state fMRI data were collected with single-shot full k-space echo-planar imaging (EPI) with ramp sampling correction using the intercommissural line (AC-PC) as a reference (TR = 2 s, TE = 29 ms, matrix size = 64 x 64, 32 slices, voxel size = 3 x 3 x 4 mm$^{3}$).
Functional connectivity measurements were generated from a subsample of the COBRE dataset [71], a publicly available sample which we accessed through the Nilearn Python library [72]. All healthy controls and diseased patients under the age of 32 were selected, comprising 33 healthy and 29 diseased subjects, for a total of 62 individuals. The images had already been preprocessed using the NIAK resting-state pipeline [73]; additional details can be found in the reference [71]. The number of regions of interest was set to 122 using functional brain parcellations [74].

## 3 METHODS

### 3.1 Large-scale Extended Granger Causality (lsXGC)

The Large-scale Extended Granger Causality (lsXGC) method builds on 1) the principle of original Granger causality, which quantifies the causal influence of a time-series $\mathbf{x_{s}}$ on a time-series $\mathbf{x_{t}}$ by the improvement in the forecast of $\mathbf{x_{t}}$ in the presence of $\mathbf{x_{s}}$, and 2) the idea of dimensionality reduction, which addresses the ill-posedness frequently encountered in fMRI analysis, where the number of acquired temporal samples is usually insufficient for estimating the model parameters [65, 10]. Consider the ensemble of time-series $\mathcal{X}\in\mathbb{R}^{N\times T}$, where $N$ is the number of time-series (regions of interest, ROIs) and $T$ is the number of temporal samples. Let $\mathcal{X}=(\mathbf{x_{1}},\mathbf{x_{2}},\dots,\mathbf{x_{N}})^{\mathsf{T}}$ be the whole multidimensional system and $\mathbf{x_{i}}\in\mathbb{R}^{1\times T}$ a single time-series with $i=1,2,\dots,N$, where $\mathbf{x_{i}}=(x_{i}(1),x_{i}(2),\dots,x_{i}(T))$.
To overcome the ill-posed problem, $\mathcal{X}$ is first decomposed into its first $p$ high-variance principal components $\mathcal{Z}\in\mathbb{R}^{p\times T}$ using Principal Component Analysis (PCA), i.e.,

$\mathcal{Z}=W\mathcal{X},$ (1)

where $W\in\mathbb{R}^{p\times N}$ represents the PCA coefficient matrix. Subsequently, the dimension-reduced time-series ensemble $\mathcal{Z}$ is augmented by one original time-series $\mathbf{x_{s}}$, yielding a dimension-reduced augmented time-series ensemble $\mathcal{Y}\in\mathbb{R}^{(p+1)\times T}$ for estimating the influence of $\mathbf{x_{s}}$ on all other time-series. Following this, we locally predict $\mathcal{X}$ at each time sample $t$, i.e., $\mathcal{X}(t)\in\mathbb{R}^{N\times 1}$, by calculating an estimate $\hat{\mathcal{X}}_{\mathbf{x_{s}}}(t)$. To this end, we fit an affine model based on a vector of $m$ time samples of $\mathcal{Y}(\tau)\in\mathbb{R}^{(p+1)\times 1}$ ($\tau=t-1,t-2,\dots,t-m$), which is $\mathbf{y}(t)\in\mathbb{R}^{m(p+1)\times 1}$, with a parameter matrix $\mathcal{A}\in\mathbb{R}^{N\times m(p+1)}$ and a constant bias vector $\mathbf{b}\in\mathbb{R}^{N\times 1}$:

$\hat{\mathcal{X}}_{\mathbf{x_{s}}}(t)=\mathcal{A}\mathbf{y}(t)+\mathbf{b},~{}~{}t=m+1,m+2,\dots,T.$ (2)

Next, $\hat{\mathcal{X}}_{\setminus{\mathbf{x_{s}}}}(t)$, the prediction of $\mathcal{X}(t)$ without the information of $\mathbf{x_{s}}$, is estimated. The estimation process is identical to the previous one, with the only difference being that the augmented time-series $\mathbf{x_{s}}$ and its corresponding column in the PCA coefficient matrix $W$ are removed. The computation of the lsXGC index is based on comparing the variance of the prediction errors obtained with and without consideration of $\mathbf{x_{s}}$.
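As a concrete illustration of the dimension-reduction, augmentation, and prediction steps above, the following is a minimal numpy sketch of the lsXGC computation. It is not the authors' implementation: the least-squares fitting and all function names are illustrative assumptions, and the index uses the log-variance-ratio convention given in eq. (3) of the text.

```python
import numpy as np

def prediction_errors(X, Y, m):
    """Least-squares affine prediction of X(t) from m past samples of Y
    (eq. 2); returns the (N, T-m) matrix of prediction errors."""
    N, T = X.shape
    # Stack m lagged copies of Y into the regressor y(t), plus a bias row.
    lags = [Y[:, m - k - 1:T - k - 1] for k in range(m)]
    D = np.vstack(lags + [np.ones((1, T - m))])
    target = X[:, m:]
    coef, *_ = np.linalg.lstsq(D.T, target.T, rcond=None)
    return target - coef.T @ D

def lsxgc(X, p=8, m=1):
    """Sketch of large-scale Extended Granger Causality.

    X: (N, T) array of N time-series; returns F with F[s, t] the
    index f_{x_s -> x_t} as defined in eq. (3) of the text."""
    N, T = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    # Eq. (1): PCA coefficient matrix W from the SVD of the centered data.
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    W = U[:, :p].T                              # (p, N)
    F = np.empty((N, N))
    for s in range(N):
        # Augment the dimension-reduced ensemble Z = W X with the source x_s.
        Y_aug = np.vstack([W @ Xc, Xc[s:s + 1]])
        e_with = prediction_errors(Xc, Y_aug, m)
        # Without x_s: drop both the augmented series and column s of W.
        keep = np.arange(N) != s
        e_without = prediction_errors(Xc, W[:, keep] @ Xc[keep], m)
        # Eq. (3): log ratio of prediction-error variances.
        F[s] = np.log(e_with.var(axis=1) / e_without.var(axis=1))
    return F
```

Under this sign convention, a strongly negative F[s, t] indicates that including x_s substantially reduces the error in predicting x_t.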
The lsXGC index $f_{\mathbf{x_{s}}\xrightarrow{}\mathbf{x_{t}}}$, which indicates the influence of $\mathbf{x_{s}}$ on $\mathbf{x_{t}}$, is calculated as:

$f_{\mathbf{x_{s}}\xrightarrow{}\mathbf{x_{t}}}=\log{\frac{\mathrm{var}(e_{s})}{\mathrm{var}(e_{\setminus s})}},$ (3)

where $e_{\setminus s}$ is the error in predicting $\mathbf{x_{t}}$ when $\mathbf{x_{s}}$ was not considered, and $e_{s}$ is the error when $\mathbf{x_{s}}$ was used. In this study, we set $p=8$ and $m=1$.

### 3.2 Multi-voxel pattern analysis

Brain connections served as features for classification in this study and were estimated by two methods, namely lsXGC and cross-correlation. Before using the high-dimensional connectivity feature vectors as input to a classifier, feature selection was carried out to reduce the dimension of the input features.

#### 3.2.1 Feature selection

To reduce the number of features, feature selection was performed on each training data set during cross-validation using Kendall’s tau rank correlation coefficient [75], with a $10\%$/$90\%$ test-to-train split ratio. This approach quantifies each feature’s relevance to the classification task and assigns ranks by testing for independence between the different classes for each feature [75].

#### 3.2.2 Classification

To cross-validate the classification performance over 100 iterations, the data set was divided into a training set ($90\%$) and a test set ($10\%$) such that the percentage of samples from each class was preserved. This was repeated with different numbers of features, ranging from 5 to 175. A Support Vector Machine (SVM) [76] was used for classification between healthy subjects and schizophrenia patients. All procedures were performed using MATLAB 9.8 (MathWorks Inc., Natick, MA, 2020a) and Python 3.8.

## 4 RESULTS

Mean connectivity matrices, extracted using lsXGC and cross-correlation, are shown in Fig.
1 for the schizophrenia patient and healthy control cohorts. Distinct patterns are visible to the naked eye for both methods. In the following, we quantitatively investigate the difference between the two cohorts’ connectivity patterns using an MVPA approach.

Figure 1: Mean connectivity matrices. Top left: mean connectivity matrix of healthy control subjects using lsXGC; top right: mean connectivity matrix of schizophrenia patients using lsXGC; bottom left: mean connectivity matrix of healthy control subjects using cross-correlation; bottom right: mean connectivity matrix of schizophrenia patients using cross-correlation. Remarkably, the two methods appear to extract different connectivity features, and there appear to be slight differences in connectivity patterns between the healthy subjects and the schizophrenia patients.

Classification results were evaluated using the Area Under the Receiver Operating Characteristic Curve (AUC) and accuracy. An AUC of 1 indicates perfect classification, while an AUC of 0.5 indicates random class assignment.

(a) Mean AUC (b) Mean accuracy

Figure 2: Plots comparing the performance of cross-correlation and the proposed large-scale extended Granger causality (lsXGC). The shaded areas represent the $95\%$ confidence interval. lsXGC outperforms cross-correlation for most numbers of selected features.

In this study, we chose eight as the number of retained PCA components in the lsXGC algorithm and a model order of 1 for the multivariate vector autoregression, based on preliminary analyses. The plots of accuracy and AUC results in Fig. 2 clearly demonstrate that lsXGC outperforms cross-correlation across a wide range of feature counts. Across the examined numbers of features, the performance of lsXGC is consistently higher, with its mean AUC within [0.861, 0.983] and its mean accuracy within [0.767, 0.940].
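The feature-selection and classification protocol of Section 3.2 can be approximated with the sketch below. This is an illustrative reconstruction, not the authors' code: SciPy's `kendalltau` and scikit-learn's `SVC` stand in for the MATLAB/Python tooling used in the study, and all function names are assumptions.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.svm import SVC

def rank_features(X, y):
    """Rank features by the absolute Kendall tau between each feature
    and the class labels (most relevant first)."""
    taus = [abs(kendalltau(X[:, j], y)[0]) for j in range(X.shape[1])]
    return np.argsort(-np.asarray(taus))

def mean_accuracy(X, y, n_features=50, n_iter=100, seed=0):
    """Stratified 90%/10% train/test splits, repeated n_iter times;
    features are re-selected on each training set (as in Sec. 3.2.1)."""
    splits = StratifiedShuffleSplit(n_splits=n_iter, test_size=0.1,
                                    random_state=seed)
    accs = []
    for train, test in splits.split(X, y):
        top = rank_features(X[train], y[train])[:n_features]
        clf = SVC().fit(X[train][:, top], y[train])
        accs.append(clf.score(X[test][:, top], y[test]))
    return float(np.mean(accs))
```

In the study, flattened connectivity matrices (one row per subject) play the role of X, and the number of retained features is swept from 5 to 175.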
On the other hand, cross-correlation performs considerably worse than lsXGC, with its mean AUC within [0.744, 0.860] and its mean accuracy within [0.721, 0.751].

## 5 CONCLUSIONS

In this work, we apply a recently developed method for brain connectivity analysis, large-scale Extended Granger Causality (lsXGC), to a subset of the COBRE data repository to classify individuals with schizophrenia against typical controls by estimating a measure of directed causal relations among regional brain activities recorded in resting-state fMRI. Following the construction of connectivity matrices as characterizing features for brain network analysis, we use Kendall’s tau rank correlation coefficient to select significant features and a support vector machine for classification. We demonstrate that our method (lsXGC) compares favorably to standard analysis using cross-correlation, as shown by significantly improved accuracy and AUC values. The effectiveness of lsXGC as a biomarker for identifying schizophrenia remains to be validated in prospective clinical trials. Nevertheless, our results suggest that our approach outperforms the current clinical standard, namely cross-correlation, at revealing meaningful information from functional MRI data.

###### Acknowledgements.
This research was funded by an Ernest J. Del Monte Institute for Neuroscience Award from the Harry T. Mangurian Jr. Foundation. This work was conducted as a Practice Quality Improvement (PQI) project related to American Board of Radiology (ABR) Maintenance of Certificate (MOC) for Prof. Dr. Axel Wismüller. This work is not being and has not been submitted for publication or presentation elsewhere.

## References

* [1] Li, A., Zalesky, A., Yue, W., Howes, O., Yan, H., Liu, Y., Fan, L., Whitaker, K. J., Xu, K., Rao, G., et al., “A neuroimaging biomarker for striatal dysfunction in schizophrenia,” Nature Medicine 26(4), 558–565 (2020). * [2] Calhoun, V. D., Sui, J., Kiehl, K., Turner, J. A., Allen, E.
A., and Pearlson, G., “Exploring the psychosis functional connectome: aberrant intrinsic networks in schizophrenia and bipolar disorder,” Frontiers in psychiatry 2, 75 (2012). * [3] Norman, K. A., Polyn, S. M., Detre, G. J., and Haxby, J. V., “Beyond mind-reading: multi-voxel pattern analysis of fMRI data,” Trends in cognitive sciences 10(9), 424–430 (2006). * [4] Cheng, H., Newman, S., Goñi, J., Kent, J. S., Howell, J., Bolbecker, A., Puce, A., O’Donnell, B. F., and Hetrick, W. P., “Nodal centrality of functional network in the differentiation of schizophrenia,” Schizophrenia research 168(1-2), 345–352 (2015). * [5] Schreiber, T., “Measuring information transfer,” Physical review letters 85(2), 461 (2000). * [6] Kraskov, A., Stögbauer, H., and Grassberger, P., “Estimating mutual information,” Physical review E 69(6), 066138 (2004). * [7] Mozaffari, M. and Yilmaz, Y., “Online multivariate anomaly detection and localization for high-dimensional settings,” arXiv preprint arXiv:1905.07107 (2019). * [8] Mozaffari, M. and Yilmaz, Y., “Online anomaly detection in multivariate settings,” in [2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP) ], 1–6, IEEE (2019). * [9] Barnett, L., Barrett, A. B., and Seth, A. K., “Granger causality and transfer entropy are equivalent for Gaussian variables,” Physical review letters 103(23), 238701 (2009). * [10] Vosoughi, M. A. and Wismüller, A., “Large-scale kernelized Granger causality to infer topology of directed graphs with applications to brain networks,” arXiv preprint arXiv:2011.08261 (2020). * [11] Wismüller, A., DSouza, A. M., Abidin, A. Z., and Vosoughi, M. A., “Large-scale nonlinear Granger causality: A data-driven, multivariate approach to recovering directed networks from short time-series data,” arXiv preprint arXiv:2009.04681 (2020). * [12] Vosoughi, M. A. 
and Wismüller, A., “Large-scale extended Granger causality for classification of marijuana users from functional MRI,” arXiv preprint arXiv:2101.01832 (2021). * [13] Nattkemper, T. W. and Wismüller, A., “Tumor feature visualization with unsupervised learning,” Medical Image Analysis 9(4), 344–351 (2005). * [14] Bunte, K., Hammer, B., Wismüller, A., and Biehl, M., “Adaptive local dissimilarity measures for discriminative dimension reduction of labeled data,” Neurocomputing 73(7-9), 1074–1092 (2010). * [15] Wismüller, A., Vietze, F., and Dersch, D. R., “Segmentation with neural networks,” in [Handbook of medical imaging ], 107–126, Academic Press, Inc. (2000). * [16] Leinsinger, G., Schlossbauer, T., Scherr, M., Lange, O., Reiser, M., and Wismüller, A., “Cluster analysis of signal-intensity time course in dynamic breast MRI: does unsupervised vector quantization help to evaluate small mammographic lesions?,” European radiology 16(5), 1138–1146 (2006). * [17] Wismüller, A., Vietze, F., Behrends, J., Meyer-Baese, A., Reiser, M., and Ritter, H., “Fully automated biomedical image segmentation by self-organized model adaptation,” Neural Networks 17(8-9), 1327–1344 (2004). * [18] Hoole, P., Wismüller, A., Leinsinger, G., Kroos, C., Geumann, A., and Inoue, M., “Analysis of tongue configuration in multi-speaker, multi-volume MRI data,” (2000). * [19] Wismüller, A., “Exploratory morphogenesis (XOM): a novel computational framework for self-organization,” Ph. D. thesis, Technical University of Munich, Department of Electrical and Computer Engineering (2006). * [20] Wismüller, A., Dersch, D. R., Lipinski, B., Hahn, K., and Auer, D., “A neural network approach to functional MRI pattern analysis—clustering of time-series by hierarchical vector quantization,” in [International Conference on Artificial Neural Networks ], 857–862, Springer (1998). * [21] Wismüller, A., Vietze, F., Dersch, D. 
R., Behrends, J., Hahn, K., and Ritter, H., “The deformable feature map-a novel neurocomputing algorithm for adaptive plasticity in pattern analysis,” Neurocomputing 48(1-4), 107–139 (2002). * [22] Behrends, J., Hoole, P., Leinsinger, G. L., Tillmann, H. G., Hahn, K., Reiser, M., and Wismüller, A., “A segmentation and analysis method for MRI data of the human vocal tract,” in [Bildverarbeitung für die Medizin 2003 ], 186–190, Springer (2003). * [23] Wismüller, A., “Neural network computation in biomedical research: chances for conceptual cross-fertilization,” Theory in Biosciences (1997). * [24] Bunte, K., Hammer, B., Villmann, T., Biehl, M., and Wismüller, A., “Exploratory observation machine (XOM) with Kullback-Leibler divergence for dimensionality reduction and visualization.,” in [ESANN ], 10, 87–92 (2010). * [25] Wismüller, A., Vietze, F., Dersch, D. R., Hahn, K., and Ritter, H., “The deformable feature map—adaptive plasticity for function approximation,” in [International Conference on Artificial Neural Networks ], 123–128, Springer (1998). * [26] Wismüller, A., “The exploration machine–a novel method for data visualization,” in [International Workshop on Self-Organizing Maps ], 344–352, Springer (2009). * [27] Wismüller, A., “Method, data processing device and computer program product for processing data,” (July 28 2009). US Patent 7,567,889. * [28] Huber, M. B., Nagarajan, M., Leinsinger, G., Ray, L. A., and Wismüller, A., “Classification of interstitial lung disease patterns with topological texture features,” in [Medical Imaging 2010: Computer-Aided Diagnosis ], 7624, 762410, International Society for Optics and Photonics (2010). * [29] Wismüller, A., “The exploration machine: a novel method for analyzing high-dimensional data in computer-aided diagnosis,” in [Medical Imaging 2009: Computer-Aided Diagnosis ], 7260, 72600G, International Society for Optics and Photonics (2009). 
# Appliance Operation Modes Identification Using Cycles Clustering

Abdelkareem Jaradat, Hanan Lutfiyya, Anwar Haque {ajarada3, hlutfiyy<EMAIL_ADDRESS> The University Of Western Ontario - Department Of Computer Science London, Ontario, Canada

###### Abstract
The increasing cost, energy demand, and environmental issues have led many researchers to find approaches to energy monitoring, and hence energy conservation. The emerging technologies of the Internet of Things (IoT) and Machine Learning (ML) deliver techniques that have the potential to conserve energy efficiently and improve the utilization of energy consumption. Smart Home Energy Management Systems (SHEMSs) have the potential to contribute to energy conservation through the application of Demand Response (DR) in the residential sector. In this paper, we propose appliance Operation Modes Identification using Cycles Clustering (OMICC), a fundamental SHEMS approach that utilizes the sensed residential disaggregated power consumption to support DR by providing consumers with the opportunity to select lighter appliance operation modes. The cycles of the Single Usage Profile (SUP) of an appliance are extracted and reformed into features in terms of clusters of cycles. These features are then used to identify the operation mode used in every occurrence using K-Nearest Neighbors (KNN). Operation mode identification is considered a basis for many potential smart DR applications within SHEMSs towards the consumers or the suppliers.

## I Introduction

The conservation of non-renewable resources is of interest not only because these resources are not renewable in the face of accelerated demand by the industrial and residential sectors, but also because of their impact on the climate [14]. Extensive research is currently being carried out on finding solutions to use energy sources efficiently.
Research suggests that involving the consumer in energy conservation through insights on the details of energy consumption is effective and creates a noticeable difference in the reduction of energy waste [3]. Such involvement preferably requires providing consumers with individual load-level consumption information (disaggregated) rather than only the mains consumption information (aggregated) that represents the total consumption of all loads [21]. Exposing individual appliance-level energy information can reduce energy consumption by increasing the consumer's awareness of the consumption of individual appliances [16]. Disaggregated consumption information can be obtained with the assistance of smart meters and sensors which can measure per-appliance electricity consumption readings, or by decomposing the aggregated mains signal into individual appliance signals using event detection algorithms through Non-Intrusive Load Monitoring (NILM) methods [32]. A mechanism that is popular in many studies, Demand Response (DR), provides an opportunity for consumers to contribute to the reduction of electricity consumption by reducing their consumption or by shifting their electricity usage to off-peak periods in response to time-based tariffs, through either monetary or non-monetary incentives [17]. DR may be used to increase demand during periods of high supply and low demand, or DR could be used together with proper feedback to the consumer to reduce the power consumption per appliance. DR is also considered a more cost-effective option than investing in more power supplies, such as building extra power stations [9]. This is not a new concept, since it has been extensively investigated in recent years as an approach to reducing power consumption [17]. However, in most countries DR still plays a very limited role in energy conservation [29].
This is due to the fact that DR requires buildings to be equipped with sensors and automation systems, which have still not been widely adopted [25]. Smart Home Energy Management Systems (SHEMSs) have the potential to connect the pieces together and provide a solid solution [1] by enabling a set of sustainable smart applications for energy savings [4]. SHEMSs are capable of providing visual feedback to the consumer in the form of energy usage data, automation and control initiated by the utility party, load forecasting, and an optimized load scheduling horizon [20]. With smart meters and sensors that can measure power consumption per appliance, it has become much easier, with the assistance of signal processing mechanisms, to monitor individual device power consumption and support SHEMS applications [34]. In this paper, we propose Operation Modes Identification using Cycles Clustering (OMICC), a fundamental machine learning approach for SHEMSs to build on. The main objective of OMICC is to process sensed disaggregated power consumption readings and identify, for each appliance, the operation mode used at each detected activation time [13]. During the use of an appliance, there are several operation modes the user can select from. An operation mode is a specific setting set by a manufacturer to meet certain needs of the customer, such as the light and heavy modes in a washing machine. Activating an appliance in a certain operation mode consumes energy differently than the others. Thus, a DR opportunity arises such that OMICC identifies the operation modes and reports them with activation times to a SHEMS so that the consumer can take proper action towards energy use reduction. These actions are typically in the form of load shifting and reduction [25]. Load shifting is performed by changing the time of use to off-peak hours, when the demand is low and the price is cheaper.
On the other hand, load reduction is obtained when consumers change their habits in terms of the pattern and frequency of selecting certain operation modes over others. Consequently, selecting lighter operation modes instead of heavier ones upon appliance usage significantly reduces the power consumption associated with these appliances, hence supporting DR. In this way, OMICC is considered the analytical layer that a SHEMS relies on to apply DR in the residential sector. The rest of the paper is organized as follows: In Section II we discuss previous work in the literature. Section III discusses the data analysis of appliance power consumption. In Section IV we present the proposed approach, OMICC. Section V discusses the results, and the final section concludes the paper and suggests future work.

## II Related Work

Demand-response methods are potential solutions for improving the efficiency of future smart power grids. The literature contains many DR approaches that aim to reduce consumer power bills and decrease the load on the grid. Haben et al. [10] proposed a clustering method using Finite Mixture Models (FMMs) to identify households that are more suitable for demand reduction and to discover clusters that model the types of peak demands and the major seasonality and variation in the data. Zheng et al. [36] proposed an incentive-based model for multiple energy carriers that takes into account the behavioral coupling effect of the consumers and the impact of the energy storage unit. The results confirm the benefits of the proposed model in reducing the cost of the multi-energy aggregator and the dissatisfaction of consumers. A comprehensive optimization-based Automated Demand Response controller [2] was developed to optimize the operation of several types of household appliances, aiming to reduce the consumer's power bill within a predefined range while minimizing dissatisfaction.
A real-time price-based DR algorithm [35] is presented to gain optimal load control of devices in a certain facility by creating a virtual electricity-trading process that utilizes the Stackelberg game. An advanced reward system [11] presented the concept of a comfort indicator in a framework where demand reduction requests for household appliances are communicated efficiently while maintaining households' comfort levels. A demand response optimization framework [17] from the utility perspective was developed to minimize both the user costs and the utility cost. The literature describes many techniques used to analyze electricity consumption data so that end-user applications can be built on top of these approaches [31]. These techniques are primarily concerned with event detection [14] and event classification [13] in time series data. Typically the events are the activations and deactivations of appliances during their operation, together with the power distribution over time. Different approaches are concerned with extracting load signatures from aggregated power consumption data and classifying these signatures into the individual loads, such as appliances and lights, best known as Non-Intrusive Load Monitoring (NILM). Liao et al. [18] proposed an approach for appliance load identification using Dynamic Time Warping (DTW). Liu et al. [19] used a Nearest Neighbor Transient Identification method to identify the appliance creating the Transient Power Waveform (TPW) sample time series. The DTW-based integrated distance is then utilized to calculate the similarity of TPW signatures and a template time series for an appliance. Wang et al. [30] describe a NILM approach which uses Iterative Disaggregation based on Appliance Consumption Pattern (ILDACP). This approach combines the Fuzzy C-means clustering algorithm, which detects appliance operating status, with a DTW search that identifies individual appliances' energy consumption based on the appliance's typical power consumption pattern (a template pattern).
Machine learning algorithms are emerging in the context of load signature detection, with both supervised and unsupervised algorithms. Barsim et al. [4] proposed an approach that uses the typical event-based NILM system with only unsupervised algorithms to eliminate the need for a training stage. The event detector is a grid-based clustering algorithm that segments the power signals into transition-state and steady-state segments. They extracted a set of features from the detected events and used them in a Mean-Shift clustering algorithm. Fernandez et al. [7] applied an online learning approach for the identification of home appliances based on the Confidence Weighted algorithm (CW) and six other algorithms. Kang et al. [15] used a Probabilistic K-Nearest Neighbor (PKNN) to infer the device states from home appliance electrical power usage signals and also from sensor data that includes temperature and humidity. Prudenzi et al. [27] proposed a procedure that provides three different sequential back-propagation Artificial Neural Networks (ANNs) to process the load shape and identify load signatures. The classification is then performed by an unsupervised network implementing the Self-Organizing Map (SOM) of Kohonen [27].

TABLE I: Classification of electrical loads based on operation behavior, and the appliances selected for analysis in this paper.

| Load Type | Examples | Selection |
|---|---|---|
| Timer Based Loads | Air conditioning, water heater, refrigerator | - |
| Preprogrammed Loads | Dishwashers, ovens, broilers, clothes washers, and clothes dryers | Dishwashers, clothes washers, and clothes dryers |
| Variable Loads | Electric cook-top, lighting, television, hair dryer, laptops, and almost all other house appliances | - |

Most of the work currently in the literature uses NILM [6] to identify individual appliances and loads from an aggregated load signal.
The literature is limited in approaches that focus on analyzing disaggregated power data to identify the activation of household appliances and then classify each use of the appliance into its operation modes. With OMICC, appliance operation modes are identified, and thus a DR mechanism is possible for consumers such that, by lowering the frequency of using heavier operation modes, the overall consumption per appliance is reduced.

## III Electrical Load Data Analysis

We use Power Consumption DataSets (PCDSs) [23] to understand the power consumption characteristics of household electrical loads over time when used, and to discover potential DR opportunities [26]. A household electrical load is an electrical device, component, or sub-component of a circuit that is used by a house's residents, where this load consumes electric active power [33]. The main objective of this analysis is to understand the states that an electrical load goes through when it is used. This assists in forming statistical models representing each load over time. These models have the potential to replicate this data in the form of synthetic datasets, which assist in validating proposed algorithms and reduce the time and effort of collecting real PCDSs.

### III-A Electrical Load Classification

Based on the characteristics of electrical loads, a classification is presented in Table I that categorizes electrical loads into three categories. Timer Based Loads consist of a set of cycles that are repeated over time while the load is active. These loads are regulated using a timing regulation controller, such as a thermostat that responds to temperature fluctuations. For example, in a refrigerator, the compressor cools the refrigerator interior and is activated when the interior temperature exceeds a preset upper threshold; it switches off when the temperature falls below a preset lower threshold.
Figure 1: SUPs for the dishwasher showing three operation modes (a) Heavy, (b) Medium, (c) Light.

In the Variable Loads category, loads are triggered on and off over time based on human behavior, where the timing distribution of the interventions is correlated with a wide range of parameters related to household occupancy and lifestyle, such as wake-up times, bed times, work times, the number of house occupants and their ages, etc. Therefore, without sufficient information about these parameters, it is difficult to precisely predict the start time or the working duration of loads belonging to this category. Consequently, the electrical load behavior of these loads relies on their usage statistics. Preprogrammed Loads are activated manually by human intervention and usually consume relatively large amounts of power in a relatively short time. Once a specific load is activated, it goes through a set of preprogrammed states, where each state has a specific duration, until the load automatically shuts off. A dishwasher is an example of this category. It cycles through a set of states such as water filling, washing, and rinsing. Each state is configured with a duration and a power level. In this paper, the focus is on analyzing the power consumption characteristics of appliances belonging to this category.

### III-B Single Use Profile and Operation Modes

A Single Use Profile (SUP) is used to formally model the power consumption of a preprogrammed appliance between the time it is turned on and the time it is turned off, i.e., the sequence of power consumption values consumed by the appliance over that period. Hence, a SUP with sampling frequency $f_{s}=1Hz$ is defined by the sequence $\\{p_{i}\\}_{i=t_{on}}^{t_{off}}$ where $p_{i}$ is the instantaneous power reading at time $i$.
The time stamps $t_{on},t_{off}$ are the turn-on and turn-off times of the appliance, respectively, and $i\in[t_{on},t_{off}]$ represents the index of the $i^{th}$ sample. Typically, home appliances may run a SUP in one of several operation modes. An operation mode is characterized by its running time and the different cycles that the appliance passes through. For example, a dishwasher may have two operation modes, a lighter one for barely used dishes and a heavy one for greasy dishes. Activating an appliance with a SUP in a certain operation mode consumes electricity differently than the other operation modes.

### III-C Analysis Of Individual Appliances

In this paper, the focus is on analyzing the behavior of preprogrammed appliances. The appliances that we analyzed are the clothes washer, clothes dryer, and dishwasher.

#### III-C1 The Dishwasher

Figure 2: (a) SUPs for clothes washer using heavy operation mode. (b) Smoothed SUP plot in heavy operation mode. (c) Smoothed SUP plot in medium operation mode. (d) Smoothed SUP plot in light operation mode

The dishwasher has three main operation modes: Heavy, Medium, and Light. Each SUP of the dishwasher has three states: wash, rinse, and dry, regardless of the operation mode. Three SUPs in three operation modes for the dishwasher are depicted in Fig. 1. The wash state involves filling the dishwasher with water and spraying the water through jets to get the first round of spray at regular water temperature. The water is then heated to the desired temperature setting. The sprinkler then starts washing by rotating and spraying hot water on the dishes. The rinse state that follows repeats the steps of the wash state. Finally, the dry state starts by increasing the interior air temperature of the machine for a short time, since the insulation installed in the dishwasher keeps the interior at a high temperature until the dry state completes.
#### III-C2 The Clothes Washer

A clothes washer SUP has three states during operation: wash, rinse, and spin. Fig. 2 graphically depicts SUPs of the clothes washer for three operation modes: Heavy, Medium, and Light. Fig. 2 (a) shows the real consumption data plotted over time in heavy operation mode. In Fig. 2 (b, c, d) the plots correspond to smoothed versions of the SUP plot using a moving median, to make them easier to understand. The washer starts its wash state by filling the main cavity with water, during which the washer consumes relatively low power. As the water fills in, the washer starts rotating. Initially, the power consumption is low, but it gradually increases due to the rise of the motor's electrical load as the water level climbs. Frequent variations in motor speed are observed during the wash state. The rinse state follows for a shorter time. Finally, the spin state occurs when the machine stops pumping water into the cavity and starts drying the clothes by spinning at a very high speed; the machine therefore consumes the highest power.

Figure 3: SUPs for clothes dryer with three operation modes (a) Heavy (b) Medium (c) Light.

#### III-C3 The Dryer

A SUP of the dryer contains these cycles: maximum temperature, average temperature, and minimum temperature. During the maximum temperature cycle, the dryer drum is heated to the maximum value in the dryer setting, and the duration of this state is relatively longer than that of the other cycles. The average temperature cycle sets the core temperature to about half of the maximum value, with a relatively short duration. Finally, in the minimum temperature cycle, the heating element in the dryer is turned off. Fig. 3 shows three SUPs for a dryer in three different operation modes: Heavy, Medium, and Light.

## IV SUP CLASSIFICATION WITH OMICC

We propose the Operation Modes Identification using Cycles Clustering (OMICC) approach to classify the operation modes of SUPs.
The approach is focused on extracting features of SUPs that serve to classify them into operation mode classes. Fig. 4 depicts the architecture of the model used in this approach.

Figure 4: The basic architecture for the classification using OMICC.

### IV-A SUP Features Extraction

This section describes the technique for extracting features from a detected SUP, assuming a SUP has already been detected and $t_{on}$ is given [13]. The features are represented as the set of cycles that form a SUP. Each cycle is characterized by two abrupt changes in the power value. These features are used by the classification algorithm to classify the detected SUP into operation modes.

#### IV-A1 Edge Detector

The classification of the detected SUP into operation modes requires determining the cycles that form the SUP. The day's consumption is represented by $D(t)$. Given a turn-on time for a SUP, $t_{on}$, we define $D^{\prime}(t)$ as follows: $D^{\prime}(t)=D(t):\quad t\in[t_{on},midnight]$ (1) where $D^{\prime}(t)$ represents the consumption from the turn-on time to the rest of the day. A median smoother function is then applied to $D^{\prime}(t)$ to cancel the low-amplitude noise component. Edge detection is based on the Moving Step-Test (MST) [8], where MST is characterized by the following equation: $I(t)=|m_{t+\ell}-m_{t-\ell}|$ (2) where $I(t)$ is defined as the Indicator Function. $I(t)$ is used to detect the edges of the cycles by finding a period of time surrounding each edge where it is highly likely that an edge exists within this period. We define $m_{t+\ell}$ as the median value of the power consumption of the leading sequence that starts at $D^{\prime}(t)$ and ends at $D^{\prime}(t+\ell)$, and $m_{t-\ell}$ as the median value of the power consumption values of the lagging sequence that starts at $D^{\prime}(t-\ell)$ and ends at $D^{\prime}(t)$.
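As a minimal sketch of how the MST indicator $I(t)$ of Eq. 2 could be computed (not the authors' implementation; the window length `ell` and the synthetic signal are illustrative assumptions):

```python
import numpy as np

def mst_indicator(d_prime, ell):
    """Moving Step-Test indicator I(t) = |m_{t+ell} - m_{t-ell}| (Eq. 2).

    d_prime -- smoothed power sequence D'(t) as a 1-D array
    ell     -- half-window length (in samples) for the leading/lagging medians
    """
    n = len(d_prime)
    i_t = np.zeros(n)
    for t in range(ell, n - ell):
        m_lead = np.median(d_prime[t:t + ell + 1])   # leading:  D'(t) .. D'(t+ell)
        m_lag = np.median(d_prime[t - ell:t + 1])    # lagging:  D'(t-ell) .. D'(t)
        i_t[t] = abs(m_lead - m_lag)
    return i_t

# Synthetic D'(t): one 1000 W cycle embedded in ~5 W idle power.
d_prime = np.concatenate([np.full(60, 5.0), np.full(120, 1000.0), np.full(60, 5.0)])
indicator = mst_indicator(d_prime, ell=10)
# The indicator spikes near the rising edge (t = 60) and the falling edge
# (t = 180) and stays near zero elsewhere.
```

Using medians rather than means inside each window is what makes the indicator robust to points lying on the far side of the transition, as the paper notes next.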
The median function is selected since it gives a more accurate edge location by eliminating the influence of points that are located at the other end of the transition around the edge. A cycle in $D^{\prime}(t)$ is defined by two abrupt changes in the value of $D^{\prime}(t)$. An abrupt change in $D^{\prime}(t)$ is a single point in time $t_{m}\in[t_{s},t_{e}]$, which is referred to as an Exact Edge. Exact edges are determined based on the values of $I(t)$, such that if the value of $I(t)$ is higher than a certain threshold, $\tau$, this indicates the start or the end of a cycle. Each period of time $[t_{s},t_{e}]$ such that $I(t)>\tau$ forms a Thick Edge, i.e., a thick edge is a period of time defined by a starting time $t_{s}$ and an ending time $t_{e}$ during which $I(t)$ is greater than $\tau$. A high amplitude of $I(t)$ in a thick edge indicates that the values of the two medians $m_{t-\ell}$ and $m_{t+\ell}$ are far from each other, which in turn means that an abrupt change (exact edge) in $D^{\prime}(t)$ exists.

Figure 5: The response of the indicator function $I(t)$ for the abrupt changes in $D^{\prime}(t)$.

The indicator function, $I(t)$, facilitates finding exact edges in $D^{\prime}(t)$, rather than looking directly in $D(t)$. These sudden changes in $D^{\prime}(t)$ occur at different power levels, i.e., an exact edge could occur when the value of $D^{\prime}(t)$ is around 5W and jumps to 1000W, or it could change from 2000W to 6000W. In addition, exact edges occur from lower values to higher values (a rising edge) or vice versa (a falling edge), i.e., an exact edge could occur when the power value of $D^{\prime}(t)$ is at 5W and rises to 2500W, or when the power value falls from 2500W to 5W. On the other hand, $I(t)$ shows all the exact edges in $D^{\prime}(t)$ in a single reference, such that in the absence of any exact edge in $D^{\prime}(t)$, $I(t)$ shows a value close to zero.
However, whenever an exact edge occurs in $D^{\prime}(t)$, $I(t)$ shows a high-value spike over a short period of time. The threshold $\tau$ is calculated based on the standard deviation of $I(t)$ and a Low Amplitude Canceling Multiplier, $\zeta$. The value of $\tau$ is defined as follows: $\tau=\zeta\cdot\sigma(I(t)):\quad\zeta\in Z^{+}$ (3) where the multiplier, $\zeta$, adjusts the threshold $\tau$ in order to cancel any values of $I(t)$ that are less than the threshold value. This is graphically depicted in Fig. 5. By thresholding $I(t)$ with $\tau$, a trimmed version, $I^{\prime}(t)$, is obtained, as presented in Fig. 6. We define $I^{\prime}(t)$ as follows: $I^{\prime}(t)=I(t)-\tau$ (4) where $I^{\prime}(t)$ is used to determine the thick edge set, $E$, within a SUP, such that: $\begin{array}[]{l}E=\\{e_{0},e_{1},..,e_{n-1}\\}\\\ e_{i}=[t_{s_{i}},t_{e_{i}}]:\quad 0\leq i<n\\\ \end{array}$ (5) where $n$ is the size of $E$, and $t_{s_{i}}$ and $t_{e_{i}}$ represent the start and end time stamps of the $i^{th}$ thick edge, respectively. Fig. 6 illustrates the extraction of the thick edges. It shows the plot of $I^{\prime}(t)$ and a list of pairs of time stamps. The $i^{th}$ pair defines the start time, $t_{s_{i}}$, and end time, $t_{e_{i}}$, of the $i^{th}$ thick edge.

#### IV-A2 Cycles Extractor

Once the thick edge set, $E$, is obtained, the cycles of the SUP are extracted using the following steps:

##### Edge Thinning

This refers to selecting the exact edge $t_{m_{i}}$ from a thick edge $e_{i}=[t_{s_{i}},t_{e_{i}}]$. We pick $t_{m_{i}}$ such that it falls exactly in the center of $e_{i}$, so $t_{m_{i}}$ is defined as: $t_{m_{i}}=\frac{1}{2}(t_{s_{i}}+t_{e_{i}})$ (6) where this equation is applied across all thick edges $e_{i}\in E$.
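The thresholding of Eq. 3, the collection of thick edges (Eq. 5), and the edge thinning of Eq. 6 can be sketched as follows (a minimal illustration; the demo indicator signal and the `zeta` value are assumptions, not values from the paper):

```python
import numpy as np

def thick_edges(i_t, zeta):
    """Threshold I(t) with tau = zeta * sigma(I(t)) (Eq. 3) and collect the
    thick edges: maximal intervals (t_s, t_e) where I(t) exceeds tau."""
    tau = zeta * np.std(i_t)
    above = i_t > tau
    edges, t_s = [], None
    for t, flag in enumerate(above):
        if flag and t_s is None:
            t_s = t                      # a thick edge starts
        elif not flag and t_s is not None:
            edges.append((t_s, t - 1))   # the thick edge ends
            t_s = None
    if t_s is not None:                  # edge still open at the end of the signal
        edges.append((t_s, len(above) - 1))
    return edges

def thin_edges(edges):
    """Edge thinning (Eq. 6): the exact edge t_m is the centre of each thick edge."""
    return [(t_s + t_e) / 2 for t_s, t_e in edges]

# Demo indicator: two spikes standing out from a zero baseline.
i_demo = np.zeros(100)
i_demo[20:25] = 50.0
i_demo[70:75] = 60.0
edges = thick_edges(i_demo, zeta=2)   # [(20, 24), (70, 74)]
exact = thin_edges(edges)             # [22.0, 72.0]
```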
All thick edges are thinned and grouped in the Exact Edge Set $X=\\{t_{m_{0}},t_{m_{1}},t_{m_{2}},\dots,t_{m_{n-1}}\\}$ where $t_{m_{i}}$ represents the $i^{th}$ exact edge, and $n$ is the number of thick edges, which consequently equals the number of exact edges. Fig. 6 (a) demonstrates edge thinning, where plots of $I^{\prime}(t)$ and $D^{\prime}(t)$ are shown. $E$ is displayed as pairs of times $e_{i}=(t_{s_{i}},t_{e_{i}})$. These time stamps are shown in Fig. 6 (a) as gray dotted lines surrounding each thick edge on both sides. Following the definition of the exact edge $t_{m}$ in Eq. 6, the exact edge set $X$ is displayed as the yellow time stamps pointing to the middle of each thick edge period with a dashed pointer. Figure 6: (a) Edge thinning. (b) Extracting the exact edges. (c) The cycles set. ##### Extracting Cycles To extract the cycle set $C$ of $D^{\prime}(t)$, each cycle is defined by two consecutive exact edges $\\{t_{m_{i}},t_{m_{i+1}}\\}$ together with the power value of this cycle, $m_{i}$. To calculate the power value for the $i^{th}$ cycle, the sequence $Y$ is defined as the power values of $D^{\prime}(t)$ between the selected consecutive exact edges $\\{t_{m_{i}},t_{m_{i+1}}\\}$. We then calculate the cycle power value $m_{i}$ as the median of $Y$ such that: $m_{i}=Median(Y):\quad t\in[t_{m_{i}},t_{m_{i+1}}]$ (7) A demonstration of the cycle extraction is shown in Fig. 6 (b). The exact edges are displayed in yellow boxes pointing upwards towards $D^{\prime}(t)$ with dashed green lines. The cycle power is defined by the median of $D^{\prime}(t)$ between two adjacent exact edges. This is depicted as a black horizontal line representing the cycle’s power value within the two exact edges, $\\{t_{m_{i}},t_{m_{i+1}}\\}$. Finally, the cycle set $C$ is formed by collecting what defines a cycle into a single tuple, where each tuple $c_{i}$ holds the start time, end time, and power value of the $i^{th}$ cycle.
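The cycle construction just described reduces to taking the median power between each pair of consecutive exact edges (Eq. 7). A minimal sketch, assuming $D'(t)$ is stored as a list of samples indexed by time:

```python
import statistics

def cycles_from_exact_edges(d, exact):
    # Each cycle c_i = (t_s, t_e, m_i) spans two consecutive exact edges
    # and carries the median of D'(t) over that span (Eq. 7).
    cycles = []
    for t_start, t_end in zip(exact, exact[1:]):
        m = statistics.median(d[t_start:t_end])
        cycles.append((t_start, t_end, m))
    return cycles
```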
Therefore, the cycle set $C$ with $n$ cycles is defined as follows: $\begin{array}[]{l}C=\\{c_{0},c_{1},\dots,c_{n-1}\\}\\\ c_{i}=(t_{s_{i}},t_{e_{i}},m_{i}):\quad 0\leq i<n\end{array}$ (8) where $n$ is the size of $C$ and the $i^{th}$ cycle is defined by the two enclosing exact edges, $t_{s_{i}}$ at the cycle start and $t_{e_{i}}$ at the cycle end, and the estimated power value of the cycle, $m_{i}$. Fig. 7 shows the steps to extract the cycles of a SUP by Edge Detection, Edge Thinning, and Cycle Extraction. Fig. 7 (a) shows $D^{\prime}(t)$ after smoothing with the median filter. In Fig. 7 (b) the cycles are extracted into $C$. Using this set, a traced version of $D^{\prime}(t)$ is sketched based on the information enclosed in the cycle set $C$: each cycle $c_{i}$ is traced as a square wave with a magnitude of $m_{i}$ across the cycle period $[t_{s_{i}},t_{e_{i}}]$. Figure 7: (a) The smoothed function $D^{\prime}(t)$. (b) A traced version of $D^{\prime}(t)$ using the cycles extracted. (c) The cycles in different colors based on the power value of each. #### IV-A3 Cycles Levels Clustering Once the cycles of a SUP are obtained in $C$, they are used to formulate the features used in SUP classification. The main objective in this step is to cluster the extracted cycles in $C$ based on the power level of each cycle. We selected the K-Means Clustering Algorithm [22, Chapter 20] for this task, as K-means is considered a suitable fit for numerical features [24]. The observations fed to K-means consist of the power level $m_{i}$ of each cycle $c_{i}\in C$ except the last cycle $c_{n-1}$, as this cycle corresponds to an idle time between SUPs where the power values approach zero. The K-means algorithm requires determining the number of clusters $k$ in advance.
Therefore, we select $k=3$ based on the conducted data analysis, since the cycle power values for all the appliances we analyzed fall into three power levels. Consequently, K-means produces $k$ mutually exclusive cluster sets $L_{0},\dots,L_{k-1}$ whose union is the observation set, such that: $\bigcup_{i=0}^{k-1}L_{i}=L_{0}\cup L_{1}\cup\dots\cup L_{k-1}=B$ (9) where the observation set $B=\\{m_{0},m_{1},\dots,m_{n-1}\\}$ and $n$ represents the number of observations. K-means calculates the centroid set $R=\\{r_{0},r_{1},\dots,r_{i},\dots,r_{k-1}\\}$, where $r_{i}$ is the centroid of the cluster $L_{i}$ and $k$ is the number of clusters. Each centroid, $r_{i}$, is the center point to which all the belonging elements $m_{j}\in L_{i}$ have minimum distance. Fig. 7 (c) shows an example of the cycles in different colors based on the power value of each cycle. Three highlighted areas indicated by the centroid value points $r_{0},r_{1},r_{2}$ show the power range of each cluster. The feature set $X$ is calculated from both the clusters of $B$ and the centroid set $R$. The clusters are sorted in ascending order of centroid value, such that cluster $L_{0}$ has the lowest centroid value and cluster $L_{k-1}$ has the highest. Each cluster $L_{i}$ is used to define a feature $x_{i}$ in the feature set $X$. Therefore, the feature set $X$ is defined as: $X=\left\\{x_{0},x_{1},\dots,x_{k-1}\right\\}$ (10) where each feature $x_{i}$ is defined as the total duration of the cycles $c_{j}$ within the cluster $L_{i}$, multiplied by the average power level of the cluster, which in this case is the value of the cluster centroid, $r_{i}$. Each feature $x_{i}$ is modeled as follows: $x_{i}=r_{i}\sum_{j=0}^{n_{i}-1}{|t_{e_{j}}-t_{s_{j}}|}$ (11) where $t_{e_{j}},t_{s_{j}}$ are the two exact edges which define a cycle, as stated in Eq. 8, and $n_{i}$ is the size of $L_{i}$. Fig.
8 visually depicts the feature formation. Figure 8: Visual representation of feature calculation. ### IV-B Classification Of SUPs To Operation Modes This section discusses the classification of the SUPs into operation modes based on the feature set $X$, which is calculated through a clustering mechanism over the SUP cycles. #### IV-B1 K-Nearest-Neighbors (KNN) Algorithm The algorithm we use for classification is K-Nearest-Neighbors (KNN) [5]. It is a classification algorithm based on a voting scheme: an item is classified by a plurality vote of its neighbors, and is assigned to the class most common among its $k$ nearest neighbors, where $k>0$. For example, if $k=1$ the item is assigned to the class of its single nearest neighbor. One of the advantages KNN offers to our study is that it is a Lazy Learning Algorithm [28]. This means that generalization of the training set is postponed until a query is made to the algorithm. In other words, KNN keeps (at most) all available data until a classification query is made, and only then performs the calculations over the available data for the query. In contrast, in Eager Learning Algorithms the training set is summarized into a model, so that when a query is made the model suffices for the decision without using the training data again. In addition, KNN is suitable for Online Recommendation Systems, such as online stores that recommend certain items to the customer [12]. The reason for this suitability is that the data is continuously updated: as part of the data updates, other parts may become obsolete because of certain trends in the market. Therefore, shrinking the available data into a fixed model and classifying upcoming queries based on it may lead to less accurate results. #### IV-B2 Training Data Since we solve our main problem using a classification technique, a dataset is needed to train the classifier.
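The clustering-based features (Eqs. 9–11) and the KNN plurality vote described above can be sketched end to end. The 1-D K-means initialization and the Euclidean distance below are illustrative assumptions; the paper only fixes $k=3$ clusters and a plurality vote among nearest neighbors.

```python
import statistics

def kmeans_1d(powers, k=3, iters=50):
    # plain 1-D K-means over cycle power levels; centroids start evenly
    # spread over the observed range (an assumed initialization)
    lo, hi = min(powers), max(powers)
    cents = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in powers:
            nearest = min(range(k), key=lambda j: abs(p - cents[j]))
            clusters[nearest].append(p)
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return cents

def features(cycles, k=3):
    # x_i = r_i * total duration of the cycles assigned to cluster L_i
    # (Eq. 11), with clusters ordered by ascending centroid value
    cents = sorted(kmeans_1d([m for (_, _, m) in cycles], k))
    x = [0.0] * k
    for ts, te, m in cycles:
        i = min(range(k), key=lambda j: abs(m - cents[j]))
        x[i] += cents[i] * (te - ts)
    return x

def knn_classify(x, training, labels, k=5):
    # plurality vote among the k nearest labeled feature vectors
    order = sorted(range(len(training)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(x, training[i])))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```

In the paper's pipeline the labeled training vectors come from the synthetic PCS dataset discussed next; any labeled feature vectors of the same form work with this sketch.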
Due to the nature of the KNN algorithm as a lazy learner, it is important to initialize the KNN model with labeled observations so that it can work online. This initialization is considered a kind of training stage for the KNN model, which cannot be performed without a pre-labeled dataset. The architecture demonstrated in Fig. 4 shows that the classification module uses synthetic data based on existing sets. The simulator generates a synthetic dataset that covers all possibilities of operation modes for an appliance SUP, with variations introduced by changing tuning parameters, which makes the dataset diverse. The simulator is pre-configured with distribution functions of the SUP cycles in terms of duration, power level, and repetition. The simulator then synthesizes a set of feature vectors that match the definition of Eq. 11, which is made available to the classifier. Figure 9: The precision, recall, and F1-score for each operation mode in every usage intensity. The training data is a synthetic dataset generated by the Power Consumption Simulator (PCS) [13]. This dataset consists of three subsets, each containing 4000 observations for a certain appliance (dishwasher, clothes dryer, clothes washer). The training set consists of observations where each observation represents a single SUP as a feature set $X$ of size $k$, as defined in Eq. 10, taking the form $X=\\{x_{0},x_{1},\dots,x_{k-1}\\}$. The training and feature sets are fed to the KNN classifier in order to classify each SUP into one of $n$ operation modes, $d_{i}$, within the set of operation modes $M=\\{d_{0},d_{1},\dots,d_{n-1}\\}$. In our study, the focus is on $n=3$ operation modes, $M=\\{Light,Medium,Heavy\\}$. ## V Results And Discussion Figure 10: The precision, recall and F1-score for each operation mode using OMICC compared to DTW.
The performance of OMICC is evaluated using supervised learning classification metrics, i.e., Precision, Recall, and F1-score. The classification of SUPs in OMICC is compared to our previous work, which classifies SUPs using Dynamic Time Warping (DTW) [13]. The performance of the DTW approach and OMICC is summarized in Table II. The table shows the average precision, recall, and F1-score across all the experiments conducted over the datasets. The precision and recall values of DTW are both 81%, while OMICC achieves higher values of 84% and 85%, respectively.

TABLE II: Average precision, recall, and F1-score values for all datasets using DTW based [13] and OMICC.

Algorithm | Precision | Recall | F1-score
---|---|---|---
DTW | 0.81 | 0.81 | 0.80
OMICC | 0.84 | 0.85 | 0.83

Fig. 9 shows the relationship between the household usage intensity distribution, the operation mode, and the metrics. Usage intensity refers to the pattern of operation modes that the household selects over time for appliances [13]. The chart shows the precision, recall, and F1-score on the y-axis. The x-axis is divided into three lanes representing the usage intensity values (low, medium, high), and within each lane the operation mode is depicted. The chart shows high metric values for the dominant operation mode in each usage intensity lane; e.g., in the low-intensity lane, the light mode is the operation mode used by the household the majority of the time, so the three metric values for the light mode are relatively higher than those of the other modes. The heavy operation mode in the high-intensity lane behaves in the same way. The medium operation mode in the medium-intensity lane shows this property in precision, and generally performs better there than in the other usage intensities. The evaluation of the performance of the two algorithms in terms of operation modes is depicted in Fig. 10.
The charts show the precision, recall, and F1-score for each operation mode using box-and-whisker plots. Fig. 10 shows that for DTW the metric values average around 82% for the light and heavy operation modes, and approximately 79% for the medium mode. On the other hand, the KNN chart in Fig. 10 shows more balanced results: for all three operation modes the average metric values are around 82%. ## VI Conclusion And Future Work Our work focuses on providing techniques built on top of residential power consumption data to better support DR. We proposed Operation Modes Identification using Cycles Clustering (OMICC), a fundamental SHEMS approach that processes residential disaggregated power consumption data to support DR. The cycles of SUPs from appliances are extracted using MST and clustered using K-means to form features that represent the SUP. We then utilized KNN to classify SUPs and identify the corresponding operation modes. The identified operation modes give the consumer insight into mode selection, and hence into energy consumption reduction through choosing lighter operation modes. A future improvement to the current work is to enhance SUP classification by taking the order of cycles within a SUP into consideration during feature extraction. Another direction is proposing a SHEMS on top of OMICC to demonstrate its applicability by reporting the identified operation modes to consumers and offering opportunities for energy conservation. ## References * [1] Abdulla AlHammadi, Aisha AlZaabi, Bashayer AlMarzooqi, Suhail AlNeyadi, Zayed AlHashmi, and Maad Shatnawi. Survey of iot-based smart home approaches. In 2019 Advances in Science and Engineering Technology International Conferences (ASET), pages 1–6. IEEE, 2019. * [2] Sereen Althaher, Pierluigi Mancarella, and Joseph Mutale. Automated demand response from home energy management system under dynamic pricing and power and comfort constraints.
IEEE Transactions on Smart Grid, 6(4):1874–1883, 2015. * [3] K Carrie Armel, Abhay Gupta, Gireesh Shrimali, and Adrian Albert. Is disaggregation the holy grail of energy efficiency? the case of electricity. Energy Policy, 52:213–234, 2013. * [4] Karim Said Barsim, Roman Streubel, and Bin Yang. An approach for unsupervised non-intrusive load monitoring of residential appliances. In Proceedings of the 2nd International Workshop on Non-Intrusive Load Monitoring, 2014. * [5] G. Biau and L. Devroye. Lectures on the Nearest Neighbor Method. Springer Series in the Data Sciences. Springer International Publishing, 2015. * [6] Mark Costanzo, Dane Archer, Elliot Aronson, and Thomas Pettigrew. Energy conservation behavior: The difficult path from information to action. American psychologist, 41(5):521, 1986. * [7] M Rodríguez Fernández, I González Alonso, and E Zalama Casanova. Online identification of appliances from power consumption data collected by smart meters. Pattern Analysis and Applications, 19(2):463–473, 2016. * [8] Roland Fried. On the robust detection of edges in time series filtering. Computational Statistics & Data Analysis, 52(2):1063–1074, 2007\. * [9] Sebastian Gölz. Does feedback usage lead to electricity savings? analysis of goals for usage, feedback seeking, and consumption behavior. Energy Efficiency, 10(6):1453–1473, 2017. * [10] Stephen Haben, Colin Singleton, and Peter Grindrod. Analysis and clustering of residential customers energy behavioral demand using smart meter data. IEEE transactions on smart grid, 7(1):136–144, 2015. * [11] Qinran Hu, Fangxing Li, Xin Fang, and Linquan Bai. A framework of residential demand aggregation with financial incentives. IEEE Transactions on Smart Grid, 9(1):497–505, 2016. * [12] FO Isinkaye, YO Folajimi, and BA Ojokoh. Recommendation systems: Principles, methods and evaluation. Egyptian Informatics Journal, 16(3):261–273, 2015. * [13] Abdelkareem Jaradat, Hanan Lutfiyya, and Anwar Haque. 
Demand response for residential uses: A data analytics approach. In 2020 IEEE World Forum on Internet of Things (WF-IoT). IEEE, 2020\. * [14] Daniel Jorde and Hans-Arno Jacobsen. Event detection for energy consumption monitoring. IEEE Transactions on Sustainable Computing, 2020. * [15] SeungJun Kang and Ji Won Yoon. Classification of home appliance by using probabilistic knn with sensor data. In 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), pages 1–5. IEEE, 2016. * [16] Jack Kelly and William Knottenbelt. Does disaggregated electricity feedback reduce domestic electricity consumption? a systematic review of the literature. arXiv preprint arXiv:1605.00962, 2016. * [17] Pan Li, Hao Wang, and Baosen Zhang. A distributed online pricing strategy for demand response programs. IEEE Transactions on Smart Grid, 10(1):350–360, 2017. * [18] Jing Liao, Georgia Elafoudi, Lina Stankovic, and Vladimir Stankovic. Power disaggregation for low-sampling rate data. In 2nd International Non-intrusive Appliance Load Monitoring Workshop, Austin, TX, 2014. * [19] Bo Liu, Wenpeng Luan, and Yixin Yu. Dynamic time warping based non-intrusive load transient identification. Applied energy, 195:634–645, 2017. * [20] Yuanyuan Liu, Bo Qiu, Xiaodong Fan, Haijing Zhu, and Bochong Han. Review of smart home energy management systems. Energy Procedia, 104:504–508, 2016. * [21] J. M. G. López, E. Pouresmaeil, C. A. Cañizares, K. Bhattacharya, A. Mosaddegh, and B. V. Solanki. Smart residential load simulator for energy management in smart grids. IEEE Transactions on Industrial Electronics, 66(2):1443–1452, Feb 2019. * [22] David JC MacKay and David JC Mac Kay. Information theory, inference and learning algorithms. Cambridge university press, 2003. * [23] Stephen Makonin, Z. Jane Wang, and Chris Tumpach. RAE: The Rainforest Automation Energy Dataset for Smart Grid Meter Data Analysis. pages 1–9, 2017. 
* [24] Vit Niennattrakul and Chotirat Ann Ratanamahatana. On clustering multimedia time series data using k-means and dynamic time warping. In 2007 International Conference on Multimedia and Ubiquitous Engineering (MUE’07), pages 733–738. IEEE, 2007. * [25] Fabiano Pallonetto, Mattia De Rosa, Federico Milano, and Donal P Finn. Demand response algorithms for smart-grid ready residential buildings using machine learning models. Applied Energy, 239:1265–1282, 2019. * [26] Manisa Pipattanasomporn, Murat Kuzlu, Saifur Rahman, and Yonael Teklu. Load Profiles of Selected Major Household Appliances and Their Demand Response Opportunities. IEEE Transactions on Smart Grid, 5(2):742–750, mar 2014. * [27] A Prudenzi. A neuron nets based procedure for identifying domestic appliances pattern-of-use from energy recordings at meter panel. In 2002 IEEE Power Engineering Society Winter Meeting. Conference Proceedings (Cat. No. 02CH37309), volume 2, pages 941–946. IEEE, 2012\. * [28] Mamta Punjabi and Gend Lal Prajapati. Lazy learner and pca: An evolutionary approach. In 2017 Computing Conference, pages 312–316. IEEE, 2017. * [29] The International Energy Agency. Demand response: Tracking clean energy progress, 2019. [Online at https://bit.ly/2PrWcZJ; accessed 27-October-2019]. * [30] Huijuan Wang and Wenrong Yang. An iterative load disaggregation approach based on appliance consumption pattern. Applied Sciences, 8(4):542, 2018. * [31] Y. Wang, Q. Chen, T. Hong, and C. Kang. Review of smart meter data analytics: Applications, methodologies, and challenges. IEEE Transactions on Smart Grid, 10(3):3125–3148, 2019. * [32] Yi Wang, Qixin Chen, Tao Hong, and Chongqing Kang. Review of smart meter data analytics: Applications, methodologies, and challenges. IEEE Transactions on Smart Grid, 10(3):3125–3148, 2018. * [33] Wikipedia contributors. Electrical load — Wikipedia, the free encyclopedia, 2020. [Online; accessed 17-August-2020]. 
* [34] Jiajia Yang, Junhua Zhao, Fushuan Wen, Weicong Kong, and Zhaoyang Dong. Mining the big data of residential appliances in the smart grid environment. IEEE Power and Energy Society General Meeting, 2016. * [35] Mengmeng Yu and Seung Ho Hong. A real-time demand-response algorithm for smart grids: A stackelberg game approach. IEEE Transactions on smart grid, 7(2):879–888, 2015. * [36] Shunlin Zheng, Yi Sun, Bin Li, Bing Qi, Kun Shi, Yuanfei Li, and Xiazhe Tu. Incentive-based integrated demand response for multiple energy carriers considering behavioral coupling effect of consumers. IEEE Transactions on Smart Grid, 2020.
# Constant Amortized Time Enumeration of Eulerian trails Kazuhiro Kurita National Institute of Informatics, Tokyo, Japan, <EMAIL_ADDRESS>Kunihiro Wasa Toyohashi University of Technology, Aichi, Japan<EMAIL_ADDRESS> ###### Abstract In this paper, we consider enumeration problems for edge-distinct and vertex-distinct Eulerian trails. Here, two Eulerian trails are _edge-distinct_ if their edge sequences are not identical, and they are _vertex-distinct_ if their vertex sequences are not identical. As the main result, we propose optimal enumeration algorithms for both problems; that is, these algorithms run in $\mathcal{O}(N)$ total time, where $N$ is the number of solutions. Our algorithms are based on the reverse search technique introduced by [Avis and Fukuda, DAM 1996] and the push out amortization technique introduced by [Uno, WADS 2015]. Keywords— Eulerian trail, Enumeration algorithm, Output-sensitive algorithm, Constant Amortized Time ## 1 Introduction An _Eulerian trail_ in a graph is a trail that visits all edges exactly once. Finding an Eulerian trail is a classical problem in graph theory; the famous Seven Bridges of Königsberg problem, solved by Leonhard Euler in 1736, is a well-known example. Eulerian trails have many other applications such as CMOS circuit design [5], bioinformatics [11, 12], and automaton theory [9]. Deciding the existence of an Eulerian trail can be done in polynomial time [8]. However, the counting problem for _edge-distinct_ Eulerian trails is #P-complete [4] for general undirected graphs, although for directed graphs, van Aardenne-Ehrenfest and de Bruijn [1] proposed the BEST algorithm, which runs in polynomial time. Here, two Eulerian trails are _edge-distinct_ if their edge sequences are different. Similarly, two Eulerian trails are _vertex-distinct_ if their vertex sequences are different. If a graph is simple, the set of edge-distinct Eulerian trails and that of vertex-distinct Eulerian trails are equivalent.
Thus, counting Eulerian trails is intractable in general unless $\mathrm{P}=\\#\mathrm{P}$. Recently, Conte et al. [6] gave an algorithm that answers whether a graph contains at least $z$ Eulerian trails in time polynomial in $m$ and $z$, where $m$ is the number of edges in the graph. In contrast to counting problems, _enumeration problems_ ask to output all the solutions without duplicates. In particular, enumeration problems for subgraphs satisfying some constraints have been widely studied, such as spanning trees [14], $st$-paths [7, 3], cycles [3], maximal cliques [16, 17], and many others [19]. In this paper, we focus on enumeration problems for Eulerian trails. As mentioned above, because the counting version is intractable, the enumeration problem is also intractable with respect to the size of the input. Hence, we aim to develop efficient enumeration algorithms with respect to the size of _both inputs and outputs_. Such algorithms are called _output-sensitive algorithms_. In particular, an enumeration algorithm $\mathcal{A}$ runs in _polynomial amortized time_ if the total running time of $\mathcal{A}$ is $\mathcal{O}(poly(n)N)$, where $n$ is the size of the input and $N$ is the number of solutions. That is, $\mathcal{A}$ outputs solutions in $\mathcal{O}(poly(n))$ time per solution on average. In this context, the ultimate goal is to develop an $\mathcal{O}(N)$ time enumeration algorithm, that is, one with $poly(n)\in\mathcal{O}(1)$, and such optimal algorithms have been obtained for other problems [18, 13]. Under this evaluation, Kikuchi [10] proposed an $\mathcal{O}(mN)$ time algorithm for simple general graphs, where $m$ is the number of edges in an input graph. However, the existence of a constant amortized time enumeration algorithm for Eulerian trails has been open. In this paper, we propose optimal enumeration algorithms for edge-distinct Eulerian trails and vertex-distinct Eulerian trails based on the reverse search technique [2].
Intuitively speaking, an enumeration algorithm based on the reverse search technique enumerates solutions by traversing a tree-shaped search space, called the _family tree_. Each node of the tree corresponds to a prefix of a solution, called a _partial solution_, and each leaf corresponds to a solution. The edge set of the tree is defined by a _parent-child_ relation between nodes; in particular, a partial solution $P$ is the _parent_ of a partial solution $P^{\prime}$ if $P^{\prime}$ is obtained by adding one edge to $P$. Although our algorithm is quite simple, with a sophisticated analysis [18] and contraction operations on graphs, we achieve constant amortized time per solution. ## 2 Preliminaries An undirected graph $G=(V,E)$ is a pair of a vertex set $V$ and an edge set $E\subseteq V\times V$. Note that $(u,v)\in E$ if and only if $(v,u)\in E$ for each pair of vertices $u,v$. Graphs may contain self-loops and parallel edges. For each vertex $v$, the neighborhood $N(v,\mu_{v})$ is a multiset over the underlying set $N(v)$, where $N(v)$ is the set of vertices adjacent to $v$ and $\mu_{v}:N(v)\to\mathbb{N}^{+}$ maps each neighbor to a positive integer. That is, $\mu_{v}(u)$ represents the number of distinct edges $(v,u)$ in the graph. The _degree_ $d(v)$ of a vertex $v$ is defined as $d(v)=\lvert N(v,\mu_{v})\rvert=\sum_{u\in N(v)}\mu_{v}(u)$. A vertex $v$ is _pendant_ if $d(v)=1$. Let $\partial_{G}(v)$ be the set of edges incident to $v$. A sequence $\pi=(v_{1},e_{1},v_{2},e_{2},\dots,v_{\ell})$ is a _trail_ if for each $i=1,\dots,\ell-1$, $e_{i}=(v_{i},v_{i+1})$ and the edges in $\pi$ are mutually distinct. Note that a vertex may appear more than once in a trail. In particular, a trail $\pi$ is a _path_ if $v_{i}\neq v_{j}$ for $i\neq j$. A _circuit_ is a trail whose first and last vertices are equal. A trail $\pi$ is an _Eulerian trail_ if $\pi$ contains all the edges in $G$.
$G$ is _Eulerian_ if $G$ has at least one Eulerian trail. It is known that a connected graph $G$ is Eulerian if and only if either every vertex has even degree or exactly two vertices have odd degree. In what follows, we assume that input graphs are Eulerian. We define $E(\pi)=(e_{1},\dots,e_{\ell-1})$ as the subsequence of $\pi$ containing all the edges in $\pi$, and similarly, $V(\pi)=(v_{1},\dots,v_{\ell})$ as the subsequence of $\pi$ containing all the vertices in $\pi$. Let $\pi_{1}$ and $\pi_{2}$ be two Eulerian trails. We say $\pi_{1}$ is _edge-distinct_ from $\pi_{2}$ if $E(\pi_{1})\neq E(\pi_{2})$, and $\pi_{1}$ is _vertex-distinct_ from $\pi_{2}$ if $V(\pi_{1})\neq V(\pi_{2})$. A sequence $\pi^{\prime}$ is a _prefix_ of $\pi$ if $\pi^{\prime}=(v_{1},e_{1},v_{2},e_{2},\dots,v_{\ell^{\prime}})$ for some $1\leq\ell^{\prime}\leq\ell$. Assume that every Eulerian trail starts from $s$ and ends with $t$. A trail $P$ is a _partial Eulerian trail_ in $G$ if $P$ starts from $s$ and there is an Eulerian trail that contains $P$ as a prefix. In particular, if $E(G)\setminus P\neq\emptyset$, then $P$ is called a _proper partial Eulerian trail_. An edge $e$ is said to be _addible_ if $P+e$ is also a partial Eulerian trail. We denote by $t(P)$ the end vertex of a partial Eulerian trail $P$ such that $t(P)\neq s$. For each partial Eulerian trail $P$, we define $\overline{P_{G}}\coloneqq(V^{\prime},E(G)\setminus P)$, where a vertex $v$ is contained in $V^{\prime}$ if some edge in $E(G)\setminus P$ is incident to $v$. That is, $\overline{P_{G}}$ is the graph formed by the edges that still have to be added to $P$ to construct a solution. Now, we formalize our problems as follows. ###### Problem 1 (Edge-distinct Eulerian trail enumeration). Given a graph $G$, output all the edge-distinct Eulerian trails in $G$ without duplicates. ###### Problem 2 (Vertex-distinct Eulerian trail enumeration). Given a graph $G$, output all the vertex-distinct Eulerian trails in $G$ without duplicates.
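The degree condition above is easy to check directly; connectivity must be verified separately. A minimal sketch over an edge list:

```python
from collections import Counter

def satisfies_eulerian_degree_condition(edges):
    # Count vertex degrees and check that zero or exactly two vertices
    # have odd degree (the condition stated above; connectivity of the
    # non-isolated part is assumed to be checked elsewhere).
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = sum(1 for d in deg.values() if d % 2 == 1)
    return odd in (0, 2)
```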
## 3 An algorithm for edge-distinct Eulerian trails In this section, we propose our enumeration algorithm for edge-distinct Eulerian trails based on the _reverse search_ technique proposed by Avis and Fukuda [2]. We remark that Eulerian trails in this section are considered as sequences of edges. We enumerate solutions by traversing a directed rooted tree, called a _family tree_ $\mathcal{T}$. The tree consists of nodes $\mathcal{P}$ and edges $\mathcal{E}$. Each node $X$ in $\mathcal{P}$ is associated with a partial Eulerian trail $P$ and a graph $G$. Let $P(X)$ be the partial Eulerian trail associated with $X$. If no confusion arises, we identify a node with its associated partial Eulerian trail. The _root_ of $\mathcal{T}$ is the trail $R$ consisting of $s$. For two partial Eulerian trails $P_{1}$ and $P_{2}$, there is a directed edge $(P_{1},P_{2})$ in $\mathcal{E}$ if $P_{1}\subset P_{2}$ and $\lvert P_{2}\setminus P_{1}\rvert=1$. We say that $P_{1}$ is the _parent_ of $P_{2}$ and $P_{2}$ is a _child_ of $P_{1}$. Note that a partial Eulerian trail may have more than one child. Let $\mathcal{C}(P)$ be the set of child partial Eulerian trails of $P$. From the definition of the parent-child relation, for any partial Eulerian trail $P$, there is a unique path from $P$ to the root in $\mathcal{T}$, obtained by recursively removing an edge $(u,t(P))$ from $P$. Hence, for any node $P$ in $\mathcal{T}$, there is a path from $R$ to $P$ in $\mathcal{T}$. Let $\Gamma_{G}(P)\coloneqq\partial_{G}(t(P))\cap\overline{P_{G}}$ be the set of candidate addible edges $e$ for $P$ that do not violate the parent-child relation. To traverse all the children of a partial Eulerian trail $P$, it is enough to generate the partial Eulerian trails $P+e$ for each $e\in\Gamma_{G}(P)$ satisfying the condition given in the next lemma. ###### Lemma 3. Let $P$ be a partial Eulerian trail in $G$ and $e$ be an edge in $\overline{P_{G}}$.
Then, $P+e$ is a child of $P$ if and only if $\Gamma_{G}(P)=\left\\{e\right\\}$ or $e\in\Gamma_{G}(P)$ is not a bridge. ###### Proof. Suppose that $P+e$ is a child of $P$, and assume for contradiction that $\lvert\Gamma_{G}(P)\rvert>1$ and $e$ is a bridge. This implies that $G-(P+e)$ is disconnected, which contradicts the assumption that $P+e$ is a partial Eulerian trail. Conversely, if $\Gamma_{G}(P)$ contains exactly one edge, then the lemma clearly holds. Suppose that $\lvert\Gamma_{G}(P)\rvert>1$. From the definition of a partial Eulerian trail, $\Gamma_{G}(P)$ contains at most one bridge. Hence, $\Gamma_{G}(P)$ contains a non-bridge edge. ∎ Now, we give our proposed enumeration algorithm as follows. The algorithm starts with the root partial Eulerian trail $R$. Then, for each edge $e=(s,u)$ in $\Gamma_{G}(R)$, the algorithm generates a child partial Eulerian trail $(e)$ of $R$ if $e$ meets the condition of Lemma 3. The algorithm recursively generates descendant nodes of $R$ by adding edges from the candidate set in a depth-first traversal of $\mathcal{T}$. If the algorithm finds an Eulerian trail, it outputs it as a solution. The correctness of the algorithm is clear from the construction. Hence, we obtain the next theorem. ###### Theorem 4. Algorithm 1 outputs all the edge-distinct Eulerian trails in a given graph once and only once.
### 3.1 Amortized analysis: achieving constant amortized time

Algorithm 1: Proposed enumeration algorithm for Eulerian trails

    R ← the root partial Eulerian trail
    Compute $\overline{R_{G}}$ from $G$
    foreach edge $e\in\partial_{G}(s)$ do
        if $e$ is not a bridge in $G$ then
            GenChild($R$, $e$)

    Function GenChild($P^{\prime}$, $f$):
        Input: partial Eulerian trail $P^{\prime}$, edge $f$
        $P\leftarrow P^{\prime}+f$
        Compute $\overline{P_{G}}$ from $\overline{P^{\prime}_{G}}$
        if $\overline{P_{G}}=\emptyset$ then
            Output $P$
        else
            Find bridges in $\overline{P_{G}}$ by Tarjan's algorithm
            foreach edge $e\in\partial_{G}(t(P))\cap\overline{P_{G}}$ do
                if $\Gamma_{G}(P)=\left\\{e\right\\}$ or $e$ is not a bridge then
                    GenChild($P$, $e$)

The goal of this section is to develop an optimal enumeration algorithm with constant amortized time per solution, based on the algorithm given in the previous section. We summarize our algorithm in Algorithm 1. We first consider the time complexity of generating all the children of a partial Eulerian trail $P$. Let $m_{P}=\lvert\overline{P_{G}}\rvert$. Finding all bridges in $\overline{P_{G}}$ takes $\mathcal{O}(m_{P})$ time by Tarjan's algorithm [15]. Thus, using these bridges, we can find the edges satisfying the condition in Lemma 3 in $\mathcal{O}(m_{P})$ time. In addition, $\overline{P_{G}}$ can be computed in constant time by just removing $f$ from $\overline{P^{\prime}_{G}}$. Hence, the following lemma holds. ###### Lemma 5. Each recursive call of GenChild in Algorithm 1 makes all its child recursive calls in $\mathcal{O}(m_{P})$ time. From Lemma 5, we need some trick to achieve constant amortized time enumeration, since recursive calls near the root require $\mathcal{O}(m)$ time. In this paper, we employ the _push out amortization_ technique introduced by Uno [18] as this trick. We first introduce some terminology. Let $P$ be a partial Eulerian trail.
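Algorithm 1 can be sketched in executable form as follows. This is an illustrative implementation only: it tests the bridge condition of Lemma 3 by a plain reachability check over the remaining edges instead of Tarjan's linear-time bridge algorithm, so it does not attain the constant amortized bound analyzed below.

```python
from collections import defaultdict

def eulerian_trails(edges, s):
    # edges: list of (u, v) pairs; yields each edge-distinct Eulerian
    # trail starting at s as a list of edge indices.
    adj = defaultdict(set)                 # vertex -> ids of incident edges
    for i, (u, v) in enumerate(edges):
        adj[u].add(i)
        adj[v].add(i)

    def other(eid, v):                     # endpoint of edge eid opposite v
        a, b = edges[eid]
        return b if a == v else a

    def reachable_edges(start, used):      # count remaining edges reachable
        seen, stack, found = {start}, [start], set()
        while stack:
            x = stack.pop()
            for eid in adj[x] - used - found:
                found.add(eid)
                w = other(eid, x)
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(found)

    used, trail = set(), []

    def gen_child(v):                      # extend the trail ending at v
        if len(used) == len(edges):
            yield list(trail)
            return
        candidates = list(adj[v] - used)   # the set Gamma_G(P)
        for eid in candidates:
            w = other(eid, v)
            used.add(eid)
            # Lemma 3: take eid if it is the sole candidate, or if every
            # remaining edge stays reachable from w (eid is not a bridge)
            if len(candidates) == 1 or \
                    reachable_edges(w, used) == len(edges) - len(used):
                trail.append(eid)
                yield from gen_child(w)
                trail.pop()
            used.remove(eid)

    yield from gen_child(s)
```

Parallel edges are handled naturally since trails are tracked by edge ids; this matches the edge-distinct setting of this section.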
The phrase “the algorithm _generates_ $P$” indicates that the algorithm computes $P$ and $\overline{P_{G}}$ at the corresponding lines of Algorithm 1. We denote by $T(P)$ the exact number of computational steps for generating $P$; that is, $T(P)$ does not include the time spent on descendants of $P$. Let $\overline{T}(P)$ be the sum of the computation times for generating the children of $P$. Roughly speaking, the push out amortization technique gives a sufficient condition for proving that an enumeration algorithm runs in $\mathcal{O}(T^{*})$ time per solution on average, where $T^{*}$ is the maximum computation time among leaves. This sufficient condition, called the _push out condition_, is defined as follows. ###### Definition 6. Let $P$ be a partial Eulerian trail. We call the following inequality the _push out condition_ for our problem: for some constants $\alpha>1$ and $\beta\geq 0$, $\overline{T}(P)\geq\alpha T(P)-\beta(\lvert\mathcal{C}(P)\rvert+1)T^{*}.$ Note that we slightly modify the original condition given in [18] to fit our purpose. The intuition behind this condition is that if the computation time of each node is smaller than the total computation time of its descendants, then the whole computation time is dominated by that of the descendants, that is, by the leaves. In our case, $T^{*}=\mathcal{O}(1)$ because $m_{P}$ at a leaf is constant. If every partial Eulerian trail has at least two children, then the condition clearly holds. However, if there is a partial Eulerian trail $P$ with at most one child, the condition may fail: for the single child $P^{\prime}$ of $P$, $\overline{P_{G}}$ is larger than $\overline{P^{\prime}_{G}}$, and thus $\overline{T}(P)$ can be smaller than $T(P)$. Hence, we modify $\overline{P_{G}}$ by contracting vertices and edges so that every non-leaf partial Eulerian trail $P$ has at least two children. This _contraction_ of a graph is performed by applying the following rules.
Let $v$ be a vertex in $\overline{P_{G}}$. Contraction Rule 1 If $v$ is a pendant, then remove $v$. Contraction Rule 2 If $v$ is incident to exactly two distinct edges $(u,v),(v,w)$ and $t(P)\neq v$, then remove $v$ and add the edge $(u,w)$ to $\overline{P_{G}}$. Figure 1: Example of Contraction Rule 1; vertex $v$ is removed from the graph. Figure 2: Example of Contraction Rule 2; vertex $v$ is removed, and the degrees of the other vertices are unchanged. See Figures 1 and 2. Note that when Contraction Rule 1 is applied to $t$, the last vertex of Eulerian trails becomes the unique neighbor of $t$ in $\overline{P_{G}}$. We also note that when applying Contraction Rule 2, $u=w$ may hold, that is, a loop on $u$ may be generated. After recursively applying these rules, we say $G$ is _contracted_ if neither Contraction Rule 1 nor Contraction Rule 2 is applicable to $G$. Clearly, there is a surjection from the partial Eulerian trails in $G$ to those in the contracted graph of $G$. The next lemma gives the time complexity of obtaining contracted graphs. ###### Lemma 7. Let $G$ be a graph and $P$ be a partial Eulerian trail in $G$ with parent $P^{\prime}$ satisfying $P=P^{\prime}+(u,v)$. Assume that the contracted graph of $\overline{P^{\prime}_{G}}$ has already been obtained. Then, the contraction of $\overline{P_{G}}$ can be done in constant time. ###### Proof. The degree of every vertex other than $u$ and $v$ is the same in $\overline{P^{\prime}_{G}}$ and $\overline{P_{G}}$. Note that each vertex in a contracted graph has degree at least three; thus, Contraction Rule 1 is not applicable. Moreover, Contraction Rule 2 can be applied only to $u$ or $v$ in $\overline{P_{G}}$. Hence, by checking the degrees of $u$ and $v$, the contraction can be done in constant time. ∎ ###### Lemma 8. Each vertex in a contracted graph is incident to at least two non-bridge edges. ###### Proof.
Let $G$ be a contracted graph such that $G$ has an Eulerian trail. Because of Contraction Rules 1 and 2, every vertex is incident to at least three edges. Suppose that a vertex $v$ is incident to two distinct bridges $b_{1}$ and $b_{2}$; otherwise, the lemma holds. Let $B_{i}$ be the connected component of $G-v$ such that $b_{i}\in B_{i}$ for $i=1,2$. Then, there are two classes of Eulerian trails: those that visit $B_{1}$ first and $B_{2}$ next, and those that visit $B_{2}$ first and $B_{1}$ next. Because $G$ is Eulerian, $v$ must be incident to at least two further edges in order to visit all the edges in $G\setminus(B_{1}\cup B_{2})$. Hence, the statement holds. ∎ Because detecting bridges can be done in $\mathcal{O}(m_{P})$ time [15], the time complexity of each iteration is $\mathcal{O}(m_{P})$. In addition, the difference between the number of edges in $\overline{P_{G}}$ and that in $\overline{Q_{G}}$ for a child $Q$ of $P$ is constant, and each proper partial Eulerian trail has at least two children by Lemma 8. Hence, the proposed algorithm satisfies the push out condition and runs in constant amortized time per solution _without outputting_. Next, we consider the output format of our algorithm. If we simply output all the solutions naïvely, then the total time complexity is $\mathcal{O}(mN)$. Hence, to achieve constant amortized time enumeration, we modify our algorithm so that it outputs only the difference between two consecutive solutions. Outputting only the differences is a well-known technique for reducing the total time complexity in the enumeration field [16, 14]. During its execution, the algorithm maintains two working memories $W$ and $\mathcal{I}$, implemented as doubly-linked lists. $W$ contains the current partial Eulerian trail and $\mathcal{I}$ contains the information about contractions. When the algorithm finds an edge $e$ such that $P+e$ is a child of $P$, it appends $e$ to $W$.
In addition, the algorithm appends a record $\xi$ to $\mathcal{I}$, where $\xi$ represents how the algorithm contracts the graph. $\xi$ is a set of tuples, each of which records what is added to and removed from $G$, and which rule is used. Then, the algorithm prints $+(e,\xi)$. It is easy to see that the length of each output is constant. When the algorithm backtracks from $P^{\prime}$, it removes $e$ from $W$ and $\xi$ from $\mathcal{I}$, and prints $-(e,\xi)$. When the algorithm reaches a leaf, it prints $\#$ to indicate that a solution is stored in $W$. Using the information stored in the working memory, we can recover the corresponding Eulerian trail whenever necessary. Moreover, allocating $\mathcal{O}(m)$ space to each working memory suffices. Lemmas 3 and 8 imply that each partial Eulerian trail has either zero or at least two children, which yields the following corollary. ###### Corollary 9. The number of partial Eulerian trails generated by the algorithm is $\mathcal{O}(N)$. Recall that partial Eulerian trails correspond to nodes of $\mathcal{T}$. Thus, by Corollary 9, the total length of the output is $\mathcal{O}(N)$ and the next theorem holds. ###### Theorem 10. One can enumerate all the Eulerian trails in a graph in constant amortized time per solution, with space and preprocessing time linear in the size of the graph. ## 4 Algorithm for vertex-distinct Eulerian trails In this section, we focus on the enumeration of vertex-distinct Eulerian trails. In a way similar to the previous section, we develop an enumeration algorithm based on the reverse search technique. We first introduce some terminology. A pair $(u,v)$ of vertices is _good_ if there are two parallel edges $e_{1}$ and $e_{2}$ between $u$ and $v$ such that the corresponding trail of $e_{1}$ in the original graph differs from that of $e_{2}$. Otherwise, we say $(u,v)$ is _bad_. Note that we can check whether $(u,v)$ is good or bad in constant time.
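Stepping back briefly to the output format of the edge-distinct algorithm: the difference stream of $+(e,\xi)$, $-(e,\xi)$, and $\#$ records can be decoded by replaying it against a stack. A minimal sketch (our own illustration; the contraction records $\xi$ are omitted):

```python
def replay(stream):
    """Decode the difference-encoded output stream of the algorithm.

    Each item is ('+', e), ('-', e), or ('#',): '+' and '-' push and
    pop an edge on the working memory W, and '#' marks that W currently
    holds a complete Eulerian trail.  Yields each solution in turn;
    the stream has total length O(N), whereas writing every trail in
    full would cost O(mN).
    """
    W = []
    for item in stream:
        if item[0] == '+':
            W.append(item[1])
        elif item[0] == '-':
            assert W.pop() == item[1]   # backtrack must match the push
        else:                           # '#': a full solution is in W
            yield list(W)
```

For the triangle example, the stream pushes edges 0, 1, 2, emits `#`, backtracks, then pushes 2, 1, 0 and emits `#` again, recovering both trails.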
The main difference from the edge-distinct problem is that adding parallel edges between a bad pair to the current partial Eulerian trail may generate duplicate solutions. Note that two parallel edges between a good pair may yield two children if these parallel edges are generated by Contraction Rule 2, because an edge added by Contraction Rule 2 carries two or more original edges. To ensure that each proper partial Eulerian trail has at least two children, that is, that each vertex $v$ has at least two distinct edges, we slightly modify the input graph of each recursive call as follows. Contraction Rule 3 If $v$ has exactly two neighbors $u$ and $w$ such that $\mu_{v}(u)=1$, $(v,w)$ is bad, $u$ and $w$ are distinct from $v$, and $v$ has no loops, then remove $\mu_{v}(w)-1$ edges between $v$ and $w$, and add a vertex $v^{\prime}$ and $\mu_{v}(w)-1$ edges between $v^{\prime}$ and $w$. Contraction Rule 4 If $v$ has no loops and exactly one neighbor $u$ such that $(u,v)$ is bad and $\mu_{v}(u)\geq 3$, then remove the incident edges of $v$, add $\lfloor\frac{\mu_{v}(u)}{2}\rfloor$ loops to $u$, and add an edge $(u,v)$ if $\mu_{v}(u)$ is odd. Figure 3: Example of Contraction Rule 3; after applying it, Contraction Rule 2 and Contraction Rule 4 can be applied to the resulting graph. Figure 4: Example of Contraction Rule 4; after applying it, Contraction Rule 1 may become applicable, and the degree of $u$ is unchanged. See Figures 3 and 4. Note that after performing Contraction Rule 3, the numbers of edges in the resulting graph $G^{\prime}$ and in the original graph $G$ are the same, and Contraction Rule 2 is applicable to $v$ in $G^{\prime}$. In addition, after performing Contraction Rule 4, Contraction Rule 1 may become applicable.
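The bookkeeping of Contraction Rule 4 can be made concrete. The following sketch is our own illustration, not the paper's data structure: the multiplicity-map representation and the `is_bad` predicate are assumptions, and the caller is assumed to have verified that $u$ is the unique neighbor of $v$.

```python
def contraction_rule_4(mu, loops, v, u, is_bad):
    """Apply Contraction Rule 4 at vertex v with unique neighbor u.

    mu maps frozenset({x, y}) to the number of parallel x-y edges, and
    loops maps a vertex to its number of self-loops.  If v has no
    loops, (u, v) is bad, and mu_v(u) >= 3, the v-u edges are replaced
    by floor(mu_v(u) / 2) loops at u, plus one surviving (u, v) edge
    when mu_v(u) is odd.  Returns True iff the rule was applied.
    """
    k = mu.get(frozenset((u, v)), 0)
    if loops.get(v, 0) > 0 or not is_bad(u, v) or k < 3:
        return False
    loops[u] = loops.get(u, 0) + k // 2  # half the multiplicity as loops
    mu[frozenset((u, v))] = k % 2        # one edge survives iff k is odd
    return True
```

The parity split mirrors the rule's statement: an even multiplicity turns entirely into loops at $u$, while an odd multiplicity leaves a single $(u,v)$ edge, after which Contraction Rule 1 may fire on $v$.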
When Contraction Rule 3 is applicable to $G$, there are (a) at least one solution containing $(u,v,w)$ or $(w,v,u)$ and (b) at least one solution containing $(w,v,w)$. This implies that if $t(P)=v$ and Contraction Rule 3 is applicable, then we can generate two partial Eulerian trails, extended as in (a) and (b). In addition, we can easily obtain a one-to-one correspondence between the solutions in $G$ and those in $G^{\prime}$, where $G^{\prime}$ is obtained by applying Contraction Rule 3 or Contraction Rule 4. However, applying Contraction Rules 3 and 4 may make $G$ drastically smaller. Let $(v_{1},\dots,v_{\ell},v_{\ell+1}=v_{1})$ be a vertex sequence such that for each $i=2,\dots,\ell$, $N(v_{i})=\left\{v_{i-1},v_{i+1}\right\}$ and $\mu_{v_{i}}(v_{i-1})=\mu_{v_{i}}(v_{i+1})=2$. For each $v_{i}$, if $t(P)=v_{i}$, then $P$ has two children. However, after adding an edge incident to $v_{i}$ to $P$, $v_{j}$ has at most one child for $j=2,\dots,\ell-1$. If the input graph forms this sequence, then after contracting by Contraction Rule 3 and Contraction Rule 1, the size of the resulting graph becomes constant. This shrinking prevents the algorithm from satisfying the push out condition. To avoid such a situation, we introduce another rule. Figure 5: Example of the Multibirth Rule; the modification yields one or two graphs, obtained by replacing (a) with (b), and (a) with (c). Multibirth Rule Let $v$ be a vertex adjacent to exactly two distinct neighbors $u,w$ such that (1) $t(P)\neq v$, (2) $v$ has no loops, (3) $\mu_{v}(u)=\mu_{v}(w)=2$, and (4) both $(u,v)$ and $(v,w)$ are bad. Then make a copy $G_{1}$ of the input graph $G$, remove $v$ from $G_{1}$, and add two edges between $u$ and $w$. In addition, if $v$ is not a cut vertex in $G$, then make an additional copy $G_{2}$ of $G$, remove $v$ from $G_{2}$, and add self-loops $(u,u)$ and $(w,w)$.
This rule comes from the following observation: each solution in $G$ contains either (1) $(u,v,w)$ twice or (2) $(u,v,u)$ and $(w,v,w)$. Note that if $v$ is a cut vertex, then all the solutions contain the former subtrails. Moreover, the sets of solutions in $G_{1}$ and $G_{2}$ are disjoint. Hence, if there is such a vertex $v$, then the algorithm replaces a child $Q$ of $P$ with two children $Q_{1}$ and $Q_{2}$ whose associated graphs are $G_{1}$ and $G_{2}$, respectively. In this section, we say that a graph $G$ is _contracted_ if none of Contraction Rules 1 to 4 is applicable to $G\setminus P$. We now show the key lemmas of this section for contracted graphs. ###### Lemma 11. Let $G$ be a contracted graph and $P$ be a partial Eulerian trail in $G$. Suppose that $\overline{P^{\prime}_{G}}$ of the parent $P^{\prime}$ of $P$ has already been obtained. Then, contracting $\overline{P_{G}}$ can be done in constant time. ###### Proof. Let $e=(u,v)$ be the edge such that $P=P^{\prime}+e$. Firstly, removing $e$ from $\overline{P^{\prime}_{G}}$ can be done in constant time. We next consider which rules can be applied to $\overline{P_{G}}$. Because $\overline{P^{\prime}_{G}}$ is contracted, we proceed by a case analysis on the edge multiplicities $\mu_{v}$ in $\overline{P^{\prime}_{G}}$ and the number of neighbors of $v$, as follows. Case A: $v$ has at least three distinct neighbors $u,w_{1},w_{2}$. If $\mu_{v}(u)>1$ or $v$ has more than three distinct neighbors, then no rule is applicable to $v$. We therefore assume that $v$ has exactly three distinct neighbors $u,w_{1},w_{2}$ with $\mu_{v}(u)=1$. A.1: $\mu_{v}(w_{1})\geq 2$ or $\mu_{v}(w_{2})\geq 2$. No rule can be applied to $w_{1}$ or $w_{2}$ after adding $e$ to $P^{\prime}$. A.2: $\mu_{v}(w_{1})=1$ and $\mu_{v}(w_{2})\geq 1$. We first apply Contraction Rule 3 and Contraction Rule 2 to $v$. Then, possibly, we also apply Contraction Rules 1, 2, and 4 to the copy of $v$.
If $w_{2}$ is incident to a vertex $w_{3}$ with $\mu_{w_{2}}(w_{3})=1$, then Contraction Rule 2 can be applied to $w_{2}$. However, no further rule can be applied to $w_{1}$ or $x$. Case B: $v$ has exactly two distinct neighbors $u,w$. B.1: $\mu_{v}(u)\geq 3$. If $\mu_{v}(u)=3$ and $\mu_{v}(w)=2$, then this case is the same as Case A.1. Otherwise, no rule can be applied to $v$ or $w$ after adding $e$ to $P^{\prime}$. B.2: $\mu_{v}(u)=2$. This case is the same as Case A.2 after adding $e$ to $P^{\prime}$. B.3: $\mu_{v}(u)=1$. In this case, Contraction Rule 3 could be applied to $v$ in $\overline{P^{\prime}_{G}}$; because $\overline{P^{\prime}_{G}}$ is contracted, this is a contradiction, so this case does not occur. The same case analysis applies to $u$. From the above observations, the number of rule applications is constant, and each rule can be performed in constant time. Therefore, the lemma holds. ∎ Let $P^{\prime}$ be the parent of $P$ with $P=P^{\prime}+e$. Assume that $\overline{P_{G}}$ is obtained by removing $e$ and repeatedly applying Contraction Rules 1 to 4 to $\overline{P^{\prime}_{G}}$, and that $\overline{P^{\prime}_{G}}$ has no vertex to which the Multibirth Rule can be applied. It follows that the number of vertices to which the Multibirth Rule is applicable in $\overline{P_{G}}$ is constant. Moreover, when generating the children of $P$, we avoid making whole copies of $\overline{P_{G}}$ when applying the Multibirth Rule, in order to reduce the space usage. Instead of whole copying, we locally modify the vertex $v$ targeted by the Multibirth Rule. Hence, the following clearly holds. ###### Lemma 12. Let $G$ be a graph and $P$ be a partial Eulerian trail in $G$. Suppose that $\overline{P_{G}}$ is contracted. Then, the children of $P$ that are generated by applying the Multibirth Rule to $\overline{P_{G}}$ can be obtained in constant time. From the above discussion, we obtain the following key lemma. ###### Lemma 13.
A proper partial Eulerian trail $P$ of a contracted graph $G$ has at least two children. ###### Proof. Because $G$ is contracted, each vertex has at least two distinct neighbors. If $t(P)$ were incident to two bridges, this would contradict the definition of a partial Eulerian trail; thus, $t(P)$ is incident to at most one bridge. If $t(P)$ is incident to no bridge, then the lemma clearly holds. Suppose that $t(P)$ is incident to a bridge $e=(t(P),u)$. If $t(P)$ were adjacent to exactly two vertices, then Contraction Rule 3 would be applicable. Hence, $t(P)$ is adjacent to more than two distinct vertices and the statement holds. ∎ By Lemmas 11, 12, and 13, each non-leaf node $P$ of the family tree built by the algorithm has at least two children and can be processed in $\mathcal{O}(m_{P})$ time, where $m_{P}$ is the number of edges in the contracted graph $\overline{P_{G}}$. Recall that the computation time is dominated by detecting bridges in $\overline{P_{G}}$. Thus, by the same discussion as in the previous section, the main theorem of this section is established. ###### Theorem 14. All vertex-distinct Eulerian trails can be enumerated in constant amortized time per solution, with linear preprocessing time and linear space. ## Acknowledgement This work was partially supported by JST CREST Grant Number JPMJCR18K3 and JSPS KAKENHI Grant Numbers 19K20350, JP19J10761, and JP20H05793, Japan. ## References * [1] T. van Aardenne-Ehrenfest and N. G. de Bruijn. Circuits and trees in oriented linear graphs. Simon Stevin: Wis- en Natuurkundig Tijdschrift, 28:203–217, 1951. * [2] David Avis and Komei Fukuda. Reverse search for enumeration. Discrete Applied Mathematics, 65(1):21–46, 1996. First International Colloquium on Graphs and Optimization. * [3] Etienne Birmelé, Rui Ferreira, Roberto Grossi, Andrea Marino, Nadia Pisanti, Romeo Rizzi, and Gustavo Sacomoto. Optimal listing of cycles and st-paths in undirected graphs.
In SODA 2012: the 24th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1884–1896, 2012. * [4] Graham R. Brightwell and Peter Winkler. Counting Eulerian circuits is #P-complete. In Camil Demetrescu, Robert Sedgewick, and Roberto Tamassia, editors, Proceedings of the Seventh Workshop on Algorithm Engineering and Experiments and the Second Workshop on Analytic Algorithmics and Combinatorics, ALENEX/ANALCO 2005, Vancouver, BC, Canada, 22 January 2005, pages 259–262. SIAM, 2005. * [5] Zhi-Zhong Chen, Xin He, and Chun-Hsi Huang. Finding double Euler trails of planar graphs in linear time. SIAM Journal on Computing, 31(4):1255–1285, 2002. * [6] Alessio Conte, Roberto Grossi, Grigorios Loukides, Nadia Pisanti, Solon P. Pissis, and Giulia Punzi. Fast assessment of Eulerian trails. In Fourth International Workshop on Enumeration Problems and Applications, 2020. * [7] Rui Ferreira, Roberto Grossi, Romeo Rizzi, Gustavo Sacomoto, and Marie-France Sagot. Amortized $\tilde{O}(|V|)$-delay algorithm for listing chordless cycles in undirected graphs. In ESA 2014: the 22nd Annual European Symposium on Algorithms, volume 8737 of Lecture Notes in Computer Science, pages 418–429, 2014. * [8] Carl Hierholzer and Chr. Wiener. Ueber die Möglichkeit, einen Linienzug ohne Wiederholung und ohne Unterbrechung zu umfahren. Mathematische Annalen, 6(1):30–32, 1873. * [9] Jarkko Kari. Synchronizing finite automata on Eulerian digraphs. Theoretical Computer Science, 295(1):223–232, 2003. * [10] Yosuke Kikuchi. Enumerating All Eulerian Trails (in Japanese). Technical Report Vol.2010-AL-131 No.7, IPSJ, 2010. * [11] Carl Kingsford, Michael C. Schatz, and Mihai Pop. Assembly complexity of prokaryotic genomes using short reads. BMC Bioinformatics, 11(1), January 2010. * [12] Kuntal Roy. Optimum gate ordering of CMOS logic gates using Euler path approach: Some insights and explanations. Journal of Computing and Information Technology, 15(1):85, 2007.
* [13] Nicole Schweikardt, Luc Segoufin, and Alexandre Vigny. Enumeration for FO queries over nowhere dense graphs. In Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, SIGMOD/PODS ’18, pages 151–163, New York, NY, USA, 2018. Association for Computing Machinery. * [14] Akiyoshi Shioura, Akihisa Tamura, and Takeaki Uno. An optimal algorithm for scanning all spanning trees of undirected graphs. SIAM Journal on Computing, 26(3):678–692, 1997. * [15] R. Endre Tarjan. A note on finding the bridges of a graph. Information Processing Letters, 2(6):160–161, 1974. * [16] Etsuji Tomita, Akira Tanaka, and Haruhisa Takahashi. The worst-case time complexity for generating all maximal cliques and computational experiments. Theoretical Computer Science, 363(1):28–42, 2006. Computing and Combinatorics. * [17] Shuji Tsukiyama, Mikio Ide, Hiromu Ariyoshi, and Isao Shirakawa. A new algorithm for generating all the maximal independent sets. SIAM Journal on Computing, 6(3):505–517, 1977. * [18] Takeaki Uno. Constant time enumeration by amortization. In Proc. of WADS 2015, pages 593–605, 2015. * [19] Kunihiro Wasa. Enumeration of enumeration algorithms, 2016. arXiv:1605.05102.
# Neutrino non-standard interactions meet precision measurements of $N_{\rm eff}$ Yong Du$^{a,b,c,d,e}$ and Jiang-Hao Yu$^{a,b,c,d,e}$ (corresponding author) $^{a}$CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, P. R. China; $^{b}$School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, P. R. China; $^{c}$Center for High Energy Physics, Peking University, Beijing 100871, China; $^{d}$School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China; $^{e}$International Centre for Theoretical Physics Asia-Pacific, Beijing/Hangzhou, China ###### Abstract The number of relativistic species, $N_{\rm eff}$, has been precisely calculated in the standard model, and would be measured to the percent level by CMB-S4 in the future. Neutral-current non-standard interactions would affect neutrino decoupling in the early Universe, thus modifying $N_{\rm eff}$. We parameterize those operators up to dimension-7 in the effective field theory framework, and then provide a complete, generic, and analytical dictionary for the collision term integrals. From precision measurements of $N_{\rm eff}$, the most stringent constraint is obtained for the dimension-6 vector-type neutrino-electron operator, whose scale is constrained to be above about 195 (331) GeV from Planck (CMB-S4). We find our results complementary to other experiments such as neutrino coherent scattering, neutrino oscillation, collider, and neutrino deep inelastic scattering experiments. arXiv: 2101.10475 ## 1 Introduction The great triumph of the Standard Model (SM) of particle physics was the discovery of the Higgs particle in 2012 Aad:2012tfa ; Chatrchyan:2012ufa .
However, the SM cannot be the complete theory, as several puzzles remain unsolved, such as neutrino masses, dark matter, and the baryon asymmetry of the Universe, which require new physics beyond the SM. Numerous new physics models have been invented and studied to address these issues, yet no definite signal of any of these models has been observed at colliders or in low-energy precision measurements. This has in turn motivated physicists to search for new physics in a model-independent and systematic way. Effective Field Theories (EFTs) provide such a systematic and model-independent framework for the study of new physics, especially if its characteristic scale is above the weak scale. The EFT obtained by integrating out the newly introduced heavy particles is called the SM EFT (SMEFT) Weinberg:1979sa ; Buchmuller:1985jz ; Grzadkowski:2010es ; Lehman:2014jma ; Li:2020gnx ; Murphy:2020rsh ; Li:2020xlh ; Liao:2020jmn ; Liao:2016hru , which respects the SM gauge group and is valid down to the weak scale. Below the weak scale, the corresponding EFT is the Low-energy EFT (LEFT) Jenkins:2017jig ; Liao:2020zyx ; Li:2020tsi ; Murphy:2020cly , in which the top quark, the SU(2) gauge bosons, and the Higgs particle of the SMEFT are all integrated out. As a consequence, the Lagrangian of the LEFT respects the $\rm SU(3)_{c}\times U(1)_{\rm EM}$ gauge group. Since the discovery of neutrino oscillations Davis:1968cp ; Ahmad:2001an ; Fukuda:1998mi ; An:2012eh ; Ahn:2002up ; Michael:2006rx , neutrino non-standard interactions (NSIs), first discussed in Refs. Wolfenstein:1977ue ; Mikheev:1986gs and nicely reviewed in Refs. Davidson:2003ha ; Ohlsson:2012kf ; Farzan:2017xzy ; Dev:2019anc ; Abazajian:2012ys , have gained significant attention in recent years and can be described within the LEFT framework. Very stringent constraints on these NSI operators have been obtained; see, for example, Refs.
Agrawal:2013hya ; Nelson:2013pqa ; Pobbe:2017wrj ; Choudhury:2018xsm ; Friedland:2011za ; Babu:2020nna ; Falkowski:2017pss ; Escrihuela:2011cf ; Coloma:2017ncl ; Altmannshofer:2018xyo ; Babu:2019mfe ; Khan:2019cvi ; Papoulias:2019xaw ; Canas:2019fjw ; Falkowski:2019xoe for recent theoretical studies and Refs. Abbiendi:2003dh ; Breitweg:1999ssa ; Adloff:2000dp ; Khachatryan:2014rra ; Aad:2015zva for experimental investigations. On the other hand, since these NSI operators can be matched to SMEFT operators, constraints on the NSIs from low-energy experiments can also be translated into constraints on the SMEFT operators, and thus on the UV models. While we are not interested in the UV completion of neutrino NSIs in this work, we comment that these NSI operators can be induced, for example, by leptoquark models Dorsner:2016wpm and/or $\rm U(1)^{\prime}$ models; see, for example, the discussion in Ref. Wise:2014oea . These neutrino NSI operators can be generically classified into charged-current (CC) and neutral-current (NC) ones. (For the case of generic neutrino interactions, see Refs. Lindner:2016wff ; Rodejohann:2017vup ; Bischer:2018zcz ; Bischer:2019ttk ; Khan:2019jvr .) In Ref. Biggio:2009nt , bounds on the CC NSIs were obtained from Cabibbo–Kobayashi–Maskawa (CKM) Cabibbo:1963yz ; Kobayashi:1973fv unitarity, weak universality tests from pion decay Loinaz:2004qc , the short-baseline neutrino oscillation experiments KARMEN Eitel:2000by and NOMAD Astier:2001yj ; Astier:2003gs , and loop corrections to $\mu\to e$ conversion in gold Zyla:2020zbs . Very recently, CC NSIs were studied in Refs. Terol-Calvo:2019vck ; Du:2020dwr within the SMEFT framework. For NC NSIs at dimension-6, one-loop electroweak radiative corrections were recently calculated in Ref. Hill:2019xqk within the SM, including two-loop matching and three-loop running for the lepton sector.
Constraints from collider searches, dark matter direct detection experiments, and superbeam experiments can be found in Refs. Harnik:2012ni ; Cadeddu:2018izq ; Huang:2018nxj ; Shoemaker:2018vii ; AristizabalSierra:2017joc ; Gonzalez-Garcia:2018dep ; Dutta:2017nht ; Bertuzzo:2017tuf ; Dent:2016wcr ; Cerdeno:2016sfi ; Coloma:2014hka ; Pospelov:2013rha ; Pospelov:2012gm ; Kopp:2007ne ; Liu:2020emq . See Refs. Falkowski:2018dmy ; Bischer:2018zcz ; Pandey:2019apj ; Deepthi:2016erc ; Deepthi:2017gxg for neutrino trident production and neutrino-electron scattering at DUNE, Refs. Davidson:2003ha ; Barranco:2005ps ; Barranco:2007ej ; Bolanos:2008km ; Biggio:2009nt ; Lei:2019nma ; Esmaili:2013fva ; Friedland:2004ah ; Friedland:2005vy ; Khan:2017oxw for oscillation experiments, Refs. Davidson:2003ha ; Biggio:2009kv for loop bounds on dimension-6 electron-neutrino contact operators, Refs. Tomalak:2020zfh ; Denton:2020hop ; Hoferichter:2020osn ; Akimov:2017ade ; Altmannshofer:2018xyo ; Miranda:2019skf ; Deniz:2010mp ; Khan:2016uon for neutrino coherent scattering experiments, and Ref. Ismail:2020yqc for FASER$\nu$. Note also that the dimension-6 NC electron self-interacting NSIs could modify the weak mixing angle. This angle would be very precisely measured by the upcoming low-energy MOLLER experiment at the Jefferson Lab Benesch:2014bas and the planned P2 experiment at MESA Berger:2015aaa . Recently, the SM prediction of the weak mixing angle at two loops was obtained in Ref. Du:2019hwm . For NC neutrino NSIs up to dimension-7, some of them were previously investigated in Refs. Esteban:2018ppq ; Farzan:2018gtr ; Billard:2018jnl ; AristizabalSierra:2018eqm ; Kosmas:2017tsq ; Dent:2017mpr ; Liao:2017uzy ; Dent:2016wcr ; Lindner:2016wff , while a relatively more comprehensive study was recently presented in Ref. Altmannshofer:2018xyo . However, not all NC NSI operators up to dimension-7 are bounded by Ref. Altmannshofer:2018xyo or other existing work.
For example, dimension-6 neutrino self-interacting operators are not studied, since previous work mainly focuses on neutrino oscillation, neutrino coherent scattering, and collider experiments, which are insensitive to these operators. Furthermore, at dimension-7, only neutrino-photon, neutrino-gluon, and neutrino-quark operators are investigated in Ref. Altmannshofer:2018xyo , while the dimension-7 neutrino-electron operators have not yet been constrained. In the early Universe, where only neutrinos, electrons, positrons, and photons are present, these neutrino-neutrino, neutrino-electron/positron, and neutrino-photon NC NSIs would affect neutrino decoupling, thus modifying the effective number of relativistic degrees of freedom, viz., $N_{\rm eff}$. In light of the precision measurements of $N_{\rm eff}$ from LEP ALEPH:2005ab and Planck Aghanim:2018eyx , the upcoming SPT-3G Benson:2014qhw and the Simons Observatory Ade:2018sbj , as well as the proposals from Cosmic Microwave Background-Stage 4 (CMB-S4) Abazajian:2016yjj , CORE DiValentino:2016foa , PICO Hanany:2019lle , and CMB-HD Sehgal:2019ewc , one naturally expects constraints on these NC NSIs from $N_{\rm eff}$. In this work, we investigate all such neutrino-neutrino, neutrino-electron/positron, and neutrino-photon NC NSI operators up to dimension-7, as well as their impact on $N_{\rm eff}$. Since a light mediator would directly serve as an additional degree of freedom and thus lead to a large $N_{\rm eff}$, the NC NSI operators we consider in this work are assumed to be induced by integrating out heavy new physics above $\sim\mathcal{O}(\rm 100\,MeV)$, that is, around the muon mass or heavier. To obtain new physics corrections to $N_{\rm eff}$, the SM prediction for $N_{\rm eff}$ has to be known precisely in the first place. However, it has been known for a long time that the precision calculation of $N_{\rm eff}$ is very challenging.
Within the SM, the precision calculation of $N_{\rm eff}$ has been carried out through the density matrix formalism. Due to its complexity, however, the density matrix formalism is very difficult to generalize to other scenarios, for example, when new physics is present. For recent developments in the precision calculation of $N_{\rm eff}$, see Refs. Bennett:2019ewm ; Bennett:2020zkv ; Akita:2020szl ; Escudero:2018mvt ; Escudero:2020dfa . In this work, we adopt the strategy developed in Refs. Escudero:2018mvt ; Escudero:2020dfa , which reproduces the SM prediction of $N_{\rm eff}$, runs fast, and can easily be generalized to include effects from various kinds of new physics. To find corrections to $N_{\rm eff}$ from new physics, this strategy has already been applied in Refs. Luo:2020sho ; Luo:2020fdt with the introduction of right-handed partners of neutrinos, Refs. Kelly:2020aks ; Adshead:2020ekg with dark matter and/or sterile neutrinos, Ref. Li:2020roy for a dark photon, Ref. Venzor:2020ova with the introduction of neutrino-scalar interactions, Ref. Froustey:2020mcq with the inclusion of neutrino flavor oscillations and primordial nucleosynthesis, and Ref. Ibe:2020dly with a light $Z^{\prime}$ to explain the recent XENON1T excess Aprile:2020tmw . Applying this strategy to the calculation of $N_{\rm eff}$ with the inclusion of NC NSIs up to dimension-7, in this work, we * provide a complete, generic, and analytical dictionary for the collision term integrals in section 4. This dictionary can be used directly for computing corrections to $N_{\rm eff}$ from new physics, either in the EFT framework up to dimension-7 or in UV models, as long as the new physics lies above $\sim\mathcal{O}(\rm 100\,MeV)$; * present our constraints on the NC neutrino NSI operators up to dimension-7 in section 5, and compare our results with previous ones. The rest of this work is organized as follows.
We briefly review neutrino decoupling in the early Universe and the definition of $N_{\rm eff}$ in section 2. In section 3, we discuss the strategy developed in Refs. Escudero:2018mvt ; Escudero:2020dfa , and then summarize our strategy for calculating the collision term integrals. Since these collision term integrals are essential to speed up the calculation of $N_{\rm eff}$, we provide a complete, generic, and analytical dictionary of the collision term integrals, as well as the NSI operators we study in this work, in section 4. Constraints on these NC NSI operators are presented in section 5. We conclude in section 6. ## 2 Brief review of neutrino decoupling and $N_{\rm{eff}}$ In the early Universe, when the temperature is above $\mathcal{O}(10)$ MeV and below the muon mass, electrons, positrons, neutrinos and photons are kept in thermal equilibrium by electroweak interactions. As the Universe expands and cools, neutrinos decouple from the rest of the plasma at around $T_{\rm dec}=2$ MeV. The neutrinos then undergo simple dilution from the expansion of the Universe, while $e^{\pm}$ and photons remain in thermal equilibrium. However, when the photon temperature drops further below the electron mass $m_{e}$, $\gamma\gamma\to e^{+}e^{-}$ becomes suppressed while the inverse process is still permitted, heating up the photons. The number of relativistic degrees of freedom during this period can be parameterized by $N_{\rm eff}$ Shvartsman:1969mm ; Steigman:1977kc ; Mangano:2001iu : $\displaystyle\rho_{R}=\left[1+\frac{7}{8}\left(\frac{4}{11}\right)^{\frac{4}{3}}N_{\rm eff}\right]\rho_{\gamma}$ (1) with $\rho_{\gamma}$ the photon energy density, and $\rho_{R}$ the total energy density of all relativistic species during this epoch. 
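As a quick numerical illustration, eq. (1) can be inverted for $N_{\rm eff}$ given the two energy densities; a minimal sketch (the function name is ours):

```python
def n_eff_from_densities(rho_R, rho_gamma):
    """Invert eq. (1): N_eff = (rho_R / rho_gamma - 1) * (8/7) * (11/4)^(4/3)."""
    return (rho_R / rho_gamma - 1.0) * (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0)

# Instantaneous decoupling: each of the three neutrino flavors carries
# rho_nu = (7/8) (4/11)^{4/3} rho_gamma, so N_eff = 3 by construction.
rho_gamma = 1.0
rho_nu = (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0) * rho_gamma
print(n_eff_from_densities(rho_gamma + 3 * rho_nu, rho_gamma))  # 3.0 (up to rounding)
```
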
Equivalently, $\displaystyle N_{\rm eff}\equiv\left(\frac{\rho_{R}-\rho_{\gamma}}{\rho_{\nu}^{0}}\right)\left(\frac{\rho_{\gamma}^{0}}{\rho_{\gamma}}\right),$ (2) with $\rho_{\nu}^{0}$ the energy density of a single massless neutrino, and $\rho_{\gamma}^{0}$ the energy density of photons in the instantaneous decoupling limit. Obviously, in the instantaneous limit, $\rho_{\gamma}^{0}=\rho_{\gamma}$ and $\rho_{R}=3\rho_{\nu}^{0}+\rho_{\gamma}$, resulting in the well-known $N_{\rm eff}=3$. On the other hand, due to the tininess of the neutrino masses, the three neutrino flavors can be treated as effectively massless, which permits expressing $N_{\rm eff}$ in eq. (2) in terms of the photon temperature $T_{\gamma}$ and the neutrino temperature $T_{\nu}$ (assuming $T_{\gamma}=T_{e}$, which is valid since photons and electrons are tightly coupled during neutrino decoupling): $\displaystyle N_{\rm eff}=3\left(\frac{11}{4}\right)^{4/3}\left(\frac{T_{\nu}}{T_{\gamma}}\right)^{4}.$ (3) Similarly, in the instantaneous decoupling limit, ${T_{\nu}}/{T_{\gamma}}=(4/11)^{1/3}$ Kolb:1990vq and once again $N_{\rm eff}=3$. However, it has been known for decades that the instantaneous decoupling picture is not accurate. Indeed, neutrinos still interact slightly with the electromagnetic plasma, and neutrino oscillations are also active during neutrino decoupling deSalas:2016ztq ; Mangano:2005cc ; Hannestad:2001iy ; Dolgov:2002ab . Furthermore, the electromagnetic plasma also receives finite-temperature QED corrections. Taking all these effects into account, one finds $N_{\rm eff}=3.044$ Akita:2020szl ; Froustey:2020mcq . Corrections from these effects will be discussed in detail in sections 3.1.1 and 4.3. ## 3 Setup of the Boltzmann equation The evolution of the phase space distribution (PSD) of any particle in the early Universe is governed by the Boltzmann equation, which we briefly review below. 
As mentioned in the introduction, we follow the discussion in Ref. Escudero:2020dfa , which simplifies the calculation of $N_{\rm eff}$ significantly and reproduces the prediction for $N_{\rm eff}$ obtained using the density matrix formalism. ### 3.1 The Boltzmann equation The Boltzmann equation reads $\displaystyle\frac{\partial f_{i}}{\partial t}-Hp\frac{\partial f_{i}}{\partial p}=\mathcal{C}[f_{i}],$ (4) with $f_{i}(p,t)$ the PSD of particle $i$, $H$ the Hubble parameter, which accounts for the dilution effect from the expansion of the Universe, and $\mathcal{C}$ the collision term defined as333Note that in our setup, we include the symmetry factor in the definition of $\langle\mathcal{M}^{2}\rangle$ throughout this work. $\displaystyle\mathcal{C}\left[f_{i}\right]\equiv$ $\displaystyle\frac{1}{2E_{i}}\sum_{X,Y}\int\prod_{i,j}d\Pi_{X_{i}}d\Pi_{Y_{j}}(2\pi)^{4}\delta^{4}\left(p_{i}+p_{X}-p_{Y}\right)$ $\displaystyle\times\left(\langle\mathcal{M}^{2}\rangle_{Y\rightarrow i+X}\prod_{i,j}f_{Y_{j}}\left[1\pm f_{i}\right]\left[1\pm f_{X_{i}}\right]-\langle\mathcal{M}^{2}\rangle_{i+X\rightarrow Y}\prod_{i,j}f_{i}f_{X_{i}}\left[1\pm f_{Y_{j}}\right]\right),$ (5) where $d\Pi_{{i}}\equiv d^{3}p_{i}/[(2\pi)^{3}2E_{i}]$ and “$+$” (“$-$”) is for bosonic (fermionic) particles. Note that the difference in the last line above correctly accounts for the production and annihilation of particle $i$. Upon integrating over the phase space of particle $i$ on both sides of eq. (4), one finds444Without any ambiguity, we suppress the index $i$ from here on. 
$\displaystyle\frac{dn}{dt}+3Hn$ $\displaystyle=\frac{\delta n}{\delta t}\equiv\int g\frac{d^{3}p}{(2\pi)^{3}}\mathcal{C}[f],$ (6) $\displaystyle\frac{d\rho}{dt}+3H(\rho+p)$ $\displaystyle=\frac{\delta\rho}{\delta t}\equiv\int gE\frac{d^{3}p}{(2\pi)^{3}}\mathcal{C}[f],$ (7) where $g$ is the number of internal degrees of freedom of particle $i$, $E$ is its energy, and $n$ and $\rho$ are the number and energy densities of particle $i$ respectively. Note that after the phase space integration on the right-hand side of eqs. (6-7), ${\delta n}/{\delta t}$ and ${\delta\rho}/{\delta t}$ are functions of the temperature $T$, the chemical potential $\mu$ and the model parameters only.555With the inclusion of NSI operators, they will also depend on the scale of new physics and the Wilson coefficients. Applying the chain rule to $n(T,\mu)$ and $\rho(T,\mu)$, one then readily obtains the following equations: $\displaystyle\frac{dT}{dt}=$ $\displaystyle\frac{1}{\frac{\partial n}{\partial\mu}\frac{\partial\rho}{\partial T}-\frac{\partial n}{\partial T}\frac{\partial\rho}{\partial\mu}}\left[-3H\left((p+\rho)\frac{\partial n}{\partial\mu}-n\frac{\partial\rho}{\partial\mu}\right)+\frac{\partial n}{\partial\mu}\frac{\delta\rho}{\delta t}-\frac{\partial\rho}{\partial\mu}\frac{\delta n}{\delta t}\right]$ (8) $\displaystyle\frac{d\mu}{dt}=$ $\displaystyle\frac{-1}{\frac{\partial n}{\partial\mu}\frac{\partial\rho}{\partial T}-\frac{\partial n}{\partial T}\frac{\partial\rho}{\partial\mu}}\left[-3H\left((p+\rho)\frac{\partial n}{\partial T}-n\frac{\partial\rho}{\partial T}\right)+\frac{\partial n}{\partial T}\frac{\delta\rho}{\delta t}-\frac{\partial\rho}{\partial T}\frac{\delta n}{\delta t}\right].$ (9) These two equations describe the evolution of $T$ and $\mu$ for any particle species in the early Universe, and can thus be used to solve for the decoupling of neutrinos from the rest of the plasma, as we will see later in this section. 
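Eqs. (8)-(9) can be evaluated numerically for any equation of state once $n(T,\mu)$ and $\rho(T,\mu)$ are specified. The sketch below (our own helper names; partial derivatives taken by central finite differences) checks them in the collisionless limit for a massless Maxwell-Boltzmann species, where free streaming implies $dT/dt=-HT$ and $d\mu/dt=-H\mu$:

```python
import math

def dT_dmu_dt(n, rho, pres, dn_coll, drho_coll, T, mu, H, eps=1e-6):
    """Evaluate dT/dt and dmu/dt from eqs. (8)-(9), with the partial
    derivatives of n(T, mu) and rho(T, mu) taken by central differences."""
    dn_dT = (n(T + eps, mu) - n(T - eps, mu)) / (2 * eps)
    dn_dmu = (n(T, mu + eps) - n(T, mu - eps)) / (2 * eps)
    drho_dT = (rho(T + eps, mu) - rho(T - eps, mu)) / (2 * eps)
    drho_dmu = (rho(T, mu + eps) - rho(T, mu - eps)) / (2 * eps)
    det = dn_dmu * drho_dT - dn_dT * drho_dmu
    w = pres(T, mu) + rho(T, mu)  # enthalpy density p + rho
    dT_dt = (-3 * H * (w * dn_dmu - n(T, mu) * drho_dmu)
             + dn_dmu * drho_coll - drho_dmu * dn_coll) / det
    dmu_dt = -(-3 * H * (w * dn_dT - n(T, mu) * drho_dT)
               + dn_dT * drho_coll - drho_dT * dn_coll) / det
    return dT_dt, dmu_dt

# Massless Maxwell-Boltzmann species with g = 1:
#   n = T^3 e^{mu/T} / pi^2,  rho = 3 n T,  p = n T.
n_mb = lambda T, mu: T**3 * math.exp(mu / T) / math.pi**2
rho_mb = lambda T, mu: 3 * T * n_mb(T, mu)
p_mb = lambda T, mu: T * n_mb(T, mu)

# Collisionless limit (delta n / delta t = delta rho / delta t = 0):
dT, dmu = dT_dmu_dt(n_mb, rho_mb, p_mb, 0.0, 0.0, T=2.0, mu=0.5, H=1.0)
print(dT, dmu)  # close to (-2.0, -0.5), i.e. -H*T and -H*mu
```

With the collision terms switched on, the same routine is what drives the $T_{\gamma}$ and $T_{\nu_{\alpha}}$ evolution equations of the next subsection.
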
#### 3.1.1 Evolution of $T_{\gamma}$, $T_{\nu}$ and $\mu_{\nu}$ in the SM At the time of neutrino decoupling, since photons and electrons are still tightly coupled, one can safely set $\mu_{\gamma}=\mu_{e}=0$ and $T_{\gamma}=T_{e}$. By applying eqs. (8-9), one obtains Escudero:2020dfa $\displaystyle\frac{dT_{\gamma}}{dt}=$ $\displaystyle-\frac{4H\rho_{\gamma}+3H\left(\rho_{e}+p_{e}\right)+\frac{\delta\rho_{\nu e}}{\delta t}+\frac{\delta\rho_{\nu\mu}}{\delta t}+\frac{\delta\rho_{\nu\tau}}{\delta t}}{\frac{\partial\rho_{\gamma}}{\partial T_{\gamma}}+\frac{\partial\rho_{e}}{\partial T_{\gamma}}},$ (10) $\displaystyle\frac{dT_{\nu_{\alpha}}}{dt}=$ $\displaystyle-HT_{\nu_{\alpha}}+\frac{\delta\rho_{\nu_{\alpha}}}{\delta t}/\frac{\partial\rho_{\nu_{\alpha}}}{\partial T_{\nu_{\alpha}}},\quad\quad\alpha=e,\mu,\tau.$ (11) Note that the above equation for $T_{\gamma}$ is derived assuming finite-temperature corrections are negligible, which is no longer justified given the precision of upcoming measurements of $N_{\rm eff}$. Specifically, future experiments will measure $N_{\rm eff}$ at the percent level, while finite-temperature corrections to $N_{\rm eff}$ are also at the percent level Abazajian:2016yjj ; Abazajian:2019tiv ; Abitbol:2017nao ; Abazajian:2013oma ; DiValentino:2016foa ; Hanany:2019lle ; Sehgal:2019ewc ; Abazajian:2019eic . Therefore, to correctly interpret the results from future experiments and/or to disentangle contributions to $N_{\rm eff}$ of potential new physics from those of the SM, the QED corrections have to be included. The leading-order QED corrections were obtained decades ago Heckler:1994tv ; Fornengo:1997wa , and higher-order corrections up to $\mathcal{O}(e^{4})$ were recently calculated in Ref. Bennett:2019ewm , where the authors found the corrections to $N_{\rm eff}$ to be about $-$0.0009 and $10^{-6}$ at $\mathcal{O}(e^{3})$ and $\mathcal{O}(e^{4})$ respectively. 
Since the corrections at $\mathcal{O}(e^{3})$ and $\mathcal{O}(e^{4})$ are well below the precision targets proposed for future experiments, we neglect them in our setup and keep only the finite-temperature corrections up to $\mathcal{O}(e^{2})$. On the other hand, neutrino oscillations also lead to a correction to $N_{\rm eff}$, which is about 0.0007 as reported in Refs. Mangano:2005cc ; deSalas:2016ztq ; Gariazzo:2019gyi . Note that, as was pointed out in Ref. Bennett:2019ewm , since contributions to $N_{\rm eff}$ from neutrino oscillations and the finite-temperature corrections at $\mathcal{O}(e^{3})$ are comparable, they should both be included for a consistent precision calculation of $N_{\rm eff}$. However, as stated above, due to their smallness, we also neglect contributions from neutrino oscillations in this work. To conclude this subsection, we show the evolution equation for $T_{\gamma}$ with the inclusion of the aforementioned finite-temperature corrections, following the notations of Refs. Bennett:2019ewm ; Escudero:2020dfa : $\displaystyle\frac{dT_{\gamma}}{dt}=-\frac{4H\rho_{\gamma}+3H\left(\rho_{e}+p_{e}\right)+3HT_{\gamma}\frac{dP_{\rm int}}{dT_{\gamma}}+\frac{\delta\rho_{\nu e}}{\delta t}+\frac{\delta\rho_{\nu\mu}}{\delta t}+\frac{\delta\rho_{\nu\tau}}{\delta t}}{\frac{\partial\rho_{\gamma}}{\partial T_{\gamma}}+\frac{\partial\rho_{e}}{\partial T_{\gamma}}+T_{\gamma}\frac{d^{2}P_{\rm int}}{dT_{\gamma}^{2}}},$ (12) where $P_{\rm int}$ and $\rho_{\rm int}\equiv-P_{\rm int}+dP_{\rm int}/d\ln T_{\gamma}$ are the finite-temperature corrections to the electromagnetic pressure and the electromagnetic energy density respectively, whose analytical expressions can be found in Ref. Bennett:2019ewm . ### 3.2 Brief review of the collision term integrals From eqs. (12) and (11), one can then solve for $T_{\gamma}(t)$ and $T_{\nu}(t)$, and thus obtain $N_{\rm eff}$ at the time of neutrino decoupling. 
From eqs. (6-7), the remaining task in solving for $T_{\gamma}(t)$ and $T_{\nu}(t)$ is to evaluate the phase space integrals of the collision terms, which are in general very challenging and admit no analytical expressions. This in turn slows down the numerical calculation of $N_{\rm eff}$, especially in the presence of new physics. However, as pointed out in Ref. Escudero:2020dfa , analytical results for those collision term integrals exist in the Maxwell-Boltzmann limit; as a result, the numerical calculation of $N_{\rm eff}$ can be sped up significantly. For specific SM processes, analytical results for the collision term integrals were presented in Refs. Escudero:2018mvt ; Escudero:2020dfa , which, however, cannot be directly generalized to processes involving new physics. In light of this, and since the analytical forms of the collision term integrals are essential to speed up the numerical calculation of $N_{\rm eff}$, we present a complete, generic, and analytical dictionary for the collision term integrals in section 4, and describe the method used to obtain them in this subsection. To start, we consider the collision term integrals for $2\to 2$ processes, as these are the only interaction types relevant for the calculation of $N_{\rm eff}$ in the SM and with the NSI operators considered in this work. 666For decay or inverse decay, the collision term integral, defined in eq. (15), is a nine-fold integral that can be reduced to a two-fold one. Collision term integrals have been widely discussed in the literature; see, for example, Refs. Hannestad:1995rs ; Dolgov:1997mb ; Dolgov:1998sf ; Birrell:2014uka ; Oldengott:2017fhy ; Oldengott:2014qra ; Grohs:2015tfy ; Bennett:2019ewm ; Yunis:2020woq ; Kreisch:2019yzn ; Mangano:2005cc ; deSalas:2016ztq ; Gariazzo:2019gyi ; Esposito:2000hi ; Mangano:2001iu ; Froustey:2019owm . 
Upon leaving out the irrelevant factor $g$, for a generic process $1+2\to 3+4$, one can write the collision term integrals on the right-hand side of eqs. (6-7) generically as $\displaystyle C^{(j)}\equiv\int E^{j}_{1}\frac{d^{3}p_{1}}{(2\pi)^{3}}\mathcal{C}[f_{1}]\quad\Leftrightarrow\quad\left\{\begin{array}{cl}j=0&\text{, for number density}\\ j=1&\text{, for energy density}\end{array}\right.,$ (15) with $p_{i}$ and $E_{i}$ the four-momentum and the energy of the $i$-th particle. To simplify the collision term integral, we follow Appendix D of Ref. Fradette:2018hhl and quote the result here:777We assume CP conservation, which allows the factorization of $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ and is a well-justified approximation for our purpose here. $\displaystyle C^{(j)}=$ $\displaystyle\frac{1}{2(2\pi)^{6}}\int E_{1}^{j}dE_{1}dE_{2}dE_{3}\cdot\left(|\vec{p}_{1}|\,|\vec{p}_{2}|\,|\vec{p}_{3}|\right)\cdot\Theta\left(Q+|\vec{p}_{1}|^{2}+|\vec{p}_{2}|^{2}+|\vec{p}_{3}|^{2}+2\gamma\right)$ $\displaystyle\quad\quad\times\left(\int_{\max\left(-1,\cos\theta_{-}\right)}^{\min\left(1,\cos\theta_{+}\right)}d(\cos\theta)\int_{\cos\alpha_{-}}^{\cos\alpha_{+}}d(\cos\alpha)\frac{\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}}{\sqrt{a\cos^{2}\alpha+b\cos\alpha+c}}\right.$ $\displaystyle\quad\quad\quad\quad\left.\left.\times\left[f_{3}f_{4}(1\pm f_{1})(1\pm f_{2})-f_{1}f_{2}(1\pm f_{3})(1\pm f_{4})\right]\right)\right|_{p_{4}\to p_{1}+p_{2}-p_{3}},$ (16) where $\alpha$ ($\theta$) is the angle between $\vec{p}_{1}$ and $\vec{p}_{2}$ ($\vec{p}_{3}$), and we define $\displaystyle Q\equiv$ $\displaystyle\,m_{1}^{2}+m_{2}^{2}+m_{3}^{2}-m_{4}^{2},$ (17) $\displaystyle\gamma\equiv$ $\displaystyle\,E_{1}E_{2}-E_{1}E_{3}-E_{2}E_{3},$ (18) $\displaystyle\omega\equiv$ $\displaystyle\,Q+2\gamma+2|\vec{p}_{1}||\vec{p}_{3}|\cos\theta,$ (19) $\displaystyle a\equiv$ 
$\displaystyle\,-4|\vec{p}_{2}|^{2}\left(|\vec{p}_{1}|^{2}+|\vec{p}_{3}|^{2}-2|\vec{p}_{1}||\vec{p}_{3}|\cos\theta\right),$ (20) $\displaystyle b\equiv$ $\displaystyle\,4|\vec{p}_{2}|\left(|\vec{p}_{1}|-|\vec{p}_{3}|\cos\theta\right)\omega,$ (21) $\displaystyle c\equiv$ $\displaystyle\,4|\vec{p}_{2}|^{2}|\vec{p}_{3}|^{2}\sin^{2}\theta-\omega^{2}.$ (22) Note that $a\leq 0$ from the above definitions, and the integration regions for $\alpha$ and $\theta$ are restricted to $\displaystyle\cos\alpha_{\pm}=$ $\displaystyle\,\frac{-b\mp\sqrt{b^{2}-4ac}}{2a},$ (23) $\displaystyle\cos\theta_{\pm}=$ $\displaystyle\,-\frac{Q+2|\vec{p}_{2}|^{2}+2\gamma\mp 2|\vec{p}_{2}|\sqrt{Q+|\vec{p}_{1}|^{2}+|\vec{p}_{2}|^{2}+|\vec{p}_{3}|^{2}+2\gamma}}{2|\vec{p}_{1}||\vec{p}_{3}|},$ (24) which follow from requiring the existence of physical solutions to the kinematic constraints. Note also that $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ is in general a function of the $p_{ij}$, defined as $\displaystyle p_{ij}\equiv p_{i}\cdot p_{j},\quad(i,j=1,\dots,4),$ (25) which also depend on the angles $\alpha$ and $\theta$, making the integral in eq. (16) too cumbersome to complete analytically in general. Remarkably, a huge simplification that eventually allows the completion of the integral in eq. (16) can be realized when (1) all the particles involved are massless, i.e., $m_{i}=0$ ($i=1,\dots,4$), leading to $\displaystyle Q=0,\quad{\rm min}(1,\cos\theta_{+})=1=\cos\theta_{+},$ (26) and (2) all the particles obey the Maxwell-Boltzmann distribution, permitting analytical expressions for almost all possible forms of $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$. 
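The kinematic coefficients and integration limits above can be checked numerically. The sketch below (our own function names) evaluates eqs. (17)-(24) in the massless limit ($|\vec{p}_{i}|=E_{i}$, $Q=0$), with $\theta$ the angle between $\vec{p}_{1}$ and $\vec{p}_{3}$, and verifies that $a\leq 0$, that the discriminant is non-negative, and that the roots $\cos\alpha_{\pm}$ lie in the physical range:

```python
import math

def kinematic_coefficients(E1, E2, E3, cos_theta):
    """Coefficients a, b, c of the quadratic in cos(alpha) under the square
    root in eq. (16), for massless particles (|p_i| = E_i, Q = 0)."""
    p1, p2, p3 = E1, E2, E3
    gamma = E1 * E2 - E1 * E3 - E2 * E3            # eq. (18)
    omega = 2 * gamma + 2 * p1 * p3 * cos_theta    # eq. (19) with Q = 0
    a = -4 * p2**2 * (p1**2 + p3**2 - 2 * p1 * p3 * cos_theta)   # eq. (20)
    b = 4 * p2 * (p1 - p3 * cos_theta) * omega                   # eq. (21)
    c = 4 * p2**2 * p3**2 * (1 - cos_theta**2) - omega**2        # eq. (22)
    return a, b, c

def cos_alpha_limits(a, b, c):
    """Roots of a cos^2(alpha) + b cos(alpha) + c = 0, eq. (23)."""
    disc = b * b - 4 * a * c
    return ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a))

# Symmetric point E1 = E2 = E3 = 1 (so E4 = 1), cos(theta) = 0:
a, b, c = kinematic_coefficients(1.0, 1.0, 1.0, 0.0)
lo, hi = sorted(cos_alpha_limits(a, b, c))
assert a <= 0 and b * b - 4 * a * c >= 0
assert -1.0 <= lo <= hi <= 1.0
print(a, b, c)  # a, b, c = -8, -8, 0; cos(alpha) ranges over [-1, 0]
```
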
The only exception is when there exists a light mediator in the $t$- and/or the $u$-channels, where it is well known that an IR divergence emerges when all external particles become massless, as in Compton scattering for example.888Note, however, that even in the case with a light mediator in the $s$ channel, there is no IR divergence and analytical results for the collision term integrals can always be obtained. However, this IR divergence has to cancel out in any sufficiently inclusive quantity, as guaranteed by the Kinoshita-Lee-Nauenberg (KLN) theorem Kinoshita:1962ur ; Lee:1964is . Recently, it was also shown in Ref. Frye:2018xjj that IR finiteness does not necessarily require summing over both the initial and the final states as stated by the KLN theorem; rather, one only needs to sum over all possible final (initial) states for a given fixed initial (final) state, as long as forward scattering is also included. In this work, since we are interested in model-independent constraints on new physics from $N_{\rm eff}$ within the EFT framework, we will mainly focus on scenarios with heavy mediators such that the IR divergence issue mentioned above never shows up. The only exception is the dimension-5 neutrino magnetic dipole operator, which we will discuss in section 4. Furthermore, in order to obtain analytical results, as discussed in the last paragraph, we assume that all particles (1) are massless and (2) obey the Maxwell-Boltzmann distribution, but only when calculating the collision term integrals from eq. (16). We then present in section 4 a complete dictionary for all possible $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ containing products of up to three $p_{ij}$'s, which is also provided in auxiliary Mathematica notebook files. Corrections from non-vanishing masses and the Fermi-Dirac/Bose-Einstein distributions are also discussed in section 4. 
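The size of the Maxwell-Boltzmann approximation can be gauged with a short numerical check: for a massless fermion at zero chemical potential, the exact Fermi-Dirac number and energy densities differ from their Maxwell-Boltzmann counterparts by factors of $3\zeta(3)/4\approx 0.90$ and $7\pi^{4}/720\approx 0.95$ respectively. A stdlib-only sketch (the trapezoidal helper is ours; the integrands decay exponentially, so truncating at $p/T=60$ is harmless):

```python
import math

def integral(f, a=0.0, b=60.0, n=100000):
    """Composite trapezoidal rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Massless fermion with g = 1 at T = 1, mu = 0 (densities up to a common
# 1/(2 pi^2) prefactor, which cancels in the ratios below).
n_fd = integral(lambda p: p**2 / (math.exp(p) + 1))    # Fermi-Dirac number density
n_mb = integral(lambda p: p**2 * math.exp(-p))         # Maxwell-Boltzmann counterpart
rho_fd = integral(lambda p: p**3 / (math.exp(p) + 1))  # Fermi-Dirac energy density
rho_mb = integral(lambda p: p**3 * math.exp(-p))

print(n_fd / n_mb)      # ~0.9015  (= 3 zeta(3) / 4)
print(rho_fd / rho_mb)  # ~0.9470  (= 7 pi^4 / 720)
```
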
## 4 EFT operators and the collision term integrals As no new particles have been observed since the discovery of the Higgs boson in 2012 Aad:2012tfa ; Chatrchyan:2012xdj , EFTs have become the natural framework for the study of heavy new physics. In this work, we are interested in corrections to $N_{\rm eff}$ from higher-dimensional operators in the early Universe. The active degrees of freedom at that time are neutrinos, photons, electrons and positrons. Since the neutrinos decouple from the rest of the plasma at around 2 MeV, for the EFTs to be valid, the potential new physics could be as light as $\sim\mathcal{O}(100\,\rm MeV)$, i.e., around the muon mass. Note that the lower bound on the new physics scale would also be constrained by, for example, Big Bang Nucleosynthesis. Given that future experiments like CMB-S4 could constrain $\Delta N_{\rm eff}<0.06$ at 95% CL Abazajian:2016yjj ; Abazajian:2019tiv ; Abitbol:2017nao ; Abazajian:2019eic , with $\Delta N_{\rm eff}$ the correction to the SM prediction of $N_{\rm eff}$, one naturally expects that the EFT operators could also be constrained by precision measurements of $N_{\rm eff}$. In this section, we first enumerate the relevant EFT operators up to dimension-7 in section 4.1. The resulting invariant amplitudes from these operators turn out to be functions of the $p_{ij}$ defined in eq. (25), the model parameters, and the Wilson coefficients. Depending on how $\langle\mathcal{M}^{2}\rangle$ depends on the $p_{ij}$, the collision term integral in eq. (16) needs to be calculated case by case. Using energy-momentum conservation, the redundancy in the computation of the collision term integrals can be removed by reducing the amplitudes to a limited set of basis elements, as presented in section 4.2. Starting from these bases, we then present a complete dictionary of the collision term integrals in section 4.4. 
### 4.1 List of relevant EFT operators

| Dimension | Operator | Wilson coefficient |
|---|---|---|
| 5 | $\mathcal{O}_{1}^{(5)}=\frac{e}{8\pi^{2}}\left(\bar{\nu}_{\beta}\sigma^{\mu\nu}P_{L}\nu_{\alpha}\right)F_{\mu\nu}$ | $C_{1}^{(5)}$ |
| 6 | $\mathcal{O}_{1,f}^{(6)}=\left(\bar{\nu}_{\beta}\gamma_{\mu}P_{L}\nu_{\alpha}\right)\left(\bar{f}\gamma^{\mu}f\right)$ | $C_{1,f}^{(6)}$ |
| 6 | $\mathcal{O}_{2,f}^{(6)}=\left(\bar{\nu}_{\beta}\gamma_{\mu}P_{L}\nu_{\alpha}\right)\left(\bar{f}\gamma^{\mu}\gamma_{5}f\right)$ | $C_{2,f}^{(6)}$ |
| 6 | $\mathcal{O}_{3}^{(6)}=\left(\overline{\nu^{c}}_{\beta}P_{L}\nu_{\alpha}\right)\left(\overline{\nu^{c}}_{\beta^{\prime}}P_{L}\nu_{\alpha^{\prime}}\right)^{\clubsuit}$ | $C_{3}^{(6)}$ |
| 6 | $\mathcal{O}_{4}^{(6)}=\left(\bar{\nu}_{\beta}\gamma_{\mu}P_{L}\nu_{\alpha}\right)\left(\bar{\nu}_{\beta^{\prime}}\gamma^{\mu}P_{L}\nu_{\alpha^{\prime}}\right)^{\clubsuit}$ | $C_{4}^{(6)}$ |
| 6 | $\mathcal{O}_{5}^{(6)}=\left(\overline{\nu^{c}}_{\beta}\sigma^{\mu\nu}P_{L}\nu_{\alpha}\right)\left(\overline{\nu^{c}}_{\beta^{\prime}}\sigma_{\mu\nu}P_{L}\nu_{\alpha^{\prime}}\right)^{\clubsuit}$ | $C_{5}^{(6)}$ |
| 7 | $\mathcal{O}_{1}^{(7)}=\frac{\alpha}{12\pi}\left(\bar{\nu}_{\beta}P_{L}\nu_{\alpha}\right)F^{\mu\nu}F_{\mu\nu}$ | $C_{1}^{(7)}$ |
| 7 | $\mathcal{O}_{2}^{(7)}=\frac{\alpha}{8\pi}\left(\bar{\nu}_{\beta}P_{L}\nu_{\alpha}\right)F^{\mu\nu}\widetilde{F}_{\mu\nu}$ | $C_{2}^{(7)}$ |
| 7 | $\mathcal{O}_{5,f}^{(7)}=m_{f}\left(\bar{\nu}_{\beta}P_{L}\nu_{\alpha}\right)(\bar{f}f)$ | $C_{5,f}^{(7)}$ |
| 7 | $\mathcal{O}_{6,f}^{(7)}=m_{f}\left(\bar{\nu}_{\beta}P_{L}\nu_{\alpha}\right)\left(\bar{f}i\gamma_{5}f\right)$ | $C_{6,f}^{(7)}$ |
| 7 | $\mathcal{O}_{7,f}^{(7)}=m_{f}\left(\bar{\nu}_{\beta}\sigma^{\mu\nu}P_{L}\nu_{\alpha}\right)\left(\bar{f}\sigma_{\mu\nu}f\right)$ | $C_{7,f}^{(7)}$ |
| 7 | $\mathcal{O}_{8,f}^{(7)}=\left(\bar{\nu}_{\beta}i\stackrel{{\scriptstyle\leftrightarrow}}{{\partial}}_{\mu}P_{L}\nu_{\alpha}\right)\left(\bar{f}\gamma^{\mu}f\right)$ | $C_{8,f}^{(7)}$ |
| 7 | $\mathcal{O}_{9,f}^{(7)}=\left(\bar{\nu}_{\beta}i\stackrel{{\scriptstyle\leftrightarrow}}{{\partial}}_{\mu}P_{L}\nu_{\alpha}\right)\left(\bar{f}\gamma^{\mu}\gamma_{5}f\right)$ | $C_{9,f}^{(7)}$ |
| 7 | $\mathcal{O}_{10,f}^{(7)}=\partial_{\mu}\left(\bar{\nu}_{\beta}\sigma^{\mu\nu}P_{L}\nu_{\alpha}\right)\left(\bar{f}\gamma_{\nu}f\right)$ | $C_{10,f}^{(7)}$ |
| 7 | $\mathcal{O}_{11,f}^{(7)}=\partial_{\mu}\left(\bar{\nu}_{\beta}\sigma^{\mu\nu}P_{L}\nu_{\alpha}\right)\left(\bar{f}\gamma_{\nu}\gamma_{5}f\right)$ | $C_{11,f}^{(7)}$ |

Table 1: Effective operators relevant for $N_{\rm eff}$ up to dimension-7, with $\alpha,\beta,\alpha^{\prime},\beta^{\prime}=e,\mu,\tau$ the neutrino flavor indices and $f=e$. Operators marked with a $\clubsuit$ are the extra operators we consider in this work, and the superscript "$c$" denotes charge conjugation. The last column shows our convention for the Wilson coefficients. We start from the SMEFT, obtained by integrating out the heavy new degrees of freedom added to the SM, whose Lagrangian can be expressed as the SM Lagrangian plus a tower of higher-dimensional operators $\mathcal{O}^{(j)}$: $\displaystyle\mathcal{L}=\mathcal{L}_{\rm SM}+\sum\limits_{j\geq 5}\frac{C_{j}}{\Lambda^{j-4}}\mathcal{O}^{(j)},$ (27) where the $C_{j}$'s are the Wilson coefficients and $\Lambda$ is the characteristic scale of new physics. In this setup, the neutrino masses can be naturally generated through the dimension-5 Weinberg operator Weinberg:1979sa . However, for the rest of this work, we neglect the masses of neutrinos due to their tininess compared with the other scales involved in our calculation. In the early Universe, where the active fields are neutrinos, photons, electrons and positrons, the Universe can be described by the LEFT, in which the top quark, the $\rm SU(2)$ gauge bosons, and the Higgs boson have also been integrated out within the SMEFT, inducing both CC and NC neutrino NSIs. See, for example, Ref. Wise:2014oea for a discussion. 
However, we point out that CC and NC NSIs are not necessarily generated by heavy particles above the weak scale; they can also be generated by lighter particles above the $\mathcal{O}(\rm 100\,MeV)$ scale. To illustrate this point, we briefly discuss a toy $Z^{\prime}$ model and a toy pseudo-scalar model here: * • The toy $\rm U(1)^{\prime}$ model we consider, setting aside constraints such as anomaly cancellation and collider and cosmological bounds, is a $Z^{\prime}$ model that can be written as $\displaystyle\mathcal{L}_{Z^{\prime}}=\mathcal{L}_{\rm SM}-\frac{1}{4}Z^{\prime}_{\mu\nu}Z^{\prime\mu\nu}+\frac{1}{2}m_{Z^{\prime}}^{2}Z^{\prime}_{\mu}Z^{\prime\mu}-g_{Z^{\prime}}Z^{\prime}_{\mu}\left(\bar{L}\gamma^{\mu}L+\bar{\ell}_{R}\gamma^{\mu}\ell_{R}\right),$ (28) where $L$ and $\ell_{R}$ are the left-handed lepton doublet and the right-handed lepton singlet under $\rm SU(2)_{L}$ respectively, and $Z^{\prime}$ is the new vector boson of the $\rm U(1)^{\prime}$ group. 
For our purpose, the $Z^{\prime}$ need not be above the weak scale; as long as it is above $\sim\mathcal{O}(\rm 100\,MeV)$, i.e., around the muon mass, it can be integrated out: $\displaystyle\mathcal{L}_{Z^{\prime}}\supset$ $\displaystyle\,\frac{1}{2}m_{Z^{\prime}}^{2}\left(\frac{g_{Z^{\prime}}}{p^{2}-m_{Z^{\prime}}^{2}}\right)^{2}\left(\bar{L}\gamma_{\mu}L+\bar{\ell}_{R}\gamma_{\mu}\ell_{R}\right)\left(\bar{L}\gamma^{\mu}L+\bar{\ell}_{R}\gamma^{\mu}\ell_{R}\right)$ $\displaystyle\,-\frac{g_{Z^{\prime}}^{2}}{p^{2}-m_{Z^{\prime}}^{2}}\left(\bar{L}\gamma_{\mu}L+\bar{\ell}_{R}\gamma_{\mu}\ell_{R}\right)\left(\bar{L}\gamma^{\mu}L+\bar{\ell}_{R}\gamma^{\mu}\ell_{R}\right)$ $\displaystyle\,\xrightarrow[]{p^{2}\ll m_{Z^{\prime}}^{2}}\frac{g_{Z^{\prime}}^{2}}{2m_{Z^{\prime}}^{2}}\left(\bar{L}\gamma_{\mu}L+\bar{\ell}_{R}\gamma_{\mu}\ell_{R}\right)\left(\bar{L}\gamma^{\mu}L+\bar{\ell}_{R}\gamma^{\mu}\ell_{R}\right)+\mathcal{O}\left(\frac{1}{m_{Z^{\prime}}^{4}}\right),$ (29) leading to the NC neutrino-electron and electron-electron contact interactions seen above when $m_{Z^{\prime}}^{2}$ is larger than the momentum transfer $p^{2}$. During neutrino decoupling, $p^{2}$ is of $\mathcal{O}(\rm 10\,MeV)^{2}$; thus, as long as $m_{Z^{\prime}}$ is above $\mathcal{O}(\rm 100\,MeV)$, the EFT obtained by integrating out the $Z^{\prime}$ serves as a good framework for studying this new physics. * • The toy pseudo-scalar model we consider, setting aside any theoretical and/or experimental constraints, can be expressed as $\displaystyle\mathcal{L}_{\rm p.s.}=\mathcal{L}_{\rm SM}+\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\frac{1}{2}m_{\phi}^{2}\phi^{2}-ig_{\phi}^{\alpha\beta}\phi\bar{\nu}_{\alpha}\gamma_{5}\nu_{\beta},\quad\text{ with }\alpha,\beta=e,\mu,\tau,$ (30) where $\phi$ is the pseudo-scalar with mass $m_{\phi}$. 
Similarly, as long as $m_{\phi}$ is above $\sim\mathcal{O}(\rm 100\,MeV)$, one can integrate out $\phi$ and obtain contact neutrino self-interaction operators. The CC NSIs have recently been studied in Refs. Terol-Calvo:2019vck ; Du:2020dwr . In Ref. Du:2020dwr , the authors took the running and matching effects at different EFT scales into account, and the resulting constraint on the UV scale $\Lambda$ was found to be as large as about 20 TeV from neutrino oscillation data. The NC operators are the relevant ones for our study in this work; some of them have been studied previously in Refs. Farzan:2018gtr ; Billard:2018jnl ; AristizabalSierra:2018eqm ; Kosmas:2017tsq ; Dent:2017mpr ; Liao:2017uzy ; Dent:2016wcr ; Lindner:2016wff , and a comprehensive study was recently presented in Ref. Altmannshofer:2018xyo . Note that, since the authors of Ref. Altmannshofer:2018xyo were interested in constraints on these NSIs from neutrino experiments, they did not consider any dimension-6 neutrino self-interaction operators, as neutrinos interact only feebly with ordinary matter. However, since these operators directly modify the neutrino number and energy densities through neutrino self-interactions, and are thus closely related to $N_{\rm eff}$, we include them in this work and study the constraints on them from precision measurements of $N_{\rm eff}$.999Neutrino self-interactions were also proposed in Ref. Kreisch:2019yzn to alleviate the Hubble tension between measurements from Planck Aghanim:2018eyx and local measurements Riess:2019cxk . For the most recent work on the Hubble tension from neutrino self-interactions, see Refs. Brinckmann:2020bcn ; Choudhury:2020tka ; Das:2020xke ; Huang:2021dba . We summarize all the EFT operators up to dimension-7 in table 1, following the notations of Ref. Altmannshofer:2018xyo . 
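The heavy-mediator expansion underlying eq. (29) is easy to check numerically. With a momentum transfer $p^{2}\sim(10\,{\rm MeV})^{2}$ typical of neutrino decoupling and $m_{Z^{\prime}}=500\,{\rm MeV}$ (hypothetical numbers chosen for illustration), the relative error of the contact-interaction approximation is exactly $p^{2}/m_{Z^{\prime}}^{2}$ at leading order:

```python
# Tree-level Z' exchange vs. its contact-interaction (EFT) limit.
# Hypothetical illustration values: coupling g, mediator mass m (MeV),
# momentum transfer p2 = (10 MeV)^2.
g, m, p2 = 0.1, 500.0, 10.0**2

exact = g**2 / (m**2 - p2)     # full propagator factor
contact = g**2 / m**2          # leading contact term, ~ Wilson coeff. / Lambda^2
next_order = g**2 * p2 / m**4  # first correction in the p^2/m^2 expansion

rel_err_lo = abs(exact - contact) / exact                # = p2/m^2 = 4e-4
rel_err_nlo = abs(exact - contact - next_order) / exact  # = (p2/m^2)^2 = 1.6e-7
print(rel_err_lo, rel_err_nlo)
```

This is why, for mediators above $\sim\mathcal{O}(\rm 100\,MeV)$, the EFT description is accurate to well below the precision relevant for $N_{\rm eff}$.
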
One immediate observation from table 1 is that the dimension-5 operator in the first row, i.e., the neutrino magnetic dipole operator, corresponds to the light-mediator scenario we discussed at the end of section 3.2. When the intermediate photon shows up in the $s$-channel, there is no IR divergence, and the collision term integral can be calculated analytically. However, since the intermediate photon can also appear in the $t$- and/or $u$-channels, for example in the $\nu_{\alpha}\nu_{\beta}\to\gamma^{*}\to\nu_{\alpha}\nu_{\alpha}$ process,101010The collision term integrals for the $\nu_{\alpha}\nu_{\alpha}\to\gamma^{*}\to\nu_{\alpha}\nu_{\alpha}$ process simply vanish, since the initial and the final states have exactly the same temperature; thus the number and energy densities of $\nu_{\alpha}$ remain the same before and after the interaction. the collision term integrals would exhibit the IR divergence discussed earlier. In principle, one could remove this divergence by applying the KLN theorem or by following the procedure discussed in Ref. Frye:2018xjj for any inclusive observable. However, since the light-mediator scenario resides in a different regime from all the other operators listed in table 1, we leave it for future work. We also point out that this operator is very stringently constrained through the magnetic moment of $\nu_{e}$ using Borexino Phase-II solar neutrino data Altmannshofer:2018xyo ; Borexino:2017fbd , which justifies neglecting the $\mathcal{O}_{1}^{(5)}$ operator in the calculation of $N_{\rm eff}$. We will discuss this further in section 5. Now, including corrections from the dimension-6 and dimension-7 operators in table 1, the invariant amplitude $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ in eq. 
(16) can generically be written as: $\displaystyle\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=\langle\mathcal{M}_{\rm SM}^{2}+\mathcal{M}_{\rm EFT}^{2}+2\,{\rm Re}\,\mathcal{M}_{\rm SM}\cdot\mathcal{M}_{\rm EFT}^{\dagger}\rangle_{1+2\to 3+4},$ (31) where $\mathcal{M}_{\rm SM}$ and $\mathcal{M}_{\rm EFT}$ are the amplitudes from $\mathcal{L}_{\rm SM}$ and the EFT operators in table 1 respectively. Clearly, when the potential new physics scale $\Lambda$ is at or above $\mathcal{O}(\Lambda_{W})$, with $\Lambda_{W}$ the weak scale, the interference term in eq. (31) is of the same order as the SM contribution, and the $\langle\mathcal{M}_{\rm EFT}^{2}\rangle$ term can then be safely neglected. However, we point out that in the case where $\Lambda\ll\Lambda_{W}$, though contributions from the EFT operators dominate, one cannot simply discard the $\langle\mathcal{M}_{\rm SM}^{2}\rangle$ term in eq. (31), since for some of the operators in table 1, for example the $\mathcal{O}_{3,4,5}^{(6)}$ operators, the $\langle\mathcal{M}_{\rm SM}^{2}\rangle$ term is the only part that determines how $T_{\gamma}$ and $T_{\nu_{\alpha}}$ evolve with time, as we will see below. In light of this, we always keep all three terms in eq. (31) during our calculation, while using the large-$\Lambda$ limit to cross-check our results. By plugging $\mathcal{M}_{\rm SM}$ and $\mathcal{M}_{\rm EFT}$ into eq. (31), one obtains $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ as a function of the $p_{ij}$, the SM parameters, the scale of new physics $\Lambda$, and the Wilson coefficients $C_{j}$ in the last column of table 1. One can then calculate the collision term integrals by performing the integral in eq. (16). However, energy-momentum conservation introduces redundancy among the $p_{ij}$, and hence in the calculation of the collision term integrals. This redundancy can be removed by first choosing an independent basis, which we discuss in the next subsection. 
### 4.2 Choices of the independent bases

To start, we note that for both the SM and the EFT contributions, the relevant processes are all $2\to 2$ scattering or annihilation processes. Denoting these processes generically as $1+2\to 3+4$, with momentum $p_{i}$ for the $i$-th particle, the invariant amplitude $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$, defined in eq. (31), can be expressed as a tower of the momentum scalar products $p_{ij}$ defined in eq. (25): $\displaystyle\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=\sum\limits_{i,\cdots,t=1}^{4}\sum\limits_{k=0}^{\infty}c_{ij\cdots mn\cdots st}(\\{m\\},\\{g\\},\\{C\\},\Lambda,\\{T\\},\\{\mu\\})\cdot\underbrace{p_{ij}\cdots p_{mn}\cdots p_{st}}_{\text{k = number of }p_{ij}\text{'s}},$ (32) where the coefficients $c_{ij\cdots mn\cdots st}$ are generically functions of the mass set $\\{m\\}$ and the coupling set $\\{g\\}$ of the SM, the new physics scale $\Lambda$ and the corresponding Wilson coefficient set $\\{C\\}$ in the last column of table 1, the temperatures $T_{\gamma,\nu_{\alpha}}$ and the chemical potentials $\mu_{\gamma,\nu_{\alpha}}$.

$k$ | Bases | Number of bases
---|---|---
0 | $1$ | 1
1 | $p_{12},\quad p_{13},\quad p_{14}$ | 3
2 | $p_{12}^{2},\quad p_{12}\cdot p_{13},\quad p_{12}\cdot p_{14},\quad p_{13}^{2},\quad p_{13}\cdot p_{14},\quad p_{14}^{2}$ | 6
3 | $p_{12}^{3},\quad p_{12}^{2}\cdot p_{13},\quad p_{12}^{2}\cdot p_{14},\quad p_{12}\cdot p_{13}^{2},\quad p_{12}\cdot p_{13}\cdot p_{14},\quad p_{12}\cdot p_{14}^{2},\quad p_{13}^{3},\quad p_{13}^{2}\cdot p_{14},\quad p_{13}\cdot p_{14}^{2},\quad p_{14}^{3}$ | 10

Table 2: The bases we choose for different $k$'s in the calculation of the collision term integrals. See the main text for more discussion.

In this work, since we are only interested in contributions from the SM and the EFT operators up to dimension-7 as listed in table 1, it turns out that we only need to consider $k$ up to $k=3$ in eq.
(32), which then gives, leaving out the arguments of the $c_{ij\cdots mn\cdots st}$'s, $\displaystyle\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=c_{0}+\sum\limits_{\begin{subarray}{c}i,j=1\\\ i\neq j\end{subarray}}^{4}c_{ij}\cdot p_{ij}+\sum\limits_{\begin{subarray}{c}i,\dots,n=1\\\ i\neq j\\\ m\neq n\end{subarray}}^{4}c_{ijmn}\cdot p_{ij}\cdot p_{mn}+\sum\limits_{\begin{subarray}{c}i,\cdots,t=1\\\ i\neq j\\\ m\neq n\\\ s\neq t\end{subarray}}^{4}c_{ijmnst}\cdot{p_{ij}\cdot p_{mn}\cdot p_{st}}.$ (33) Note that for $k=1$, and similarly for the $k=2,3$ cases, contributions with $i=j$ have been absorbed into the $c_{0}$ term using the on-shell conditions $p_{ii}=m_{i}^{2}$. On the other hand, due to energy-momentum conservation, the terms in eq. (33) are not all independent, resulting in redundancy when one calculates the collision term integrals from eqs. (33) and (16). However, this redundancy can be efficiently removed by first choosing an independent basis in terms of the $p_{ij}$, and then rewriting $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ as a linear combination of these basis elements. For each $k$, the independent bases we choose are presented in table 2. One can then readily express any momentum tower as a linear combination of these bases. For example, $\displaystyle p_{12}\cdot p_{23}\cdot p_{24}=$ $\displaystyle\,\frac{1}{4}\left(\left(m_{1}^{2}-m_{2}^{2}\right)^{2}-\left(m_{3}^{2}-m_{4}^{2}\right)^{2}\right)p_{12}+\frac{1}{2}\left(m_{2}^{2}+m_{3}^{2}-m_{1}^{2}-m_{4}^{2}\right)p_{12}\cdot p_{13}$ $\displaystyle+\frac{1}{2}\left(m_{2}^{2}+m_{4}^{2}-m_{1}^{2}-m_{3}^{2}\right)p_{12}\cdot p_{14}+p_{12}\cdot p_{13}\cdot p_{14}$ (34) $\displaystyle\to$ $\displaystyle\,p_{12}\cdot p_{13}\cdot p_{14}\quad\quad{\text{in the massless limit.}}$ (35) At this stage, the collision term integral in eq.
(16) boils down to collision term integrals with $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ replaced by the independent bases shown in table 2. In particular, when all the masses involved vanish, $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ generically simplifies significantly, as seen from the example above. Therefore, as also discussed at the end of section 3.2, to obtain analytical results for the collision term integrals, we assume $m_{i}=0\,(i=1,\dots,4)$ and that all particles obey the Maxwell-Boltzmann distribution. (Since the electron mass $m_{e}$ is the only mass that matters here, this assumption simply means that, when calculating the collision term integrals in eq. (16), we take the ultra-relativistic limit for electrons in the early Universe.) We then present the complete generic and analytical dictionary of the collision term integrals in section 4.4. Corrections from the finite electron mass $m_{e}$, spin statistics and neutrino chemical potentials are discussed in section 4.3.

### 4.3 Corrections from $m_{e}$, spin statistics and chemical potentials

As already noticed in Ref. Escudero:2020dfa , corrections from the finite electron mass $m_{e}$ and from spin statistics have to be included to reproduce the $N_{\rm eff}$ obtained from the density matrix formalism. Furthermore, as one can see from Table 1 of Ref. Escudero:2020dfa , these corrections are of the same order as the finite-temperature corrections discussed at the beginning of this section, so for consistency they must be included as well. In Ref. Escudero:2020dfa , finite-$m_{e}$ corrections are obtained from the ratios of the collision term integrals in eq. (16) with $m_{e}$ switched on and off. Similarly, corrections from spin statistics are computed from the ratios of the collision term integrals with Fermi-Dirac/Bose-Einstein and Maxwell-Boltzmann distributions, respectively. For a detailed discussion, see Refs. Escudero:2020dfa ; Luo:2020sho .
Though our method for the collision term integrals is different, we reproduce the numbers in Table 6 of Ref. Escudero:2020dfa and/or Table III of Ref. Luo:2020sho . These corrections are then included in the Boltzmann equations used to solve for $N_{\rm eff}$. The results are presented and discussed in more detail in section 5. We also note that neutrino chemical potentials are highly suppressed due to the rapid $\bar{\nu}\nu\leftrightarrow e^{+}e^{-}\leftrightarrow\gamma\gamma$ conversion, and that the electron chemical potential is negligibly small compared to the plasma temperature in the early Universe. In our setup, in order to be generic, we keep the neutrino chemical potentials throughout our analytical calculations, but stress that they have no visible impact on current or planned precision measurements of $N_{\rm eff}$. For our numerical calculations, we choose $\mu_{\nu}=\mu_{\bar{\nu}}$ and $|\mu_{\nu}/T_{\gamma}|=10^{-4}$ at $T_{\gamma}=T_{\nu}=10\rm\,MeV$ as our initial conditions, and verify that this setup is numerically equivalent to vanishing neutrino chemical potentials, as expected.

### 4.4 A complete generic and analytical dictionary of the collision term integrals

In the last subsection, we listed in table 2 the independent bases in terms of which the invariant amplitudes $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ can be expressed, and concluded that the redundancy of the collision term integrals due to energy-momentum conservation can be removed by working with these bases directly. In this subsection, we provide the complete analytical dictionary of the collision term integrals for particle “1” and up to $k=3$, with $k$ the number of $p_{ij}$'s in the invariant amplitude. We note that a subset of this complete dictionary was presented in the appendices of Refs. Escudero:2018mvt ; Luo:2020sho , which agrees with our results as long as one specifies $T_{i}$ and $\mu_{i}$ accordingly.
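Before listing the dictionary, we note that the massless-limit reduction used throughout (cf. eqs. (34)-(35)) can be cross-checked numerically by generating random massless $2\to 2$ kinematics and comparing both sides directly. The sketch below is purely illustrative, with helper names of our own choosing; it relies on the fact that, for massless momenta with $p_{1}+p_{2}=p_{3}+p_{4}$, one has $p_{13}=p_{24}$ and $p_{14}=p_{23}$:

```python
import math
import random

def mdot(a, b):
    """Minkowski scalar product p_i . p_j with the (+,-,-,-) metric."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def random_massless_2to2(E, rng):
    """Random massless 1+2 -> 3+4 kinematics in the center-of-mass frame."""
    ct = rng.uniform(-1.0, 1.0)          # cos(theta) of the outgoing pair
    st = math.sqrt(1.0 - ct*ct)
    phi = rng.uniform(0.0, 2.0*math.pi)
    p1 = (E, 0.0, 0.0,  E)
    p2 = (E, 0.0, 0.0, -E)
    p3 = (E,  E*st*math.cos(phi),  E*st*math.sin(phi),  E*ct)
    p4 = (E, -E*st*math.cos(phi), -E*st*math.sin(phi), -E*ct)
    return p1, p2, p3, p4

def check_massless_reduction(trials=1000, seed=42):
    """Check p12*p23*p24 == p12*p13*p14 for massless 2 -> 2, cf. eq. (35)."""
    rng = random.Random(seed)
    for _ in range(trials):
        p1, p2, p3, p4 = random_massless_2to2(rng.uniform(0.1, 2.0), rng)
        p12, p13, p14 = mdot(p1, p2), mdot(p1, p3), mdot(p1, p4)
        p23, p24 = mdot(p2, p3), mdot(p2, p4)
        if abs(p12*p23*p24 - p12*p13*p14) > 1e-9:
            return False
    return True
```

The same strategy, random phase-space points subject to the on-shell and conservation constraints, can be used to spot-check any of the replacement rules behind table 2 before employing them inside the collision term integrals.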
For the collision term integrals, we follow the procedure briefly summarized in section 3.2 and stick to the notation of eq. (16), where $j=0$ denotes the collision term integral for the number density and $j=1$ that for the energy density. The dependence of $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ on $p_{ij}$ is indicated by the argument of $C^{(j)}$. We stress that this argument only records which $p_{ij}$ monomial appears in $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$; $C^{(j)}$ itself does not depend on $p_{ij}$, but only on the model parameters, the new physics scale, the Wilson coefficients, the temperatures $T_{\gamma,\nu_{\alpha}}$ and the chemical potentials $\mu_{\nu_{\alpha}}$. For example, $C^{(0)}(p_{12})$ denotes the collision term integral for the number density with $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=p_{12}$ in eq. (16); here and in the following, we leave out any overall factors in $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ that are independent of $p_{ij}$.

#### 4.4.1 $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=1$

$\displaystyle C^{(j)}(1)=\left\\{\begin{array}[]{ll}\frac{1}{128\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{2}T_{2}^{2}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{2}T_{4}^{2}\right],&j=0\\\ &\\\ \frac{1}{128\pi^{5}}\left[-2{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{3}T_{2}^{2}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{2}T_{4}^{2}(T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (39)

From eq. (39), one notes that in the $j=0$ case the collision term integral, corresponding to the number density, vanishes when $T_{1}=T_{3}$, $T_{2}=T_{4}$ and $\mu_{1}=\mu_{3}$, $\mu_{2}=\mu_{4}$. This is expected, since such a configuration corresponds to a scattering process, for which the number density of each species remains the same before and after the interaction.
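This vanishing is easy to verify numerically. The following sketch (our own helper names, not part of any public code) implements both branches of eq. (39) under the stated Maxwell-Boltzmann assumptions:

```python
import math

def C_const(j, T, mu):
    """C^(j)(1) of eq. (39): collision term integral for <M^2> = 1.
    j = 0 gives the number-density integral, j = 1 the energy-density one;
    T = (T1, T2, T3, T4) and mu = (mu1, mu2, mu3, mu4)."""
    T1, T2, T3, T4 = T
    e_in  = math.exp(mu[0]/T1 + mu[1]/T2)   # incoming factor e^(mu1/T1 + mu2/T2)
    e_out = math.exp(mu[2]/T3 + mu[3]/T4)   # outgoing factor e^(mu3/T3 + mu4/T4)
    pref = 1.0 / (128.0 * math.pi**5)
    if j == 0:
        return pref * (-e_in*T1**2*T2**2 + e_out*T3**2*T4**2)
    return pref * (-2.0*e_in*T1**3*T2**2 + e_out*T3**2*T4**2*(T3 + T4))

# Scattering configuration: species 3 (4) is identical to species 1 (2),
# so the number-density integral vanishes ...
n_change = C_const(0, (1.0, 2.0, 1.0, 2.0), (0.1, 0.2, 0.1, 0.2))
# ... while energy can still flow between the two species when T1 != T2.
e_change = C_const(1, (1.0, 2.0, 1.0, 2.0), (0.1, 0.2, 0.1, 0.2))
```

With all four temperatures and all four chemical potentials equal, both branches return zero, reflecting the fact that interactions in full thermal equilibrium change neither the number nor the energy densities.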
This conclusion holds generically and is independent of the form of $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$, as one can also see clearly from the results below. On the other hand, if $T_{1}=T_{2}=T_{3}=T_{4}$ and $\mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}$, then all the $C^{(j)}$’s vanish, which is also as expected since particle self interactions do not modify the number and the energy densities as long as thermal equilibrium is maintained. This observation also acts as a cross-check of our analytical results presented in these subsections. #### 4.4.2 $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=p_{ij}$ $\displaystyle C^{(j)}(p_{12})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{1}{32\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{3}T_{2}^{3}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{3}T_{4}^{3}\right],&j=0\\\ &\\\ \frac{1}{64\pi^{5}}\left[-6{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{3}+3{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{3}T_{4}^{3}(T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (43) $\displaystyle C^{(j)}(p_{13})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{1}{64\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{3}T_{2}^{3}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{3}T_{4}^{3}\right],&j=0\\\ &\\\ \frac{1}{64\pi^{5}}\left[-3{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{3}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{3}T_{4}^{3}(T_{3}+2T_{4})\right],&j=1\end{array}\,\,,\right.$ (47) $\displaystyle C^{(j)}(p_{14})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{1}{64\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{3}T_{2}^{3}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{3}T_{4}^{3}\right],&j=0\\\ &\\\ 
\frac{1}{64\pi^{5}}\left[-3{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{3}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{3}T_{4}^{3}(2T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (51) #### 4.4.3 $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=p_{ij}\cdot p_{mn}$ $\displaystyle C^{(j)}(p_{12}^{2})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{3}{8\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}\right],&j=0\\\ &\\\ \frac{1}{4\pi^{5}}\left[-6{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{4}+3{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}(T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (55) $\displaystyle C^{(j)}(p_{12}\cdot p_{13})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{3}{16\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}\right],&j=0\\\ &\\\ \frac{1}{4\pi^{5}}\left[-3{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}(T_{3}+2T_{4})\right],&j=1\end{array}\,\,,\right.$ (59) $\displaystyle C^{(j)}(p_{12}\cdot p_{14})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{3}{16\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}\right],&j=0\\\ &\\\ \frac{1}{4\pi^{5}}\left[-3{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}(2T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (63) $\displaystyle C^{(j)}(p_{13}^{2})=$ 
$\displaystyle\,\left\\{\begin{array}[]{ll}\frac{1}{8\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}\right],&j=0\\\ &\\\ \frac{1}{8\pi^{5}}\left[-4{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}(T_{3}+3T_{4})\right],&j=1\end{array}\,\,,\right.$ (67) $\displaystyle C^{(j)}(p_{13}\cdot p_{14})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{1}{16\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}\right],&j=0\\\ &\\\ \frac{1}{8\pi^{5}}\left[-2{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}(T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (71) $\displaystyle C^{(j)}(p_{14}^{2})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{1}{8\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{4}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}\right],&j=0\\\ &\\\ \frac{1}{8\pi^{5}}\left[-4{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{4}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{4}T_{4}^{4}(3T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (75) #### 4.4.4 $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}=p_{ij}\cdot p_{mn}\cdot p_{st}$ $\displaystyle C^{(j)}(p_{12}^{3})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{9}{\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{45}{2\pi^{5}}\left[2{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (79) $\displaystyle 
C^{(j)}(p_{12}^{2}\cdot p_{13})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{9}{2\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{15}{2\pi^{5}}\left[3{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(T_{3}+2T_{4})\right],&j=1\end{array}\,\,,\right.$ (83) $\displaystyle C^{(j)}(p_{12}^{2}\cdot p_{14})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{9}{2\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{15}{2\pi^{5}}\left[3{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(2T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (87) $\displaystyle C^{(j)}(p_{12}\cdot p_{13}^{2})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{3}{\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{15}{4\pi^{5}}\left[4{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(T_{3}+3T_{4})\right],&j=1\end{array}\,\,,\right.$ (91) $\displaystyle C^{(j)}(p_{12}\cdot p_{13}\cdot p_{14})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{3}{2\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ \frac{15}{4\pi^{5}}\left[-2{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (95) $\displaystyle C^{(j)}(p_{12}\cdot 
p_{14}^{2})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{3}{\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{15}{4\pi^{5}}\left[4{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(3T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (99) $\displaystyle C^{(j)}(p_{13}^{3})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}\frac{9}{4\pi^{5}}\left[-{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ \frac{9}{4\pi^{5}}\left[-5{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}+{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(T_{3}+4T_{4})\right],&j=1\end{array}\,\,,\right.$ (103) $\displaystyle C^{(j)}(p_{13}^{2}\cdot p_{14})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{3}{4\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{3}{4\pi^{5}}\left[5{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(2T_{3}+3T_{4})\right],&j=1\end{array}\,\,,\right.$ (107) $\displaystyle C^{(j)}(p_{13}\cdot p_{14}^{2})=$ $\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{3}{4\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{3}{4\pi^{5}}\left[5{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(3T_{3}+2T_{4})\right],&j=1\end{array}\,\,,\right.$ (111) $\displaystyle C^{(j)}(p_{14}^{3})=$ 
$\displaystyle\,\left\\{\begin{array}[]{ll}-\frac{9}{4\pi^{5}}\left[{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{5}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}\right],&j=0\\\ &\\\ -\frac{9}{4\pi^{5}}\left[5{e}^{\frac{\mu_{1}}{T_{1}}+\frac{\mu_{2}}{T_{2}}}T_{1}^{6}T_{2}^{5}-{e}^{\frac{\mu_{3}}{T_{3}}+\frac{\mu_{4}}{T_{4}}}T_{3}^{5}T_{4}^{5}(4T_{3}+T_{4})\right],&j=1\end{array}\,\,,\right.$ (115) With this complete dictionary, one can readily write down the Boltzmann equations for $T_{\gamma}$ and $T_{\nu}$ as long as $\langle\mathcal{M}^{2}\rangle_{1+2\to 3+4}$ in eqs. (11-12) is known. For example, besides SM contributions, only the $\mathcal{O}_{3,4,5}^{(6)}$ operators listed in table 1 introduce new neutrino self-interactions and thus modify the number and the energy densities of neutrinos of different flavors. For the $\nu_{\alpha}\nu_{\beta}\to\nu_{\alpha}\nu_{\beta}$ ($\alpha\neq\beta$) process, we find $\displaystyle\langle\mathcal{M}^{2}\rangle_{\nu_{\alpha}\nu_{\beta}\to\nu_{\alpha}\nu_{\beta}}^{{\rm SM}+\mathcal{O}_{3,4,5}^{(6)}}=$ $\displaystyle\,\left[32G_{F}^{2}\cdot p_{12}^{2}+\frac{32\sqrt{2}G_{F}C_{4}^{(6)}}{\Lambda^{2}}\cdot p_{12}^{2}+\frac{16}{\Lambda^{4}}\left((C_{4}^{(6)})^{2}-2(C_{3}^{(6)}-4C_{5}^{(6)})C_{5}^{(6)}\right)\cdot p_{12}^{2}\right.$ $\displaystyle\left.+\frac{4}{\Lambda^{4}}\left((C_{3}^{(6)})^{2}-16(C_{5}^{(6)})^{2}\right)\cdot p_{13}^{2}+\frac{32C_{5}^{(6)}}{\Lambda^{4}}\left(C_{3}^{(6)}+4C_{5}^{(6)}\right)\cdot p_{14}^{2}\right],$ (116) where $G_{F}$ is the Fermi constant and $\Lambda$ is the scale of the potential new physics. The first term in the square bracket is the pure SM contributions, the second term is the interference term between the SM and the $\mathcal{O}_{4}^{(6)}$ operator, and the remaining terms are the pure contributions from the $\mathcal{O}_{3,4,5}^{(6)}$ operators. 
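Reading off the collision term integrals from the dictionary then amounts to replacing each $p_{ij}$ monomial above by the corresponding $C^{(j)}$ entry. A minimal numerical sketch of this bookkeeping, with function names of our own choosing and the three needed dictionary entries transcribed from eqs. (55), (67) and (75) (Maxwell-Boltzmann statistics assumed throughout):

```python
import math

PI5 = math.pi**5
G_F = 1.1663787e-11   # Fermi constant in MeV^-2

def _mb_factors(T, mu):
    """Incoming / outgoing Maxwell-Boltzmann exponential factors."""
    return (math.exp(mu[0]/T[0] + mu[1]/T[1]),
            math.exp(mu[2]/T[2] + mu[3]/T[3]))

def C_p12sq(j, T, mu):                                   # eq. (55)
    e_in, e_out = _mb_factors(T, mu); T1, T2, T3, T4 = T
    if j == 0:
        return 3/(8*PI5) * (-e_in*T1**4*T2**4 + e_out*T3**4*T4**4)
    return 1/(4*PI5) * (-6*e_in*T1**5*T2**4 + 3*e_out*T3**4*T4**4*(T3 + T4))

def C_p13sq(j, T, mu):                                   # eq. (67)
    e_in, e_out = _mb_factors(T, mu); T1, T2, T3, T4 = T
    if j == 0:
        return 1/(8*PI5) * (-e_in*T1**4*T2**4 + e_out*T3**4*T4**4)
    return 1/(8*PI5) * (-4*e_in*T1**5*T2**4 + e_out*T3**4*T4**4*(T3 + 3*T4))

def C_p14sq(j, T, mu):                                   # eq. (75)
    e_in, e_out = _mb_factors(T, mu); T1, T2, T3, T4 = T
    if j == 0:
        return 1/(8*PI5) * (-e_in*T1**4*T2**4 + e_out*T3**4*T4**4)
    return 1/(8*PI5) * (-4*e_in*T1**5*T2**4 + e_out*T3**4*T4**4*(3*T3 + T4))

def C_nu_scatter(j, T, mu, Lam, C3=1.0, C4=1.0, C5=1.0):
    """Collision term integral for nu_a nu_b -> nu_a nu_b (a != b):
    each p_ij monomial of the amplitude above is replaced by C^(j)(...)."""
    c12 = (32*G_F**2                                     # pure SM
           + 32*math.sqrt(2)*G_F*C4/Lam**2               # SM x EFT interference
           + 16/Lam**4 * (C4**2 - 2*(C3 - 4*C5)*C5))     # pure EFT
    c13 = 4/Lam**4 * (C3**2 - 16*C5**2)
    c14 = 32*C5/Lam**4 * (C3 + 4*C5)
    return (c12 * C_p12sq(j, T, mu)
            + c13 * C_p13sq(j, T, mu)
            + c14 * C_p14sq(j, T, mu))
```

For $\alpha\neq\beta$ one sets the temperatures to $(T_{\nu_{\alpha}},T_{\nu_{\beta}},T_{\nu_{\alpha}},T_{\nu_{\beta}})$ and likewise for the chemical potentials; the $j=0$ integral then vanishes identically, as it must for a pure scattering process, while the $j=1$ integral drives the two neutrino temperatures toward each other.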
One can then immediately write down the collision term integrals for the $\nu_{\alpha}\nu_{\beta}\to\nu_{\alpha}\nu_{\beta}$ process as $\displaystyle C^{(j)}_{\nu_{\alpha}\nu_{\beta}\to\nu_{\alpha}\nu_{\beta}}=$ $\displaystyle\,\left[32G_{F}^{2}\cdot C^{(j)}(p_{12}^{2})+\frac{32\sqrt{2}G_{F}C_{4}^{(6)}}{\Lambda^{2}}\cdot C^{(j)}(p_{12}^{2})\right.$ $\displaystyle\quad+\frac{16}{\Lambda^{4}}\left((C_{4}^{(6)})^{2}-2(C_{3}^{(6)}-4C_{5}^{(6)})C_{5}^{(6)}\right)\cdot C^{(j)}(p_{12}^{2})$ $\displaystyle\quad+\frac{4}{\Lambda^{4}}\left((C_{3}^{(6)})^{2}-16(C_{5}^{(6)})^{2}\right)\cdot C^{(j)}(p_{13}^{2})$ $\displaystyle\left.\quad+\frac{32C_{5}^{(6)}}{\Lambda^{4}}\left(C_{3}^{(6)}+4C_{5}^{(6)}\right)\cdot C^{(j)}(p_{14}^{2})\right],$ (117) where $C^{(j)}(p_{12}^{2})$, $C^{(j)}(p_{13}^{2})$ and $C^{(j)}(p_{14}^{2})$ are given in eqs. (55), (67) and (75) respectively, and $j=0\,(1)$ corresponds to the number (energy) density of $\nu_{\alpha}$. Though not shown explicitly, $C^{(j)}(p_{12}^{2})$, $C^{(j)}(p_{13}^{2})$ and $C^{(j)}(p_{14}^{2})$ depend on the temperatures and the chemical potentials of $\nu_{\alpha,\beta}$. Specifically, one has $T_{1,3}=T_{\nu_{\alpha}}$, $T_{2,4}=T_{\nu_{\beta}}$, $\mu_{1,3}=\mu_{\nu_{\alpha}}$, $\mu_{2,4}=\mu_{\nu_{\beta}}$ for $\alpha,\beta=e,\mu,\tau$ and $\alpha\neq\beta$. The complete results for $\langle\mathcal{M}^{2}\rangle$ from the SM and the EFT operators listed in table 1 are given in an auxiliary Mathematica notebook file for all relevant processes, together with all the replacement rules to rewrite $\langle\mathcal{M}^{2}\rangle$ in terms of the bases listed in table 2. (The replacement rules are obtained with the help of Package-X Patel:2016fam .) We point out that when all the Wilson coefficients vanish, we reproduce the SM results as presented in, for example, Ref. Dolgov:2002wy . Using the complete dictionary summarized in this section and building our code upon nudec_BSM from Ref.
Escudero:2020dfa , we study corrections to $N_{\rm eff}$ from the NSI operators in table 1 and discuss the results in section 5.

## 5 Constraints on EFT operators from $N_{\rm eff}$

Figure 1: Constraints on the characteristic scale $\Lambda$ of new physics from $\Delta N_{\rm eff}=N_{\rm eff}^{\rm SM+EFT}-N_{\rm eff}^{\rm SM}$, where $N_{\rm eff}^{\rm SM}=3.044$ Akita:2020szl ; Froustey:2020mcq is the SM prediction of $N_{\rm eff}$, and $N_{\rm eff}^{\rm SM+EFT}$ is that from the SM plus new physics. Note that the plot is obtained by fixing all Wilson coefficients at unity and considering only one non-vanishing operator at a time. See the main text for more discussion.

With the complete dictionary presented in section 4, one can readily solve the Boltzmann equations for $T_{\gamma}$ and $T_{\nu_{\alpha}}$, and thus obtain corrections to $N_{\rm eff}$. In what follows, we define these corrections as $\displaystyle\Delta N_{\rm eff}=N_{\rm eff}^{\rm SM+EFT}-N_{\rm eff}^{\rm SM},$ (118) where $N_{\rm eff}^{\rm SM+EFT}$ is the theoretical prediction for $N_{\rm eff}$ with the inclusion of the NC NSI operators, and $N_{\rm eff}^{\rm SM}=3.044$ Akita:2020szl ; Froustey:2020mcq is that from the pure SM. For Planck, we use the current result $N_{\rm eff}=2.99^{+0.34}_{-0.33}$ Aghanim:2018eyx at the 95% CL to obtain the constraints; for CMB-S4, we use the forecast sensitivity $\Delta N_{\rm eff}<0.06$ at the 95% CL Abazajian:2016yjj ; Abazajian:2019tiv ; Abitbol:2017nao ; Abazajian:2019eic . Our code is built upon nudec_BSM from Ref. Escudero:2020dfa , and is used to solve eqs. (11-12) numerically with Mathematica. In our numerical solutions, we keep terms proportional to $m_{e}$ in $\langle\mathcal{M}^{2}\rangle$ and allow $T_{\nu_{\mu}}=T_{\nu_{\tau}}\neq T_{\nu_{e}}$ and $\mu_{\nu_{\mu}}=\mu_{\nu_{\tau}}\neq\mu_{\nu_{e}}$. In the very large $\Lambda$ limit, we reproduce the results in Table 1 of Ref.
Escudero:2020dfa for both $T_{\nu_{e}}=T_{\nu_{\mu,\tau}}$ and $T_{\nu_{e}}\neq T_{\nu_{\mu,\tau}}$. We then show our results for varying Wilson coefficients or varying new physics scale $\Lambda$ in the following subsections.

### 5.1 Constraints on $\Lambda$ with fixed Wilson coefficients

Following the notation of table 1 and fixing the Wilson coefficients at unity, we present our results in figure 1. The constraints shown in figure 1 are obtained by assuming only one non-vanishing NSI operator at a time; those derived from the latest Planck results Aghanim:2018eyx are shown in orange, and those from the proposed CMB-S4 precision goal Abazajian:2016yjj ; Abazajian:2019tiv ; Abitbol:2017nao ; Abazajian:2019eic in purple. Several points from this plot merit emphasis:

* • Constraints on the dimension-6 EFT operators are generically stronger than those on the dimension-7 ones, since the latter are suppressed by one additional power of $\Lambda$. Among the dimension-6 operators, the Planck data currently leads to the most stringent constraint on the $\mathcal{O}_{1,e}^{(6)}$ operator, whose scale is bounded from below at about 195 GeV. In the future, CMB-S4 would improve this lower bound to about 330 GeV, as one can see from the first purple histogram in figure 1. Quantitatively, we summarize the lower bounds on $\Lambda$ for all the operators shown in figure 1 in table 3.
Operators | Lower bound on $\Lambda$ [GeV] (Planck) | Lower bound on $\Lambda$ [GeV] (CMB-S4)
---|---|---
$\mathcal{O}_{1,e}^{(6)}$ | 194.98 | 331.13
$\mathcal{O}_{2,e}^{(6)}$ | 85.11 | 239.88, except (94.84, 102.33)
$\mathcal{O}_{5,e}^{(7)}$ | 1.66 | 2.45, except (1.91, 2.45)
$\mathcal{O}_{6,e}^{(7)}$ | 2.29 | 3.16
$\mathcal{O}_{7,e}^{(7)}$ | 3.16 | 4.47
$\mathcal{O}_{8,e}^{(7)}$ | 3.89 | 6.17
$\mathcal{O}_{9,e}^{(7)}$ | 3.47 | 6.17
$\mathcal{O}_{10,e}^{(7)}$ | 3.89 | 6.17
$\mathcal{O}_{11,e}^{(7)}$ | 3.47 | 6.17

Table 3: Constraints on the EFT operators from current Planck data and the future CMB-S4 proposal. All lower bounds are obtained by assuming one non-vanishing EFT operator at a time and fixing the Wilson coefficients at unity. Note that for $\mathcal{O}_{2,e}^{(6)}$ and $\mathcal{O}_{5,f}^{(7)}$, there are exception intervals for CMB-S4 as a result of destructive interference or a negative shift in $N_{\rm eff}$ from the EFT operators. See the main text for more discussion.

* • As one can see from table 3, for the $\mathcal{O}_{2,e}^{(6)}$ and $\mathcal{O}_{5,f}^{(7)}$ operators, there exist intervals that cannot be probed by future CMB-S4 if one considers only one operator at a time. If one considers multiple operators, however, these two exception intervals would be ruled out by future CMB-S4 results. For this reason, figure 1 is plotted using the lower bounds 239.88 GeV and 2.45 GeV for $\mathcal{O}_{2,e}^{(6)}$ and $\mathcal{O}_{5,f}^{(7)}$, respectively. The exception interval for $\mathcal{O}_{2,e}^{(6)}$ results from the destructive interference between the SM and the NSI operators, while that for $\mathcal{O}_{5,f}^{(7)}$ results from a negative shift in $N_{\rm eff}$ when $\Lambda$ is small. We show this point in the second row of figure 2.

Figure 2: Corrections to $N_{\rm eff}$ from varying $\Lambda$. Upper left: $\Delta N_{\rm eff}$ with the inclusion of $\mathcal{O}_{1,e}^{(6)}$ only.
Very similar plots are obtained for the other operators except for $\mathcal{O}_{(2,e),3,4,5}^{(6)}$ and $\mathcal{O}_{1,2,(5,e)}^{(7)}$, so we only show $\mathcal{O}_{1,e}^{(6)}$ for illustration. Upper right: $\Delta N_{\rm eff}$ from $\mathcal{O}_{1}^{(7)}$ only; a very similar plot is obtained for $\mathcal{O}_{2}^{(7)}$. Lower left: $\Delta N_{\rm eff}$ from $\mathcal{O}_{2,e}^{(6)}$ only. Lower right: $\Delta N_{\rm eff}$ from $\mathcal{O}_{5,e}^{(7)}$ only. In all these subfigures, the black curve stands for the corrections to $N_{\rm eff}$ from the corresponding NSI operator, and the horizontal dashed lines indicate the constraints on $\Delta N_{\rm eff}$ from Planck and CMB-S4, respectively.

* • Constraints on the dimension-6 operators $\mathcal{O}_{3,4,5}^{(6)}$ are missing in figure 1. The reason can be understood as follows: (1) When $\Lambda\gtrsim\Lambda_{W}$ or $\Lambda\gg\Lambda_{W}$, SM contributions dominate and the resulting $N_{\rm eff}$ always agrees with the SM prediction: the deviation of $N_{\rm eff}$ from the SM prediction is always within the uncertainties of both Planck and CMB-S4. (2) For $\Lambda\ll\Lambda_{W}$, one might naïvely think the $\langle\mathcal{M}_{\rm SM}^{2}\rangle$ term in eq. (31) can be safely discarded and a very large $N_{\rm eff}$ would be predicted from $\mathcal{O}_{3,4,5}^{(6)}$. However, as we pointed out right below eq. (31), the SM part cannot be ignored since, in this case, it is the only part that governs the evolution of $T_{\gamma}$. Furthermore, when $\Lambda\ll\Lambda_{W}$, neutrino self-interactions are rapid enough to eliminate any difference between $T_{\nu_{e}}$ and $T_{\nu_{\mu,\tau}}$, and neutrinos of all flavors have exactly the same temperature. (With our choice of the Wilson coefficients and such a small $\Lambda$, the neutrino self-interaction rates are always larger than the Hubble rate, so the three neutrino flavors always stay in thermal equilibrium.)
However, neutrino decoupling itself is not affected, since the coupling of neutrinos to the photon-electron-positron sector is still governed by the weak interactions. We emphasize that the equal temperature of the neutrinos is the direct result of the neutrino self-interactions introduced by $\mathcal{O}_{3,4,5}^{(6)}$, the moderate Wilson coefficients, and the small $\Lambda$. This in turn results in vanishing corrections to the collision term integrals for $\mathcal{O}_{3,4,5}^{(6)}$, as discussed right after eq. (39). Thus, when $\Lambda$ is very small, corrections from $\mathcal{O}_{3,4,5}^{(6)}$ to $N_{\rm eff}$ vanish and the SM prediction is restored.

* • While it is a very good approximation to neglect the temperature and chemical-potential differences among neutrinos when calculating $N_{\rm eff}$ within the SM framework, see Table 1 of Ref. Escudero:2020dfa for example, this approximation is no longer valid when new physics introduces only neutrino self-interactions, as the $\mathcal{O}_{3,4,5}^{(6)}$ operators do. In this scenario, if one takes equal neutrino temperatures and equal chemical potentials for all three neutrino flavors, then the collision term integrals simply vanish, so that the effects of this new physics can never be observed.

* • Constraints on $\mathcal{O}_{1}^{(7)}$ and $\mathcal{O}_{2}^{(7)}$ are also missing in figure 1, due to the suppression factors $\alpha/(12\pi)$ and $\alpha/(8\pi)$, respectively: at the amplitude level, both factors give a suppression of $\mathcal{O}(10^{-4})$, so the invariant amplitude $\langle\mathcal{M}^{2}\rangle$ is suppressed by a factor of $\mathcal{O}(10^{-8})$. Note also that there is no interference between the SM and $\mathcal{O}_{1,2}^{(7)}$. The upper right panel of figure 2 shows the prediction for $N_{\rm eff}$ with the inclusion of $\mathcal{O}_{1}^{(7)}$; a similar result is obtained for $\mathcal{O}_{2}^{(7)}$.
* • Though we do not consider the magnetic dipole operator $\mathcal{O}_{1}^{(5)}$ in this work, and the $\mathcal{O}_{1,2}^{(7)}$ operators are not constrained by $N_{\rm eff}$ as discussed above, lower bounds on $\Lambda$ for these operators do exist from other experiments. For $\mathcal{O}_{1}^{(5)}$, it was concluded in Ref. Altmannshofer:2018xyo that the most stringent lower bound on $\Lambda$ was $2.7\times 10^{6}$ GeV, from the magnetic moment of $\nu_{e}$ using Borexino Phase-II solar neutrino data Borexino:2017fbd . On the other hand, translating this constraint on the magnetic moment of $\nu_{e}$ into bounds on $\mathcal{O}_{1,2}^{(7)}$, they found $\Lambda>328$ GeV and $\Lambda>1081$ GeV for $\mathcal{O}_{1}^{(7)}$ and $\mathcal{O}_{2}^{(7)}$, respectively. Furthermore, $\mathcal{O}_{1,e}^{(6)}$ was also constrained to have a lower bound of 1005 GeV from a global fitting of neutrino oscillation data Esteban:2018ppq ; Altmannshofer:2018xyo . As one can see from table 3, the constraint on $\mathcal{O}_{1,e}^{(6)}$ from the global fitting is stronger than that from $N_{\rm eff}$. However, none of the other operators in figure 1 is constrained in Ref. Altmannshofer:2018xyo , making our work complementary to theirs as well as to that in Ref. Du:2020dwr . We emphasize that the conclusions above are obtained by fixing the Wilson coefficients at one and considering only one non-vanishing NSI operator at a time. In Ref. Du:2020dwr , we find that if multiple operators exist at the same scale, then the correlation among them may change the constraints by orders of magnitude. However, due to the computational challenge, this correlation effect is in general ignored, except for some UV models where the number of operators at a certain dimension is limited.
In this work, we find that when $\Lambda\sim\Lambda_{W}$ or smaller, where NSI contributions to $\langle\mathcal{M}^{2}\rangle$ are comparable to or dominate over those from the SM, numerical computation of the Boltzmann equations becomes extremely slow or even infeasible, even with high-performance clusters. For this reason, the correlation mentioned above will not be discussed. ### 5.2 Constraints on Wilson coefficients with fixed $\Lambda$ Figure 3: Corrections to $N_{\rm eff}$ from varying Wilson coefficients, considering only one non-vanishing NSI operator at a time. The upper left (right) panel corresponds to $\Delta N_{\rm eff}$ from $\mathcal{O}_{1(2),e}^{(6)}$ with $\Lambda=1000$ GeV, and the lower left (right) panel is the same but with $\Lambda=100$ GeV. The black curve stands for corrections to $N_{\rm eff}$ from the NSI operator, and the horizontal colored lines have the same meaning as those in figure 2. Note the scale difference of the horizontal axes, and see more discussion in the main text. Alternatively, in this subsection we present the constraints on the Wilson coefficients with $\Lambda=1$ TeV and 100 GeV; the results are shown in the first and the second rows of figure 3, respectively. Constraints are shown for $\mathcal{O}_{1,e}^{(6)}$ and $\mathcal{O}_{2,e}^{(6)}$ only; all the other Wilson coefficients remain unconstrained over the range we consider in figure 3.
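Since a dimension-6 Wilson coefficient enters the collision terms through the combination $C/\Lambda^{2}$ in the regime where the linear (interference) term dominates, a bound derived at one new-physics scale translates to another by the factor $(\Lambda_{\rm new}/\Lambda_{\rm old})^{2}$. A minimal Python sketch of this rescaling (illustrative only; the actual bounds come from the full Boltzmann computation):

```python
def rescale_dim6_bound(bound, lam_old_gev, lam_new_gev):
    """Translate a bound on a dimension-6 Wilson coefficient C, which
    enters observables through C / Lambda^2, from one new-physics
    scale to another (both scales in GeV)."""
    return bound * (lam_new_gev / lam_old_gev) ** 2

# Example: the Planck bound C < 25.8 at Lambda = 1 TeV (eq. (119))
# maps onto C < ~0.26 at Lambda = 100 GeV, matching eq. (121).
print(rescale_dim6_bound(25.8, 1000.0, 100.0))
```

The same rescaling reproduces the other pairs of bounds in eqs. (119)-(122) to the quoted precision.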
Quantitatively, we find, assuming the same Wilson coefficients for neutrinos of different flavors: * • For $\Lambda=1$ TeV: $\displaystyle-28.7\lesssim C_{1,e}^{(6)}\lesssim 25.8\quad{\rm(Planck),}$ $\displaystyle\quad-11.8\lesssim C_{1,e}^{(6)}\lesssim 9.0\quad\text{ (CMB-S4)}$ (119) $\displaystyle-145.2\lesssim C_{2,e}^{(6)}\lesssim 141.3\quad{\rm(Planck),}$ $\displaystyle\quad-17.0\lesssim C_{2,e}^{(6)}\lesssim 15.8\quad\text{ (CMB-S4),}$ (120) except for $\displaystyle C_{2,e}^{(6)}\in(-116.7,-100.7)\cup(96.4,112.5){\text{ for CMB-S4}.}$ * • For $\Lambda=100$ GeV: $\displaystyle-0.29\lesssim C_{1,e}^{(6)}\lesssim 0.26\quad{\rm(Planck),}$ $\displaystyle\quad-0.12\lesssim C_{1,e}^{(6)}\lesssim 0.09\quad\text{ (CMB-S4)}$ (121) $\displaystyle-1.45\lesssim C_{2,e}^{(6)}\lesssim 1.42\quad{\rm(Planck),}$ $\displaystyle\quad-0.18\lesssim C_{2,e}^{(6)}\lesssim 0.15\quad\text{ (CMB-S4),}$ (122) except for $\displaystyle C_{2,e}^{(6)}\in(-1.17,-1.01)\cup(0.96,1.13){\text{ for CMB-S4}.}$ For $C_{2,e}^{(6)}$, the two exception intervals for CMB-S4 in the last line of the two bullets above result from the destructive interference already discussed in the last subsection: for $C_{2,e}^{(6)}$ of $\mathcal{O}(10)$ or larger, $\mathcal{O}_{2,e}^{(6)}$ is effectively at the weak scale, leading to the destructive interference and thus to the two intervals. This can be understood more explicitly from the analytical expressions of the neutrino total energy densities from $\mathcal{O}_{1,e}^{(6)}$ and $\mathcal{O}_{2,e}^{(6)}$ (these results can be readily obtained using our complete dictionary in section 4.4 or the analytical expressions in the auxiliary Mathematica notebook file):
$\displaystyle\rho^{\rm interf.}_{\nu-\rm total}(\mathcal{O}_{1,e}^{(6)})\simeq$ $\displaystyle+\frac{256\sqrt{2}C_{1,e}^{(6)}G_{F}\sin^{2}\theta_{W}T_{\gamma}^{9}}{\pi^{5}\Lambda^{2}},$ (123) $\displaystyle\rho^{\rm interf.}_{\nu-\rm total}(\mathcal{O}_{2,e}^{(6)})\simeq$ $\displaystyle-\frac{40\sqrt{2}C_{2,e}^{(6)}G_{F}T_{\gamma}^{5}T_{\nu_{e}}^{4}}{\pi^{5}\Lambda^{2}}\times(1+4\sin^{2}\theta_{W}),$ (124) where $\theta_{W}$ is the weak mixing angle; we show only the interfering terms here, omitting sub-leading effects in each case. Note that a larger (smaller) neutrino energy density would be equivalent to a higher (lower) neutrino temperature. Thus, as can be understood from eq. (3), the constructive (destructive) interference also explains the positive (negative) shift of $N_{\rm eff}$ from $\mathcal{O}_{1,e}^{(6)}$ ($\mathcal{O}_{2,e}^{(6)}$) in figures 2 and 3 when $\Lambda\sim\Lambda_{W}$. As for the other NSI operators not shown in figure 3, since they are suppressed by at least one more power of $\Lambda$, Planck and CMB-S4 are not able to constrain their Wilson coefficients when $\Lambda$ is fixed at 1 TeV. A similar observation holds for $\Lambda=100$ GeV, with stronger constraints on $C_{1,e}^{(6)}$ and $C_{2,e}^{(6)}$, whose magnitudes are, as expected, 100 times smaller than in the $\Lambda=1$ TeV case. ### 5.3 Comparison with current constraints on NC NSIs Figure 4: Constraints on $\epsilon_{e,L}$ (left panel) and $\epsilon_{e,R}$ (right panel) from $N_{\rm eff}$. The black curves stand for corrections to $N_{\rm eff}$ from the dimension-6 NC NSI operators, and the colored lines are the same as those in figure 2. See the main text for more details on the notation used in these two plots.
To compare with constraints on the dimension-6 operators $\mathcal{O}_{1(2),e}^{(6)}$ from other experiments, we first review the parameterization commonly used in the literature to describe neutrino NSIs: $\displaystyle\mathcal{L}_{\rm NSI}^{\rm NC}=-2\sqrt{2}G_{F}\sum_{\alpha,\beta,f,P}\epsilon_{\alpha\beta}^{f,P}\left(\bar{\nu}_{\alpha}\gamma_{\mu}P_{L}\nu_{\beta}\right)\left(\bar{f}\gamma^{\mu}Pf\right),$ (125) with $f=e,u,d$ the charged fermions, $\alpha,\beta=e,\mu,\tau$ the neutrino flavors, and $P=L,R$ the chiral projection operators, where $L=(1-\gamma_{5})/2$ and $R=(1+\gamma_{5})/2$. Note that the nine $\epsilon_{\alpha\beta}^{f,P}$'s are all real, and Hermiticity of the Lagrangian guarantees that only six of them are independent. The relevant parameters for our study in this work are $\epsilon_{\alpha\beta}^{e,P}$. One readily finds that, in terms of the Wilson coefficients used in this work, the $\epsilon$ parameters can be expressed as $\displaystyle\epsilon_{\alpha\beta}^{e,L}=\frac{C_{1,e}^{(6)}-C_{2,e}^{(6)}}{\Lambda^{2}\cdot 2\sqrt{2}G_{F}},\quad\epsilon_{\alpha\beta}^{e,R}=\frac{C_{1,e}^{(6)}+C_{2,e}^{(6)}}{\Lambda^{2}\cdot 2\sqrt{2}G_{F}}.$ (126) Fixing $\Lambda\simeq 174.10$ GeV from the condition $\Lambda^{2}\cdot 2\sqrt{2}G_{F}=1$, such that the LEFT in our notation mimics eq. (125), we present our results for $\epsilon_{\alpha\beta}^{L,R}$ in figure 4 by including all NC NSIs in eq. (125) while ignoring all neutrino flavor dependence of $C_{(1,2),e}^{(6)}$. The colors in each subgraph of figure 4 have exactly the same meaning as those in figure 2, and to obtain the constraints, we once again ignore the neutrino flavor dependence of the LEFT Wilson coefficients and consider only one non-vanishing $\epsilon$ at a time in our analysis. However, we stress that, as one can see directly from eq. (126), one non-vanishing $\epsilon$ in general receives contributions from both $\mathcal{O}_{1,e}^{(6)}$ and $\mathcal{O}_{2,e}^{(6)}$.
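The matching of eq. (126) is simple enough to check numerically. A short Python sketch (using the PDG value of $G_{F}$, an input not quoted explicitly in the text) reproduces the $\Lambda\simeq 174.10$ GeV normalization and maps a pair of Wilson coefficients onto the $\epsilon$ parameters:

```python
import math

G_F = 1.1663787e-5  # Fermi constant in GeV^-2 (PDG value; assumed input)

# Scale at which Lambda^2 * 2*sqrt(2)*G_F = 1, so that the LEFT
# normalization matches the epsilon parameterization of eq. (125).
LAMBDA = 1.0 / math.sqrt(2.0 * math.sqrt(2.0) * G_F)

def wilson_to_epsilon(c1, c2, lam=LAMBDA):
    """Map (C_{1,e}, C_{2,e}) at scale lam (GeV) to (eps_L, eps_R)
    following eq. (126)."""
    norm = lam**2 * 2.0 * math.sqrt(2.0) * G_F
    return (c1 - c2) / norm, (c1 + c2) / norm

print(round(LAMBDA, 2))             # -> 174.1
print(wilson_to_epsilon(1.0, 0.0))  # at lam = LAMBDA, eps_L = eps_R = C1
```

At any other scale, the same function simply rescales the result by $(174.10\,{\rm GeV}/\Lambda)^{2}$.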
To summarize, we find the $\epsilon$'s are constrained by $N_{\rm eff}$ as follows (since we ignore the neutrino flavor dependence, we leave out the neutrino flavor indices here and in the following): $\displaystyle-1.60\lesssim\epsilon^{e,L}\lesssim 1.44\quad\text{(Planck)},\quad-0.61\lesssim\epsilon^{e,L}\lesssim 0.46\quad\text{(CMB-S4)};$ (127) $\displaystyle-1.60\lesssim\epsilon^{e,R}\lesssim 1.44\quad\text{(Planck)},\quad-0.39\lesssim\epsilon^{e,R}\lesssim 0.31\quad\text{(CMB-S4)}.$ (128)

Column order of table 4: $\epsilon$'s | Esteban:2018ppq | Deniz:2010mp | Davidson:2003ha | Barranco:2005ps | Barranco:2007ej | Bolanos:2008km | Khan:2017oxw | Khan:2016uon | Babu:2019mfe | This work (Planck) | This work (CMB-S4). One row per $\epsilon$; entries follow this column order, a dash marks no available constraint, and two bracketed intervals within a single cell are explained in the caption.
$\epsilon^{e,L}_{ee}$: [-0.010, 2.039] [-1.53, 0.38] [-0.07, 0.1] [-0.05, 0.12] [-0.03, 0.08] [-0.036, 0.063] [-0.017, 0.027] [-0.003, 0.003] [-0.08, 0.08] [-0.185, 0.380] [-0.130, 0.185] [-1.6, 1.44] [-0.61, 0.46]
$\epsilon^{e,L}_{e\mu}$: [-0.179, 0.146] [-0.84, 0.84] - - [-0.13, 0.13] - [-0.152, 0.152] [-0.055, 0.055] [-0.33, 0.35] [-0.025, 0.052] [-0.017, 0.040] [-1.6, 1.44] [-0.61, 0.46]
$\epsilon^{e,L}_{e\tau}$: [-0.860, 0.350] [-0.84, 0.84] [-0.4, 0.4] [-0.44, 0.44] [-0.33, 0.33] - [-0.152, 0.152] [-0.055, 0.055] [-0.33, 0.35] [-0.055, 0.023] [-0.042, 0.012] [-1.6, 1.44] [-0.61, 0.46]
$\epsilon^{e,L}_{\mu\mu}$: [-0.364, 1.387] - [-0.03, 0.03] - [-0.03, 0.03] - [-0.040, 0.04] [-0.010, 0.010] - [-0.290, 0.390] [-0.192, 0.240] [-1.6, 1.44] [-0.61, 0.46]
$\epsilon^{e,L}_{\mu\tau}$: [-0.035, 0.028] - [-0.1, 0.1] - [-0.1, 0.1] - - - [-0.015, 0.013] [-0.010, 0.010] [-1.6, 1.44] [-0.61, 0.46]
$\epsilon^{e,L}_{\tau\tau}$: [-0.350, 1.400] - [-0.5, 0.5] - [-0.46, 0.24] [-0.16, 0.110] [0.41, 0.66] [-0.040, 0.04] [-0.010, 0.010] - [-0.360, 0.145] [-0.120, 0.095] [-1.6, 1.44] [-0.61, 0.46]
$\epsilon^{e,R}_{ee}$: [-0.010, 2.039] [-0.07, 0.08] [-1, 0.5] [-0.04, 0.14] [0.004, 0.151] [-0.27, 0.59] [-0.33, 0.25] [-0.07, 0.07] [-0.04, 0.06] [-0.185, 0.380] [-0.130, 0.185] [-1.6, 1.44] [-0.39, 0.31]
$\epsilon^{e,R}_{e\mu}$: [-0.179, 0.146] [-0.19, 0.19] - - [-0.13, 0.13] - [-0.236, 0.236] [-0.08, 0.08] [-0.15, 0.16] [-0.025, 0.052] [-0.017, 0.040] [-1.6, 1.44] [-0.39, 0.31]
$\epsilon^{e,R}_{e\tau}$: [-0.860, 0.350] [-0.19, 0.19] [-0.7, 0.7] [-0.27, 0.27] [-0.05, 0.05] [-0.28, 0.28] - [-0.236, 0.236] [-0.08, 0.08] [-0.15, 0.16] [-0.055, 0.023] [-0.042, 0.012] [-1.6, 1.44] [-0.39, 0.31]
$\epsilon^{e,R}_{\mu\mu}$: [-0.364, 1.387] - [-0.03, 0.03] - [-0.03, 0.03] - [-0.10, 0.12] [-0.006, 0.006] - [-0.290, 0.390] [-0.192, 0.240] [-1.6, 1.44] [-0.39, 0.31]
$\epsilon^{e,R}_{\mu\tau}$: [-0.035, 0.028] - [-0.1, 0.1] - [-0.1, 0.1] - - - [-0.015, 0.013] [-0.010, 0.010] [-1.6, 1.44] [-0.39, 0.31]
$\epsilon^{e,R}_{\tau\tau}$: [-0.350, 1.400] - [-0.5, 0.5] - [-0.25, 0.43] [-1.05, 0.31] [-0.10, 0.12] [-0.006, 0.006] - [-0.360, 0.145] [-0.120, 0.095] [-1.6, 1.44] [-0.39, 0.31]

Table 4: Summary of constraints on dimension-6 neutrino-electron NC NSIs from previous studies and this work. Constraints are obtained from a global fitting of all kinds of neutrino oscillation data plus the COHERENT result in Ref. Esteban:2018ppq , from the TEXONO collaboration in Ref. Deniz:2010mp , from the LEP, LSND and CHARM-II experiments in Ref. Davidson:2003ha , from a global analysis of $\nu_{e}e$ and $\bar{\nu}_{e}e$ scattering data from the LSND, Irvine, Rovno and MUNU experiments in Ref. Barranco:2005ps , from the OPAL, ALEPH, L3, DELPHI, LSND, CHARM-II, Irvine, Rovno and MUNU experiments in Ref. Barranco:2007ej , from solar and reactor neutrino experiments in Ref. Bolanos:2008km , from low-energy solar neutrinos at source and detector from the Borexino experiment in Ref. Khan:2017oxw , from a global analysis of short baseline $\nu e$ and $\bar{\nu}e$ data from LSND, LAMPF, Irvine, Rovno, MUNU, TEXONO and KRASNOYARSK in Ref. Khan:2016uon , and from DUNE in Ref. Babu:2019mfe . In comparison, we list constraints on these $\epsilon$ parameters from previous studies and those obtained in this work in table 4, ignoring bounds from loops Davidson:2003ha ; Biggio:2009kv .
Note that constraints from Ref. Esteban:2018ppq in the second column are originally presented in terms of $\epsilon_{\alpha\beta}^{e,L+R}\equiv\epsilon_{\alpha\beta}^{e,L}+\epsilon_{\alpha\beta}^{e,R}$. We translate them into constraints on the individual $\epsilon_{\alpha\beta}^{e,L(R)}$ by assuming only one of them is non-vanishing. The TEXONO constraints of Ref. Deniz:2010mp are obtained at the Kuo-Sheng Nuclear Power Station. For constraints from Ref. Barranco:2005ps , we cite their results in the one-parameter case since it leads to the most stringent constraints on these NC NSIs, and similarly for the results in Refs. Khan:2017oxw ; Khan:2016uon . For constraints from Ref. Babu:2019mfe , the upper and the lower intervals are obtained using an exposure of 300 and 850 kt$\cdot$MW$\cdot$yr for DUNE, respectively. For constraints from Ref. Khan:2017oxw , the upper number is obtained from a detector-only study using low-energy solar neutrinos at Borexino, while the lower one is the future prospect from a combined analysis of the detector and the source. For all the other cases in table 4, whenever two intervals appear, they indicate two disjoint ranges that are simultaneously allowed by the corresponding analyses. We refer the reader to the original references for more details.
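The relative sensitivity of Planck and CMB-S4 can be quantified directly from the widths of the allowed intervals in eqs. (127)-(128); a quick arithmetic check in Python:

```python
# Allowed intervals from eqs. (127)-(128); Planck gives the same
# interval for eps^{e,L} and eps^{e,R}.
planck = (-1.60, 1.44)
cmbs4_L = (-0.61, 0.46)
cmbs4_R = (-0.39, 0.31)

def width(interval):
    lo, hi = interval
    return hi - lo

# Interval widths shrink by ~2.8 for eps^{e,L} and ~4.3 for eps^{e,R},
# consistent with improvement factors of roughly 3 and 5.
print(width(planck) / width(cmbs4_L))
print(width(planck) / width(cmbs4_R))
```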
As one can see from table 4, constraints from other experiments are in general stronger than those we obtain from Planck. However, as the last column of table 4 shows, the results from CMB-S4 would improve upon Planck by a factor of $\sim 3$ (5) for $\epsilon^{e,L(R)}$. As a result, all the $\epsilon$'s would be bounded at the 10% level. On the other hand, one notes in table 4 that seven of these $\epsilon$'s are constrained at the 10% level by previous experiments, while the following five are stringently constrained at the 1% level: $\epsilon_{ee}^{e,L}$ Barranco:2007ej ; Bolanos:2008km , $\epsilon_{\mu\mu}^{e,(L,R)}$ Davidson:2003ha ; Barranco:2007ej , $\epsilon_{\mu\tau}^{e,L}$ Esteban:2018ppq , $\epsilon_{ee}^{e,R}$ Deniz:2010mp , and $\epsilon_{\mu\tau}^{e,R}$ Esteban:2018ppq . Therefore, constraints on most of these $\epsilon$'s from CMB-S4 are basically comparable to the existing ones. For example, $\epsilon_{\tau\tau}^{e,R}$ is constrained to be [-0.25, 0.43] in Ref. Barranco:2007ej from the OPAL, ALEPH, L3, DELPHI, LSND, CHARM-II, Irvine, Rovno and MUNU experiments, while it would be [-0.39, 0.31] from CMB-S4. Furthermore, we point out that, compared with the four-parameter cases of Ref. Barranco:2005ps , our results for $\epsilon^{e,R}$ from CMB-S4 are slightly stronger than theirs. Similarly, in the two-parameter (correlated) case, our constraints on $\epsilon^{e,R}_{\mu\mu,ee}$ ($\epsilon^{e,L}_{e\mu,e\tau}$) are stronger than those obtained in Ref. Khan:2017oxw (Khan:2016uon ), while weaker than or comparable to theirs for the other $\epsilon$'s. On the other hand, taking both $\epsilon^{e,L}$ and $\epsilon^{e,R}$ into account simultaneously, we obtain the constraints shown in figure 5, where the orange and purple regions are those still allowed by Planck and CMB-S4, respectively.
The permitted regions lie along the diagonal of the $\epsilon^{e,L}$-$\epsilon^{e,R}$ plane, where the contributions from $\mathcal{O}_{1,e}^{(6)}$ and $\mathcal{O}_{2,e}^{(6)}$ cancel. This effect becomes more evident when the magnitudes of $\epsilon^{e,L}$ and $\epsilon^{e,R}$ are large, as implied by the purple regions when $|\epsilon^{e,(L,R)}|\gtrsim 4$. The subfigure in the upper right corner of figure 5 shows the enlarged region allowed by CMB-S4 near the origin. Since we assume neutrino flavor independence and $N_{\rm eff}$ is more sensitive to light degrees of freedom, our constraints are slightly less stringent than, but again very comparable to, those discussed in the last paragraph. Our results presented in this work complement those from previous studies on NC neutrino NSIs from collider, neutrino coherent scattering and neutrino oscillation experiments. Figure 5: Simultaneous constraints on $\epsilon_{e,L}$ and $\epsilon_{e,R}$ from precision measurements of $N_{\rm eff}$ by Planck and CMB-S4. The allowed regions are indicated in orange and purple, respectively. The subgraph in the upper right corner corresponds to the magnified region allowed by CMB-S4 near the origin. See the main text for a detailed discussion. ## 6 Conclusions The null observation of new resonances since the discovery of the Higgs boson at the LHC has gradually shifted the strategy for new physics searches from specific UV models to model-independent studies. EFTs provide a systematic and model-independent approach to heavy new physics. In the early Universe, where the active fields are neutrinos, electrons, positrons and photons, the system can be described by the LEFT, even in the presence of new physics above the $\sim\mathcal{O}(100\rm\,MeV)$ scale. NC NSIs induced by such new physics would affect neutrino decoupling in the early Universe and would thus also modify the prediction of $N_{\rm eff}$.
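For reference, $N_{\rm eff}$ is conventionally normalized so that instantaneous neutrino decoupling with $T_{\nu}/T_{\gamma}=(4/11)^{1/3}$ gives $N_{\rm eff}=3$, and any NSI-induced shift of the neutrino temperature feeds into $N_{\rm eff}$ at the fourth power. A minimal sketch of this standard textbook relation (not the paper's full eq. (3), which also tracks chemical potentials and flavor-dependent temperatures):

```python
def n_eff(t_nu_over_t_gamma, n_flavors=3):
    """Effective number of neutrino species for a common neutrino-to-photon
    temperature ratio, normalized so that instantaneous decoupling,
    T_nu / T_gamma = (4/11)**(1/3), gives N_eff = n_flavors."""
    return n_flavors * ((11.0 / 4.0) ** (1.0 / 3.0) * t_nu_over_t_gamma) ** 4

ratio_inst = (4.0 / 11.0) ** (1.0 / 3.0)
print(n_eff(ratio_inst))         # instantaneous decoupling: N_eff = 3 (up to rounding)
print(n_eff(1.01 * ratio_inst))  # a 1% hotter neutrino bath: ~4% larger N_eff
```

The full non-instantaneous treatment of Refs. Escudero:2018mvt ; Escudero:2020dfa yields the SM value $N_{\rm eff}\simeq 3.044$, the baseline against which the NSI corrections discussed in this work are measured.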
In light of the very precise measurements of $N_{\rm eff}$ from current Planck data and the precision target of CMB-S4, we present in this work constraints on NC NSIs up to dimension-7 from $N_{\rm eff}$, assuming that all NC NSIs are induced by heavy mediators above $\sim\mathcal{O}(\rm 100\,MeV)$. To that end, we adopt the strategy developed in Refs. Escudero:2018mvt ; Escudero:2020dfa , which permits a fast and precise calculation of $N_{\rm eff}$ and can also be easily generalized to include various kinds of new physics. The speed of the calculation largely relies on pre-calculated collision term integrals, which were previously available only for several specific processes in the SM. In this work, we provide a complete, generic and analytical dictionary for these collision term integrals in section 4. With our results, as long as the invariant amplitudes are known, one can refer to this dictionary to write down the Boltzmann equations and then obtain the prediction of $N_{\rm eff}$ from the SM or from some new physics with little effort. We also show an example of the application of this dictionary at the end of section 4. Including the NC NSIs and using the dictionary described above, we study constraints on these operators from precision measurements of $N_{\rm eff}$. Our results are presented in figure 1 and summarized in table 3, where the lower bounds on the scale of new physics $\Lambda$ are obtained by fixing the Wilson coefficients at unity and considering only one non-vanishing NSI operator at a time. We find that the dimension-6 NSI operators $\mathcal{O}_{1,e}^{(6)}$ and $\mathcal{O}_{2,e}^{(6)}$ are constrained to be above $\sim 331$ GeV and $\sim 240$ GeV, respectively, by CMB-S4.
On the other hand, due to suppression by the new physics scale, the couplings and $m_{e}$, the dimension-7 operators $\mathcal{O}_{(5,6,7,8,9,10,11),e}^{(7)}$ only give visible corrections to $N_{\rm eff}$ when the new physics is relatively light; thus the current lower bounds on these operators are about $6$ GeV and $3$ GeV for $\mathcal{O}_{(7,8,9,10,11),e}^{(7)}$ and $\mathcal{O}_{(5,6),e}^{(7)}$, respectively. Operators $\mathcal{O}_{3,4,5}^{(6)}$ are not constrained by $N_{\rm eff}$ due to (1) negligible corrections to $N_{\rm eff}$ when $\Lambda\gtrsim\Lambda_{W}$ and (2) the realization of thermal equilibrium among the three neutrino flavors, which results in vanishing contributions to $N_{\rm eff}$. Operators $\mathcal{O}_{1,2}^{(7)}$ are also not constrained by $N_{\rm eff}$ due to the suppression by tiny couplings. On the other hand, we also study constraints on the Wilson coefficients with $\Lambda$ fixed at 1 TeV and 100 GeV. The results are shown in figure 3, taking only one non-vanishing NSI operator into account at a time. We find that only $C_{1,e}^{(6)}$ and $C_{2,e}^{(6)}$ are constrained by $N_{\rm eff}$, since the dimension-7 operators are all suppressed by one more power of $\Lambda$, as well as by $m_{e}$ or small couplings. At $\Lambda=100$ GeV, we find the magnitude of $C_{1,e}^{(6)}$ is constrained to be around 0.3 (0.1) by Planck (CMB-S4), while it is about 1.4 (0.2) for $C_{2,e}^{(6)}$. The results are summarized in eqs. (119)-(122). Constraints on the dimension-6 neutrino-electron NC NSI operators $\mathcal{O}_{(1,2),e}^{(6)}$ from precision measurements of $N_{\rm eff}$ are also compared with previous results from, for example, a global fitting of neutrino oscillation data and collider experiments.
To that end, we first obtain constraints on the NC NSIs using the $\epsilon$ parameterization, and then present the results in figure 4 and table 4 for one non-vanishing NC NSI operator at a time, and in figure 5 for the inclusion of both operators. We find that constraints from precision measurements of $N_{\rm eff}$ by Planck are in general weaker than those from the other experiments mentioned above. However, the improved results from CMB-S4 in the future would become comparable for certain operators. Our work complements previous studies of NC NSIs based on other experiments. In the future, if the cosmic neutrino background (C$\nu$B) could be directly measured, $N_{\rm eff}$ would be determined with much better precision, and one could then expect much stronger constraints on these NC neutrino NSIs from the C$\nu$B. ###### Acknowledgements. We thank Shu-Yuan Guo for his valuable contribution at the early stage of this project, Miguel Escudero for helpful discussion, and the HPC Cluster of ITP-CAS for the computation support. YD and JHY were supported by the National Science Foundation of China (NSFC) under Grants No. 12022514 and No. 11875003. JHY was also supported by the National Science Foundation of China (NSFC) under Grants No. 12047503 and National Key Research and Development Program of China Grant No. 2020YFC2201501. ## References * (1) ATLAS Collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys. Lett. B716 (2012) 1–29, [arXiv:1207.7214]. * (2) CMS Collaboration, S. Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC, Phys. Lett. B 716 (2012) 30–61, [arXiv:1207.7235]. * (3) S. Weinberg, Baryon and Lepton Nonconserving Processes, Phys. Rev. Lett. 43 (1979) 1566–1570. * (4) W. Buchmuller and D. Wyler, Effective Lagrangian Analysis of New Interactions and Flavor Conservation, Nucl. Phys. B 268 (1986) 621–653. * (5) B.
Grzadkowski, M. Iskrzynski, M. Misiak, and J. Rosiek, Dimension-Six Terms in the Standard Model Lagrangian, JHEP 10 (2010) 085, [arXiv:1008.4884]. * (6) L. Lehman, Extending the Standard Model Effective Field Theory with the Complete Set of Dimension-7 Operators, Phys. Rev. D 90 (2014), no. 12 125023, [arXiv:1410.4193]. * (7) H.-L. Li, Z. Ren, J. Shu, M.-L. Xiao, J.-H. Yu, and Y.-H. Zheng, Complete Set of Dimension-8 Operators in the Standard Model Effective Field Theory, arXiv:2005.00008. * (8) C. W. Murphy, Dimension-8 operators in the Standard Model Effective Field Theory, JHEP 10 (2020) 174, [arXiv:2005.00059]. * (9) H.-L. Li, Z. Ren, M.-L. Xiao, J.-H. Yu, and Y.-H. Zheng, Complete Set of Dimension-9 Operators in the Standard Model Effective Field Theory, arXiv:2007.07899. * (10) Y. Liao and X.-D. Ma, An explicit construction of the dimension-9 operator basis in the standard model effective field theory, JHEP 11 (2020) 152, [arXiv:2007.08125]. * (11) Y. Liao and X.-D. Ma, Renormalization Group Evolution of Dimension-seven Baryon- and Lepton-number-violating Operators, JHEP 11 (2016) 043, [arXiv:1607.07309]. * (12) E. E. Jenkins, A. V. Manohar, and P. Stoffer, Low-Energy Effective Field Theory below the Electroweak Scale: Operators and Matching, JHEP 03 (2018) 016, [arXiv:1709.04486]. * (13) Y. Liao, X.-D. Ma, and Q.-Y. Wang, Extending low energy effective field theory with a complete set of dimension-7 operators, JHEP 08 (2020) 162, [arXiv:2005.08013]. * (14) H.-L. Li, Z. Ren, M.-L. Xiao, J.-H. Yu, and Y.-H. Zheng, Low Energy Effective Field Theory Operator Basis at $d\leq 9$, arXiv:2012.09188. * (15) C. W. Murphy, Low-Energy Effective Field Theory below the Electroweak Scale: Dimension-8 Operators, arXiv:2012.13291. * (16) J. Davis, Raymond, D. S. Harmer, and K. C. Hoffman, Search for neutrinos from the sun, Phys. Rev. Lett. 20 (1968) 1205–1209. * (17) SNO Collaboration, Q.
Ahmad et al., Measurement of the rate of $\nu_{e}+d\to p+p+e^{-}$ interactions produced by 8B solar neutrinos at the Sudbury Neutrino Observatory, Phys. Rev. Lett. 87 (2001) 071301, [nucl-ex/0106015]. * (18) Super-Kamiokande Collaboration, Y. Fukuda et al., Evidence for oscillation of atmospheric neutrinos, Phys. Rev. Lett. 81 (1998) 1562–1567, [hep-ex/9807003]. * (19) Daya Bay Collaboration, F. An et al., Observation of electron-antineutrino disappearance at Daya Bay, Phys. Rev. Lett. 108 (2012) 171803, [arXiv:1203.1669]. * (20) K2K Collaboration, M. Ahn et al., Indications of neutrino oscillation in a 250 km long baseline experiment, Phys. Rev. Lett. 90 (2003) 041801, [hep-ex/0212007]. * (21) MINOS Collaboration, D. Michael et al., Observation of muon neutrino disappearance with the MINOS detectors and the NuMI neutrino beam, Phys. Rev. Lett. 97 (2006) 191801, [hep-ex/0607088]. * (22) L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D 17 (1978) 2369–2374. * (23) S. Mikheyev and A. Smirnov, Resonance Amplification of Oscillations in Matter and Spectroscopy of Solar Neutrinos, Sov. J. Nucl. Phys. 42 (1985) 913–917. * (24) S. Davidson, C. Pena-Garay, N. Rius, and A. Santamaria, Present and future bounds on nonstandard neutrino interactions, JHEP 03 (2003) 011, [hep-ph/0302093]. * (25) T. Ohlsson, Status of non-standard neutrino interactions, Rept. Prog. Phys. 76 (2013) 044201, [arXiv:1209.2710]. * (26) Y. Farzan and M. Tortola, Neutrino oscillations and Non-Standard Interactions, Front. in Phys. 6 (2018) 10, [arXiv:1710.09360]. * (27) P. Bhupal Dev et al., Neutrino Non-Standard Interactions: A Status Report, arXiv:1907.00991. * (28) K. N. Abazajian et al., Light Sterile Neutrinos: A White Paper, arXiv:1204.5379. * (29) P. Agrawal and V. Rentala, Identifying Dark Matter Interactions in Monojet Searches, JHEP 05 (2014) 098, [arXiv:1312.5325]. * (30) A. Nelson, L. M. Carpenter, R. Cotta, A. Johnstone, and D. 
Whiteson, Confronting the Fermi Line with LHC data: an Effective Theory of Dark Matter Interaction with Photons, Phys. Rev. D 89 (2014), no. 5 056011, [arXiv:1307.5064]. * (31) F. Pobbe, A. Wulzer, and M. Zanetti, Setting limits on Effective Field Theories: the case of Dark Matter, JHEP 08 (2017) 074, [arXiv:1704.00736]. * (32) D. Choudhury, K. Ghosh, and S. Niyogi, Probing nonstandard neutrino interactions at the LHC Run II, Phys. Lett. B 784 (2018) 248–254, [arXiv:1801.01513]. * (33) A. Friedland, M. L. Graesser, I. M. Shoemaker, and L. Vecchi, Probing Nonstandard Standard Model Backgrounds with LHC Monojets, Phys. Lett. B 714 (2012) 267–275, [arXiv:1111.5331]. * (34) K. S. Babu, D. Gonçalves, S. Jana, and P. A. N. Machado, Neutrino Non-Standard Interactions: Complementarity Between LHC and Oscillation Experiments, arXiv:2003.03383. * (35) A. Falkowski, M. González-Alonso, and K. Mimouni, Compilation of low-energy constraints on 4-fermion operators in the SMEFT, JHEP 08 (2017) 123, [arXiv:1706.03783]. * (36) F. J. Escrihuela, M. Tortola, J. W. F. Valle, and O. G. Miranda, Global constraints on muon-neutrino non-standard interactions, Phys. Rev. D 83 (2011) 093002, [arXiv:1103.1366]. * (37) P. Coloma, M. C. Gonzalez-Garcia, M. Maltoni, and T. Schwetz, COHERENT Enlightenment of the Neutrino Dark Side, Phys. Rev. D 96 (2017), no. 11 115007, [arXiv:1708.02899]. * (38) W. Altmannshofer, M. Tammaro, and J. Zupan, Non-standard neutrino interactions and low energy experiments, JHEP 09 (2019) 083, [arXiv:1812.02778]. * (39) K. S. Babu, P. S. B. Dev, S. Jana, and A. Thapa, Non-Standard Interactions in Radiative Neutrino Mass Models, JHEP 03 (2020) 006, [arXiv:1907.09498]. * (40) A. N. Khan and W. Rodejohann, New physics from COHERENT data with an improved quenching factor, Phys. Rev. D 100 (2019), no. 11 113003, [arXiv:1907.12444]. * (41) D. K. Papoulias, T. S. Kosmas, and Y. Kuno, Recent probes of standard and non-standard neutrino physics with nuclei, Front. in Phys. 
7 (2019) 191, [arXiv:1911.00916]. * (42) B. C. Canas, E. A. Garces, O. G. Miranda, A. Parada, and G. Sanchez Garcia, Interplay between nonstandard and nuclear constraints in coherent elastic neutrino-nucleus scattering experiments, Phys. Rev. D 101 (2020), no. 3 035012, [arXiv:1911.09831]. * (43) A. Falkowski, M. González-Alonso, and Z. Tabrizi, Reactor neutrino oscillations as constraints on Effective Field Theory, JHEP 05 (2019) 173, [arXiv:1901.04553]. * (44) OPAL Collaboration, G. Abbiendi et al., Tests of the standard model and constraints on new physics from measurements of fermion pair production at 189-GeV to 209-GeV at LEP, Eur. Phys. J. C 33 (2004) 173–212, [hep-ex/0309053]. * (45) ZEUS Collaboration, J. Breitweg et al., Search for contact interactions in deep inelastic $e^{+}p\to e^{+}X$ scattering at HERA, Eur. Phys. J. C 14 (2000) 239–254, [hep-ex/9905039]. * (46) H1 Collaboration, C. Adloff et al., Search for compositeness, leptoquarks and large extra dimensions in $eq$ contact interactions at HERA, Phys. Lett. B 479 (2000) 358–370, [hep-ex/0003002]. * (47) CMS Collaboration, V. Khachatryan et al., Search for dark matter, extra dimensions, and unparticles in monojet events in proton–proton collisions at $\sqrt{s}=8$ TeV, Eur. Phys. J. C 75 (2015), no. 5 235, [arXiv:1408.3583]. * (48) ATLAS Collaboration, G. Aad et al., Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at $\sqrt{s}=$8 TeV with the ATLAS detector, Eur. Phys. J. C 75 (2015), no. 7 299, [arXiv:1502.01518]. [Erratum: Eur.Phys.J.C 75, 408 (2015)]. * (49) I. Doršner, S. Fajfer, A. Greljo, J. F. Kamenik, and N. Košnik, Physics of leptoquarks in precision experiments and at particle colliders, Phys. Rept. 641 (2016) 1–68, [arXiv:1603.04993]. * (50) M. B. Wise and Y. Zhang, Effective Theory and Simple Completions for Neutrino Interactions, Phys. Rev. D 90 (2014), no. 5 053005, [arXiv:1404.4663]. * (51) M. Lindner, W. 
Rodejohann, and X.-J. Xu, Coherent Neutrino-Nucleus Scattering and new Neutrino Interactions, JHEP 03 (2017) 097, [arXiv:1612.04150]. * (52) W. Rodejohann, X.-J. Xu, and C. E. Yaguna, Distinguishing between Dirac and Majorana neutrinos in the presence of general interactions, JHEP 05 (2017) 024, [arXiv:1702.05721]. * (53) I. Bischer and W. Rodejohann, General Neutrino Interactions at the DUNE Near Detector, Phys. Rev. D 99 (2019), no. 3 036006, [arXiv:1810.02220]. * (54) I. Bischer and W. Rodejohann, General neutrino interactions from an effective field theory perspective, Nucl. Phys. B 947 (2019) 114746, [arXiv:1905.08699]. * (55) A. N. Khan, W. Rodejohann, and X.-J. Xu, Borexino and general neutrino interactions, Phys. Rev. D 101 (2020), no. 5 055047, [arXiv:1906.12102]. * (56) C. Biggio, M. Blennow, and E. Fernandez-Martinez, General bounds on non-standard neutrino interactions, JHEP 08 (2009) 090, [arXiv:0907.0097]. * (57) N. Cabibbo, Unitary Symmetry and Leptonic Decays, Phys. Rev. Lett. 10 (1963) 531–533. * (58) M. Kobayashi and T. Maskawa, CP Violation in the Renormalizable Theory of Weak Interaction, Prog. Theor. Phys. 49 (1973) 652–657. * (59) W. Loinaz, N. Okamura, S. Rayyan, T. Takeuchi, and L. C. R. Wijewardhana, The NuTeV anomaly, lepton universality, and nonuniversal neutrino gauge couplings, Phys. Rev. D 70 (2004) 113004, [hep-ph/0403306]. * (60) KARMEN Collaboration, K. Eitel, Latest results of the KARMEN2 experiment, Nucl. Phys. B Proc. Suppl. 91 (2001) 191–197, [hep-ex/0008002]. * (61) NOMAD Collaboration, P. Astier et al., Final NOMAD results on $\nu_{\mu}\to\nu_{\tau}$ and $\nu_{e}\to\nu_{\tau}$ oscillations including a new search for $\nu_{\tau}$ appearance using hadronic $\tau$ decays, Nucl. Phys. B 611 (2001) 3–39, [hep-ex/0106102]. * (62) NOMAD Collaboration, P. Astier et al., Search for $\nu_{\mu}\to\nu_{e}$ oscillations in the NOMAD experiment, Phys. Lett. B 570 (2003) 19–31, [hep-ex/0306037].
* (63) Particle Data Group Collaboration, P. A. Zyla et al., Review of Particle Physics, PTEP 2020 (2020), no. 8 083C01. * (64) J. Terol-Calvo, M. Tórtola, and A. Vicente, High-energy constraints from low-energy neutrino nonstandard interactions, Phys. Rev. D 101 (2020), no. 9 095010, [arXiv:1912.09131]. * (65) Y. Du, H.-L. Li, J. Tang, S. Vihonen, and J.-H. Yu, Non-standard interactions in SMEFT confronted with terrestrial neutrino experiments, arXiv:2011.14292. * (66) R. J. Hill and O. Tomalak, On the effective theory of neutrino-electron and neutrino-quark interactions, Phys. Lett. B 805 (2020) 135466, [arXiv:1911.01493]. * (67) R. Harnik, J. Kopp, and P. A. Machado, Exploring nu Signals in Dark Matter Detectors, JCAP 07 (2012) 026, [arXiv:1202.6073]. * (68) M. Cadeddu and F. Dordei, Reinterpreting the weak mixing angle from atomic parity violation in view of the Cs neutron rms radius measurement from COHERENT, Phys. Rev. D 99 (2019), no. 3 033010, [arXiv:1808.10202]. * (69) G.-Y. Huang and S. Zhou, Constraining Neutrino Lifetimes and Magnetic Moments via Solar Neutrinos in the Large Xenon Detectors, JCAP 02 (2019) 024, [arXiv:1810.03877]. * (70) I. M. Shoemaker and J. Wyenberg, Direct Detection Experiments at the Neutrino Dipole Portal Frontier, Phys. Rev. D 99 (2019), no. 7 075010, [arXiv:1811.12435]. * (71) D. Aristizabal Sierra, N. Rojas, and M. Tytgat, Neutrino non-standard interactions and dark matter searches with multi-ton scale detectors, JHEP 03 (2018) 197, [arXiv:1712.09667]. * (72) M. Gonzalez-Garcia, M. Maltoni, Y. F. Perez-Gonzalez, and R. Zukanovich Funchal, Neutrino Discovery Limit of Dark Matter Direct Detection Experiments in the Presence of Non-Standard Interactions, JHEP 07 (2018) 019, [arXiv:1803.03650]. * (73) B. Dutta, S. Liao, L. E. Strigari, and J. W. Walker, Non-standard interactions of solar neutrinos in dark matter experiments, Phys. Lett. B 773 (2017) 242–246, [arXiv:1705.00661]. * (74) E. Bertuzzo, F. F. Deppisch, S. Kulkarni, Y. F. 
Perez Gonzalez, and R. Zukanovich Funchal, Dark Matter and Exotic Neutrino Interactions in Direct Detection Searches, arXiv:1701.07443. [Erratum: JHEP 04, 073 (2017)]. * (75) J. B. Dent, B. Dutta, S. Liao, J. L. Newstead, L. E. Strigari, and J. W. Walker, Probing light mediators at ultralow threshold energies with coherent elastic neutrino-nucleus scattering, Phys. Rev. D 96 (2017), no. 9 095007, [arXiv:1612.06350]. * (76) D. G. Cerdeño, M. Fairbairn, T. Jubb, P. A. N. Machado, A. C. Vincent, and C. Bœhm, Physics from solar neutrinos in dark matter direct detection experiments, JHEP 05 (2016) 118, [arXiv:1604.01025]. [Erratum: JHEP 09, 048 (2016)]. * (77) P. Coloma, P. Huber, and J. M. Link, Combining dark matter detectors and electron-capture sources to hunt for new physics in the neutrino sector, JHEP 11 (2014) 042, [arXiv:1406.4914]. * (78) M. Pospelov and J. Pradler, Dark Matter or Neutrino recoil? Interpretation of Recent Experimental Results, Phys. Rev. D 89 (2014), no. 5 055012, [arXiv:1311.5764]. * (79) M. Pospelov and J. Pradler, Elastic scattering signals of solar neutrinos with enhanced baryonic currents, Phys. Rev. D 85 (2012) 113016, [arXiv:1203.0545]. [Erratum: Phys.Rev.D 88, 039904 (2013)]. * (80) J. Kopp, M. Lindner, T. Ota, and J. Sato, Non-standard neutrino interactions in reactor and superbeam experiments, Phys. Rev. D 77 (2008) 013007, [arXiv:0708.0152]. * (81) D. Liu, C. Sun, and J. Gao, Constraints on neutrino non-standard interactions from LHC data with large missing transverse momentum, arXiv:2009.06668. * (82) A. Falkowski, G. Grilli di Cortona, and Z. Tabrizi, Future DUNE constraints on EFT, JHEP 04 (2018) 101, [arXiv:1802.08296]. * (83) S. Pandey, S. Karmakar, and S. Rakshit, Strong constraints on non-standard neutrino interactions: LHC vs. IceCube, JHEP 11 (2019) 046, [arXiv:1907.07700]. * (84) K. N. Deepthi, S. Goswami, and N. Nath, Can nonstandard interactions jeopardize the hierarchy sensitivity of DUNE?, Phys. Rev. D 96 (2017), no. 
7 075023, [arXiv:1612.00784]. * (85) K. N. Deepthi, S. Goswami, and N. Nath, Challenges posed by non-standard neutrino interactions in the determination of $\delta_{CP}$ at DUNE, Nucl. Phys. B 936 (2018) 91–105, [arXiv:1711.04840]. * (86) J. Barranco, O. G. Miranda, C. A. Moura, and J. W. F. Valle, Constraining non-standard interactions in nu(e) e or anti-nu(e) e scattering, Phys. Rev. D 73 (2006) 113001, [hep-ph/0512195]. * (87) J. Barranco, O. G. Miranda, C. A. Moura, and J. W. F. Valle, Constraining non-standard neutrino-electron interactions, Phys. Rev. D 77 (2008) 093014, [arXiv:0711.0698]. * (88) A. Bolanos, O. G. Miranda, A. Palazzo, M. A. Tortola, and J. W. F. Valle, Probing non-standard neutrino-electron interactions with solar and reactor neutrinos, Phys. Rev. D 79 (2009) 113012, [arXiv:0812.4417]. * (89) M. Lei, N. Steinberg, and J. D. Wells, Probing Non-Standard Neutrino Interactions with Supernova Neutrinos at Hyper-K, JHEP 01 (2020) 179, [arXiv:1907.01059]. * (90) A. Esmaili and A. Y. Smirnov, Probing Non-Standard Interaction of Neutrinos with IceCube and DeepCore, JHEP 06 (2013) 026, [arXiv:1304.1042]. * (91) A. Friedland, C. Lunardini, and M. Maltoni, Atmospheric neutrinos as probes of neutrino-matter interactions, Phys. Rev. D 70 (2004) 111301, [hep-ph/0408264]. * (92) A. Friedland and C. Lunardini, A Test of tau neutrino interactions with atmospheric neutrinos and K2K, Phys. Rev. D 72 (2005) 053009, [hep-ph/0506143]. * (93) A. N. Khan and D. W. McKay, $\sin^{2}(\theta)w$ estimate and bounds on nonstandard interactions at source and detector in the solar neutrino low-energy regime, JHEP 07 (2017) 143, [arXiv:1704.06222]. * (94) C. Biggio, M. Blennow, and E. Fernandez-Martinez, Loop bounds on non-standard neutrino interactions, JHEP 03 (2009) 139, [arXiv:0902.0607]. * (95) O. Tomalak, P. Machado, V. Pandey, and R. Plestid, Flavor-dependent radiative corrections in coherent elastic neutrino-nucleus scattering, arXiv:2011.05960. * (96) P. B. 
Denton and J. Gehrlein, A Statistical Analysis of the COHERENT Data and Applications to New Physics, arXiv:2008.06062. * (97) M. Hoferichter, J. Menéndez, and A. Schwenk, Coherent elastic neutrino-nucleus scattering: EFT analysis and nuclear responses, Phys. Rev. D 102 (2020), no. 7 074018, [arXiv:2007.08529]. * (98) COHERENT Collaboration, D. Akimov et al., Observation of Coherent Elastic Neutrino-Nucleus Scattering, Science 357 (2017), no. 6356 1123–1126, [arXiv:1708.01294]. * (99) O. G. Miranda, G. Sanchez Garcia, and O. Sanders, Coherent elastic neutrino-nucleus scattering as a precision test for the Standard Model and beyond: the COHERENT proposal case, Adv. High Energy Phys. 2019 (2019) 3902819, [arXiv:1902.09036]. * (100) TEXONO Collaboration, M. Deniz et al., Constraints on Non-Standard Neutrino Interactions and Unparticle Physics with Neutrino-Electron Scattering at the Kuo-Sheng Nuclear Power Reactor, Phys. Rev. D 82 (2010) 033004, [arXiv:1006.1947]. * (101) A. N. Khan, Global analysis of the source and detector nonstandard interactions using the short baseline $\nu$-e and $\nu$¯-e scattering data, Phys. Rev. D 93 (2016), no. 9 093019, [arXiv:1605.09284]. * (102) A. Ismail, R. Mammen Abraham, and F. Kling, Neutral Current Neutrino Interactions at FASER$\nu$, arXiv:2012.10500. * (103) MOLLER Collaboration, J. Benesch et al., The MOLLER Experiment: An Ultra-Precise Measurement of the Weak Mixing Angle Using M{\o}ller Scattering, arXiv:1411.4088. * (104) N. Berger et al., Measuring the weak mixing angle with the P2 experiment at MESA, J. Univ. Sci. Tech. China 46 (2016), no. 6 481–487, [arXiv:1511.03934]. * (105) Y. Du, A. Freitas, H. H. Patel, and M. J. Ramsey-Musolf, Parity-Violating M{\o}ller Scattering at NNLO: Closed Fermion Loops, arXiv:1912.08220. * (106) I. Esteban, M. Gonzalez-Garcia, M. Maltoni, I. Martinez-Soler, and J. 
Salvado, Updated constraints on non-standard interactions from global analysis of oscillation data, JHEP 08 (2018) 180, [arXiv:1805.04530]. [Addendum: JHEP 12, 152 (2020)]. * (107) Y. Farzan, M. Lindner, W. Rodejohann, and X.-J. Xu, Probing neutrino coupling to a light scalar with coherent neutrino scattering, JHEP 05 (2018) 066, [arXiv:1802.05171]. * (108) J. Billard, J. Johnston, and B. J. Kavanagh, Prospects for exploring New Physics in Coherent Elastic Neutrino-Nucleus Scattering, JCAP 11 (2018) 016, [arXiv:1805.01798]. * (109) D. Aristizabal Sierra, V. De Romeri, and N. Rojas, COHERENT analysis of neutrino generalized interactions, Phys. Rev. D 98 (2018) 075018, [arXiv:1806.07424]. * (110) D. Papoulias and T. Kosmas, COHERENT constraints to conventional and exotic neutrino physics, Phys. Rev. D 97 (2018), no. 3 033003, [arXiv:1711.09773]. * (111) J. B. Dent, B. Dutta, S. Liao, J. L. Newstead, L. E. Strigari, and J. W. Walker, Accelerator and reactor complementarity in coherent neutrino-nucleus scattering, Phys. Rev. D 97 (2018), no. 3 035009, [arXiv:1711.03521]. * (112) J. Liao and D. Marfatia, COHERENT constraints on nonstandard neutrino interactions, Phys. Lett. B 775 (2017) 54–57, [arXiv:1708.04255]. * (113) ALEPH, DELPHI, L3, OPAL, SLD, LEP Electroweak Working Group, SLD Electroweak Group, SLD Heavy Flavour Group Collaboration, S. Schael et al., Precision electroweak measurements on the $Z$ resonance, Phys. Rept. 427 (2006) 257–454, [hep-ex/0509008]. * (114) Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [arXiv:1807.06209]. * (115) SPT-3G Collaboration, B. A. Benson et al., SPT-3G: A Next-Generation Cosmic Microwave Background Polarization Experiment on the South Pole Telescope, Proc. SPIE Int. Soc. Opt. Eng. 9153 (2014) 91531P, [arXiv:1407.2973]. * (116) Simons Observatory Collaboration, P. 
Ade et al., The Simons Observatory: Science goals and forecasts, JCAP 02 (2019) 056, [arXiv:1808.07445]. * (117) CMB-S4 Collaboration, K. N. Abazajian et al., CMB-S4 Science Book, First Edition, arXiv:1610.02743. * (118) CORE Collaboration, E. Di Valentino et al., Exploring cosmic origins with CORE: Cosmological parameters, JCAP 04 (2018) 017, [arXiv:1612.00021]. * (119) NASA PICO Collaboration, S. Hanany et al., PICO: Probe of Inflation and Cosmic Origins, arXiv:1902.10541. * (120) N. Sehgal et al., CMB-HD: An Ultra-Deep, High-Resolution Millimeter-Wave Survey Over Half the Sky, arXiv:1906.10134. * (121) J. J. Bennett, G. Buldgen, M. Drewes, and Y. Y. Wong, Towards a precision calculation of the effective number of neutrinos $N_{\rm eff}$ in the Standard Model I: The QED equation of state, JCAP 03 (2020) 003, [arXiv:1911.04504]. * (122) J. J. Bennett, G. Buldgen, P. F. de Salas, M. Drewes, S. Gariazzo, S. Pastor, and Y. Y. Wong, Towards a precision calculation of $N_{\rm eff}$ in the Standard Model II: Neutrino decoupling in the presence of flavour oscillations and finite-temperature QED, arXiv:2012.02726. * (123) K. Akita and M. Yamaguchi, A precision calculation of relic neutrino decoupling, JCAP 08 (2020) 012, [arXiv:2005.07047]. * (124) M. Escudero, Neutrino decoupling beyond the Standard Model: CMB constraints on the Dark Matter mass with a fast and precise $N_{\rm eff}$ evaluation, JCAP 02 (2019) 007, [arXiv:1812.05605]. * (125) M. Escudero Abenza, Precision early universe thermodynamics made simple: $N_{\rm eff}$ and neutrino decoupling in the Standard Model and beyond, JCAP 05 (2020) 048, [arXiv:2001.04466]. * (126) X. Luo, W. Rodejohann, and X.-J. Xu, Dirac neutrinos and $N_{{\rm eff}}$, JCAP 06 (2020) 058, [arXiv:2005.01629]. * (127) X. Luo, W. Rodejohann, and X.-J. Xu, Dirac neutrinos and $N_{{\rm eff}}$ II: the freeze-in case, arXiv:2011.13059. * (128) K. J. Kelly, M. Sen, and Y. 
Zhang, Intimate Relationship Between Sterile Neutrino Dark Matter and $\Delta N_{\rm eff}$, arXiv:2011.02487. * (129) P. Adshead, Y. Cui, A. J. Long, and M. Shamma, Unraveling the Dirac Neutrino with Cosmological and Terrestrial Detectors, arXiv:2009.07852. * (130) J.-T. Li, G. M. Fuller, and E. Grohs, Probing dark photons in the early universe with big bang nucleosynthesis, JCAP 12 (2020) 049, [arXiv:2009.14325]. * (131) J. Venzor, A. Pérez-Lorenzana, and J. De-Santiago, Bounds on neutrino-scalar non-standard interactions from big bang nucleosynthesis, arXiv:2009.08104. * (132) J. Froustey, C. Pitrou, and M. C. Volpe, Neutrino decoupling including flavour oscillations and primordial nucleosynthesis, JCAP 12 (2020) 015, [arXiv:2008.01074]. * (133) M. Ibe, S. Kobayashi, Y. Nakayama, and S. Shirai, Cosmological Constraint on Vector Mediator of Neutrino-Electron Interaction in light of XENON1T Excess, JHEP 12 (2020) 004, [arXiv:2007.16105]. * (134) XENON Collaboration, E. Aprile et al., Excess electronic recoil events in XENON1T, Phys. Rev. D 102 (2020), no. 7 072004, [arXiv:2006.09721]. * (135) V. Shvartsman, Density of relict particles with zero rest mass in the universe, Pisma Zh. Eksp. Teor. Fiz. 9 (1969) 315–317. * (136) G. Steigman, D. Schramm, and J. Gunn, Cosmological Limits to the Number of Massive Leptons, Phys. Lett. B 66 (1977) 202–204. * (137) G. Mangano, G. Miele, S. Pastor, and M. Peloso, A Precision calculation of the effective number of cosmological neutrinos, Phys. Lett. B 534 (2002) 8–16, [astro-ph/0111408]. * (138) E. W. Kolb and M. S. Turner, The Early Universe, vol. 69. 1990\. * (139) P. F. de Salas and S. Pastor, Relic neutrino decoupling with flavour oscillations revisited, JCAP 07 (2016) 051, [arXiv:1606.06986]. * (140) G. Mangano, G. Miele, S. Pastor, T. Pinto, O. Pisanti, and P. D. Serpico, Relic neutrino decoupling including flavor oscillations, Nucl. Phys. B 729 (2005) 221–234, [hep-ph/0506164]. * (141) S. 
Hannestad, Oscillation effects on neutrino decoupling in the early universe, Phys. Rev. D 65 (2002) 083006, [astro-ph/0111423]. * (142) A. Dolgov, S. Hansen, S. Pastor, S. Petcov, G. Raffelt, and D. Semikoz, Cosmological bounds on neutrino degeneracy improved by flavor oscillations, Nucl. Phys. B 632 (2002) 363–382, [hep-ph/0201287]. * (143) K. Abazajian et al., CMB-S4 Decadal Survey APC White Paper, Bull. Am. Astron. Soc. 51 (2019), no. 7 209, [arXiv:1908.01062]. * (144) CMB-S4 Collaboration, M. H. Abitbol et al., CMB-S4 Technology Book, First Edition, arXiv:1706.02464. * (145) Topical Conveners: K.N. Abazajian, J.E. Carlstrom, A.T. Lee Collaboration, K. Abazajian et al., Neutrino Physics from the Cosmic Microwave Background and Large Scale Structure, Astropart. Phys. 63 (2015) 66–80, [arXiv:1309.5383]. * (146) K. Abazajian et al., CMB-S4 Science Case, Reference Design, and Project Plan, arXiv:1907.04473. * (147) A. Heckler, Astrophysical applications of quantum corrections to the equation of state of a plasma, Phys. Rev. D 49 (1994) 611–617. * (148) N. Fornengo, C. Kim, and J. Song, Finite temperature effects on the neutrino decoupling in the early universe, Phys. Rev. D 56 (1997) 5123–5134, [hep-ph/9702324]. * (149) S. Gariazzo, P. de Salas, and S. Pastor, Thermalisation of sterile neutrinos in the early Universe in the 3+1 scheme with full mixing matrix, JCAP 07 (2019) 014, [arXiv:1905.11290]. * (150) S. Hannestad and J. Madsen, Neutrino decoupling in the early universe, Phys. Rev. D 52 (1995) 1764–1769, [astro-ph/9506015]. * (151) A. Dolgov, S. Hansen, and D. Semikoz, Nonequilibrium corrections to the spectra of massless neutrinos in the early universe, Nucl. Phys. B 503 (1997) 426–444, [hep-ph/9703315]. * (152) A. Dolgov, S. Hansen, and D. Semikoz, Nonequilibrium corrections to the spectra of massless neutrinos in the early universe: Addendum, Nucl. Phys. B 543 (1999) 269–274, [hep-ph/9805467]. * (153) J. Birrell, C.-T. Yang, and J. 
Rafelski, Relic Neutrino Freeze-out: Dependence on Natural Constants, Nucl. Phys. B 890 (2014) 481–517, [arXiv:1406.1759]. * (154) I. M. Oldengott, T. Tram, C. Rampf, and Y. Y. Wong, Interacting neutrinos in cosmology: exact description and constraints, JCAP 11 (2017) 027, [arXiv:1706.02123]. * (155) I. M. Oldengott, C. Rampf, and Y. Y. Y. Wong, Boltzmann hierarchy for interacting neutrinos I: formalism, JCAP 04 (2015) 016, [arXiv:1409.1577]. * (156) E. Grohs, G. M. Fuller, C. T. Kishimoto, M. W. Paris, and A. Vlasenko, Neutrino energy transport in weak decoupling and big bang nucleosynthesis, Phys. Rev. D 93 (2016), no. 8 083522, [arXiv:1512.02205]. * (157) R. Yunis, C. R. Argüelles, and D. López Nacir, Boltzmann hierarchies for self-interacting warm dark matter scenarios, JCAP 09 (2020) 041, [arXiv:2002.05778]. * (158) C. D. Kreisch, F.-Y. Cyr-Racine, and O. Doré, Neutrino puzzle: Anomalies, interactions, and cosmological tensions, Phys. Rev. D 101 (2020), no. 12 123505, [arXiv:1902.00534]. * (159) S. Esposito, G. Miele, S. Pastor, M. Peloso, and O. Pisanti, Nonequilibrium spectra of degenerate relic neutrinos, Nucl. Phys. B 590 (2000) 539–561, [astro-ph/0005573]. * (160) J. Froustey and C. Pitrou, Incomplete neutrino decoupling effect on big bang nucleosynthesis, Phys. Rev. D 101 (2020), no. 4 043524, [arXiv:1912.09378]. * (161) A. Fradette, M. Pospelov, J. Pradler, and A. Ritz, Cosmological beam dump: constraints on dark scalars mixed with the Higgs boson, Phys. Rev. D 99 (2019), no. 7 075004, [arXiv:1812.07585]. * (162) T. Kinoshita, Mass singularities of Feynman amplitudes, J. Math. Phys. 3 (1962) 650–677. * (163) T. Lee and M. Nauenberg, Degenerate Systems and Mass Singularities, Phys. Rev. 133 (1964) B1549–B1562. * (164) C. Frye, H. Hannesdottir, N. Paul, M. D. Schwartz, and K. Yan, Infrared Finiteness and Forward Scattering, Phys. Rev. D 99 (2019), no. 5 056015, [arXiv:1810.10022]. * (165) CMS Collaboration, S. 
Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC, Phys. Lett. B716 (2012) 30–61, [arXiv:1207.7235]. * (166) A. G. Riess, S. Casertano, W. Yuan, L. M. Macri, and D. Scolnic, Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics beyond $\Lambda$CDM, Astrophys. J. 876 (2019), no. 1 85, [arXiv:1903.07603]. * (167) T. Brinckmann, J. H. Chang, and M. LoVerde, Self-interacting neutrinos, the Hubble parameter tension, and the Cosmic Microwave Background, arXiv:2012.11830. * (168) S. Roy Choudhury, S. Hannestad, and T. Tram, Updated constraints on massive neutrino self-interactions from cosmology in light of the $H_{0}$ tension, arXiv:2012.07519. * (169) A. Das and S. Ghosh, Flavor-specific Interaction Favours Strong Neutrino Self-coupling, arXiv:2011.12315. * (170) G.-y. Huang and W. Rodejohann, Solving the Hubble tension without spoiling Big Bang Nucleosynthesis, arXiv:2102.04280. * (171) Borexino Collaboration, M. Agostini et al., Limiting neutrino magnetic moments with Borexino Phase-II solar neutrino data, Phys. Rev. D 96 (2017), no. 9 091103, [arXiv:1707.09355]. * (172) H. H. Patel, Package-X 2.0: A Mathematica package for the analytic calculation of one-loop integrals, Comput. Phys. Commun. 218 (2017) 66–70, [arXiv:1612.00009]. * (173) A. Dolgov, Neutrinos in cosmology, Phys. Rept. 370 (2002) 333–535, [hep-ph/0202122].
# Fast Non-line-of-sight Imaging with Two-step Deep Remapping Dayu Zhu Georgia Institute of Technology <EMAIL_ADDRESS>Wenshan Cai Georgia Institute of Technology <EMAIL_ADDRESS> ###### Abstract Conventional imaging only records photons sent directly from the object to the detector, while non-line-of-sight (NLOS) imaging takes indirect light into account. Most NLOS solutions employ a transient scanning process, followed by a physics-based algorithm to reconstruct the NLOS scenes. However, transient detection requires sophisticated apparatus, with long scanning times and low robustness to the ambient environment, and the reconstruction algorithms are typically time-consuming and computationally expensive. Here we propose a new NLOS solution to address the above defects, with innovations in both equipment and algorithm. We apply an inexpensive commercial Lidar for detection, with much higher scanning speed and better compatibility with real-world imaging. Our reconstruction framework is deep learning based, with a generative two-step remapping strategy to guarantee high reconstruction fidelity. The overall detection and reconstruction process allows for millisecond responses, with reconstruction precision at the millimeter level. We have experimentally tested the proposed solution on both synthetic and real objects, and further demonstrated our method to be applicable to full-color NLOS imaging. ## 1 Introduction Imaging is a seemingly mature technique, developed over centuries, that is employed to record real-world scenes. One central task of imaging is to record the distribution of photons which are emitted from or scattered by the target and further received by the camera. Apart from planar retrievals, i.e., pictures, people also strive to extract the depth or phase information of objects and reconstruct three-dimensional scenes, a thriving field of research in computer vision [45, 17].
Most imaging and reconstruction methods can only recover information in the line-of-sight (LOS) field of the camera, which implies that there should be no obstacle along the direct light path connecting the object and the camera. Otherwise, the light from the object will be reflected or deflected, which alters the intensity and directionality of the light and inevitably impedes and scrambles the original photon distribution. As a result, the object is ‘hidden’ from plain sight and becomes non-line-of-sight (NLOS) for observation, and the scrambled information is usually too weak and has to be treated as noise in imaging. Figure 1: Illustration of non-line-of-sight (NLOS) imaging. The light path between the object and the camera is blocked by an occluder, so the camera cannot directly capture information about the object. With the aid of a wall or a secondary object which scatters the photons from the NLOS object to the camera, NLOS imaging may be achieved. The Lidar actively probes the NLOS scene, and information from the reflected light is used to reconstruct the NLOS object. Although the NLOS information is greatly distorted, it is still possible to recover the original photon distribution and achieve NLOS imaging and reconstruction [18]. Numerous attempts have been made to analytically decode the NLOS information from captured pictures [9, 2, 5, 4, 7, 8]. Despite being effective in some cases, the LOS and NLOS data are greatly entangled on the image plane and there is no guarantee that they can be reasonably separated. Thus, instead of passively decoding the images, an alternative approach is to actively probe the NLOS scene [50, 36, 52, 20, 39, 49, 24, 1, 48, 26]. This active method provides much more information than a passive image and a much higher degree of freedom for NLOS imaging and reconstruction.
This genre usually applies a laser to scan the scene spatially, and a photon detector is used to detect the transient events of back-scattered photons. The NLOS reconstruction is then achieved by analyzing the back-projection conditions of the transients or through other geometric optical transformations [3, 50]. The algorithms may proceed iteratively until the reconstruction converges. The active scheme can achieve high-fidelity NLOS reconstruction; meanwhile, some major shortcomings hinder the universal application of this NLOS imaging approach. On the physical level, laser scanning and transient detection processes are necessary. The laser scanning process may take minutes to hours with a lab-built scanning system, which is not applicable to real-time scenarios, and the ultrafast lasers are expensive and may not satisfy safety standards for out-of-lab applications. As for the detection equipment, transient detection requires lab-built optical systems with sophisticated apparatus, such as single-photon avalanche diodes (SPADs), ultrafast photon counting modules, streak cameras, etc. [21, 6, 27]. The overall equipment settings are highly complicated and vulnerable to ambient perturbations, and thus more suitable for ultrafast optical experiments than real-world imaging scenes. At the algorithm level, transient detection collects a huge amount of data, and the reconstruction process deals with matrices or tensors of enormous size. These properties lead to high consumption of memory, storage and computational resources, and the reconstruction time hardly fulfills the expectation of real-time NLOS imaging and reconstruction. To address these pressing challenges, in this paper we introduce a novel methodology for real-time NLOS imaging. Instead of using sophisticated devices for transient experiments, we employ a low-end Lidar as both the probe and the detector [51].
Compared to in-lab scanning systems with an ultrafast laser and SPAD, a Lidar is commercially available at a much lower cost (only a few hundred dollars), with substantially faster operation speed (scanning time at the millisecond level) and strong robustness to environmental factors. Rather than relying on transient time-of-flight (ToF) information of photons, we utilize the depth map and intensity map collected by the Lidar to achieve real-time, full-color NLOS imaging. Our algorithm is deep learning based and consists of two neural networks: a compressor and a generator [33]. Some pioneering works have introduced deep learning into NLOS imaging and revealed the promising future of deep-learning-assisted NLOS imaging [13, 15, 35, 37, 2]. The reconstruction speeds of deep learning approaches are inherently much faster than those of geometric physics-based methods, with better versatility and generalization abilities. The reported methods are mostly built on supervised learning and try to establish a direct mapping between the detected data and the target NLOS scenes. However, in sharp contrast to most computer vision tasks, in NLOS reconstruction the input data and output scenes share almost no common geometric features (such as edges or shapes), so the supervised mapping between the input and the label (i.e. the ground truth NLOS scene) may not be well learnt by a conventional convolutional neural network (CNN) [34, 32]. Besides, generating high-dimensional data with explicit semantics, such as images or 3D scenes, is never a closed-form problem, and a generative model such as a generative adversarial network (GAN) or variational autoencoder (VAE) is necessary [22, 30]. Therefore, the reconstruction performance of these reported methods is limited by the direct mapping with supervised learning, exhibiting blurred reconstructions and false retrieval of complicated objects.
In this work we propose a deep learning framework that consists of two components and achieves NLOS reconstruction in two steps. The generator is the decoder of a trained VAE responsible for generating various scenes, and the CNN compressor converts the detected depth and intensity information into a low-dimensional latent space for the generator. When the Lidar collects new depth and intensity maps, the compressor compresses them into a latent vector, then the generator recovers the NLOS scene by decoding the vector. This methodology overcomes the flaws of previous deep learning approaches, which were based on a single network and trained in a supervised fashion. Our framework is capable of achieving state-of-the-art reconstruction performance, with strong robustness to the ambient surroundings and to the position, gesture and viewpoint of the NLOS objects. We have demonstrated the functionality of our approach on a synthetic dataset, then applied transfer learning and verified the performance through real-world experiments. To illustrate that our approach is not limited to the reconstruction of geometric features of NLOS scenes, we further experimentally demonstrate the effectiveness of our method for the recovery of full-color information. The methodology presented here potentially offers a paradigm shift for applications of real-time NLOS imaging, such as driving assistance, geological prospecting, biomedical imaging, animal behavioral experiments, and many more [31, 47, 46, 20, 11, 29, 53, 40, 38]. ## 2 Methods ### 2.1 Lidar-based detection The typical NLOS imaging scenario is depicted in Fig. 1. When an obstacle blocks the direct light path between the object and the camera, conventional LOS imaging fails to function, as the detector receives no direct information from the target.
However, another object or a wall between the camera and the target may serve as a ‘relay station’ and provide an indirect light path for NLOS imaging: a photon sent from the target object can be reflected by the relay object and further collected by the camera. After that, information from both the target and the relay objects is embedded in the photon distribution received by the camera. To disentangle the two sources of data, we need to actively probe the surroundings to extract data under different perturbations. Probing is usually conducted by sending a laser into the field of view (FOV) to scan the environment and then collecting the perturbed photon distributions. In this work we use a low-cost Lidar (Intel RealSense; the functioning details of the Lidar are provided in the Supplementary Material) as both the laser source and the camera. The laser beam of the Lidar scans the FOV of 70 by 55 degrees, then the detector on it sequentially collects the photons reflected from the surrounding objects. Based on the ToF information of the received photons, the on-chip processor of the Lidar infers a depth map and a light intensity map. In general cases (Fig. 2a), most of the photons received by the Lidar are the ones directly reflected back, traveling along the reverse path. There are also photons deflected by the wall in different directions, further scattered by surrounding objects and finally collected by the Lidar. However, the light intensities along these multi-reflection pathways will be much lower than that of the direct-reflection light. In this case, the Lidar counts the travel time of the direct-reflection photons, and the calculated depth represents the orthogonal distance between the Lidar plane and the incidence point on the wall, which is typical of regular Lidar operation. Figure 2: Two cases of Lidar detection (top view).
(a) Generally, when the light directly reflected from the wall is far stronger than the light experiencing multi-reflection, the Lidar will report the correct depth as the actual distance between itself and the wall. (b) If the total intensity of the beams undergoing equal-length multi-reflection is more intense than the direct-reflected light, the Lidar will store the intensity of the multi-reflected light and treat the multi-reflection optical path length as the direct-reflection one, which leads to a ‘fake detected distance’. Figure 3: (a) Algorithm workflow and (b) experimental setup for NLOS imaging. (a) The NLOS reconstruction framework consists of two neural networks, a generator and a compressor. The reconstruction process is two-step: the compressor compresses the detected data into a latent vector, and the generator decodes the latent vector into a depth map of the NLOS object. The compressor is adapted from MobileNetV2, and the generator is the decoder of a variational autoencoder (VAE) with ResNet blocks. The input contains the depth and intensity maps detected by the Lidar, stacked as a 4-channel tensor with channel-wise normalization and Gaussian noise. (b) The experimental setup for NLOS imaging used in this work. In another case (Fig. 2b), if the incidence angle is very oblique (>45 degrees) and the point of illumination on the wall is close to the target object (<0.3 m in our cases), the multi-reflection paths may overshadow the direct-reflection path. Considering the three-bounce multi-reflection cases (Lidar-wall-object-wall-Lidar), there are numerous possible three-bounce light paths with the same optical path length, and their total light intensity can be stronger than that of the direct reflection. As a result, the Lidar will report the distance as half of the multi-reflection optical path length, i.e., the NLOS distance, rather than the direct distance.
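The two detection cases above reduce to one rule: the Lidar reports half of the total optical path length as the depth. The following sketch makes this numerical; the geometry values (a 1.0 m Lidar-wall distance and a 0.25 m wall-object leg) are hypothetical, chosen only to illustrate how a ‘fake detected distance’ arises.

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def detected_distance(round_trip_time_s):
    """A Lidar reports half of the total optical path length as depth."""
    return C * round_trip_time_s / 2.0

# Case (a): direct reflection off a wall 1.0 m away.
t_direct = 2 * 1.0 / C
d_direct = detected_distance(t_direct)   # correct depth: 1.0 m

# Case (b): three-bounce path Lidar-wall-object-wall-Lidar,
# with 1.0 m Lidar-wall legs and 0.25 m wall-object legs.
t_multi = (1.0 + 0.25 + 0.25 + 1.0) / C
d_fake = detected_distance(t_multi)      # 'fake' NLOS distance: 1.25 m
```

When the multi-reflection intensity dominates, the 1.25 m reading overwrites the true 1.0 m wall depth at that pixel, which is exactly the distortion the remapping framework is trained to invert.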
Although these so-called ‘multipath artifacts’ are undesired in conventional Lidar applications, they are favorable for our NLOS imaging [19]. In the common case shown in Fig. 2a, the Lidar provides the correct depth and intensity, and this information represents the geometric and reflection features of the relay wall, which also implies the position of the Lidar relative to the wall. In the NLOS case (Fig. 2b), by contrast, even though the photons received by the Lidar contain information about the actual light intensities and optical paths, the Lidar incorrectly maps them to ‘fake’ locations. In this way, the ground truth photon distribution that reveals the NLOS information is distorted and overshadowed by the Lidar mechanism. Our methodology focuses on redistributing the intensity and depth maps to extract the original NLOS information and thus achieve imaging and reconstruction in a highly efficient and faithful manner. ### 2.2 Two-step deep remapping framework Since the redistributing or remapping function varies with different NLOS scenes, here we apply deep learning to learn the connection between the ground truth NLOS objects and the detected maps of depth and intensity. Pioneering works have introduced supervised learning, applying deep neural networks (NNs, such as U-Net [42]) to NLOS imaging. However, the remapping task has two intrinsic requirements: (1) transform the collected information into NLOS representations, and (2) generate NLOS scenes with clean semantics [14]. As a result, a regression-oriented NN trained in the supervised fashion cannot simultaneously meet these two requirements with decent performance. In contrast, here we propose a reconstruction framework consisting of two networks, each fulfilling one requirement: a compressor transforms the detected depth and intensity maps into a latent vector, and a generator decodes the latent vector into the predicted NLOS scene. 
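At inference time, the two-step data flow just described (detected maps → latent vector → NLOS depth map) can be sketched as follows. The real compressor is adapted from MobileNetV2 and the real generator is a Res-VAE decoder; here both are replaced by dummy stand-in functions so that only the pipeline shape is shown, under assumed dimensions taken from later sections (256-dimensional latent, 64 by 64 output).

```python
# Shape-level sketch of the two-step reconstruction pipeline. The stand-in
# bodies are dummies; only the data flow mirrors the framework in Fig. 3a.

def compressor(depth_intensity_maps):        # stand-in for MobileNetV2
    """Compress the detected maps into a 256-dimensional latent vector."""
    flat = [v for row in depth_intensity_maps for v in row]
    return [sum(flat) / len(flat)] * 256     # dummy latent of fixed length

def generator(latent):                       # stand-in for the Res-VAE decoder
    """Decode a latent vector into a 64x64 depth map of the NLOS object."""
    return [[latent[0]] * 64 for _ in range(64)]  # dummy 64x64 output

# Two-step inference: detected maps -> latent -> predicted NLOS depth map.
maps = [[0.1, 0.2], [0.3, 0.4]]
z = compressor(maps)
depth_map = generator(z)
assert len(z) == 256 and len(depth_map) == 64 and len(depth_map[0]) == 64
```

The key design point is that the compressor never has to synthesize the high-dimensional scene itself; it only has to land in the right region of the latent space, and the pretrained generator handles the synthesis.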
With the sequential functions of the two components, the algorithm can reconstruct NLOS scenes with high fidelity, comparable to the performance of the state-of-the-art transient-based methods, with a processing time of a few milliseconds. The pipeline of the framework is depicted in Fig. 3a. The generator is the decoder of a VAE. Like GANs, VAEs are powerful tools for scene and image synthesis, and they offer better control over sampling of the latent space. A typical CNN-based VAE consists of an encoder and a decoder with mirrored structures. The VAE applied here is composed of ResNet blocks to improve the resolving power and training effectiveness, and we denote it as Res-VAE [23]. Specifically, the encoder down-samples an input image $x$ (64 by 64 in our experiments) to a 256-dimensional latent vector $z$, which means the encoder learns the posterior $p(z|x)$, and $z$ is reparameterized to force its distribution $p(z)$ to be a standard Gaussian distribution. Further, the decoder ($p(x'|z)$) expands $z$ to a reconstructed image $x^{\prime}$. Training minimizes the difference between $x$ and $x^{\prime}$ (reconstruction loss), as well as the distance between $p(z)$ and a standard Gaussian distribution (Kullback-Leibler divergence). Once training is finished, the decoder is able to generate images similar to the ones in the training dataset, with latent vectors randomly sampled from the standard Gaussian distribution. In our case, the input $x$ is the depth map of the NLOS scene relative to the wall, which corresponds to the perpendicular distances between the wall and the elements of the point clouds of NLOS objects. Since the mapping between $x$ and $z$ is mostly one-to-one, the reconstruction no longer requires transforming the detected information $y$ into the high-dimensional $x$. 
Instead, it only needs to compress $y$ to a low-dimensional vector $z$, which can be accomplished by a CNN-based compressor. The role of the compressor is standard in computer vision: it takes images as input and outputs predicted labels. Here the compressor is adapted from a lightweight network, MobileNetV2, to achieve good performance and avoid overfitting [43]. The input of the compressor is the depth-and-intensity map detected by the Lidar, and the output predicts the latent vectors of the target NLOS depth map encoded by the pretrained Res-VAE. ## 3 Experiments To validate the efficacy of the Lidar-based approach and the deep learning reconstruction algorithm, we have conducted four categories of experiments. The first is based on a synthetic dataset, and the second is on real objects with transfer learning applied. We further demonstrate the power of our method for full-color NLOS imaging, with experiments on images of everyday objects and complex scenes, respectively. Figure 4: The NLOS reconstruction experiment on a synthetic dataset. The reconstruction results of randomly selected everyday items in the test set are presented. The first row shows the ground truth depth maps of the items, and the second row shows the reconstructed depth maps. The last row presents the inputs for the NLOS reconstruction, which are the depth and intensity maps simulated by our NLOS renderer. Since we assume the NLOS objects are situated on the right of the Lidar, the left halves of the depth and intensity maps are almost identical for different items. Therefore, here we only present the right halves of these maps. For each case, the left image represents the depth map, and the right one the intensity map. ### 3.1 Experiment on a synthetic dataset NLOS renderer. 
We first train and test the functionality of our framework on a synthetic dataset, and the weights can be further optimized when applied to real-world experiments through transfer learning. The synthetic dataset is composed of the NLOS depth maps and the corresponding simulated depth and intensity maps as the Lidar would detect them. Consequently, an NLOS renderer is necessary to simulate the behavior of the Lidar [28]. Our rendering framework is based on three-bounce simulation. For each incidence, if there is a multi-reflection optical path length whose total intensity (the sum of the intensities of all three-bounce light paths with the same optical path length) is greater than the direct-reflection intensity, the NLOS depth and intensity are recorded. Otherwise, it is a common LOS case. The implementation details of the renderer are described in the Supplementary Material. The final depth and intensity maps are 80 by 64 pixels. Since the renderer uses matrix operations and multiprocessing, rendering one object takes only seconds, which is much more efficient than traditional ray-tracing renderers. The targets for rendering are around 30,800 depth maps of everyday objects (cars, tables, lamps, boats, chairs, etc.) from ShapeNet [12, 54]. To improve the variance of the dataset, the objects are situated at various altitudes, angles, and distances from the wall. Figure 5: The NLOS reconstruction experiment on real objects. The first row displays pictures of the actual objects used in this experiment, and the second row corresponds to the depth labels for training. The reconstructed depth maps are shown in the third row. The last row presents the inputs for the NLOS reconstruction, which are the depth and intensity maps collected by the Lidar (right halves only, as explained before). Network training. Once the dataset is formed, we train the neural networks in our framework. 
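Before detailing the training schedule, the Res-VAE objective from Section 2.2 (a reconstruction term plus the Kullback-Leibler divergence between a diagonal-Gaussian posterior and the standard Gaussian) can be made concrete with a short sketch. This is a hedged pure-Python illustration with scalar lists standing in for image tensors; the function name and the $\beta$ weight are assumptions, not the paper's exact implementation.

```python
import math

# Illustrative VAE objective: mean-squared reconstruction error plus the
# closed-form KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian posterior.

def vae_loss(x, x_recon, mu, log_var, beta=1.0):
    # Reconstruction loss between input x and reconstruction x'.
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)
    # KL divergence term: 0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1).
    kl = 0.5 * sum(m * m + math.exp(lv) - lv - 1.0 for m, lv in zip(mu, log_var))
    return recon + beta * kl

# A perfect reconstruction with a standard-Gaussian posterior gives zero loss.
assert vae_loss([1.0, 2.0], [1.0, 2.0], [0.0, 0.0], [0.0, 0.0]) == 0.0
```

As noted below, the KL term need not be driven very low in this application, which corresponds to choosing a small effective $\beta$ once the compressor takes over the role of inferring the latent distribution.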
The Res-VAE needs to be trained first, since we need to store the latent vectors corresponding to the depth maps in a dictionary. During the training of the Res-VAE, the Kullback-Leibler divergence need not be suppressed to a very low level. Since the compressor will infer the prior distribution $p(z)$ through $p(z|y)$, it is not critical that $p(z)$ be strictly Gaussian. Once the latent vectors are obtained, the compressor is trained to map the rendered depth and intensity maps to the latent vectors. As illustrated in Figs. 1 and 3b, we assume the Lidar is located on the left of the object, so the left halves of the depth and intensity maps mostly denote the LOS information of the wall, while the right parts encode the information of both the wall and the NLOS objects. The right halves of the depth maps correspond to larger distances, thereby leading to weaker intensities. To prevent the subtle signals on the right from being flooded by the overwhelming information on the left, we split the maps into left and right halves and stack them into a 4-channel input image (ordered as right-depth, right-intensity, left-depth, left-intensity), with each channel normalized independently, as illustrated in Fig. 3(a). Moreover, the left-depth and left-intensity channels are almost the same across data points, since they mostly document LOS information. Hence, we introduce Gaussian noise into these two channels to add randomness and prevent overfitting. The 30,800 data points are randomly split into a training dataset of 24,600 and a test dataset of 6,200. The details of the training process and the structures of the generator and the compressor are provided in the Supplementary Material. Reconstruction performance. With both the generator and the compressor trained, we can predict the NLOS scenes from the captured depth and intensity maps. 
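The channel splitting, stacking, per-channel normalization, and noise injection described above can be sketched as follows. This is an illustrative pure-Python version with nested lists standing in for tensors; the helper names and the noise level are assumptions, not the actual implementation.

```python
import random

# Sketch of the input preprocessing: split the depth and intensity maps into
# left and right halves, stack them as 4 channels (right-depth, right-intensity,
# left-depth, left-intensity), normalize each channel independently, and add
# Gaussian noise to the two nearly constant left channels.

def normalize(channel):
    """Min-max normalize one channel independently of the others."""
    lo, hi = min(map(min, channel)), max(map(max, channel))
    scale = (hi - lo) or 1.0
    return [[(v - lo) / scale for v in row] for row in channel]

def make_input(depth, intensity, noise_std=0.01, rng=random.Random(0)):
    w = len(depth[0]) // 2
    halves = [
        [row[w:] for row in depth],      # right-depth (carries NLOS signal)
        [row[w:] for row in intensity],  # right-intensity
        [row[:w] for row in depth],      # left-depth (mostly LOS)
        [row[:w] for row in intensity],  # left-intensity
    ]
    channels = [normalize(c) for c in halves]
    for c in channels[2:]:               # noise only on the left channels
        for row in c:
            for j in range(len(row)):
                row[j] += rng.gauss(0.0, noise_std)
    return channels

depth = [[0.5, 0.5, 1.0, 2.0]] * 2
intensity = [[9.0, 9.0, 3.0, 1.0]] * 2
x = make_input(depth, intensity)
assert len(x) == 4 and len(x[0][0]) == 2  # 4 channels, each half-width
```

Normalizing each half separately keeps the weak right-half signal from being rescaled against the bright left-half LOS return, which is the point of the split.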
The NLOS reconstruction results for different categories of ShapeNet items from the test dataset are plotted in Fig. 4. The first two rows are the ground truth depth maps and the reconstructed results, respectively. The third row of the figure presents the rendered depth and intensity maps used as input for the reconstruction framework. Since their left halves are almost identical and indistinguishable to the eye, we only show their right halves for each case. Although the examples are randomly chosen and span a wide range of shapes, the reconstruction algorithm recovers their significant features. This experiment demonstrates the high quality of NLOS imaging achieved by the method developed in this work. ### 3.2 Real-world NLOS reconstruction Having confirmed the performance of our NLOS imaging method on the synthetic dataset, we further apply it to real-world NLOS scenes. In real-world applications we usually cannot collect enough detections to generate a large training dataset, so we take the networks pretrained on the synthetic dataset and continue training them on a relatively small real-world dataset via transfer learning. We use 54 3D-printed target objects, which are models of common objects such as buses, tables, and pianos. The experimental environment is presented in Fig. 3b, with the objects positioned at 9 different locations with 5 rotational angles (rotation only for non-circular items), generating a dataset of around 1,900 NLOS cases. Although the generator does not need to be retrained, the compressor trained on the synthetic dataset is not directly applicable to the real-world environment. Moreover, such a tiny dataset (randomly split into 1,600 training and 300 test cases) is not compatible with a large-scale network due to inevitable overfitting, while a simple network is unable to handle the task. 
In this case, we leverage transfer learning to mitigate the training dilemma and transfer the knowledge that the compressor has learnt from the synthetic dataset to the real-world dataset. The target labels for the compressor are the latent vectors of the virtual depth maps corresponding to the 1,900 scenes. Since the compressor has been previously trained on the synthetic dataset, we continue to train it on the real-world training dataset for several epochs until the training loss and test loss are about to diverge. Next, we freeze all the network weights except for the last linear layer, and train it until the loss becomes steady. The entire transfer learning process takes only a couple of minutes, and the reconstruction performance on the test dataset is displayed in Fig. 5. Each column presents one example, and the four rows are pictures of the actual objects, the virtual depth labels, the reconstructed depth maps, and the right halves of the detected depth and intensity maps, respectively. It is worth noting that our methodology is able to reconstruct sufficient geometric and positional information of the NLOS objects; moreover, if there is nothing behind the occluder, the framework does not produce a false-positive NLOS result. Certain details are not fully recovered (such as the motorbike), which is expected, since some of the reflected light cannot be captured by the Lidar and some loss of information is therefore inevitable. Nevertheless, the reconstruction quality is comparable or superior to that of the state-of-the-art methods, with much lower requirements on equipment and ambient environment along with much faster imaging speed. Since the imaging scale and data format differ from those of other methods, a direct quantitative comparison is not possible; a qualitative comparison and error analysis are provided in the Supplementary Material. 
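The two-phase transfer-learning schedule above (brief full fine-tuning, then training only the last linear layer) can be illustrated with a toy parameter dictionary. The parameter names and learning rate here are purely illustrative stand-ins for the real compressor, not its actual structure.

```python
# Toy illustration of the freeze-then-train schedule. A flat dict of named
# parameters stands in for the compressor network.

params = {"conv1.w": 1.0, "block2.w": 2.0, "last_linear.w": 3.0}
trainable = {name: True for name in params}   # phase 1: fine-tune everything

def sgd_step(grads, lr=0.5):
    """One gradient step that skips frozen parameters."""
    for name, g in grads.items():
        if trainable[name]:
            params[name] -= lr * g

# Phase 2: freeze all weights except the last linear layer.
for name in trainable:
    trainable[name] = name.startswith("last_linear")

sgd_step({"conv1.w": 1.0, "block2.w": 1.0, "last_linear.w": 1.0})
assert params["conv1.w"] == 1.0        # frozen, unchanged
assert params["last_linear.w"] == 2.5  # only the last layer was updated
```

In a deep-learning framework the same effect is typically achieved by disabling gradient computation on all but the final layer's parameters before the second training phase.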
The average accuracy of the reconstructed depth is 98.27$\%$ (root mean square error of 4.68 mm; see the Supplementary Material for the metric definition). This performance indicates that our methodology is applicable to small-scale NLOS scenarios that require good precision and fast speed, such as biomedical imaging, animal behavioral surveillance, etc. If we further use an open-source Lidar that allows access to detailed ToF information, the NLOS signal can be captured even when it is weaker than the LOS signal. In that case, the imaging will not be limited to situations where multipath artifacts occur. Moreover, the maps acquired by the Lidar will indicate the position of the Lidar and the bidirectional reflectance distribution function (BRDF) of the wall, which serves for calibration and makes the model readily transferable to other imaging setups (such as positional displacement of the Lidar, or material changes of the wall or objects) via meta-learning or phase compensation. As all deep learning algorithms are data-driven, we will enlarge our dataset in future work, and setup-independent NLOS imaging will also be formalized. Still, since the model has learnt the inherent physics of photon remapping, it will be able to recover some NLOS features even if the imaging condition is altered or the target is far from the distribution of the dataset. ### 3.3 Full-color NLOS imaging NLOS reconstruction mostly refers to the reconstruction of the geometric features of NLOS objects, while regular LOS imaging also captures and records color information. Here we go further and demonstrate the capacity of our methodology to recover NLOS scenes with full color information. Most commercial Lidars utilize infrared lasers to perceive the surrounding environment. However, it is impossible to detect color information in the visible wavelength range with an infrared laser at a single wavelength. 
One promising solution is to build an optical system with red, green and blue (RGB) lasers as the scanning source [13]. Sticking to commercially available devices, however, we demonstrate that a Lidar and a camera can cooperatively recover the color images displayed in the NLOS scenes. Since the Lidar is equipped with a camera, we do not need to add additional detectors or components to the system. The experimental setup for full-color NLOS imaging is illustrated in Fig. 6. A screen displays images but is invisible to the camera; we aim to recover the color images from the light projected onto the wall. In this experiment, the reconstruction objective is not single-channel depth maps, but three-channel RGB images instead [44]. The algorithm is similar to the previous one, as depicted in Fig. 6a. The training dataset is composed of RGB images of everyday objects. Each time the screen displays a new image, the camera captures an image of the light spot on the wall, and the Lidar also records the depth map. We have collected 48,800 data points, 43,000 of which are grouped into a training set. During the training phase, the Res-VAE is trained to generate RGB images of the objects, and the latent vectors of the images in the training set are documented. As for the compressor, the input is a 4-channel tensor, which is stacked from the RGB image collected by the camera and the depth map detected by the Lidar. Here we add random Gaussian noise to the depth maps to introduce variance. The depth maps would play a larger role if the angle and position of the Lidar were dynamic, which is left to future research. To illustrate that the functionality of our NLOS framework is not limited to specific neural networks, here the compressor is adapted from ResNet50, which is also more compatible with the large-scale dataset with RGB information. The reconstructed RGB images of different categories of everyday items are displayed in Fig. 
7, with the original images, reconstructed images, and RGB light intensity maps arranged from the top row to the bottom. The geometric features and color information are retained to a great extent, which validates the capacity of the proposed approach. Provided an RGB scanning light source is introduced, we will be able to recover both the shape and color of nonluminous objects. Figure 6: (a) Algorithm workflow and (b) experimental setup for full-color NLOS imaging. (a) The algorithm is essentially the same as the one presented in Fig. 3, while the input here is the concatenated tensor of the captured RGB light intensity map and the depth map with Gaussian noise. The reconstruction objective is to obtain the full-color image. (b) When the screen displays an image, the device is able to detect the depth map along with the RGB light intensity map projected on the wall, which will be used for full-color NLOS imaging. Figure 7: Full-color NLOS reconstruction experiment on displayed images of everyday items. The results for different categories of everyday items are presented, with the first row showing the original images and the second row the reconstructed full-color images. The third row presents the RGB light intensity maps captured by the camera on the Lidar. Figure 8: Full-color NLOS reconstruction experiment on displayed images of complex scenes. Randomly selected examples from the whole test set are presented, with the original images, the reconstructed full-color images, and the RGB light intensity maps shown sequentially. ### 3.4 Full-color NLOS imaging for complex scenes A mature imaging technique should not have constraints on the number of objects to be captured, and the imaging quality should not degrade with the increased complexity of the scene. To explore the full capacity of our solution, we test it on a much more complicated RGB dataset, STL10 [16]. 
Instead of images of isolated objects on a pure black background, STL10 contains real pictures taken with different devices, angles, positions, and surrounding environments. The pictures may contain more than one object, and their semantic meanings are intricate and entangled with each other. To accommodate the increased amount of information in the images, we extend the dimension of the latent vector from 256 to 512, while other parameters and the training pipeline remain the same as in the previous experiments. Some examples from the test set (50,000 data points in the training set, 10,000 in the test set) are presented in Fig. 8. Since the images in STL10 are not well-categorized, the examples are randomly picked from the whole test set. Major profiles and color information are correctly recovered, while some fine details are sacrificed. The performance is mainly limited by the generative capability of the generator. Introducing advanced VAE models, such as $\beta$-VAE, IntroVAE, and VQ-VAE-2, is expected to mitigate this issue [10, 25, 41]. ## 4 Conclusion and discussion We have developed a general methodology for real-time NLOS imaging, with innovation in both the imaging equipment and the reconstruction algorithm. Physically, we choose a low-cost Lidar to replace the complicated devices commonly used for ultrafast experiments, and our scheme features much faster detection speed and better robustness to the ambient environment. Algorithmically, we have proposed a deep learning framework consisting of two networks, a compressor and a generator. The framework departs from typical supervised learning, and the reconstruction process is two-step. The efficacy of the methodology has been verified experimentally, on both real-object NLOS reconstruction and full-color NLOS imaging, with state-of-the-art performance. 
Our approach is directly pertinent to real-world applications such as corner detection for unmanned vehicles, NLOS remote sensing, and NLOS medical imaging, to name a few. We realize additional efforts are needed to bring the proposed method to perfection. The Lidar used in this work has a very low laser power, designed for short-range detection. We plan to transfer our technique to high-performance Lidars with open-source access to ToF information, such as those equipped on self-driving cars, to fulfill large-scale NLOS tasks, including but not limited to NLOS imaging of street scenes, buildings, and vehicles. Other follow-up efforts include the introduction of RGB light sources for full-color detection of nonluminous objects, the development of techniques to disentangle the effects of the position and angle of the Lidar, etc. As for the algorithm, the projected bottleneck for complex NLOS tasks will be the generative capability of the generator. Compared to GANs, the generated images of most VAEs are blurrier; introducing the most advanced VAE architectures is expected to resolve this problem. Since the compressor is responsible for extracting NLOS information from the depth and intensity maps, it would be favorable to add attention blocks so that the compressor can locate NLOS features more efficiently. In addition, instead of training the generator and the compressor sequentially as performed in this work, we expect improved performance if they are trained concurrently, similar to the mechanism of GANs. ## 5 Acknowledgements We thank Su Li from Texas A&M University and Jingzhi Hu from Peking University for fruitful discussions. ## References * [1] Byeongjoo Ahn, Akshat Dave, Ashok Veeraraghavan, Ioannis Gkioulekas, and Aswin C Sankaranarayanan. Convolutional approximations to the general non-line-of-sight imaging operator. In Proceedings of the IEEE International Conference on Computer Vision, pages 7889–7899, 2019. 
* [2] Miika Aittala, Prafull Sharma, Lukas Murmann, Adam Yedidia, Gregory Wornell, Bill Freeman, and Frédo Durand. Computational mirrors: Blind inverse light transport by deep matrix factorization. In Advances in Neural Information Processing Systems, pages 14311–14321, 2019. * [3] Victor Arellano, Diego Gutierrez, and Adrian Jarabo. Fast back-projection for non-line of sight reconstruction. Optics express, 25(10):11574–11583, 2017. * [4] Manel Baradad, Vickie Ye, Adam B. Yedidia, Frédo Durand, William T. Freeman, Gregory W. Wornell, and Antonio Torralba. Inferring light fields from shadows. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. * [5] Mufeed Batarseh, Sergey Sukhov, Zhiqin Shen, Heath Gemar, Reza Rezvani, and Aristide Dogariu. Passive sensing around the corner using spatial coherence. Nature communications, 9(1):1–6, 2018. * [6] Wolfgang Becker. Advanced time-correlated single photon counting techniques, volume 81. Springer Science & Business Media, 2005. * [7] Jacopo Bertolotti, Elbert G Van Putten, Christian Blum, Ad Lagendijk, Willem L Vos, and Allard P Mosk. Non-invasive imaging through opaque scattering layers. Nature, 491(7423):232–234, 2012. * [8] Jeremy Boger-Lombard and Ori Katz. Passive optical time-of-flight for non line-of-sight localization. Nature communications, 10(1):1–9, 2019. * [9] Katherine L Bouman, Vickie Ye, Adam B Yedidia, Frédo Durand, Gregory W Wornell, Antonio Torralba, and William T Freeman. Turning corners into cameras: Principles and methods. In Proceedings of the IEEE International Conference on Computer Vision, pages 2270–2278, 2017. * [10] Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$-vae. arXiv preprint arXiv:1804.03599, 2018. * [11] Susan Chan, Ryan E Warburton, Genevieve Gariepy, Jonathan Leach, and Daniele Faccio. 
Non-line-of-sight tracking of people at long range. Optics express, 25(9):10109–10117, 2017. * [12] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. * [13] Wenzheng Chen, Simon Daneau, Fahim Mannan, and Felix Heide. Steady-state non-line-of-sight imaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6790–6799, 2019. * [14] Wenzheng Chen, Fangyin Wei, Kiriakos N Kutulakos, Szymon Rusinkiewicz, and Felix Heide. Learned feature embeddings for non-line-of-sight imaging and recognition. ACM Transactions on Graphics (TOG), 39(6):1–18, 2020. * [15] Javier Grau Chopite, Matthias B Hullin, Michael Wand, and Julian Iseringhausen. Deep non-line-of-sight reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 960–969, 2020. * [16] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. volume 15 of Proceedings of Machine Learning Research, pages 215–223, 11–13 Apr 2011. * [17] Boguslaw Cyganek and J Paul Siebert. An introduction to 3D computer vision techniques and algorithms. John Wiley & Sons, 2011. * [18] Daniele Faccio, Andreas Velten, and Gordon Wetzstein. Non-line-of-sight imaging. Nature Reviews Physics, pages 1–10, 2020. * [19] Stefan Fuchs. Calibration and multipath mitigation for increased accuracy of time-of-flight camera measurements in robotic applications. PhD thesis, Universitätsbibliothek der Technischen Universität Berlin, 2012. * [20] Genevieve Gariepy, Francesco Tonolini, Robert Henderson, Jonathan Leach, and Daniele Faccio. Detection and tracking of moving objects hidden from view. Nature Photonics, 10(1):23–26, 2016. * [21] Massimo Ghioni, Angelo Gulinatti, Ivan Rech, Franco Zappa, and Sergio Cova. 
Progress in silicon single-photon avalanche diodes. IEEE Journal of selected topics in quantum electronics, 13(4):852–862, 2007. * [22] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014. * [23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. * [24] Felix Heide, Lei Xiao, Wolfgang Heidrich, and Matthias B Hullin. Diffuse mirrors: 3d reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3222–3229, 2014. * [25] Huaibo Huang, Ran He, Zhenan Sun, Tieniu Tan, et al. Introvae: Introspective variational autoencoders for photographic image synthesis. In Advances in neural information processing systems, pages 52–63, 2018. * [26] Mariko Isogawa, Dorian Chan, Ye Yuan, Kris Kitani, and Matthew O’Toole. Efficient non-line-of-sight imaging from transient sinograms. In European Conference on Computer Vision, pages 193–208. Springer, 2020. * [27] J Itatani, F Quéré, Gennady L Yudin, M Yu Ivanov, Ferenc Krausz, and Paul B Corkum. Attosecond streak camera. Physical review letters, 88(17):173903, 2002. * [28] Adrian Jarabo, Julio Marco, Adolfo Muñoz, Raul Buisan, Wojciech Jarosz, and Diego Gutierrez. A framework for transient rendering. ACM Transactions on Graphics (ToG), 33(6):1–10, 2014. * [29] Ori Katz, Pierre Heidmann, Mathias Fink, and Sylvain Gigan. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nature photonics, 8(10):784–790, 2014. * [30] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 
# Transparency in Multi-Human Multi-Robot Interaction Jayam Patel1, Tyagaraja Ramaswamy1, Zhi Li1 and Carlo Pinciroli1 1 Department of Robotics Engineering, Worcester Polytechnic Institute, MA, USA. Email: {jupatel, tramaswamy, zli11<EMAIL_ADDRESS> ###### Abstract Transparency is a key factor in the performance of human-robot interaction. A transparent interface allows operators to be aware of the state of a robot and to assess the progress of the tasks at hand. When multi-robot systems are involved, transparency is a greater challenge, due to the larger number of variables affecting the behavior of the robots as a whole. Existing work studies transparency with single operators and multiple robots. Studies on transparency that focus on multiple operators interacting with multi-robot systems are limited. This paper fills this gap by presenting a novel human-swarm interface for multiple operators. Through this interface, we study which graphical elements contribute to multi-operator transparency by comparing four “transparency modes”: (i) no transparency (no operator receives information from the robots), (ii) central transparency (the operators receive information only relevant to their personal task), (iii) peripheral transparency (the operators share information on each others’ tasks), and (iv) mixed transparency (both central and peripheral). We report the results in terms of awareness, trust, and workload from a user study involving 18 participants engaged in a complex multi-robot task. ## I Introduction Human-robot teams are often envisioned in complex scenarios [1], including humanitarian missions [2, 3], interplanetary exploration [4], ecosystem restoration [5, 6], mining [7], bridge inspection [8] and surgery [9]. The success of these missions depends on effective and efficient team interaction.
One crucial requirement to make this vision a reality is making the multi-robot system more transparent [10], i.e., legible and interpretable, for the human operators. Transparency is a key property of any human-machine interface. Transparent interfaces offer high usability and foster increased situational awareness [10, 11, 12, 13, 14]. Transparent interfaces limit or remove ambiguity, improve trust, and enhance decision-making [15, 16]. Lyons’ model of transparency [17] and the situational awareness-based transparency (SAT) model [18] provide guidelines for an effective interaction between an operator and a machine. However, these models are designed and tested with a single operator in mind. The problem of designing a transparent interface intensifies when there are multiple human operators, or ‘the machine’ is, in fact, a multi-robot system. This is because the heterogeneous nature and sheer number of combined interactions among operators and robots affect the behavior and the performance of the entire system in non-trivial ways. As an example, imagine an automated warehouse in which hundreds of robots navigate and transport heavy objects. The robots might drop objects, experience hardware failure, or transport an incorrect object. To resolve these issues, the operators must collaborate, using information from the robots and from other operators. In this paper, we study the information that each human operator must process and use, which affects cognitive load [19, 20]. To decrease cognitive load, a possible approach is to limit the amount and the type of information presented to the operator. However, this creates a trade-off with transparency, which intuitively suggests more information would be better. We explore the design space of graphical user interfaces for human-robot interaction, focusing on the multi-human multi-robot scenario, which has, so far, received limited attention.
We consider four types of interfaces, each presenting different amounts and types of information, and each corresponding to a specific ‘transparency mode’. Figure 1: Central and peripheral regions of the field of view. The green region indicates the central field of view. The yellow region indicates the peripheral field of view. To characterize these modes, we study the effect of the field of view (FoV) an interface offers to the operators. We define the FoV as the observable area an operator can see through the interface camera. As shown in Fig. 1, we categorize the FoV into two regions, based on the distance from the center of the screen: central and peripheral. The central FoV is the region closest to the center; the peripheral FoV is the remaining region. Using this categorization, we produced four types of interfaces, each differing in the way information is displayed: * • No Transparency (NT): no information is available to the operator, used as a baseline for comparison; * • Central Transparency (CT): information is available at the center of the FoV and displayed directly on the robots; * • Peripheral Transparency (PT): information is shown at the boundaries of the FoV as dedicated widgets; * • Mixed Transparency (MT): a combination of central and peripheral transparency. We investigate the effects of the transparency modes on operator performance, awareness, task load, and trust in the system. This paper offers two main contributions: 1. A novel augmented-reality-based interface for multi-human multi-robot interaction with the mentioned transparency modes. This interface is an improvement of our mixed-granularity control interface [21, 22] for single operators; 2. A study, which to the best of our knowledge is the first, of the effects of transparency in multi-human multi-robot interaction. Our user study involved 18 participants in teams of 2, each team controlling 9 robots in an object transport scenario. The paper is organized as follows. In Sec.
II, we discuss related work on transparency. In Sec. III, we present our system and its design. In Sec. IV, we report our user study procedures and results, followed by analysis in Sec. V, and summarize the paper in Sec. VI. ## II Background Transparency is an important research topic in human-machine and human-robot interaction [23, 24]. Transparency affects usability [25, 26, 27], performance [28, 29], trust [30, 31] and explainability [32, 33]. The effect of these factors increases with the type and quantity of information provided to the operator [34, 35]. Coarse information often negatively affects decision time, trust, situational awareness, and performance; in contrast, detailed information typically results in higher cognitive load. Ghiringhelli et al. [36] first proposed to graphically represent the actions of the robots using augmented reality for single operators. Chen et al. [13] and Mercado et al. [37] tested the impact of transparency on situational awareness, trust, and workload of an operator. Their work is based on simulated point-mass models of the robots, which lack important physical properties of mobile robots and create a ‘reality gap’ between results collected in a simulated environment and results collected in a physical environment [38]. With multiple operators, a novel problem arises: the need for operators to share robots and their information, to achieve a new form of transparency which we call operator-level transparency. To the best of our knowledge, there is no study on this topic in the context of multi-robot systems controlled through augmented reality (AR). ## III Transparency-based Interaction System ### III-A System Overview Our system comprises four components (see Fig. 2): 1. A distributed AR interface implemented as an app for an Apple iPad; 2. A team of robots, pre-programmed with various behaviors to reach a defined point, recognize objects, and perform collective transport; 3.
Vicon [39], a motion capture system that localizes the robots and the movable objects in the environment; 4. ARGoS [40], a multi-robot simulator modified to act as ‘software glue’ between the app, the robots, and the Vicon. We replaced the simulated physics engine shipped with ARGoS with a plug-in that receives positional data from the Vicon motion capture system, and developed new sensor and actuator plug-ins that interface with those on-board the robots. With these plug-ins, ARGoS acts as a middleware functionally similar to the roscore of ROS. Figure 2: System overview. ### III-B User Interface Our interface integrates an AR software development kit, Vuforia [41], and the Unity [42] game engine. The interface detects robots and movable objects by their fiducial markers. The robots and objects recognized by the interface are overlaid by virtual objects. The operator can manipulate the virtual objects to send commands to the robots. For example, the operator can translate a virtual object with a one-finger swipe and rotate it with a two-finger twist. It is also possible to select a team of robots by drawing a closed path with a continuous one-finger swipe. Fig. 3 shows a screenshot of the default view of the application. The top-right corner shows the menu buttons to toggle the visibility of the transparency modes. The bottom-left corner shows the real-time global coordinate frame. Figure 3: Screenshot of our augmented-reality interface running on an iPad. The black arrow indicates the origin marker that corresponds to the coordinate frame of the interface. ### III-C Granularity of Control In our previous work [21, 22], we proposed an interface capable of mixed granularity of control for single operators. The ‘granularity’ refers to the possibility of interacting at the robot, team, and environment levels. Robot- and team-level control allow the operator to send direct commands to individual robots or groups of them.
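The gesture-to-command mapping described above (one-finger swipe to translate, two-finger twist to rotate, closed path to select a team) can be sketched as follows. This is an illustrative sketch only: the `Gesture` and `Command` types and the `dispatch` function are hypothetical names, not part of the actual Unity/Vuforia implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Gesture(Enum):
    ONE_FINGER_SWIPE = auto()  # translate a virtual object or robot
    TWO_FINGER_TWIST = auto()  # rotate a virtual object
    CLOSED_PATH = auto()       # select all robots inside a drawn region

@dataclass
class Command:
    targets: list              # ids of the robots/objects addressed
    dx: float = 0.0            # desired translation
    dy: float = 0.0
    dtheta: float = 0.0        # desired rotation

def dispatch(gesture: Gesture, selection: list, delta: tuple) -> Command:
    """Map a touch gesture on the AR view to a command for the robots."""
    if gesture is Gesture.ONE_FINGER_SWIPE:
        return Command(targets=selection, dx=delta[0], dy=delta[1])
    if gesture is Gesture.TWO_FINGER_TWIST:
        return Command(targets=selection, dtheta=delta[0])
    if gesture is Gesture.CLOSED_PATH:
        # the enclosed robots become the new team; no motion is commanded yet
        return Command(targets=selection)
    raise ValueError(f"unknown gesture: {gesture}")
```

In this sketch, team selection by point-in-polygon testing is assumed to happen upstream, so `dispatch` only receives the resulting id list.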
With environment-level granularity, the operator indicates the desired effect of a modification of the environment, e.g., moving an object, and the robots autonomously execute the action. As discussed in [21, 22], mixed granularity of control can outperform any individual level of control. We use these interface features in the present study. (a) Object recognition (b) New goal defined (c) Robots approach and push (d) Transport complete Figure 4: Object manipulation by interaction with virtual objects. The overlaid dotted black arrow indicates the one-finger swipe gesture used to move the virtual object and the overlaid red dotted arrow indicates the two-finger rotation gesture. Object Manipulation. The interface overlays virtual objects over the recognized objects (see Fig. 4). The user can move multiple virtual objects to define their respective desired poses, and teams of robots transport these objects to their destinations. If two or more operators simultaneously control the same object, the system processes the pose received last. (a) Robot recognition (b) New robot position Figure 5: Robot manipulation by interaction with virtual robots. The overlaid dotted black arrow indicates the one-finger swipe gesture to move the virtual robot and the arrowhead color indicates the moved virtual robots. Robot Manipulation. The interface overlays virtual robots over the recognized robots (see Fig. 5). The color of each virtual robot matches the color of its fiducial marker, to differentiate between robots. The user can move multiple virtual robots to define their respective desired poses. If the robot is part of a team performing collective transport, the other robots in the same team pause until the selected robot reaches the desired pose. If the robot is part of a team not involved in collective transport, the selected robot overwrites its goal pose with the newly defined pose and does not affect its team members.
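The conflict-resolution rule used throughout the interface — when several operators command the same robot or object, the pose received last wins — can be sketched as a small arbiter. The class name and API below are illustrative assumptions, not the actual system code.

```python
class PoseArbiter:
    """Last-write-wins arbitration: when several operators command the
    same robot or object, only the most recently received pose is kept."""

    def __init__(self):
        self._latest = {}  # target id -> (receive_time, pose)

    def submit(self, target, pose, receive_time):
        prev = self._latest.get(target)
        # a later (or equal) receive time overwrites the stored goal pose
        if prev is None or receive_time >= prev[0]:
            self._latest[target] = (receive_time, pose)

    def goal(self, target):
        """Return the currently active goal pose for a target, if any."""
        entry = self._latest.get(target)
        return None if entry is None else entry[1]
```

Note that arbitration here is by the time the command reaches the system, not by the time the operator issued the gesture, matching the paper's "pose received last" phrasing.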
If two or more operators simultaneously control the same robot, the system processes the last pose received. Fig. 5 shows how the virtual robots look. (a) Robot team selection (b) Robot team creation (c) Robot team manipulation (d) Robot team re-positioned Figure 6: Robot team creation and manipulation by interacting with the interface. The overlaid dotted black arrow indicates the one-finger swipe gesture to move the virtual cube for re-positioning the team of robots. Robot Team Selection and Manipulation. The user can draw a closed path with a one-finger continuous swipe to select all the robots in the enclosed region (see Fig. 6). A contour-shaped virtual object with a virtual cube at its centroid appears in the graphical view. The user can manipulate this cube to define the desired pose for the selected team of robots. The user can handle only one team at a time. If two or more operators have the same robot in their team, the robot processes the pose received last. ### III-D Collective Transport Figure 7: Collective transport state machine. We employ a collective transport behavior based on the finite state machine (FSM) shown in Fig. 7. This behavior is identical to the one presented in our previous work [21]. The states in the FSM are explained next. Reach Object. Upon receiving the desired goal pose for the object, the robots organize themselves around the object in a circular formation. These poses are decided based on the number of robots in the team and their distance from the object. This state comes to an end when all the robots reach their designated poses. Approach Object. The robots move towards the centroid of the object. This state is completed when all the robots are in contact with the object. Push Object. The robots first rotate in place to face the direction of the goal. The robots then move towards the goal. The robots modulate their speeds to maintain a set distance from the centroid of the object and keep their formation.
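The transitions of the collective-transport state machine of Fig. 7 can be sketched as follows. This is a minimal sketch under stated assumptions: the state names mirror the paper's, the condition flags are hypothetical, and the Rotate Object state is omitted for brevity.

```python
from enum import Enum, auto

class State(Enum):
    REACH = auto()     # robots surround the object in a circular formation
    APPROACH = auto()  # robots move until all are in contact with the object
    PUSH = auto()      # robots push the object toward the goal
    DONE = auto()

def step(state, all_in_pose=False, all_in_contact=False,
         at_goal=False, formation_broken=False):
    """One transition of the collective-transport state machine (Fig. 7)."""
    if state is State.REACH and all_in_pose:
        return State.APPROACH
    if state is State.APPROACH and all_in_contact:
        return State.PUSH
    if state is State.PUSH:
        if formation_broken:       # a robot fell out of formation:
            return State.APPROACH  # regroup, then resume the transport
        if at_goal:
            return State.DONE
    return state  # no transition condition met: stay in the current state
```

Each robot's controller would call `step` once per control cycle, feeding it the team-wide conditions computed from sensor data.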
If a robot breaks the formation, the team switches back to Approach Object, waits for its completion, and subsequently resumes the transport behavior. The state comes to an end once the object reaches the goal position. Rotate Object. The robots rearrange around the object and move along a circular path, thereby rotating the object in place. If any robot breaks the formation, the team rearranges and resumes object rotation. The state ends when the object reaches the desired orientation. ### III-E Transparency Modes We present different transparency modes based on the visual FoV of our interface. The interface provides an option to switch between modes. The modes incorporate transparency features that reflect an operator’s perception, comprehension, and projection, i.e., the three levels described in the SAT model [18]. Table I lists the features, the transparency modes, and the corresponding SAT levels of information. TABLE I: Analogy between the features in our interface and the levels of the SAT model. Transparency mode | Our feature | SAT level ---|---|--- Central Transparency | On-robot status | Level 1 + 2 Robot Direction Pointer | Level 2 Shared Awareness | Level 3 Peripheral Transparency | On-robot status | Level 1 + 2 Object Panel | Level 2 Text-based Log | Level 3 No Transparency (NT). Operators can send control commands, but without access to any feedback information. Figure 8: Central Transparency showing on-robot status and directional indicator. Central Transparency (CT). The interface overlays each robot with a direction vector and text to report the current task (see Fig. 8). The direction vector indicates the heading of the robot. The color of each vector matches the color of the corresponding fiducial marker, to differentiate between vectors when the robots are close to each other. The interface updates the information 10 times per second. The displayed states are: Idle, Reach, and Error.
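A minimal sketch of the on-robot annotation just described — status text plus heading, refreshed 10 times per second — is given below. The `draw` callback and the robot dictionary layout are hypothetical assumptions; only the state names and the 10 Hz refresh rate come from the paper.

```python
import time

ROBOT_STATES = ("Idle", "Reach", "Error")  # states shown on the overlay
REFRESH_HZ = 10  # the interface updates the information 10 times per second

def overlay_text(robot):
    """Compose the text drawn over a robot: current task and heading."""
    assert robot["state"] in ROBOT_STATES
    return f"{robot['id']}: {robot['state']} (heading {robot['heading_deg']:.0f} deg)"

def render_loop(robots, draw, running):
    """Redraw every robot's annotation at REFRESH_HZ until running() is False."""
    while running():
        for robot in robots:
            draw(robot["id"], overlay_text(robot))
        time.sleep(1.0 / REFRESH_HZ)
```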
The interface also reports the commands of other operators in real time, to foster collaboration and shared awareness, and to minimize (ideally avoid) conflicting control of the same robots and objects. This information is only visible if an operator is focusing the tablet camera on a specific robot or object, i.e., at the center of the FoV. Figure 9: Peripheral Transparency mode showing text-based log, object panel and robot panel (clockwise from top-left). Peripheral Transparency (PT). The interface displays a robot panel, an object panel, and a text-based log at the edges of the screen (see Fig. 9). The robot panel shows the robots as icons. The highlighted icons correspond to the robots that are moving or performing operator-defined actions. The interface conveys error conditions as blinking red exclamation points. Analogously, the object panel shows the objects as icons. The interface highlights the icons that correspond to an object manipulated by the robots. The interface also offers the option to select an object icon to lock it for future use. By locking, an operator indicates that they intend to work with that object. The interface of the other operators highlights the lock with a red icon. An operator can lock only one object at a time: past locks are removed when a new one is requested. The text-based log reports the last 3 control actions taken by other operators. Mixed Transparency (MT). This mode offers the features of both central and peripheral transparency. ## IV User Study ### IV-A Hypotheses The primary purpose of this work is to investigate the effect of different transparency modes on the operators’ awareness, workload, trust, interaction, and performance in a multi-human multi-robot scenario. We based our experiments on three hypotheses: 1. H1: Mixed transparency (MT) has the best outcome compared to the other modes, in terms of the mentioned metrics. 2. H2: Operators prefer mixed transparency (MT) to the other modes. 3.
H3: Operators prefer central transparency (CT) to peripheral transparency (PT). ### IV-B Gamified User Study Figure 10: User study experiment setup. The shaded red box indicates the goal region. We devised a gamified scenario in which the operators must use the robots to perform object transport. Teams of two participants had to move 6 objects (2 big and 4 small) from their initial position to a goal region. Big objects were worth 2 points each, and small objects were worth 1 point each. The operators had to work collaboratively to gain as many points as possible (out of a maximum of 8) within a fixed time limit of 8 minutes. The operators could move the big objects using the collective transport behavior, or using the robot or robot-team manipulation modalities at will. Small objects could only be transported with the robot and team control modalities. The operators were given 9 robots to complete the game. Fig. 10 shows the initial positions of the robots, the objects, and the goal region. ### IV-C Participant Sample We recruited 18 university students (10 female, 8 male) with ages ranging from 19 to 41 ($23.78\pm 5.08$), in accordance with protocols approved by WPI’s IRB (https://www.wpi.edu/research/resources/compliance/institutional-review-board). No participant had any prior experience with the system. ### IV-D Procedures Each session of the study took approximately 105 minutes and involved four games. After signing the consent form, we explained the scenario and gave the participants 10 minutes to play with the system. After each game, the participants had to answer a subjective questionnaire. All the participants played the game with all transparency modes (NT, CT, PT, MT) once. We randomized the order of the modes to reduce learning effects. ### IV-E Metrics We recorded subjective and objective measures for each participant for each task. We used the following measures: Situational Awareness.
We used the Situational Awareness Rating Technique (SART) [43] on a 10-point Likert scale [44] to assess the situational awareness after each game. Task Workload. We used the NASA TLX [45] scale on a 4-point Likert scale to compare the perceived workload in each game. Trust. We used the trust questionnaire [46] on a 10-point Likert scale to compare how each transparency mode affected trust in the interface. Interaction. We used a custom questionnaire (see Fig. 11) on a 5-point Likert scale to assess the operator-level and robot-level interaction. * • Did you understand your teammate’s intentions? Were you able to understand why your teammate took a certain action? * • Could you understand your teammate’s actions? Could you understand what your teammate was doing at any particular time? * • Could you follow the progress of the game? Were you able to gauge how much of it was pending? * • Did you understand what the robots were doing, i.e., how and why the robots were behaving the way they did? * • Was the information provided by the interface clear to understand? Figure 11: Questionnaire to assess the operator-level and robot-level interaction. Each question is expressed with a 5-point Likert scale. Performance. We used the points earned in each game as a metric of the performance achieved with each transparency mode. Usability. We asked the participants to select the features (Log, Robot Panel, Object Panel, and On-Robot Status) they used during the study. Additionally, we asked them to rank the transparency modes from 1 to 4, 1 being the highest rank. TABLE II: Results with relationships between transparency modes. The relationships are based on mean ranks obtained through Friedman’s test. The symbol ∗ denotes significant difference ($p<0.05$) and the symbol ∗∗ denotes marginally significant difference ($p<0.10$). The symbol - denotes negative scales, for which a lower rank is better.
Attributes | Relationship | $\chi^{2}(3)$ | $p$-value ---|---|---|--- SART SUBJECTIVE SCALE Instability of Situation- | not significant | $4.192$ | $0.241$ Complexity of Situation- | NT$>$MT$>$PT$>$CT∗∗ | $6.435$ | $0.092$ Variability of Situation- | not significant | $4.192$ | $0.241$ Arousal | NT$>$MT$>$PT$>$CT∗∗ | $7.093$ | $0.069$ Concentration of Attention | not significant | $4.664$ | $0.198$ Spare Mental Capacity | not significant | $3.526$ | $0.317$ Information Quantity | MT$>$CT$=$PT$>$NT∗ | $16.160$ | $0.001$ Information Quality | MT$>$CT$>$PT$>$NT∗ | $11.351$ | $0.010$ Familiarity with Situation | not significant | $1.911$ | $0.591$ NASA TLX SUBJECTIVE SCALE Mental Demand- | not significant | $6.169$ | $0.104$ Physical Demand- | not significant | $3.526$ | $0.317$ Temporal Demand- | not significant | $0.564$ | $0.903$ Performance- | not significant | $4.573$ | $0.206$ Effort- | NT$>$PT$>$CT$>$MT∗ | $9.203$ | $0.027$ Frustration- | NT$>$CT$>$MT$>$PT∗ | $9.205$ | $0.027$ TRUST SUBJECTIVE SCALE Competence | not significant | $3.703$ | $0.295$ Predictability | PT$>$CT$>$MT$>$NT∗∗ | $6.359$ | $0.095$ Reliability | not significant | $4.338$ | $0.227$ Faith | not significant | $1.891$ | $0.595$ Overall Trust | PT$>$MT$>$CT$=$NT∗ | $12.607$ | $0.005$ Accuracy | PT$>$MT$=$CT$>$NT∗ | $12.214$ | $0.007$ INTERACTION SUBJECTIVE SCALE Teammate’s Intent | MT$>$PT$>$CT$>$NT∗ | $23.976$ | $0.000$ Teammate’s Action | MT$>$PT$=$CT$>$NT∗ | $22.511$ | $0.000$ Task Progress | MT$>$CT$>$PT$>$NT∗ | $25.619$ | $0.000$ Robot Status | CT$>$PT$>$MT$>$NT∗ | $13.608$ | $0.003$ Information Clarity | CT$>$PT$>$MT$>$NT∗ | $12.078$ | $0.007$ PERFORMANCE OBJECTIVE SCALE Points Scored | not significant | $5.554$ | $0.135$ Figure 12: Feature Usability. Figure 13: Game Preference. TABLE III: Ranking scores based on the Borda count. The gray cells indicate the leading scenario for each type of ranking. 
Borda Count | NT | CT | PT | MT ---|---|---|---|--- Based on Collected Data Ranking (Table II) | 17.5 | 40 | 39 | 43.5 Based on Preference Data Ranking (Fig. 13) | 18 | 46 | 45 | 72 ## V Analysis and Discussion Table II shows the results for all the subjective scales and the objective metrics. We used the Friedman test [47] to analyze the data and to assess the significance among different games. We formed rankings based on the mean ranks for all the attributes that showed statistical significance (set to $p<0.05$) or marginal significance (set to $p<0.10$). Fig. 12 shows the percentage of operators using a particular feature. Fig. 13 reports how participants ranked the transparency modes. We used the Borda count [48] to calculate the rankings. We inverted the ranking of the negative scales when calculating the Borda count scores. Table III shows the results for each category. This table indicates mixed transparency (MT) as the overall winner across the measured attributes, as well as the preferred mode across participants, in accordance with hypotheses H1 and H2. The data suggests that central transparency is better than peripheral transparency, confirming hypothesis H3. Mixed Transparency. This mode is the overall best choice for operators. The data suggests that this mode has the best information quality and quantity. The operators could pick the information they wanted from the central and peripheral regions. However, the operators reported higher perceived complexity and higher arousal than with other transparency modes. The usability tests suggested this mode as the best for understanding the teammate’s intent and actions. This justifies mixed transparency as the first choice. Central Transparency. This mode has the lowest complexity and arousal. The users found it easier to focus on the center of the screen, and to use the on-robot status over the side panels. The users reported better information quality and clarity w.r.t. peripheral transparency.
$55.55\%$ of the operators preferred this mode over peripheral transparency. Peripheral Transparency. The operators found the information displayed at the periphery of the screen hard to parse and access. This led to increased effort, complexity, and arousal w.r.t. the central transparency mode. However, as the information was available on-demand and was not constantly displayed in the FoV, the users reported the lowest amount of frustration. Operators also preferred the icon panels over the text-based log. Additionally, the operators preferred PT over CT to gain awareness of their teammate’s intention. Performance. Our experiments did not report a substantial difference in performance across transparency modes. We hypothesize that this lack of difference is due to a learning effect across the four runs that each team had to perform. We could not avoid this learning effect through randomization of the transparency modes or pre-study training. The training sessions improved the participants’ understanding of the interface and its features, but did not improve operation proficiency. We attribute this issue to the fact that our study was conducted with real robots, exposing the participants to real-world issues with robots they had never encountered before (e.g., noise, failures). Fig. 14 reports the performance recorded in each game. Fig. 15 shows the increase in performance sorted by game performed (learning effect). Task performance dropped or stayed the same for teams that used no transparency after using other transparency modes. Figure 14: Task performance for each transparency mode. Figure 15: Learning effect in the user study. ## VI Conclusion and Future Work In this paper, we studied the effects of different transparency modes in multi-human multi-robot interaction. We classified transparency based on the visual FoV.
We demonstrated the design of a novel augmented-reality interface that supports different modes of transparency and provides both operator-level and robot-level information. We performed a user study with 18 operators to assess the effects of these modes of transparency on awareness, workload, trust, and interaction. Mixed transparency outperformed the other modes in terms of overall effect and usability, and the participants chose mixed transparency as the best mode. We also compared central transparency with peripheral transparency. More operators preferred central transparency (55.55%) over peripheral transparency (44.45%). Although the difference between central and peripheral transparency is small, these modes of transparency have their respective benefits. Central transparency offers better robot-level information, while peripheral transparency provides better operator-level information. We recognize that the sample size of our study is limited, making our study in some ways exploratory from a statistical standpoint. However, the complexity of the task we studied is compelling, especially when compared with existing literature. The next iteration of our work will focus on expanding the user study in two directions. First, understanding the effects of learning and training on transparency, i.e., comparing the information needs of a novice user with those of an expert user. Second, studying the effects of our transparency features on the operator’s reaction time, i.e., the time taken to resolve a problem. ## Acknowledgements This work was funded by an Amazon Research Award. ## References * [1] M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo, “Swarm robotics: A review from the swarm engineering perspective,” _Swarm Intelligence_ , vol. 7, no. 1, pp. 1–41, 2013. * [2] R. R. Murphy, _Disaster robotics_. MIT press, 2014. * [3] A. P. Hamins, C. Grant, N. P. Bryner, A. W. Jones, and G. H. Koepke, “Research roadmap for smart fire fighting,” NIST, Tech.
Rep. 1191, 2015. * [4] D. Goldsmith, “Book review: Voyage to the milky way: the future of space exploration/tv books, 1999,” _Sky and Telescope_ , vol. 98, no. 5, p. 81, 1999. * [5] J. S. Denkenberger, C. T. Driscoll, S. W. Effler, D. M. O’Donnell, and D. A. Matthews, “Comparison of an urban lake targeted for rehabilitation and a reference lake based on robotic monitoring,” _Lake and Reservoir Management_ , vol. 23, no. 1, pp. 11–26, 2007. * [6] T. M. Buters, P. W. Bateman, T. Robinson, D. Belton, K. W. Dixon, and A. T. Cross, “Methodological ambiguity and inconsistency constrain unmanned aerial vehicles as a silver bullet for monitoring ecological restoration,” _Remote Sensing_ , vol. 11, no. 10, p. 1180, 2019. * [7] R. F. Rubio, “Mining: the challenge knocks on our door,” _Mine Water and the Environment_ , vol. 31, no. 1, pp. 69–73, 2012. * [8] J.-K. Oh, G. Jang, S. Oh, J. H. Lee, B.-J. Yi, Y. S. Moon, J. S. Lee, and Y. Choi, “Bridge inspection robot system with machine vision,” _Automation in Construction_ , vol. 18, no. 7, pp. 929–941, 2009. * [9] S. Sirouspour and P. Setoodeh, “Multi-operator/multi-robot teleoperation: an adaptive nonlinear control approach,” in _2005 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2005, pp. 1576–1581. * [10] K. A. Roundtree, M. A. Goodrich, and J. A. Adams, “Transparency: Transitioning From Human–Machine Systems to Human-Swarm Systems,” _Journal of Cognitive Engineering and Decision Making_ , p. 155534341984277, Apr. 2019. [Online]. Available: http://journals.sagepub.com/doi/10.1177/1555343419842776 * [11] A. Bhaskara, M. Skinner, and S. Loft, “Agent Transparency: A Review of Current Theory and Evidence,” _IEEE Transactions on Human-Machine Systems_ , pp. 1–10, 2020. [Online]. Available: https://ieeexplore.ieee.org/document/8982042/ * [12] T. Chakraborti, A. Kulkarni, S. Sreedharan, D. E. Smith, and S. Kambhampati, “Explicability? Legibility? Predictability? Transparency? Privacy? Security? 
The Emerging Landscape of Interpretable Agent Behavior,” _Proceedings of the international conference on automated planning and scheduling_ , vol. 29, pp. 86–96, 2019. * [13] J. Y. C. Chen, S. G. Lakhmani, K. Stowers, A. R. Selkowitz, J. L. Wright, and M. Barnes, “Situation awareness-based agent transparency and human-autonomy teaming effectiveness,” _Theoretical Issues in Ergonomics Science_ , vol. 19, no. 3, pp. 259–282, May 2018. [Online]. Available: https://www.tandfonline.com/doi/full/10.1080/1463922X.2017.1315750 * [14] S. Tulli, F. Correia, S. Mascarenhas, F. S. Melo, and A. Paiva, “Effects of Agents’ Transparency on Teamwork?” _International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems_ , pp. 22–37, 2019. * [15] R. Kalpagam Ganesan, Y. K. Rathore, H. M. Ross, and H. Ben Amor, “Better Teaming Through Visual Cues: How Projecting Imagery in a Workspace Can Improve Human-Robot Collaboration,” _IEEE Robotics & Automation Magazine_, vol. 25, no. 2, pp. 59–71, Jun. 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8359206/ * [16] B. Hoppenstedt, T. Witte, J. Ruof, K. Kammerer, M. Tichy, M. Reichert, and R. Pryss, “Debugging Quadrocopter Trajectories in Mixed Reality,” in _Augmented Reality, Virtual Reality, and Computer Graphics_ , L. T. De Paolis and P. Bourdot, Eds. Cham: Springer International Publishing, 2019, vol. 11614, pp. 43–50. [Online]. Available: http://link.springer.com/10.1007/978-3-030-25999-0_4 * [17] J. B. Lyons, “Being transparent about transparency: A model for human-robot interaction,” in _2013 AAAI Spring Symposium Series_ , 2013. * [18] J. Y. Chen, K. Procci, M. Boyce, J. Wright, A. Garcia, and M. Barnes, “Situation Awareness-Based Agent Transparency:,” Defense Technical Information Center, Fort Belvoir, VA, Tech. Rep., Apr. 2014. [Online]. Available: http://www.dtic.mil/docs/citations/ADA600351 * [19] G. A. 
Miller, “The magical number seven, plus or minus two: Some limits on our capacity for processing information.” _Psychological review_ , vol. 63, no. 2, p. 81, 1956. * [20] M. Lewis, H. Wang, S. Y. Chien, P. Velagapudi, P. Scerri, and K. Sycara, “Choosing autonomy modes for multirobot search,” _Human Factors_ , vol. 52, no. 2, pp. 225–233, 2010. * [21] J. Patel, Y. Xu, and C. Pinciroli, “Mixed-granularity human-swarm interaction,” in _Robotics and Automation (ICRA), 2019 IEEE International Conference on_. IEEE, 2019. * [22] J. Patel and C. Pinciroli, “Improving human performance using mixed granularity of control in multi-human multi-robot interaction,” _arXiv preprint arXiv:1909.07487_ , 2019. * [23] S. Lakhmani, J. Abich, D. Barber, and J. Chen, “A Proposed Approach for Determining the Influence of Multimodal Robot-of-Human Transparency Information on Human-Agent Teams,” in _Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience_ , D. D. Schmorrow and C. M. Fidopiastis, Eds. Cham: Springer International Publishing, 2016, vol. 9744, pp. 296–307. [Online]. Available: http://link.springer.com/10.1007/978-3-319-39952-2_29 * [24] H. Felzmann, E. Fosch-Villaronga, C. Lutz, and A. Tamo-Larrieux, “Robots and Transparency: The Multiple Dimensions of Transparency in the Context of Robot Technologies,” _IEEE Robotics & Automation Magazine_, vol. 26, no. 2, pp. 71–78, Jun. 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8684252/ * [25] S.-Y. Chien, M. Lewis, K. Sycara, A. Kumru, and J.-S. Liu, “Influence of Culture, Transparency, Trust, and Degree of Automation on Automation Use,” _IEEE Transactions on Human-Machine Systems_ , pp. 1–10, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8836099/ * [26] Y. Zhu, T. Aoyama, and Y. Hasegawa, “Enhancing the Transparency by Onomatopoeia for Passivity-Based Time-Delayed Teleoperation,” _IEEE Robotics and Automation Letters_ , vol. 5, no. 2, pp. 2981–2986, Apr. 2020. [Online]. 
Available: https://ieeexplore.ieee.org/document/8990033/ * [27] A. R. Panganiban, G. Matthews, and M. D. Long, “Transparency in Autonomous Teammates: Intention to Support as Teaming Information,” _Journal of Cognitive Engineering and Decision Making_ , p. 155534341988156, Nov. 2019. [Online]. Available: http://journals.sagepub.com/doi/10.1177/1555343419881563 * [28] T. Chen, D. Campbell, L. F. Gonzalez, and G. Coppin, “Increasing Autonomy Transparency through capability communication in multiple heterogeneous UAV management,” in _2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. Hamburg, Germany: IEEE, Sep. 2015, pp. 2434–2439. [Online]. Available: http://ieeexplore.ieee.org/document/7353707/ * [29] S. G. Lakhmani, J. L. Wright, M. R. Schwartz, and D. Barber, “Exploring the Effect of Communication Patterns and Transparency on Performance in a Human-Robot Team,” _Proceedings of the Human Factors and Ergonomics Society Annual Meeting_ , vol. 63, no. 1, pp. 160–164, Nov. 2019. [Online]. Available: http://journals.sagepub.com/doi/10.1177/1071181319631054 * [30] G. Matthews, J. Lin, A. R. Panganiban, and M. D. Long, “Individual Differences in Trust in Autonomous Robots: Implications for Transparency,” _IEEE Transactions on Human-Machine Systems_ , pp. 1–11, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8908731/ * [31] S. Guznov, J. Lyons, M. Pfahler, A. Heironimus, M. Woolley, J. Friedman, and A. Neimeier, “Robot Transparency and Team Orientation Effects on Human–Robot Teaming,” _International Journal of Human–Computer Interaction_ , pp. 1–11, Oct. 2019. [Online]. Available: https://www.tandfonline.com/doi/full/10.1080/10447318.2019.1676519 * [32] M. Daily, Y. Cho, K. Martin, and D. Payton, “World embedded interfaces for human-robot interaction,” in _System Sciences, 2003. Proceedings of the 36th Annual Hawaii International Conference on_. IEEE, 2003, pp. 6–pp. * [33] J. L. 
Wright, “Transparency in Human-agent Teaming and its Effect on Automation-induced Complacency,” _Procedia Manufacturing_ , vol. 3, pp. 968–973, 2015\. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S235197891500150X * [34] J. L. Wright, J. Y. Chen, M. J. Barnes, and P. A. Hancock, “Agent reasoning transparency: The influence of information level on automation induced complacency,” US Army Research Laboratory Aberdeen Proving Ground United States, Tech. Rep., 2017. * [35] J. L. Wright, J. Y. Chen, M. J. Barnes, and M. W. Boyce, “The Effects of Information Level on Human-Agent Interaction for Route Planning,” _Proceedings of the Human Factors and Ergonomics Society Annual Meeting_ , vol. 59, no. 1, pp. 811–815, Sep. 2015. [Online]. Available: http://journals.sagepub.com/doi/10.1177/1541931215591247 * [36] F. Ghiringhelli, J. Guzzi, G. A. Di Caro, V. Caglioti, L. M. Gambardella, and A. Giusti, “Interactive augmented reality for understanding and analyzing multi-robot systems,” in _Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on_. IEEE, 2014, pp. 1195–1201. * [37] J. E. Mercado, M. A. Rupp, J. Y. C. Chen, M. J. Barnes, D. Barber, and K. Procci, “Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management,” _Human Factors: The Journal of the Human Factors and Ergonomics Society_ , vol. 58, no. 3, pp. 401–415, May 2016. [Online]. Available: http://journals.sagepub.com/doi/10.1177/0018720815621206 * [38] N. Jakobi, P. Husbands, and I. Harvey, “Noise and the reality gap: The use of simulation in evolutionary robotics,” in _European Conference on Artificial Life_. Springer, 1995, pp. 704–720. * [39] “VICON motion capture system,” http://vicon.com, accessed:. * [40] C. Pinciroli, V. Trianni, R. O’Grady, G. Pini, A. Brutschy, M. Brambilla, N. Mathews, E. Ferrante, G. Di Caro, F. Ducatelle, M. Birattari, L. M. Gambardella, and M. 
Dorigo, “ARGoS: a modular, parallel, multi-engine simulator for multi-robot systems,” _Swarm Intelligence_ , vol. 6, no. 4, pp. 271–295, 2012. * [41] “Vuforia augmented reality,” http://vuforia.com, accessed:. * [42] U. G. Engine, “Unity game engine-official site,” _Online][Cited: October 9, 2008.] http://unity3d. com_ , pp. 1534–4320, 2008. * [43] R. M. Taylor, “Situational awareness rating technique (sart): The development of a tool for aircrew systems design,” in _Situational awareness_. Routledge, 1990, pp. 111–128. * [44] R. LIKERT, “A technique for the measurement of attitudes,” _Arch Psych_ , vol. 140, p. 55, 1932. [Online]. Available: https://ci.nii.ac.jp/naid/10024177101/en/ * [45] S. G. Hart and L. E. Staveland, “Development of nasa-tlx (task load index): Results of empirical and theoretical research,” in _Advances in psychology_. Elsevier, 1988, vol. 52, pp. 139–183. * [46] A. Uggirala, A. K. Gramopadhye, B. J. Melloy, and J. E. Toler, “Measurement of trust in complex and dynamic systems using a quantitative approach,” _International Journal of Industrial Ergonomics_ , vol. 34, no. 3, pp. 175–186, 2004. * [47] M. Friedman, “The use of ranks to avoid the assumption of normality implicit in the analysis of variance,” _Journal of the american statistical association_ , vol. 32, no. 200, pp. 675–701, 1937. * [48] D. Black, “Partial justification of the borda count,” _Public Choice_ , pp. 1–15, 1976.
# Non-linear Plane Gravitational Waves as Space-time Defects F. L. Carneiro<EMAIL_ADDRESS>Instituto de Física, Universidade de Brasília 70.919-970 Brasília DF, Brazil S. C. Ulhoa<EMAIL_ADDRESS>International Center of Physics, Instituto de Física, Universidade de Brasília, 70910-900, Brasília, DF, Brazil Canadian Quantum Research Center, 204-3002 32 Ave Vernon, BC V1T 2L7 Canada J. W. Maluf<EMAIL_ADDRESS>Instituto de Física, Universidade de Brasília 70.919-970 Brasília DF, Brazil J. F. da Rocha-Neto<EMAIL_ADDRESS>Instituto de Física, Universidade de Brasília 70.919-970 Brasília DF, Brazil ###### Abstract We consider non-linear plane gravitational waves as propagating space-time defects, and construct the Burgers vector of the waves. In the context of classical continuum systems, the Burgers vector is a measure of the deformation of the medium, and at a microscopic (atomic) scale, it is a naturally quantized object. One ultimate purpose of the present article is to probe an alternative way of quantizing plane gravitational waves. ## I Introduction Non-linear gravitational waves constitute a class of exact solutions of Einstein’s field equations of general relativity. Several of these exact solutions form a subset known as plane gravitational waves, or pp-waves (plane-fronted waves with parallel rays). These waves, in turn, are characterized by a local (in space and in time) deformation of an otherwise flat space-time. For this reason, it is possible to think of non-linear plane gravitational waves as propagating space-time defects, since the latter are also locally flat. Non-linear plane gravitational waves and space-time defects share many features.
Both field configurations (i) are established over a flat space-time background, (ii) induce a local deformation in the background geometry, (iii) may have an axial symmetry (along the $\displaystyle z$ axis, for instance), (iv) may have a singularity along an axis (the $\displaystyle z$ axis, for instance). Therefore, it is possible to define and evaluate the Burgers vector for non-linear plane gravitational waves. The Burgers vector in a crystalline lattice or inside a metal determines the nature of the defect. Metals may be deformed both elastically and plastically. Elastic deformations take place at low external stresses, and are reversible, whereas plastic deformations are irreversible. The latter deformations are described by moving, dynamical dislocations, which imprint a permanent deformation in the metal. The two ordinary types of dislocations in a metal are the screw and edge dislocations. A screw dislocation occurs when a half plane moves, or slips, relative to an adjacent half plane (consider two infinite, adjacent and parallel planes, the upper and lower planes; one upper half plane remains attached to the adjacent lower half plane, whereas the other upper half plane slips with respect to the adjacent lower half plane), and an edge dislocation is characterized by a missing half plane in an otherwise perfect lattice. We refer the reader to Chapter 5 of Ref. Smallman , where a clear explanation of these defects is presented. The Burgers vector is constructed by first establishing a Burgers circuit, which is a circuit around the defect. The idea is to first consider a closed, regular circuit in a perfect crystalline lattice. If the lattice is deformed by one type of dislocation, a circuit that encompasses the defect fails to close in the deformed medium. The vector that must be added in order to close the circuit, in the presence of a defect, is precisely the Burgers vector.
At the atomic scale in a physical lattice, the Burgers vector is quantized, i.e., it is a multiple of a minimum atomic distance. In the context of the 4-dimensional space-time geometry, space-time (or topological) defects have already been investigated in some depth, see for instance Refs. Katanaev ; Holz ; Puntigam ; Todd ; Bilby . Non-linear gravitational waves are relatively simple solutions of Einstein’s equations, as we learn from the well-known review by Ehlers and Kundt Ehlers . These waves have been recently reconsidered in connection with the memory effect ZDGH1 ; ZDGH2 ; ZDGH3 ; ZDGH4 . The memory effect is the permanent displacement of massive particles and ordinary objects of a physical system caused by the passage of a non-linear gravitational wave (although the memory effect is also considered in the context of linearised gravitational waves). In particular, the dynamical state of the massive particles is different before and after the passage of the wave Artigo1 ; Artigo2 ; Artigo3 ; Artigo4 , in view of the velocity memory effect. The “permanent displacement” mentioned above may be understood as a plastic deformation of the physical medium, which is constituted by massive particles, and in this sense propagating defects in metals (and crystalline lattices) and non-linear gravitational waves share relevant features. On the other hand, linearised gravitational waves may be understood as elastic deformations of the space-time. In this article we will consider several relevant circuits in the space-time of pp-waves and construct the corresponding Burgers vector. The graphical distribution of Burgers vectors in three-dimensional space allows an alternative understanding and characterization of these waves. We will also evaluate the gravitational pressure that the pp-waves exert on certain surfaces, which allows us to obtain the gravitational force applied to idealized particles.
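The closure failure of a Burgers circuit can be illustrated with a minimal numerical sketch (all values here are illustrative assumptions, not quantities taken from the text): for a screw dislocation along the $z$ axis, the continuum displacement field is $u_z = b_0\,\phi/2\pi$, and summing the increments of $u_z$ around a closed loop reproduces exactly the Burgers vector $b_0$.

```python
import numpy as np

# Illustrative Burgers vector magnitude (e.g. one lattice spacing, in metres).
b0 = 2.5e-10

# Screw dislocation along the z axis: displacement field u_z = b0 * phi / (2*pi).
phi = np.linspace(0.0, 2.0 * np.pi, 100001)
u_z = b0 * phi / (2.0 * np.pi)

# Discrete line integral of du_z around the closed circuit: the circuit
# fails to close by exactly the Burgers vector b0.
closure_failure = np.sum(np.diff(u_z))
print(closure_failure)  # approximately b0
```

The same logic underlies the space-time computation in Sec. III.2: there the role of the displacement increments is played by the tetrad components integrated along the circuit.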
The evaluation of the gravitational pressure is carried out with the definitions established in the teleparallel equivalent of general relativity. Under certain approximations, the resulting expressions of the gravitational pressure are quite simple and interesting. This article is divided as follows. In section II the Teleparallel Equivalent of General Relativity (TEGR) is described. In section III the generalized pp-waves are introduced, and the strain tensor and the Burgers vector are also evaluated. In section IV, the gravitational pressure and the force exerted by the wave are calculated. Finally, in the last section, the conclusions are presented. We use the geometrical system of units, in which $\displaystyle G=c=1.$ ## II Teleparallel Equivalent of General Relativity (TEGR) In this section we briefly introduce the ideas of the Teleparallel Equivalent of General Relativity (TEGR) along the lines of reference maluf2013teleparallel . In this approach, the gravitational field is represented in terms of the dynamical tetrad field $\displaystyle e^{a}\,_{\mu}$, which at the same time establishes the reference frame through the choice of its six additional components relative to the metric tensor. The geometric framework of the TEGR is such that absolute parallelism is a fundamental attribute of space-time.
This condition is determined by the Weitzenböck connection $\displaystyle\Gamma_{\mu\lambda\nu}=e^{a}\,_{\mu}\partial_{\lambda}e_{a\nu}\,$ which has a vanishing curvature and a torsion tensor defined by $T^{a}\,_{\lambda\nu}=\partial_{\lambda}e^{a}\,_{\nu}-\partial_{\nu}e^{a}\,_{\lambda}\,.$ (1) The Weitzenböck connection is related to the Christoffel symbols, $\displaystyle{}^{0}\Gamma_{\mu\lambda\nu}$, identically by $\Gamma_{\mu\lambda\nu}={}^{0}\Gamma_{\mu\lambda\nu}+K_{\mu\lambda\nu}\,,$ (2) where $\displaystyle K_{\mu\lambda\nu}$ is the contortion tensor, given by $\displaystyle\displaystyle K_{\mu\lambda\nu}$ $\displaystyle\displaystyle=$ $\displaystyle\displaystyle\frac{1}{2}(T_{\lambda\mu\nu}+T_{\nu\lambda\mu}+T_{\mu\lambda\nu})\,,$ (3) with $\displaystyle T_{\mu\lambda\nu}=e_{a\mu}T^{a}\,_{\lambda\nu}$. The expression (2) induces a direct relationship between the Ricci scalar and a quadratic combination of the torsion tensor. It reads $eR(e)\equiv-e({1\over 4}T^{abc}T_{abc}+{1\over 2}T^{abc}T_{bac}-T^{a}T_{a})+2\partial_{\mu}(eT^{\mu})\,.$ (4) It should be noted that the left-hand side of the above expression is the Hilbert-Einstein Lagrangian density. Thus the TEGR Lagrangian density is given by $\displaystyle\displaystyle\mathfrak{L}(e_{a\mu})$ $\displaystyle\displaystyle=$ $\displaystyle\displaystyle-\kappa\,e\,({1\over 4}T^{abc}T_{abc}+{1\over 2}T^{abc}T_{bac}-T^{a}T_{a})-\mathfrak{L}_{M}$ (5) $\displaystyle\displaystyle\equiv$ $\displaystyle\displaystyle-\kappa\,e\Sigma^{abc}T_{abc}-\mathfrak{L}_{M}\;,$ where $\displaystyle\kappa=1/(16\pi)$, $\displaystyle\mathfrak{L}_{M}$ is the Lagrangian density of the matter fields and $\displaystyle\Sigma^{abc}$ is defined as $\Sigma^{abc}={1\over 4}(T^{abc}+T^{bac}-T^{cab})+{1\over 2}(\eta^{ac}T^{b}-\eta^{ab}T^{c})\;,$ (6) with $\displaystyle T^{a}=e^{a}\,_{\mu}T^{\mu}=e^{a}\,_{\mu}T^{\nu}\,_{\nu}\,{}^{\mu}$.
Hence the field equations that are equivalent to Einstein’s equations read $\partial_{\nu}\left(e\Sigma^{a\lambda\nu}\right)={1\over{4\kappa}}e\,e^{a}\,_{\mu}(t^{\lambda\mu}+T^{\lambda\mu})\;,$ (7) where $t^{\lambda\mu}=\kappa\left[4\,\Sigma^{bc\lambda}T_{bc}\,^{\mu}-g^{\lambda\mu}\,\Sigma^{abc}T_{abc}\right]\,,$ (8) is the gravitational energy-momentum tensor. Such an expression goes one step further towards the solution of the longstanding problem of gravitational energy. Due to the antisymmetry of $\displaystyle\Sigma^{a\lambda\nu}$, it is possible to obtain $\partial_{\lambda}\partial_{\nu}\left(e\Sigma^{a\lambda\nu}\right)\equiv 0\,.$ (9) Then the energy-momentum vector is $P^{a}=\int_{V}d^{3}x\,e\,e^{a}\,_{\mu}(t^{0\mu}+T^{0\mu})\,,$ (10) which can be equivalently expressed as $P^{a}=4\kappa\,\int_{V}d^{3}x\,\partial_{\nu}\left(e\,\Sigma^{a0\nu}\right)\,.$ (11) It should be noted that the energy-momentum vector is invariant under coordinate transformations of the 3-dimensional space, and under time reparametrizations. On the other hand, it transforms as a vector under SO(3,1) symmetry. From equations (7) and (9) one obtains $\displaystyle{d\over{dt}}\int_{V}d^{3}x\,e\,e^{a}\,_{\mu}(t^{0\mu}+T^{0\mu})=-\oint_{S}dS_{j}\,\left[e\,e^{a}\,_{\mu}(t^{j\mu}+T^{j\mu})\right]\,,$ which indicates an energy-momentum flux, where the surface $\displaystyle S$ bounds the volume $\displaystyle V$.
The gravitational energy-momentum flux is defined as $\Phi^{a}_{g}=\oint_{S}dS_{j}\,\,(e\,e^{a}\,_{\mu}t^{j\mu})\,,$ (12) and the energy-momentum flux of matter fields as $\Phi^{a}_{m}=\oint_{S}dS_{j}\,\,(e\,e^{a}\,_{\mu}T^{j\mu})\,.$ (13) Then, it is possible to write $\displaystyle\displaystyle{{dP^{a}}\over{dt}}$ $\displaystyle\displaystyle=$ $\displaystyle\displaystyle-\left(\Phi^{a}_{g}+\Phi^{a}_{m}\right)$ (14) $\displaystyle\displaystyle=$ $\displaystyle\displaystyle-4\kappa\oint_{S}dS_{j}\,\partial_{\nu}(e\Sigma^{aj\nu})\,.$ The momentum flux is given by the spatial part of the above equation, thus ${{dP^{(i)}}\over{dt}}=-\oint_{S}dS_{j}\,\phi^{(i)j}\,,$ (15) where $\phi^{(i)j}=4\kappa\partial_{\nu}(e\Sigma^{(i)j\nu})\,.$ (16) Equation (15) has the nature of a force; therefore, equation (16) represents the pressure along the direction $\displaystyle(i)$ exerted on a surface oriented along $\displaystyle j$. ## III The generalized pp-waves The line element of the pp-waves can be described in double null coordinates $\displaystyle u,v$ by the generalized line element $ds^{2}=H(u,x,y)du^{2}+dx^{2}+dy^{2}+2dudv-2a_{1}(u,x,y)dudx-2a_{2}(u,x,y)dudy\,.$ (17) The surfaces $\displaystyle u=constant$ are flat and the wave propagates along the null direction $\displaystyle v$. The functions $\displaystyle a_{1,2}(u,x,y)$ may be eliminated locally by an appropriate choice of coordinates, and therefore may be chosen as zero. However, some topological properties of the space-time may be lost when such a choice is made podolsky2014gyratonic . Topological defects manifest themselves as a global effect; thus it is worth considering the generalized form of the pp-wave metric and particularizing it later to special cases.
The line element (17) may be written in Cartesian coordinates by means of the relations $u=\frac{z-t}{\sqrt{2}}\,,\qquad v=\frac{z+t}{\sqrt{2}}\,.$ (18) The line element becomes $\displaystyle\displaystyle ds^{2}$ $\displaystyle\displaystyle=\left(\frac{H}{2}-1\right)dt^{2}+dx^{2}+dy^{2}+\left(\frac{H}{2}+1\right)dz^{2}-Hdtdz$ $\displaystyle\displaystyle+\sqrt{2}a_{1}dtdx-\sqrt{2}a_{1}dxdz+\sqrt{2}a_{2}dtdy-\sqrt{2}a_{2}dydz\,.$ (19) The regular pp-waves may be obtained by choosing $\displaystyle a_{1,2}(u,x,y)=0$, as mentioned above. A tetrad field adapted to a stationary observer and related to the line element above can be written as $e_{a\mu}=\left(\begin{array}[]{cccc}-A&a_{1}/\sqrt{2}A&a_{2}/\sqrt{2}A&-H/2A\\\ 0&1&0&0\\\ 0&0&1&0\\\ 0&-a_{1}/\sqrt{2}A&-a_{2}/\sqrt{2}A&1/A\\\ \end{array}\right)\,,$ (20) where $\displaystyle a$ and $\displaystyle\mu$ denote rows and columns, respectively, and $\displaystyle A=\sqrt{1-H/2}$. A particular class of pp-waves that can be described by the line element (17) are the gyratonic waves frolov2005gravitational ; frolov2005gravitational2 ; PRD75 . These waves represent the exterior field of gyratons, which are spinning particles moving at the speed of light.
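As a consistency check, one can verify symbolically that the tetrad (20) reproduces the metric of the line element (19) through $g_{\mu\nu}=\eta^{ab}e_{a\mu}e_{b\nu}$. A minimal sketch with sympy, treating $H$, $a_1$, $a_2$ as independent symbols (which suffices for this purely algebraic identity):

```python
import sympy as sp

H, a1, a2 = sp.symbols('H a1 a2')
s2 = sp.sqrt(2)
A = sp.sqrt(1 - H / 2)

# Tetrad (20): rows are Lorentz indices a, columns are spacetime indices mu.
e = sp.Matrix([
    [-A, a1 / (s2 * A), a2 / (s2 * A), -H / (2 * A)],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, -a1 / (s2 * A), -a2 / (s2 * A), 1 / A],
])

eta = sp.diag(-1, 1, 1, 1)
g = e.T * eta * e  # g_{mu nu} = eta^{ab} e_{a mu} e_{b nu}

# Metric read off directly from the line element (19), coordinates (t, x, y, z).
g_expected = sp.Matrix([
    [H / 2 - 1, a1 / s2, a2 / s2, -H / 2],
    [a1 / s2, 1, 0, -a1 / s2],
    [a2 / s2, 0, 1, -a2 / s2],
    [-H / 2, -a1 / s2, -a2 / s2, 1 + H / 2],
])

print((g - g_expected).applyfunc(sp.simplify))  # zero matrix
```

The check also confirms the sign conventions of the cross terms $\pm\sqrt{2}\,a_{1,2}$ in (19).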
In order to describe these waves, we introduce cylindrical coordinates in the transverse plane of the propagation line by means of the standard relations $\displaystyle\displaystyle x$ $\displaystyle\displaystyle=\rho\cos{\phi}\,,$ $\displaystyle\displaystyle y$ $\displaystyle\displaystyle=\rho\sin{\phi}\,,$ together with the functions, $\displaystyle\displaystyle a_{1}$ $\displaystyle\displaystyle=-\frac{J}{\rho}\sin{\phi}\,,$ (21) $\displaystyle\displaystyle a_{2}$ $\displaystyle\displaystyle=\frac{J}{\rho}\cos{\phi}\,.$ (22) Then, the line element (19) becomes $ds^{2}=\left(\frac{H}{2}-1\right)dt^{2}+d\rho^{2}+\rho^{2}d\phi^{2}+\sqrt{2}Jdtd\phi-\sqrt{2}Jdzd\phi+\left(1+\frac{H}{2}\right)dz^{2}-Hdtdz\,.$ (23) The function $\displaystyle J$ is related to the spinning nature of the gyratons. Similarly, the tetrad field adapted to a stationary observer for the above metric may be rewritten as $e^{\prime}_{a\mu}=\left(\begin{array}[]{cccc}-A&0&\frac{J}{\sqrt{2}A}&-H/2A\\\ 0&\cos(\phi)&-\rho\sin(\phi)&0\\\ 0&\sin(\phi)&\rho\cos(\phi)&0\\\ 0&0&-\frac{J}{\sqrt{2}A}&1/A\\\ \end{array}\right)\,.$ (24) It should be noted that for $\displaystyle J=0$ the regular pp-waves are recovered. From the field equations, the relation between $\displaystyle H$ and $\displaystyle J$ is given by $\nabla^{2}H=\frac{2}{\rho^{2}}\left(\partial_{u}\partial_{\phi}J\right)\,,$ (25) where $\displaystyle\nabla^{2}\equiv\partial_{\rho}\partial_{\rho}+\frac{1}{\rho}\partial_{\rho}+\frac{1}{\rho^{2}}\partial_{\phi}\partial_{\phi}$ in cylindrical coordinates. 
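The passage from the Cartesian form (19) to the cylindrical form (23), with $a_1$ and $a_2$ given by (21) and (22), can likewise be checked symbolically via the tensor transformation $g'_{\alpha\beta}=\Lambda^{\mu}{}_{\alpha}\Lambda^{\nu}{}_{\beta}\,g_{\mu\nu}$. A sketch treating $H$ and $J$ as plain symbols (their functional dependence plays no role in this algebraic step):

```python
import sympy as sp

H, J, rho, phi = sp.symbols('H J rho phi', positive=True)
s2 = sp.sqrt(2)

# a_1, a_2 from (21)-(22), expressed in cylindrical coordinates.
a1 = -J * sp.sin(phi) / rho
a2 = J * sp.cos(phi) / rho

# Metric of (19) in coordinates (t, x, y, z).
g = sp.Matrix([
    [H / 2 - 1, a1 / s2, a2 / s2, -H / 2],
    [a1 / s2, 1, 0, -a1 / s2],
    [a2 / s2, 0, 1, -a2 / s2],
    [-H / 2, -a1 / s2, -a2 / s2, 1 + H / 2],
])

# Jacobian d(t, x, y, z)/d(t, rho, phi, z) with x = rho cos(phi), y = rho sin(phi).
Lam = sp.Matrix([
    [1, 0, 0, 0],
    [0, sp.cos(phi), -rho * sp.sin(phi), 0],
    [0, sp.sin(phi), rho * sp.cos(phi), 0],
    [0, 0, 0, 1],
])

g_cyl = (Lam.T * g * Lam).applyfunc(sp.simplify)

# Metric read off from the line element (23), coordinates (t, rho, phi, z).
g_expected = sp.Matrix([
    [H / 2 - 1, 0, J / s2, -H / 2],
    [0, 1, 0, 0],
    [J / s2, 0, rho**2, -J / s2],
    [-H / 2, 0, -J / s2, 1 + H / 2],
])

print((g_cyl - g_expected).applyfunc(sp.simplify))  # zero matrix
```

In particular, the $d\rho$ cross terms cancel identically, which is why no $dt\,d\rho$ or $d\rho\,dz$ term appears in (23).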
For $\displaystyle J=J(u)$ we have $\displaystyle\partial_{\phi}J=0$, and equation (25) reduces to $\nabla^{2}H=0\,.$ (26) It is then possible to solve equation (26) to obtain the following classes of solutions $\displaystyle\displaystyle H_{0}$ $\displaystyle\displaystyle=-C_{0}\ln{\left(\frac{x^{2}+y^{2}}{a^{2}}\right)}f(u)\,,$ (27) $\displaystyle\displaystyle H_{1+}$ $\displaystyle\displaystyle=-C_{1+}\left(x^{2}-y^{2}\right)f(u)\,,$ (28) $\displaystyle\displaystyle H_{1\times}$ $\displaystyle\displaystyle=-C_{1\times}\left(xy\right)f(u)\,,$ (29) $\displaystyle\displaystyle H_{2+}$ $\displaystyle\displaystyle=-C_{2+}\frac{x^{2}-y^{2}}{\left(x^{2}+y^{2}\right)^{2}}f(u)\,,$ (30) $\displaystyle\displaystyle H_{2\times}$ $\displaystyle\displaystyle=-C_{2\times}\frac{xy}{\left(x^{2}+y^{2}\right)^{2}}f(u)\,.$ (31) Here, the multiplicative factors have the proper dimensions to render $\displaystyle H$ dimensionless; for instance, $\displaystyle C_{1+}$ and $\displaystyle C_{1\times}$ have dimension of inverse squared distance, $\displaystyle C_{2+}$ and $\displaystyle C_{2\times}$ have dimension of squared distance, while $\displaystyle C_{0}$ is dimensionless. The constant $\displaystyle a$ in (27) delimits the validity region of the solution in vacuum, i.e., the radius of the source. These functions are not the only possible solutions, but they have a well-established physical meaning. The function $\displaystyle f(u)$ is arbitrary and establishes the form of the pulse, usually chosen as a Gaussian. In the next subsections, effects such as deformations and distortions associated with generalized pp-waves will be analyzed. ### III.1 The strain tensor In solid mechanics, the deformation of materials is an important feature in understanding their properties. When an elastic deformation is present, the strain tensor quantifies the relative amount of change during the deformation.
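Stepping back to the vacuum solutions (27)-(31): each can be verified to be harmonic, i.e., to satisfy (26), with a short symbolic computation. A sketch using sympy, where the constants $C$ and the pulse $f(u)$ are dropped since the transverse Laplacian acts only on the $(x,y)$ dependence:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
a = sp.Symbol('a', positive=True)
r2 = x**2 + y**2

# Spatial profiles of the solutions (27)-(31).
profiles = [
    -sp.log(r2 / a**2),       # H_0
    -(x**2 - y**2),           # H_{1+}
    -x * y,                   # H_{1x}
    -(x**2 - y**2) / r2**2,   # H_{2+}
    -x * y / r2**2,           # H_{2x}
]

# Transverse Laplacian in Cartesian form, equivalent to the cylindrical
# operator defined below eq. (25).
laplacians = [sp.simplify(sp.diff(h, x, 2) + sp.diff(h, y, 2)) for h in profiles]
print(laplacians)  # [0, 0, 0, 0, 0]
```

This reflects the fact that the profiles are (up to sign) the real and imaginary parts of analytic functions of $x+iy$, which are automatically harmonic.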
In the case of plastic deformations such as dislocations, where Hooke’s law does not apply everywhere, a dislocation core is constructed, and outside this core Hooke’s law is applied. In a plastic deformation, the components of the strain tensor can be written as a function of the dislocation intensity, i.e., the Burgers vector. We can import this concept into space-time. Thus we introduce the strain tensor, understood as the difference between the geometries before and after a given event, for instance, the passage of a gravitational wave. Therefore the strain tensor is defined as $\varepsilon_{\mu\nu}\equiv\left(g_{\mu\nu}-\bar{g}_{\mu\nu}\right)$ (32) where $\displaystyle\bar{g}_{\mu\nu}$ is the flat space-time metric in an arbitrary coordinate system and $\displaystyle g_{\mu\nu}$ is the metric tensor of the gravitational wave. Usually, in the case of metals, this tensor is built in three dimensions. Here it is always possible to take the three-dimensional part of the deformation tensor for comparison. The strain tensor calculated from the metric (19) is given by $\varepsilon_{\mu\nu}=\frac{1}{2}\left(\begin{array}[]{cccc}H&-\sqrt{2}a_{1}&-\sqrt{2}a_{2}&H\\\ -\sqrt{2}a_{1}&0&0&-\sqrt{2}a_{1}\\\ -\sqrt{2}a_{2}&0&0&-\sqrt{2}a_{2}\\\ H&-\sqrt{2}a_{1}&-\sqrt{2}a_{2}&H\\\ \end{array}\right)\,.$ (33) A deformation is a measurable effect, that is, it does not depend on the chosen coordinate system. Hence, it is necessary to project such a quantity onto the Lorentz symmetry indices.
We then get $\varepsilon^{ab}=e^{a\mu}e^{b\nu}\varepsilon_{\mu\nu}=\frac{1}{2}\left(\begin{array}[]{cccc}H/A&-\sqrt{2}a_{1}/A&-\sqrt{2}a_{2}/A&H/A\\\ -\sqrt{2}a_{1}/A&0&0&-\sqrt{2}a_{1}/A\\\ -\sqrt{2}a_{2}/A&0&0&-\sqrt{2}a_{2}/A\\\ H/A&-\sqrt{2}a_{1}/A&-\sqrt{2}a_{2}/A&H/A\\\ \end{array}\right)\,.$ (34) As a consequence, the 3D strain tensor of a gyratonic wave is $\varepsilon^{(i)(j)}=\frac{1}{2}\left(\begin{array}[]{ccc}0&0&\frac{\sqrt{2}J}{\rho A}\sin{\phi}\\\ 0&0&-\frac{\sqrt{2}J}{\rho A}\cos{\phi}\\\ \frac{\sqrt{2}J}{\rho A}\sin{\phi}&-\frac{\sqrt{2}J}{\rho A}\cos{\phi}&H/A\\\ \end{array}\right)\,,$ (35) from which it is possible to see that regular pp-waves are responsible for longitudinal deformations, while gyratonic waves also cause transverse shearing. ### III.2 The Burgers vector In a spacetime with torsion, the Burgers vector is defined as $b^{a}=\frac{1}{2}\int_{S}T^{a}\,_{\mu\nu}dx^{\mu}\wedge dx^{\nu}\,,$ (36) where $\displaystyle\wedge$ is the exterior product. This means that torsion is the surface density of dislocations. Due to the torsion symmetry and the properties of the tetrad (20), the spatial components of the Burgers vector can be written as $\displaystyle b^{(i)}=\oint_{\mathcal{C}}{e^{(i)}\,_{j}dx^{j}}\,,$ where $\displaystyle\mathcal{C}$ is the closed path that bounds the surface $\displaystyle S$. It is worth noting that the result of this integral depends on the path taken. In order to construct the Burgers circuit, a plane can be chosen. First, let us consider a square with side $\displaystyle 2L$ in the YZ plane, centered at an arbitrary point $\displaystyle(t_{0},x_{0},y_{0},z_{0})$.
Thus $\displaystyle\displaystyle b^{(i)}$ $\displaystyle\displaystyle=\int^{y_{0}+L}_{y_{0}-L}{e^{(i)}\,_{2}(t_{0},x_{0},y,z_{0}+L)dx^{2}}+\int^{z_{0}-L}_{z_{0}+L}{e^{(i)}\,_{3}(t_{0},x_{0},y_{0}-L,z)dx^{3}}$ $\displaystyle\displaystyle+\int^{y_{0}-L}_{y_{0}+L}{e^{(i)}\,_{2}(t_{0},x_{0},y,z_{0}-L)dx^{2}}+\int^{z_{0}+L}_{z_{0}-L}{e^{(i)}\,_{3}(t_{0},x_{0},y_{0}+L,z)dx^{3}}\,.$ (37) Then, the only non-vanishing component of the Burgers vector is $\displaystyle b^{(3)}=-\frac{1}{\sqrt{2}}\int^{y_{0}+L}_{y_{0}-L}{\left.\frac{a_{2}}{A}\right|^{z=z_{0}+L}_{z=z_{0}-L}dy}+\int^{z_{0}+L}_{z_{0}-L}{\left.\frac{1}{A}\right|^{y=y_{0}+L}_{y=y_{0}-L}dz}\,.$ The result for a regular pp-wave can be obtained by choosing $\displaystyle a_{1,2}=0$, yielding $b^{(3)}=\int^{z_{0}+L}_{z_{0}-L}{\left.\frac{1}{A}\right|^{y=y_{0}+L}_{y=y_{0}-L}dz}\,.$ (38) We must note that the Burgers vector is calculated locally, and therefore depends on the shape of the circuit and the location of its center. This means that we have a distribution of vectors in space, i.e., a vector field. Each set of coordinates $\displaystyle(t_{0},x_{0},y_{0},z_{0})$ will have its own Burgers vector within its neighborhood. In the figures below we show the form of this distribution for some choices of $\displaystyle H$, all for a regular pp-wave. All figures were obtained for $\displaystyle 2L=0.002$ and $\displaystyle f(u)=e^{-u^{2}}$. In all of the figures, the side bar indicates a color scale for the modulus of the vector. Figure 1: Burgers vector for $\displaystyle H_{0}$ with $\displaystyle t=x=0$ and $\displaystyle C_{0}=1$. The points were distributed with $\displaystyle y_{0}$ ranging from $\displaystyle-10$ to $\displaystyle-1$ and from $\displaystyle+1$ to $\displaystyle+10$, in steps of $\displaystyle 0.05$; and $\displaystyle z_{0}$ ranging from $\displaystyle-3$ to $\displaystyle+3$, in steps of $\displaystyle 0.1$. The region around the propagation axis was excluded from integration.
Figure 2: Burgers vector for $\displaystyle H_{2+}$ with $\displaystyle t=x=0$ and $\displaystyle C_{2+}=1$. The points were distributed with $\displaystyle y_{0}$ ranging from $\displaystyle-5$ to $\displaystyle-0.8$ and from $\displaystyle+0.8$ to $\displaystyle+5$, in steps of $\displaystyle 0.05$; and $\displaystyle z_{0}$ ranging from $\displaystyle-3$ to $\displaystyle+3$, in steps of $\displaystyle 0.1$. The region around the propagation axis was excluded from integration. The bars on the right side indicate the modulus of the Burgers vector on both sides. Figure 3: Burgers vector for $\displaystyle H_{2\times}$ with $\displaystyle t=0$, $\displaystyle x=0.1$ and $\displaystyle C_{2\times}=1$. The points were distributed with $\displaystyle y_{0}$ ranging from $\displaystyle-5$ to $\displaystyle-0.8$ and from $\displaystyle+0.8$ to $\displaystyle+5$, in steps of $\displaystyle 0.05$; and $\displaystyle z_{0}$ ranging from $\displaystyle-3$ to $\displaystyle+3$, in steps of $\displaystyle 0.1$. The region around the propagation axis was excluded from integration. The bars on the right side indicate the modulus of the Burgers vector on both sides. Figure 4: Burgers vector for $\displaystyle H_{1+}$ with $\displaystyle t=x=0$ and $\displaystyle C_{1+}=a=1$. The points were distributed with $\displaystyle y_{0}$ ranging from $\displaystyle-1.5$ to $\displaystyle 1.5$, in steps of $\displaystyle 0.05$; and $\displaystyle z_{0}$ ranging from $\displaystyle-3$ to $\displaystyle+3$, in steps of $\displaystyle 0.1$. The bars on the right side indicate the modulus of the Burgers vector on both sides. If we choose a similar circuit in the plane XZ, we obtain $\displaystyle b^{(3)}=-\frac{1}{\sqrt{2}}\int^{L}_{-L}{\left.\frac{a_{1}}{A}\right|^{z=L}_{z=-L}dx}+\int^{L}_{-L}{\left.\frac{1}{A}\right|^{x=L}_{x=-L}dz}\,.$ In Figures 1, 2, 3 and 4, a non-vanishing distribution of Burgers vectors for the regular pp-wave is displayed.
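The circuit integral (38) is straightforward to evaluate numerically. A small sketch for the $H_{1+}$ polarization with a Gaussian pulse; the amplitude, circuit size, and circuit location are illustrative choices (not those used in the figures), with the amplitude kept small so that $H<2$ and $A$ stays real:

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch).
C1p = 0.1                              # amplitude of H_{1+}
L = 0.001                              # half-side of the square circuit
t0, x0, y0, z0 = 0.0, 0.0, 2.0, 0.0    # circuit center, off the wave axis

def A(x, y, z, t):
    u = (z - t) / np.sqrt(2.0)
    H = -C1p * (x**2 - y**2) * np.exp(-u**2)   # H_{1+} f(u), eq. (28)
    return np.sqrt(1.0 - H / 2.0)

# Eq. (38): only the two z-directed legs of the square contribute.
z = np.linspace(z0 - L, z0 + L, 2001)
integrand = 1.0 / A(x0, y0 + L, z, t0) - 1.0 / A(x0, y0 - L, z, t0)
b3 = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))  # trapezoid rule
print(b3)  # small but non-zero: the circuit fails to close
```

Sweeping the circuit center $(y_0, z_0)$ over a grid with this routine reproduces the kind of vector-field maps shown in Figures 1 to 4.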
They are consistent with the polarization of the chosen $\displaystyle H$ solution. It is worth noting that the Burgers vector has a direct relationship with the strain tensor, despite the latter arising from the metric tensor while the former comes from the space-time torsion. With $\displaystyle J=0$, only the $\displaystyle(3)(3)$ component of the strain tensor survives, and at the same time the Burgers vector has only a z component, for all polarizations. That is, the choice of $\displaystyle H$ alone determines the evolution of the system. The most interesting solution is obtained by choosing a circuit perpendicular to the axis of propagation Z. In order to present this solution, we consider a circular path centered at the propagation axis and focus only on the gyratonic wave. The Burgers vector is $b^{(i)}=\int_{0}^{2\pi}{e^{\prime(i)}\,_{2}d\phi}=\int_{0}^{2\pi}{e^{\prime(3)}\,_{2}d\phi}\,,$ (39) where $\displaystyle e^{\prime}$ is given by (24). Similarly to the above cases, the only non-vanishing component is $b^{\prime(3)}=b=-\frac{1}{\sqrt{2}}\int_{0}^{2\pi}{\frac{J}{A}d\phi}\,.$ (40) The Burgers vector vanishes when evaluated on this circuit for a regular pp-wave, i.e., for $\displaystyle J=0$. An interesting result occurs when we have an axially symmetric wave, i.e., $\displaystyle J=J(u)$ and $\displaystyle H=H(u,\rho)$. In this particular case it is possible to evaluate the integral analytically, obtaining $b=-\frac{2\pi J}{\sqrt{2}A}\,.$ (41) We can define a dislocation core and write the strain tensor as a function of the Burgers vector.
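The closed form (41) can be checked against a direct quadrature of (40). In the sketch below, `J` and `A` are placeholder callables standing in for the gyraton function and metric function on the circuit; for the axially symmetric case both are constant on the circle and the quadrature reproduces $\displaystyle b=-2\pi J/(\sqrt{2}A)$.

```python
import math

def burgers_circular(J, A, n=4096):
    """Evaluate Eq. (40), b = -(1/sqrt(2)) * int_0^{2pi} J(phi)/A(phi) dphi,
    by a simple Riemann sum; J and A are placeholders for the gyraton
    and metric functions evaluated on the circuit."""
    dphi = 2.0 * math.pi / n
    total = sum(J(i * dphi) / A(i * dphi) for i in range(n)) * dphi
    return -total / math.sqrt(2.0)

# Axially symmetric case: J and A are constant on the circuit, so the
# quadrature should match the closed form b = -2*pi*J/(sqrt(2)*A).
b_num = burgers_circular(lambda p: 0.5, lambda p: 1.3)
b_closed = -2.0 * math.pi * 0.5 / (math.sqrt(2.0) * 1.3)
```

Setting `J` to the zero function recovers the regular pp-wave result $\displaystyle b=0$ on this circuit.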
The strain tensor can be transformed into polar coordinates as $\varepsilon_{(\phi)(z)}=-\sin\phi\,\varepsilon_{(1)(3)}+\cos\phi\,\varepsilon_{(2)(3)}\,.$ (42) Thus, using (35) and (41), we obtain $\varepsilon_{(\phi)(z)}=\frac{b}{2\pi\rho}\,.$ (43) This is exactly the result observed in a crystal with a screw dislocation, as can be seen in equation (5.3) of Smallman. The stress field obtained outside the dislocation core falls off as $\displaystyle\rho^{-1}$, and therefore constitutes a long-range field. As a consequence, a particle can feel the effects of the dislocation even far from the core. ## IV Gravitational pressure In this section we calculate the gravitational force imparted by gyratonic gravitational waves. We begin by evaluating the torsion components; the non-vanishing ones are $\begin{split}&T^{(0)(0)(1)}=-T^{(3)(1)(3)}=-\frac{1}{4\rho A^{2}}\left(2\sqrt{2}\sin{\phi}\partial_{t}J-\sin{\phi}\partial_{\phi}H+\rho\cos{\phi}\partial_{\rho}H\right)\,,\\\ &T^{(0)(0)(2)}=-T^{(3)(2)(3)}=-\frac{1}{4\rho A^{2}}\left(-2\sqrt{2}\cos{\phi}\partial_{t}J+\cos{\phi}\partial_{\phi}H+\rho\sin{\phi}\partial_{\rho}H\right)\,,\\\ &T^{(0)(0)(3)}=T^{(3)(0)(3)}=-\frac{1}{4A^{3}}\partial_{t}H\,,\\\ &T^{(0)(1)(3)}=\frac{1}{2\rho A^{2}}\left(\sqrt{2}\sin{\phi}\partial_{t}J-\sin{\phi}\partial_{\phi}H+\rho\cos{\phi}\partial_{\rho}H\right)\,,\\\ &T^{(0)(2)(3)}=\frac{1}{2\rho A^{2}}\left(-\sqrt{2}\cos{\phi}\partial_{t}J+\cos{\phi}\partial_{\phi}H+\rho\sin{\phi}\partial_{\rho}H\right)\,,\\\ &T^{(3)(0)(1)}=-\frac{1}{\sqrt{2}\rho A^{2}}\sin{\phi}\partial_{t}J\,,\\\ &T^{(3)(0)(2)}=\frac{1}{\sqrt{2}\rho A^{2}}\cos{\phi}\partial_{t}J\,.\end{split}$ Then, using the above quantities, we have $\begin{split}&\Sigma^{(0)01}=\Sigma^{(3)01}=-\frac{1}{4\sqrt{2}}\frac{\partial_{\rho}H}{\sqrt{2-H}}\\\ &\Sigma^{(0)02}=\Sigma^{(3)02}=-\frac{1}{4\sqrt{2}\rho^{2}}\frac{\partial_{\phi}H}{\sqrt{2-H}}+\frac{\partial_{t}J}{2\rho^{2}\sqrt{2-H}}\\\ 
&\Sigma^{(1)01}=\frac{1}{4}\frac{\partial_{t}H}{2-H}\cos{\phi}\\\ &\Sigma^{(1)02}=-\frac{1}{4\rho}\frac{\partial_{t}H}{2-H}\sin{\phi}\\\ &\Sigma^{(1)03}=-\frac{1}{4\rho}\frac{\partial_{\phi}H\sin{\phi}-\rho\partial_{\rho}H\cos{\phi}}{2-H}\\\ &\Sigma^{(2)01}=\frac{1}{4}\frac{\partial_{t}H}{2-H}\sin{\phi}\\\ &\Sigma^{(2)02}=\frac{1}{4\rho}\frac{\partial_{t}H}{2-H}\cos{\phi}\\\ &\Sigma^{(2)03}=\frac{1}{4\rho}\frac{\partial_{\phi}H\cos{\phi}+\rho\partial_{\rho}H\sin{\phi}}{2-H}\,.\end{split}$ The non-null components of the energy-momentum tensor are $t^{00}=t^{03}=t^{33}=-\frac{k}{8\rho^{2}A^{2}}\left[\rho^{2}(\partial_{\rho}H)^{2}+(\partial_{\phi}H)^{2}+2\partial_{u}J\partial_{\phi}H\right]\,.$ (44) Finally, the gravitational pressure defined by (16) is given by $\phi^{(3)3}=-\frac{k}{8\rho A^{3}}\left[\rho^{2}(\partial_{\rho}H)^{2}+(\partial_{\phi}H)^{2}+2\partial_{u}J\partial_{\phi}H\right]\,.$ (45) The total force is obtained by integrating over a surface $\displaystyle S$, $\displaystyle F^{(i)}=-\int{\phi^{(i)j}dS_{j}}\,.$ Hence, if a surface whose normal vector is oriented along the z axis is chosen, then $F_{z}\equiv F^{(3)}=\int{\phi^{(3)3}dS_{3}}=\frac{k}{8}\int d\rho d\phi\frac{\rho^{2}(\partial_{\rho}H)^{2}+(\partial_{\phi}H)^{2}+2\partial_{u}J\partial_{\phi}H}{\rho A^{3}}\,.$ (46) The force is therefore longitudinal: the gravitational wave imparts a force on hypothetical particles along the direction of propagation of the wave. In the case of an axially symmetric wave, i.e., $\displaystyle H=H(u,\rho)$ and $\displaystyle J=J(u)$, the integral (46) can be evaluated analytically by choosing the surface of integration $\displaystyle S$ as a disc centered on the $\displaystyle z$ axis, with inner and outer radii $\displaystyle\rho_{1}$ and $\displaystyle\rho_{2}$, respectively. 
In this case, where the solution of the Einstein equation (26) is given by (27), we obtain $F_{z}=-\frac{C_{0}}{16}\left(\frac{1}{A(u,\rho_{2})}-\frac{1}{A(u,\rho_{1})}\right)\,,$ (47) where we have set $\displaystyle a=1$ in the solution (27). The gravitational force can be calculated numerically for the different solutions. Figure 5 shows the corresponding graphs as functions of the $\displaystyle z$ coordinate at $\displaystyle t=0$. The force is negative in all cases considered. There is a deformation parallel to the Burgers vector itself, as indicated by the strain tensor component $\displaystyle\varepsilon_{(3)(3)}$. Figure 5: Forces for solutions (27,30,31) with $\displaystyle C_{0}=1/4$, $\displaystyle C_{2+}=C_{2\times}=1$, $\displaystyle a=1$, $\displaystyle t=0$ and $\displaystyle f(u)=e^{-u^{2}}=J$. ## V Conclusion In this article we analyzed how generalized pp-waves can be interpreted as topological defects. For this purpose, we calculated the strain tensor associated with such waves, as well as the dislocation determined by the Burgers vector. In particular, we chose a square path in the YZ and XZ planes and obtained a distribution of Burgers vectors for several $\displaystyle H$ solutions, showing that there is a well-defined Burgers vector for regular pp-waves. Likewise, we chose a circular path perpendicular to the z axis, which allowed us to express the strain tensor as a function of the modulus of the Burgers vector, and compared the result with a dislocation in a crystal. Remarkably, the gyratonic wave shares similarities with a crystal endowed with a topological defect with cylindrical symmetry. We conclude that in order to describe a given metric as a topological defect, it is necessary to take into account both the strain tensor and the Burgers vector. 
The results obtained with the gyratonic waves are very similar to those of crystals, particularly between the gyratonic wave and a crystal with a screw dislocation. The qualitative differences arise from the presence of a normal component $\displaystyle\sigma_{(z)(z)}$ in the strain tensor of the pp-waves, whereas for a crystal screw dislocation only the shear component $\displaystyle\sigma_{(\phi)(z)}$ is present. For gravitational waves, the existence of the normal $\displaystyle\sigma_{(z)(z)}$ component is inherent to their type, while the existence of the shear component $\displaystyle\sigma_{(\phi)(z)}$ depends on the presence of the gyratonic term $\displaystyle J$. Therefore, in the space-time of a gravitational wave we may have both compression and shear when a longitudinal force is applied. Most interestingly, when the gyratonic term $\displaystyle J$ is dropped from the line element, the pp-wave space-time loses its capacity to shear. The characterization of waves as topological defects can be applied in an attempt to quantize pp-waves. The quantization of a space-time may be performed under the geometric assumption that the Burgers vector is an integer multiple of the Planck length (23; 24), and its quantization parameters can be measured by analyzing the interaction of the gravitational field with particles (25). Thus, interpreting pp-waves as space-time defects may provide a way to quantize these waves. For instance, in the case of an axially symmetric gravitational wave, we have $\displaystyle b=-\sqrt{2}\pi J/A$; imposing $\displaystyle b=nb_{0}$ (25), where $\displaystyle b_{0}$ is the fundamental scale of the defect and $\displaystyle n$ an integer, we obtain $\displaystyle-\sqrt{2}\pi J/A=nb_{0}$. This feature will be further investigated elsewhere. ## References * (1) R. E. Smallman, “Modern Physical Metallurgy”, Third Edition (Butterworths, London, 1976). * (2) M. O. Katanaev and I. V. 
Volovich, “Theory of defects in solids and three-dimensional gravity”, Ann. Phys. (NY) 216, 1 (1992). * (3) A. Holz, “Topological properties of linked disclinations and dislocations in solid continua”, J. Phys. A Math. Gen. 25(1), L1 (1992). * (4) R. A. Puntigam and H. H. Soleng, “Volterra distortions, spinning strings, and cosmic defects”, Class. Quantum Grav. 14(5), 1129 (1997). * (5) K. P. Tod, “Conical singularities and torsion”, Class. Quantum Grav. 11(5), 1331 (1994). * (6) B. A. Bilby, R. Bullough and E. Smith, “Continuous distributions of dislocations: a new application of the methods of non-Riemannian geometry”, Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 231, 263 (1955). * (7) J. Ehlers and W. Kundt, “Exact Solutions of the Gravitational Field Equations”, in “Gravitation: an Introduction to Current Research”, edited by L. Witten (Wiley, New York, 1962). * (8) P.-M. Zhang, C. Duval, G. W. Gibbons and P. A. Horvathy, “The Memory Effect for Plane Gravitational Waves”, Phys. Lett. B 772, 743 (2017). * (9) P.-M. Zhang, C. Duval, G. W. Gibbons and P. A. Horvathy, “Soft gravitons and the memory effect for plane gravitational waves”, Phys. Rev. D 96, 064013 (2017). * (10) P.-M. Zhang, C. Duval, G. W. Gibbons and P. A. Horvathy, “Velocity Memory Effect for Polarized Gravitational Waves”, JCAP 05 (2018) 030. * (11) P.-M. Zhang, M. Cariglia, C. Duval, M. Elbistan, G. W. Gibbons and P. A. Horvathy, “Ion Traps and the Memory Effect for Periodic Gravitational Waves”, Phys. Rev. D 98, 044037 (2018). * (12) J. W. Maluf, J. F. da Rocha-Neto, S. C. Ulhoa and F. L. Carneiro, “Plane Gravitational Waves, the Kinetic Energy of Free Particles and the Memory Effect”, Gravitation and Cosmology 24, 261-266 (2018). * (13) J. W. Maluf, J. F. da Rocha-Neto, S. C. Ulhoa and F. L. Carneiro, “Kinetic Energy and angular momentum of free particles in the gyratonic pp-waves space-times”, Class. Quantum Grav. 35, 115001 (2018). * (14) J. W. Maluf, J. F. da Rocha-Neto, S. C. Ulhoa and F. L. 
Carneiro, “Variations of the Energy of Free Particles in the pp-Wave Spacetimes”, Universe 4(7), 74 (2018). * (15) J. W. Maluf, J. F. da Rocha-Neto, S. C. Ulhoa and F. L. Carneiro, “The work-energy relation for particles on geodesics in the pp-wave spacetimes”, JCAP 03 (2019) 028. * (16) J. W. Maluf, “The teleparallel equivalent of general relativity”, Annalen der Physik 525(5), 339-357 (2013). * (17) J. Podolský, R. Steinbauer and R. Švarc, “Gyratonic pp-waves and their impulsive limit”, Physical Review D 90(4), 044050 (2014). * (18) V. P. Frolov and D. V. Fursaev, “Gravitational field of a spinning radiation beam pulse in higher dimensions”, Physical Review D 71(10), 104034 (2005). * (19) V. P. Frolov, W. Israel and A. Zelnikov, “Gravitational field of relativistic gyratons”, Physical Review D 72(8), 084031 (2005). * (20) H. Yoshino, A. Zelnikov and V. P. Frolov, “Apparent horizon formation in the head-on collision of gyratons”, Phys. Rev. D 75(3), 124005 (2007). * (21) T. G. Tenev and M. F. Horstemeyer, “Mechanics of spacetime $\displaystyle-$ A Solid Mechanics perspective on the theory of General Relativity”, International Journal of Modern Physics D 27(8), 1850083 (2018). * (22) F. L. Carneiro, S. C. Ulhoa and J. F. da Rocha-Neto, “Energy-momentum and angular-momentum of a gyratonic pp-waves spacetime”, Physical Review D 100(2), 024023 (2019). * (23) A. M. R. Magnon, “Spin-plane defects and emergence of Planck’s constant in gravity”, Journal of Mathematical Physics 32(4), 928-931 (1991). * (24) D. K. Ross, “Planck’s constant, torsion, and space-time defects”, International Journal of Theoretical Physics 28(11), 1333-1340 (1989). * (25) F. L. Carneiro, S. C. Ulhoa, J. F. da Rocha-Neto and J. W. Maluf, “On the quantization of Burgers vector and gravitational energy in the space-time of a conical defect”, Eur. Phys. J. C 80, 226 (2020).
# On the Evaluation of Vision-and-Language Navigation Instructions Ming Zhao Peter Anderson Vihan Jain Su Wang Alexander Ku Jason Baldridge Eugene Ie Google Research {astroming, pjand, vihan, wangsu, alexku, jridge, eugeneie} @google.com ###### Abstract Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions. However, existing instruction generators have not been comprehensively evaluated, and the automatic evaluation metrics used to develop them have not been validated. Using human wayfinders, we show that these generators perform on par with or only slightly better than a template-based generator and far worse than human instructors. Furthermore, we discover that BLEU, ROUGE, METEOR and CIDEr are ineffective for evaluating grounded navigation instructions. To improve instruction evaluation, we propose an instruction-trajectory compatibility model that operates without reference instructions. Our model shows the highest correlation with human wayfinding outcomes when scoring individual instructions. For ranking instruction generation systems, if reference instructions are available we recommend using SPICE. ## 1 Introduction Generating route instructions is a long studied problem with clear practical applications Richter and Klippel (2005). Whereas earlier work sought to create instructions for human wayfinders, recent work has focused on using instruction-generation models to improve the performance of agents that follow instructions given by people. In the context of Vision-and-Language Navigation (VLN) datasets such as Room-to-Room (R2R) Anderson et al. (2018b), models for generating navigation instructions have improved agents’ wayfinding performance in at least two ways: (1) by synthesizing new instructions for data augmentation Fried et al. (2018); Tan et al. (2019), and (2) by fulfilling the role of a probabilistic speaker in a pragmatic reasoning setting Fried et al. (2018). 
Such data augmentation is so effective that it is nearly ubiquitous in the best performing agents Wang et al. (2019); Huang et al. (2019); Li et al. (2019). To make further advances in the generation of visually-grounded navigation instructions, accurate evaluation of the generated text is essential. However, the performance of existing instruction generators has not yet been evaluated using human wayfinders, and the efficacy of the automated evaluation metrics used to develop them has not been established. This paper addresses both gaps. Figure 1: Proposed dual encoder instruction-trajectory compatibility model. Navigation instructions and trajectories (sequences of panoramic images and view angles) are projected into a shared latent space. The independence between the encoders facilitates learning using both contrastive and classification losses. To establish benchmarks for navigation instruction generation, we evaluate existing English models Fried et al. (2018); Tan et al. (2019) using human wayfinders. These models are effective for data augmentation, but in human trials they perform on par with or only slightly better than a template-based system, and they are far worse than human instructors. This leaves much headroom for better instruction generation, which may in turn improve agents’ wayfinding abilities. Next, we consider the evaluation of navigation instructions without human wayfinders, a necessary step for future improvements in both grounded instruction generation (itself a challenging and important language generation problem) and agent wayfinding. We propose a model-based approach (Fig. 1) to measure the compatibility of an instruction-trajectory pair without needing reference instructions for evaluation. 
In training this model, we find that adding contrastive losses in addition to pairwise classification losses improves AUC by 9–10%, round-trip back-translation improves performance when used to paraphrase positive examples, and that both trajectory and instruction perturbations are useful as hard negatives. Finally, we compare our compatibility model to common textual evaluation metrics to assess which metric best correlates with the outcomes of human wayfinding attempts. We discover that BLEU Papineni et al. (2002), ROUGE Lin (2004), METEOR Denkowski and Lavie (2014) and CIDEr Vedantam et al. (2015) are ineffective for evaluating grounded navigation instructions. For system-level evaluations with reference instructions, we recommend SPICE Anderson et al. (2016). When averaged over many instructions, SPICE correlates with both human wayfinding performance and subjective human judgments of instruction quality. When scoring individual instructions, our compatibility model most closely reflects human wayfinding performance, outperforming BERTScore (Zhang et al., 2019) and VLN agent-based scores. Our results are a timely reminder that textual evaluation metrics should always be validated against human judgments when applied to new domains. We plan to release our trained compatibility model and the instructions and human evaluation data we collected. ## 2 Related Work ##### Navigation Instruction Generation Until recently, most methods for generating navigation instructions were focused on settings in which a system has access to a map representation of the environment, including the locations of objects and named items (e.g. of streets and buildings) Richter and Klippel (2005). Some generate route instructions interactively given the current position and goal location Dräger and Koller (2012), while others provide in-advance instructions that must be more robust to possible misinterpretation Roth and Frank (2010); Mast and Wolter (2013). 
Recent work has focused on instruction generation to improve the performance of wayfinding agents. Two instruction generators, Speaker-Follower Fried et al. (2018) and EnvDrop Tan et al. (2019), have been widely used for R2R data augmentation. They provide $\sim$170k new instruction-trajectory pairs sampled from training environments. Both are seq-to-seq models with attention. They take as input a sequence of panoramas grounded in a 3D trajectory, and output a textual instruction intended to describe it. ##### Vision-and-Language Navigation For VLN, embodied agents in 3D environments must follow natural language instructions to reach prescribed goals. Most recent efforts (e.g., Fu et al., 2019; Huang et al., 2019; Jain et al., 2019; Wang et al., 2019, etc.) have used the Room-to-Room (R2R) dataset (Anderson et al., 2018b), which contains 4675 unique paths in the train split, 340 in the val-seen split (same environments, new paths), and an additional 783 paths in the val-unseen split (new environments, new paths). However, our findings are also relevant for similar datasets such as Touchdown Chen et al. (2019); Mehta et al. (2020), CVDN Thomason et al. (2019), REVERIE Qi et al. (2020), and the multilingual Room-across-Room (RxR) dataset Ku et al. (2020). ##### Text Generation Metrics There are many automated metrics that assess textual similarity; we focus on five that are extensively used in the context of image captioning: BLEU (Papineni et al., 2002), METEOR Denkowski and Lavie (2014), ROUGE Lin (2004), CIDEr Vedantam et al. (2015) and SPICE Anderson et al. (2016). More recently, model- and semi-model-based metrics have been proposed. BERTScore (Zhang et al., 2019) takes a semi-model-based approach to compute token-wise similarity using contextual embeddings learned with BERT Devlin et al. (2019). BLEURT (Sellam et al., 2020) is a fully model-based approach combining large-scale synthetic pretraining and domain specific finetuning. 
However, all of the aforementioned metrics are reference-based, and none is specifically designed for assessing navigation instructions associated with 3D trajectories for an embodied agent, which requires not only language-to-vision grounding but also correct sequencing. ##### Instruction-Trajectory Compatibility Models Our model builds on that of Huang et al. (2019), but differs in loss (using focal and contrastive losses), input features (adding action and geometry representation), and negative mining strategies (adding instruction perturbations in addition to trajectory perturbations). Compared to the trajectory re-ranking compatibility model proposed by Majumdar et al. (2020), we use a dual encoder architecture rather than dense cross-attention. This facilitates the efficient computation of contrastive losses, which are calculated over all pairs in a minibatch, and improve AUC by 10% in our model. We also avoid training on the outputs of the instruction generators (to prevent overfitting to the models we evaluate). We are yet to explore transfer learning (which is the focus of Majumdar et al. (2020)). ## 3 Human Wayfinding Evaluations To benchmark the current state-of-the-art for navigation instruction generation, we evaluate the outputs of the Speaker-Follower and EnvDrop models by asking people to follow them. We use instructions for the 340 and 783 trajectories in the R2R val-seen and val-unseen splits, respectively. Both models are trained on the R2R train split and the generated instructions were provided by the respective authors. To contextualize the results, we additionally evaluate instructions from a template-based generator (using ground-truth object annotations), a new set of instructions written by human annotators, and three adversarial perturbations of these human instructions. 
New navigation instructions and wayfinding evaluations are collected using a lightly modified version of PanGEA (https://github.com/google-research/pangea), an open-source annotation toolkit for panoramic graph environments. ##### Crafty Crafty is a template-based navigation instruction generator. It observes the trajectory’s geometry and nearby ground-truth object annotations, identifies salient objects, and creates English instructions using templates describing movement with respect to the trajectory and objects. See the Appendix for details. Note that Crafty has an advantage over the learned models, which rely on panoramic images to identify visual references and do not exploit object annotations.

| Instructions | Num. Evals | Wordcount | NE $\downarrow$ | SR $\uparrow$ | SPL $\uparrow$ | SDTW $\uparrow$ | Quality $\uparrow$ | Search Start $\downarrow$ | Search Other $\downarrow$ | Time $\downarrow$ |
|---|---|---|---|---|---|---|---|---|---|---|
| *Val-unseen* | | | | | | | | | | |
| Speaker-Follower | 783$\times$3 | 24.6 | 6.55 | 35.8 | 30.3 | 28.1 | 3.50 | 43.5 | 24.8 | 43.2 |
| EnvDrop | 783$\times$3 | 21.3 | 5.89 | 42.3 | 36.1 | 33.5 | 3.70 | 42.1 | 24.8 | 39.5 |
| *Val-seen* | | | | | | | | | | |
| Speaker-Follower | 340$\times$3 | 24.7 | 6.23 | 42.3 | 35.7 | 33.2 | 3.64 | 43.0 | 24.5 | 42.6 |
| EnvDrop | 340$\times$3 | 22.3 | 5.99 | 47.7 | 40.0 | 36.9 | 3.83 | 39.9 | 25.0 | 42.1 |
| Crafty | 340$\times$3 | 71.2 | 6.01 | 43.6 | 34.7 | 33.3 | 3.48 | 42.0 | 25.9 | 69.6 |
| Direction Swap | 340$\times$3 | 54.8 | 4.74 | 58.9 | 47.9 | 45.9 | 3.67 | 40.6 | 25.4 | 61.0 |
| Entity Swap | 340$\times$3 | 55.1 | 4.71 | 51.3 | 42.5 | 40.6 | 3.33 | 40.1 | 25.8 | 62.7 |
| Phrase Swap | 340$\times$3 | 52.0 | 4.07 | 62.6 | 51.6 | 49.9 | 3.85 | 38.7 | 24.7 | 58.0 |
| Human | 340$\times$3 | 54.1 | 2.56 | 75.1 | 64.7 | 63.1 | 4.25 | 35.8 | 23.8 | 53.9 |

Table 1: Human wayfinding performance following instructions from the Speaker-Follower Fried et al. (2018) and EnvDrop Tan et al. (2019) models, compared to Crafty (template-based) instructions, Human instructions, and three adversarial perturbations of Human instructions (Direction, Entity and Phrase Swap). Search Start and Search Other report the visual search cost at the start viewpoint and at all other viewpoints, respectively. ##### Human Instructions We collect 340 new English instructions for the trajectories in the R2R val-seen split using the PanGEA Guide task. ##### Instruction Perturbations To quantify the impact of common instruction generator failure modes on instruction following performance, we include three adversarial perturbations of human instructions capturing incorrect direction words, hallucinated objects/landmarks, and repeated or skipped steps. We use Google Cloud NLP (https://cloud.google.com/natural-language/) to identify named entities and parse dependency trees and then generate perturbations as follows: * • Direction Swap: Random swapping of directional phrases with alternatives from the same set, with sets as follows: around/left/right, bottom/middle/top, up/down, front/back, above/under, enter/exit, backward/forward, away from/towards, into/out of, inside/outside. Example: “Take a right (left) and wait by the couch outside (inside) the bedroom.” * • Entity Swap: Random swapping of entities in an instruction. All noun phrases excluding a stop list containing any, first, end, front, etc. are considered to be entities. If two entities have the same lemma (e.g., stairs/staircase/stairway) they are considered to be synonyms and are not swapped. Example: “Exit the bedroom (bathroom), turn right, then enter the bathroom (bedroom).” * • Phrase Swap: A random operation on the dependency tree: either remove one sub-sentence tree, duplicate one sub-sentence tree, or shuffle the order of all sentences except the last. Example: “Exit the room using the door on the left. Turn slightly left and go past the round table an chairs. Wait there.” – where the first and second sentences are swapped. 
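As an illustration, the Direction Swap perturbation can be sketched as a single-pass word substitution. This is a hypothetical re-implementation, not the paper's actual code: multi-word phrases (away from/towards, into/out of) and capitalization handling are omitted for brevity.

```python
import random
import re

# Direction sets from the paper; words within a set are interchangeable.
# Multi-word phrases (away from/towards, into/out of) are omitted here.
DIRECTION_SETS = [
    ["around", "left", "right"], ["bottom", "middle", "top"],
    ["up", "down"], ["front", "back"], ["above", "under"],
    ["enter", "exit"], ["backward", "forward"], ["inside", "outside"],
]
WORD_TO_SET = {w: s for s in DIRECTION_SETS for w in s}

def direction_swap(instruction, rng=random):
    """Swap each directional word for a random alternative from its set."""
    def repl(match):
        word = match.group(0).lower()
        if word in WORD_TO_SET:
            return rng.choice([w for w in WORD_TO_SET[word] if w != word])
        return match.group(0)
    return re.sub(r"[A-Za-z]+", repl, instruction)

perturbed = direction_swap(
    "Take a right and wait by the couch inside the bedroom.",
    rng=random.Random(0),
)
```

Because every directional word is replaced by a different member of its own set, the perturbed instruction stays fluent while its spatial meaning is corrupted, which is what makes these negatives hard without visual grounding.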
##### Wayfinding Task Using the PanGEA Follower task, annotators are presented with a textual navigation instruction and the first-person camera view from the starting pose. They are instructed to attempt to follow the instruction to reach the goal location. Camera controls allow for continuous heading and elevation changes as well as movement between Matterport3D panoramas based on a navigation graph. Each instruction is evaluated by three different human wayfinders. ##### Evaluation Metrics We use the following standard metrics to evaluate the trajectories generated by our annotators (and hence, the quality of the provided instructions): Navigation Error (NE $\downarrow$), Success Rate (SR $\uparrow$), Success weighted by inverse Path Length (SPL $\uparrow$), and Success weighted by normalized Dynamic Time Warping (SDTW $\uparrow$). Arrows indicate improving performance. See Anderson et al. (2018a) and Ilharco et al. (2019) for details. People are resourceful and may succeed in following poor quality instructions by expending additional effort. Therefore, we report additional metrics to capture these costs. Quality $\uparrow$ is a self-reported measure of instruction quality based on a 1–5 Likert scale. At the end of each task annotators respond to the prompt: Do you think there are mistakes in the instruction? Responses range from “Way too many mistakes to follow” (1) to “No mistakes, very very easy to follow” (5). Visual Search cost $\downarrow$ measures the percentage of the available panoramic visual field that the annotator observes at each viewpoint, based on the pose traces provided by PanGEA and first proposed in the RxR dataset Ku et al. (2020). Higher values indicate greater effort spent looking for the correct path. We report this separately for the start viewpoint and other viewpoints, since wayfinders typically look around to orient themselves at the start. Time $\downarrow$ represents the average time taken in seconds. 
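For concreteness, SPL can be computed as in the following sketch (after Anderson et al., 2018a; the function and argument names are our own):

```python
def spl(successes, shortest_lengths, taken_lengths):
    """Success weighted by (inverse) Path Length, averaged over episodes.

    successes: 1 if the wayfinder stopped within the success threshold
    of the goal, else 0; shortest_lengths: geodesic length of the
    ground-truth shortest path to the goal; taken_lengths: length of
    the path the wayfinder actually took."""
    scores = [
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_lengths, taken_lengths)
    ]
    return sum(scores) / len(scores)

# A successful but twice-too-long episode scores 0.5; a failure scores 0.
avg = spl([1, 0], [10.0, 10.0], [20.0, 5.0])
```

The `max(p, l)` term caps the credit at 1.0 for a perfectly efficient success, so SPL penalizes wandering even when the goal is eventually reached.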
##### Results Table 1 summarizes the results of 11,886 wayfinding attempts using 37 English-speaking annotators. The performance of annotators stays consistent over time and does not show any sign of adaptation; see the Appendix for a detailed analysis. As expected, human instructions perform best in human wayfinding evaluations on all path evaluation metrics and on subjective assessments of instruction quality, and they also incur the lowest visual search costs. The only metric not dominated by human instructions is the time taken – which correlates with instruction length, and may be affected by wayfinders giving up when faced with poor quality instructions. Overall, the Speaker-Follower and EnvDrop models are surprisingly weak and noticeably worse than even adversarially perturbed human instructions. Compared to the template-based approach (Crafty), the Speaker-Follower model performs on par and EnvDrop is only slightly better. As a first step towards improving existing navigation instruction generators, we focus on developing and evaluating automated metrics that can approximate these human wayfinding evaluations. ## 4 Compatibility Model As an alternative to human evaluations, we train an instruction-trajectory compatibility model to assess both the grounding between textual and visual inputs and the alignment of the two sequences. ### 4.1 Model Structure Our model is a dual encoder that encodes instructions and trajectories into a shared latent space (Figure 1). The instruction representation $h^{w}$ is the concatenation of the final output states of a bi-directional LSTM (Schuster and Paliwal, 1997) encoding the instruction tokens $\mathcal{W}=\\{w_{1},w_{2},...,w_{n}\\}$. We use contextualized token embeddings from BERT (Devlin et al., 2019) as input to the LSTM. 
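Because the two encoders are independent, each side reduces to a single vector and the compatibility score is simply their cosine similarity in the shared latent space. A minimal sketch (plain Python; `compatibility` is our own name for the scoring step):

```python
import math

def compatibility(instruction_emb, trajectory_emb):
    """Cosine similarity between the instruction embedding and the
    trajectory embedding produced by the two encoders."""
    dot = sum(a * b for a, b in zip(instruction_emb, trajectory_emb))
    norm_i = math.sqrt(sum(a * a for a in instruction_emb))
    norm_t = math.sqrt(sum(b * b for b in trajectory_emb))
    return dot / (norm_i * norm_t)
```

This independence is what makes in-batch contrastive training cheap: all pairwise scores in a minibatch can be computed from the two sets of embeddings alone, without a cross-attention pass per pair.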
The visual encoder is a two-layer LSTM that processes visual features extracted from a sequence of viewpoints $\mathcal{V}=\\{(I_{1},p_{1}),(I_{2},p_{2})...,(I_{t},p_{t})\\}$ comprised of panoramic images $I_{t}$ captured at positions $p_{t}$ along a 3D trajectory. The vector $h^{v}_{t}$ representing the viewpoint at step $t$ is given by: $\displaystyle a_{t}=\text{Attention}(h^{v}_{t-1},e_{\text{pano},t})$ (1) $\displaystyle v_{t}=f([e_{\text{prev},t},e_{\text{next},t},a_{t}])$ (2) $\displaystyle h^{v}_{t}=\text{LSTM}(v_{t},h^{v}_{t-1})$ (3) where $e_{\text{pano},t}$ is a set of 36 visual features representing the panoramic image $I_{t}$ (discretized into 36 viewing angles by elevation $\theta$ and heading $\phi$), $e_{\text{prev},t}$ and $e_{\text{next},t}$ are the visual features in the directions of the previous and next viewpoints ($v_{t-1}-v_{t}$ and $v_{t+1}-v_{t}$ respectively), and $f$ is a projection layer. Each visual feature is a concatenation of a pre-trained CNN image feature Juan et al. (2020) with orientation vectors encoding both sine and cosine functions of the absolute and relative angles $\\{\theta_{\text{abs}},\phi_{\text{abs}},\theta_{\text{rel}},\phi_{\text{rel}}\\}$. We use standard dot-product attention (Luong et al., 2014) and define $h^{v}=h^{v}_{T}$, the final viewpoint embedding in the trajectory. The output of the model is the compatibility score $S$ between an instruction and a trajectory defined as the cosine similarity between $h^{v}$ and $h^{w}$. ### 4.2 Hard Negative Mining To avoid overfitting, our compatibility model is not trained on the outputs of any of the instruction generators that we evaluate. Instead, we use only the relatively small set of positive instruction-trajectory examples from R2R. We use round-trip back-translation to expand the set of positive examples. 
Unmatched instruction-trajectory pairs from R2R are considered to be negative examples, and we also construct hard negative examples from positive examples by adversarially perturbing both trajectories and instructions.

##### Instruction Perturbations

We use the same instruction perturbations described in Section 3: Direction Swap, Entity Swap, and Phrase Swap. These perturbations are inspired by typical failure modes in instruction generators and are designed to be hard to recognize without grounding on images and actions along the trajectory. Previous work by Huang et al. (2019) considered only trajectory perturbations. While this encourages the model to recognize incorrect trajectories for a given ground truth instruction, it may not encourage the model to identify a trajectory matched with a poor quality instruction. Our results suggest that instruction perturbations are equally important.

##### Trajectory Perturbations

To perturb trajectories we use the navigation graphs defining connected viewpoints in R2R. Inspired by Huang et al. (2019), we consider Random Walk, Path Reversal, and Viewpoint Swap perturbations:

* • Random Walk: The first or last two viewpoints are fixed and the remainder of the trajectory is re-sampled using random edge traversals, subject to the path length remaining within $\pm 1$ step of the original. To make the task harder, we avoid revisiting a viewpoint and require the re-sampled trajectory to have at least two overlapping viewpoints with the original.
* • Path Reversal: The entire trajectory is reversed while keeping the same viewpoints.
* • Viewpoint Swap: A new method we introduce that randomly samples a viewpoint in a trajectory and swaps it with a new viewpoint sampled from the neighbors of the adjacent viewpoints in the original trajectory.

##### Paraphrases

To expand the 14k positive examples from the R2R train set and balance the positive-to-negative ratio, we paraphrase instructions via round-trip back-translation.
We use the following ten intermediate languages via Google Translate (https://cloud.google.com/translate/): ar, es, de, fr, hi, it, pt, ru, tr, and zh. To exclude low quality or nearly duplicate instructions, we filter out paraphrased instructions outside the BLEU score range of [0.25, 0.7] compared to the original. Overall we have a total of 110,601 positive instruction-trajectory pairs in the training set, which contains 4675 unique trajectories.

| # | Model | Perturbations | Validation (val_unseen) | Test (val_seen) | Direction Swap | Entity Swap | Phrase Swap | Viewpoint Swap | Random Walk | Path Reversal |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Huang et al. (2019) | Path + Instruction | 53.4 | 52.3 | 80.3 | 89.4 | 80.9 | 78.0 | 72.9 | 75.9 |
| | _This Work_ | | | | | | | | | |
| 2 | + CE Loss | Path + Instruction | 57.9 | 57.6 | 89.5 | 88.4 | 83.4 | 95.0 | 94.1 | 87.8 |
| 3 | + Focal Loss | Path + Instruction | 59.2 | 59.2 | 89.8 | 90.5 | 84.2 | 95.6 | 95.0 | 90.3 |
| 4 | + Contrastive Loss | Path + Instruction | 67.2 | 68.7 | 75.1 | 70.1 | 73.7 | 73.6 | 88.7 | 93.1 |
| 5 | + Contrastive + CE | Path + Instruction | 66.5 | 67.5 | 82.0 | 77.4 | 76.9 | 84.7 | 91.3 | 91.7 |
| 6 | + Contrastive + Focal | Path + Instruction | 68.5 | 68.3 | 83.9 | 81.5 | 79.8 | 88.5 | 93.7 | 93.3 |
| 7 | + Contrastive + Focal + Paraphrase | Path + Instruction | 71.3 | 72.2 | 83.1 | 81.9 | 80.4 | 90.4 | 95.2 | 94.0 |
| 8 | + Contrastive + Focal + Paraphrase + Bert Embed. | Path + Instruction | 73.5 | 73.7 | 84.6 | 86.7 | 82.2 | 89.1 | 94.3 | 93.3 |
| | _Perturbation Ablations_ | | | | | | | | | |
| 9 | + Contrastive + Focal + Paraphrase + Bert Embed. | Instruction | 70.5 | 70.4 | 85.1 | 88.6 | 82.5 | 64.9 | 84.3 | 90.7 |
| 10 | + Contrastive + Focal + Paraphrase + Bert Embed. | Direction Swap Only | 70.1 | 68.9 | 89.0 | 72.0 | 70.7 | 65.9 | 85.0 | 91.0 |
| 11 | + Contrastive + Focal + Paraphrase + Bert Embed. | Entity Swap Only | 69.1 | 68.5 | 70.3 | 92.7 | 71.3 | 65.2 | 84.4 | 90.9 |
| 12 | + Contrastive + Focal + Paraphrase + Bert Embed. | Phrase Swap Only | 70.3 | 70.5 | 70.1 | 72.5 | 85.5 | 65.5 | 83.3 | 90.1 |
| 13 | + Contrastive + Focal + Paraphrase + Bert Embed. | Path | 69.7 | 69.4 | 71.7 | 71.4 | 69.8 | 92.4 | 94.8 | 92.2 |
| 14 | + Contrastive + Focal + Paraphrase + Bert Embed. | Viewpoint Swap Only | 70.7 | 72.1 | 70.7 | 71.1 | 71.2 | 94.6 | 94.6 | 91.9 |
| 15 | + Contrastive + Focal + Paraphrase + Bert Embed. | Random Walk Only | 70.5 | 70.9 | 70.5 | 71.5 | 71.0 | 81.7 | 95.1 | 91.6 |
| 16 | + Contrastive + Focal + Paraphrase + Bert Embed. | Path Reversal Only | 69.4 | 69.7 | 71.0 | 70.7 | 71.2 | 65.3 | 86.3 | 91.9 |
| 17 | + Contrastive + Focal + Paraphrase + Bert Embed. | No Perturbation | 69.1 | 69.7 | 71.1 | 71.0 | 70.8 | 65.3 | 84.9 | 90.2 |

Table 2: Ablation of different models based on classification AUC. Models are trained with the original R2R-train data and paraphrased positive instructions, plus path and/or instruction perturbed hard negatives (second column). The best models are selected based on the validation set (column 3), and we report the final test performance in column 4. To understand the performance of individual perturbation methods, we also report the best AUCs for each of the six perturbations in columns 5–10 (instruction validation: Direction, Entity and Phrase Swap; path validation: Viewpoint Swap, Random Walk and Path Reversal).

### 4.3 Loss Functions

During training, each minibatch is constructed with $N$ matching instruction-trajectory pairs, which may be perturbed. We define $M\in{\\{0,1\\}}^{N}$ as the vector indicating unperturbed pairs. A compatibility matrix $S\in\mathbb{R}^{N\times N}$ is defined such that $S_{i,j}$ is the cosine similarity score between instruction $i$ and trajectory $j$ determined by our model. We use both binary classification loss functions, defined on the diagonal elements of $S$, and a contrastive loss defined on $S$'s rows and columns.
Contrastive losses are commonly used for retrieval and representation learning (e.g., Yang et al., 2019; Chen et al., 2020), and in our case the contrastive loss exploits all random instruction-trajectory pairs in a minibatch. Each loss requires a separate normalization. For the classification loss we compute the probability of a match $p_{i,j}$, such that $p_{i,j}=\sigma(aS_{i,j}+b)$ where $a$ and $b$ are learned scalars and $\sigma$ is the sigmoid function. For the classification loss $\mathcal{L}_{\text{cls}}$ we consider both binary cross entropy loss $\mathcal{L}_{\text{CE}}$, and focal loss (Lin et al., 2017) given by $\mathcal{L}_{\text{FL}}=(1-p_{i,j})^{\gamma}\mathcal{L}_{\text{CE}}$ where we set $\gamma=2$.

For the contrastive loss we compute logits by scaling $S$ with a learned scalar temperature $\tau$. The contrastive loss $\mathcal{L}_{C}(S)$ calculated over the rows and columns of $S$ is given by:

$\displaystyle\mathcal{L}_{C}(S)=\frac{1}{\sum_{i}M_{i}}\sum_{i=1}^{N}\Big{(}\mathcal{L}_{r}(S_{i})+\mathcal{L}_{r}(S^{\intercal}_{i})\Big{)}$ (4)

where $\mathcal{L}_{r}(S_{i})=0$ if $M_{i}=0$, i.e., the diagonal element is a perturbed pair and not considered to be a match. Otherwise:

$\displaystyle\mathcal{L}_{r}(S_{i})=-\log\frac{e^{S_{i,i}/\tau}}{\sum_{j=1}^{N}e^{S_{i,j}/\tau}}$ (5)

The final loss is the combination:

$\displaystyle\mathcal{L}=\mathcal{L}_{C}(S)+\frac{\beta}{N}\sum_{i=1}^{N}\mathcal{L}_{\text{cls}}(S_{i,i})$ (6)

where $\mathcal{L}_{\text{cls}}$ is the classification loss, either $\mathcal{L}_{\text{CE}}$ or $\mathcal{L}_{\text{FL}}$, and we set $\beta=1$.

##### Sampling hyperparameters

We sample positive and negative examples equally, with a mix ratio of 2:1:1 for ground truth, instruction perturbations, and trajectory perturbations, respectively. For each perturbation type, we sample the three methods with equal probability.

## 5 Experiments

We evaluate our compatibility model against alternative model-based evaluations and standard textual similarity metrics.
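Equations (4)–(6) can be sketched in plain Python. This is a sketch, not the training code: the scalars $a$, $b$ and $\tau$ are fixed to illustrative values here (in training they are learned), and the focal term uses the standard form from Lin et al. (2017) applied to the probability of the true label:

```python
import math

def row_contrastive_loss(S_row, i, tau):
    """Eq. (5): softmax cross-entropy over one row (or column) of S,
    with the diagonal element i as the positive."""
    logits = [s / tau for s in S_row]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))  # stable log-sum-exp
    return log_z - logits[i]

def combined_loss(S, M, tau=0.1, a=5.0, b=0.0, gamma=2.0, beta=1.0):
    """Eq. (6): contrastive loss over rows and columns of S (Eq. 4) plus a
    focal classification loss on the diagonal. M marks unperturbed pairs."""
    N = len(S)
    contrastive = 0.0
    for i in range(N):
        if M[i]:  # perturbed diagonal pairs are not treated as matches (Eq. 4)
            col = [S[j][i] for j in range(N)]
            contrastive += row_contrastive_loss(S[i], i, tau)
            contrastive += row_contrastive_loss(col, i, tau)
    contrastive /= sum(M)
    cls = 0.0
    for i in range(N):
        p = 1.0 / (1.0 + math.exp(-(a * S[i][i] + b)))  # sigmoid match probability
        p_t = p if M[i] else 1.0 - p                    # probability of the true label
        cls += (1.0 - p_t) ** gamma * (-math.log(p_t))  # focal re-weighted CE
    return contrastive + beta * cls / N

loss = combined_loss([[0.9, 0.1], [0.2, 0.8]], [1, 1])
```

A well-separated compatibility matrix (large diagonal, small off-diagonal) yields a lower combined loss than a confusable one, which is the behavior the in-batch negatives are meant to induce.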
We report instruction classification results in Section 5.1, improved data augmentation for VLN agents in Section 5.2, and correlation with human wayfinder outcomes in Section 5.3.

### 5.1 Instruction Classification

Figure 2: Navigation performance of a VLN agent trained with different fractions of Speaker-Follower augmented paths, starting from 1%. For NE, lower is better; for SR and SDTW, higher is better. The dashed lines (green and red) use only augmented paths in training, while the dotted lines (blue and orange) use both augmented and R2R-train paths. Filled circles indicate fractions ranked by our compatibility model in descending order, while triangles indicate random fractions. Each point is the mean of 3 runs and the error bars represent the standard deviation of the mean. The model-ranked fractions show consistent improvement over random samples of the same percentage. Agents trained with only augmented paths (dashed lines) show a greater difference between model-ranked fractions and random fractions.

##### Evaluation

In this setting we use the instruction-trajectory compatibility model to classify high and low quality instructions for trajectories from the R2R val-unseen and val-seen sets. The instruction pool includes 3 high-quality instructions per trajectory from R2R, plus 2 instructions per trajectory from the Speaker-Follower and EnvDrop models. These are considered to be high quality if 2 out of 3 human wayfinders reached the goal (see Section 3), and low quality otherwise. We assess model performance using Area Under the ROC Curve (AUC). We use the val-unseen split (3915 instructions, 75% high quality) for model validation and the val-seen split (1700 instructions, 78% high quality) as test.

##### Benchmark

We compare to the compatibility model proposed by Huang et al.
(2019), which computes elementwise similarities between instruction words and trajectory panoramas before pooling, and does not include action embeddings ($e_{\text{prev},t}$ and $e_{\text{next},t}$) or position encodings $p_{t}$. In contrast, our model calculates the similarity between instructions and trajectories after pooling each sequence, and includes both action and position encodings.

##### Results

Table 2 reports classification AUC, including comprehensive ablations of loss functions, approaches to hard negative mining, and modeling choices. With regard to the loss function, we find that the combination of contrastive and focal loss (row 6) performs best overall, and that adding contrastive loss provides a very significant 9–10% increase in AUC compared to using just cross-entropy (CE) or focal loss (rows 2 and 3), due to the effective use of in-batch negatives. Adding paraphrased positive instructions and pretrained BERT token embeddings also leads to significant performance gains (rows 7 and 8 vs. row 6). The best performing model on both the validation and test sets uses Contrastive + Focal loss with paraphrased instructions and BERT embeddings, as well as trajectory and instruction perturbations (row 8). This model consistently outperforms the benchmark from prior work (row 1) by a large margin and achieves a test set AUC of 73.7%.

In rows 9–17 we ablate the six perturbation methods that we use for hard negative mining. Ablations using only instruction perturbations (row 9), only path perturbations (row 13), or no perturbations at all (row 17) perform considerably worse than our best model (row 8). We also show that no individual perturbation approach is effective on its own. In addition to scores for the validation and test sets, we report AUC for each perturbation method on the val-seen set to investigate their individual performance. Overall, trajectory perturbations get higher scores than instruction perturbations, showing they are easier tasks.
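The AUC reported throughout Table 2 has a simple rank interpretation: the probability that a randomly chosen high-quality instruction receives a higher compatibility score than a randomly chosen low-quality one. A minimal sketch of this standard computation (the quadratic pairwise form, not the paper's evaluation code; scores and labels are illustrative):

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the probability that a random positive outscores a random negative,
    counting ties as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Compatibility scores for 3 high-quality (label 1) and 2 low-quality
# (label 0) instructions -- values are illustrative.
score = auc([0.9, 0.8, 0.4, 0.6, 0.2], [1, 1, 1, 0, 0])  # 5 of 6 pairs ordered correctly
```

An AUC of 50% corresponds to random scoring, which is why the ~65–70% cells for unseen perturbation types in rows 9–17 indicate near-chance transfer.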
Phrase Swap proves the hardest task, while Random Walk is the easiest.

### 5.2 Data Augmentation for VLN

Data augmentation using instructions from the Speaker-Follower and EnvDrop models is pervasive in the training of VLN agents Wang et al. (2019); Huang et al. (2019); Li et al. (2019). In this section we evaluate whether our compatibility model can be used to filter out low quality instructions from the augmented training set to improve VLN performance. We score all 170k augmented instruction-trajectory pairs from the Speaker-Follower model and rank them in descending order. We then use different fractions of the ranked data to train VLN agents, and compare with agents trained using random samples of the same size. We use a VLN agent model based on Wang et al. (2019) and implemented in VALAN Lansing et al. (2019), which achieves a success rate (SR) of 45% on the R2R val-unseen split when trained on the R2R train split and all of the Speaker-Follower augmented instructions. Figure 2 indicates that instruction-trajectory pairs selected by our compatibility model consistently outperform random training pairs in terms of the performance of the trained VLN agent. This demonstrates the efficacy of our compatibility model for improving VLN data augmentation by identifying high quality instructions.

### 5.3 Correlation with Human Wayfinders

In this section we evaluate the correlation between the scores given by our instruction-trajectory compatibility model and the outcomes from the human wayfinding attempts described in Section 3. Using Kendall's $\tau$ to assess rank correlation, we report both system-level and instance-level correlation. The instance-level evaluations assess whether a metric can identify the best instruction from two candidates, while the system-level evaluations assess whether a metric can identify the best model from two candidates (after averaging over many instruction scores for each model).
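Kendall's $\tau$ counts concordant versus discordant pairs of (metric score, human outcome). A minimal sketch of the tie-free $\tau$-a variant (the paper does not specify its tie handling; the values below are illustrative):

```python
def kendall_tau(x, y):
    """Kendall's tau-a rank correlation: (concordant - discordant) pairs
    divided by the total number of pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Metric scores vs. human wayfinder success, illustrative values only.
tau = kendall_tau([0.2, 0.5, 0.7, 0.9], [0.1, 0.6, 0.5, 0.8])
```

Instance-level $\tau$ applies this over per-instruction pairs; system-level $\tau$ applies it after first averaging scores per system, so it operates on only $M$ points.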
The results in Table 3 are reported separately over all 3.9k instructions (9 systems comprising the rows of Table 1), and over model-generated instructions only (4 systems comprising the 2.2k instructions generated by the Speaker-Follower and EnvDrop models on R2R val-seen and val-unseen).

##### Automatic Metrics

For comparison we include standard textual evaluation metrics (BLEU, CIDEr, METEOR, ROUGE and SPICE) and two model-based metrics: BERTScore (Zhang et al., 2019), and scores based on the performance of a trained VLN agent attempting to follow the candidate instruction Agarwal et al. (2019). Note that only the compatibility model and the VLN agent-based scores use the candidate trajectory – the other metrics are calculated by comparing each candidate instruction to the three reference instructions from R2R (and are thus reliant on reference instructions). To calculate the standard metrics we use the official evaluation code provided with the COCO captions dataset Chen et al. (2015). For BERTScore, we use a publicly available uncased BERT model (tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1) with 12 layers and hidden dimension 768, and compute the mean $F_{1}$-score over the three references. For the VLN agent score, we train three VLN agents based on Wang et al. (2019) from different random initializations using the R2R train set. We then employ the trained agents for the wayfinding task and report performance as either the SPL or SDTW similarity between the path taken by the agent and the reference path – using either a single agent or the average score from three agents.
System-Level: All Instructions (N=3.9k, M=9)

| Score | Ref | NE $\downarrow$ | SR $\uparrow$ | SPL $\uparrow$ | Quality $\uparrow$ |
|---|---|---|---|---|---|
| BLEU-4 | ✓ | (-0.00, 0.33) | (-0.22, 0.39) | (-0.22, 0.00) | (-0.11, 0.39) |
| CIDEr | ✓ | (-0.06, 0.39) | (-0.22, 0.39) | (-0.22, 0.00) | (-0.17, 0.39) |
| METEOR | ✓ | (-0.11, 0.44) | (-0.39, 0.28) | (-0.39, -0.06) | (-0.00, 0.28) |
| ROUGE | ✓ | (-0.06, 0.39) | (-0.28, 0.39) | (-0.33, 0.00) | (-0.06, 0.39) |
| SPICE | ✓ | (-0.67, -0.28) | (-0.06, 0.61) | (-0.44, 0.78) | (-0.56, 0.83) |
| BERTScore | ✓ | (-0.06, 0.39) | (-0.22, 0.39) | (-0.22, 0.00) | (-0.17, 0.39) |
| SPL${}_{\text{1-agent}}$ | | (-0.50, -0.06) | (-0.22, 0.44) | (-0.11, 0.56) | (-0.00, 0.44) |
| SPL${}_{\text{3-agents}}$ | | (-0.22, 0.17) | (-0.33, 0.39) | (-0.00, 0.33) | (-0.33, 0.61) |
| SDTW${}_{\text{1-agent}}$ | | (-0.44, 0.00) | (-0.22, 0.44) | (-0.11, 0.50) | (-0.00, 0.44) |
| SDTW${}_{\text{3-agents}}$ | | (-0.22, 0.17) | (-0.28, 0.33) | (-0.00, 0.33) | (-0.33, 0.61) |
| Compatibility | | (-0.17, 0.17) | (-0.17, 0.50) | (-0.00, 0.28) | (-0.44, 0.72) |

Instance-Level: All Instructions (N=3.9k, M=9)

| Score | Ref | NE $\downarrow$ | SR $\uparrow$ | SPL $\uparrow$ | Quality $\uparrow$ |
|---|---|---|---|---|---|
| BLEU-4 | ✓ | (-0.05, 0.09) | (-0.04, 0.00) | (-0.09, -0.05) | (-0.01, 0.03) |
| CIDEr | ✓ | (-0.06, 0.09) | (-0.04, -0.00) | (-0.11, -0.07) | (-0.02, 0.01) |
| METEOR | ✓ | (-0.00, 0.04) | (-0.05, -0.02) | (-0.04, 0.00) | (-0.01, 0.02) |
| ROUGE | ✓ | (-0.05, 0.08) | (-0.05, -0.01) | (-0.10, -0.06) | (-0.02, 0.02) |
| SPICE | ✓ | (-0.05, -0.02) | (-0.00, 0.04) | (-0.03, 0.06) | (-0.03, 0.07) |
| BERTScore | ✓ | (-0.04, -0.00) | (-0.07, 0.12) | (-0.01, 0.03) | (-0.07, 0.11) |
| SPL${}_{\text{1-agent}}$ | | (-0.18, -0.14) | (-0.15, 0.19) | (-0.14, 0.18) | (-0.07, 0.11) |
| SPL${}_{\text{3-agents}}$ | | (-0.22, -0.18) | (-0.20, 0.24) | (-0.18, 0.22) | (-0.10, 0.14) |
| SDTW${}_{\text{1-agent}}$ | | (-0.18, -0.14) | (-0.15, 0.19) | (-0.14, 0.18) | (-0.08, 0.12) |
| SDTW${}_{\text{3-agents}}$ | | (-0.22, -0.19) | (-0.20, 0.24) | (-0.18, 0.22) | (-0.11, 0.15) |
| Compatibility | | (-0.20, -0.17) | (-0.13, 0.17) | (-0.17, 0.20) | (-0.19, 0.23) |

Instance-Level: Model-Generated Instructions (N=2.2k, M=4)

| Score | Ref | NE $\downarrow$ | SR $\uparrow$ | SPL $\uparrow$ | Quality $\uparrow$ |
|---|---|---|---|---|---|
| BLEU-4 | ✓ | (-0.02, 0.03) | (-0.03, 0.02) | (-0.02, 0.03) | (-0.02, 0.03) |
| CIDEr | ✓ | (-0.02, 0.03) | (-0.03, 0.02) | (-0.02, 0.03) | (-0.02, 0.03) |
| METEOR | ✓ | (-0.02, 0.03) | (-0.03, 0.02) | (-0.02, 0.03) | (-0.02, 0.03) |
| ROUGE | ✓ | (-0.02, 0.03) | (-0.05, 0.00) | (-0.04, 0.01) | (-0.03, 0.02) |
| SPICE | ✓ | (-0.05, -0.00) | (-0.00, 0.05) | (-0.00, 0.05) | (-0.01, 0.06) |
| BERTScore | ✓ | (-0.22, -0.18) | (-0.19, 0.24) | (-0.18, 0.23) | (-0.16, 0.20) |
| SPL${}_{\text{1-agent}}$ | | (-0.21, -0.16) | (-0.17, 0.23) | (-0.16, 0.22) | (-0.07, 0.12) |
| SPL${}_{\text{3-agents}}$ | | (-0.26, -0.21) | (-0.21, 0.27) | (-0.21, 0.26) | (-0.09, 0.14) |
| SDTW${}_{\text{1-agent}}$ | | (-0.22, -0.16) | (-0.17, 0.23) | (-0.16, 0.22) | (-0.07, 0.13) |
| SDTW${}_{\text{3-agents}}$ | | (-0.26, -0.21) | (-0.22, 0.27) | (-0.21, 0.26) | (-0.10, 0.15) |
| Compatibility | | (-0.25, -0.20) | (-0.22, 0.27) | (-0.21, 0.25) | (-0.18, 0.23) |

Table 3: Kendall's $\tau$ correlation between automated instruction evaluation metrics and human wayfinder evaluations. Ranges are 90% confidence intervals based on bootstrap resampling. N refers to the number of instructions and M refers to the number of systems.
If checked, Ref indicates that the metric requires reference instructions for comparison. SPL${}_{k\text{-agent(s)}}$ and SDTW${}_{k\text{-agent(s)}}$ refer to wayfinding scores averaged over $k$ VLN agents trained from random initializations.

##### Results

Table 3 compares system-level and instance-level correlations for all metrics, both standard and model-based. At the system level, we see no correlation between standard text metrics such as BLEU, ROUGE, METEOR and CIDEr and human wayfinder performance. The exception is SPICE, which shows the desired negative correlation with NE, and positive correlation with SR, SPL (see Figure 3) and Quality. At the system level, the model-based approaches (BERTScore, agent SPL/SDTW and Compatibility) also lack the desired correlation and exhibit wide confidence intervals. Here, it is important to point out that the 9 systems under evaluation include a variety of styles (e.g., Crafty's template-based instructions, different annotator pools, adversarial perturbations) which are dissimilar to the R2R data used to train the VLN agents and the compatibility model. Accordingly, the model-based approaches are unable to reliably rank these out-of-domain systems.

At the instance level (when scoring individual instructions) we observe different outcomes. SPICE scores for individual instructions have high variance, and so SPICE does not correlate with wayfinder performance at the instruction level. In contrast, the model-based approaches exhibit the desired correlation, particularly when restricted to the model-generated instructions (Table 3, bottom panel). Our compatibility score shows the strongest correlation among all metrics, performing similarly to an ensemble of three VLN agents.

Figure 3: Standard evaluation metrics vs. human wayfinding outcomes (SPL) for 9 navigation instruction generation systems. SPICE is most consistent with human wayfinding outcomes, although no metrics score the Crafty template-based instructions highly.
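The confidence intervals in Table 3 come from bootstrap resampling. A minimal sketch of the standard percentile-bootstrap procedure (the resample count and the statistic below are illustrative assumptions, not the paper's exact setup):

```python
import random

def bootstrap_ci(values, statistic, n_resamples=2000, alpha=0.10, seed=0):
    """90% percentile-bootstrap confidence interval: resample the data with
    replacement, recompute the statistic, and take the alpha/2 and
    1 - alpha/2 percentiles of the resampled statistics."""
    rng = random.Random(seed)
    n = len(values)
    stats = []
    for _ in range(n_resamples):
        sample = [values[rng.randrange(n)] for _ in range(n)]
        stats.append(statistic(sample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Illustrative: a CI for the mean of some per-instruction scores; in the
# paper the statistic would be Kendall's tau over resampled instructions.
scores = [0.2, 0.4, 0.35, 0.5, 0.3, 0.45, 0.25, 0.55]
low, high = bootstrap_ci(scores, statistic=lambda xs: sum(xs) / len(xs))
```

With only M=9 systems at the system level, the resampled statistic varies a lot, which is one reason the system-level intervals in Table 3 are so wide.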
## 6 Conclusion

Generating grounded navigation instructions is one of the most promising directions for improving the performance of VLN wayfinding agents, and a challenging and important language generation task in its own right. In this paper, we show that efforts to improve navigation instruction generators have been hindered by a lack of suitable automatic evaluation metrics. With the exception of SPICE, all the standard textual evaluation metrics we evaluated (BLEU, CIDEr, METEOR and ROUGE) are ineffective, and – perhaps as a result – existing instruction generators have substantial headroom for improvement. To address this problem, we develop an instruction-trajectory compatibility model that outperforms all existing automatic evaluation metrics on instance-level evaluation without needing any reference instructions – making it suitable for use as a reward function in a reinforcement learning setting, as a discriminator in a Generative Adversarial Network (GAN) Dai et al. (2017), or for filtering instructions in a data augmentation setting.

Progress in natural language generation (NLG) is increasing the demand for evaluation metrics that can accurately evaluate generated text in a variety of domains. Our findings are a timely reminder that textual evaluation metrics should not be trusted in new domains unless they have been comprehensively validated against human judgments. In the case of grounded navigation instructions, for model selection in the presence of reference instructions we recommend using the SPICE metric. In all other scenarios (e.g., selecting individual instructions, or model selection without reference instructions) we recommend using a learned instruction-trajectory compatibility model.

## Acknowledgements

We thank the Google Data Compute team, in particular Ashwin Kakarla and Priyanka Rachapally, for their tooling and annotation support for this project.

## References

* Agarwal et al.
(2019) Sanyam Agarwal, Devi Parikh, Dhruv Batra, Peter Anderson, and Stefan Lee. 2019. Visual landmark selection for generating grounded and interpretable navigation instructions. In _CVPR workshop on Deep Learning for Semantic Visual Navigation_. * Anderson et al. (2018a) Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and Amir Roshan Zamir. 2018a. On evaluation of embodied navigation agents. _CoRR_, abs/1807.06757. * Anderson et al. (2016) Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: Semantic Propositional Image Caption Evaluation. In _ECCV_. * Anderson et al. (2018b) Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018b. Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments. In _CVPR_. * Chen et al. (2019) Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In _CVPR_. * Chen et al. (2020) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In _ICML_. * Chen et al. (2015) Xinlei Chen, Tsung-Yi Lin, Hao Fang, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. 2015. Microsoft COCO Captions: Data Collection and Evaluation Server. _arXiv preprint arXiv:1504.00325_. * Dai et al. (2017) Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. 2017. Towards diverse and natural image descriptions via a conditional GAN. In _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_. * Denkowski and Lavie (2014) Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language.
In _Proceedings of the Ninth Workshop on Statistical Machine Translation_ , pages 376–380, Baltimore, Maryland, USA. Association for Computational Linguistics. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _NAACL_. * Dräger and Koller (2012) Markus Dräger and Alexander Koller. 2012. Generation of landmark-based navigation instructions from open-source data. In _Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 757–766, Avignon, France. Association for Computational Linguistics. * Fried et al. (2018) Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower models for vision-and-language navigation. In _NeurIPS_. * Fu et al. (2019) Tsu-Jui Fu, Xin Eric Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, and William Yang Wang. 2019. Counterfactual vision-and-language navigation via adversarial path sampling. * Huang et al. (2019) Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, and Eugene Ie. 2019. Transferable representation learning in vision-and-language navigation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_. * Ilharco et al. (2019) Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, and Jason Baldridge. 2019\. Effective and general evaluation for instruction conditioned navigation using dynamic time warping. _NeurIPS Visually Grounded Interaction and Language Workshop_. * Jain et al. (2019) Vihan Jain, Gabriel Magalhães, Alexander Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the path: Instruction fidelity in vision-and-language navigation. In _ACL_. * Juan et al. 
(2020) Da-Cheng Juan, Chun-Ta Lu, Zhen Li, Futang Peng, Aleksei Timofeev, Yi-Ting Chen, Yaxi Gao, Tom Duerig, Andrew Tomkins, and Sujith Ravi. 2020. Ultra fine-grained image semantic embedding. In _Proceedings of the 13th International Conference on Web Search and Data Mining_ , page 277–285. * Ku et al. (2020) Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. 2020. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. * Lansing et al. (2019) Larry Lansing, Vihan Jain, Harsh Mehta, Haoshuo Huang, and Eugene Ie. 2019. VALAN: Vision and language agent navigation. _arXiv:1912.03241_. * Li et al. (2019) Xiujun Li, Chunyuan Li, Qiaolin Xia, Yonatan Bisk, Asli Celikyilmaz, Jianfeng Gao, Noah Smith, and Yejin Choi. 2019. Robust navigation with language pretraining and stochastic sampling. In _CVPR_. * Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In _Text Summarization Branches Out_ , pages 74–81, Barcelona, Spain. Association for Computational Linguistics. * Lin et al. (2017) Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In _CVPR_. * Luong et al. (2014) Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2014. Effective approaches to attention-based neural machine translation. In _EMNLP_. * Majumdar et al. (2020) Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, and Dhruv Batra. 2020. Improving vision-and-language navigation with image-text pairs from the web. In _CVPR_. * Mast and Wolter (2013) Vivien Mast and Diedrich Wolter. 2013. A probabilistic framework for object descriptions in indoor route instructions. In _Spatial Information Theory_ , pages 185–204, Cham. Springer International Publishing. * Mehta et al. 
(2020) Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, and Piotr Mirowski. 2020. Retouchdown: Adding touchdown to streetlearn as a shareable resource for language grounding tasks in street view. _arXiv:2001.03671_. * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _ACL_. * Qi et al. (2020) Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, and Anton van den Hengel. 2020. REVERIE: Remote embodied visual referring expression in real indoor environments. In _CVPR_. * Richter and Klippel (2005) Kai-Florian Richter and Alexander Klippel. 2005. A model for context-specific route directions. In _Spatial Cognition IV. Reasoning, Action, Interaction_ , pages 58–78, Berlin, Heidelberg. Springer Berlin Heidelberg. * Roth and Frank (2010) Michael Roth and Anette Frank. 2010. Computing EM-based alignments of routes and route directions as a basis for natural language generation. In _Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)_ , pages 958–966, Beijing, China. Coling 2010 Organizing Committee. * Schuster and Paliwal (1997) Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. _IEEE Trans. Signal Processing_ , 45:2673–2681. * Sellam et al. (2020) Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7881–7892, Online. Association for Computational Linguistics. * Tan et al. (2019) Hao Tan, Licheng Yu, and Mohit Bansal. 2019. Learning to navigate unseen environments: Back translation with environmental dropout. In _NAACL_. * Thomason et al. (2019) Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2019. Vision-and-dialog navigation. In _CoRL_. * Vedantam et al. (2015) Ramakrishna Vedantam, C. 
Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In _CVPR_. * Wang et al. (2019) Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. 2019. Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In _CVPR_. * Yang et al. (2019) Yinfei Yang, Gustavo Hernández Ábrego, Steve Yuan, Mandy Guo, Qinlan Shen, Daniel Cer, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. In _IJCAI_. * Zhang et al. (2019) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with BERT. In _ICLR_.

## Appendix A Automated metric scores for all instructions

We provide more details about automated metric scores for all instructions in this section. Table 4 gives automated metrics for each model we consider. Generated instructions from EnvDrop and Speaker-Follower are scored the highest, whereas human instructions are scored poorly and on par with perturbed instructions, and Crafty is the lowest. These results diverge significantly from human wayfinding performance in Section 3, and highlight the inefficacy of these automated text metrics.

| | BertScore | BLEU-4 | CIDEr | ROUGE | METEOR | SPICE |
|---|---|---|---|---|---|---|
| _Val-unseen_ | | | | | | |
| Speaker Fol. | 78.9 | 22.3 | 36.3 | 45.6 | 23.1 | 17.0 |
| EnvDrop | 79.3 | 23.7 | 42.2 | 45.8 | 22.5 | 18.1 |
| _Val-seen_ | | | | | | |
| Speaker Fol. | 79.0 | 23.3 | 40.1 | 46.3 | 23.5 | 18.7 |
| EnvDrop | 79.5 | 24.5 | 49.3 | 46.7 | 22.8 | 20.2 |
| Crafty | 71.5 | 4.1 | 0.4 | 23.3 | 17.5 | 12.0 |
| Dir. Swap | 74.7 | 8.2 | 5.7 | 31.4 | 20.7 | 19.4 |
| Entity Swap | 74.0 | 8.4 | 4.6 | 31.4 | 20.6 | 17.8 |
| Phrase Swap | 74.8 | 9.8 | 7.4 | 31.4 | 21.0 | 20.1 |
| Human | 74.9 | 9.9 | 6.7 | 33.3 | 21.9 | 21.0 |

Table 4: Automated metric scores for all instructions. The BLEU, CIDEr and ROUGE metrics score human instructions poorly compared to the neural net models.

## Appendix B Crafty Details

We use the data in Matterport3D to build Crafty, a template-based navigation instruction generator that uses a Hidden Markov Model (HMM) to select objects as reference landmarks for wayfinding. Crafty's four main components (Appraiser, Walker, Observer and Talker) are described below.

### B.1 Appraiser

The Appraiser scores the interestingness of objects based on the Matterport3D scans in the training set. It treats each panorama as a document and the categories corresponding to objects visible from the panorama as words, and then computes a per-category inverse document frequency (IDF) score.

### B.2 Walker

The Walker converts a panorama sequence into a motion sequence. Given a path (sequence of connected panoramas) and an initial heading, it calculates the entry heading into each panorama and the exit heading required to transition to the next panorama. For each panorama, all annotated objects that are visible from the location are retrieved. For each object, we obtain properties such as its category and center, which allow the distance and heading from the panorama center to be computed. From these, the Walker creates a sequence of motion tuples, each of which captures the context of the source panorama and the goal panorama, along with the heading to move from source to goal.

### B.3 Observer

The Observer selects an object sequence by generating objects from an HMM that is specially constructed for each environment, characterized by:

* • Emissions: how panoramas relate to objects.
This is a probability distribution over panoramas for each object, based on the distance between the object and the panoramas.
* • Transitions: how looking at one object might shift to another one, based on their relative location, the motion at play, and the Appraiser’s assessment of their prominence.

The intuition for using an HMM is that we tend to fixate on a given salient object over several steps as we move (high self-transitions); these tend to be nearby (high emission probability for objects near a panorama’s center) and connected to the next salient object (biased object-object transitions). To explain a particular observed panorama sequence (path), we can then infer the optimal object sequence using the Viterbi algorithm.

### B.4 Talker

Given a motion sequence from the Walker and corresponding object observations from the Observer, the Talker uses a small set of templates to create English instructions for each step. We decompose this into low-level and high-level templates.

#### B.4.1 Low-level templates

For single-step actions, there are three main things to mention: movement, the fixated object and its relationship to the agent’s position.

Figure 4: Orientation wheel with direction types. The demarcations are $\frac{\pi}{8}$ (22.5°), $\frac{3\pi}{8}$ (67.5°), etc.

MOVE. For movement, we simply generate a set of possible commands for each direction type, where the direction types are defined as in the orientation wheel shown in Fig. 4. There are additional direction types for UP and DOWN based on relative pitch (e.g. when the goal panorama is higher or lower than the source). Given one of these heading types, we generate a set of matching phrases appropriate to each. E.g. for LEFT and RIGHT, the verbs face, go, head, make a, pivot, turn, and walk are combined with left and right, respectively. For moving STRAIGHT, the verbs continue, go, head, proceed, walk, and travel are combined with straight and ahead.
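As a concrete sketch of this template expansion: the verb lists below come from the text, but the dictionary layout and function name are hypothetical, not Crafty's actual code.

```python
import random

# Verb lists for each direction type, taken from the description above.
# Only three direction types are shown; the full orientation wheel has more.
MOVE_PHRASES = {
    "LEFT": [v + " left" for v in
             ["face", "go", "head", "make a", "pivot", "turn", "walk"]],
    "RIGHT": [v + " right" for v in
              ["face", "go", "head", "make a", "pivot", "turn", "walk"]],
    "STRAIGHT": [v + " " + adv
                 for v in ["continue", "go", "head", "proceed", "walk", "travel"]
                 for adv in ["straight", "ahead"]],
}

def move_command(direction_type: str, rng: random.Random) -> str:
    """Randomly sample one MOVE phrase for the given direction type."""
    return rng.choice(MOVE_PHRASES[direction_type])

rng = random.Random(0)
print(move_command("LEFT", rng))
```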
To select an instruction for a given heading, we randomly sample from the list of available phrases. To generate a MOVE command for LEFT, we randomly sample from [face left, go left, $\ldots$ walk left].

OBJ. An object’s description is its category (e.g. couch, tv, window).

ORIENT. We use the same direction types shown in Figure 4. When an object is STRAIGHT and BEHIND, we use the phrases ahead of you or in front of you and behind you or in back of you, respectively. For objects to the LEFT or RIGHT, we use two templates DIRECTION_PRE DIRECTION and DIRECTION DIRECTION_POST, where DIRECTION_PRE is selected from [to your, to the, on your, on the] and DIRECTION_POST is the phrase of you. This produces to your left, on the right, right of you, and so on. For SLIGHT LEFT and SLIGHT RIGHT, one of [a bit, slightly, a little, just] is added in front (e.g. a bit to your left).

#### B.4.2 High-level templates

Crafty pieces these low-level textual building blocks together to describe actions. In what follows, MOVE, OBJ, and ORIENT indicate the move command, object phrase and orientation phrase, respectively, discussed above.

Single action. We use templates for three situations: start of a path, heading change in a panorama (intra) and moving between panoramas (inter).

* • Start of path: There are several templates that simply help a wayfinder verify their current position. Ex: you are near a OBJ, ORIENT.
* • Intra: These templates include the movement command followed by a verification of the orientation to an object once the movement is complete. Ex: MOVE. a OBJ is ORIENT.
* • Inter: These templates capture walking from one panorama to another and provide additional object verification. Ex: MOVE, going along to the OBJ ORIENT.

Multi-step actions. We attempt to reduce verbosity by collapsing actions that involve fixation on the same object.

* • Combining actions: Repeated actions are collapsed; e.g. [STRAIGHT, STRAIGHT, RIGHT, STRAIGHT] becomes [STRAIGHT, RIGHT, STRAIGHT].
These produce a composite move command, e.g. proceed forward and make a right and go straight.

* • Describing the object: To orient with respect to the fixated-upon object, we switch on the direction type between the agent and the object at the last action. Ex: for STRAIGHT, we use heading toward the OBJ and for SLIGHT LEFT/RIGHT, we use approaching the OBJ ORIENT.

The final output is the concatenation of the combined move command and the object orientation phrase.

End-of-path instruction templates. The final action is a special situation in that it needs to describe stopping near a salient object. For this, we extract MOVE and OBJ phrases from the last action and use templates such as MOVE and stop by the OBJ.

Full example. Putting it all together, Crafty creates full path instructions such as the following, with relevant high-level templates indicated:

* • (Start) there is a lamp when you look a bit to the left. pivot right, so it is in back of you.
* • (Inter) walk forward, going along to the curtain in front of you.
* • (Intra) curve left. you should see a tv ahead of you.
* • (Multi-Action) go forward and go slightly left and walk straight, passing the curtain to your right.
* • (End-of-Path) continue forward and stop by the couch.

Crafty’s instructions are more verbose than human instructions, but are often easy to follow, provided there are good, visually salient landmarks in the environment to use for orientation.

## Appendix C Human Rater Performance Over Time

Human raters are excellent at learning and adapting to new problems over time. To understand whether our 37 human raters learn to self-correct the perturbed instructions over time and whether that affects the quality of our human wayfinding results, we investigate rater performance as a function of time using the sequence of examples they evaluate. Figure 5 shows the average human rater performance for all of the 9 datasets included in Table 1 of Section 3.
Due to the binary nature of SR, we use a 50-point bin to average each rater’s performance, and then average the results across all raters for each bin. Figure 5 shows that the average rater performance stays flat within the uncertainties and does not show systematic drift over time, indicating no overall self-correction that affects the wayfinding results. For a more granular scrutiny of the individual perturbation methods, in particular the perturbed instructions, we plot in Figure 6 the average human rater performance over time for the three methods: Direction Swap, Entity Swap, and Phrase Swap. Despite greater uncertainties due to far fewer data points used for averaging, the overall human performance for each method still does not drift significantly in a systematic manner. These results indicate that our human wayfinding performance results are reliable and robust over time, which can be attributed to the shuffling of the examples and to the fact that the raters are blind to the perturbation methods.

Figure 5: Average human rater performance for all datasets as a function of time using the sequence of examples each rater has evaluated. Each rated example indicates a time step. We normalize the scores of each rater by their mean value over time to remove the performance bias of each rater in order to better pick up the trend over time. We average 50 examples to get the mean SDTW for each rater due to the discrete nature of success. Left: The mean performance of all raters for each bin. Error bars represent the standard deviation of the mean. Right: Individual rater performance over time. Each line represents a single rater. Despite a few outliers, the overall human rater performance is flat and consistent over time, indicating no self-correction or adaptation to the datasets by human raters.

Figure 6: Average human rater performance for the three instruction perturbation methods as a function of time (number of examples), computed in a similar way as in Figure 5.
We use a 15-point bin to compute the average for each human rater, and aggregate over all raters to get the mean and its uncertainty. The overall human rater performance stays flat and does not drift significantly over time for instruction perturbations.
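The per-rater normalization and binning used for Figures 5 and 6 can be sketched as follows; the data layout and function name are hypothetical, with the 50-example bin size taken from the text.

```python
from statistics import mean, stdev

def binned_performance(rater_scores, bin_size=50):
    """Normalize each rater's scores by their own mean, average within
    consecutive bins of `bin_size` examples, then aggregate across raters.

    `rater_scores` maps rater id -> list of per-example scores (e.g. SR in
    {0, 1}) in evaluation order. Assumes each rater succeeds at least once,
    so the per-rater mean is nonzero.
    Returns, for each bin, (mean over raters, standard deviation of the mean).
    """
    per_rater_bins = []
    for scores in rater_scores.values():
        m = mean(scores)
        norm = [s / m for s in scores]  # remove each rater's performance bias
        bins = [mean(norm[i:i + bin_size])
                for i in range(0, len(norm) - bin_size + 1, bin_size)]
        per_rater_bins.append(bins)
    n_bins = min(len(b) for b in per_rater_bins)
    out = []
    for i in range(n_bins):
        vals = [b[i] for b in per_rater_bins]
        out.append((mean(vals), stdev(vals) / len(vals) ** 0.5))
    return out
```

A flat sequence of bin means (no upward trend) corresponds to the "no self-correction" conclusion drawn above.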
# Multi-messenger Astrophysics with the Pierre Auger Observatory

Michael Schimp1 for the Pierre Auger Collaboration2
1Bergische Universität Wuppertal, Gaußstr. 20, 42119 Wuppertal, Germany
2Observatorio Pierre Auger, Av. San Martín Norte 304, 5613 Malargüe, Argentina (Full author list: http://www.auger.org/archive/authors_2019_02.html)
<EMAIL_ADDRESS>

###### Abstract

While the Pierre Auger Observatory is a very successful instrument for ultra-high energy cosmic ray (UHECR) detection, it is increasingly used as part of various types of multi-messenger searches, to which it contributes searches for air showers induced by atomic nuclei, neutrons, photons, and neutrinos. We present an overview of the multi-messenger activities of the Pierre Auger Observatory, including: searches for ultra-high energy photons and neutrinos detected by the Pierre Auger Observatory in coincidence with gravitational wave events detected by LIGO and Virgo; searches for correlations between the arrival directions of UHECRs detected by the Pierre Auger Observatory and high-energy neutrinos detected by IceCube and ANTARES; searches for Galactic neutrons; and the multi-messenger campaign “Deeper, Wider, Faster”, which aims for common observations by a variety of complementary instruments. We discuss the motivations, methods and results of these searches.

Keywords: multi-messenger astrophysics, Pierre Auger Observatory, neutrinos, photons, ultra-high energy cosmic rays, neutrons

## 1 Introduction

Similar to multi-wavelength observations in photon-based astronomy, which have substantially broadened the general astrophysical understanding since the 1950s, multi-messenger astrophysics has already provided unique astrophysical insights that otherwise would not have been possible.
An early example is the solar storm of 1859, also known as the Carrington Event, establishing the existence of solar flares. It was visible as very bright white light (messenger: photons) close to a set of sunspots [1, 2] and additionally in the form of unusually strong auroras (messenger: cosmic rays, inducing the auroras) that were observed even at low latitudes, for example below 9°N in Colombia [3]. The most remarkable recent discoveries in multi-messenger astrophysics were made in the context of the detection of gravitational waves (GWs) and gamma-rays from a binary neutron star (BNS) merger [4], triggering a large search campaign for photons across a very wide range of the electromagnetic (EM) spectrum as well as searches for other messengers [5], consequently leading to the observation of a kilonova as a counterpart of the BNS merger [6]. Another so-far unique astrophysical multi-messenger observation is the detection of a high-energy neutrino from the distant blazar TXS-0506+056 in coincidence with a flare of high-energy photons in 2017 [7, 8]. Investigations additionally revealed a period of significantly enhanced neutrino emission from this source in 2014 and 2015 [9], corroborating the assumption that TXS-0506+056 is a high-energy neutrino source. The Pierre Auger Observatory is the largest cosmic ray detector in the world, regularly used for the detection of extensive air showers (EASs) induced by ultra-high-energy cosmic rays (UHECRs), atomic nuclei with energies roughly on the EeV scale. As these are of extragalactic origin [10], they travel long distances through magnetic fields from their sources to the Earth, changing their directions, and therefore cannot in general be used as precise pointers indicating their origin.
Assuming that other messengers are created at the acceleration sites of UHECRs, or during their propagation, multi-messenger astrophysical observations are a promising approach to answering the long-standing questions regarding the sites and mechanisms of ultra-high-energy particle acceleration in the universe. In the following sections, we review various contributions of the Pierre Auger Observatory to multi-messenger astrophysical observations.

## 2 Ultra-High Energy Neutrino and Photon Follow-Up Searches of LIGO/Virgo Gravitational Wave Events

Searches for ultra-high energy (UHE; $>0.1$ EeV) neutrinos and photons with the Pierre Auger Observatory have been successfully performed several times [11, 12, 13, 14]. The searches rely on the discrimination of EASs induced by neutrinos or photons from those induced by UHECRs, which are by far the most common EASs measured with the Pierre Auger Observatory. The discrimination is based on the fact that these EASs develop differently in the atmosphere. Neutrinos interact deeper in the atmosphere than the other particles, leading to more EM and hadronic particles reaching the Pierre Auger Surface Detector (SD) than in the case of UHECR-induced EASs, eventually leaving longer-lasting light signals in the photomultiplier tubes (PMTs). This distinction is more precise and efficient for inclined showers, leading to a strongly varying sensitivity across the sky [12]. Photon-induced EASs are distinguished from others based on several measures such as steeper lateral distribution functions, deeper shower maxima, and smaller footprints. A multi-messenger search performed with photons and neutrinos at the Pierre Auger Observatory is the follow-up of gravitational wave (GW) events detected by the LIGO and Virgo (LV) observatories. These were caused by the coalescence and merger of compact binaries, mostly of binary black holes (BBHs) but also of a binary neutron star (BNS).
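When such a follow-up search finds no candidate events, the null result is typically converted into an upper limit on the neutrino fluence. A generic zero-background recipe (a standard textbook treatment, stated here as an assumption rather than the collaboration's exact statistical method) is:

```latex
% With zero observed candidates and negligible expected background, the
% 90% C.L. upper limit on the mean number of signal events follows from
% Poisson statistics:
P(0 \mid \mu) = e^{-\mu} = 0.1
\quad\Longrightarrow\quad
\mu_{90} = \ln 10 \approx 2.30 .
% For an assumed E^{-2} signal spectrum, E^2\,\mathrm{d}N/\mathrm{d}E = k,
% the expected event count is N = k \int \mathcal{E}(E)\, E^{-2}\, \mathrm{d}E,
% so the fluence limit is
k_{90} = \frac{\mu_{90}}{\int \mathcal{E}(E)\, E^{-2}\, \mathrm{d}E},
% where \mathcal{E}(E) is the energy-dependent exposure toward the source.
```

The declination dependence of the limits reported below enters through the exposure $\mathcal{E}(E)$.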
Before VHEPA 2019, during the first two observational runs of LV, called O1 and O2, GW event information was sent in the form of alerts only to parties that had signed a memorandum of understanding with the LV collaborations, one of which was the Pierre Auger Collaboration. Each such alert contained the following estimated parameters if applicable:

* • Time of merger
* • Masses of merged objects and remnant
* • Distance of emitting system
* • Sky localization probability distribution

The neutrino follow-up searches were performed by applying the default neutrino search inside the 90% C.L. most probable localization region in the sky during a time range from 500 s before until 1 day after the merger. No neutrino candidates have been found for any of the GW events during O1 and O2. For each of the BBH mergers, the exposure is used to calculate limits on the UHE neutrino fluence as a function of the declination of the true source, which is often known only with very limited precision (tens of degrees). As an example, Fig. 1 shows the results for GW150914 [15], the first GW event from a compact binary coalescence ever detected [16].

Figure 1: Black lines represent the 90% C.L. limit on the UHE neutrino fluence from GW150914 as a function of the source declination. Declinations of the 90% C.L. sky localization of the source are highlighted in blue. [15]

One of the GW events in O2, GW170817, has been associated with the coalescence of two objects with masses in the typical neutron star mass range ($\sim$1–2 $M_{\odot}$) [4]. Follow-up observations of this event with photons yielded signatures of a kilonova caused by a BNS merger at the inferred location of the GW event via detections in a very large wavelength range [6, 5]: less than two seconds after the merger, a short GRB was observed, whereas at various lower photon energies, light curves of the source have been recorded for several weeks.
Therefore, in agreement with IceCube, the time range of the search for ultra-high energy neutrinos with the Pierre Auger Observatory from this source was extended until 14 days after the merger. Fig. 2 shows the fluence limits (90% C.L.) for the time ranges of $\pm 500$ s around the merger, and 0 to 14 days after the merger, respectively [17].

Figure 2: 90% C.L. limits on the UHE neutrino spectral fluence from GW170817 as a function of energy are shown as black angled lines. Model UHE neutrino spectral fluences are represented by the colored smooth lines with the denoted off-axis angles of the merger system. [17]

As the UHE neutrino sensitivity of the Pierre Auger Observatory varies across the sky, and the BNS merger occurred at a time and location of high sensitivity, the fluence limits for the $\pm 500$ s time range are much more competitive than for the 14 day time window, where any short-term sensitivity enhancements are averaged out due to the moving field of view of the observatory. The photon follow-up searches have not yet been published. In addition to the process described for neutrinos, only GW events that are relatively close by or well localized will be taken into account in order to prevent false-positive detections.

## 3 Searches for Correlations between Ultra-High Energy Cosmic Rays and High-Energy Neutrinos

UHECRs are subject to deflection in magnetic fields due to their charge, which makes finding their sources difficult. However, as the deflection decreases with energy, correlations between particularly high-energy UHECRs and high-energy neutrinos can be expected. To search for these correlations, UHECRs with energies $\gtrsim 50~\mathrm{EeV}$ detected by the Pierre Auger and the Telescope Array observatories, and high-energy neutrinos detected by IceCube, were analyzed in a joint work of the three collaborations [18].
Two analyses of the correlations between these UHECRs and high-energy neutrinos have been applied: a cross-correlation analysis and a stacking likelihood analysis. The neutrino sample contains track-like and cascade-like events, which are analyzed separately. Taking an isotropic UHECR flux as the null hypothesis, for track-like neutrino events, both analyses yielded no significant results. However, for cascade-like neutrino events, both analyses yielded significant excesses of correlations. The most significant excess in the cross-correlation analysis was found for a maximum angular separation of $22^{\circ}$, with a post-trial $p$-value of $5.4\cdot 10^{-3}$. Fig. 3 shows the excess of pairs for the cross-correlation analysis as a function of the maximum angular separation [18].

Figure 3: Relative excesses of pairs as a function of the maximum angular separation are shown for the cross-correlation analysis performed with cascade-like events. Hatched areas indicate expected fluctuations from isotropically distributed UHECRs. [18]

In the stacking likelihood analysis, the most significant excess was found for a UHECR deflection of $\frac{6^{\circ}}{E/100\,\mathrm{EeV}}$, with a post-trial $p$-value of $2.2\cdot 10^{-2}$. Assuming, alternatively, a null hypothesis of an isotropic high-energy neutrino flux, very similar levels of confidence for the excesses were found [18]. In [19], a study published after VHEPA 2019 that also includes the ANTARES neutrino telescope, the deviations from the null hypotheses were found to be much weaker than in [18], indicating that the correlations searched for are not very strong. This can be explained by several factors, e.g. uncertainties of the magnetic fields responsible for UHECR deflection, and the fact that UHECRs detected at Earth come from not more than a few hundred Mpc away, while possibly a large fraction of the detected neutrinos originates from much greater distances.
For these neutrinos, a correlation would be unexpected, diluting the overall effect.

## 4 Search for a Flux of Ultra-High Energy Neutrons from the Galaxy

Individual neutron-induced EASs are indistinguishable from those induced by protons. However, as neutrons travel in straight lines, a flux of neutrons could be detected via an excess of the number of EASs from the directions of their sources. As the neutron decay length is $\sim 9.1~\mathrm{kpc}\cdot E/\mathrm{EeV}$ [20], neutrons with energies of a few EeV can reach the Earth from the entire Galaxy but not from much further away. Therefore, 11 classes of sources in the Galaxy have been used as combined target sets for neutron searches with the Pierre Auger Observatory [21]. The chosen target sets are the Galactic center and plane as well as known photon source classes like pulsars and X-ray binaries, amounting to several hundred sources in total. The choice of photon sources as probable neutron sources is motivated by the fact that both messengers are produced in photo-hadronic interaction scenarios. The searches in the first 9.75 years of data taken by the observatory have yielded no significant excess of the number of EASs from the target sets with respect to the neutron-free expectation. However, from the sensitivity to the target sources, 95% C.L. upper limits on the energy flux in neutrons of 0.1 eV cm$^{-2}$ s$^{-1}$ (Galactic center) through 0.56 eV cm$^{-2}$ s$^{-1}$ (Galactic plane) have been deduced for the different target sets. In all cases, these limits already exclude energy fluxes on the level of the measured TeV photon energy flux from the target sets [21, 22]. At energies of a few EeV, the UHECR composition is consistent with a large proton fraction [23]. The strong limit on the neutron flux – in particular that it is much lower than the TeV photon flux – leads to the exclusion of an $E^{-2}$ Fermi-acceleration of protons up to energies of several EeV. Ref.
[21] also provides an interpretation in terms of an effective neutron-to-proton luminosity ratio from the Galactic plane. For this, several theoretical assumptions regarding the proton emission and neutron production efficiency are made and related to the neutron flux limits, finally yielding a 95% C.L. upper limit for this ratio of $0.6\%$, placing a strong constraint on ultra-high energy proton production in our Galaxy.

## 5 Ultra-High Energy Cosmic Rays for the Deeper, Wider, Faster Program

The Deeper, Wider, Faster (DWF) program is a multi-instrument multi-messenger observation project. With more than 30 associated observatories involved, the purpose of DWF is the simultaneous observation of sky regions by a large number of instruments, combining their complementary sensitivities. This allows for simultaneous sensitive and wide-field measurements of multiple messengers in a variety of energy ranges [24]. The observations target transient sources, such as fast radio bursts, which last less than a second and are therefore barely possible to follow up with instruments that are pointing in other directions and would first need to be re-pointed to the region of interest. Furthermore, the simultaneous common observation allows transient sources to be observed directly before their enhanced emission, which is naturally missed by instruments that need to adjust their pointing. At high energies, DWF is also used to search for transients lasting seconds to hours. Candidate identification is possible in seconds to minutes, and a fast response time of a few minutes allows for further follow-up observations with a short delay. The Pierre Auger Observatory contributes to DWF by sharing all detected events in the field of view of DWF. The large field of view of the Pierre Auger Observatory lets it contribute to DWF during a large fraction of the observation time.
So far, no significant coincidences of UHECR events with other events in DWF have been found. However, the possibility of detecting such coincidences makes DWF an interesting approach for multi-messenger observations with unprecedented combinations of messengers, including UHECRs. ## References * [1] R. C. Carrington, MNRAS 20, 13 (1859). * [2] R. Hodgson, MNRAS 20, 15 (1859). * [3] F. Moreno Cárdenas, S. Cristancho Sánchez, and S. Vargas Domínguez, ASR, 57, 257 (2016). * [4] B. P. Abbott et al., Astrophys. J. Lett. 848, L13 (2017). * [5] B. P. Abbott et al., Astrophys. J. Lett. 848, L12 (2017). * [6] P. A. Evans et al., Science 358, 1565 (2017). * [7] The IceCube Collaboration, Science 361, eaat1378 (2018). * [8] Y. T. Tanaka, S. Buson, and D. Kocevski for the Fermi-LAT collaboration, Astronomer’s Telegram 10791 (2017). * [9] The IceCube Collaboration, Science 361, 147 (2018). * [10] A. Aab et al. (The Pierre Auger Collaboration), Science 357, 1266 (2017). * [11] A. Aab et al. (The Pierre Auger Collaboration), JCAP 10, 022 (2019). * [12] A. Aab et al. (The Pierre Auger Collaboration), JCAP 11, 004 (2019). * [13] P. Abreu et al. (The Pierre Auger Collaboration), JCAP 04, 009 (2017). * [14] J. Rautenberg for the Pierre Auger Collaboration, Proc. 36th Int. Cosmic Ray Conf., PoS(ICRC2019)398. * [15] A. Aab et al. (The Pierre Auger Collaboration), Phys. Rev. D 94, 122007 (2016). * [16] B. P. Abbott et al., Phys. Rev. Lett. 116, 061102 (2016). * [17] A. Albert et al. (the ANTARES, IceCube, and Pierre Auger collaborations) * [18] I. Al Samarai and G. Golup for the IceCube Collaboration, the Pierre Auger Collaboration, the Telescope Array collaboration, and the ANTARES Collaboration, Proc. 35th Int. Cosmic Ray Conf., PoS(ICRC2017)961. * [19] A. Barbano for the IceCube Collaboration, the Pierre Auger Collaboration, the Telescope Array collaboration, and the ANTARES Collaboration, Proc. 36th Int. Cosmic Ray Conf., PoS(ICRC2019)842. * [20] M. Tanabashi et al. 
(Particle Data Group), Phys. Rev. D 98, 030001 (2018). * [21] A. Aab et al. (The Pierre Auger Collaboration), Astrophys. J. Lett. 789, L34 (2014). * [22] J. A. Hinton and W. Hofmann, Ann. Rev. Astron. Astrophys. 47, 523 (2009). * [23] A. Yushkov for the Pierre Auger Collaboration, Proc. 36th Int. Cosmic Ray Conf., PoS(ICRC2019)482. * [24] I. Andreoni and J. Cooke, Proc. Int. Astron. Union 14(S339), 135 (2017).
# Finite Sample Analysis of Two-Time-Scale Natural Actor-Critic Algorithm

Sajad Khodadadian, Thinh T. Doan, Justin Romberg, Siva Theja Maguluri
Georgia Institute of Technology; Virginia Tech

###### Abstract

Actor-critic style two-time-scale algorithms are one of the most popular methods in reinforcement learning, and have seen great empirical success. However, their performance is not completely understood theoretically. In this paper, we characterize the _global_ convergence of an online natural actor-critic algorithm in the tabular setting using a single trajectory of samples. Our analysis applies to very general settings, as we only assume ergodicity of the underlying Markov decision process. In order to ensure enough exploration, we employ an $\epsilon$-greedy sampling of the trajectory. For a fixed and small enough exploration parameter $\epsilon$, we show that the two-time-scale natural actor-critic algorithm has a rate of convergence of $\tilde{\mathcal{O}}(1/T^{1/4})$, where $T$ is the number of samples, and this leads to a sample complexity of $\tilde{\mathcal{O}}(1/\delta^{8})$ samples to find a policy that is within an error of $\delta$ from the _global optimum_. Moreover, by carefully decreasing the exploration parameter $\epsilon$ as the iterations proceed, we present an improved sample complexity of $\tilde{\mathcal{O}}(1/\delta^{6})$ for convergence to the global optimum.

## 1 Introduction

In reinforcement learning (RL), an agent operating in an environment, modeled as a Markov decision process (MDP), tries to learn a policy that maximizes its long-term reward. Methods for solving this optimization problem include value function methods, such as $Q$-learning [60], and policy space methods, such as TRPO [51], PPO [52], and actor-critic [33].
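For concreteness, the long-term reward objective that policy-space methods maximize can be written as the standard discounted value function; this is textbook notation, not necessarily this paper's exact formulation:

```latex
V^{\pi}(s) \;=\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)
\;\middle|\; s_0 = s,\; a_t \sim \pi(\cdot \mid s_t),\;
s_{t+1} \sim P(\cdot \mid s_t, a_t)\right],
\qquad \gamma \in (0, 1),
% with the state-action variant
Q^{\pi}(s, a) \;=\; r(s, a) + \gamma\,
\mathbb{E}_{s' \sim P(\cdot \mid s, a)}\!\left[V^{\pi}(s')\right],
```

where $Q^{\pi}$ is the quantity a critic estimates.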
Policy space methods explicitly search for the maximum of the value function $V^{\pi}$, which codifies the expected long-term reward, through iterative optimization over the policy $\pi$. Although in general $V^{\pi}$ is a nonconvex function of $\pi$ [1], global optimality can be obtained by employing either gradient descent [1, 41], mirror descent [23, 53], or natural gradient descent [1]. These methods, however, assume access to an oracle that returns the gradient of the value function for any given policy. In many practical scenarios, and in particular when the parameters of the MDP are only partially known, these gradients have to be estimated from observations or simulations. Actor-critic (AC) techniques integrate estimation of the gradient into the policy search. In this framework, a critic estimates the value ($Q$-function) of a policy, usually through a temporal difference iteration. The actor then uses this estimate to form a gradient to improve the policy. AC algorithms have been observed to converge quickly relative to other methods [59, 2], and have enjoyed success in several applications including robotics [25], computer games [20], and power networks [22]. AC algorithms can be classified into batch vs. online. In the batch setting, in each iteration of the AC, the critic evaluates the policy using a set of collected data. This type of batch update, however, cannot be implemented in an online manner, and requires simulations that need to be restarted in specific states, making its implementation appropriate in artificial environments such as Atari games [52], but not in scenarios that require the agent to “learn as they go”. A truly online and two-time-scale AC variant was first proposed in [33], where at every iteration the actor and critic updates depend only on one sample observed from the environment using the current policy. Later [45, 7] presented a version of this algorithm using a natural policy gradient. 
These methods can be viewed as two-time-scale stochastic approximation (SA) algorithms, where the actor and the critic operate at the “slow” and “fast” time scales, respectively. AC algorithms often use low-order approximations for the value and policy functions. While this type of function approximation can dramatically simplify the learning process, thus allowing us to apply the algorithm to complex, real-world problems, these approximations introduce non-vanishing, systematic errors, as the truly optimal policy typically does not lie in the set of functions considered. In this paper, however, we are completely focused on recovering the globally optimal policy, and so we will operate in the “tabular setting” that considers every possible distribution over the (finite number of) actions for every possible state.

Main contributions:

* • We analyze the two-time-scale natural AC algorithm in the tabular setting. Our setting can be seen as a two-time-scale linear stochastic approximation with time-varying Markovian noise. Unlike several recent papers, we do not make an extensive set of assumptions. The _only_ assumption we make is the ergodicity of the underlying Markov chain under all policies.
* • Our analysis shows the importance of exploration in AC type algorithms. We argue that a naive execution of natural AC fails to properly explore all of the state-action pairs, a fact that we illustrate with a simple example. Therefore, we employ $\epsilon$-greedy exploration to guarantee global convergence.
* • For a fixed and small enough exploration parameter $\epsilon$, we show that using $T$ samples, the two-time-scale natural AC Algorithm 1 converges to within $\tilde{\mathcal{O}}\left(\frac{1}{\epsilon T^{1/4}}+\epsilon\right)$ of the global optimum. We show that for carefully chosen $\epsilon$, Algorithm 1 finds a policy within a $\delta$-ball around the global optimum using $\tilde{\mathcal{O}}(1/\delta^{8})$ samples.
* • We show that using a time-varying $\epsilon$ improves the sample complexity of Algorithm 1 to $\tilde{\mathcal{O}}(1/\delta^{6})$. ### 1.1 Related works Stochastic approximation: This method was first introduced in [50]. Asymptotic convergence of stochastic approximation (SA) was studied in [9, 5]. Recently, there has been a flurry of work on the finite-time analysis of SA for both linear [44] and nonlinear [15, 19] operators, under both i.i.d. [39] and Markovian [29] noise, with batch [31] or two-time-scale [9] updates. Our setting in this paper can be categorized as a linear two-time-scale SA with noise generated from a time-varying Markov chain. Actor-critic: The AC algorithm was first proposed in [33] as a two-time-scale stochastic approximation [8, 10, 30, 18] variant of the policy gradient algorithm [55]. In this algorithm a faster time scale is used to collect samples for gradient estimation, and a slower time scale is used to perform an update to the policy. In this paper we are interested in such a two-time-scale version of the natural policy gradient [28]. While natural gradient descent is closely related to mirror descent [48, 24], in the context of Markov decision processes the two are known to be identical [1, 23]. Even though the objective $V^{\pi}$ is a nonconvex function of the policy $\pi$, the convergence rate of natural policy gradient to the global optimum in the planning setting (when the exact gradients are known) has recently been established in [1, 23, 13, 12]. Natural AC, a variant of AC with a natural policy gradient in the actor, was studied in [43, 56, 45, 7]. While the asymptotic convergence of AC methods, including natural AC, is well-understood [61, 9, 33, 7, 70], their finite-time convergence was largely unknown until recently [54, 36, 69, 31, 68, 47, 35, 38, 58, 64, 65, 40, 62, 14, 66].
The authors in [68, 47, 35, 66] provide the convergence rate of AC where the parameter of the critic is updated using a number of collected samples instead of only a single sample. Such a setting, referred to as batch AC, cannot be implemented in an online fashion, since at any iteration the critic has to implement the current policy for a number of time steps to collect enough data. A similar batch approach was used in [58, 65, 64, 40, 36, 31, 14] to study natural AC and in [38, 54] to study the TRPO algorithm, which is a variant of mirror descent. The authors in [31, 14] study the finite-time convergence bound of the off-policy natural AC algorithm under a constant step size. However, due to the constant step size they do not obtain convergence to the global optimum. [36, 67] study the finite-time convergence of a regularized variant of natural AC with batch data updates. In [62] the convergence of two-time-scale AC is analyzed; however, only convergence to a local optimum is established. In this paper, we study the original AC method [33] without considering a batch update. In other words, data is collected through a single trajectory of a time-varying Markov chain and the update is performed in a two-time-scale manner. To the best of our knowledge, the only paper in the literature that considers such a setting is [62], which studies the AC algorithm under function approximation. Although their results are remarkable, they make several assumptions on the space of approximation functions. In Section 2.1 we will explain why these assumptions cannot be satisfied in the tabular setting with zero approximation error. Another related work is [65], where the authors claim to have a single-trajectory algorithm. However, as explained in [31, Appendix C], the proposed algorithm in [65] is not single trajectory. $\epsilon$-greedy: One of the differences between our algorithm and previous work is the inclusion of $\epsilon$-greedy exploration in natural AC.
This greedy step ensures sufficient exploration by our algorithm, while keeping the algorithm online. $\epsilon$-greedy [42] is commonly employed in various settings such as $Q$-learning [63], multi-armed bandits [34, 57] and contextual bandits [11]. In these algorithms, $\epsilon$-greedy is usually employed in order to ensure sufficient exploration [42]. In this paper, we show that this greedy step can ensure the global convergence of AC as well, and we characterize the rate of this convergence. To summarize, the work in the literature over the last two decades has looked at various challenges posed by AC algorithms under various assumptions and different simplifying models. This paper studies the greedy version of this algorithm and consequently addresses in one place several analytical challenges, which include: (i) two-time-scale analysis, (ii) an online or single-trajectory update, (iii) Markovian data samples, (iv) a time-varying Markov chain, (v) asynchronous updates in the tabular setting, (vi) diminishing step sizes, and (vii) global convergence with minimal assumptions. ## 2 Natural Actor-Critic Methods for Reinforcement Learning The environment of our RL problem is modeled by a Markov Decision Process (MDP) specified by $\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ are finite sets of states and actions, $\mathcal{P}$ is the set of transition probability matrices, $\gamma\in(0,1)$ is the discount factor, and $\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]$ is the reward function, where without loss of generality we assume that the rewards are in $[0,1]$. We focus on randomized stationary policies, where each policy $\pi$ assigns to each state $s\in\mathcal{S}$ a probability distribution $\pi(\cdot\,|\,s)$ over $\mathcal{A}$. Each policy $\pi$ on the MDP induces a Markov chain on the states with transition probabilities $P^{\pi}(s^{\prime}|s)=\sum_{a}\mathcal{P}(s^{\prime}|s,a)\pi(a|s)$.
Assuming that this Markov chain is irreducible, it induces a stationary distribution over states, which we denote by $\mu^{\pi}$. By definition, this distribution satisfies $(\mu^{\pi})^{T}P^{\pi}=(\mu^{\pi})^{T}$ [26]. For a fixed policy $\pi$, a sample trajectory of the states and actions is generated according to $S_{k+1}\sim\mathcal{P}(\cdot|S_{k},A_{k}),A_{k+1}\sim\pi(\cdot|S_{k+1})$. The value function associated with $\pi$ and the state $s$ is defined as the expected discounted cumulative reward, i.e. $V^{\pi}(s)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}\mathcal{R}(S_{k},A_{k})\,|\,S_{0}=s,A_{k}\sim\pi(\cdot|S_{k})\right]$. Furthermore, given an initial state distribution $\mathrm{P}$ over $\mathcal{S}$, we denote the expected cumulative reward for a policy $\pi$ as $V^{\pi}(\mathrm{P})$. The goal is to find a policy that maximizes this expected cumulative reward: $\displaystyle\pi^{*}\in\operatorname*{arg\,max}_{\pi}V^{\pi}(\mathrm{P}).$ (1) Throughout the paper, we denote $V^{\pi^{*}}$ as $V^{*}$. It can be shown [46] that the optimal policy $\pi^{*}$ is independent of the initial distribution $\mathrm{P}$, and hence throughout this paper we assume $\mathrm{P}$ is fixed and we denote $V^{\pi}\equiv V^{\pi}(\mathrm{P})$. It can be shown that the value function can be written as $V^{\pi}=\frac{1}{1-\gamma}\sum_{s,a}d^{\pi}(s)\pi(a|s)\mathcal{R}(s,a)$, where $d^{\pi}$, called the discounted state visitation distribution [55], is defined as $d^{\pi}(s)=(1-\gamma)\sum_{k=0}^{\infty}\gamma^{k}P^{\pi}(S_{k}=s\,|\,S_{0}\sim\mathrm{P})$, with $P^{\pi}(S_{k}=s\,|\,S_{0}\sim\mathrm{P})$ being the probability that $S_{k}=s$ after executing policy $\pi$ starting from the initial distribution $\mathrm{P}$ at $k=0$. Throughout, we denote $d^{\pi^{*}}_{\mathrm{P}}$ as $d^{*}$.
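To make these definitions concrete, here is a minimal numpy sketch that computes $V^{\pi}$, $Q^{\pi}$, and the discounted visitation $d^{\pi}$ by solving the corresponding linear systems. The two-state, two-action MDP (arrays `P`, `R`, `pi`, `rho`) is a hypothetical example chosen for illustration, not a construction from the paper:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustration only):
# P[s, a, s'] are transition probabilities, R[s, a] rewards in [0, 1].
S, A, gamma = 2, 2, 0.9
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[0.0, 0.5],
              [1.0, 0.2]])
pi = np.full((S, A), 0.5)       # uniform policy pi(a|s)
rho = np.array([1.0, 0.0])      # initial distribution P

# Induced chain P^pi(s'|s) = sum_a P(s'|s,a) pi(a|s), and r^pi(s).
P_pi = np.einsum('sap,sa->sp', P, pi)
r_pi = (pi * R).sum(axis=1)

# V^pi solves the Bellman equation V = r^pi + gamma * P^pi V.
V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# Q^pi(s, a) = R(s, a) + gamma * sum_{s'} P(s'|s,a) V(s').
Q = R + gamma * np.einsum('sap,p->sa', P, V)

# d^pi = (1-gamma) * sum_k gamma^k ((P^pi)^T)^k rho, via a linear solve;
# the (1-gamma) factor makes d^pi a probability distribution.
d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
# Identity above: V^pi(rho) = (1/(1-gamma)) * sum_{s,a} d(s) pi(a|s) R(s,a).
```

The linear solves are exact only in this small tabular setting; they play the role of the oracle that the sampling-based algorithm of Section 2.1 must approximate.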
Given policy $\pi_{t}$, the Natural Policy Gradient (NPG) algorithm [28, 1] under the softmax parametrization updates the policy in every time step according to $\pi_{t+1}(a|s)=\frac{\pi_{t}(a|s)\exp(\beta_{t}Q^{\pi_{t}}(s,a))}{\sum_{a^{\prime}}\pi_{t}(a^{\prime}|s)\exp(\beta_{t}Q^{\pi_{t}}(s,a^{\prime}))},\forall s,a,$ (2) where $Q^{\pi}(s,a)=\mathbb{E}[\sum_{k=0}^{\infty}\gamma^{k}\mathcal{R}(S_{k},A_{k})\,|\,S_{0}=s,A_{0}=a,A_{k}\sim\pi(\cdot|S_{k})]$ is the $Q$-function corresponding to the policy $\pi$. Here $\beta_{t}$ is the step size, which may depend on time. The update rule in (2) has multiple interpretations [32]. Firstly, as explained in [23], it can be seen as the mirror descent update for the problem in (1) using negative entropy as the divergence-generating function. Secondly, the NPG update in (2) can be seen as a pre-conditioned gradient ascent with softmax parameterization, where the pseudoinverse of the Fisher information matrix [49] multiplies the gradient as a preconditioner [28]. While mirror descent and natural gradient descent are distinct but related algorithms in general [48, 23, 24], they are both identical to (2) in our setting of solving the problem in (1) using the softmax policy parametrization. In the setting above, the NPG method finds a globally optimal policy with a provable rate; [1] shows that after $T$ iterations of the update (2) with constant step size $\beta_{t}=\beta$, it finds a policy whose expected cumulative reward is within $\mathcal{O}(1/T)$ of the optimal policy. The convergence bound in [1] is for the “MDP setting”, where the $Q$-function is computed exactly for every candidate policy $\pi_{t}$. In the vast majority of reinforcement learning applications, however, $Q^{\pi_{t}}$ has to be estimated from simulations or observations.
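Operationally, (2) is a multiplicative-weights update on each row $\pi(\cdot|s)$: reweight by $\exp(\beta Q(s,\cdot))$ and renormalize. A small numpy sketch (the $Q$-table and step size below are illustrative assumptions, and $Q$ is held fixed here, whereas NPG would recompute $Q^{\pi_{t}}$ after every update):

```python
import numpy as np

def npg_update(pi, Q, beta):
    """One softmax NPG step as in (2), applied row-wise (one row per state).

    Working in log space avoids overflow when beta * Q is large."""
    logits = np.log(pi) + beta * Q
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

# Illustrative fixed Q-table: 3 states, 2 actions.
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
pi = np.full((3, 2), 0.5)
for _ in range(50):
    pi = npg_update(pi, Q, beta=0.5)
# pi concentrates geometrically on argmax_a Q(s, a); exact ties (last
# row) remain uniform.
```

The geometric concentration visible here is exactly the behavior that, without exploration, can lock out an action whose estimated value is temporarily poor (Section 4).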
### 2.1 Two-Time-Scale Natural Actor-Critic Algorithm In order to perform the NPG update (2) in an unknown environment, one can first estimate $Q^{\pi_{t}}$ using a batch of state-action-reward samples. However, using a batch of data for the update of the $Q$-function has practical drawbacks. In particular, sampling the batch data requires the state of the system to be reset frequently, which is not possible in environments such as robotics. A truly online, completely data-driven technique that keeps a running estimate of the $Q$-functions while performing NPG updates based on these estimates is presented in Algorithm 1 with $\epsilon_{t}=0$. In this algorithm, the “critic” implements an asynchronous update to the $Q$-function, where the only entry in the table that is changed at each iteration is the one corresponding to the observed state-action pair $(S_{t},A_{t})$. After this, the “actor” uses the estimated $Q$-table to update the policy using a natural policy gradient update of the form in (2). The critic and the actor use different step sizes ($\alpha_{t}$ and $\beta_{t}$, respectively), a fact that is crucial to maintaining the algorithm’s stability. Due to the existence of two different step sizes, Algorithm 1 can be viewed as a variant of two-time-scale stochastic approximation [8]. Intuitively, the critic has to collect information about the gradient at a faster time scale than the time scale at which the actor executes the gradient update; in other types of policy gradient algorithms, this takes the form of multiple samples being generated in an inner loop. Since the AC method performs both updates from a single sample, we can achieve a similar effect by having the actor take a more conservative step. One of the main differentiators of our work from the existing literature on the convergence of AC algorithms is the update of the policy, which mixes in a small multiple of the uniform distribution.
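The single-trajectory loop just described, with the uniform mixing included, can be sketched end-to-end as follows. The toy two-state MDP, the constant exploration parameter, and all numerical values are illustrative assumptions (the step-size exponents mimic the $\nu=0.5$, $\sigma=0.75$ choice of Corollary 3.1.1); this is not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (illustration): P[s, a, s'] transitions,
# Rw[s, a] rewards in [0, 1].
S, A, gamma = 2, 2, 0.9
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
Rw = np.array([[0.0, 0.5],
               [1.0, 0.2]])

T = 20000
Q = np.zeros((S, A))            # critic's running Q-table
log_pi = np.zeros((S, A))       # actor's policy pi_t, kept in log space
eps = 0.1                       # constant exploration parameter
s, a = 0, rng.integers(A)
for t in range(T):
    alpha = 1.0 / (t + 1) ** 0.5    # critic (fast) step size
    beta = 1.0 / (t + 1) ** 0.75    # actor (slow) step size
    # Current policy pi_t and epsilon-greedy mixture hat_pi_t.
    pi = np.exp(log_pi - log_pi.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    hat_pi = eps / A + (1 - eps) * pi
    # One environment transition; the action comes from hat_pi_t.
    s2 = rng.choice(S, p=P[s, a])
    a2 = rng.choice(A, p=hat_pi[s2])
    # Critic: asynchronous single-sample TD update of the (s, a) entry.
    Q[s, a] += alpha * (Rw[s, a] + gamma * Q[s2, a2] - Q[s, a])
    # Actor: NPG step (2); normalization happens when pi is recomputed
    # from log_pi at the top of the loop.
    log_pi += beta * Q
    s, a = s2, a2
```

Note that the whole run consumes one trajectory: no state resets, and each iteration touches exactly one entry of the $Q$-table.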
This mixing is necessary to ensure sufficient exploration of the state-action space. In this algorithm, at each iteration $t$, the action $A_{t+1}$ is sampled from the policy $\hat{\pi}_{t}$, which is a convex combination of the policy $\pi_{t}$ and the uniform distribution. This strategy ensures that the sampling policy $\hat{\pi}_{t}$ places weight at least $\epsilon_{t}/|\mathcal{A}|$ on every action, even though some elements of $\pi_{t}$ might be arbitrarily small. Furthermore, by introducing this step, we can dispense with the technical assumptions made in previous works. In Section 4, we give an example of an MDP with 4 states and 2 actions where a naive implementation of AC without this $\epsilon$-greedy exploration step results in a suboptimal policy. In the existing literature, this exploration is ensured through more stringent conditions on the problem structure, which, if satisfied, can guarantee enough exploration by the AC algorithm (Assumption 1 in [65] and [64], Assumption 4.2 in [40], Assumption 4.3 in [21], and Assumption 4.1 in [62]). These assumptions, however, need not be satisfied in the tabular MDP setting. In particular, all these assumptions require $\pi_{t}(a|s)$ to be bounded away from zero for all states and actions $s,a$ uniformly over time. However, we know that in an MDP there always exists a deterministic policy which is globally optimal. This means that $\pi_{t}(a|s)$ can very well go to zero for some state-action pair $s,a$, and the assumption can be violated. Algorithm 1 Two-time-scale natural AC algorithm with $\epsilon$-greedy exploration Input: Iteration number $T>0$, step sizes $\alpha_{t},\beta_{t}$, exploration parameter $\epsilon_{t}$, $Q_{0}\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$, $\pi_{0}(a|s)=\hat{\pi}_{0}(a|s)=\frac{1}{|\mathcal{A}|},\forall s,a$.
Draw $S_{0}$ from some initial distribution and $A_{0}\sim\pi_{0}(\cdot|S_{0})$ for t=0,1,2,…,T do Sample $S_{t+1}\sim\mathcal{P}(\cdot|S_{t},A_{t}),A_{t+1}\sim{\hat{\pi}}_{t}(\cdot|S_{t+1})$ $\alpha_{t}(s,a)=\alpha_{t}\mathbbm{1}\{S_{t}=s,A_{t}=a\},\forall s,a$ $Q_{t+1}(s,a)=Q_{t}(s,a)+\alpha_{t}(s,a)\big{[}\mathcal{R}(S_{t},A_{t})+\gamma Q_{t}(S_{t+1},A_{t+1})-Q_{t}(S_{t},A_{t})\big{]},\forall s,a$ $\pi_{t+1}(a|s)=\frac{\pi_{t}(a|s)\exp(\beta_{t}Q_{t+1}(s,a))}{\sum_{a^{\prime}}\pi_{t}(a^{\prime}|s)\exp(\beta_{t}Q_{t+1}(s,a^{\prime}))},\forall s,a$ ${\hat{\pi}}_{t+1}=\frac{\epsilon_{t}}{|\mathcal{A}|}+(1-\epsilon_{t})\pi_{t+1}$ end for Sample $\hat{T}$ from $\{0,1,\dots,T\}$ with distribution $P(\hat{T}=i)=\frac{\beta_{i}}{\sum_{j=0}^{T}\beta_{j}}$ Output: $\hat{\pi}_{\hat{T}}$ ## 3 Main Result: Finite-Time Convergence Bounds of Greedy Natural Actor-Critic In this section, we provide a finite-time performance guarantee for Algorithm 1. In this algorithm, we can choose either a constant $\epsilon$-greedy parameter or a time-varying one. The advantage of a constant $\epsilon$ is a faster rate of convergence to a neighborhood of the optimum; the advantage of a time-varying greedy parameter is global convergence without any pre-specified error tolerance. In order to characterize our convergence results, we first make the following assumption. ###### Assumption 3.1. For every deterministic policy $\pi$, the Markov chain induced by the transition probability $P^{\pi}$ is ergodic. For further discussion of this assumption, see Section 4. We now present the main result of the paper. We bound the deviation of the value of the policy returned by Algorithm 1 from the value of the optimal policy. ###### Theorem 3.1. Suppose Assumption 3.1 holds.
Consider Algorithm 1 under the following step-size parameters $\displaystyle\alpha_{t}=\frac{\alpha}{(t+1)^{\nu}},~{}~{}\beta_{t}=\frac{\beta}{(t+1)^{\sigma}},~{}~{}\epsilon_{t}=\frac{\epsilon}{(t+1)^{\xi}},$ (3) with $0\leq\xi<\nu<\sigma<1,~{}\alpha,\epsilon\leq 1$. Then, $\displaystyle{\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq$ $\displaystyle\mathcal{O}(T^{\sigma-1})+\begin{cases}\tilde{\mathcal{O}}(T^{-\sigma})&\text{if}\quad 1>2\sigma\\ \tilde{\mathcal{O}}(T^{\sigma-1})&\text{o.w.}\end{cases}+\begin{cases}\tilde{\mathcal{O}}(\epsilon T^{\sigma-1})&\text{if}\quad\xi+\sigma>1\\ \tilde{\mathcal{O}}(\epsilon T^{-\xi})&\text{o.w.}\end{cases}$ $\displaystyle+\begin{cases}\tilde{\mathcal{O}}(T^{0.5(\nu+\xi-1)}/\epsilon^{0.5})&\text{if}~{}~{}\nu+\xi+1>2\sigma,\\ \tilde{\mathcal{O}}(T^{\sigma-1}/\epsilon^{0.5})&\text{o.w.}\end{cases}+\begin{cases}\tilde{\mathcal{O}}(T^{0.5(\xi-\nu)}/\epsilon^{0.5})&\text{if}~{}~{}2+\xi>\nu+2\sigma,\\ \tilde{\mathcal{O}}(T^{\sigma-1}/\epsilon^{0.5})&\text{o.w.}\end{cases}$ $\displaystyle+\begin{cases}\tilde{\mathcal{O}}(T^{-0.5})&\text{if}~{}~{}1>2\sigma,\\ \tilde{\mathcal{O}}(T^{\sigma-1})&\text{o.w.}\end{cases}+\begin{cases}\tilde{\mathcal{O}}(T^{0.5(\xi+\nu-2\sigma)}/\epsilon^{0.5})&\text{if}~{}~{}2+\xi+\nu>4\sigma,\\ \tilde{\mathcal{O}}(T^{\sigma-1}/\epsilon^{0.5})&\text{o.w.}\end{cases}$ $\displaystyle+\begin{cases}\tilde{\mathcal{O}}(T^{\xi+\nu-\sigma}/\epsilon)&\text{if}~{}~{}2+2\nu+2\xi>4\sigma,\\ \tilde{\mathcal{O}}(T^{\sigma-1}/\epsilon)&\text{o.w.,}\end{cases}$ where $\tilde{\mathcal{O}}(\cdot)$ hides $\log(T)$ factors. The proof of Theorem 3.1 is provided in Section 5. Note that while $V^{*}$ is not random, $V^{{\hat{\pi}}_{\hat{T}}}$ is, since the policy ${\hat{\pi}}_{\hat{T}}$ is a function of all the random variables drawn in Algorithm 1. Furthermore, we state two corollaries of Theorem 3.1 for different choices of $\xi$. ###### Corollary 3.1.1. Suppose Assumption 3.1 holds.
Consider Algorithm 1 under the parameters in (3). Suppose $\xi=0$, $\nu=0.5$, and $\sigma=0.75$. We have: $\displaystyle{\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq\tilde{\mathcal{O}}\left(\frac{1}{\epsilon T^{1/4}}+\epsilon\right).$ (4) Hence, the algorithm requires $\tilde{\mathcal{O}}(1/\delta^{4})$ samples to get $\epsilon+\delta/\epsilon$ close to the global optimum. Furthermore, by taking $\epsilon=\mathcal{O}(\delta)$, we get ${\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq\tilde{\mathcal{O}}(\delta)$ after $T=\tilde{\mathcal{O}}(1/\delta^{8})$ iterations of Algorithm 1. The sample complexity in Corollary 3.1.1 is relatively poor due to the $\frac{1}{\epsilon}$ term in the upper bound in (4), which arises from the constant exploration factor in AC. In the next corollary we show how to achieve a better sample complexity by gradually reducing the exploration factor $\epsilon_{t}$. ###### Corollary 3.1.2. Suppose Assumption 3.1 holds. Consider Algorithm 1 under the parameters in (3). Suppose $\xi=1/6$, $\nu=0.5$, and $\sigma=5/6$. We have: $\displaystyle{\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq\tilde{\mathcal{O}}(1/T^{1/6}).$ In particular, we have ${\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq\delta$ after $T=\tilde{\mathcal{O}}(1/\delta^{6})$ iterations of Algorithm 1. Corollaries 3.1.1 and 3.1.2 are direct applications of Theorem 3.1. In particular, in the case of $\xi=0$, the term $\epsilon T^{-\xi}$ in the bound of Theorem 3.1 becomes a constant proportional to $\epsilon$, and the best rate of convergence is obtained by picking $\nu=0.5$ and $\sigma=0.75$, which gives Corollary 3.1.1. Assuming instead $\xi>0$, the best rate of convergence is obtained with $\xi=1/6,\nu=0.5,\sigma=5/6$. We emphasize that Corollaries 3.1.1 and 3.1.2 characterize the sample complexity for global convergence of Algorithm 1 under the sole assumption of ergodicity of the underlying MDP.
This is indeed a much weaker assumption than those in the related work. ## 4 Need for Exploration and Ergodicity In this section we explain the necessity of the $\epsilon$-greedy step in Algorithm 1 and the ergodicity Assumption 3.1. In iteration $t$ of the natural AC algorithm, the objective of the critic is to estimate the $Q$-function corresponding to the policy $\pi_{t}$. In two-time-scale natural AC, in each iteration $t$ the algorithm estimates the $Q$-function by updating only a single random element $(s=S_{t},a=A_{t})$ of the $Q_{t}$ table. In our analysis, the $\epsilon$-greedy step ensures that in each iteration of the algorithm, each action is sampled with probability at least $\epsilon_{t}/|\mathcal{A}|$. Furthermore, Assumption 3.1 ensures that all the states of the MDP are visited infinitely often. In the following we show why both the $\epsilon$-greedy step and Assumption 3.1 are essential for the convergence of the natural AC algorithm. (I) $\epsilon$-greedy: Following the update of the policy in Algorithm 1, we have $\displaystyle\pi_{t}(a|s)=$ $\displaystyle\frac{\exp(\sum_{l=0}^{t-1}\beta_{l}Q_{l+1}(s,a))}{\sum_{a^{\prime}}\exp(\sum_{l=0}^{t-1}\beta_{l}Q_{l+1}(s,a^{\prime}))}.$ (5) If for some state $s$ an action $\tilde{a}$ satisfies $Q_{t}(s,\tilde{a})\ll Q_{t}(s,a),~{}\forall a\neq\tilde{a}$, then by (5), $\pi_{t}(\tilde{a}|s)$ converges to zero geometrically. Thus, with high probability $(s,\tilde{a})$ will not be explored, and we might converge to a suboptimal policy. Note that the scenario explained here can very likely happen when $\mathcal{R}(s,\tilde{a})$ is negligible with respect to $\mathcal{R}(s,a)$ for the other actions $a$. The following experiment illustrates the necessity of the $\epsilon$-greedy policy update. Consider the MDP depicted in Fig. 1. This MDP has 4 states and 2 actions. All the transition probabilities depicted in the figure are positive, and the rest are zero.
Furthermore, $\mathcal{R}(s_{1},a_{1})=0.1$ and $\mathcal{R}(s_{4},a_{1})=1$, and the rest of the rewards are zero. Suppose $\mathcal{P}(s_{1}|s_{i},a_{1})=0.999,~{}i=1,2,3$, $\mathcal{P}(s_{i+1}|s_{i},a_{1})=0.001,~{}i=1,2,3$, $\mathcal{P}(s_{4}|s_{4},a_{1})=0.999$, $\mathcal{P}(s_{1}|s_{4},a_{1})=0.001$, $\mathcal{P}(s_{1}|s_{i},a_{2})=0.001,~{}i=2,3$, $\mathcal{P}(s_{i+1}|s_{i},a_{2})=0.999,~{}i=2,3$, $\mathcal{P}(s_{2}|s_{1},a_{2})=1$, $\mathcal{P}(s_{4}|s_{4},a_{2})=0.001$, $\mathcal{P}(s_{1}|s_{4},a_{2})=0.999$. In this MDP, the optimal policy in state $s_{1}$ is to play action $a_{2}$. Fig. 2 shows $\pi_{t}(a_{2}|s_{1})$ for 10 trajectories generated by natural AC. The solid lines show the output of the algorithm when $\epsilon$-greedy exploration is employed, and the dashed lines show the output without it. It is clear that the trajectories of the algorithm without $\epsilon$-greedy exploration almost always converge to a suboptimal policy. Figure 1: An MDP with 4 states and 2 actions. Orange and blue correspond to the nonzero transition probabilities of actions $a_{1}$ and $a_{2}$, respectively. Figure 2: $\pi_{t}(a_{2}|s_{1})$ for 10 trajectories generated by the natural AC algorithm on the MDP in Fig. 1. Solid and dashed lines show the result with and without $\epsilon$-greedy exploration, respectively. Since here $\pi^{*}(a_{2}|s_{1})=1$, this shows that the algorithm without $\epsilon$-greedy exploration converges to a suboptimal policy. (II) Ergodicity Assumption: This assumption implies that under all policies, the induced Markov chain over the states of the MDP is irreducible and aperiodic. We discuss these two properties separately in the next two paragraphs. First, for an example of an MDP which does not satisfy the irreducibility assumption, consider any episodic MDP, where there is a terminal state in which the episode ends [42]. This system can be modeled as an infinite-horizon MDP with an absorbing state corresponding to the terminal state.
It is clear that this MDP does not satisfy the irreducibility assumption. Furthermore, since after a finite time we reach the absorbing state with high probability, it is impossible to find the optimal policy using a _single_ trajectory. Second, since we assume finite state and action spaces, the aperiodicity assumption along with irreducibility is equivalent to the existence of a mixing time, which is a common assumption in the literature [65, 64, 21, 62, 37, 36]. We make this precise in Lemma 5.7. ## 5 Proof of Theorem 3.1: Two-Time-Scale Analysis Next we provide the proof of Theorem 3.1. Before presenting the proof, we state the following proposition on the convergence of natural AC, along with its proof. ###### Proposition 5.1. Consider the two-time-scale natural AC Algorithm 1 with $T$ iterations, and the output ${\hat{\pi}}_{\hat{T}}$. Suppose the step size $\beta_{t}$ and the exploration parameter $\epsilon_{t}$ are non-increasing in $t$. We have the following: $\displaystyle{\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq\frac{1}{\sum_{t=0}^{T}\beta_{t}}\left\{\frac{2\beta}{(1-\gamma)^{2}}+\frac{\log|\mathcal{A}|}{(1-\gamma)}+\frac{2}{1-\gamma}\sum_{t=0}^{T}\bigg{[}\beta_{t}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|+\frac{{L_{1}}\sqrt{|\mathcal{A}|}}{(1-\gamma)}\beta_{t}^{2}+\epsilon_{t}\beta_{t}\bigg{]}\right\},$ where $\|\cdot\|$ is the Euclidean norm and ${L_{1}}$ is a constant whose precise value is given in Lemma 5.9 in Section 5.4. ### 5.1 Proof of Proposition 5.1 In this section we provide the proof of Proposition 5.1. A similar result was proved for NPG in [1], when the actor has access to the exact $Q$-function. However, since the actor in Algorithm 1 only has access to $Q_{t}(s,a)$, rather than the exact $Q$-function $Q^{\pi_{t}}(s,a)$, establishing the bound in Proposition 5.1 is more challenging.
Note that $Q_{t}(s,a)$ is obtained by the critic carrying out only one step of TD-learning using a single-sample update at each time step. Consequently, the error bound in Proposition 5.1 involves the term $\frac{1}{T}\sum_{t=0}^{T}\beta_{t}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|$, which accounts for the time-averaged error in the critic’s estimate of the $Q$-function. Proposition 5.1 also differs from the results in [1] in terms of the step size. In particular, while [1] only considers the case of a constant step size, the result in Proposition 5.1 is stated for a general choice of non-increasing step sizes. Furthermore, a similar type of upper bound has been established in [31, 14] for the analysis of off-policy natural AC. However, in those works the $\epsilon_{t}$ term is absent. When the actor has access to the exact $Q$-functions, it was observed in [1] that using a constant step size results in an $\mathcal{O}(1/T)$ rate of convergence. This result can be reproduced from Proposition 5.1 by eliminating the $\frac{1}{T}\sum_{t=0}^{T}\beta_{t}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|$ term in the upper bound, taking a constant $\beta_{t}$, and choosing $\epsilon_{t}=0$. However, due to the presence of the $\epsilon_{t}$ term and the $\frac{1}{T}\sum_{t=0}^{T}\beta_{t}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|$ term, the optimal convergence rate can only be obtained with carefully diminishing step sizes, as shown in Theorem 3.1. Next we provide the proof of Proposition 5.1. Proof of Proposition 5.1: We will use a Lyapunov-drift-based argument to prove the proposition, using the KL-divergence [16] as a Lyapunov or potential function. This is a natural choice because it is known to be the right potential function for mirror descent [3] in optimization, and it is known [1, 23] that natural gradient ascent is equivalent to mirror descent. Let $M(\pi)={\mathbb{E}}_{s\sim d^{*}}[D_{KL}(\pi^{*}(\cdot|s)||\pi(\cdot|s))]$.
Then, $\displaystyle M(\pi_{t+1})-M(\pi_{t})=$ $\displaystyle{\mathbb{E}}_{s\sim d^{*}}\left[\sum_{a}\pi^{*}(a|s)\log\frac{\pi_{t}(a|s)}{\pi_{t+1}(a|s)}\right]$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}$ $\displaystyle\sum_{s,a}d^{*}(s)\pi^{*}(a|s)(\log Z_{t}(s)-\beta_{t}Q_{t+1}(s,a))$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}$ $\displaystyle-\beta_{t}\sum_{s,a}d^{*}(s)\pi^{*}(a|s)(Q^{\hat{\pi}_{t}}(s,a)-V^{\hat{\pi}_{t}}(s))$ $\displaystyle+\beta_{t}\sum_{s,a}d^{*}(s)\pi^{*}(a|s)\left(Q^{\hat{\pi}_{t}}(s,a)-Q_{t+1}(s,a)\right)$ $\displaystyle+\sum_{s,a}d^{*}(s)\pi^{*}(a|s)(\log Z_{t}(s)-\beta_{t}V^{\hat{\pi}_{t}}(s))$ $\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}$ $\displaystyle(1-\gamma)\beta_{t}\left[V^{\hat{\pi}_{t}}-V^{*}\right]+\beta_{t}\sum_{s,a}d^{*}(s)\pi^{*}(a|s)\left[Q^{\hat{\pi}_{t}}(s,a)-Q_{t+1}(s,a)\right]$ $\displaystyle+\sum_{s}d^{*}(s)\left[\log Z_{t}(s)-\beta_{t}V^{\hat{\pi}_{t}}(s)\right],$ where $(a)$ follows from the update rule of the policy $\pi_{t}$ in Algorithm 1 with $Z_{t}(s)=\sum_{a}\pi_{t}(a|s)\exp(\beta_{t}Q_{t+1}(s,a))$, $(b)$ follows by adding and subtracting terms, and $(c)$ follows by the Performance Difference Lemma [27]. Rearranging the terms, we get: $\displaystyle V^{*}-V^{\hat{\pi}_{t}}=$ $\displaystyle\frac{1}{(1-\gamma)\beta_{t}}\left[M(\pi_{t})-M(\pi_{t+1})\right]$ $\displaystyle+\frac{1}{1-\gamma}\sum_{s,a}d^{*}(s)\pi^{*}(a|s)\left[Q^{\hat{\pi}_{t}}(s,a)-Q_{t+1}(s,a)\right]$ (6) $\displaystyle+\frac{1}{1-\gamma}\sum_{s}d^{*}(s)\left[\frac{1}{\beta_{t}}\log Z_{t}(s)-V^{\hat{\pi}_{t}}(s)\right].$ (7) We bound the terms in (6) and (7) separately. Using the Cauchy-Schwarz inequality in (6), we have: $\displaystyle\frac{1}{1-\gamma}\sum_{s,a}d^{*}(s)\pi^{*}(a|s)\left[Q^{\hat{\pi}_{t}}(s,a)-Q_{t+1}(s,a)\right]\leq$ $\displaystyle\frac{1}{1-\gamma}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|.$ In order to bound the term (7), we use the following lemma. ###### Lemma 5.1.
Consider Algorithm 1 with $Z_{t}(s)=\sum_{a}\pi_{t}(a|s)\exp(\beta_{t}Q_{t+1}(s,a))$. For any $t\geq 0$ we have the following inequality: $\displaystyle\sum_{s}d^{*}(s)\left[\frac{1}{\beta_{t}}\log Z_{t}(s)-V^{\hat{\pi}_{t}}(s)\right]\leq V^{\pi_{t+1}}(d^{*})-V^{\pi_{t}}(d^{*})+\|Q^{{\hat{\pi}}_{t}}-Q_{t+1}\|+\frac{2{L_{1}}\sqrt{|\mathcal{A}|}}{(1-\gamma)^{2}}\beta_{t}+\frac{\epsilon_{t-1}}{1-\gamma},$ where $\epsilon_{-1}:=0$. Employing Lemma 5.1 in (7), multiplying by $\beta_{t}$, and summing from $0$ to $T$, we get: $\displaystyle\sum_{t=0}^{T}\beta_{t}(V^{*}-V^{{\hat{\pi}}_{t}})\leq$ $\displaystyle\sum_{t=0}^{T}\frac{1}{(1-\gamma)}\left[M(\pi_{t})-M(\pi_{t+1})\right]$ $\displaystyle+\frac{2\beta_{t}}{1-\gamma}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|+\frac{\beta_{t}}{1-\gamma}\left[V^{\pi_{t+1}}(d^{*})-V^{\pi_{t}}(d^{*})\right]$ $\displaystyle+\frac{2{L_{1}}\sqrt{|\mathcal{A}|}}{(1-\gamma)^{2}}\beta_{t}^{2}+\frac{\epsilon_{t-1}\beta_{t}}{1-\gamma}$ $\displaystyle=$ $\displaystyle\frac{1}{(1-\gamma)}\sum_{t=0}^{T}\bigg\{M(\pi_{t})-M(\pi_{t+1})+\beta_{t}\left[V^{\pi_{t+1}}(d^{*})-V^{\pi_{t}}(d^{*})\right]\bigg\}$ (8) $\displaystyle+\sum_{t=0}^{T}\bigg\{\frac{2\beta_{t}}{1-\gamma}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|+\frac{2{L_{1}}\sqrt{|\mathcal{A}|}}{(1-\gamma)^{2}}\beta_{t}^{2}+\frac{\epsilon_{t-1}\beta_{t}}{1-\gamma}\bigg\}.$ (9) We evaluate (8) and (9) separately.
First, we have: $\displaystyle\eqref{eq:I14p}=$ $\displaystyle\frac{1}{(1-\gamma)}\sum_{t=0}^{T}\bigg{[}\beta_{t}\left[V^{\pi_{t+1}}(d^{*})-V^{\pi_{t}}(d^{*})\right]\bigg{]}+\frac{1}{(1-\gamma)}\left[M(\pi_{0})-M(\pi_{T+1})\right]$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle\frac{1}{(1-\gamma)}\sum_{t=0}^{T}\bigg{[}\beta_{t}\left[V^{\pi_{t+1}}(d^{*})-V^{\pi_{t}}(d^{*})\right]\bigg{]}+\frac{\log|\mathcal{A}|}{(1-\gamma)}$ $\displaystyle=$ $\displaystyle\frac{1}{(1-\gamma)}\sum_{t=0}^{T}\left[(\beta_{t}-\beta_{t+1})V^{\pi_{t+1}}(d^{*})\right]-\frac{\beta_{0}}{(1-\gamma)}V^{\pi_{0}}(d^{*})+\frac{\beta_{T+1}}{(1-\gamma)}V^{\pi_{T+1}}(d^{*})+\frac{\log|\mathcal{A}|}{(1-\gamma)}$ $\displaystyle\leq$ $\displaystyle\frac{1}{(1-\gamma)}\sum_{t=0}^{T}\left[(\beta_{t}-\beta_{t+1})V^{\pi_{t+1}}(d^{*})\right]+\frac{\beta_{T+1}}{(1-\gamma)}V^{\pi_{T+1}}(d^{*})+\frac{\log|\mathcal{A}|}{(1-\gamma)}$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle\frac{1}{(1-\gamma)^{2}}\sum_{t=0}^{T}(\beta_{t}-\beta_{t+1})+\frac{\beta_{T+1}}{(1-\gamma)^{2}}+\frac{\log|\mathcal{A}|}{(1-\gamma)}$ $\displaystyle=$ $\displaystyle\frac{1}{(1-\gamma)^{2}}(\beta_{0}-\beta_{T+1})+\frac{\beta_{T+1}}{(1-\gamma)^{2}}+\frac{\log|\mathcal{A}|}{(1-\gamma)}$ $\displaystyle\leq$ $\displaystyle\frac{2\beta}{(1-\gamma)^{2}}+\frac{\log|\mathcal{A}|}{(1-\gamma)},$ (10) where $(a)$ is due to $0\leq D_{KL}(P(X)||Unif(X))\leq\log|\mathcal{X}|$, where $P(X)$ is any distribution over $\mathcal{X}$, $Unif(X)$ is the uniform distribution over $\mathcal{X}$, and $|\mathcal{X}|$ is the cardinality of $\mathcal{X}$ [16], and $(b)$ is due to Lemma 5.5 and $\beta_{t}$ being non-increasing in $t$.
Furthermore, we have $\displaystyle\eqref{eq:I1I4p}=$ $\displaystyle\frac{2}{1-\gamma}\sum_{t=0}^{T}\Bigg{[}\beta_{t}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|+\frac{{L_{1}}\sqrt{|\mathcal{A}|}}{(1-\gamma)}\beta_{t}^{2}+0.5\epsilon_{t-1}\beta_{t}\Bigg{]}$ $\displaystyle\leq$ $\displaystyle\frac{2}{1-\gamma}\sum_{t=0}^{T}\Bigg{[}\beta_{t}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|+\frac{{L_{1}}\sqrt{|\mathcal{A}|}}{(1-\gamma)}\beta_{t}^{2}+\epsilon_{t}\beta_{t}\Bigg{]}.$ (11) Dividing both sides of (10) and (11) by $\sum_{t=0}^{T}\beta_{t}$, and noting that $\frac{\sum_{t=0}^{T}\beta_{t}{\mathbb{E}}[V^{*}-V^{\hat{\pi}_{t}}]}{\sum_{t=0}^{T}\beta_{t}}={\mathbb{E}}[V^{*}-V^{\hat{\pi}_{\hat{T}}}]$, we get the proposition. According to Proposition 5.1, to establish a bound for the performance metric ${\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]$, we need to characterize a bound on ${\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|$ for all $0\leq t\leq T$. Next we provide the proof of Theorem 3.1, which essentially corresponds to the characterization of this bound. ### 5.2 Proof of Theorem 3.1 First, we introduce some notation and lemmas which will be used within the proof.
$\displaystyle O_{t}$ $\displaystyle=(S_{t},A_{t},S_{t+1},A_{t+1})$ $\displaystyle r(O_{t})$ $\displaystyle=[0;0;\dots;0;\mathcal{R}(S_{t},A_{t});0;\dots;0]\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ $\displaystyle A(O)$ $\displaystyle\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|\times|\mathcal{S}||\mathcal{A}|}$ $\displaystyle A(O)_{i,j}$ $\displaystyle\equiv A(s,a,s^{\prime},a^{\prime})_{i,j}=\begin{cases}\gamma-1&i=j=(s,a)=(s^{\prime},a^{\prime})\\\ -1&(s,a)\neq(s^{\prime},a^{\prime}),i=j=(s,a)\\\ \gamma&(s,a)\neq(s^{\prime},a^{\prime}),i=(s,a),j=(s^{\prime},a^{\prime})\\\ 0&\text{otherwise}\end{cases}$ $\displaystyle\theta_{t}$ $\displaystyle=Q_{t}-Q^{{\hat{\pi}}_{t-1}}$ (12) $\displaystyle\bar{A}^{\pi}=$ $\displaystyle{\mathbb{E}}_{s\sim\mu^{\pi}(\cdot),a\sim\pi(\cdot|s),s^{\prime}\sim\mathcal{P}(\cdot|s,a),a^{\prime}\sim\pi(\cdot|s^{\prime})}[A(s,a,s^{\prime},a^{\prime})]$ (13) $\displaystyle\Gamma(\pi,\theta,O)$ $\displaystyle=\theta^{\top}(r(O)+A(O)Q^{\pi})+\theta^{\top}(A(O)-\bar{A}^{\pi})\theta.$ Note that with the above notation, the update of the $Q$-function in Algorithm 1 in the vector form can be written as: $Q_{t+1}=Q_{t}+\alpha_{t}(r(O_{t})+A(O_{t})Q_{t}),$ which by adding and subtracting terms, can be equivalently written as: $\theta_{t+1}+(Q^{{\hat{\pi}}_{t}}-Q^{{\hat{\pi}}_{t-1}})=\theta_{t}+\alpha_{t}(r(O_{t})+A(O_{t})Q^{{\hat{\pi}}_{t-1}}+A(O_{t})\theta_{t}).$ Lemmas 5.2 and 5.3 characterize an upper bound on the one step drift of $Q^{{\hat{\pi}}_{t}}$ and $\theta_{t}$. ###### Lemma 5.2. One step drift of the $Q$-function with respect to the sampling policy ${\hat{\pi}}_{t}$ satisfies the following: $\|Q^{{\hat{\pi}}_{t+1}}-Q^{{\hat{\pi}}_{t}}\|\leq{L_{2}}({L_{3}}\frac{\epsilon_{t-1}}{t}+{L_{1}}\beta_{t}),$ where the constants ${L_{1}}$, ${L_{2}}$, and ${L_{3}}$ are defined in Lemmas 5.8 and 5.9. ###### Lemma 5.3. 
The one-step drift of the vector $\theta_{t}$ can be bounded as $\|\theta_{t+1}-\theta_{t}\|^{2}\leq 2\alpha_{t}^{2}\Delta_{Q}^{2}+4{L_{2}}^{2}{L_{3}}^{2}\frac{\epsilon_{t-2}^{2}}{(t-1)^{2}}+4{L_{2}}^{2}{L_{1}}^{2}\beta_{t-1}^{2},$ where $\Delta_{Q}$ is defined in Lemma A.6 in the Appendix. The following lemma is directly used to create a negative drift, which is essential for the convergence proof of Theorem 3.1. ###### Lemma 5.4. Consider the policy ${\hat{\pi}}_{t-1}$ in the $(t-1)$-th iteration of Algorithm 1, and the vector $\theta_{t}$ and the matrix $\bar{A}^{{\hat{\pi}}_{t-1}}$ defined in (12) and (13), respectively. We have: $\theta^{\top}_{t}\bar{A}^{{\hat{\pi}}_{t-1}}\theta_{t}\leq-(1-\gamma)\frac{\epsilon_{t-2}}{|\mathcal{A}|}\mu\|\theta_{t}\|^{2},$ where $\mu>0$ is some absolute constant. Later, in Lemma 5.10, we explain the intuition behind the constant $\mu$. The following lemma provides some absolute bounds on the value function and the $Q$-function. ###### Lemma 5.5. Let $Q_{\max}=\frac{1}{1-\gamma}$. Then we have 1. $0\leq V^{\pi}\leq Q_{\max}$ 2. $0\leq Q^{\pi}(s,a)\leq Q_{\max}$ 3. $\|Q^{\pi}\|\leq\sqrt{|\mathcal{S}||\mathcal{A}|}Q_{\max}$ 4. $\|Q_{t}\|\leq\sqrt{|\mathcal{S}||\mathcal{A}|}Q_{\max}$. A major part of the proof of Theorem 3.1 is to establish a bound on ${\mathbb{E}}[\Gamma({\hat{\pi}}_{k-1},\theta_{k},O_{k})]$. In the following, we provide such a bound in Lemma 5.6. The proof of this lemma is provided in Section 5.3. ###### Lemma 5.6. 
For any $\tau<t$, we have: $\displaystyle{\mathbb{E}}\big{[}\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})\big{]}\leq$ $\displaystyle C_{b}m\rho^{\tau}+K_{2}\Delta_{Q}\tau\alpha_{t-\tau}+(C_{u}{L_{3}}+K_{1}{L_{3}}+K_{2}{L_{2}}{L_{3}})\frac{(\tau+1)^{2}\epsilon_{t-\tau-2}}{t-\tau-1}$ $\displaystyle+(C_{u}{L_{1}}+K_{1}{L_{1}}+K_{2}{L_{1}}{L_{2}})(\tau+1)^{2}\beta_{t-\tau-1}.$ We further define $\displaystyle\tau_{t}:=\min\left\\{r>0|M_{\rho}\rho^{r}\leq\beta_{t},r~{}\text{integral}\right\\},$ (14) where $M_{\rho}=(-\sigma/\ln(\rho))^{\sigma}/(\rho^{1+\sigma/\ln(\rho)})$. It is easy to see that $1\leq\tau_{t}\leq t$ for all $t$, and $\tau_{t}=\mathcal{O}(\log(t))=\tilde{\mathcal{O}}(1)$. In order to establish the convergence result in Theorem 3.1, we use the bound in Proposition 5.1. By the definition of the step sizes, it is clear that $\sum_{t=0}^{T}\beta_{t}=\Theta(T^{1-\sigma})$. Hence, by Proposition 5.1 and assumptions on the step sizes, it is straightforward to show that $\displaystyle{\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq$ $\displaystyle\mathcal{O}\left(\frac{1}{T^{1-\sigma}}\right)+\begin{cases}\tilde{\mathcal{O}}\left(\frac{1}{T^{\sigma}}\right)&\text{if}\quad 1>2\sigma\\\ \tilde{\mathcal{O}}(\frac{1}{T^{1-\sigma}})&\text{o.w}\end{cases}+\begin{cases}\tilde{\mathcal{O}}\left(\frac{\epsilon}{T^{1-\sigma}}\right)&\text{if}\quad\xi+\sigma>1\\\ \tilde{\mathcal{O}}(\frac{\epsilon}{T^{\xi}})&\text{o.w}\end{cases}$ $\displaystyle+\frac{1}{T^{1-\sigma}}\mathcal{O}\left\\{\sum_{t=0}^{T}\beta_{t}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|\right\\}.$ (15) Next, we aim at bounding the term $\sum_{t=0}^{T}\beta_{t}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|$. 
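The claim $\sum_{t=0}^{T}\beta_{t}=\Theta(T^{1-\sigma})$ for a step size $\beta_{t}\sim t^{-\sigma}$ with $\sigma\in(0,1)$ can be sanity-checked numerically; the sketch below uses the illustrative choice $\sigma=3/4$ (not a value prescribed by the paper) and compares the partial sum against the integral approximation $T^{1-\sigma}/(1-\sigma)$:

```python
# Numeric check that sum_{t=1}^T t^{-sigma} grows like T^{1-sigma}/(1-sigma)
# for sigma in (0, 1); sigma = 0.75 is an illustrative choice only.
sigma = 0.75
ratios = []
for T in (10**3, 10**4, 10**5):
    partial_sum = sum(t ** (-sigma) for t in range(1, T + 1))
    ratios.append(partial_sum / T ** (1 - sigma))

# The ratio approaches 1/(1 - sigma) = 4 from below as T grows.
assert ratios[0] < ratios[1] < ratios[2] < 4.0
assert ratios[2] > 3.5
```

The constant-factor gap for moderate $T$ is absorbed by the $\Theta(\cdot)$ notation; only the growth order matters for the rate in (15).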
We have: $\displaystyle\sum_{t=0}^{T}\beta_{t}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|=$ $\displaystyle\sum_{t=0}^{T}\beta_{t}^{1/2\sigma}\beta_{t}^{1-1/2\sigma}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|$ $\displaystyle\leq$ $\displaystyle\sqrt{\sum_{t=0}^{T}\beta_{t}^{1/\sigma}}\times\sqrt{\sum_{t=0}^{T}\beta_{t}^{\frac{2\sigma-1}{\sigma}}{\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|^{2}},$ (16) where the inequality is due to the Cauchy–Schwarz inequality. Furthermore, we have $\sum_{t=0}^{T}\beta_{t}^{1/\sigma}=\mathcal{O}(\log(T))=\tilde{\mathcal{O}}(1)$. Hence, we are only left to bound the last term in (16), which we will do in the rest of the proof. Using $\|\theta_{t}\|^{2}$ as the Lyapunov function, we have: $\displaystyle\|\theta_{t+1}\|^{2}-\|\theta_{t}\|^{2}=$ $\displaystyle 2\theta_{t}^{\top}(\theta_{t+1}-\theta_{t}-\alpha_{t}\bar{A}^{{\hat{\pi}}_{t-1}}\theta_{t})+\|\theta_{t+1}-\theta_{t}\|^{2}+2\alpha_{t}\theta_{t}^{\top}\bar{A}^{{\hat{\pi}}_{t-1}}\theta_{t}$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}$ $\displaystyle 2\alpha_{t}\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})+2\theta_{t}^{\top}(Q^{{\hat{\pi}}_{t-1}}-Q^{{\hat{\pi}}_{t}})+\|\theta_{t+1}-\theta_{t}\|^{2}+2\alpha_{t}\theta_{t}^{\top}\bar{A}^{{\hat{\pi}}_{t-1}}\theta_{t}$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle 2\alpha_{t}\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})+2\|\theta_{t}\|\cdot\overbrace{\|Q^{{\hat{\pi}}_{t-1}}-Q^{{\hat{\pi}}_{t}}\|}^{T_{1}}+\overbrace{\|\theta_{t+1}-\theta_{t}\|^{2}}^{T_{2}}+2\alpha_{t}\overbrace{\theta_{t}^{\top}\bar{A}^{{\hat{\pi}}_{t-1}}\theta_{t}}^{T_{3}},$ (17) where $(a)$ is by the definition of $\Gamma$, and $(b)$ is due to the Cauchy–Schwarz inequality. We bound each of the terms $T_{1}$, $T_{2}$, $T_{3}$ using Lemmas 5.2, 5.3, and 5.4, respectively. 
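The splitting $\beta_{t}=\beta_{t}^{1/2\sigma}\cdot\beta_{t}^{1-1/2\sigma}$ together with the Cauchy–Schwarz step in (16) can be checked on synthetic data; the nonnegative values below stand in for ${\mathbb{E}}\|Q^{\hat{\pi}_{t}}-Q_{t+1}\|$ and are purely illustrative:

```python
import random

random.seed(0)
sigma = 0.75
T = 200
betas = [(t + 1) ** (-sigma) for t in range(T)]
errors = [random.random() for _ in range(T)]  # stand-ins for E||Q^{pi_t} - Q_{t+1}||

# Left-hand side of (16): sum_t beta_t * err_t.
lhs = sum(b * e for b, e in zip(betas, errors))

# Right-hand side: sqrt(sum beta_t^{1/sigma}) * sqrt(sum beta_t^{(2sigma-1)/sigma} err_t^2),
# using beta^{1/2sigma} squared and beta^{1-1/2sigma} squared respectively.
rhs = (sum(b ** (1 / sigma) for b in betas) ** 0.5
       * sum(b ** ((2 * sigma - 1) / sigma) * e * e
             for b, e in zip(betas, errors)) ** 0.5)

# Cauchy-Schwarz guarantees this for any nonnegative data.
assert lhs <= rhs + 1e-12
```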
We have $\displaystyle\|\theta_{t+1}\|^{2}-\|\theta_{t}\|^{2}\leq$ $\displaystyle 2\alpha_{t}\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})+2{L_{2}}\left({L_{3}}\frac{\epsilon_{t-2}}{t-1}+{L_{1}}\beta_{t-1}\right)\|\theta_{t}\|+2\alpha_{t}^{2}\Delta_{Q}^{2}+4{L_{2}}^{2}{L_{3}}^{2}\frac{\epsilon_{t-2}^{2}}{(t-1)^{2}}$ $\displaystyle+4{L_{2}}^{2}{L_{1}}^{2}\beta_{t-1}^{2}-\frac{2(1-\gamma)\mu}{|\mathcal{A}|}\alpha_{t}\epsilon_{t-2}\|\theta_{t}\|^{2}.$ (18) Define $\lambda_{t}=\beta_{t}^{\frac{2\sigma-1}{\sigma}}$. Multiplying both sides of (18) by $\lambda_{t}$ and denoting $y_{t}=\lambda_{t}\|\theta_{t}\|^{2}$, we have $y_{t}\leq e_{t}(\|\theta_{t}\|^{2}-\|\theta_{t+1}\|^{2})+u_{t}+h_{t}\sqrt{y_{t}}$, where $e_{t}=\frac{\lambda_{t}|\mathcal{A}|}{2(1-\gamma)\mu\alpha_{t}\epsilon_{t-2}}$ and $u_{t}=\frac{|\mathcal{A}|\lambda_{t}}{2(1-\gamma)\mu\alpha_{t}\epsilon_{t-2}}(2\alpha_{t}\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})+2\alpha_{t}^{2}\Delta_{Q}^{2}+4{L_{2}}^{2}{L_{3}}^{2}\frac{\epsilon_{t-2}^{2}}{(t-1)^{2}}+4{L_{2}}^{2}{L_{1}}^{2}\beta_{t-1}^{2})$, and $h_{t}=\frac{|\mathcal{A}|\sqrt{\lambda_{t}}}{2(1-\gamma)\mu\alpha_{t}\epsilon_{t-2}}2{L_{2}}\left({L_{3}}\frac{\epsilon_{t-2}}{t-1}+{L_{1}}\beta_{t-1}\right)$. Summing from $\tau_{t}+2$ to $t$, we have $\displaystyle\sum_{k=\tau_{t}+2}^{t}y_{k}\leq\underbrace{\sum_{k=\tau_{t}+2}^{t}e_{k}(\|\theta_{k}\|^{2}-\|\theta_{k+1}\|^{2})}_{T_{1}}+\underbrace{\sum_{k=\tau_{t}+2}^{t}u_{k}}_{T_{2}}+\underbrace{\sum_{k=\tau_{t}+2}^{t}h_{k}\sqrt{y_{k}}}_{T_{3}}.$ (19) We bound each of the terms $T_{1}$, $T_{2}$, and $T_{3}$ separately. First we have $\displaystyle T_{1}=$ $\displaystyle e_{\tau_{t}+1}\|\theta_{\tau_{t}+2}\|^{2}-e_{t}\|\theta_{t+1}\|^{2}+\sum_{k=\tau_{t}+2}^{t}(e_{k}-e_{k-1})\|\theta_{k}\|^{2}.$ We have $e_{k}\sim\lambda_{k}/(\alpha_{k}\epsilon_{k})\sim\beta_{k}^{\frac{2\sigma-1}{\sigma}}/\alpha_{k}\epsilon_{k}\sim k^{\nu+\xi+1-2\sigma}/\epsilon$. For the case $\nu+\xi+1-2\sigma>0$, $e_{k}$ is increasing. 
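The rewriting of $T_{1}$ above is summation by parts; with generic sequences standing in for $e_{k}$ and $\|\theta_{k}\|^{2}$ (illustrative values only, not from the paper), the identity can be verified exactly:

```python
import random

random.seed(1)
a, t = 5, 30                                 # a plays the role of tau_t + 2
e = [random.random() for _ in range(t + 1)]  # e_0, ..., e_t
x = [random.random() for _ in range(t + 2)]  # x_k stands in for ||theta_k||^2

# sum_{k=a}^{t} e_k (x_k - x_{k+1})
lhs = sum(e[k] * (x[k] - x[k + 1]) for k in range(a, t + 1))
# e_{a-1} x_a - e_t x_{t+1} + sum_{k=a}^{t} (e_k - e_{k-1}) x_k
rhs = (e[a - 1] * x[a] - e[t] * x[t + 1]
       + sum((e[k] - e[k - 1]) * x[k] for k in range(a, t + 1)))

assert abs(lhs - rhs) < 1e-12  # the two forms agree for any sequences
```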
Hence, we have $\displaystyle T_{1}$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}4|\mathcal{S}||\mathcal{A}|Q_{\max}^{2}\left[e_{\tau_{t}+1}+\sum_{k=\tau_{t}+2}^{t}(e_{k}-e_{k-1})\right]$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}4|\mathcal{S}||\mathcal{A}|Q_{\max}^{2}\left[e_{\tau_{t}+1}+e_{t}\right]\leq\tilde{\mathcal{O}}(t^{\nu+\xi+1-2\sigma}/\epsilon),$ where $(a)$ is due to Lemma 5.5, and $(b)$ is due to $e_{k}\geq 0$ for all $k$. Furthermore, if $\nu+\xi+1-2\sigma<0$, we have $e_{k}$ decreasing, and hence $T_{1}\leq\tilde{\mathcal{O}}(e_{\tau_{t}+1})=\tilde{\mathcal{O}}(1/\epsilon)$. It is also easy to show that for $\nu+\xi+1-2\sigma=0$, we have $T_{1}\leq\tilde{\mathcal{O}}(1/\epsilon)$. Hence, in total we have $\displaystyle T_{1}\leq\begin{cases}\tilde{\mathcal{O}}(t^{\nu+\xi+1-2\sigma}/\epsilon)&\text{if}~{}~{}\nu+\xi+1>2\sigma,\\\ \tilde{\mathcal{O}}(1/\epsilon)&\text{o.w.}\end{cases}$ (20) Furthermore, for the term $T_{2}$ we have $\displaystyle{\mathbb{E}}T_{2}=$ $\displaystyle\mathcal{O}(\sum_{k=\tau_{t}+2}^{t}\frac{\lambda_{k}}{\epsilon_{k}}{\mathbb{E}}\Gamma({\hat{\pi}}_{k-1},\theta_{k},O_{k})+\frac{\lambda_{k}\alpha_{k}}{\epsilon_{k}}+\frac{\lambda_{k}\epsilon_{k}}{k^{2}\alpha_{k}}+\frac{\lambda_{k}\beta_{k}^{2}}{\alpha_{k}\epsilon_{k}})$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle\tilde{\mathcal{O}}(\sum_{k=\tau_{t}+2}^{t}\frac{\lambda_{k}}{\epsilon_{k}}(\beta_{k}+\alpha_{k}+\frac{\epsilon_{k}}{k})+\frac{\lambda_{k}\alpha_{k}}{\epsilon_{k}}+\frac{\lambda_{k}\epsilon_{k}}{k^{2}\alpha_{k}}+\frac{\lambda_{k}\beta_{k}^{2}}{\alpha_{k}\epsilon_{k}})$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle\tilde{\mathcal{O}}(\sum_{k=\tau_{t}+2}^{t}k^{1-2\sigma}(k^{\xi-\nu}/\epsilon+k^{-1}+k^{\xi+\nu-2\sigma}/\epsilon))$ $\displaystyle\leq$ $\displaystyle\begin{cases}\tilde{\mathcal{O}}(t^{2+\xi-2\sigma-\nu}/\epsilon)&\text{if}~{}~{}2+\xi>\nu+2\sigma,\\\ 
\tilde{\mathcal{O}}(1/\epsilon)&\text{o.w.}\end{cases}+\begin{cases}\tilde{\mathcal{O}}(t^{1-2\sigma})&\text{if}~{}~{}1>2\sigma,\\\ \tilde{\mathcal{O}}(1)&\text{o.w.}\end{cases}$ $\displaystyle+\begin{cases}\tilde{\mathcal{O}}(t^{2+\xi+\nu-4\sigma}/\epsilon)&\text{if}~{}~{}2+\xi+\nu>4\sigma,\\\ \tilde{\mathcal{O}}(1/\epsilon)&\text{o.w.,}\end{cases}$ (21) where in $(a)$ we use Lemma 5.6 with $\tau=\tau_{k}$, and in $(b)$ we use the assumptions on the step sizes. Finally, for the term $T_{3}$ we have $\displaystyle{\mathbb{E}}T_{3}\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle\sqrt{\sum_{k=\tau_{t}+2}^{t}h_{k}^{2}}\times{\mathbb{E}}\left[\sqrt{\sum_{k=\tau_{t}+2}^{t}y_{k}}\right]$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle\sqrt{\sum_{k=\tau_{t}+2}^{t}h_{k}^{2}}\times\sqrt{\sum_{k=\tau_{t}+2}^{t}{\mathbb{E}}y_{k}},$ where $(a)$ is by the Cauchy–Schwarz inequality and $(b)$ is by the concavity of the square root and Jensen's inequality. Denoting $G(t)=\sum_{k=\tau_{t}+2}^{t}h_{k}^{2}$, we have $\displaystyle G(t)\leq$ $\displaystyle\tilde{\mathcal{O}}\left(\sum_{k=\tau_{t}+2}^{t}\frac{\lambda_{k}}{\alpha_{k}^{2}k^{2}}+\frac{\lambda_{k}\beta_{k}^{2}}{\alpha_{k}^{2}\epsilon_{k}^{2}}\right)$ $\displaystyle\leq$ $\displaystyle\tilde{\mathcal{O}}\left(\sum_{k=\tau_{t}+2}^{t}k^{-1}+k^{1-4\sigma+2\nu+2\xi}/\epsilon^{2}\right)$ $\displaystyle=$ $\displaystyle\tilde{\mathcal{O}}(1)+\begin{cases}\tilde{\mathcal{O}}(t^{2-4\sigma+2\nu+2\xi}/\epsilon^{2})&\text{if}~{}~{}2+2\nu+2\xi>4\sigma,\\\ \tilde{\mathcal{O}}(1/\epsilon^{2})&\text{o.w.}\end{cases}$ (22) Denote $H(t)=\sum_{k=\tau_{t}+2}^{t}{\mathbb{E}}y_{k}$. 
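Steps $(a)$ and $(b)$ in the bound on ${\mathbb{E}}T_{3}$ combine a per-sample Cauchy–Schwarz inequality with Jensen's inequality for the concave square root; both hold for any nonnegative data, as the illustrative check below confirms (all values are synthetic stand-ins, not quantities from the paper):

```python
import random

random.seed(2)
n, trials = 10, 500
h = [random.random() for _ in range(n)]
# ys[i][k] is a sample of y_k >= 0 in trial i
ys = [[random.random() for _ in range(n)] for _ in range(trials)]

# E[ sum_k h_k sqrt(y_k) ] estimated over the trials
lhs = sum(sum(h[k] * y[k] ** 0.5 for k in range(n)) for y in ys) / trials
# sqrt(sum_k h_k^2) * sqrt(sum_k E[y_k])
mean_y = [sum(y[k] for y in ys) / trials for k in range(n)]
rhs = sum(v * v for v in h) ** 0.5 * sum(mean_y) ** 0.5

# Cauchy-Schwarz per trial, then Jensen over trials, so lhs <= rhs.
assert lhs <= rhs + 1e-9
```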
Taking expectation on both sides of (19), we have $\displaystyle H(t)$ $\displaystyle\leq{\mathbb{E}}T_{1}+{\mathbb{E}}T_{2}+\sqrt{G(t)}\cdot\sqrt{H(t)}$ $\displaystyle\implies(\sqrt{H(t)}-\frac{1}{2}\sqrt{G(t)})^{2}$ $\displaystyle\leq{\mathbb{E}}T_{1}+{\mathbb{E}}T_{2}+\frac{1}{4}G(t)$ $\displaystyle\implies\sqrt{H(t)}-\frac{1}{2}\sqrt{G(t)}$ $\displaystyle\leq\sqrt{{\mathbb{E}}T_{1}+{\mathbb{E}}T_{2}+\frac{1}{4}G(t)}$ $\displaystyle\implies H(t)$ $\displaystyle\leq 2{\mathbb{E}}T_{1}+2{\mathbb{E}}T_{2}+G(t)$ (23) Combining (20), (21), (22), and (23), we have $\displaystyle H(t)\leq$ $\displaystyle\begin{cases}\tilde{\mathcal{O}}(t^{\nu+\xi+1-2\sigma}/\epsilon)&\text{if}~{}~{}\nu+\xi+1>2\sigma,\\\ \tilde{\mathcal{O}}(1/\epsilon)&\text{o.w.}\end{cases}+\begin{cases}\tilde{\mathcal{O}}(t^{2+\xi-2\sigma-\nu}/\epsilon)&\text{if}~{}~{}2+\xi>\nu+2\sigma,\\\ \tilde{\mathcal{O}}(1/\epsilon)&\text{o.w.}\end{cases}$ $\displaystyle+\begin{cases}\tilde{\mathcal{O}}(t^{1-2\sigma})&\text{if}~{}~{}1>2\sigma,\\\ \tilde{\mathcal{O}}(1)&\text{o.w.}\end{cases}+\begin{cases}\tilde{\mathcal{O}}(t^{2+\xi+\nu-4\sigma}/\epsilon)&\text{if}~{}~{}2+\xi+\nu>4\sigma,\\\ \tilde{\mathcal{O}}(1/\epsilon)&\text{o.w.,}\end{cases}$ $\displaystyle+\begin{cases}\tilde{\mathcal{O}}(t^{2-4\sigma+2\nu+2\xi}/\epsilon^{2})&\text{if}~{}~{}2+2\nu+2\xi>4\sigma,\\\ \tilde{\mathcal{O}}(1/\epsilon^{2})&\text{o.w.}\end{cases}+\tilde{\mathcal{O}}(1).$ Combining the above bound, (15) and (16), we get the result. #### 5.2.1 Proof of Corollary 3.1.1 In the case of a constant exploration parameter, we have $\xi=0$, and the optimal step sizes are achieved with $\sigma=3/4$ and $\nu=1/2$. In this case, we get ${\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq\tilde{\mathcal{O}}\left(\frac{T^{-1/4}}{\epsilon}+\epsilon\right)$. Hence, to get to a solution policy within $\delta/\epsilon+\epsilon$ of the global optimum, we need $\tilde{\mathcal{O}}(1/\delta^{4})$ samples. 
Furthermore, to get $\delta$-close to the global optimum, we should have $\tilde{\mathcal{O}}(\frac{T^{-1/4}}{\epsilon})\leq\delta/2$ and $\tilde{\mathcal{O}}(\epsilon)\leq\delta/2$, which means we have $\tilde{\mathcal{O}}(T^{-1/8})\leq\delta$. Hence, to get $\delta$-close to the global optimum, we need $\tilde{\mathcal{O}}(1/\delta^{8})$ samples. #### 5.2.2 Proof of Corollary 3.1.2 For $\xi>0$, we get ${\mathbb{E}}[V^{*}-V^{{\hat{\pi}}_{\hat{T}}}]\leq\tilde{\mathcal{O}}(T^{-1/6})$ convergence to the global optimum, which can be achieved by $\xi=1/6$, $\nu=1/2$, and $\sigma=5/6$. Hence, in this case, to get $\delta$-close to the global optimum, we need $\tilde{\mathcal{O}}(1/\delta^{6})$ samples. This proves Corollaries 3.1.1 and 3.1.2. ### 5.3 Proof sketch of Lemma 5.6 In Algorithm 1, the actions $\\{A_{t}\\}_{t\geq 1}$ are sampled from a time-varying policy ${\hat{\pi}}_{t-1}$. Hence, the tuple $(S_{t},A_{t})$ follows the time-varying Markov chain $\displaystyle S_{t-\tau}$ $\displaystyle\xrightarrow{{\hat{\pi}}_{t-\tau-1}}A_{t-\tau}\xrightarrow{\mathcal{P}}S_{t-\tau+1}\xrightarrow{{\hat{\pi}}_{t-\tau}}A_{t-\tau+1}\dots\xrightarrow{\mathcal{P}}S_{t}\xrightarrow{{\hat{\pi}}_{t-1}}A_{t}\xrightarrow{\mathcal{P}}S_{t+1}\xrightarrow{{\hat{\pi}}_{t}}A_{t+1}.$ Since the sampling policy is changing over time, the convergence analysis of this Markov chain is difficult. 
In order to analyze this time-varying Markov chain, at each time step $t$, we construct the following auxiliary Markov chain (this idea was first employed in [6]): $\displaystyle S_{t-\tau}$ $\displaystyle\xrightarrow{{\hat{\pi}}_{t-\tau-1}}A_{t-\tau}\xrightarrow{\mathcal{P}}\tilde{S}_{t-\tau+1}\xrightarrow{{\hat{\pi}}_{t-\tau-1}}\tilde{A}_{t-\tau+1}\dots\xrightarrow{\mathcal{P}}\tilde{S}_{t}\xrightarrow{{\hat{\pi}}_{t-\tau-1}}\tilde{A}_{t}\xrightarrow{\mathcal{P}}\tilde{S}_{t+1}\xrightarrow{{\hat{\pi}}_{t-\tau-1}}\tilde{A}_{t+1}.$ Due to the geometric mixing of the Markov chain, which is stated formally in Lemma 5.7, by choosing $\tau$ large enough, the distribution of $(\tilde{S}_{t+1},\tilde{A}_{t+1})$ is “sufficiently close” to the stationary distribution $\mu^{{\hat{\pi}}_{t-\tau-1}}\otimes{\hat{\pi}}_{t-\tau-1}$. ###### Lemma 5.7. Suppose Assumption 3.1 holds for an MDP. Then there exist $m>0$ and $\rho\in(0,1)$, such that $\displaystyle d_{TV}(\mu^{\pi}(\cdot),P^{\pi}(S_{\tau}=\cdot|S_{1}=s))\leq m\rho^{\tau},\forall s\in\mathcal{S},\forall\pi,$ (24) where $d_{TV}(\cdot,\cdot)$ denotes the total variation distance between two distributions. Furthermore, aperiodicity and the existence of $m$ and $\rho$ in inequality (24) are equivalent, i.e., if there exists a policy such that the underlying Markov chain is periodic, then (24) does not hold. Define $\tilde{O}_{t}=(\tilde{S}_{t},\tilde{A}_{t},\tilde{S}_{t+1},\tilde{A}_{t+1})$. 
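Lemma 5.7 is the standard geometric-ergodicity bound; the self-contained example below (a hypothetical two-state chain, not one from the paper) illustrates how the total variation distance to the stationary distribution contracts by a fixed factor $\rho$ per step:

```python
# A toy aperiodic, irreducible two-state chain with transition matrix P.
# Its stationary distribution is mu = (2/3, 1/3), and the second eigenvalue
# of P is rho = 0.7, which governs the geometric mixing rate.
P = [[0.9, 0.1],
     [0.2, 0.8]]
mu = [2 / 3, 1 / 3]

def step(dist):
    """One transition of the row-vector distribution: dist -> dist P."""
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

dist = [1.0, 0.0]  # start deterministically in state 0
prev_tv = None
for tau in range(1, 20):
    dist = step(dist)
    tv = 0.5 * sum(abs(dist[j] - mu[j]) for j in range(2))
    if prev_tv is not None:
        # exact contraction by rho = 0.7 for this two-state example
        assert abs(tv / prev_tv - 0.7) < 1e-6
    prev_tv = tv
```

For general chains the decay is only eventually geometric with prefactor $m$, as in (24); a two-state chain happens to contract exactly at rate $\rho$ from the first step.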
We have: $\displaystyle\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})=$ $\displaystyle\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})$ (25) $\displaystyle+\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})$ (26) $\displaystyle+\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})$ (27) $\displaystyle+\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t}).$ (28) We bound each of the terms above separately. Due to the Lipschitzness of $\Gamma$ with respect to its first and second arguments, the terms (25) and (26) can be bounded as follows: $\displaystyle\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})\leq$ $\displaystyle\mathcal{O}\left(\|{\hat{\pi}}_{t-1}-{\hat{\pi}}_{t-\tau-1}\|\right)$ $\displaystyle\leq$ $\displaystyle\mathcal{O}\left(\sum_{i=t-\tau}^{t-1}\left\|{\hat{\pi}}_{i}-{\hat{\pi}}_{i-1}\right\|\right)\leq\mathcal{O}\left(\tau\frac{\epsilon_{t}}{t}+\tau\beta_{t}\right).$ $\displaystyle\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})\leq$ $\displaystyle\mathcal{O}\left(\|\theta_{t}-\theta_{t-\tau}\|\right)$ $\displaystyle\leq$ $\displaystyle\mathcal{O}\left(\sum_{i=t-\tau+1}^{t}\|\theta_{i}-\theta_{i-1}\|\right)\leq\mathcal{O}\left(\tau\alpha_{t}+\tau\frac{\epsilon_{t}}{t}+\tau\beta_{t}\right).$ In order to bound the remaining two terms (27) and (28), we first apply conditional expectation on both sides. Bounding (27) is slightly technical and is presented in Lemma A.3. The main idea is as follows. Since the policy ${\hat{\pi}}_{t}$ does not change very fast over time, the conditional expectations of $\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})$ and $\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})$ are close. 
Denoting $\bar{\mathcal{F}}_{t-\tau}=\\{S_{t-\tau},{\hat{\pi}}_{t-\tau-1},\theta_{t-\tau}\\}$, we have $\displaystyle{\mathbb{E}}[\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})|\bar{\mathcal{F}}_{t-\tau}]\leq\tilde{\mathcal{O}}\left(\sum_{i=t-\tau}^{t}\|{\hat{\pi}}_{i}-{\hat{\pi}}_{t-\tau-1}\|\right)\leq\mathcal{O}\left(\tau\frac{\epsilon_{t}}{t}+\tau\beta_{t}\right).$ Finally, denoting $O^{\prime}_{t}=(S_{t}^{\prime},A_{t}^{\prime},S_{t+1}^{\prime},A_{t+1}^{\prime})$, where $S_{t}^{\prime}\sim\mu^{{\hat{\pi}}_{t-\tau-1}}$, $A_{t}^{\prime}\sim{\hat{\pi}}_{t-\tau-1}(\cdot|S_{t}^{\prime})$, $S_{t+1}^{\prime}\sim\mathcal{P}(\cdot|S_{t}^{\prime},A_{t}^{\prime})$, and $A_{t+1}^{\prime}\sim{\hat{\pi}}_{t-\tau-1}(\cdot|S_{t+1}^{\prime})$, we have ${\mathbb{E}}\left[\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O^{\prime}_{t})\big{|}S_{t-\tau},{\hat{\pi}}_{t-\tau-1}\right]=0$ due to the Bellman equation. According to Lemma 5.7, the distribution of the auxiliary chain $\tilde{O}_{t}=(\tilde{S}_{t},\tilde{A}_{t},\tilde{S}_{t+1},\tilde{A}_{t+1})$ converges geometrically fast to the distribution of $O^{\prime}_{t}=(S_{t}^{\prime},A_{t}^{\prime},S_{t+1}^{\prime},A_{t+1}^{\prime})$. Hence, we have $\displaystyle{\mathbb{E}}\left[\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})\big{|}S_{t-\tau},{\hat{\pi}}_{t-\tau-1}\right]$ $\displaystyle\leq\mathcal{O}(\rho^{\tau}).$ Putting all the above bounds together, we get the result. ### 5.4 Explanation of the main lemmas Lemma 5.2 which provides a bound on one step drift of the $Q$-function with respect to the sampling policy ${\hat{\pi}}_{t}$ can be derived from Lemmas 5.8 and 5.9 below. ###### Lemma 5.8. For every pair of policies $\pi_{1}$ and $\pi_{2}$, we have: $\|Q^{\pi_{1}}-Q^{\pi_{2}}\|\leq{L_{2}}\|\pi_{1}-\pi_{2}\|,$ where ${L_{2}}=\frac{\gamma|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{2}}$. ###### Lemma 5.9. 
The policy ${\hat{\pi}}_{t}$ satisfies the following: $\displaystyle\|{\hat{\pi}}_{t+1}-{\hat{\pi}}_{t}\|$ $\displaystyle\leq{L_{1}}\beta_{t}+{L_{3}}\frac{\epsilon_{t-1}}{t},~{}\forall t\geq 1,$ where ${L_{1}}=Q_{\max}\sqrt{|\mathcal{A}||\mathcal{S}|}$ and ${L_{3}}=\xi\sqrt{|\mathcal{S}|}(\frac{1}{\sqrt{|\mathcal{A}|}}+1)$. Lemma 5.8 characterizes the Lipschitzness of the $Q^{\pi}$ function with respect to the policy $\pi$, and Lemma 5.9 provides an upper bound on the drift of the sampling policy ${\hat{\pi}}_{t}$. Finally, Lemma 5.10 below provides intuition regarding the constant $\mu$ in Lemma 5.4. ###### Lemma 5.10. Suppose Assumption 3.1 holds. There exists a constant $\mu>0$ such that for all policies $\pi$, the stationary distribution $\mu^{\pi}$ satisfies $\mu^{\pi}(s)\geq\mu,\forall s\in\mathcal{S}.$ Lemma 5.10 is a direct consequence of the ergodicity of the underlying MDP. In particular, the ergodicity Assumption 3.1 ensures that for all policies $\pi$, every state is visited with probability at least $\mu$ under the stationary distribution $\mu^{\pi}$. As explained in Section 4, this is indeed essential for the convergence of the AC algorithm. ## 6 Conclusion In this paper we studied the convergence of the two-time-scale natural actor-critic algorithm. In order to promote exploration and ensure convergence, we employed $\epsilon$-greedy exploration in the iterations of the algorithm. We have shown that with a constant $\epsilon$ parameter, the actor-critic algorithm converges to a ball around the global optimum with radius $\epsilon+\delta/\epsilon$ using $\tilde{\mathcal{O}}(1/\delta^{4})$ samples, and that with a small enough exploration parameter $\epsilon$ it requires $\tilde{\mathcal{O}}(1/\delta^{8})$ samples to find a policy within a $\delta$-ball around the global optimum. Furthermore, with a carefully diminishing greedy parameter, we show $\tilde{\mathcal{O}}(\delta^{-6})$ sample complexity for convergence to the global optimum. 
Due to the employment of $\epsilon$-greedy exploration, we characterize this sample complexity under a minimal set of assumptions, namely the ergodicity of the underlying MDP. We show that this assumption is indeed necessary for establishing the convergence of the natural actor-critic algorithm. Improving this sample complexity, and characterizing the same sample complexity in the function approximation setting, are directions for future work. ## References * [1] Agarwal, A., Kakade, S. M., Lee, J. D. and Mahajan, G. (2019). On the theory of policy gradient methods: Optimality, approximation, and distribution shift. Preprint arXiv:1908.00261. * [2] Bahdanau, D., Brakel, P., Xu, K., Goyal, A., Lowe, R., Pineau, J., Courville, A. and Bengio, Y. (2016). An actor-critic algorithm for sequence prediction. Preprint arXiv:1607.07086. * [3] Bansal, N. and Gupta, A. (2019). Potential-Function Proofs for Gradient Methods. Theory of Computing 15 1–32. * [4] Beck, A. (2017). First-order methods in optimization. SIAM. * [5] Benveniste, A., Métivier, M. and Priouret, P. (2012). Adaptive algorithms and stochastic approximations 22. Springer Science & Business Media. * [6] Bhandari, J., Russo, D. and Singal, R. (2018). A finite time analysis of temporal difference learning with linear function approximation. In Conference on Learning Theory 1691–1692. PMLR. * [7] Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M. and Lee, M. (2009). Natural actor–critic algorithms. Automatica 45 2471–2482. * [8] Borkar, V. S. (2008). Stochastic Approximation: A Dynamical Systems Viewpoint. Cambridge University Press. * [9] Borkar, V. S. (2009). Stochastic approximation: a dynamical systems viewpoint 48. Springer. * [10] Borkar, V. S. and Pattathil, S. 
(2018). Concentration bounds for two time scale stochastic approximation. In 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton) 504–511. IEEE. * [11] Bouneffouf, D., Bouzeghoub, A. and Gançarski, A. L. (2012). A contextual-bandit algorithm for mobile context-aware recommender system. In International Conference on Neural Information Processing 324–331. Springer. * [12] Cayci, S., He, N. and Srikant, R. (2021). Linear Convergence of Entropy-Regularized Natural Policy Gradient with Linear Function Approximation. Preprint arXiv:2106.04096. * [13] Cen, S., Cheng, C., Chen, Y., Wei, Y. and Chi, Y. (2020). Fast global convergence of natural policy gradient methods with entropy regularization. Preprint arXiv:2007.06558. * [14] Chen, Z., Khodadadian, S. and Maguluri, S. T. (2021). Finite-Sample Analysis of Off-Policy Natural Actor-Critic with Linear Function Approximation. Preprint arXiv:2105.12540. * [15] Chen, Z., Maguluri, S. T., Shakkottai, S. and Shanmugam, K. (2020). Finite-Sample Analysis of Contractive Stochastic Approximation Using Smooth Convex Envelopes. Advances in Neural Information Processing Systems 33. * [16] Cover, T. M. and Thomas, J. A. (2012). Elements of information theory. John Wiley & Sons. * [17] Davis, A. S. (1961). Markov chains as random input automata. The American Mathematical Monthly 68 264–267. * [18] Doan, T. T. (2019). Finite-Time Analysis and Restarting Scheme for Linear Two-Time-Scale Stochastic Approximation. Preprint arXiv:1912.10583. * [19] Doan, T. T. (2021). Finite-Time Convergence Rates of Nonlinear Two-Time-Scale Stochastic Approximation under Markovian Noise. Preprint arXiv:2104.01627. 
* [20] Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I. et al. (2018). IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. Preprint arXiv:1802.01561. * [21] Fu, Z., Yang, Z. and Wang, Z. (2020). Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy. Preprint arXiv:2008.00483. * [22] Gajjar, G. R., Khaparde, S. A., Nagaraju, P. and Soman, S. A. (2003). Application of actor-critic learning algorithm for optimal bidding problem of a Genco. IEEE Transactions on Power Systems 18 11–18. * [23] Geist, M., Scherrer, B. and Pietquin, O. (2019). A theory of regularized Markov decision processes. Preprint arXiv:1901.11275. * [24] Gunasekar, S., Woodworth, B. and Srebro, N. (2020). Mirrorless Mirror Descent: A More Natural Discretization of Riemannian Gradient Flow. Preprint arXiv:2004.01025. * [25] Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P. et al. (2018). Soft actor-critic algorithms and applications. Preprint arXiv:1812.05905. * [26] Hajek, B. (2015). Random processes for engineers. Cambridge University Press. * [27] Kakade, S. and Langford, J. (2002). Approximately optimal approximate reinforcement learning. In ICML 2 267–274. * [28] Kakade, S. M. (2002). A natural policy gradient. In Advances in Neural Information Processing Systems 1531–1538. * [29] Kaledin, M., Moulines, E., Naumov, A., Tadic, V. and Wai, H.-T. (2020). Finite time analysis of linear two-timescale stochastic approximation with Markovian noise. In Conference on Learning Theory 2144–2203. PMLR. 
* [30] Kaledin, M., Moulines, E., Naumov, A., Tadic, V. and Wai, H.-T. (2020). Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise. Preprint arXiv:2002.01268. * [31] Khodadadian, S., Chen, Z. and Maguluri, S. T. (2021). Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm. Preprint arXiv:2102.09318. * [32] Khodadadian, S., Jhunjhunwala, P. R., Varma, S. M. and Maguluri, S. T. (2021). On the linear convergence of natural policy gradient algorithm. Preprint arXiv:2105.01424. * [33] Konda, V. R. and Tsitsiklis, J. N. (2000). Actor-critic algorithms. In Advances in Neural Information Processing Systems 1008–1014. * [34] Kuleshov, V. and Precup, D. (2014). Algorithms for multi-armed bandit problems. Preprint arXiv:1402.6028. * [35] Kumar, H., Koppel, A. and Ribeiro, A. (2019). On the Sample Complexity of Actor-Critic Method for Reinforcement Learning with Function Approximation. Preprint arXiv:1910.08412. * [36] Lan, G. (2021). Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes. Preprint arXiv:2102.00135. * [37] Levin, D. A. and Peres, Y. (2017). Markov chains and mixing times 107. American Mathematical Society. * [38] Liu, B., Cai, Q., Yang, Z. and Wang, Z. (2019). Neural proximal/trust region policy optimization attains globally optimal policy. Advances in Neural Information Processing Systems 32. * [39] Liu, B., Liu, J., Ghavamzadeh, M., Mahadevan, S. and Petrik, M. (2015). Finite-Sample Analysis of Proximal Gradient TD Algorithms. In UAI 504–513. Citeseer. * [40] Liu, Y., Zhang, K., Basar, T. 
and Yin, W. (2020). An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods. Advances in Neural Information Processing Systems 33. * [41] Mei, J., Xiao, C., Szepesvari, C. and Schuurmans, D. (2020). On the Global Convergence Rates of Softmax Policy Gradient Methods. Preprint arXiv:2005.06392. * [42] Montague, P. R. (1999). Reinforcement Learning: An Introduction, by Sutton, R. S. and Barto, A. G. Trends in Cognitive Sciences 3 360. * [43] Morimura, T., Uchibe, E., Yoshimoto, J. and Doya, K. (2009). A generalized natural actor-critic algorithm. In Advances in Neural Information Processing Systems 1312–1320. * [44] Mou, W., Li, C. J., Wainwright, M. J., Bartlett, P. L. and Jordan, M. I. (2020). On linear stochastic approximation: Fine-grained Polyak–Ruppert and non-asymptotic concentration. In Conference on Learning Theory 2947–2997. PMLR. * [45] Peters, J. and Schaal, S. (2008). Natural actor-critic. Neurocomputing 71 1180–1190. * [46] Puterman, M. L. (1990). Markov decision processes. Handbooks in Operations Research and Management Science 2 331–434. * [47] Qiu, S., Yang, Z., Ye, J. and Wang, Z. (2019). On the finite-time convergence of actor-critic algorithm. In Optimization Foundations for Reinforcement Learning Workshop at Advances in Neural Information Processing Systems (NeurIPS). * [48] Raskutti, G. and Mukherjee, S. (2015). The information geometry of mirror descent. IEEE Transactions on Information Theory 61 1451–1457. * [49] Rissanen, J. J. (1996). Fisher information and stochastic complexity. IEEE Transactions on Information Theory 42 40–47. * [50] Robbins, H. and Monro, S. (1951). A stochastic approximation method. 
The Annals of Mathematical Statistics 400–407.
* [51] Schulman, J., Levine, S., Abbeel, P., Jordan, M. and Moritz, P. (2015). Trust region policy optimization. In International Conference on Machine Learning 1889–1897.
* [52] Schulman, J., Wolski, F., Dhariwal, P., Radford, A. and Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
* [53] Shani, L., Efroni, Y. and Mannor, S. (2019). Adaptive trust region policy optimization: Global convergence and faster rates for regularized MDPs. arXiv preprint arXiv:1909.02769.
* [54] Shani, L., Efroni, Y. and Mannor, S. (2020). Adaptive Trust Region Policy Optimization: Global Convergence and Faster Rates for Regularized MDPs. In Proceedings of the AAAI Conference on Artificial Intelligence 34 5668–5675.
* [55] Sutton, R. S., McAllester, D. A., Singh, S. P. and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 1057–1063.
* [56] Thomas, P. S., Dabney, W. C., Giguere, S. and Mahadevan, S. (2013). Projected Natural Actor-Critic. In Advances in Neural Information Processing Systems 26 (C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani and K. Q. Weinberger, eds.) 2337–2345. Curran Associates, Inc.
* [57] Tokic, M. (2010). Adaptive $\varepsilon$-greedy exploration in reinforcement learning based on value differences. In Annual Conference on Artificial Intelligence 203–210. Springer.
* [58] Wang, L., Cai, Q., Yang, Z. and Wang, Z. (2019). Neural policy gradient methods: Global optimality and rates of convergence. arXiv preprint arXiv:1909.01150.
* [59] Wang, Z., Bapst, V., Heess, N., Mnih, V., Munos, R., Kavukcuoglu, K. and de Freitas, N. (2016). Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224.
* [60] Watkins, C. J. C. H. and Dayan, P. (1992). Q-learning. Machine Learning 8 279–292.
* [61] Williams, R. J. and Baird, L. C. (1990). A mathematical analysis of actor-critic architectures for learning optimal controls through incremental dynamic programming. In Proceedings of the Sixth Yale Workshop on Adaptive and Learning Systems 96–101. Citeseer.
* [62] Wu, Y., Zhang, W., Xu, P. and Gu, Q. (2020). A Finite Time Analysis of Two Time-Scale Actor Critic Methods. arXiv preprint arXiv:2005.01350.
* [63] Wunder, M., Littman, M. L. and Babes, M. (2010). Classes of multiagent Q-learning dynamics with epsilon-greedy exploration. In Proceedings of the 27th International Conference on Machine Learning (ICML-10) 1167–1174. Citeseer.
* [64] Xu, T., Wang, Z. and Liang, Y. (2020). Non-asymptotic Convergence Analysis of Two Time-scale (Natural) Actor-Critic Algorithms. arXiv preprint arXiv:2005.03557.
* [65] Xu, T., Wang, Z. and Liang, Y. (2020). Improving Sample Complexity Bounds for Actor-Critic Algorithms. arXiv preprint arXiv:2004.12956.
* [66] Xu, T., Yang, Z., Wang, Z. and Liang, Y. (2021). Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality. arXiv preprint arXiv:2102.11866.
* [67] Zhan, W., Cen, S., Huang, B., Chen, Y., Lee, J. D. and Chi, Y. (2021). Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence. arXiv preprint arXiv:2105.11066.
* [68] Zhang, K., Koppel, A., Zhu, H. and Başar, T. (2019).
Convergence and iteration complexity of policy gradient method for infinite-horizon reinforcement learning. In 2019 IEEE 58th Conference on Decision and Control (CDC) 7415–7422. IEEE.
* [69] Zhang, K., Yang, Z., Liu, H., Zhang, T. and Basar, T. (2021). Finite-sample analysis for decentralized batch multi-agent reinforcement learning with networked agents. IEEE Transactions on Automatic Control.
* [70] Zhang, S., Liu, B., Yao, H. and Whiteson, S. (2020). Provably convergent two-timescale off-policy actor-critic with function approximation. In International Conference on Machine Learning 11204–11213. PMLR. Appendices The supplementary material is organized as follows: in Section A the details of the proof of Theorem 3.1 are presented, and in Section B the details of the proof of Proposition 5.1 are provided. ## Appendix A Details of the Proof of Theorem 3.1 ### A.1 Proof of Useful Lemmas Proof of Lemma 5.3: $\displaystyle\|\theta_{t+1}-\theta_{t}\|^{2}$ $\displaystyle=\|Q_{t+1}-Q_{t}+Q^{{\hat{\pi}}_{t-1}}-Q^{{\hat{\pi}}_{t}}\|^{2}$ $\displaystyle\leq 2\|Q_{t+1}-Q_{t}\|^{2}+2\|Q^{{\hat{\pi}}_{t-1}}-Q^{{\hat{\pi}}_{t}}\|^{2}$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}2\alpha_{t}^{2}\Delta_{Q}^{2}+2{L_{2}}^{2}\|{\hat{\pi}}_{t-1}-{\hat{\pi}}_{t}\|^{2}$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}2\alpha_{t}^{2}\Delta_{Q}^{2}+2{L_{2}}^{2}\left({L_{3}}\frac{\epsilon_{t-2}}{t-1}+{L_{1}}\beta_{t-1}\right)^{2}$ $\displaystyle\leq 2\alpha_{t}^{2}\Delta_{Q}^{2}+4{L_{2}}^{2}{L_{3}}^{2}\frac{\epsilon_{t-2}^{2}}{(t-1)^{2}}+4{L_{2}}^{2}{L_{1}}^{2}\beta_{t-1}^{2},$ where $(a)$ is due to Lemmas 5.8 and A.6, and $(b)$ is due to Lemma 5.9. Proof of Lemma 5.4: We prove this lemma for a slightly more general case. Consider a finite-state Markov chain $\\{X_{k}\\}_{k=0,1,\dots}$ with state space $\mathcal{X}=\\{x_{1},x_{2},\dots,x_{|\mathcal{X}|}\\}$ and stationary distribution $\nu$.
Define $M:=\mathrm{diag}(\nu)$, the diagonal matrix with diagonal entries equal to the elements of $\nu$. Clearly, $M=M^{\top}$. Further, denote by $P$ the transition matrix of the Markov chain. Define $V=\gamma P-I$, where $I$ is the identity matrix. Assuming $X_{k}\sim\nu$, for any function $F(\cdot):\mathcal{X}\rightarrow\mathbb{R}$, we have: ${\mathbb{E}}\left[F\left(X_{k}\right)^{2}\right]={\mathbb{E}}\left[F\left(X_{k+1}\right)^{2}\right].$ By the Cauchy–Schwarz inequality, we have: $\displaystyle{\mathbb{E}}\left[F\left(X_{k}\right)F\left(X_{k+1}\right)\right]$ $\displaystyle\leq\sqrt{{\mathbb{E}}\left[F^{2}(X_{k})\right]}.\sqrt{{\mathbb{E}}\left[F^{2}(X_{k+1})\right]}$ $\displaystyle={\mathbb{E}}\left[F^{2}(X_{k})\right].$ (29) Denoting $F=[F(x_{1});F(x_{2});\dots;F(x_{|\mathcal{X}|})]$ as a $|\mathcal{X}|$-dimensional vector, we have: $\displaystyle{\mathbb{E}}[F^{2}(X_{k})]$ $\displaystyle=\sum_{x\in\mathcal{X}}\nu(x)F^{2}(x)=F^{\top}MF,$ (30) $\displaystyle{\mathbb{E}}[F(X_{k})F(X_{k+1})]$ $\displaystyle=\sum_{x,y\in\mathcal{X}}\nu(x)P(y|x)F(x)F(y)$ $\displaystyle=F^{\top}MPF=F^{\top}P^{\top}MF,$ (31) where the last equality is due to ${\mathbb{E}}[F(X_{k})F(X_{k+1})]$ being a scalar.
Combining (29), (30), and (31), we have: $\displaystyle F^{\top}MPF$ $\displaystyle\leq F^{\top}MF,~{}~{}\forall F$ $\displaystyle\implies MP$ $\displaystyle\leq M$ $\displaystyle\implies M(\gamma P-I)$ $\displaystyle\leq-(1-\gamma)M,$ (32) where the matrix inequalities are understood in the sense of the associated quadratic forms. Next, in the case of the MDP, for a fixed policy $\pi$, we define the matrices $M^{\pi}\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|\times|\mathcal{S}||\mathcal{A}|}$ and $P^{\pi}\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|\times|\mathcal{S}||\mathcal{A}|}$ as follows: $\displaystyle M^{\pi}_{(s,a),(s^{\prime},a^{\prime})}$ $\displaystyle=\begin{cases}\mu^{\pi}(s)\pi(a|s)&(s,a)=(s^{\prime},a^{\prime}),\\\ 0&\text{otherwise}\end{cases}$ $\displaystyle P^{\pi}_{(s,a),(s^{\prime},a^{\prime})}$ $\displaystyle=\mathcal{P}(s^{\prime}|s,a)\pi(a^{\prime}|s^{\prime}).$ It is easy to see that: $\displaystyle\bar{A}^{\pi}_{(s,a),(s^{\prime},a^{\prime})}$ $\displaystyle=\begin{cases}\mu^{\pi}(s)\pi(a|s)\left(\gamma\mathcal{P}\left(s^{\prime}|s,a\right)\pi\left(a^{\prime}|s^{\prime}\right)-1\right)&s=s^{\prime},a=a^{\prime},\\\ \gamma\mu^{\pi}(s)\pi(a|s)\mathcal{P}\left(s^{\prime}|s,a\right)\pi\left(a^{\prime}|s^{\prime}\right)&s\neq s^{\prime}~{}\text{or}~{}a\neq a^{\prime}\end{cases}$ $\displaystyle\implies\bar{A}^{\pi}$ $\displaystyle=M^{\pi}(\gamma P^{\pi}-I)\leq-(1-\gamma)M^{\pi},$ where the last inequality follows from (32). As a result, we have: $\displaystyle\theta^{\top}_{t}\bar{A}^{{\hat{\pi}}_{t-1}}\theta_{t}$ $\displaystyle\leq-(1-\gamma)\sum_{s,a}\mu^{{\hat{\pi}}_{t-1}}(s){\hat{\pi}}_{t-1}(a|s)\theta_{t,s,a}^{2}$ $\displaystyle\leq-(1-\gamma)\frac{\epsilon_{t-2}}{|\mathcal{A}|}\mu\|\theta_{t}\|^{2},$ where the last inequality follows from ${\hat{\pi}}_{t-1}(a|s)\geq\frac{\epsilon_{t-2}}{|\mathcal{A}|}$ and Lemma 5.10. Proof of Lemma 5.5: 1. By the assumption on the reward function $\mathcal{R}(s,a)\geq 0$, we have $V^{\pi}(s)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}\mathcal{R}(S_{k},A_{k})\,|\,S_{0}=s\right]\geq 0$.
Furthermore, due to $\mathcal{R}(s,a)\leq 1$, we have $V^{\pi}(s)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}\mathcal{R}(S_{k},A_{k})\,|\,S_{0}=s\right]\leq\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}\,|\,S_{0}=s\right]=\frac{1}{1-\gamma}$ for all $s\in\mathcal{S}$. 2. Similarly, we have $Q^{\pi}(s,a)\in[0,\frac{1}{1-\gamma}]$ for all $s,a$. 3. $\|Q^{\pi}\|=\sqrt{\sum_{s,a}{Q^{\pi}}^{2}(s,a)}\leq\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{1-\gamma}$. 4. To prove this, we first show $\|Q_{t}\|_{\infty}\leq\frac{1}{1-\gamma}$ for all $t\geq 0$. We establish this bound by induction. Due to the initialization, the inequality holds for $t=0$. Assuming the inequality holds for $t$, we prove it holds for $t+1$. For all $s,a$, we have: $\displaystyle|Q_{t+1}(s,a)|=$ $\displaystyle\left|(1-\alpha_{t}(s,a))Q_{t}(s,a)+\alpha_{t}(s,a)(\mathcal{R}(s,a)+\gamma Q_{t}(S_{t+1},A_{t+1}))\right|$ $\displaystyle\leq$ $\displaystyle(1-\alpha_{t}(s,a))|Q_{t}(s,a)|+\alpha_{t}(s,a)|\mathcal{R}(s,a)+\gamma Q_{t}(S_{t+1},A_{t+1})|$ $\displaystyle\leq$ $\displaystyle(1-\alpha_{t}(s,a))Q_{\max}+\alpha_{t}(s,a)(1+\gamma Q_{\max})$ $\displaystyle=$ $\displaystyle(1-\alpha_{t}(s,a))\frac{1}{1-\gamma}+\alpha_{t}(s,a)(1+\gamma\frac{1}{1-\gamma})=\frac{1}{1-\gamma}.$ The bound for $\|Q_{t}\|$ follows directly.
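The key matrix inequality in the proof of Lemma 5.4, $M(\gamma P-I)\preceq-(1-\gamma)M$ in the sense of quadratic forms, can be checked numerically. The sketch below is purely illustrative and not part of the analysis; the chain size, discount factor, and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 6, 0.9

# A random irreducible transition matrix P (strictly positive rows summing to one).
P = rng.random((n, n)) + 0.1
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution nu: the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
nu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
nu /= nu.sum()

M = np.diag(nu)
A_bar = M @ (gamma * P - np.eye(n))  # analogue of M(gamma*P - I)

# Quadratic-form check: F^T M(gamma*P - I) F <= -(1-gamma) F^T M F for all F.
for _ in range(1000):
    F = rng.standard_normal(n)
    assert F @ A_bar @ F <= -(1.0 - gamma) * (F @ M @ F) + 1e-10
```

The inequality holds for every test vector $F$ because, by the Cauchy–Schwarz step above, $F^{\top}MPF\leq F^{\top}MF$ for any stationary chain.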
Proof of Lemma 5.6: Given time indices $t$ and $\tau<t$, we consider the following auxiliary chain of state-actions: $\displaystyle S_{t-\tau}$ $\displaystyle\xrightarrow{{\hat{\pi}}_{t-\tau-1}}A_{t-\tau}\xrightarrow{\mathcal{P}}\tilde{S}_{t-\tau+1}\xrightarrow{{\hat{\pi}}_{t-\tau-1}}\tilde{A}_{t-\tau+1}\dots\xrightarrow{\mathcal{P}}\tilde{S}_{t}\xrightarrow{{\hat{\pi}}_{t-\tau-1}}\tilde{A}_{t}\xrightarrow{\mathcal{P}}\tilde{S}_{t+1}\xrightarrow{{\hat{\pi}}_{t-\tau-1}}\tilde{A}_{t+1}.$ Note that the original chain is as follows: $\displaystyle S_{t-\tau}$ $\displaystyle\xrightarrow{{\hat{\pi}}_{t-\tau-1}}A_{t-\tau}\xrightarrow{\mathcal{P}}S_{t-\tau+1}\xrightarrow{{\hat{\pi}}_{t-\tau}}A_{t-\tau+1}\dots\xrightarrow{\mathcal{P}}S_{t}\xrightarrow{{\hat{\pi}}_{t-1}}A_{t}\xrightarrow{\mathcal{P}}S_{t+1}\xrightarrow{{\hat{\pi}}_{t}}A_{t+1}.$ Further, we define $\tilde{O}_{t}=(\tilde{S}_{t},\tilde{A}_{t},\tilde{S}_{t+1},\tilde{A}_{t+1})$. We have: $\displaystyle\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})=$ $\displaystyle\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})$ (33) $\displaystyle+\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})$ (34) $\displaystyle+\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})$ (35) $\displaystyle+\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t}).$ (36) We bound each of the terms above separately. 
Firstly: $\displaystyle\Gamma({\hat{\pi}}_{t-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle K_{1}\|{\hat{\pi}}_{t-1}-{\hat{\pi}}_{t-\tau-1}\|$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle K_{1}\sum_{i=t-\tau}^{t-1}\left\|{\hat{\pi}}_{i}-{\hat{\pi}}_{i-1}\right\|$ $\displaystyle\stackrel{{\scriptstyle(c)}}{{\leq}}$ $\displaystyle K_{1}\sum_{i=t-\tau}^{t-1}\left[{L_{3}}\frac{\epsilon_{i-2}}{i-1}+{L_{1}}\beta_{i-1}\right]$ $\displaystyle\leq$ $\displaystyle K_{1}\tau\left[{L_{3}}\frac{\epsilon_{t-\tau-2}}{t-\tau-1}+{L_{1}}\beta_{t-\tau-1}\right],$ where $(a)$ is due to Lemma A.1, $(b)$ is by the triangle inequality, and $(c)$ is due to Lemma 5.9. Second, we have: $\displaystyle\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle K_{2}\|\theta_{t}-\theta_{t-\tau}\|$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle K_{2}\sum_{i=t-\tau+1}^{t}\|\theta_{i}-\theta_{i-1}\|$ $\displaystyle\stackrel{{\scriptstyle(c)}}{{\leq}}$ $\displaystyle K_{2}\sum_{i=t-\tau+1}^{t}\left[\Delta_{Q}\alpha_{i-1}+{L_{2}}{L_{3}}\frac{\epsilon_{i-3}}{i-2}+{L_{1}}{L_{2}}\beta_{i-2}\right]$ $\displaystyle\leq$ $\displaystyle K_{2}\tau\left[\Delta_{Q}\alpha_{t-\tau}+{L_{2}}{L_{3}}\frac{\epsilon_{t-\tau-2}}{t-\tau-1}+{L_{1}}{L_{2}}\beta_{t-\tau-1}\right],$ where $(a)$ is due to Lemma A.2, $(b)$ is by the triangle inequality, and $(c)$ is due to Lemma A.6.
Third, denoting $\mathcal{F}_{t-\tau}:=\\{S_{t-\tau},{\hat{\pi}}_{t-\tau-1},\theta_{t-\tau}\\}$, we have: $\displaystyle{\mathbb{E}}\bigg{[}\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})\bigg{|}\mathcal{F}_{t-\tau}\bigg{]}\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle C_{u}{\mathbb{E}}\left[\sum_{i=t-\tau}^{t}\|{\hat{\pi}}_{i}-{\hat{\pi}}_{t-\tau-1}\|\Bigg{|}\mathcal{F}_{t-\tau}\right]$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle C_{u}(\tau+1)^{2}\left[{L_{3}}\frac{\epsilon_{t-\tau-2}}{t-\tau-1}+{L_{1}}\beta_{t-\tau-1}\right],$ where $(a)$ is due to Lemma A.3 and $(b)$ is due to Lemma A.5. Finally, by Lemma A.4 we have: ${\mathbb{E}}\left[\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})\big{|}\mathcal{F}_{t-\tau}\right]\leq C_{b}m\rho^{\tau}.$ Combining the bounds above, and noticing $\tau\geq 1$, we get the result. Proof of Lemma 5.7: Let $\pi$ be an arbitrary stochastic policy. $\pi$ can be written as a $|\mathcal{S}|$ by $|\mathcal{A}|$ stochastic matrix, which has non-negative elements and rows summing to one. Hence, by [17, Theorem 1], $\pi$ can be written as a convex combination of at most $N=|\mathcal{S}|(|\mathcal{A}|-1)+1$ deterministic policies $\\{\pi_{i}\\}_{i=1}^{|\mathcal{S}|(|\mathcal{A}|-1)+1}$. In other words, there exist coefficients $\\{\alpha_{i}\\}_{i=1}^{N}$ such that $\alpha_{i}\geq 0$, $\sum_{i}\alpha_{i}=1$, and $\pi=\sum_{i}\alpha_{i}\pi_{i}$. By the definition of $P^{\pi}$, we have $P^{\pi}=\sum_{i}\alpha_{i}P^{\pi_{i}}$. Due to the ergodicity Assumption 3.1, for every policy $\pi_{i}$, there exists a finite integer $r_{i}$ such that $(P^{\pi_{i}})^{\bar{r}_{i}}$ is a positive matrix with minimum element $e_{i}>0$ for all $\bar{r}_{i}\geq r_{i}$. Since there are finitely many $r_{i}$ and $e_{i}$, $r=\max_{i}r_{i}$ is a finite integer and $e=\min_{i}e_{i}>0$.
Furthermore, we have $\displaystyle\left[(P^{\pi})^{r}\right]_{m,n}=$ $\displaystyle\left[\left(\sum_{i}\alpha_{i}P^{\pi_{i}}\right)^{r}\right]_{m,n}$ (37) $\displaystyle\stackrel{{\scriptstyle(a)}}{{\geq}}$ $\displaystyle\sum_{i}\alpha_{i}^{r}\left[(P^{\pi_{i}})^{r}\right]_{m,n}\geq Ne\frac{1}{N}\sum_{i}\alpha_{i}^{r}$ (38) $\displaystyle\stackrel{{\scriptstyle(b)}}{{\geq}}$ $\displaystyle Ne\left(\frac{1}{N}\sum_{i}\alpha_{i}\right)^{r}=Ne\left(\frac{1}{N}\right)^{r}>0,$ where $(a)$ is due to non-negativity of matrices $\alpha_{i}P^{\pi_{i}}$ and $(b)$ is by Jensen’s inequality. Hence, by [37, Theorem 4.9], we can show the existence of $\rho\in(0,1)$ and $m>0$. Furthermore, if the underlying Markov chain under a policy $\pi$ is periodic with period $d$, then we have $\lim_{t\rightarrow\infty}P(S_{dt}=i|S_{0}=i)>0$ while $\lim_{t\rightarrow\infty}P(S_{dt+1}=i|S_{0}=i)=0$, and hence (24) does not hold. Proof of Lemma 5.8: By the policy gradient theorem [1], we know that for any distribution $\mu$, we have $\frac{\partial V^{\pi}(\mu)}{\partial\pi(a|s)}=\frac{1}{1-\gamma}d_{\mu}^{\pi}(s)Q^{\pi}(s,a)$. 
As a result: $\displaystyle\left\|\frac{\partial V^{\pi}(\mu)}{\partial\pi}\right\|$ $\displaystyle\leq\frac{1}{1-\gamma}\sqrt{\sum_{s,a}{Q^{\pi}}^{2}(s,a)}\leq\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{(1-\gamma)^{2}}.$ Furthermore, we have: $\displaystyle\frac{\partial Q^{\pi}(s,a)}{\partial\pi}$ $\displaystyle=\gamma\sum_{s^{\prime}}\mathcal{P}(s^{\prime}|s,a)\frac{\partial V^{\pi}(s^{\prime})}{\partial\pi},$ which implies $\displaystyle\left\|\frac{\partial Q^{\pi}(s,a)}{\partial\pi}\right\|$ $\displaystyle\leq\gamma\sum_{s^{\prime}}\mathcal{P}(s^{\prime}|s,a)\left\|\frac{\partial V^{\pi}(s^{\prime})}{\partial\pi}\right\|\leq\gamma\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{(1-\gamma)^{2}}$ $\displaystyle\implies|Q^{\pi_{1}}(s,a)-Q^{\pi_{2}}(s,a)|\leq\gamma\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{(1-\gamma)^{2}}\|\pi_{1}-\pi_{2}\|.$ Using this, we have: $\displaystyle\|Q^{\pi_{1}}-Q^{\pi_{2}}\|$ $\displaystyle\leq\sqrt{\sum_{s,a}\frac{\gamma^{2}|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{4}}\|\pi_{1}-\pi_{2}\|^{2}}$ $\displaystyle=\frac{\gamma|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{2}}\|\pi_{1}-\pi_{2}\|={L_{2}}\|\pi_{1}-\pi_{2}\|,$ where ${L_{2}}:=\frac{\gamma|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^{2}}$. Proof of Lemma 5.9: Policy $\pi_{t}$ can be parameterized by the vector $\theta^{t}\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ as $\pi_{t}(a|s)=\frac{\exp(\theta^{t}_{s,a})}{\sum_{a^{\prime}}\exp(\theta^{t}_{s,a^{\prime}})}$. It is straightforward to see that the multiplicative weight update of the policy in Algorithm 1 is equivalent to the logit update $\theta^{t+1}=\theta^{t}+\beta_{t}Q_{t+1}$ [14, Lemma 3.1]. We have $\displaystyle\|\pi_{t+1}-\pi_{t}\|^{2}$ $\displaystyle=\sum_{s}\|\pi_{t+1}(\cdot|s)-\pi_{t}(\cdot|s)\|^{2}$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}\sum_{s}\|\beta_{t}Q_{t+1}(s,\cdot)\|^{2}\leq\beta_{t}^{2}|\mathcal{A}||\mathcal{S}|Q_{\max}^{2},$ (39) where in $(a)$ we use the $1$-Lipschitzness of the softmax function [4].
As a result, we have: $\displaystyle\|{\hat{\pi}}_{t+1}-{\hat{\pi}}_{t}\|$ $\displaystyle=\|(\epsilon_{t}-\epsilon_{t-1})(\frac{1}{|\mathcal{A}|}-\pi_{t+1})+(1-\epsilon_{t-1})(\pi_{t+1}-\pi_{t})\|$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}|\epsilon_{t}-\epsilon_{t-1}|\sqrt{|\mathcal{S}|}(\frac{1}{\sqrt{|\mathcal{A}|}}+1)+\|\pi_{t+1}-\pi_{t}\|$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}\frac{\xi\epsilon_{t-1}}{t}\sqrt{|\mathcal{S}|}(\frac{1}{\sqrt{|\mathcal{A}|}}+1)+\beta_{t}Q_{\max}\sqrt{|\mathcal{A}||\mathcal{S}|}$ $\displaystyle={L_{3}}\frac{\epsilon_{t-1}}{t}+{L_{1}}\beta_{t},$ where $(a)$ is due to the triangle inequality and $(b)$ is due to the assumption on $\epsilon_{t}$ and (39). Here ${L_{3}}=\xi\sqrt{|\mathcal{S}|}(\frac{1}{\sqrt{|\mathcal{A}|}}+1)$ and ${L_{1}}=Q_{\max}\sqrt{|\mathcal{A}||\mathcal{S}|}$. Proof of Lemma 5.10: The ergodicity Assumption 3.1 implies that the underlying Markov chain induced by all the policies is irreducible. The proof follows from [37, Proposition 1.14]. ### A.2 Auxiliary Lemmas ###### Lemma A.1. For any $\pi_{1},\pi_{2},\theta$, and $O=(S,A,S^{\prime},A^{\prime})$, $\left|\Gamma(\pi_{1},\theta,O)-\Gamma(\pi_{2},\theta,O)\right|\ \leq K_{1}\|\pi_{1}-\pi_{2}\|,$ where $K_{1}=2Q_{\max}\sqrt{2|\mathcal{S}||\mathcal{A}|}{L_{2}}+8Q_{\max}^{2}|\mathcal{S}|^{2}|\mathcal{A}|^{3}\left(\left\lceil\log_{\rho}m^{-1}\right\rceil+\frac{1}{1-\rho}+2\right)$. ###### Lemma A.2. For any $\pi,\theta_{1},\theta_{2}$, and $O=(S,A,S^{\prime},A^{\prime})$, $|\Gamma(\pi,\theta_{1},O)-\Gamma(\pi,\theta_{2},O)|\leq K_{2}\|\theta_{1}-\theta_{2}\|,$ where $K_{2}=1+9\sqrt{2|\mathcal{S}||\mathcal{A}|}Q_{\max}$. ###### Lemma A.3. Consider the original tuples $O_{t}=(S_{t},A_{t},S_{t+1},A_{t+1})$ and the auxiliary tuples $\tilde{O}_{t}=(\tilde{S}_{t},\tilde{A}_{t},\tilde{S}_{t+1},\tilde{A}_{t+1})$. Denote $\mathcal{F}_{t-\tau}:=\\{S_{t-\tau},{\hat{\pi}}_{t-\tau-1},\theta_{t-\tau}\\}$.
For any time indices $t>\tau>1$, we have $\displaystyle{\mathbb{E}}\left[\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})\mid\mathcal{F}_{t-\tau}\right]\leq C_{u}{\mathbb{E}}\left[\sum_{i=t-\tau}^{t}\|{\hat{\pi}}_{i}-{\hat{\pi}}_{t-\tau-1}\|\mid\mathcal{F}_{t-\tau}\right],$ where $C_{u}=4Q_{\max}|\mathcal{S}|^{1.5}|\mathcal{A}|^{1.5}(1+3Q_{\max}|\mathcal{S}||\mathcal{A}|)$. ###### Lemma A.4. Consider the auxiliary tuple $\tilde{O}_{t}=(\tilde{S}_{t},\tilde{A}_{t},\tilde{S}_{t+1},\tilde{A}_{t+1})$. Denote $\mathcal{F}_{t-\tau}=\\{S_{t-\tau},{\hat{\pi}}_{t-\tau-1},\theta_{t-\tau}\\}$. For any time indices $t>\tau>1$, we have: ${\mathbb{E}}\left[\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})\big{|}\mathcal{F}_{t-\tau}\right]\leq C_{b}m\rho^{\tau},$ where $C_{b}=4Q_{\max}|\mathcal{S}||\mathcal{A}|(1+3|\mathcal{S}||\mathcal{A}|Q_{\max})$. ###### Lemma A.5. For any time indices $t>\tau>1$, the policies generated by Algorithm 1 satisfy the following: $\sum_{i=t-\tau}^{t}\left\|{\hat{\pi}}_{i}-{\hat{\pi}}_{t-\tau-1}\right\|\leq(\tau+1)^{2}\left[{L_{3}}\frac{\epsilon_{t-\tau-2}}{t-\tau-1}+{L_{1}}\beta_{t-\tau-1}\right].$ ###### Lemma A.6. We have the following bounds: 1. $\|A(O)\|\leq\sqrt{1+\gamma^{2}}\leq\sqrt{2}$, 2. $\|r(O)\|\leq 1$, 3. $\|\bar{A}^{\pi}\|\leq\sqrt{2}$, 4. $\left\|{\mathbb{E}}[r\left(O_{1}\right)-r(O_{2})]\right\|_{1}\leq 2|\mathcal{S}||\mathcal{A}|d_{TV}(O_{1},O_{2})$, 5. $\left\|{\mathbb{E}}[A\left(O_{1}\right)-A(O_{2})]\right\|_{1}\leq 2|\mathcal{S}||\mathcal{A}|d_{TV}(O_{1},O_{2})$, 6. $\|Q_{t+1}-Q_{t}\|\leq\alpha_{t}\Delta_{Q}:=\alpha_{t}(2Q_{\max}+1)$, 7. $\left\|\theta_{t}-\theta_{t-1}\right\|\leq\Delta_{Q}\alpha_{t-1}+{L_{2}}{L_{3}}\frac{\epsilon_{t-3}}{t-2}+{L_{1}}{L_{2}}\beta_{t-2}$, where $Q_{\max}=\frac{1}{1-\gamma}$, and the constants ${L_{1}},{L_{2}},{L_{3}}$ are defined in Lemmas 5.8 and 5.9. ###### Lemma A.7.
Consider $O_{t}=(S_{t},A_{t},S_{t+1},A_{t+1})$ and $\tilde{O}_{t}=(\tilde{S}_{t},\tilde{A}_{t},\tilde{S}_{t+1},\tilde{A}_{t+1})$. Denote $\mathcal{F}_{t-\tau}:=\\{S_{t-\tau},{\hat{\pi}}_{t-\tau-1},\theta_{t-\tau}\\}$. We have: $\displaystyle d_{TV}\big{(}P(O_{t}\in\cdot|\mathcal{F}_{t-\tau})||P(\tilde{O}_{t}\in\cdot|\mathcal{F}_{t-\tau})\big{)}\leq\sqrt{|\mathcal{A}||\mathcal{S}|}{\mathbb{E}}\left[\sum_{i=t-\tau}^{t}\|{\hat{\pi}}_{i}-{\hat{\pi}}_{t-\tau-1}\|\mid\mathcal{F}_{t-\tau}\right].$ ###### Lemma A.8 (Lemma A.1 in [62]). Denote $M=\left\lceil\log_{\rho}m^{-1}\right\rceil+\frac{1}{1-\rho}$. For any policies $\pi_{1}$ and $\pi_{2}$, we have the following inequality: $\displaystyle d_{TV}\left(\mu^{\pi_{1}}\otimes\pi_{1}\otimes\mathcal{P}\otimes\pi_{1},\mu^{\pi_{2}}\otimes\pi_{2}\otimes\mathcal{P}\otimes\pi_{2}\right)\leq|\mathcal{A}|\left(M+2\right)\|\pi_{1}-\pi_{2}\|.$ ### A.3 Proofs of the Auxiliary Lemmas Proof of Lemma A.1: $\displaystyle\Gamma(\pi_{1},\theta,O)-\Gamma(\pi_{2},\theta,O)$ $\displaystyle=$ $\displaystyle\theta^{\top}A(O)(Q^{\pi_{1}}-Q^{\pi_{2}})-\theta^{\top}(\bar{A}^{\pi_{1}}-\bar{A}^{\pi_{2}})\theta$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle\|\theta\|.\|A(O)\|.\|Q^{\pi_{1}}-Q^{\pi_{2}}\|+\|\theta\|^{2}.\|\bar{A}^{\pi_{1}}-\bar{A}^{\pi_{2}}\|$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle 2Q_{\max}\sqrt{2|\mathcal{S}||\mathcal{A}|}\|Q^{\pi_{1}}-Q^{\pi_{2}}\|+4Q_{\max}^{2}|\mathcal{S}||\mathcal{A}|.\|\bar{A}^{\pi_{1}}-\bar{A}^{\pi_{2}}\|$ $\displaystyle\stackrel{{\scriptstyle(c)}}{{\leq}}$ $\displaystyle 2Q_{\max}\sqrt{2|\mathcal{S}||\mathcal{A}|}{L_{2}}\|\pi_{1}-\pi_{2}\|+8Q_{\max}^{2}|\mathcal{S}|^{2}|\mathcal{A}|^{2}d_{TV}\left(\mu^{\pi_{1}}\otimes\pi_{1}\otimes\mathcal{P}\otimes\pi_{1},\mu^{\pi_{2}}\otimes\pi_{2}\otimes\mathcal{P}\otimes\pi_{2}\right)$ $\displaystyle\stackrel{{\scriptstyle(d)}}{{\leq}}$ $\displaystyle
2Q_{\max}\sqrt{2|\mathcal{S}||\mathcal{A}|}{L_{2}}\|\pi_{1}-\pi_{2}\|+8Q_{\max}^{2}|\mathcal{S}|^{2}|\mathcal{A}|^{3}\left(\left\lceil\log_{\rho}m^{-1}\right\rceil+\frac{1}{1-\rho}+2\right)\|\pi_{1}-\pi_{2}\|$ $\displaystyle=$ $\displaystyle K_{1}\|\pi_{1}-\pi_{2}\|$ where $(a)$ is due to Cauchy–Schwarz inequality, $(b)$ is due to Lemmas 5.5 and A.6, $(c)$ is due to Lemmas 5.8 and A.6, $(d)$ is due to Lemma A.8. Proof of Lemma A.2: $\displaystyle|\Gamma(\pi,\theta_{1},O)-\Gamma(\pi,\theta_{2},O)|\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle(\|r(O)\|+\|A(O)\|.\|Q^{\pi}\|)\|\theta_{1}-\theta_{2}\|+\|A(O)-\bar{A}^{\pi}\|.\|\theta_{1}-\theta_{2}\|(\|\theta_{1}\|+\|\theta_{2}\|)$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle(1+9\sqrt{2|\mathcal{S}||\mathcal{A}|}Q_{\max})\|\theta_{1}-\theta_{2}\|$ where $(a)$ follows from Cauchy–Schwarz and triangle inequality, and $(b)$ is due to Lemmas 5.5 and A.6. Proof of Lemma A.3: We have: $\displaystyle{\mathbb{E}}\big{[}\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})|\mathcal{F}_{t-\tau}\big{]}$ $\displaystyle=$ $\displaystyle\theta_{t-\tau}^{\top}{\mathbb{E}}\left[r\left(O_{t}\right)-r(\tilde{O}_{t})+\left(A\left(O_{t}\right)-A(\tilde{O}_{t})\right)Q^{{\hat{\pi}}_{t-\tau-1}}|\mathcal{F}_{t-\tau}\right]+\theta_{t-\tau}^{\top}{\mathbb{E}}\left[A(O_{t})-A(\tilde{O}_{t})|\mathcal{F}_{t-\tau}\right]\theta_{t-\tau}$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle\|\theta_{t-\tau}\|_{\infty}\left\|{\mathbb{E}}\left[r\left(O_{t}\right)-r(\tilde{O}_{t})|\mathcal{F}_{t-\tau}\right]\right\|_{1}+\|\theta_{t-\tau}\|_{\infty}\left\|{\mathbb{E}}\left[\left(A\left(O_{t}\right)-A(\tilde{O}_{t})\right)|\mathcal{F}_{t-\tau}\right]Q^{{\hat{\pi}}_{t-\tau-1}}\right\|_{1}$ $\displaystyle+\|\theta_{t-\tau}\|_{\infty}\left\|{\mathbb{E}}\left[A\left(O_{t}\right)-A(\tilde{O}_{t})|\mathcal{F}_{t-\tau}\right]\theta_{t-\tau}\right\|_{1}$ 
$\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle\left\|\theta_{t-\tau}\right\|_{\infty}\left\|{\mathbb{E}}\left[r\left(O_{t}\right)-r(\tilde{O}_{t})|\mathcal{F}_{t-\tau}\right]\right\|_{1}+\|\theta_{t-\tau}\|_{\infty}\left\|{\mathbb{E}}\left[\left(A\left(O_{t}\right)-A(\tilde{O}_{t})\right)|\mathcal{F}_{t-\tau}\right]\right\|_{1}\left\|Q^{{\hat{\pi}}_{t-\tau-1}}\right\|_{1}$ $\displaystyle+\left\|\theta_{t-\tau}\right\|_{\infty}.\left\|{\mathbb{E}}\left[A\left(O_{t}\right)-A(\tilde{O}_{t})|\mathcal{F}_{t-\tau}\right]\right\|_{1}.\left\|\theta_{t-\tau}\right\|_{1}$ $\displaystyle\stackrel{{\scriptstyle(c)}}{{\leq}}$ $\displaystyle 2Q_{\max}\times 2|\mathcal{S}||\mathcal{A}|d_{TV}\left(O_{t},\tilde{O}_{t}|\mathcal{F}_{t-\tau}\right)+2Q_{\max}\times 2|\mathcal{S}||\mathcal{A}|d_{TV}\left(O_{t},\tilde{O}_{t}|\mathcal{F}_{t-\tau}\right)\times Q_{\max}|\mathcal{S}||\mathcal{A}|$ $\displaystyle+2Q_{\max}\times 2|\mathcal{S}||\mathcal{A}|d_{TV}\left(O_{t},\tilde{O}_{t}|\mathcal{F}_{t-\tau}\right)\times 2Q_{\max}|\mathcal{S}||\mathcal{A}|$ $\displaystyle=$ $\displaystyle 4Q_{\max}|\mathcal{S}||\mathcal{A}|(1+3Q_{\max}|\mathcal{S}||\mathcal{A}|)d_{TV}\left(O_{t},\tilde{O}_{t}|\mathcal{F}_{t-\tau}\right)$ where $(a)$ is due to Hölder's inequality, $(b)$ is due to the definition of the induced matrix norm, and $(c)$ is due to Lemma A.6. Using Lemma A.7, we get the result. Proof of Lemma A.4: Consider the tuple $O_{t}^{\prime}=(S_{t}^{\prime},A_{t}^{\prime},S_{t+1}^{\prime},A_{t+1}^{\prime})$, where $S_{t}^{\prime}\sim\mu^{{\hat{\pi}}_{t-\tau-1}}$, $A_{t}^{\prime}\sim{\hat{\pi}}_{t-\tau-1}(\cdot|S_{t}^{\prime})$, $S_{t+1}^{\prime}\sim\mathcal{P}(\cdot|S_{t}^{\prime},A_{t}^{\prime})$, and $A_{t+1}^{\prime}\sim{\hat{\pi}}_{t-\tau-1}(\cdot|S_{t+1}^{\prime})$.
We have $\displaystyle{\mathbb{E}}\big{[}\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t}^{\prime})\big{|}\mathcal{F}_{t-\tau}\big{]}=$ $\displaystyle\theta_{t-\tau}^{\top}{\mathbb{E}}\left[r(O_{t}^{\prime})+A(O_{t}^{\prime})Q^{{\hat{\pi}}_{t-\tau-1}}|\mathcal{F}_{t-\tau}\right]$ $\displaystyle+\theta_{t-\tau}^{\top}{\mathbb{E}}\left[A(O_{t}^{\prime})|\mathcal{F}_{t-\tau}\right]\theta_{t-\tau}-\theta_{t-\tau}^{\top}\bar{A}^{{\hat{\pi}}_{t-\tau-1}}\theta_{t-\tau}=0,$ where the last equality is due to the Bellman equation and the definition of $\bar{A}^{{\hat{\pi}}_{t-\tau-1}}$. As a result, we have: $\displaystyle{\mathbb{E}}\big{[}\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})\big{|}\mathcal{F}_{t-\tau}\big{]}$ $\displaystyle=$ $\displaystyle{\mathbb{E}}\left[\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},\tilde{O}_{t})-\Gamma({\hat{\pi}}_{t-\tau-1},\theta_{t-\tau},O_{t}^{\prime})\big{|}\mathcal{F}_{t-\tau}\right]$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle\left\|\theta_{t-\tau}\right\|_{\infty}\left\|{\mathbb{E}}\left[r(\tilde{O}_{t})-r\left(O_{t}^{\prime}\right)|\mathcal{F}_{t-\tau}\right]\right\|_{1}+\left\|\theta_{t-\tau}\right\|_{\infty}\left\|{\mathbb{E}}\left[A(\tilde{O}_{t})-A(O_{t}^{\prime})\big{|}\mathcal{F}_{t-\tau}\right]\right\|_{1}\left\|Q^{{\hat{\pi}}_{t-\tau-1}}\right\|_{1}$ $\displaystyle+\left\|\theta_{t-\tau}\right\|_{\infty}.\left\|{\mathbb{E}}\left[A(\tilde{O}_{t})-A(O_{t}^{\prime})\big{|}\mathcal{F}_{t-\tau}\right]\right\|_{1}.\left\|\theta_{t-\tau}\right\|_{1}$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle 2Q_{\max}\times 2|\mathcal{S}||\mathcal{A}|d_{TV}\left(\tilde{O}_{t},O_{t}^{\prime}|\mathcal{F}_{t-\tau}\right)+2Q_{\max}\times 2|\mathcal{S}||\mathcal{A}|d_{TV}(\tilde{O}_{t},O_{t}^{\prime}|\mathcal{F}_{t-\tau})\times 3|\mathcal{S}||\mathcal{A}|Q_{\max}$ $\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}$ $\displaystyle
C_{b}\sum_{s,a,s^{\prime},a^{\prime}}|P(\tilde{S}_{t}=s|\mathcal{F}_{t-\tau}){\hat{\pi}}_{t-\tau-1}(a|s)\mathcal{P}(s^{\prime}|s,a){\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})$ $\displaystyle-P(S_{t}^{\prime}=s|\mathcal{F}_{t-\tau}){\hat{\pi}}_{t-\tau-1}(a|s)\mathcal{P}(s^{\prime}|s,a){\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})|$ $\displaystyle=$ $\displaystyle C_{b}\sum_{s,a,s^{\prime},a^{\prime}}{\hat{\pi}}_{t-\tau-1}(a|s)\mathcal{P}(s^{\prime}|s,a){\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})|P(\tilde{S}_{t}=s|\mathcal{F}_{t-\tau})-P(S_{t}^{\prime}=s|\mathcal{F}_{t-\tau})|$ $\displaystyle=$ $\displaystyle C_{b}\sum_{s}|P(\tilde{S}_{t}=s|\mathcal{F}_{t-\tau})-P(S_{t}^{\prime}=s|\mathcal{F}_{t-\tau})|$ $\displaystyle\stackrel{{\scriptstyle(d)}}{{\leq}}$ $\displaystyle C_{b}m\rho^{\tau},$ where $(a)$ follows from Hölder's inequality and the definition of the matrix norm, $(b)$ follows from Lemma 5.5, in $(c)$ we defined $C_{b}=4Q_{\max}|\mathcal{S}||\mathcal{A}|(1+3|\mathcal{S}||\mathcal{A}|Q_{\max})$, and $(d)$ is due to Lemma 5.7. Proof of Lemma A.5: $\displaystyle\sum_{i=t-\tau}^{t}\left\|{\hat{\pi}}_{i}-{\hat{\pi}}_{t-\tau-1}\right\|=$ $\displaystyle\sum_{i=t-\tau}^{t}\left\|\sum_{j=t-\tau}^{i}({\hat{\pi}}_{j}-{\hat{\pi}}_{j-1})\right\|$ $\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}$ $\displaystyle\sum_{i=t-\tau}^{t}\sum_{j=t-\tau}^{i}\left\|{\hat{\pi}}_{j}-{\hat{\pi}}_{j-1}\right\|$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}$ $\displaystyle\sum_{i=t-\tau}^{t}\sum_{j=t-\tau}^{i}\left[{L_{3}}\frac{\epsilon_{j-2}}{j-1}+{L_{1}}\beta_{j-1}\right]\leq(\tau+1)^{2}\left[{L_{3}}\frac{\epsilon_{t-\tau-2}}{t-\tau-1}+{L_{1}}\beta_{t-\tau-1}\right],$ where $(a)$ is by the triangle inequality, and $(b)$ follows from Lemma 5.9. Proof of Lemma A.6: 1. The proof follows directly from the Frobenius-norm upper bound on the spectral norm of a matrix. 2. Follows directly from the assumption $\mathcal{R}(s,a)\leq 1~{}\forall s,a$. 3.
$\|\bar{A}^{\pi}\|=\|{\mathbb{E}}_{\pi}A(O)\|\leq{\mathbb{E}}_{\pi}\|A(O)\|\leq\sqrt{2}$. 4. $\left\|{\mathbb{E}}[r\left(O_{1}\right)-r(O_{2})]\right\|_{1}=\sum_{s,a}\left|{\mathbb{E}}\left(r(O_{1})-r(O_{2})\right)_{s,a}\right|$ $\leq 2|\mathcal{S}||\mathcal{A}|d_{TV}(O_{1},O_{2})$, where the inequality is due to $|r(O)_{s,a}|\leq 1$. 5. $\left\|{\mathbb{E}}[A\left(O_{1}\right)-A(O_{2})]\right\|_{1}\stackrel{{\scriptstyle(a)}}{{=}}\max_{s^{\prime},a^{\prime}}\sum_{s,a}\left|{\mathbb{E}}\left(A(O_{1})-A(O_{2})\right)_{s,a,s^{\prime},a^{\prime}}\right|$ $\stackrel{{\scriptstyle(b)}}{{\leq}}\max_{s^{\prime},a^{\prime}}2|\mathcal{S}||\mathcal{A}|d_{TV}(O_{1},O_{2})=2|\mathcal{S}||\mathcal{A}|d_{TV}(O_{1},O_{2})$, where $(a)$ is due to the definition of the matrix norm, and $(b)$ is due to $|A(O)_{s,a,s^{\prime},a^{\prime}}|\leq 1$. 6. $\|Q_{t+1}-Q_{t}\|\leq\sqrt{\sum_{s,a}\alpha_{t}^{2}(s,a)(2Q_{\max}+1)^{2}}$ $=\alpha_{t}(2Q_{\max}+1)$. 7. $\left\|\theta_{t}-\theta_{t-1}\right\|\leq\left\|Q_{t}-Q_{t-1}\right\|+\left\|Q^{{\hat{\pi}}_{t-1}}-Q^{{\hat{\pi}}_{t-2}}\right\|\leq\Delta_{Q}\alpha_{t-1}+{L_{2}}\left({L_{3}}\frac{\epsilon_{t-3}}{t-2}+{L_{1}}\beta_{t-2}\right)$, where the last inequality follows from the previous part, and Lemmas 5.8 and 5.9.
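Item 6 of Lemma A.6 can be sanity-checked numerically for the asynchronous update $Q_{t+1}(s,a)=(1-\alpha_{t}(s,a))Q_{t}(s,a)+\alpha_{t}(s,a)(\mathcal{R}(s,a)+\gamma Q_{t}(S_{t+1},A_{t+1}))$ used in the proof of Lemma 5.5. The sketch below is illustrative only: it assumes a single $(s,a)$ entry is updated per step, and the state/action counts, discount factor, and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.8
Q_max = 1.0 / (1.0 - gamma)      # bound on Q-values (Lemma 5.5)
Delta_Q = 2.0 * Q_max + 1.0      # constant in item 6 of Lemma A.6
nS, nA = 4, 3

for _ in range(1000):
    Q = rng.uniform(0.0, Q_max, size=(nS, nA))
    alpha = rng.uniform(0.0, 1.0)
    R = rng.uniform(0.0, 1.0)                      # reward in [0, 1]
    s, a = rng.integers(nS), rng.integers(nA)      # updated pair
    s2, a2 = rng.integers(nS), rng.integers(nA)    # next state-action

    Q_new = Q.copy()
    Q_new[s, a] = (1.0 - alpha) * Q[s, a] + alpha * (R + gamma * Q[s2, a2])

    # Only one entry changes, so ||Q_{t+1} - Q_t|| = |Q_new[s,a] - Q[s,a]|
    # <= alpha * (|R| + gamma*Q_max + Q_max) <= alpha * (2*Q_max + 1).
    assert np.linalg.norm(Q_new - Q) <= alpha * Delta_Q + 1e-12
```

The bound is loose here because the update difference is at most $\alpha(1+\gamma Q_{\max}+Q_{\max})\leq\alpha(2Q_{\max}+1)$.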
Proof of Lemma A.7: $\displaystyle d_{TV}(P(O_{t}\in\cdot\mid\mathcal{F}_{t-\tau})||P(\tilde{O}_{t}\in\cdot\mid\mathcal{F}_{t-\tau}))$ $\displaystyle=$ $\displaystyle\sum_{s,a,s^{\prime},a^{\prime}}\left|P(\overbrace{S_{t}=s,A_{t}=a}^{\mathcal{H}_{t}},S_{t+1}=s^{\prime},A_{t+1}=a^{\prime}|\mathcal{F}_{t-\tau})-P(\tilde{S}_{t}=s,\tilde{A}_{t}=a,\tilde{S}_{t+1}=s^{\prime},\tilde{A}_{t+1}=a^{\prime}|\mathcal{F}_{t-\tau})\right|$ $\displaystyle=$ $\displaystyle\sum_{s,a,s^{\prime},a^{\prime}}\big{|}{\mathbb{E}}[{\hat{\pi}}_{t}(a^{\prime}|s^{\prime})|\mathcal{F}_{t-\tau},\mathcal{H}_{t}]\mathcal{P}(s^{\prime}|s,a)P(S_{t}=s,A_{t}=a|\mathcal{F}_{t-\tau})$ $\displaystyle-{\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})\mathcal{P}(s^{\prime}|s,a)P(\tilde{S}_{t}=s,\tilde{A}_{t}=a|\mathcal{F}_{t-\tau})\big{|}$ $\displaystyle\leq$ $\displaystyle\sum_{s,a,s^{\prime},a^{\prime}}\mathcal{P}(s^{\prime}|s,a)P(S_{t}=s,A_{t}=a|\mathcal{F}_{t-\tau})\left|{\mathbb{E}}[{\hat{\pi}}_{t}(a^{\prime}|s^{\prime})|\mathcal{F}_{t-\tau},\mathcal{H}_{t}]-{\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})\right|$ ($I_{1}$) $\displaystyle+\sum_{s,a}\big{|}P(S_{t}=s,A_{t}=a|\mathcal{F}_{t-\tau})-P(\tilde{S}_{t}=s,\tilde{A}_{t}=a|\mathcal{F}_{t-\tau})\big{|}.$ ($I_{2}$) We bound $I_{1}$ and $I_{2}$ separately: $\displaystyle I_{1}$ $\displaystyle\leq\sum_{s,a,s^{\prime},a^{\prime}}\mathcal{P}(s^{\prime}|s,a)P(S_{t}=s,A_{t}=a|\mathcal{F}_{t-\tau}){\mathbb{E}}[|{\hat{\pi}}_{t}(a^{\prime}|s^{\prime})-{\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})|\big{|}\mathcal{F}_{t-\tau},\mathcal{H}_{t}]$ $\displaystyle\leq\sum_{s,a,s^{\prime},a^{\prime}}P(S_{t}=s,A_{t}=a|\mathcal{F}_{t-\tau}){\mathbb{E}}[|{\hat{\pi}}_{t}(a^{\prime}|s^{\prime})-{\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})|\big{|}\mathcal{F}_{t-\tau},\mathcal{H}_{t}]$ $\displaystyle=\sum_{s^{\prime},a^{\prime}}{\mathbb{E}}[|{\hat{\pi}}_{t}(a^{\prime}|s^{\prime})-{\hat{\pi}}_{t-\tau-1}(a^{\prime}|s^{\prime})|\big{|}\mathcal{F}_{t-\tau}]$ 
$\displaystyle\leq\sqrt{|\mathcal{A}||\mathcal{S}|}{\mathbb{E}}[\|{\hat{\pi}}_{t}-{\hat{\pi}}_{t-\tau-1}\|\big{|}\mathcal{F}_{t-\tau}],$ $\displaystyle I_{2}$ $\displaystyle=\sum_{s,a}\Bigg{|}\sum_{s^{\prime\prime},a^{\prime\prime}}P(S_{t}=s,A_{t}=a,S_{t-1}=s^{\prime\prime},A_{t-1}=a^{\prime\prime}|\mathcal{F}_{t-\tau})$ $\displaystyle\quad\quad\quad-P(\tilde{S}_{t}=s,\tilde{A}_{t}=a,\tilde{S}_{t-1}=s^{\prime\prime},\tilde{A}_{t-1}=a^{\prime\prime}|\mathcal{F}_{t-\tau})\Bigg{|}$ $\displaystyle\leq\sum_{s,a,s^{\prime\prime},a^{\prime\prime}}\bigg{|}P(S_{t}=s,A_{t}=a,S_{t-1}=s^{\prime\prime},A_{t-1}=a^{\prime\prime}|\mathcal{F}_{t-\tau})$ $\displaystyle~{}~{}~{}-P(\tilde{S}_{t}=s,\tilde{A}_{t}=a,\tilde{S}_{t-1}=s^{\prime\prime},\tilde{A}_{t-1}=a^{\prime\prime}|\mathcal{F}_{t-\tau})\bigg{|}$ $\displaystyle=d_{TV}(P(O_{t-1}\in\cdot|\mathcal{F}_{t-\tau})||P(\tilde{O}_{t-1}\in\cdot|\mathcal{F}_{t-\tau})).$ Combining the above bounds, we get: $\displaystyle d_{TV}(P(O_{t}\in\cdot|\mathcal{F}_{t-\tau})$ $\displaystyle||P(\tilde{O}_{t}\in\cdot|\mathcal{F}_{t-\tau}))$ $\displaystyle\leq$ $\displaystyle\sqrt{|\mathcal{A}||\mathcal{S}|}{\mathbb{E}}\left[\|{\hat{\pi}}_{t}-{\hat{\pi}}_{t-\tau-1}\|\bigg{|}\mathcal{F}_{t-\tau}\right]+d_{TV}(P(O_{t-1}\in\cdot|\mathcal{F}_{t-\tau})||P(\tilde{O}_{t-1}\in\cdot|\mathcal{F}_{t-\tau})).$ Following this induction, and noting that $P(S_{t-\tau}=s,A_{t-\tau}=a|\mathcal{F}_{t-\tau})=P(\tilde{S}_{t-\tau}=s,\tilde{A}_{t-\tau}=a|\mathcal{F}_{t-\tau})$ (due to the definition of $\tilde{S}$ and $\tilde{A}$), we get the result. Proof of Lemma A.8: The proof follows directly from Lemma A.1 in [62]. 
## Appendix B Details of the Proof of Proposition 5.1 ### B.1 Useful lemmas Proof of Lemma 5.1: We have: $\displaystyle\log Z_{t}(s)$ $\displaystyle=\log\sum_{a}\pi_{t}(a|s)\exp(\beta_{t}Q_{t+1}(s,a))$ $\displaystyle\geq\sum_{a}\pi_{t}(a|s)\beta_{t}Q_{t+1}(s,a),$ (40) where the inequality is due to the concavity of the $\log(\cdot)$ function and Jensen’s inequality. Furthermore, we have: $\displaystyle V^{\pi_{t+1}}(\mu)-V^{\pi_{t}}(\mu)\stackrel{{\scriptstyle(a)}}{{=}}$ $\displaystyle\frac{1}{1-\gamma}\sum_{s,a}d^{\pi_{t+1}}_{\mu}(s)\pi_{t+1}(a|s)\big{[}Q_{t+1}(s,a)+Q^{\pi_{t}}(s,a)-Q_{t+1}(s,a)-V^{\pi_{t}}(s)\big{]}$ $\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}$ $\displaystyle\frac{1}{1-\gamma}\sum_{s,a}d^{\pi_{t+1}}_{\mu}(s)\pi_{t+1}(a|s)\bigg{[}\frac{1}{\beta_{t}}\log\frac{\pi_{t+1}(a|s)}{\pi_{t}(a|s)}$ $\displaystyle+\frac{1}{\beta_{t}}\log Z_{t}(s)+Q^{\pi_{t}}(s,a)-Q_{t+1}(s,a)-V^{\pi_{t}}(s)\bigg{]}$ $\displaystyle\stackrel{{\scriptstyle(c)}}{{\geq}}$ $\displaystyle\frac{1}{1-\gamma}\sum_{s,a}d^{\pi_{t+1}}_{\mu}(s)\pi_{t+1}(a|s)\bigg{[}\frac{1}{\beta_{t}}\log Z_{t}(s)+Q^{\pi_{t}}(s,a)-Q_{t+1}(s,a)-V^{\pi_{t}}(s)\bigg{]}$ $\displaystyle=$ $\displaystyle\frac{1}{1-\gamma}\Bigg{[}\sum_{s,a}d^{\pi_{t+1}}_{\mu}(s)\pi_{t}(a|s)\left[\frac{1}{\beta_{t}}\log Z_{t}(s)-Q_{t+1}(s,a)\right]$ $\displaystyle+\sum_{s,a}d^{\pi_{t+1}}_{\mu}(s)(\pi_{t+1}(a|s)-\pi_{t}(a|s))\left[Q^{\pi_{t}}(s,a)-Q_{t+1}(s,a)\right]\Bigg{]}$ $\displaystyle\stackrel{{\scriptstyle(d)}}{{\geq}}$ $\displaystyle\sum_{s,a}\mu(s)\pi_{t}(a|s)\left[\frac{1}{\beta_{t}}\log Z_{t}(s)-Q_{t+1}(s,a)\right]$ $\displaystyle-\frac{2Q_{\max}{L_{1}}\sqrt{|\mathcal{A}|}}{1-\gamma}\beta_{t}$ $\displaystyle=$ $\displaystyle\sum_{s}\mu(s)\left[\frac{1}{\beta_{t}}\log Z_{t}(s)-V^{\hat{\pi}_{t}}(s)\right]$ $\displaystyle+\sum_{s,a}\mu(s)\hat{\pi}_{t}(a|s)\left[Q^{\hat{\pi}_{t}}(s,a)-Q_{t+1}(s,a)\right]$ $\displaystyle+\sum_{s,a}\mu(s)(\hat{\pi}_{t}(a|s)-\pi_{t}(a|s))Q_{t+1}(s,a)$ 
$\displaystyle-\frac{2Q_{\max}{L_{1}}\sqrt{|\mathcal{A}|}}{1-\gamma}\beta_{t},$ where $(a)$ is due to the Performance Difference Lemma [27], $(b)$ is by the update rule in Algorithm 1, $(c)$ is by the nonnegativity of the KL divergence [16], and $(d)$ is by the definition of $d^{\pi_{t+1}}$ and (40). Taking $\mu=d^{*}$, we have: $\displaystyle\sum_{s}d^{*}(s)\left[\frac{1}{\beta_{t}}\log Z_{t}(s)-V^{\hat{\pi}_{t}}(s)\right]\leq V^{\pi_{t+1}}(d^{*})-V^{\pi_{t}}(d^{*})+\|Q^{{\hat{\pi}}_{t}}-Q_{t+1}\|+\frac{2{L_{1}}\sqrt{|\mathcal{A}|}}{(1-\gamma)^{2}}\beta_{t}+\frac{\epsilon_{t-1}}{1-\gamma}$ which yields the result.
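The Jensen step in (40) asserts $\log\sum_{a}\pi(a|s)\exp(\beta Q(s,a))\geq\beta\sum_{a}\pi(a|s)Q(s,a)$, i.e. $\log{\mathbb{E}}[e^{X}]\geq{\mathbb{E}}[X]$. A minimal numerical check with arbitrary random policies and bounded $Q$-values (illustrative values only, not tied to the algorithm's actual iterates):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    n_actions = 4
    pi = rng.dirichlet(np.ones(n_actions))   # a policy pi(.|s)
    Q = rng.uniform(-1, 1, size=n_actions)   # bounded Q-values
    beta = rng.uniform(0.01, 5.0)            # step size beta_t
    log_Z = np.log(np.sum(pi * np.exp(beta * Q)))
    # concavity of log + Jensen: log E[exp(beta*Q)] >= E[beta*Q]
    assert log_Z >= beta * np.sum(pi * Q) - 1e-12
```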
# Insights into the Electron-Electron Interaction from Quantum Monte Carlo Calculations

Carl A. Kukkonen, Kun Chen

Center for Computational Quantum Physics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010

###### Abstract The effective electron-electron interaction in the electron gas depends on both the density and spin local field factors. Variational Diagrammatic Quantum Monte Carlo calculations of the spin local field factor are reported and used to quantitatively present the full spin-dependent electron-electron interaction. Together with the charge local field factor from previous Diffusion Quantum Monte Carlo calculations, we obtain the complete form of the effective electron-electron interaction in the uniform three-dimensional electron gas. Very simple quadratic formulas are presented for the local field factors that quantitatively produce all of the response functions of the electron gas at metallic densities. Exchange and correlation become increasingly important at low densities. At the compressibility divergence at $r_{s}=5.25$, both the direct (screened Coulomb) term and the charge-dependent exchange term in the electron-electron interaction at $q=0$ are separately divergent. However, due to large cancellations, their difference is finite, well behaved, and much smaller than either term separately. As a result, the spin contribution to the electron-electron interaction becomes an important factor. The static electron-electron interaction is repulsive throughout the metallic density range, but is less repulsive for electrons with parallel spins. The effect of allowing a deformable, rather than rigid, positive background is shown to be as quantitatively important as exchange and correlation. As a simple concrete example, the electron-electron interaction is calculated using the measured bulk modulus of the alkali metals with a linear phonon dispersion. 
The net electron-electron interaction in lithium is attractive for wave vectors $0-2k_{F}$, which suggests superconductivity, and is mostly repulsive for the other alkali metals. ## I Introduction The electron-electron interaction is important for pairing and superconductivity, spin and magnetic phenomena, transport properties, and as an input for numerical calculations. Many interesting phenomena occur in exotic materials, some with reduced dimensionality and unusual topologies, and different theoretical approaches have been employed to explain experiment. This paper looks back to the well-studied three-dimensional electron gas to examine the quantitative effects of exchange and correlation on the electron-electron interaction to see if any insight may be gained for problems of modern interest. The equation for the spin-dependent effective electron-electron interaction in the three-dimensional electron gas is well established in the mean field local approximation, and is given in terms of the local field factors that define all of the response functions of the electron gas Kukkonen and Overhauser (1979). Sum rules specify the local field factors at small and large wave vector $q$, and the difficult problem is to calculate the intermediate wave vector dependence. The Quantum Monte Carlo method is considered to produce accurate results. The density local field factor was calculated many years ago by numerous methods, including Diffusion Quantum Monte Carlo. Results for the spin local field factor using the Variational Diagrammatic Monte Carlo (VDMC) method were first published in 2019 (Chen and Haule, 2019), and additional results are presented here. The spin local field factor completes the specification of the effective electron-electron interaction and has motivated this paper. The local field factors are also known as exchange and correlation kernels in Time-Dependent Density Functional Theory. 
The equation for the electron-electron interaction itself is indicative, but it is difficult to understand the relative importance of the terms until the actual local field factors are used to show quantitative results. The numerically calculated values for the density and spin local field factors are discussed, and it is shown that they can be approximated by very simple formulas that produce all of the response functions of the electron gas. ## II Electron-electron Interaction Kukkonen and Overhauser (1979) (KO) demonstrated that the standard self-consistent perturbation theory based on Hartree-Fock theory and linear response theory could be used to calculate the effective many-body interaction $V_{ee}$ between two electrons in a simple metal, modeled by the electron gas, in terms of the density local field factor $G_{+}(q,\omega)$ and spin local field factor $G_{-}(q,\omega)$. $\displaystyle V_{e\vec{\sigma}_{1},e\vec{\sigma}_{2}}$ $\displaystyle=$ $\displaystyle\frac{4\pi e^{2}}{q^{2}}\left(\frac{\left(\omega^{2}-\omega_{0}^{2}\right)/\left(\omega^{2}-\omega_{q}^{2}\right)}{\left(1-G_{+}Q\right)\left[1+\left(1-G_{+}\right)Q\right]}\right.$ (1) $\displaystyle\left.-\frac{G_{+}^{2}Q}{1-G_{+}Q}-\frac{G_{-}^{2}Q}{1-G_{-}Q}\vec{\sigma}_{1}\cdot\vec{\sigma}_{2}\right)$ This is the interaction to be used for calculating matrix elements between two electrons with momenta $k_{1}$ and $k_{2}$ and spins $\sigma_{1}$ and $\sigma_{2}$. For parallel spins, the wave functions must be properly antisymmetric. The electron gas is characterized by the density parameter $r_{s}$ ($1/n=4\pi(r_{s}a_{0})^{3}/3$, where $a_{0}$ is the Bohr radius). $Q=v\Pi^{0}$, where $v=4\pi e^{2}/q^{2}$ is the Coulomb potential and $\Pi^{0}(q,\omega)$ is the Lindhard function. For convenience, we will not usually explicitly present the wave vector and frequency dependence. We will also not explicitly use the word effective in discussing interactions. 
With the deformable background, the standard phonon frequencies and background (lattice) screening results were obtained and are represented by the frequency dependence of the first term in Eq. (1). The intuitive physics concepts behind the electron-electron interaction in Eq. (1) are presented in Ref. Kukkonen and Overhauser (1979). The frequencies $\omega_{q}$ and $\omega_{0}$ refer to the frequency response of the deformable background, which is discussed in Ref. Kukkonen and Overhauser (1979) and Section VII of this paper. The first calculations of the screened interaction in the electron gas were the Thomas Fermi interaction and its quantum mechanical extension by Lindhard (Giuliani and Vignale, 2005). The Lindhard result is recovered by setting both $G$’s equal to zero, which results in $\displaystyle V_{\text{Lindhard}}=\frac{4\pi e^{2}}{q^{2}(1+Q)}$ (2) In this approximation, the electron-electron, electron-test charge and test charge-test charge interactions are all the same. These latter interactions are discussed in Appendix B. The KO electron-electron interaction has been verified by many-body calculations (Vignale and Singwi, 1985a), and has been extended to multicarrier (Vignale and Singwi, 1985b), spin-polarized, and two-dimensional systems (Giuliani and Vignale, 2005). The potential for superconductivity without phonons using the KO interaction was examined by Takada (1993) and by Richardson and Ashcroft (1996). Takada found superconductivity for a single carrier system and Richardson and Ashcroft did not. Richardson and Ashcroft stated that an important difference was that each group used different values of $G_{+}(q)$ and $G_{-}(q)$. Richardson and Ashcroft calculated their own $G$’s, including frequency dependence. Takada used frequency-independent $G$’s. The $G$’s considered here are the static local field factors. 
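For concreteness, Eq. (2) can be evaluated with the standard static Lindhard function, written through the screening function $F(x)=\tfrac{1}{2}+\tfrac{1-x^{2}}{4x}\ln\left|\tfrac{1+x}{1-x}\right|$ with $x=q/2k_{F}$, so that $Q=(q_{TF}/q)^{2}F(x)$. A minimal sketch in units $4\pi e^{2}=k_{F}=1$ (the textbook Lindhard form is assumed; the value of $q_{TF}$ below is illustrative):

```python
import numpy as np

def lindhard_F(x):
    """Static Lindhard screening function F(q/2k_F); F(0) = 1."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        F = 0.5 + (1 - x**2) / (4 * x) * np.log(np.abs((1 + x) / (1 - x)))
    return np.where(x < 1e-8, 1.0, F)

def V_lindhard(q, q_tf):
    """Eq. (2): V = (4*pi*e^2/q^2)/(1 + Q), Q = (q_tf/q)^2 F(q/2),
    in units 4*pi*e^2 = k_F = 1."""
    Q = (q_tf / q) ** 2 * lindhard_F(q / 2.0)
    return (1.0 / q**2) / (1.0 + Q)

q_tf = 0.8  # illustrative Thomas-Fermi wave vector
# Small q: Lindhard reduces to Thomas-Fermi, V(0) -> 1/q_TF^2.
assert abs(float(V_lindhard(1e-4, q_tf)) - 1.0 / q_tf**2) < 1e-4
# Large q: screening dies off and V approaches the bare Coulomb 1/q^2.
assert abs(float(V_lindhard(20.0, q_tf)) * 20.0**2 - 1.0) < 0.01
```

The two asserted limits are exactly the statements in the text: Thomas Fermi screening at long wavelengths and the bare Coulomb interaction at short distances.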
Connecting with Feynman diagrams, the frequency-independent factor in the first term can also be written as $\Lambda^{2}/\epsilon$ – two vertex corrections divided by the dielectric function Kukkonen and Wilkins (1979). The deformable background (lattice) screens the first term, the Coulomb interaction, but not the exchange and correlation and spin response terms, which arise from summing ladder diagrams. For a rigid lattice, the frequency dependence in the first term of Eq. (1) equals one, and the first term is repulsive. The second term is attractive. The spin-dependent term is repulsive for opposite spins (singlet) and attractive for parallel spins (triplet). We initially consider the rigid background and discuss the deformable background in Section VII below. Quantitative evaluation of the electron-electron interaction is made in Section V using the values of $G_{+}(q)$ discussed in Appendix A and the new accurate values of $G_{-}(q)$ reported below. ## III Spin Local Field Factor $\mathbf{G_{-}(q)}$ A Quantum Monte Carlo method using renormalized Feynman diagrams was developed by Chen and Haule (2019). The resulting Variational Diagrammatic Monte Carlo (VDMC) method is a generic many-body solver that was tested on the electron gas. The VDMC method was used to calculate the spin and density responses in the electron gas. It is well suited for finite temperatures. The description of the VDMC method, including the grouping of Feynman diagrams, the numerical approach and high-precision results, is given in Ref. Chen and Haule (2019). The data reported here are new VDMC calculations of the static spin local field factor with higher accuracy for densities $r_{s}=1-4$ from $q=0-2.34\,k_{F}$, and additional results for $r_{s}=5$. The calculation temperature is $T=0.025\,T_{F}$, which is effectively equivalent to $T=0$. 
The method provides the highest accuracy calculations of the $q=0$ susceptibility, and our calculations at finite $q$ also have high accuracy, but less than at $q=0$. Typical error bars are shown with the data. Figure 1: Spin local field factor $G_{-}(q)$ versus $q/k_{F}$ for $r_{s}=1-5$ for the three-dimensional electron gas calculated with the Variational Diagrammatic Monte Carlo method. Error bars are shown by shading. Figure 1 shows the wave vector dependence of $G_{-}(q)$, which demonstrates that it initially follows the quadratic behavior required by the susceptibility sum rule. The behavior changes dramatically near $q=2\,k_{F}$. The susceptibility enhancement at $q=0$ was calculated more accurately than the values at finite $q$. The results from Ref. Chen and Haule (2019) and the new result at $r_{s}=5$ are reported in Table 1.

$\mathbf{r_{s}}$ | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
$\mathbf{\chi}/\mathbf{\chi_{0}}$ | 1.152(2) | 1.296(6) | 1.432(9) | 1.576(9) | 1.683(15)

Table 1: Susceptibility enhancement at $q=0$ for the three-dimensional electron gas calculated by the Variational Diagrammatic Monte Carlo method. Uncertainty is indicated by the number in parentheses. In order to clearly see the small-$q$ behavior and the effect of the susceptibility sum rule, the quantity $G_{-}(q)/(q/q_{TF})^{2}$ is plotted in Fig. 2. The Thomas Fermi screening wave vector is defined by $q_{TF}^{2}=4k_{F}/\pi a_{0}$. The spin exchange and correlation kernel for Time-Dependent Density Functional Theory is $f_{xc}^{\text{spin}}=-4\pi G_{-}(q)/(q/q_{TF})^{2}$. Figure 2: Spin local field factor divided by the squared ratio of the wave vector to the Thomas Fermi wave vector, $G_{-}(q)/(q/q_{TF})^{2}$, versus $q/k_{F}$ for $r_{s}=1-5$. Error bars are shown by shading. Figure 2 shows that the spin local field factor $G_{-}(q)$ follows the quadratic form well and does not fall below it until near $2\,k_{F}$. 
In fact, for high density $r_{s}=1$, $G_{-}(q)$ rises significantly above the quadratic before it falls below. The close adherence to quadratic behavior suggests that in the metallic region $r_{s}=2-5$, a simple quadratic form that satisfies the susceptibility sum rule is adequate to calculate the spin response function. The wave vector dependent susceptibility enhancement is given by $\frac{\chi(q)}{\chi_{0}(q)}=\frac{1}{1-G_{-}Q}\;,$ (3) and shown in Figure 3. Figure 3: Susceptibility enhancement $\chi(q)/\chi_{0}$ plotted versus $q/k_{F}$ for $r_{s}=1-5$. The data points use the actual values of $G_{-}(q)$ calculated here and reported in Figure 1. The solid lines are the simple quadratic function Eq. (5) set by the susceptibility sum rule at $q=0$. Error bars are shown by shading. Note that the $Y$ axis starts at 1.0. The $q=0$ value of the susceptibility enhancement is entirely set by the susceptibility sum rule. The enhancement is modest because there is no divergence in the susceptibility near the metallic region. The simple quadratic, which is the horizontal line in Fig. 2, fits the data quite well and is adequate for the discussions in this paper and for comparison with experiment. If higher accuracy is needed, the actual data shown in Figure 2 can be used. The application of the VDMC method to the density local field factor $G_{+}(q)$ is briefly discussed in Appendix A. ## IV Simple Expressions for local field factors $\mathbf{G_{+}(q)}$ and $\mathbf{G_{-}(q)}$ The recommended simple quadratic forms for $G_{+}(q)$ and $G_{-}(q)$ are: $G_{+}(q)=\left(1-\frac{\kappa_{0}}{\kappa}\right)\left(\frac{q}{q_{TF}}\right)^{2}$ (4) $G_{-}(q)=\left(1-\frac{\chi_{0}}{\chi}\right)\left(\frac{q}{q_{TF}}\right)^{2}\;.$ (5) These expressions are exact at small $q$ and accurately represent the QMC data up to almost $q=2\,k_{F}$ for the metallic region $r_{s}=2-5$. $G_{+}(q)$ is discussed in Appendix A. 
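The small-$q$ behavior implied by Eqs. (3) and (5) — $G_{-}Q\to 1-\chi_{0}/\chi$, so the enhancement tends to $\chi/\chi_{0}$ — can be checked directly. A minimal sketch (units $4\pi e^{2}=k_{F}=1$; the $q_{TF}$ and $\chi_{0}/\chi$ values below are illustrative, the latter chosen near the $r_{s}=5$ entry of Table 1):

```python
import numpy as np

def lindhard_F(x):
    """Static Lindhard screening function; F(0) = 1."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        F = 0.5 + (1 - x**2) / (4 * x) * np.log(np.abs((1 + x) / (1 - x)))
    return np.where(x < 1e-8, 1.0, F)

def chi_enhancement(q, q_tf, chi0_over_chi):
    """Eq. (3) with the quadratic G_-(q) of Eq. (5); q in units of k_F."""
    G_minus = (1.0 - chi0_over_chi) * (q / q_tf) ** 2  # Eq. (5)
    Q = (q_tf / q) ** 2 * lindhard_F(q / 2.0)          # Q = v * Pi^0
    return 1.0 / (1.0 - G_minus * Q)                   # Eq. (3)

q_tf, chi0_over_chi = 0.8, 0.594  # illustrative values
# As q -> 0, G_- Q -> 1 - chi0/chi, so the enhancement -> chi/chi0.
assert abs(float(chi_enhancement(1e-4, q_tf, chi0_over_chi)) - 1.0 / chi0_over_chi) < 1e-3
```

Note how the $(q/q_{TF})^{2}$ in $G_{-}$ cancels the $(q_{TF}/q)^{2}$ in $Q$, which is precisely why the sum rule pins the $q=0$ enhancement.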
Although these simple quadratic approximations are not accurate beyond $2\,k_{F}$, they are suitable for the electron gas response functions, which are cut off by the Lindhard function above $q=2\,k_{F}$. For any application that requires values of $G$ at larger $q$, we recommend the interpolation formula discussed in Appendix A for $G_{+}$ or the actual data above for $G_{-}$. The simple quadratic approximation to $G_{-}(q)$ given in Eq. (5) and used to calculate the susceptibility enhancement in Fig. 3 fits the VDMC data quite well. The fit is exact at $q=0$, falls below by $2\%$ at $q=1.5\,k_{F}$, and lies slightly above at $2\,k_{F}$. The average values are within $1\%$. The susceptibility is the product of the enhancement, the Lindhard function, and the Bohr magneton. The falloff of the Lindhard function above $2\,k_{F}$ makes this region unimportant for most applications. The recommended values of the compressibility are taken from Perdew and Wang (2018), and the susceptibility ratios are given in Table 1. Both are plotted in Figure 4 and are accurately fitted by quadratic interpolation formulas. Figure 4: Compressibility ratio $\kappa_{0}/\kappa$ and susceptibility ratio $\chi_{0}/\chi$ for the three-dimensional electron gas with a rigid uniform positive background. The curves in Fig. 4 are fits to the data in the metallic region and fit the data to within $0.5\%$. Note that the compressibility and susceptibility ratios must equal 1 at $r_{s}=0$. Since we are only interested in the metallic region $r_{s}=1-5$, the fitting curves were not required to have an intercept of 1 at $r_{s}=0$, which results in simpler and more accurate equations in the metallic region. The compressibility and susceptibility ratios at $q=0$ are well fitted from $r_{s}=1-5$ by the following equations. 
$\frac{\chi_{0}}{\chi}=0.9821-0.1232\,r_{s}+0.0091\,r_{s}^{2}$ (6) $\frac{\kappa_{0}}{\kappa}=1.0025-0.1721\,r_{s}-0.0036\,r_{s}^{2}$ (7) With these $G$’s and the compressibility and susceptibility ratios, all of the response functions for the three-dimensional electron gas with a rigid background can be quantitatively calculated. The same approach can be used for a spin polarized or two component electron gas. Figure 4 shows the well-known divergence and sign change of the compressibility at $r_{s}=5.25$. This causes the vertex function and thus the dielectric function to diverge and become negative. The rigid uniform positive background prevents the overall model electron gas from becoming unstable. ## V Electron-Electron Interaction: Quantitative Results Using $G_{+}(q)$ and $G_{-}(q)$, we plot the electron-electron interaction. We plot each term in Eq. (1) separately to show their relative importance. These are denoted $V_{ee}1$, $V_{ee}2$ and $V_{ee}3$. The first term $V_{ee}1$ is the coefficient of the frequency dependent factor (which is equal to $1$ for a rigid background). $V_{ee}1$ is intrinsically positive. The second term $V_{ee}2$ subtracts from the first term. $V_{ee}3$, the spin dependent term is positive (repulsive) for opposite spins (singlet) and negative (attractive) for parallel spins (triplet). These three terms are plotted together in Fig. 5 for $r_{s}=2$ and $5$. Figure 5: The three terms in the equation for $V_{ee}(q)$ for $r_{s}=2$ (a) and $r_{s}=5$ (b). The first term $V_{ee}1$ is the screened Coulomb interaction. The second two terms are additional effects of exchange and correlation. The second term $V_{ee}2$ is subtracted from the first term and the third (spin dependent) term $V_{ee}3$ is subtracted for parallel spins and added for antiparallel (opposite) spins. The potential is measured in units of $4\pi e^{2}$. 
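Eqs. (6) and (7) can be evaluated directly; as a consistency check, the fits reproduce the VDMC susceptibility enhancement of Table 1 at $r_{s}=5$ and place the zero of $\kappa_{0}/\kappa$ (the compressibility divergence) at $r_{s}\approx 5.25$. A minimal sketch:

```python
def chi0_over_chi(rs):
    """Eq. (6): susceptibility ratio fit, valid for r_s = 1-5."""
    return 0.9821 - 0.1232 * rs + 0.0091 * rs**2

def kappa0_over_kappa(rs):
    """Eq. (7): compressibility ratio fit, valid for r_s = 1-5."""
    return 1.0025 - 0.1721 * rs - 0.0036 * rs**2

# The fit reproduces the VDMC value chi/chi0 = 1.683 at r_s = 5 (Table 1) ...
assert abs(1.0 / chi0_over_chi(5.0) - 1.683) < 0.005
# ... and the compressibility ratio passes through zero near r_s = 5.25,
# the compressibility divergence quoted in the text.
assert abs(kappa0_over_kappa(5.25)) < 0.005
```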
The magnitude of the first term $V_{ee}1$ at $q=0$ is $(\kappa/\kappa_{0})/q_{TF}^{2}$, which is divergent at the compressibility divergence. The results look “normal” at $r_{s}=2$, where the second and third terms are small corrections to $V_{ee}1$ (the screened Coulomb interaction). $V_{ee}1$ is screened by the deformable background (lattice) in the usual fashion. This seems consistent with the idea of perturbation theory where the corrections are small (except for the effect of the compressibility sum rule, which is large even at $r_{s}=2$). At $r_{s}=5$, $V_{ee}1$ is very large. $V_{ee}1$ diverges as a function of $r_{s}$ at the compressibility divergence, approximately as $1/(1-r_{s}/5.25)$. When calculating with Feynman diagrams, this term arises as the direct screened interaction with two vertex corrections $V_{ee}1=(\Lambda^{2}/\epsilon)V_{ext}=\Lambda V_{et}$. This apparent divergence is one concern. The second term $V_{ee}2$ is also large and apparently diverging, and is not screened by the lattice. $V_{ee}2$ is subtracted from $V_{ee}1$. The fact that this term is large brings into question the use of perturbation theory and linear response. However, upon closer examination, $V_{ee}2$ completely tracks $V_{ee}1$, and for a rigid background their divergent parts cancel exactly, leaving a finite value. This is due to a massive cancellation of Feynman diagrams in that approach. For a rigid background, the difference $V_{ee}1-V_{ee}2$ at $q=0$ is given as $V_{ee}1(0)-V_{ee}2(0)=\frac{\kappa}{\kappa_{0}}\left(\frac{1-(1-\kappa_{0}/\kappa)^{2}}{q_{TF}^{2}}\right)\;=\;\frac{2-\kappa_{0}/\kappa}{q_{TF}^{2}}\;.$ (8) For a rigid background, the net spin-independent terms of the electron-electron interaction are completely well behaved and have no divergence at the compressibility divergence. This was also observed by KO (Kukkonen and Overhauser, 1979). The deformable lattice will be discussed in Section VII below. 
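The cancellation in Eq. (8) can be verified numerically from the $q\to 0$ limits of the first two terms of Eq. (1) with the quadratic $G_{+}$ of Eq. (4): $V_{ee}1(0)=(\kappa/\kappa_{0})/q_{TF}^{2}$ and $V_{ee}2(0)=(\kappa/\kappa_{0})(1-\kappa_{0}/\kappa)^{2}/q_{TF}^{2}$ (in units of $4\pi e^{2}$). A minimal sketch:

```python
def vee1_minus_vee2_at_q0(kappa0_over_kappa, q_tf):
    """q -> 0 limits of the first two terms of Eq. (1) with the quadratic
    G_+ of Eq. (4), in units of 4*pi*e^2 (rigid background)."""
    ratio = 1.0 / kappa0_over_kappa                      # kappa / kappa0
    vee1 = ratio / q_tf**2                               # diverges as kappa0/kappa -> 0
    vee2 = ratio * (1.0 - kappa0_over_kappa) ** 2 / q_tf**2
    return vee1 - vee2

# Even arbitrarily close to the compressibility divergence (kappa0/kappa -> 0),
# the difference stays finite and matches Eq. (8): (2 - kappa0/kappa)/q_TF^2.
q_tf = 1.0
for k in (0.5, 0.1, 1e-3, 1e-6):
    assert abs(vee1_minus_vee2_at_q0(k, q_tf) - (2.0 - k) / q_tf**2) < 1e-6
```

The identity $(\kappa/\kappa_{0})\big[1-(1-\kappa_{0}/\kappa)^{2}\big]=2-\kappa_{0}/\kappa$ is exact, which is why the two individually divergent terms leave a small, smooth remainder.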
Figure 6: The net electron-electron interactions $V_{ee}(q)$ for opposite and parallel spins compared to the Lindhard potential at $r_{s}=2$ (a) and $5$ (b). A rigid background is assumed for the electron gas. Figure 6 shows the net electron-electron interaction for electrons with parallel and opposite spins. This is the interaction to be used for calculating matrix elements and superconductivity. $V_{ee}^{\uparrow\downarrow}=V_{ee}1-V_{ee}2+V_{ee}3$ (9) is the interaction for opposite spins, and $V_{ee}^{\uparrow\uparrow}=V_{ee}1-V_{ee}2-V_{ee}3$ (10) is the interaction for parallel spins. As mentioned above, $V_{ee}1-V_{ee}2$ is a smooth function and lies midway between the parallel and opposite spin results. The overall electron-electron interaction is smooth and has no remaining evidence of the large effects in $V_{ee}1$ and $V_{ee}2$ individually. At $r_{s}=2$, $V_{ee}2$ and $V_{ee}3$ have effects at the $10\%$ level. However, at $r_{s}=5$, the first two terms nearly cancel, and $V_{ee}3$, the spin-dependent term, is relatively important. The overall electron-electron interaction is considerably less repulsive for parallel spins. The values of the electron-electron interaction at $q=0$ are completely determined by the compressibility and susceptibility sum rules. At large $q$ (short distances), the electron-electron interaction follows the Lindhard function to the bare Coulomb interaction. The Lindhard interaction is shown for comparison. This quantitative evaluation of the electron-electron interaction shows that with a rigid background, the static electron-electron interaction is well behaved and repulsive throughout the metallic region. The electron-electron interaction (and the other interactions in the electron gas discussed in Appendix B) is completely specified by the simple equations provided in this paper. However, care must be taken with quantitative comparison with experiment. 
The effective mass, renormalization factor $z$, core polarization and a deformable lattice can have significant effects. Some of these were discussed by Kukkonen and Wilkins (1979). The deformable lattice will be discussed in Section VII. ## VI Effect of the Compressibility Sum Rule We have emphasized that the compressibility and susceptibility sum rules, which are derived from changes in the total electron gas energy, completely determine the various interactions at $q=0$. This is illustrated in Fig. 7, where we plot the ratio of the different interactions to the Lindhard interaction at $q=0$ versus $r_{s}$. Figure 7: The ratio of the electron gas interactions at $q=0$ to the Lindhard (Thomas Fermi) interaction as a function of $r_{s}$ for a rigid background. The top curve is the electron-electron interaction for electrons with opposite spins. The next curve down is the electron-electron interaction for parallel spins. The electron-test charge interaction $V_{et}$ at $q=0$ is equal to the Thomas Fermi interaction at all $r_{s}$. The bottom curve is the test charge-test charge interaction $V_{tt}$, which always falls below the Thomas Fermi interaction and becomes negative at the compressibility instability. Figure 7 shows that all of the interactions are the same and equal to the Lindhard and Thomas Fermi interactions at high density near $r_{s}=0$. At lower density (larger $r_{s}$), the effects of exchange and correlation manifest themselves through the sum rules. Only the electron-electron interaction depends on spin, and that is shown by the two curves for parallel and opposite spins. With a rigid uniform positive background, the electron-electron and electron-test charge interactions show no unusual behavior as $r_{s}$ approaches the compressibility divergence at $r_{s}=5.25$. However, the test charge-test charge interaction at $q=0$ becomes negative (attractive) above the instability. 
This is due to the fact that the dielectric function has become negative. This interaction is discussed in Appendix B. The sum rules dictate the $q=0$ behavior. At large $q$, all the interactions fall off as $1/q^{2}$, which reflects the bare Coulomb interaction at short distances, and the wave vector dependence of the local field factors and the cutoff of the Lindhard function at $2\,k_{F}$ set the intermediate-$q$ behavior, which has been discussed above. ## VII Deformable Background KO considered a smooth but elastically deformable background, which led to the frequency dependence in Eq. (1). The result was verified in Ref. Giuliani and Vignale (2005). Introducing the deformable background naturally results in the phonon frequencies Kukkonen and Overhauser (1979). $\displaystyle\omega_{q}^{2}$ $\displaystyle=$ $\displaystyle\frac{Nq^{2}}{M}\left(V_{ii}^{\text{bare}}-\frac{\left(V_{ei}^{\text{bare}}\right)^{2}}{v}+\frac{\left(V_{ei}^{\text{bare}}\right)^{2}}{v\epsilon}\right)$ (11) $\displaystyle\equiv$ $\displaystyle\omega_{0}^{2}+\frac{Nq^{2}\left(V_{ei}^{\text{bare}}\right)^{2}}{Mv\epsilon}$ where $N$ is the density and $M$ is the mass of the background (ions), $V_{ii}^{\text{bare}}$ is the bare ion-ion interaction, $V_{ei}^{\text{bare}}$ is the bare electron-ion interaction, and $\epsilon$ is the dielectric function. It is instructive to rewrite the frequency dependence in Eq. (1) as $\frac{\omega^{2}-\omega_{0}^{2}}{\omega^{2}-\omega_{q}^{2}}=1+\frac{\omega_{q}^{2}-\omega_{0}^{2}}{\omega^{2}-\omega_{q}^{2}}\;.$ (12) This now multiplies the first term. 
The factor 1 allows the frequency-independent part of the first term and the second term to be formally combined, which cancels the compressibility divergence in the static interaction, and Eq. (1) can be rewritten as $\displaystyle V_{e\vec{\sigma}_{1},e\vec{\sigma}_{2}}$ $\displaystyle=\frac{4\pi e^{2}}{q^{2}}\left(\frac{\left(\omega^{2}_{q}-\omega_{0}^{2}\right)/\left(\omega^{2}-\omega_{q}^{2}\right)}{\left(1-G_{+}Q\right)\left[1+\left(1-G_{+}\right)Q\right]}\right.+$ (13) $\displaystyle\left.\frac{1+(1-G_{+})G_{+}Q}{1+(1-G_{+})Q}-\frac{G_{-}^{2}Q}{1-G_{-}Q}\vec{\sigma}_{1}\cdot\vec{\sigma}_{2}\right)$ The new first term $V_{ee-{\text{phonon}}}$ is divergent at $q=0$ at the compressibility divergence, but the second two terms have no divergence. $V_{ee-\text{phonon}}$ represents the additional screening of the Coulomb interaction by the background (lattice). At $\omega=0$, the numerator of the first term is negative, which is the expected result for static screening by the positive background. $-\frac{\omega_{q}^{2}-\omega_{0}^{2}}{\omega_{q}^{2}}=\frac{-Nq^{2}\left(V_{ei}^{\text{bare}}\right)^{2}\big{/}(Mv\epsilon)}{(Nq^{2}/M)\left(V_{ii}^{\text{bare}}-\dfrac{\left(V_{ei}^{\text{bare}}\right)^{2}}{v}+\dfrac{\left(V_{ei}^{\text{bare}}\right)^{2}}{\epsilon\,v}\right)}$ (14) The screening depends on the stiffness of the background, represented by $V_{ii}^{\text{bare}}$, and the properties of the electron gas. The rigid background is obtained when $V_{ii}^{\text{bare}}$ (and thus $\omega_{q}^{2}$) goes to infinity and this term goes to zero. Another interesting limit occurs if all of the interactions, including $V_{ii}^{\text{bare}}$, are Coulomb interactions. 
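The algebra combining the static part of the first term of Eq. (1) with the second term to give the middle term of Eq. (13) can be checked symbolically. A minimal sketch with SymPy, writing $g$ for $G_{+}$:

```python
import sympy as sp

g, Q = sp.symbols("g Q", positive=True)

# Static (omega-independent) part of the first two terms of Eq. (1),
# rigid background ...
eq1_static = 1 / ((1 - g * Q) * (1 + (1 - g) * Q)) - g**2 * Q / (1 - g * Q)
# ... and the combined second term of Eq. (13).
eq13_term = (1 + (1 - g) * g * Q) / (1 + (1 - g) * Q)

# The two expressions are identical, so the 1/(1 - G_+ Q) divergence
# (the compressibility divergence) drops out of the static combination.
assert sp.simplify(eq1_static - eq13_term) == 0
```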
In this case $-\frac{\omega_{q}^{2}-\omega_{0}^{2}}{\omega_{q}^{2}}=-1$ (15) and $V_{ee-\text{phonon}}$ is large and negative (attractive). At $q=0$, the first term has the value set by the compressibility sum rule: $\displaystyle V_{ee-\text{phonon}}(q=0)$ $\displaystyle=$ $\displaystyle-\frac{\kappa}{\kappa_{0}}V_{et}(q=0)$ (16) $\displaystyle=$ $\displaystyle-\frac{\kappa}{\kappa_{0}}V_{TF}(q=0)$ With all Coulomb interactions, this negative first term is larger than the other two terms combined, and the overall electron-electron interaction at $q=0$ is attractive. Note that this term diverges at the compressibility divergence. An instructive intermediate case is to consider that $V_{ei}^{\text{bare}}$ in the numerator of Eq. (14) is equal to the Coulomb interaction $v=4\pi e^{2}/q^{2}$, and to take $\omega_{q}$ from experiment. The background (lattice) screening term can then be rewritten as $\displaystyle V_{ee-\text{phonon}}(q)$ $\displaystyle=$ $\displaystyle-\frac{1}{\omega_{q}^{2}}\left(\frac{v}{(\epsilon(1-G_{+}Q))^{2}}\right)\frac{Nq^{2}}{M}$ (17) $\displaystyle=$ $\displaystyle-\frac{1}{\omega_{q}^{2}}V_{et}^{2}(q)\frac{Nq^{2}}{M}$ All of the effects of the deformable background are in $\omega_{q}^{2}$, the phonon frequencies, which depend on the electron gas parameters as well as the bare ion-ion interaction. The electron-test charge interaction appears because the lattice appears as a test charge to the electron gas. The electron-test charge interaction is equal to the Thomas Fermi and Lindhard interactions at $q=0$, but differs at larger $q$, as shown in Appendix B. As the simplest example, we model the phonons by the relationship between $\omega_{q}$ and the bulk modulus $B$, which is the inverse of the compressibility $\kappa$: $\omega_{q}^{2}=\frac{Bq^{2}}{NM}$ (18) where $NM$ is the mass density of the background, and $\sqrt{B/NM}$ is the speed of sound.
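The all-Coulomb limit of Eq. (15) follows directly from Eq. (11): with $V_{ii}^{\text{bare}}=V_{ei}^{\text{bare}}=v$ the bare part $\omega_{0}^{2}$ vanishes identically. A short symbolic sketch (symbol names as in the text):

```python
# All-Coulomb limit, Eq. (15): if V_ii = V_ei = v, then omega_0^2 = 0 and
# -(omega_q^2 - omega_0^2)/omega_q^2 = -1.  Algebra check only.
import sympy as sp

N, q, M, v, eps = sp.symbols("N q M v epsilon", positive=True)

Vii = Vei = v                                           # all interactions Coulomb
omega_02 = (N * q**2 / M) * (Vii - Vei**2 / v)          # vanishes identically
omega_q2 = omega_02 + N * q**2 * Vei**2 / (M * v * eps) # Eq. (11) decomposition

ratio = sp.simplify(-(omega_q2 - omega_02) / omega_q2)
print(omega_02, ratio)  # 0 -1
```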
Using this relationship, the bulk modulus of the non-interacting electron gas $B_{0}=1/(n^{2}V_{TF}(q=0))$, and the fact that the ion density and the electron density are the same for the monovalent alkali metals, one obtains the equation for the background screening contribution to the electron-electron interaction $V_{ee-\text{phonon}}(q)=-\frac{\left(V_{et}(q)\right)^{2}}{V_{et}(0)}\frac{B_{0}}{B_{\text{experiment}}}$ (19) The measured bulk moduli, free electron values Ashcroft and Mermin (1976), and their ratios for the alkali metals are given in Table 2.

| Metal | $r_{s}$ | $B_{0}$ | $B_{\text{experiment}}$ | $B_{0}/B_{\text{experiment}}$ |
|---|---|---|---|---|
| Li | 3.25 | 23.9 | 11 | 2.173 |
| Na | 3.93 | 9.23 | 6.3 | 1.465 |
| K | 4.86 | 3.19 | 3.1 | 1.029 |
| Rb | 5.2 | 2.28 | 2.5 | 0.912 |
| Cs | 5.62 | 1.54 | 1.6 | 0.963 |

Table 2: Free electron and experimentally measured bulk moduli for the alkali metals. Figure 8: Static $(\omega=0)$ electron-electron interaction at $q=0$ for the electron gas at the density of alkali metals (in units of the Lindhard (Thomas Fermi) interaction). The top two curves are the electron-electron interaction for a rigid background with opposite spins, and parallel spins. The data at the lower right are the electron-electron interaction including the deformable background screening for the alkali metals using the measured bulk modulus for opposite and parallel spins. In Figure 8, the repulsive electron-electron interactions at $q=0$ for opposite and parallel spins in a rigid background are plotted versus $r_{s}$, as in Figure 7. Also plotted is the net electron-electron interaction including screening by the deformable background using the experimentally measured bulk modulus for the alkali metals. The electron-electron interactions for a rigid background are simply the second two terms of Eq. (13) evaluated at $q=0$. These are completely specified by the compressibility and susceptibility sum rules, and the values are given in Fig. 4.
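As a check, the last column of Table 2 is simply the ratio of the two bulk-modulus columns, which per Eq. (19) sets the strength of the attractive background screening; a minimal sketch with the tabulated values:

```python
# Reproduce the B_0 / B_experiment column of Table 2.  Per Eq. (19), this
# ratio scales the attractive background-screening term V_ee-phonon.
# Values are copied directly from the table (free-electron B_0, measured B).
bulk_moduli = {"Li": (23.9, 11.0), "Na": (9.23, 6.3), "K": (3.19, 3.1),
               "Rb": (2.28, 2.5), "Cs": (1.54, 1.6)}

ratios = {metal: b0 / b_exp for metal, (b0, b_exp) in bulk_moduli.items()}
for metal, r in ratios.items():
    print(f"{metal}: {r:.3f}")
```

Lithium's ratio (about 2.17) is by far the largest, which is why its deformable-background attraction in Fig. 8 is the strongest.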
$V_{ee}^{\uparrow\downarrow}(0)=\left[\left(2-\frac{\kappa_{0}}{\kappa}\right)+\frac{\chi}{\chi_{0}}\left(1-\frac{\chi_{0}}{\chi}\right)^{2}\right]V_{\text{Lindhard}}(0)$ (20) $V_{ee}^{\uparrow\uparrow}(0)=\left[\left(2-\frac{\kappa_{0}}{\kappa}\right)-\frac{\chi}{\chi_{0}}\left(1-\frac{\chi_{0}}{\chi}\right)^{2}\right]V_{\text{Lindhard}}(0)$ (21) The attractive background screening contribution to the electron-electron interaction is given by Eq. (19) evaluated at $q=0$. The electron-test charge interaction $V_{et}(q)$ is given in Eq. (26) in Appendix B, and is equal to the Lindhard and Thomas Fermi potentials at $q=0$. $V_{ee-\text{phonon}}(0)=-\left[\frac{B_{0}}{B_{\text{experiment}}}\right]V_{\text{Lindhard}}(0)$ (22) This term is spin independent and is added to both the opposite- and parallel-spin electron-electron interactions for a rigid background, and the net electron-electron interaction is plotted at the $r_{s}$ values of the alkali metals. Figure 9: Electron-electron interaction at densities of lithium (a) and sodium (b) comparing the repulsive interaction calculated for a rigid background to the net interaction when the deformable background is included using the measured bulk modulus. The top two curves are for the rigid lattice for opposite and parallel spins. The bottom two curves subtract the deformable background contribution from the top two curves. They represent the net electron-electron interaction. Note that all of the electron-electron interactions at $q=0$ are proportional to $V_{\text{Lindhard}}(q=0)=1/q_{TF}^{2}=r_{s}\,a_{0}/(2.95)^{2}$ in units of $4\pi e^{2}$. The electron-electron interactions divided by $V_{\text{Lindhard}}(q=0)$ at each $r_{s}$ are plotted in Fig. 8, which shows the relative importance of exchange and correlation and the deformable background. Figure 8 shows that the effects of screening by the deformable background are as large as the effects of exchange and correlation.
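A minimal numerical sketch of Eqs. (20)-(22) in units of $V_{\text{Lindhard}}(0)$. The sum-rule ratios $\kappa_{0}/\kappa$ and $\chi/\chi_{0}$ used below are illustrative placeholders (the actual values come from Fig. 4); the bulk-modulus ratio 2.173 is lithium's from Table 2.

```python
# q=0 electron-electron interactions, Eqs. (20)-(22), in units of
# V_Lindhard(0).  kappa0_over_kappa and chi_over_chi0 are ILLUSTRATIVE
# inputs (Fig. 4 supplies the real sum-rule values); b0_over_b = 2.173
# is lithium's bulk-modulus ratio from Table 2.
def vee_q0(kappa0_over_kappa, chi_over_chi0, b0_over_b):
    """Return (opposite-spin, parallel-spin) net interactions at q = 0."""
    common = 2.0 - kappa0_over_kappa                        # shared part of (20)-(21)
    spin = chi_over_chi0 * (1.0 - 1.0 / chi_over_chi0) ** 2 # spin-dependent part
    phonon = b0_over_b                                      # attractive term, Eq. (22)
    return common + spin - phonon, common - spin - phonon

opp, par = vee_q0(kappa0_over_kappa=0.5, chi_over_chi0=2.0, b0_over_b=2.173)
print(opp, par)
```

With these placeholder ratios both channels come out negative, i.e. net attractive at $q=0$, as found for lithium in Fig. 8.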
The $q=0$ value of the net electron-electron interaction for lithium is attractive for both parallel and opposite spins, while for all of the other alkali metals it is repulsive. Lithium is the only alkali metal that exhibits superconductivity at ambient pressure. Although the net electron-electron interaction for lithium is more attractive for parallel spins than opposite spins, this does not necessarily imply triplet pairing, because the spatial part of the overall wave function must be anti-symmetric. This is discussed briefly after Fig. 10. Figure 10: $V_{ee}(r)$, the Fourier transform of $V_{ee}(q)$, the electron-electron interaction shown in Fig. 9, at the densities of lithium (a) and sodium (b). This figure compares the repulsive interaction calculated for a rigid background to the net electron-electron interaction when a deformable background is included. The top two curves are for the rigid background for opposite and parallel spins. The bottom two curves include the deformable background contribution. The wave vector dependent electron-electron interaction for this simple model of lattice screening in the alkali metals can be calculated using Eq. (13), Eq. (19) and Eq. (26). This assumes the linear phonon dispersion relation of Eq. (18). A linear phonon dispersion should be the correct behavior at small $q$, but overestimates $\omega_{q}$ as $q$ approaches $2\,k_{F}$ (and a reciprocal lattice vector). Since $\omega_{q}^{2}$ is in the denominator, a smaller value near $2\,k_{F}$ would imply an even more attractive potential. The results for lithium and sodium are shown in Fig. 9. The electron-electron interaction in a rigid background is repulsive. The repulsion is less for parallel spins than for opposite spins. The deformable background at low frequencies contributes a negative term due to the additional screening by the deformable background/lattice. The background screening is due to the net Coulomb interaction and is independent of spin for a non-magnetized electron gas.
The most interesting feature is that the net electron-electron interaction in lithium is attractive from $q=0$ to above $2\,k_{F}$, for both parallel and opposite spins. A more careful treatment of the background/phonons is needed, but this result is likely to be qualitatively correct. The strong attractive region in lithium may explain the source of the observed superconductivity. The other alkali metals with lower densities have more repulsive electron-electron interactions and smaller effects of the deformable lattice as measured by the bulk modulus. The lattice screening is still a significant effect. The repulsive interaction in the rigid lattice falls off more quickly with wave vector than the attractive contribution from the deformable background. The resulting net electron-electron interaction has a minimum near $1.5-2\,k_{F}$. This minimum is slightly repulsive for opposite spins, and slightly attractive for parallel spins, for all the alkali metals. The small attraction for parallel spins near $2\,k_{F}$ may lead to interesting physics. Sodium is shown in Fig. 9. The curves for the other alkali metals are similar. Figure 10 shows $V_{ee}(r)$, the Fourier transform of the electron-electron interactions in Fig. 9. The energy scale is $eV$, and the Fermi energy for lithium is $4.74\,eV$ and for sodium it is $3.23\,eV$. With a rigid lattice, the electron-electron interaction is repulsive, as shown in the top two curves of Figure 10. The classical turning points for scattering, where the repulsive potential is equal to the Fermi energy, are approximately $0.75\,r_{s}$ for lithium and $0.9\,r_{s}$ for sodium. Electron-electron scattering does not contribute to the electrical resistivity because momentum and charge are conserved in the scattering event (except for a small effect due to umklapp scattering in a real lattice as opposed to a uniform background). Electron-electron scattering does however contribute to the thermal resistivity.
When the deformable background is included, the overall interaction potential for lithium is attractive for both parallel and opposite spins in the region $0.25-1.0\,r_{s}$, with a depth comparable to the Fermi energy and a repulsive core at shorter distances. At larger distances, not shown in Fig. 10, much smaller oscillations similar to Friedel oscillations are seen. The interaction or scattering of two electrons in the electron gas with an attractive potential due to the deformable background could include resonances, virtual bound states or even bound states. We note that although the electron-electron attraction is stronger for parallel spins, the overall wave function must be anti-symmetric, which implies that the spatial part is anti-symmetric (p-wave) and the probability that the electron is found at small distances is lower. Thus it is likely that the opposite spin electrons with a symmetric (s-wave) wave function would sample more of the region of the attractive potential. For sodium, only the electron-electron interaction for parallel spins becomes significantly attractive. We present this Fourier transform real space calculation to stimulate thinking within the single particle picture of the electron gas, which has limitations. The usual thinking is in momentum and frequency space. Equation (13) gives the explicit frequency dependence. The local field factors used here are independent of frequency, but the frequency dependence of the Lindhard function is known. The electron-electron interaction in this paper can be used in calculations of superconductivity. The simple inclusion of the deformable lattice shows that lithium is substantially different from the other alkali metals, and lithium is the only alkali metal that shows superconductivity at ambient pressure. Cesium exhibits superconductivity at high pressure. The net electron-electron interaction is sensitive to exchange and correlation and to the deformable lattice.
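The real-space curves in Fig. 10 come from the standard 3D radial Fourier transform $V(r)=\frac{1}{2\pi^{2}r}\int_{0}^{\infty}q\sin(qr)\,V(q)\,dq$. The sketch below is a minimal numerical version (not the authors' code); as a sanity check it transforms a Thomas-Fermi-screened Coulomb potential $V(q)=1/(q^{2}+q_{TF}^{2})$ (units of $4\pi e^{2}$), whose exact transform is the Yukawa potential $e^{-q_{TF}r}/(4\pi r)$.

```python
# Radial Fourier transform V(r) = 1/(2 pi^2 r) * int_0^inf q sin(q r) V(q) dq,
# evaluated with a simple trapezoidal rule.  Checked against the Thomas-Fermi
# potential V(q) = 1/(q^2 + qTF^2), whose transform is exp(-qTF r)/(4 pi r).
import numpy as np

def radial_ft(Vq, r, qmax=200.0, n=200_001):
    q = np.linspace(1e-8, qmax, n)
    f = q * np.sin(q * r) * Vq(q)
    dq = q[1] - q[0]
    # trapezoidal rule, then the 1/(2 pi^2 r) prefactor
    return (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * dq / (2.0 * np.pi**2 * r)

qTF = 1.0
V_tf = lambda q: 1.0 / (q**2 + qTF**2)   # screened Coulomb, units of 4 pi e^2

r = 2.0
numeric = radial_ft(V_tf, r)
exact = np.exp(-qTF * r) / (4.0 * np.pi * r)
```

The truncated oscillatory tail limits the accuracy to around a percent here; the published curves would need a more careful quadrature.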
Richardson and Ashcroft (1997) considered superconductivity with the KO electron-electron interaction and phonons treated on an equal basis, and applied the theory to several metals including lithium. The considerable difficulties in comparison with experiment are discussed in that paper. The region of largest static attraction between two electrons in lithium is at very short distances. The role of this short range attraction in the dynamical theory of superconductivity is not known to us. A strong word of caution is needed when comparing electron gas calculations with experiment, particularly at low density close to the compressibility divergence at $r_{s}=5.25$. Factors such as effective mass, core polarization and the renormalization factor $z$ have to be carefully considered when comparing with experiment. These effects were discussed in Kukkonen and Wilkins (1979) and in Ref. Richardson and Ashcroft (1997). A simple example is cesium with $r_{s}=5.62$, which is beyond the compressibility instability. Cesium and the other alkali metals have polarizable cores with a core polarization dielectric function $\epsilon_{B}$ that is frequency independent in the regions of interest. The correct theory Kukkonen and Wilkins (1979) is an electron gas with electrons of effective mass $m^{\ast}$ in a neutralizing uniform positive background with dielectric constant $\epsilon_{B}$. The result is obtained by scaling the known solutions for the electron gas with mass $m$ and a non-polarizable background, but at a different density $r_{s}^{\ast}=r_{s}(m^{\ast}/m\epsilon_{B})$ and evaluated at a scaled wave vector. For cesium, $\epsilon_{B}=1.27$ and this re-normalizes $r_{s}$ from 5.62 to 4.44, which is below the compressibility divergence. The core polarization corrections for lithium and sodium are quite small.
## VIII Summary

Variational Diagrammatic Monte Carlo (VDMC) calculations of the wave vector dependent spin local field factor (exchange and correlation kernel) have been presented and utilized for all calculations. Using the density and spin local field factors and the explicit equations presented in this paper, all of the response functions of the three-dimensional electron gas can be easily and quantitatively calculated. Exchange and correlation are fully included within the self-consistent local field approximation, and the compressibility and susceptibility sum rules at $q=0$ are satisfied. The full spin dependent electron-electron interaction is calculated using these field factors. For a rigid background, the electron-electron interaction shows no divergence at the compressibility divergence and is repulsive for all $r_{s}$ in the metallic region. Considering a deformable background, the $\omega=0$ (static) screening by the background is shown to be very important, with an effect that is as large as exchange and correlation. A simple calculation shows that with a deformable background modelled by using the measured bulk modulus, the net electron-electron interaction including exchange and correlation is attractive (negative) in a large range of momentum and real space for lithium, which does exhibit superconductivity at ambient pressure, and is mostly positive (repulsive) for all the other alkali metals. The compressibility divergence does not appear in the electron-electron and electron-test charge interactions, but still shows up in the test charge-test charge interaction as a divergence in the dielectric function and in the screening by the deformable background. The quantitative spin dependent electron-electron interaction can be used in other calculations such as superconductivity, and can also be used as a starting point for improved numerical calculations.
The self-consistent local field calculation of the KO interaction using Feynman diagrams may lead to new techniques for identifying massive cancellations of divergent diagrams and allow new perturbation techniques around the self-consistent solution. More detailed numerical calculations of the density local field factor $G_{+}(q)$ from $q=0$ to $3\,k_{F}$ would be welcomed, as well as a physical explanation for the behavior.

## Acknowledgements

The authors are grateful to William Halperin, Jan Herbst, Giovanni Vignale and David Ceperley for discussions, and to Giovanni Onida and Massimiliano Corradini for providing the exact coefficients for their analytic function for $G_{+}(q)$ that fits the QMC data, and to Tori Hagiya for comparing his experimental data with our theory. C.A.K. acknowledges support from the US Social Security Administration. K.C. appreciates support from the Simons Foundation.

## Appendix A DENSITY LOCAL FIELD FACTOR $\mathbf{G_{+}(q)}$

The density local field factor has been a subject of research for more than 60 years. $G_{+}(q)$ is needed to calculate the dielectric function and vertex correction. These quantities are sufficient to calculate all of the interactions and response functions that do not depend on spin. Many proposals have been made for the dielectric function and $G_{+}(q)$, particularly by Hubbard Hubbard (1958), by Geldart and collaborators Geldart and Vosko (1966), and by Singwi and collaborators Vashishta and Singwi (1972). When it was realized that the compressibility sum rule was of paramount importance at $q=0$, the dielectric functions and $G$'s were manually adjusted to satisfy the sum rule even though the calculations themselves did not satisfy it. The calculations were used to interpolate the wave vector dependence above $q=0$.
The behavior of $G_{+}(q)$ at $2\,k_{F}$ and above has been the subject of considerable research and debate, but is not important for the calculations of the static response properties of the electron gas because the Lindhard function cuts off the response quickly above $2\,k_{F}$. The Quantum Monte Carlo method was used by Moroni, Ceperley and Senatore Moroni et al. (1995) to calculate $G_{+}(q)$ from $q=k_{F}$ to $q=4\,k_{F}$ for $r_{s}=2$, $5$ and $10$. Corradini, Del Sole, Onida and Palummo Corradini et al. (1998) fitted the QMC data with an analytical function that reflected the appropriate small and large limits. This expression is given in terms of derivatives of the electron gas energy. The Quantum Monte Carlo calculations are considered the most accurate, but there are no data below $q=k_{F}$. Richardson and Ashcroft Richardson and Ashcroft (1994) calculated the local field corrections at finite frequencies; their static results are similar, but differ somewhat in detail from the QMC results. They emphasized the importance of the sum rules. Richardson and Ashcroft also provided an interpolation formula. Retrospective discussions of the local field factors were given by Simion and Giuliani Simion and Giuliani (2008) and by Hellal, Gasser and Issolah Hellal et al. (2003). In Fig. 11 we plot the Quantum Monte Carlo results of Ref. Moroni et al. (1995) for $G_{+}(q)$ together with the analytic interpolation function of Ref. Corradini et al. (1998), and the even simpler quadratic formula that quite accurately reproduces the response functions of the electron gas. Although there are no data below $q=k_{F}$, the QMC results above $k_{F}$ show that $G_{+}(q)$ follows the quadratic required by the compressibility sum rule up until nearly $2\,k_{F}$ and then falls significantly below the initial quadratic. Theory predicts that the large $q$ behavior will also be quadratic but with a different coefficient.
Other earlier versions of $G_{+}(q)$ Hubbard (1958) had a much smaller value at $2\,k_{F}$. (a) (b) Figure 11: Density local field function $G_{+}(q)$ plotted versus $q/k_{F}$ for $r_{s}=2$ (a) and $r_{s}=5$ (b). Data points are the Quantum Monte Carlo calculations from Ref. Moroni et al. (1995). The solid curves that fit the data are the analytic function from Ref. Corradini et al. (1998). The quadratic Eq. (4) is the proposed simple approximation to $G_{+}(q)$ for calculating the response functions. Error bars are shown for all data points. If they are not evident, the error is smaller than the data point. Figure 11 emphasizes the large $q$ behavior of $G_{+}(q)$. However, the static response functions of the electron gas depend mostly on the low $q$ behavior, because the Lindhard function cuts off the effect of $G_{+}(q)$ quickly above $q=2\,k_{F}$. At small $q$, the compressibility sum rule specifies that $\displaystyle G_{+}(q\rightarrow 0)=\left(1-\frac{\kappa_{0}}{\kappa}\right)\left(\frac{q}{q_{TF}}\right)^{2}$ (23) To emphasize the low $q$ behavior, $G_{+}(q)/(q/q_{TF})^{2}$ is plotted in Fig. 12 below, where the intercept at $q=0$ is $(1-\kappa_{0}/\kappa)$. Note that the density exchange and correlation kernel needed for Time-Dependent Density Functional Theory is given by $f_{xc}=-4\pi G_{+}(q)/(q/q_{TF})^{2}$. (a) (b) Figure 12: $G_{+}(q)/(q/q_{TF})^{2}$, the density local field function divided by $(q/q_{TF})^{2}$, is plotted versus $q/k_{F}$ for $r_{s}=2$ (a) and $r_{s}=5$ (b). Data points are the Quantum Monte Carlo calculations from Ref. Moroni et al. (1995). The solid curves that fit the data are the analytic function from Ref. Corradini et al. (1998). The straight line represents the quadratic Eq. (4) that is the proposed simple approximation to $G_{+}(q)$ for calculating the response functions. Error bars are shown for all data points. Note again that there are no Quantum Monte Carlo data below $q=k_{F}$.
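The small-$q$ constraint of Eq. (23) is easy to encode; the sketch below checks that the quadratic form divided by $(q/q_{TF})^{2}$ is the flat intercept $(1-\kappa_{0}/\kappa)$ plotted in Fig. 12 (the value 0.5 used for $\kappa_{0}/\kappa$ is an illustrative placeholder, not a value from the paper).

```python
# Eq. (23): G_+(q -> 0) = (1 - kappa0/kappa) * (q/qTF)^2, so the ratio
# G_+(q)/(q/qTF)^2 plotted in Fig. 12 is flat at small q with intercept
# (1 - kappa0/kappa).  kappa0_over_kappa = 0.5 is an illustrative value.
def G_plus_small_q(q, qTF, kappa0_over_kappa):
    return (1.0 - kappa0_over_kappa) * (q / qTF) ** 2

qTF, k0k = 1.0, 0.5
intercepts = [G_plus_small_q(q, qTF, k0k) / (q / qTF) ** 2 for q in (0.01, 0.1, 0.3)]
print(intercepts)  # every entry equals 1 - kappa0/kappa = 0.5
```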
The $q=0$ values of the quadratic and the analytic function are set by the compressibility sum rule. The compressibility is also calculated by Monte Carlo methods, and the $q=0$ value is much more accurate than the $q$-dependent data. The compressibility sum rule dictates $G_{+}(0)$ and that the initial behavior will be quadratic, Eq. (23), as represented by the constant horizontal line. The analytic interpolation formula of Ref. Corradini et al. (1998) apparently fits the data above $1.5\,k_{F}$ quite well, but substantially misses the data at $k_{F}$ for $r_{s}=2$. This illustrates the problem of global curve fitting with analytical functions with a limited number of parameters. The simple quadratic does at least as good a job below $2\,k_{F}$ and is substantially different at larger $q$, but the effect at large $q$ is not very important for the response functions, as will be shown below. Looking carefully at the QMC data between $k_{F}$ and $2\,k_{F}$, there is an intriguing hint of structure in $G_{+}(q)$, with data both below and above the quadratic. Further Quantum Monte Carlo calculations from $q=0$ to $3\,k_{F}$ would be informative. The vertex function, which embodies the effect of exchange and correlation in the response functions, is plotted in Fig. 13, where the simple quadratic is compared to the QMC data and the fitting function for this data. (a) (b) Figure 13: Vertex function $\Lambda=1/(1-G_{+}Q)$ plotted versus $q/k_{F}$ for $r_{s}=2$ (a) and $r_{s}=5$ (b). Data points are the Quantum Monte Carlo calculations from Ref. Moroni et al. (1995). The dotted curve that fits the data is the analytic function from Ref. Corradini et al. (1998). The solid curve uses Eq. (4), the proposed simple quadratic approximation for calculating the response functions. Error bars are shown for all data points. If they are not evident, the error is smaller than the data point. Note that the $y$-axis starts at 1.0 for $r_{s}=2$, in order to emphasize small differences.
The vertex function $\Lambda=1/(1-G_{+}Q)$ is required to calculate the dielectric function and the other interactions in the electron gas. Figure 13 shows the vertex function using the actual QMC data and the analytic function as well as the simple quadratic. The first point to note is that the $q=0$ value of the vertex function is exactly given by the compressibility sum rule and is equal to $\Lambda(0)=\kappa/\kappa_{0}$, which diverges at the compressibility divergence as approximately $1/(1-r_{s}/5.25)$. At $r_{s}=2$, the vertex enhancement is substantial, but it is not near the compressibility divergence. At $r_{s}=5$, the electron gas is very near the divergence and the vertex enhancement is very large. The error bar in the value of $G_{+}(q)$ at $q=k_{F}$ nearly reaches a point of instability, and its effect is magnified because it appears in the denominator. The vertex function is extremely sensitive to small changes at $r_{s}=5$. The simple quadratic function for $G_{+}(q)$ yields a vertex function that satisfies the compressibility sum rule and fits the vertex function derived from the QMC data as well as the fitting formula of Ref. Corradini et al. (1998) does. This is despite the large differences at large $q$, because the contributions at large $q$ are cut off by the Lindhard function. The fitting function should be used for any calculations that depend on $q$ substantially above $2\,k_{F}$. The fact that the vertex function resulting from the analytic function is larger than the quadratic for $q$ greater than zero and less than $k_{F}$ is entirely due to the curve fitting, and there are no QMC data in this region. The simple quadratic approximation for $G_{+}(q)$ was suggested by Taylor Taylor (1978) 40 years ago, and we concur. Figure 14: Density local field factor $G_{+}(q)$ from VDMC calculations for $r_{s}=1$ plotted versus $q/k_{F}$ (a), and $G_{+}(q)/(q/q_{TF})^{2}$ (b). Error bars are shown.
The current VDMC method works well when the vertex correction or susceptibility enhancement is modest. This corresponds to $\chi_{0}/\chi$ or $\kappa_{0}/\kappa<0.6$ in Figure 4. For the spin susceptibility, this is near $r_{s}=5$. For the compressibility, this corresponds to $r_{s}=2$. As a test, we have calculated the density local field factor $G_{+}(q)$ for $r_{s}=1$ and $2$, and the data are shown in Figures 14 and 15. At $r_{s}=2$, we compare the new VDMC calculations to the QMC results of Ref. Moroni et al. (1995) and the corresponding interpolation formula of Ref. Corradini et al. (1998). At $r_{s}=1$, Fig. 14 shows that the density local field factor $G_{+}(q)$ has the same qualitative behavior as the spin local field factor $G_{-}(q)$ shown in Fig. 1 and Fig. 2. The error bars are acceptable. Both show that $G$ rises above the quadratic at approximately $1.2\,k_{F}$ and then falls below it at $2\,k_{F}$. Figure 15: Density local field factor $G_{+}(q)$ from VDMC calculations for $r_{s}=2$ plotted versus $q/k_{F}$ (a), and $G_{+}(q)/(q/q_{TF})^{2}$ (b). Error bars are shown. The new data for $G_{+}(q)$ in Fig. 15 demonstrate the limitations of the current version of the VDMC approach. Ordinarily, we would not show data with such large error bars. However, we want to compare with the QMC data of Ref. Moroni et al. (1995) and to provide new data below $k_{F}$. The VDMC data with error bars overlap the QMC data with its error bars except for two points near $2\,k_{F}$, and even there, the agreement is quite close. The simple quadratic approximation for $G_{+}(q)$ given in Eq. (4) represents the VDMC data quite well up to $2\,k_{F}$. The interpolation curve of Ref. Corradini et al. (1998) fits the data well above $2\,k_{F}$. The data for $G_{+}(q)$ at $r_{s}=2$ are qualitatively similar to the data for $G_{-}(q)$. The VDMC data for $G_{-}$ and the limited data for $G_{+}$ show that both of these local field factors are smooth functions of wave vector.
Although the data rise slightly above the quadratic between 1.5 and $2\,k_{F}$, there is no evidence of a large peak. We have not developed a physical intuition for this behavior.

## Appendix B TEST CHARGE-TEST CHARGE $\mathbf{V_{tt}}$ AND ELECTRON-TEST CHARGE $\mathbf{V_{et}}$ INTERACTIONS AND DENSITY RESPONSE FUNCTION

We use the same quadratic function for $G_{+}(q)$ to plot the test charge-test charge and electron-test charge interactions at $r_{s}=2$ and $5$, and compare them to the Lindhard interaction. For the general reader, a simple physically motivated derivation of these interactions and the electron-electron interaction is given in Ref. Kukkonen and Overhauser (1979). The test charge-test charge interaction $V_{tt}$ is the Coulomb potential generated by a test charge plus the induced screening cloud, and felt by another test charge. Figure 16: $V_{tt}(q)$, the test charge-test charge or Coulomb interaction at $r_{s}=2$ and $5$. The potential is measured in units of $4\pi e^{2}$. The dashed line is the Lindhard potential, which is equal to the Thomas Fermi potential at $q=0$ with the value $V_{\text{Lindhard}}(0)=1/q_{TF}^{2}$. The dielectric function $\epsilon(q,\omega)$ is defined by $V_{tt}=\frac{V_{ext}}{\epsilon}$ (24) and is written as $\epsilon=1+\frac{v\Pi^{0}}{1-G_{+}v\Pi^{0}}=1+\Lambda Q,$ (25) where $\Pi^{0}$ is the Lindhard free electron response function and $v=4\pi e^{2}/q^{2}$. Note that others may define the response function with a minus sign. For convenience, we define $Q=v\Pi^{0}$ and the vertex correction $\Lambda=1/(1-G_{+}Q)$. The potentials are measured in units of $4\pi e^{2}$ so that the Thomas Fermi and Lindhard potentials at $q=0$ are simply $1/q_{TF}^{2}$. Without exchange and correlation, $G_{+}=0$, therefore $\Lambda=1$ and the Lindhard result is obtained. The electron-test charge interaction $V_{et}$ is simply the test charge-test charge interaction multiplied by the vertex function.
$V_{et}=\Lambda\,V_{tt}=\frac{V_{ext}}{1+(1-G_{+})Q}$ (26) For these interactions and response functions of the electron gas in the metallic region, the large $q$ behavior of the local field factor is not of significant importance in most applications because the Lindhard function cuts off quickly above $q=2\,k_{F}$. Figure 17: The test charge-test charge interactions for $r_{s}=2$ (a) and $5$ (b) have been numerically Fourier transformed and are plotted versus distance. The Fourier transform of the Lindhard function is shown for comparison. The data are shown from $r/(r_{s}\,a_{0})=0.5-2.2$. The $q=0$ value of the interaction is set by the compressibility sum rule. $V_{tt}(0)=(\kappa_{0}/\kappa)/q_{TF}^{2}$ is always less than the Lindhard or Thomas Fermi value. Both are virtually identical above $2.5\,k_{F}$. The $q$ dependence of $G_{+}(q)$ is only important between $0$ and $2\,k_{F}$. $V_{tt}(q)$ in Fig. 16 looks qualitatively like the Lindhard potential at $r_{s}=2$, but is dramatically different at $r_{s}\!=\!5$. This is due to the compressibility sum rule, which fixes the value at $q=0$, while $G_{+}(q)$ interpolates between $q=0$ and $2\,k_{F}$. According to the compressibility sum rule, $V_{tt}(q=0)=0$ at the compressibility divergence at $r_{s}=5.25$, and becomes negative at larger $r_{s}$. It must turn positive again and match $1/q^{2}$ at $q$ beyond $2\,k_{F}$. We do not have a physical intuition for this “overscreening” behavior resulting from a negative dielectric function. Figure 18: The test charge-test charge interactions for $r_{s}=2$ (a) and $5$ (b) have been numerically Fourier transformed and are plotted versus distance. The Fourier transform of the Lindhard function is shown for comparison.
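The chain $Q=v\Pi^{0}$, $\Lambda=1/(1-G_{+}Q)$, $\epsilon=1+\Lambda Q$, $V_{tt}=v/\epsilon$, $V_{et}=\Lambda V_{tt}$ is straightforward to evaluate numerically with the quadratic $G_{+}(q)$. A minimal sketch (potentials in units of $4\pi e^{2}$, wave vectors in units of $k_{F}$); the values of $\kappa_{0}/\kappa$ and $q_{TF}$ are illustrative placeholders rather than the paper's:

```python
# Static screened interactions, Eqs. (24)-(26), with the quadratic G_+ of
# Eq. (23).  Units: potentials in 4 pi e^2, wave vectors in k_F.
# kappa0_over_kappa = 0.5 and qTF = 1.0 are ILLUSTRATIVE inputs only.
import numpy as np

def lindhard_F(x):
    """Static Lindhard function F(q / 2 k_F), normalized so that F(0) = 1."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        F = 0.5 + (1.0 - x**2) / (4.0 * x) * np.log(np.abs((1.0 + x) / (1.0 - x)))
    return np.where(x == 0.0, 1.0, F)

def interactions(q, qTF=1.0, kappa0_over_kappa=0.5):
    Q = (qTF / q) ** 2 * lindhard_F(q / 2.0)          # Q = v Pi0
    G = (1.0 - kappa0_over_kappa) * (q / qTF) ** 2    # quadratic G_+, Eq. (23)
    Lam = 1.0 / (1.0 - G * Q)                         # vertex correction
    eps = 1.0 + Lam * Q                               # Eq. (25)
    v = 1.0 / q**2                                    # bare Coulomb
    Vtt = v / eps                                     # Eq. (24)
    Vet = Lam * Vtt                                   # Eq. (26)
    return Vtt, Vet, Lam

Vtt0, Vet0, Lam0 = interactions(np.array(1e-4))       # probe the q -> 0 limit
```

At $q\rightarrow 0$ this reproduces the sum-rule limits quoted in the text: $\Lambda(0)=\kappa/\kappa_{0}$, $V_{tt}(0)=(\kappa_{0}/\kappa)/q_{TF}^{2}$, and $V_{et}(0)=1/q_{TF}^{2}$.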
The data are shown from $r/(r_{s}\,a_{0})=2.2-8$ with a factor of 100 magnification compared to Fig. 17. Near the compressibility divergence, the vertex correction, and thus the dielectric function and $V_{tt}$, are extremely sensitive to small changes, and pressure may be an interesting variable. When applying this formula to real metals, the effective mass and core polarization will re-normalize the equations to make the effective $r_{s}^{\ast}$ lower than the actual physical $r_{s}$. Another Ward identity specifying the renormalization factor $z$ must also be considered. The Fourier transforms of the test charge-test charge potentials are shown in Fig. 17 and Fig. 18, compared to the Lindhard potential. Although it is not shown in Fig. 17, $V_{tt}$ and $V_{\text{Lindhard}}$ converge at small distances $r/(r_{s}a_{0})<0.5$ (derived from $q>2\,k_{F}$) and become equal to the bare interaction at even smaller distances. At intermediate distances (derived from intermediate $q$), $V_{tt}$ is less repulsive than $V_{\text{Lindhard}}$. At large distances the oscillations in $V_{tt}$ are larger than the Friedel oscillations in $V_{\text{Lindhard}}$. This is dramatically different for $r_{s}=5$, where there is a broad attractive region around the test charge from $r/(r_{s}a_{0})=0.6$ to $1.4$. The oscillations at larger distances also have a larger amplitude. Figure 19: The electron-test charge interaction $V_{et}(q)$ at $r_{s}=2$ and $5$. The dashed curve is the Lindhard potential. The lattice spacing of alkali metals is approximately 1.1 times $r_{s}$, and the diameter of the core electrons is about 0.5 times $r_{s}$. The attractive minimum for $r_{s}=5$ located at $r/(r_{s}a_{0})=0.8$ has a depth of $0.7\,eV$, compared to a cohesive energy of rubidium of $0.85\,eV$.
This attraction can be part of the explanation for the contraction of the inter-atomic spacing in liquid rubidium that is observed in x-ray scattering experiments as a function of pressure and temperature Matsuda et al. (2007). The electron-test charge interaction $V_{et}$ is shown in Fig. 19. The electron-test charge interaction at $q=0$ is equal to $1/q_{TF}^{2}$, which is the same as the Lindhard and Thomas-Fermi interactions (obtained by setting $G_{+}=0$). The effect of the vertex correction in the numerator is canceled by the vertex correction in the dielectric function at $q=0$. The effect of exchange and correlation only occurs between zero and about $2.5\,k_{F}$. For $r_{s}=2$, the effects of exchange and correlation are small. The effects are larger for $r_{s}=5$. A recent paper Hagiya et al. (2020) calculated the static density response function of lithium from a Kramers-Kronig transformation of the dynamic structure factor measured by inelastic electron scattering. Figure 20 was prepared by T. Hagiya, the lead experimental author, using the formula for $\chi(q)$ including exchange and correlation provided by us, compared to the Random Phase Approximation. The density response function is given by: $\displaystyle\chi(q)=\frac{\Pi^{0}}{(1-G_{+}Q)(1+Q/(1-G_{+}Q))}=\frac{\lambda\Pi^{0}}{\varepsilon}.$ (27) The RPA response function is obtained by setting $G_{+}\\!=\\!0$. Figure 20: Measured static density response function for lithium from Ref. Hagiya et al. (2020) compared to the theoretical value with no adjustable parameters. The top curve includes exchange and correlation using $G_{+}(q)$ and the bottom curve uses the Random Phase Approximation, which is obtained by setting $G_{+}(q)=0$. The theory has no adjustable parameters. The experimentalists point out that their data is not accurate enough to definitively distinguish between the response functions using exchange and correlation and the RPA.
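As a consistency check, the denominator in Eq. (27) expands as $(1-G_{+}Q)(1+Q/(1-G_{+}Q))=(1-G_{+}Q)+Q=1+(1-G_{+})Q=\varepsilon$, matching Eq. (26). A minimal numerical sketch of this static dielectric function is given below; it uses the standard normalized static Lindhard function and, purely as an illustrative assumption, Hubbard's simple local field factor rather than the $G_{+}(q)$ of this work.

```python
import numpy as np

def lindhard_static(x):
    """Normalized static Lindhard function F(x), with x = q / (2 k_F).
    F(0) = 1, F(1) = 1/2, and F -> 0 as x -> infinity."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        f = 0.5 + (1 - x**2) / (4 * x) * np.log(np.abs((1 + x) / (1 - x)))
    # The logarithm diverges at x = 1, but its prefactor vanishes; the limit is 1/2.
    return np.where(np.isclose(x, 1.0), 0.5, f)

def hubbard_G(q, k_F=1.0):
    """Hubbard's simple local field factor (illustrative choice only)."""
    return q**2 / (2 * (q**2 + k_F**2))

def epsilon(q, k_F=1.0, q_TF=1.0, G_plus=None):
    """Static dielectric function eps = 1 + (1 - G_+) Q, with
    Q(q) = (q_TF^2 / q^2) F(q / 2 k_F).  G_plus=None gives the RPA
    limit G_+ = 0; Thomas-Fermi would correspond to F = 1 instead."""
    Q = (q_TF**2 / q**2) * lindhard_static(q / (2 * k_F))
    G = 0.0 if G_plus is None else G_plus(q, k_F)
    return 1.0 + (1.0 - G) * Q
```

Because $G_{+}>0$, the local field factor always weakens the screening relative to the RPA at the same $q$, consistent with the qualitative behavior discussed above.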
Nevertheless, the experimental results are very impressive, as is the data analysis. ## References * Kukkonen and Overhauser (1979) C. A. Kukkonen and A. Overhauser, Physical Review B 20, 550 (1979). * Chen and Haule (2019) K. Chen and K. Haule, Nature Communications 10, 1 (2019). * Giuliani and Vignale (2005) G. Giuliani and G. Vignale, _Quantum theory of the electron liquid_ (Cambridge University Press, 2005). * Vignale and Singwi (1985a) G. Vignale and K. Singwi, Physical Review B 32, 2156 (1985a). * Vignale and Singwi (1985b) G. Vignale and K. S. Singwi, Physical Review B 31, 2729 (1985b). * Takada (1993) Y. Takada, Physical Review B 47, 5202 (1993). * Richardson and Ashcroft (1996) C. Richardson and N. Ashcroft, Physical Review B 54, R764 (1996). * Kukkonen and Wilkins (1979) C. A. Kukkonen and J. W. Wilkins, Physical Review B 19, 6075 (1979). * Perdew and Wang (2018) J. P. Perdew and Y. Wang, Physical Review B 98, 079904 (2018). * Ashcroft and Mermin (1976) N. W. Ashcroft and N. D. Mermin, _Solid state physics_ (New York: Holt, Rinehart and Winston, 1976). * Richardson and Ashcroft (1997) C. Richardson and N. Ashcroft, Physical Review B 55, 15130 (1997). * Hubbard (1958) J. Hubbard, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 243, 336 (1958). * Geldart and Vosko (1966) D. Geldart and S. Vosko, Canadian Journal of Physics 44, 2137 (1966). * Vashishta and Singwi (1972) P. Vashishta and K. Singwi, Physical Review B 6, 875 (1972). * Moroni et al. (1995) S. Moroni, D. M. Ceperley, and G. Senatore, Physical Review Letters 75, 689 (1995). * Corradini et al. (1998) M. Corradini, R. Del Sole, G. Onida, and M. Palummo, Physical Review B 57, 14569 (1998). * Richardson and Ashcroft (1994) C. Richardson and N. Ashcroft, Physical Review B 50, 8170 (1994). * Simion and Giuliani (2008) G. E. Simion and G. F. Giuliani, Physical Review B 77, 035131 (2008). * Hellal et al. (2003) S. Hellal, J.-G. Gasser, and A.
Issolah, Physical Review B 68, 094204 (2003). * Taylor (1978) R. Taylor, Journal of Physics F: Metal Physics 8, 1699 (1978). * Matsuda et al. (2007) K. Matsuda, K. Tamura, and M. Inui, Physical Review Letters 98, 096401 (2007). * Hagiya et al. (2020) T. Hagiya, K. Matsuda, N. Hiraoka, Y. Kajihara, K. Kimura, and M. Inui, Physical Review B 102, 054208 (2020).
# Continual Learning of Visual Concepts for Robots through Limited Supervision Ali Ayub<EMAIL_ADDRESS>The Pennsylvania State University, State College, PA, USA and Alan R. Wagner<EMAIL_ADDRESS>The Pennsylvania State University, State College, PA, USA (2021) ###### Abstract. For many real-world robotics applications, robots need to continually adapt and learn new concepts. Further, robots need to learn through limited data because of the scarcity of labeled data in real-world environments. To this end, my research focuses on developing robots that continually learn in dynamic unseen environments/scenarios, learn from limited human supervision, remember previously learned knowledge and use that knowledge to learn new concepts. I develop machine learning models that not only produce state-of-the-art results on benchmark datasets but also allow robots to learn new objects and scenes in unconstrained environments, which leads to a variety of novel robotics applications. ††copyright: rightsretained††journalyear: 2021††conference: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction; March 8–11, 2021; Boulder, CO, USA††booktitle: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’21 Companion), March 8–11, 2021, Boulder, CO, USA††doi: 10.1145/3434074.3446357††isbn: 978-1-4503-8290-8/21/03 ## 1\. Introduction Continual adaptation and learning through limited data is the hallmark of human intelligence. Humans continue to learn new concepts over their lifetime without the need to relearn most previous concepts. With robots becoming an integral part of our society, they must also continue to learn over their lifetime to adapt to ever-changing environments. Further, in real-world applications, robots do not have access to a large amount of labeled data since it is impractical for human users to provide hundreds of examples to the robot. Thus, robots must learn using a small amount of data through limited human supervision.
The long-term goal of my research is to develop autonomous robots for everyday environments where they can learn over their lifetime and use the learned knowledge to assist humans in their daily lives. Creating robots that continually learn is a challenging problem. Deep learning is widely used to address many robot learning tasks, yet deep learning suffers from a phenomenon called catastrophic forgetting when learning continually. Catastrophic forgetting occurs when a model (neural network) is continually trained to recognize new classes: the model forgets the previously learned classes and the overall classification accuracy decreases. One way to address this problem is to store the complete data of the previously learned classes. However, storing data of the previous classes requires a large amount of memory when learning new classes continually. Robots, on the other hand, have limited on-board memory available; hence they cannot keep storing high-dimensional images of previous classes. In real-world scenarios, labeling a large amount of data is costly in terms of time and effort. Hence, robots have to learn from a small number of interactions with likely impatient human users. Deep learning systems, however, require a large amount of labeled data for learning. To tackle these challenges, my work develops machine learning and computer vision techniques inspired by concept learning models from cognitive science. My work is informed by higher-level concept learning in children (and all humans) related to curiosity-driven, intrinsically motivated, continual learning of visual concepts (objects and scenes). ## 2\. Related Work Recent continual learning techniques use deep neural networks and rely on storing a fraction of old class data when learning a set of new classes (Rebuffi et al., 2017; Castro et al., 2018).
To avoid storage of real samples, some approaches use generative memory and regenerate samples of old classes using GANs or autoencoders (Ayub and Wagner, 2020d, 2021; Ostapenko et al., 2019); however, the performance of these approaches is generally inferior to approaches that store real images. One major issue with all these prior continual learning approaches is that they require a large amount of training data. Hence, using these methods for continual learning from limited data results in poor accuracy. Curiosity-driven learning has been explored for robotics applications in the past to learn from limited data and supervision. In recent years, some deep reinforcement learning approaches have been proposed that use a curiosity-driven reward function (Burda et al., 2019; Haber et al., 2018) to train neural networks. For object learning, many researchers have presented active learning techniques using uncertainty sampling (Beluch et al., 2018; Gal et al., 2017; Siddiqui et al., 2020; Yoo and Kweon, 2019; Shen et al., 2019). All of these approaches train deep networks using specific loss terms such that the network can predict the most uncertain samples. Although these approaches produced good results on small, simple image datasets like MNIST (LeChun, 1998), they were not tested on a real robot. One of the main limitations of prior curiosity-driven and active learning approaches is that they are designed for a batch learning setting and will thus suffer from catastrophic forgetting when attempting to learn continually. In contrast, we present a novel approach that not only allows a robot to learn from visual data continually but also allows it to assign curiosity scores to unlabeled objects in a self-supervised manner. ## 3\. Methodology In this work, I consider a general continual learning setup for learning visual categories (object or scene classes).
In each new increment $t$, the robot gets a small set of labeled samples $S_{t}=\\{(x_{i}^{t},y_{i}^{t})\\}_{i=1}^{n^{t}}$, where $x_{i}^{t}\in\mathcal{X}$ are the visual samples (images) and $y_{i}^{t}$ are their ground truth labels. The samples in an increment can belong to earlier learned classes or completely new classes. Further, the robot has limited storage capacity; thus it cannot store the high-dimensional images of the previously learned categories. To learn new objects or scenes, the robot first acquires new image data autonomously using its own cameras. The category labels for the images are provided by the human in a textual format. I then use a neural network pre-trained on a large dataset (e.g. ImageNet (Russakovsky et al., 2015)) to extract feature vectors for the images. Then, I apply a novel cognitively-inspired clustering approach (called Agg-Var clustering) on the feature vectors of the images to learn centroids and covariance matrices for the visual categories. In Agg-Var clustering, the model finds the Euclidean distance of a new $i$th feature vector $x_{i}^{y}$ of a class $y$ to the previously learned centroids of the class. If the distance is below a pre-defined distance threshold $D$ (a hyperparameter), the model performs memory integration (Mack et al., 2018) by updating the closest centroid and the corresponding covariance matrix using the new feature vector. If the distance is above the distance threshold $D$, the model performs pattern separation (Mack et al., 2018) by creating a new centroid initialized with $x_{i}^{y}$ and a new covariance matrix initialized with a zero matrix. In this way, the model obtains a set of centroids and covariance matrices for each class separately. Note that even a small number of images per class is enough to learn the centroid/covariance matrix representation for a class; hence, my model can be used to learn from limited labeled data.
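The memory integration / pattern separation rule above can be sketched in a few lines. This is a minimal sketch: it keeps only per-class centroids and sample counts, omits the running covariance update for brevity, and assumes a simple running-mean update rule for the closest centroid.

```python
import numpy as np

class AggVarClusters:
    """Minimal sketch of the Agg-Var clustering update described above."""

    def __init__(self, D):
        self.D = D            # pre-defined distance threshold (hyperparameter)
        self.centroids = {}   # class label -> list of centroid vectors
        self.counts = {}      # class label -> per-centroid sample counts

    def add(self, x, y):
        x = np.asarray(x, dtype=float)
        if y not in self.centroids:          # first sample of a new class
            self.centroids[y] = [x.copy()]
            self.counts[y] = [1]
            return
        # Euclidean distance of the new feature vector to each class centroid
        dists = [np.linalg.norm(x - c) for c in self.centroids[y]]
        j = int(np.argmin(dists))
        if dists[j] < self.D:                # memory integration
            n = self.counts[y][j]
            self.centroids[y][j] = (n * self.centroids[y][j] + x) / (n + 1)
            self.counts[y][j] = n + 1
        else:                                # pattern separation
            self.centroids[y].append(x.copy())
            self.counts[y].append(1)
```

In the full model, each centroid also carries a covariance matrix that is updated on memory integration and initialized to zero on pattern separation.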
For classification of test images, I use the pseudorehearsal technique (Robins, 1995), in which I use the centroids and covariance matrices of the old classes as parameters of Gaussian distributions. I then sample these Gaussian distributions to generate pseudo-exemplars for the old classes. A shallow neural network classifier with a single linear layer is then trained using the pseudo-exemplars and the feature vectors of the images in the current increment. In this way, the model mitigates catastrophic forgetting. ## 4\. Past, Current and Future Work Towards the goal of creating continually learning robots, the first project in my PhD focused on the few-shot incremental learning (FSIL) problem, in which the robot learns continually from a small number of object examples provided by a human. I developed a novel approach termed Centroid-Based Concept Learning (CBCL) to tackle this problem (Ayub and Wagner, 2020b). CBCL’s classification accuracy was significantly higher than that of state-of-the-art (SOTA) incremental learning approaches on benchmark datasets (Table 1). I then applied CBCL on a real robot for a cleaning application, in which the robot learns household objects from a few visual examples provided by a human and organizes related objects from a clutter of objects. This research demonstrated that my method could be capable of dynamically learning task- or situation-specific objects (Ayub and Wagner, 2020e). I also showed that CBCL is a general approach and can be applied to other tasks, such as RGB-D indoor scene classification (Ayub and Wagner, 2020a). In real-world environments, robots must learn from streaming data that lacks well-defined task boundaries (online learning). The lack of task boundaries and the unknown number of categories make this problem harder than FSIL. I developed an updated version of CBCL, termed Centroid-Based Concept Learning with Pseudorehearsal (CBCL-PR), for online learning.
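The pseudo-exemplar generation at the heart of the pseudorehearsal step described in Section 3 can be sketched as follows. The per-centroid sample count and the (centroid, covariance) data layout here are illustrative assumptions, not values taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pseudo_exemplars(centroids, covariances, n_per_centroid=20):
    """Sample pseudo-exemplars for previously learned classes.

    centroids:    per-class list of centroid vectors
    covariances:  per-class list of covariance matrices (same layout)
    Each (centroid, covariance) pair parameterizes a Gaussian that is
    sampled to regenerate feature vectors for its class.
    """
    X, y = [], []
    for label, (mus, covs) in enumerate(zip(centroids, covariances)):
        for mu, cov in zip(mus, covs):
            X.append(rng.multivariate_normal(mu, cov, size=n_per_centroid))
            y.extend([label] * n_per_centroid)
    return np.concatenate(X), np.array(y)
```

The resulting pseudo-exemplars, together with the real feature vectors of the current increment, would then be used to train the single-linear-layer classifier, so that old classes are rehearsed without storing their images.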
CBCL-PR significantly outperformed SOTA approaches on a benchmark dataset in terms of detecting known and unknown scene categories. I then applied CBCL-PR on the Pepper robot, in which the robot wandered in unconstrained real-world environments to learn new scene categories and detect previously unknown scene categories (Ayub et al., [n.d.]).

Methods | iCaRL | EEIL | BiC | CBCL
---|---|---|---|---
Accuracy (%) | 63.75 | 64.02 | 64.84 | 69.85

Table 1. Comparison of CBCL with iCaRL (Rebuffi et al., 2017), EEIL (Castro et al., 2018) and BiC (Wu et al., 2019) for class-incremental learning with 10 classes per increment on the CIFAR-100 dataset (Krizhevsky, 2009). In follow-up research, I developed a system that used CBCL-PR for online learning of scenes/contexts and Dempster-Shafer theory to represent and learn appropriate norms related to different scenes in terms of conditional probabilities. My work was the first of its kind to examine online learning of norms for social robots. I tested this approach on Pepper, in which the robot wandered around different scenes and learned norms through simple Q/A sessions with a human. This research demonstrated that my approach may allow robots to learn different scene categories and use the recognition of these scenes to moderate their behavior and decision-making (Ayub and Wagner, 2020f). I am currently working on the curiosity-driven active online learning (CDAOL) problem, in which the robot has a large number of unlabeled objects available in an environment and must choose the most informative samples to be labeled. I am developing a novel approach to assign curiosity scores to new unlabeled objects in a self-supervised manner using the distance of the new objects from the previously learned centroids.
Preliminary experiments show that my approach can learn the most informative objects quickly without forgetting the previously learned objects, which results in a dramatic increase in accuracy over the other approaches, especially in the earlier increments (Ayub and Wagner, 2020c). For future work, I plan to apply the above-mentioned approach on a real robot. However, real-world robots have access to clutters of objects rather than single-object images. Further, capturing multiple views of individual objects in unconstrained environments through the robot’s own cameras without human assistance is challenging. To deal with this, I plan to develop a complete system to allow a robot to capture images of cluttered objects, localize all the objects in the clutter, get labels for the most informative objects, use a manipulator module to move its hands around the labeled objects to get different views of the objects, and finally train the CNN using the images of the new objects. I plan to test this system on a real robot in a lab environment with clutters of objects present at various locations with different backgrounds. The experiment will be performed over the course of one month at different times of the day, in which the robot will wander around in the environment and learn about the objects it is curious about by asking a human teacher. This experiment will be the first of its kind, demonstrating a true lifelong learning robot that learns a large number of objects (240 objects) in an unconstrained environment over a long period of time through limited human supervision. ## References * Ayub et al. ([n.d.]) Ali Ayub, Carter Fendley, and Alan R. Wagner. [n.d.]. Boundaryless Online Learning of Indoor Scenes by a Robot. In Review, IEEE International Conference on Robotics and Automation (ICRA), 2021. * Ayub and Wagner (2020a) Ali Ayub and Alan R. Wagner. 2020a. Centroid Based Concept Learning for RGB-D Indoor Scene Classification.
In _British Machine Vision Conference (BMVC)_. * Ayub and Wagner (2020b) Ali Ayub and Alan R. Wagner. 2020b. Cognitively-Inspired Model for Incremental Learning Using a Few Examples. In _The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_. * Ayub and Wagner (2020c) Ali Ayub and Alan R. Wagner. 2020c. Online Learning of Objects through Curiosity-Driven Active Learning. _IEEE RoMan (Workshop on Lifelong Learning for Long-term Human-Robot Interaction)_ (2020). * Ayub and Wagner (2020d) Ali Ayub and Alan R. Wagner. 2020d. Storing Encoded Episodes as Concepts for Continual Learning. _arXiv:2007.06637_ (2020). * Ayub and Wagner (2020e) Ali Ayub and Alan R. Wagner. 2020e. Tell me what this is: Few-Shot Incremental Object Learning by a Robot. _arXiv:2008.00819_ (2020). * Ayub and Wagner (2020f) Ali Ayub and Alan R. Wagner. 2020f. What am I allowed to do here?: Online Learning of Context-Specific Norms by Pepper. In _International Conference on Social Robotics_. * Ayub and Wagner (2021) Ali Ayub and Alan R. Wagner. 2021. EEC: Learning to Encode and Regenerate Images for Continual Learning. In _International Conference on Learning Representations (ICLR)_. https://openreview.net/forum?id=lWaz5a9lcFU * Beluch et al. (2018) William H. Beluch, Tim Genewein, Andreas Nürnberger, and Jan M. Köhler. 2018. The Power of Ensembles for Active Learning in Image Classification. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. * Burda et al. (2019) Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A. Efros. 2019\. Large-Scale Study of Curiosity-Driven Learning. In _International Conference on Learning Representations_. * Castro et al. (2018) Francisco M. Castro, Manuel J. Marin-Jimenez, Nicolas Guil, Cordelia Schmid, and Karteek Alahari. 2018\. End-to-End Incremental Learning. In _The European Conference on Computer Vision (ECCV)_. 233–248. * Gal et al. 
(2017) Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian Active Learning with Image Data. In _Proceedings of the 34th International Conference on Machine Learning - Volume 70_ (Sydney, NSW, Australia). JMLR.org, 1183–1192. * Haber et al. (2018) Nick Haber, Damian Mrowca, Stephanie Wang, Li F Fei-Fei, and Daniel L Yamins. 2018. Learning to Play With Intrinsically-Motivated, Self-Aware Agents. In _Advances in Neural Information Processing Systems 31_ , S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.). 8388–8399. * Krizhevsky (2009) Alex Krizhevsky. 2009\. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto. * LeChun (1998) Y. LeChun. 1998\. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/ * Mack et al. (2018) Michael L. Mack, Bradley C. Love, and Alison R. Preston. 2018. Building concepts one episode at a time: The hippocampus and concept formation. _Neuroscience Letters_ 680 (2018), 31–38. * Ostapenko et al. (2019) Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jahnichen, and Moin Nabi. 2019\. Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. 11321–11329. * Rebuffi et al. (2017) Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. 2017. iCaRL: Incremental Classifier and Representation Learning. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. 2001–2010. * Robins (1995) Anthony Robins. 1995\. Catastrophic Forgetting, Rehearsal and Pseudorehearsal. _Connection Science_ 7, 2 (1995), 123–146. * Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015\. ImageNet Large Scale Visual Recognition Challenge. _Int. J. 
Comput. Vision_ 115, 3 (Dec. 2015), 211–252. * Shen et al. (2019) Tingke Shen, Amlan Kar, and Sanja Fidler. 2019. Learning to Caption Images Through a Lifetime by Asking Questions. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_. * Siddiqui et al. (2020) Yawar Siddiqui, Julien Valentin, and Matthias Niessner. 2020\. ViewAL: Active Learning With Viewpoint Entropy for Semantic Segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_. * Wu et al. (2019) Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. 2019. Large Scale Incremental Learning. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. * Yoo and Kweon (2019) Donggeun Yoo and In So Kweon. 2019. Learning Loss for Active Learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
# Generic Event Boundary Detection: A Benchmark for Event Segmentation Mike Zheng Shou Facebook AI National University of Singapore Stan Weixian Lei National University of Singapore Weiyao Wang Facebook AI Deepti Ghadiyaram Facebook AI Matt Feiszli Facebook AI ###### Abstract This paper presents a novel task together with a new benchmark for detecting generic, taxonomy-free event boundaries that segment a whole video into chunks. Conventional work in temporal video segmentation and action detection focuses on localizing pre-defined action categories and thus does not scale to generic videos. Cognitive science has known since the last century that humans consistently segment videos into meaningful temporal chunks. This segmentation happens naturally, without pre-defined event categories and without being explicitly asked to do so. Here, we repeat these cognitive experiments on mainstream CV datasets; with our novel annotation guideline, which addresses the complexities of taxonomy-free event boundary annotation, we introduce the task of Generic Event Boundary Detection (GEBD) and the new benchmark Kinetics-GEBD. We view GEBD as an important stepping stone towards understanding the video as a whole, and believe it has been previously neglected due to a lack of proper task definition and annotations. Through experiments and a human study we demonstrate the value of the annotations. Further, we benchmark supervised and unsupervised GEBD approaches on the TAPOS dataset and our Kinetics-GEBD. We release our annotations and baseline codes at CVPR’21 LOVEU Challenge: https://sites.google.com/view/loveucvpr21. ## 1 Introduction Figure 1: Examples of generic event boundaries: 1) A long jump is segmented at a shot cut, then between actions of Run, Jump and Stand up (dominant subject in red circle); 2) a color/brightness change; 3) a new subject appears.
 | #videos | #segments | #boundaries | video domain | boundary cause | #boundary classes | #annotations per video
---|---|---|---|---|---|---|---
THUMOS’14 | 2700 | 18K | 36K | sports | action | 20 | 1
ActivityNet v1.3 | 27801 | 23K | 46K | in-the-wild | action | 203 | 1
Charades | 67000 | 10K | 20K | household | action | 157 | 1
HACS Segments | 50000 | 139K | 278K | in-the-wild | action | 200 | 1
AVA | 214 | 197K | 394K | movie | action | 80 | 1
EPIC-Kitchens | 432 | 39K | 79K | kitchen | action | 2747, open-vocab | 1
EPIC-Kitchens-100 | 700 | 89K | 179K | kitchen | action | 4053, open-vocab | 1
TAPOS Instances | 16294 | 48K | 33K | sports | action | open-vocab | 1
Kinetics-GEBD (raw) | 55351 | 1771K | 1498K | in-the-wild | generic | taxonomy-free | 4.93
Kinetics-GEBD (clean) | 54691 | 1561K | 1290K | in-the-wild | generic | taxonomy-free | 4.94

Table 1: Comparing our Kinetics-GEBD with other video boundary datasets. Our Kinetics-GEBD has the largest number of temporal boundaries (e.g. 32x of ActivityNet, 8x of EPIC-Kitchens-100), spans a broad spectrum of video domains in the wild rather than being sports- or kitchen-centric, is open-vocabulary rather than building on a pre-defined taxonomy, contains boundaries caused by not only action change but also generic event change, and has almost 5 annotations per video to capture human perception differences and therefore ensure diversity. Note that for ActivityNet and TAPOS, since the ground truths of the test sets are withheld, we do not include #segments and #boundaries of their test sets. Cognitive science tells us [51] that humans perceive video in terms of “events” (goal-directed sequences of actions, like “washing a car” or “cooking a meal”), and further, people segment events naturally and spontaneously while perceiving video, breaking down longer events into a series of shorter temporal units. However, mainstream SOTA video models [49, 50, 13, 7, 28, 12] still commonly process short clips (e.g.
1s long), followed by some kind of pooling operation to generate video-level predictions. Recent years have seen significant progress in temporal action detection [8, 9, 14], segmentation [24, 3, 21, 11] and parsing [37, 40] in videos. Despite this, we have not seen major developments in modeling long-form video. Cognitive science suggests that one underlying deficit is event segmentation: unlike our SOTA models, humans naturally divide video into meaningful units and can reason about these units. While our current methods build upon limited sets of predefined action classes, humans perceive a broad and diverse set of segment boundaries without any predefined target classes. To enable machines to develop such an ability, we propose a new task called Generic Event Boundary Detection (GEBD), which aims at localizing the moments where humans naturally perceive event boundaries. As Fig. 1 shows, our event boundaries could happen at the moments where the action changes (e.g. Run to Jump), the subject changes (e.g. a new person appears), or the environment changes (e.g. the scene suddenly becomes bright). To annotate ground truths of such taxonomy-free event boundaries, the common strategies used by the existing temporal tasks with a pre-defined taxonomy do not work: 1. 1. Existing tasks require us to manually define each target class carefully, i.e. its semantic differentiators compared to other classes. But it is impractical to enumerate and manually define all candidate generic event boundary classes. 2. 2. The existing tasks typically focus on shot and action boundaries, neglecting other generic event boundaries such as those shown in Fig. 1, like a change of subject. In this paper, we propose to follow cognitive experiments [51] in annotating event boundaries on computer vision datasets. We choose the popular Kinetics [20] dataset as our video source and construct a new event segmentation benchmark, Kinetics-GEBD.
The marked boundaries are relatively consistent across different annotators; the main challenge raising ambiguity is the level of detail. For example, one annotator might mark boundaries at the beginning and end of a dance sequence, while another might annotate every dance move. We develop several novel principles in designing the annotation guideline to ensure a consistent level of detail across different annotators while explicitly capturing the human perception differences with a multi-review protocol. Our new GEBD task and benchmark will be valuable in: 1. 1. Immediately supporting applications like video editing, summarization, keyframe selection, and highlight detection. Event boundaries divide a video into natural, meaningful units and can rule out unnatural cuts in the middle of a unit, for example. 2. 2. Spurring progress in long-form video; GEBD is a first step towards segmenting video into meaningful units and enabling further reasoning based on these units. In summary, our contributions are four-fold: * • A new task and benchmark, Kinetics-GEBD, for detecting generic event boundaries without the need for a predefined target event taxonomy. * • We propose novel annotation task design principles that are effective yet easy for annotators to follow. We disambiguate what shall be annotated as event boundaries while preserving diversity across individuals in the annotation. * • We benchmark a number of supervised and unsupervised methods on the TAPOS [40] dataset and our Kinetics-GEBD. * • We demonstrate the value of our event boundaries on downstream applications including video-level classification and video summarization. ## 2 Related Work Temporal Action Detection or localization methods attempt to detect the start time and end time for action instances in untrimmed, long videos. Standard benchmarks include THUMOS [19], ActivityNet [1], HACS [55], etc.
All of them target a list of specified action classes and manually define the criteria for determining the start point and end point of each action, preventing annotations at scale. Numerous methods have been developed for temporal action detection [8, 9, 14, 31, 42, 44, 4, 30, 56, 53, 33]. Notably, many of them contain a temporal proposal module which solves a binary classification problem analogous to foreground-background segmentation. “Background” segments contain no pre-defined action classes. However, many other generic events could appear in background segments, and segmenting generic events is the main focus in this paper. Temporal Action Segmentation [24, 3, 21, 11, 26, 2, 27, 39, 18] means labeling the action classes in every frame. Some popular benchmarks are 50Salads [47], GTEA [26], Breakfast [21, 22], MERL Shopping [45], etc. Another task called Temporal Action Parsing was recently proposed in [40]; parsing aims to detect the temporal boundaries for segmenting an action into sub-actions. This is more closely related to our current work. However, these annotations and methods are also developed for pre-defined action classes only, not generic boundaries. Shot Boundary Detection is a classical task to detect shot transitions due to video editing such as scene cuts, fades/dissolves, and panning. Some recent works are [5, 15, 48, 41, 46]. These shot boundaries are well-defined and an overcomplete set is easy to detect since the changes between shots are often significant. In this paper, we also annotate and detect shot boundaries in our Kinetics-GEBD benchmark; however, the main novelty lies in event boundaries, which are useful for breaking generic videos into semantically coherent subparts. ## 3 Definition of the GEBD Task ### 3.1 Task Definition GEBD localizes the moments where humans naturally perceive taxonomy-free event boundaries that break a longer event into shorter temporal segments.
To obtain ground truth annotations, we begin with the cognitive experiments’ guideline [51], which achieved consistent boundaries marked by different annotators. However, the cognitive experiments typically cover a limited number of scenarios in simple videos, e.g. a single actor, free of distractions from the event of interest. We target diverse, natural human activity videos like Kinetics [20], which contain multiple actors, background distractions, different levels of detail in both space and time, etc. Thus, there is more ambiguity about where the event boundaries lie.

### 3.2 Principles for Designing the Annotation Guideline

To overcome these ambiguities in natural videos, we arrived at the following design principles through multiple iterations of improving the annotation guideline.

(a) Detail in space: Focus on the dominant subject. To avoid being distracted by background events, annotators shall focus on the salient subject performing the event. The subject could be a person, a group, an object, or a collection of objects, depending on the video content.

(b) Detail in time: Find event boundaries at “1 level deeper” granularity compared to the video-level event. A given video can be segmented at different temporal granularities. For example, the event boundaries of a long jump video could be 1) coarse: Long Jump starts / ends, or 2) intermediate: Long Jump is broken into running, jumping, and landing, or 3) fine: every foot step. All variants are legitimate segmentations. We embrace this ambiguity to a limited degree: we instructed annotators to mark boundaries “1 level deeper” than the video-level event, and provided some examples but no precise definition of “1 level”. Sometimes there is no single video-level event; yet the merit of this principle is to ensure that the segmented subparts are at the same level of granularity. This technique can be applied recursively to the segmented subparts when finer granularity is desired.
With this principle implemented, we find that humans can reliably agree on event boundaries without the need for a hand-crafted event boundary taxonomy.

(c) Diversity of perception: Use multi-review. Sometimes people have different interpretations of “1 level deeper” and go slightly deeper or coarser. For example, in a video of two consecutive Long Jump instances, some might segment the two instances of long jump, while others would segment the running and jumping units. In practice, we consider both correct and find that one video usually has at most 2-3 such possible variations, due to differences in human perception rather than ambiguity in the task definition. Thus, to capture such diversity, we assign 5 annotators to each video, based on the rule of thumb in user experience research.

(d) Annotation format: Timestamps vs Time Ranges. The above principles clarify when to mark an event boundary. The remaining question is where to mark it. Following previous works, we can accommodate some ambiguity in “where” during evaluation by varying an error tolerance threshold; more details can be found in Sec. 3.3. We provide two options for marking an event boundary: 1) A single “Timestamp”, typically used for an instantaneous change (e.g. the moment when jumping begins in long jump). 2) A time “Range”, typically used for a short yet gradual change, e.g. the interval between the end of landing and the start of standing up. More details of our annotation guideline for Kinetics-GEBD (e.g. our annotation interface, task rejection criteria, annotation format) can be found in Supp.

### 3.3 Evaluation Protocol

As described in Sec. 3.2, a boundary can be either a timestamp or a short range. If it is a range, we represent it by its middle timestamp during evaluation. Thus, our evaluation task is to measure the discrepancy between the detected timestamp and the ground truth timestamp, regardless of their types or semantic meanings.
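This evaluation reduces to converting every boundary to a timestamp and matching detections to ground-truth boundaries within an error tolerance. A minimal sketch follows; the greedy closest-first assignment is our assumption for illustration, and the tolerance itself is defined via relative distance in Sec. 3.3:

```python
def to_timestamp(boundary):
    """A boundary is either a single timestamp (float) or a (start, end)
    range; ranges are represented by their middle timestamp."""
    if isinstance(boundary, tuple):
        start, end = boundary
        return 0.5 * (start + end)
    return boundary

def boundary_f1(detected, ground_truth, tolerance):
    """Precision/recall/F1 with one-to-one matching: each ground-truth
    boundary can be matched by at most one detection (no duplicate
    detections for the same boundary are counted)."""
    det = [to_timestamp(b) for b in detected]
    gt = [to_timestamp(b) for b in ground_truth]
    matched = [False] * len(gt)
    tp = 0
    for d in det:
        # greedily match to the closest still-unmatched ground truth
        candidates = [(abs(d - g), i) for i, g in enumerate(gt) if not matched[i]]
        if candidates:
            dist, i = min(candidates)
            if dist <= tolerance:
                matched[i] = True
                tp += 1
    precision = tp / len(det) if det else 0.0
    recall = tp / len(gt) if gt else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Since each rater's annotation is used separately, a detection result would be scored against each rater's boundary list in turn, keeping the highest F1.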
To measure the discrepancy between timestamps, we follow previous works such as temporal parsing of an action instance [40] and online detection of action start [43] and use the Relative Distance (Rel.Dis.) measurement. Inspired by the Intersection-over-Union measurement, Rel.Dis. is the error between the detected and ground truth timestamps, divided by the length of the corresponding whole action instance. Given a fixed threshold for Rel.Dis., we can determine whether a detection is correct (i.e. $\leq$ threshold) or incorrect (i.e. $>$ threshold), and then compute precision, recall, and F1 score for the whole dataset. Note that duplicate detections of the same boundary are not allowed. Also, each rater’s annotation is used separately: a detection result is compared against each rater’s annotation and the highest F1 score is treated as the final result. We have also explored other metrics, e.g. the Global/Local Consistency Error proposed in [35, 34], which are less appropriate here. Detailed discussions can be found in Supp.

## 4 Benchmark Creation: Kinetics-GEBD

### 4.1 Video Sources

Our Kinetics-GEBD Train Set contains 20K videos randomly selected from the Kinetics-400 Train Set [20], and our Kinetics-GEBD Test Set contains another 20K videos randomly selected from the Kinetics-400 Train Set. Our Val Set contains all 20K videos in the Kinetics-400 Val Set. Specifically, we rank all videos in the Kinetics-400 Train Set by video-level class and, from this ordered list, uniformly sample 20K videos as our Train Set and another 20K as our Test Set. Therefore, our videos have a similar distribution to Kinetics-400.

### 4.2 Annotator Training

To ramp up a new annotator, we provide a training curriculum consisting of a cascade of 5 training batches. Each training batch contains 100 randomly sampled Kinetics videos with some reference annotations.
We make it clear to the annotator that different people may segment the same video in different ways, so our provided annotations are only for reference. Once a batch is done, and before moving the annotator to the next batch, we review their annotations for all 100 videos and provide specific feedback regarding errors made due to misunderstanding or violation of the guideline. Overall, we observe steady improvements over the training batches for each new annotator.

### 4.3 Quality Assurance

We present our detailed quality assurance mechanism in Supp. Briefly, annotators were trained on 5 cascaded batches of 100 videos, with a QA mechanism before they worked on real jobs. Typical issues early in training included misunderstanding of the tool or guidelines, as well as annotating at too much or too little detail. Training videos were rated on a scale of 1 (good), 2 (minor errors like inaccurate timestamps), and 3 (bad, typically misunderstanding of guidelines). Raters progressed to real jobs when their average rating was deemed sufficient; in practice, an annotator's performance is satisfactory and acceptable if their average rating is below 1.3.

### 4.4 Common Characteristics of Boundary Causes

Cognitive studies [6] suggest that event boundaries can be characterized by several high-level causes. Throughout our pilot annotation tasks for refining the guideline, we confirmed this finding and arrived at the following high-level causes of event boundaries: (1) Change of Subject: a new subject appears or an old subject disappears, and that subject is dominant. (2) Change of Object of Interaction: the subject starts to interact with a new object or finishes with an old object. (3) Change of Action: an old action ends, or a new action starts. Note that this characteristic includes when the subject changes physical direction (e.g. a runner suddenly changes direction) and when the same action is performed multiple times (e.g. several consecutive push-up instances).
(4) Change in Environment: significant changes in color or brightness of the environment or the dominant subject (e.g. a light is turned on, illuminating a previously darker environment). Further, Shot Change boundaries are also common in Kinetics videos. Thus, we also annotate shot boundaries; the instructions can be found in Supp. In a video with multiple shots, the target granularity for event boundaries is 1 level deeper than the corresponding shot-level event. Sometimes an event boundary might be due to Multiple coupled causes or Others. As the distribution in Fig. 2 shows, Others is negligible and Change of Action is the most common cause. Note that the actions leading to boundaries in our dataset are much more generic and diverse than the pre-defined taxonomies in current CV action datasets.

Figure 2: Distribution of boundary causes on Kinetics-GEBD Val.

### 4.5 Annotation Results Summary and Analysis

Annotation capacity. In total, around 40 qualified annotators were trained to annotate our Kinetics-GEBD. The average speed is around 5 minutes per video per annotator.

Statistics of #annotations received. Recall that each video is annotated by 5 annotators. Annotators can reject a video for the reasons stated in Supp. Table 2 shows that most videos receive all 5 annotations without rejection.

#Annotations | 0 | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---|---
#videos | 101 | 141 | 203 | 342 | 805 | 18166
Per. (%) | 0.51 | 0.71 | 1.03 | 1.73 | 4.07 | 91.94

Table 2: For our Kinetics-GEBD Val set, #annotations received per video vs. #videos and its percentage.

The extent of consensus for GEBD annotation. Given the construction of the dataset, a natural question is “how consistent are the annotations?”. Adopting the protocol in Sec. 3.3, for the same video, we treat one annotation as ground truth and another annotation as the detection result.
Since we expect consistent annotations to have very close boundaries in time, we do not use relative distance; instead, we evaluate the F1 score based on the absolute distance between two boundaries, varying the threshold from 0.2s to 1s with a step of 0.2s, and calculate the average F1 score. By averaging the F1 score over all pairs of annotations for the same video, we obtain its consistency score. If all raters make very similar annotations, the consistency score will be high, i.e. towards 1; otherwise low, i.e. towards 0. Fig. 3 shows that the majority of videos have consistency scores higher than 0.5. This indicates that, given our designed task definition and annotation guideline, humans are able to reach a decent degree of consensus, taking into account that (1) a video can have multiple correct segmentations, often due to differences in human perception, and (2) annotators sometimes make mistakes.

Figure 3: Percentage of videos (below line) in each consistency-score range (above line) on our Kinetics-GEBD Val set, for videos not rejected by any annotator.

To understand how the frequency of annotation mistakes (i.e. annotation quality) correlates with the consistency score, in Table 3 we randomly sample 5 non-rejected videos for each consistency-score range and conduct manual auditing according to the protocol in Sec. 4.3 to get the average rating for each range. As the consistency score decreases, the rating worsens. Recall that the cutoff rating for determining qualified annotators is 1.3, which corresponds to a 0.5 consistency score here.

Consistency | (0.4,0.5] | (0.5,0.6] | (0.6,0.7] | (0.7,0.8] | (0.8,1]
---|---|---|---|---|---
Rating | 1.4 | 1.24 | 1.20 | 1.16 | 1.04

Table 3: Average audit rating vs. average F1 consistency score on our Kinetics-GEBD Val set.

### 4.6 Post-processing for the Annotations

Given the raw annotations, we conduct the following steps to construct our Kinetics-GEBD benchmark.
(1) To ensure annotation quality and remove very ambiguous videos, we exclude videos with a consistency score lower than 0.3. (2) To capture the diversity of human perception, we only keep videos that receive at least 3 annotations. During evaluation, the detection is compared against each ground truth annotation and the highest F1 score is treated as the final result. (3) For each annotation, if two boundaries are very close (i.e. less than 0.1s apart), we merge them into one. Note that this includes the case where a Timestamp boundary falls within a Range, or a Range boundary overlaps with another Range boundary. We also remove any boundaries from the initial and final 0.3s of each video. More details of our motivation for annotation post-processing can be found in Supp.

### 4.7 Statistics

Figure 4: Statistics on Kinetics-GEBD. #boundaries per video per annotation: (a) distribution, (b) average over each Kinetics class and then sorted by class; #boundaries per video: (c) distribution, (d) average over each Kinetics class and then sorted by class; duration per segment: (e) distribution, (f) average over each Kinetics class and then sorted by class.

For the raw Kinetics-GEBD annotation, the average number of boundaries per video per annotation is 5.48 (std dev 2.76, range [1, 33]). The average time between boundaries is 1.47s (std dev 1.24, range [0, 10.01]). There are 265K time-range boundaries with an average length of 0.71s, and 1232K timestamp-only boundaries. For the Kinetics-GEBD benchmark (after post-processing the raw annotations), the average number of boundaries per video per annotation is 4.77 (std dev 2.24, range [0, 14], distribution plotted in Fig. 4(a)). The average time between boundaries is 1.65s (std dev 1.25, range [0.023, 10.08], distribution plotted in Fig. 4(e)). Furthermore, the left column of Fig. 4 shows the distribution of #boundaries per video per annotation, #boundaries per video, and duration per segment, respectively.
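The per-annotation cleanup in Sec. 4.6 (merging close boundaries, trimming the video edges) can be sketched as follows. This is a simplified illustration assuming all boundaries have already been collapsed to single timestamps; taking the mean of a merged group is our assumption, since the exact merge rule is not specified:

```python
def postprocess(boundaries, video_duration, merge_gap=0.1, edge_margin=0.3):
    """Merge boundaries closer than `merge_gap` seconds into one
    (here: their mean) and drop boundaries within `edge_margin`
    seconds of the start or end of the video."""
    groups = []
    for t in sorted(boundaries):
        if groups and t - groups[-1][-1] < merge_gap:
            groups[-1].append(t)  # still within the current merge group
        else:
            groups.append([t])    # start a new merge group
    merged = [sum(g) / len(g) for g in groups]
    return [t for t in merged if edge_margin <= t <= video_duration - edge_margin]
```

For example, on a 10s video, boundaries at 1.0s and 1.05s would collapse to a single boundary, while boundaries at 0.1s or 9.8s would be dropped as edge artifacts.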
To show how these statistics compare across the base Kinetics-400 classes, we rank all Kinetics classes from high to low and highlight 3 classes, as shown in the right column of Fig. 4.

## 5 Experimental Results of GEBD Methods

### 5.1 Dataset

In addition to our Kinetics-GEBD, we also experiment on the recent TAPOS dataset [40] containing Olympics sport videos of 21 actions. The training set contains 13,094 action instances and the validation set contains 1,790 action instances. The authors manually defined how to break each action into sub-actions during annotation. While not taxonomy-free, the TAPOS boundaries between sub-actions are analogous to GEBD action boundaries. Thus, we can re-purpose TAPOS for our GEBD task by trimming each action instance (which can be as long as 5 minutes) with its action label hidden, and conducting GEBD on each action instance. Note that for TAPOS only 1 rater's annotation has been released and is therefore used as ground truth.

Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg
---|---|---|---|---|---|---|---|---|---|---|---
SceneDetect (unsuper.) | 0.035 | 0.045 | 0.047 | 0.051 | 0.053 | 0.054 | 0.055 | 0.056 | 0.057 | 0.058 | 0.051
PA - Random (unsuper.) | 0.158 | 0.233 | 0.273 | 0.310 | 0.331 | 0.347 | 0.357 | 0.369 | 0.376 | 0.384 | 0.314
PA (unsuper.) | 0.360 | 0.459 | 0.507 | 0.543 | 0.567 | 0.579 | 0.592 | 0.601 | 0.609 | 0.615 | 0.543
ISBA (super.) | 0.106 | 0.170 | 0.227 | 0.265 | 0.298 | 0.326 | 0.348 | 0.369 | 0.382 | 0.396 | 0.302
TCN (super.) | 0.237 | 0.312 | 0.331 | 0.339 | 0.342 | 0.344 | 0.347 | 0.348 | 0.348 | 0.348 | 0.330
CTM (super.) | 0.244 | 0.312 | 0.336 | 0.351 | 0.361 | 0.369 | 0.374 | 0.381 | 0.383 | 0.385 | 0.350
TransParser (super.) | 0.289 | 0.381 | 0.435 | 0.475 | 0.500 | 0.514 | 0.527 | 0.534 | 0.540 | 0.545 | 0.474
PC (super.) | 0.522 | 0.595 | 0.628 | 0.646 | 0.659 | 0.665 | 0.671 | 0.676 | 0.679 | 0.683 | 0.642

Table 4: F1 results on TAPOS for various supervised and unsupervised GEBD methods.

Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg
---|---|---|---|---|---|---|---|---|---|---|---
SceneDetect (unsuper.) | 0.275 | 0.300 | 0.312 | 0.319 | 0.324 | 0.327 | 0.330 | 0.332 | 0.334 | 0.335 | 0.318
PA - Random (unsuper.) | 0.336 | 0.435 | 0.484 | 0.512 | 0.529 | 0.541 | 0.548 | 0.554 | 0.558 | 0.561 | 0.506
PA (unsuper.) | 0.396 | 0.488 | 0.520 | 0.534 | 0.544 | 0.550 | 0.555 | 0.558 | 0.561 | 0.564 | 0.527
BMN (super.) | 0.186 | 0.204 | 0.213 | 0.220 | 0.226 | 0.230 | 0.233 | 0.237 | 0.239 | 0.241 | 0.223
BMN-StartEnd (super.) | 0.491 | 0.589 | 0.627 | 0.648 | 0.660 | 0.668 | 0.674 | 0.678 | 0.681 | 0.683 | 0.640
TCN-TAPOS (super.) | 0.464 | 0.560 | 0.602 | 0.628 | 0.645 | 0.659 | 0.669 | 0.676 | 0.682 | 0.687 | 0.627
TCN (super.) | 0.588 | 0.657 | 0.679 | 0.691 | 0.698 | 0.703 | 0.706 | 0.708 | 0.710 | 0.712 | 0.685
PC (super.) | 0.625 | 0.758 | 0.804 | 0.829 | 0.844 | 0.853 | 0.859 | 0.864 | 0.867 | 0.870 | 0.817

Table 5: F1 results on Kinetics-GEBD for various supervised and unsupervised GEBD methods.

### 5.2 Supervised Methods for GEBD

We directly quote the results of supervised methods from [40] on TAPOS (i.e. #1-3 below). Since [40] has not released code, we implement methods #4-6 below ourselves on our Kinetics-GEBD:

#1. Temporal parsing model: TransParser [40] proposes a pattern miner trained with a local loss based on the sub-action boundary supervision and a global loss trained with the action instance label supervision.

#2. Temporal action segmentation models: Connectionist Temporal Modeling (CTM) [17] and Iterative Soft Boundary Assignment (ISBA) [10] are supervised by the order of occurrence of a set of pre-defined sub-actions.

#3. Action boundary detection model: Temporal Convolution Network (TCN) [24, 31] trains a binary classifier to distinguish the frames around boundaries from other frames.

#4.
Pairwise boundary Classifier (PC): At each candidate boundary position (time $t$), we use the same backbone network to extract a feature pair: the average feature of frames before $t$ and the average feature of frames after $t$. We conduct global pooling over space for each feature and then concatenate the two paired features together as the input to a linear binary classifier, which is trained to predict the probability that time $t$ is a boundary. PC is trained end-to-end to fine-tune the backbone network pre-trained on ImageNet; training with the backbone fixed does not converge. We watershed the probability sequence to obtain intervals above 0.5; each interval's center is treated as an event boundary.

#5. Temporal action proposal model: to understand how well a class-agnostic action boundary proposal model can detect generic event boundaries, we train a BMN model [29] on THUMOS’14 [19] and test it on Kinetics-GEBD to generate action proposals. We denote by BMN the variant treating both the start and end of each action proposal as event boundaries. Alternatively, since one intermediate step in BMN is to evaluate two probability scores of being an action start and an action end respectively, we can watershed each probability sequence to obtain intervals above 0.5 and treat the center of each interval as an event boundary. We take the union of all these centers and denote this method as BMN-StartEnd.

#6. Cross-dataset GEBD method TCN-TAPOS: to confirm the need for Kinetics-GEBD, which is more challenging than TAPOS, we test on Kinetics-GEBD using the TCN model trained on TAPOS.

### 5.3 Unsupervised Methods for GEBD

This direction is intriguing because it can potentially handle any kind of event, without the need to annotate a large number of event boundary labels.

#1. SceneDetect (https://github.com/Breakthrough/PySceneDetect): a popular library for detecting classical shot changes.

#2. PA - Random: we randomly swap the detection results of the PA method below among all videos.
The position of each boundary is mapped to the new video with its relative position in the original video unchanged.

#3. PredictAbility (PA): Event Segmentation Theory indicates that the moments at which people perceive event boundaries are where future activity is least predictable [23, 38, 54]. This motivates us to develop a PA-based method which 1) computationally assesses the predictability score over time and then 2) locates the event boundaries by detecting the local minima of the predictability sequence.

1) Predictability Assessment: To quantify the predictability at time $t$, we compute the average feature of frames preceding $t$ and the average feature of frames succeeding $t$. Then, we compute their squared L2 feature distance to obtain the inverse predictability $\phi\left(t\right)$; lower distance implies greater predictability.

2) Boundaries from Predictability: Given $\phi\left(t\right)$, a natural method is to propose temporal boundaries at the local maxima of $\phi$. This is similar to the classical blob detection problem, and thus we apply the Laplacian of Gaussian (LoG) filter [32] to our 1D temporal problem. We apply the 1D LoG filter to compute $L(t)=\mathrm{LoG}(\phi\left(t\right))$, and compute its derivative $L^{\prime}\left(t\right)$. We detect temporal boundaries at the negative-to-positive zero-crossings of $L^{\prime}$, which correspond to local maxima of $\phi$.

### 5.4 Implementation Details

The following settings are used for all experiments conducted by ourselves unless explicitly specified otherwise: two NVIDIA GP100 cards are used. For each video, we sample 1 frame in every 3 frames. The inputs are RGB images resized to 224x224. To make fair comparisons, all models implemented by ourselves, i.e. PC, TCN, TCN-TAPOS, PA, BMN, and BMN-StartEnd, build on a ResNet-50 [16] backbone. PC is trained end-to-end while the others simply use the off-the-shelf ImageNet-pretrained features.
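The PA pipeline of Sec. 5.3 can be sketched as follows, assuming per-frame features have already been extracted. The window size and the analytic 1D LoG kernel construction here are illustrative assumptions (the paper tunes the LoG sigma on the Train set), not the exact configuration:

```python
import numpy as np

def pa_boundaries(features, window=5, sigma=3.0):
    """features: (T, D) array of per-frame features.
    Returns frame indices proposed as event boundaries."""
    T = len(features)
    # 1) Inverse predictability phi(t): squared L2 distance between the
    #    average feature before t and the average feature after t.
    phi = np.zeros(T)
    for t in range(window, T - window):
        before = features[t - window:t].mean(axis=0)
        after = features[t:t + window].mean(axis=0)
        phi[t] = np.sum((before - after) ** 2)
    # 2) 1D Laplacian-of-Gaussian filtering, L(t) = LoG(phi(t)), with an
    #    analytic LoG kernel (negative at its center).
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    kernel = (x ** 2 - sigma ** 2) / sigma ** 4 * np.exp(-x ** 2 / (2 * sigma ** 2))
    L = np.convolve(phi, kernel, mode="same")
    # 3) Boundaries at negative-to-positive zero-crossings of L', which
    #    correspond to local maxima of phi (local minima of predictability).
    dL = np.diff(L)
    return [t for t in range(1, len(dL)) if dL[t - 1] < 0 <= dL[t]]
```

On a feature sequence whose statistics change abruptly at some frame, the inverse predictability peaks there and a boundary is proposed near that frame.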
Our PC, TCN, TCN-TAPOS and PA all use the 5 frames before and 5 frames after a candidate boundary as the model input. For PA, we tune the sigma of the LoG filter on the Train set and set it to 15. During evaluation, we follow TAPOS [40] and vary the Relative Distance (Rel.Dis.) threshold described in Sec. 3.3 from 5% to 50% with a step of 5%.

### 5.5 Results Comparison

TAPOS val set F1 results are shown in Table 4; detailed precision and recall results are in Supp. The predictability-based PA method is clearly much better than random guessing. It is quite encouraging to see that our unsupervised PA method even outperforms all previous supervised methods, i.e. ISBA, TCN, CTM, and TransParser. SceneDetect achieves high precision but quite low recall because it only fires at the very salient boundaries.

Kinetics-GEBD val set F1 results are shown in Table 5; detailed precision and recall results are in Supp. Among unsupervised methods, PA is clearly better than the shot change detection method SceneDetect and the random guess PA - Random. Comparing PA with the supervised method TCN, which also uses the same fixed backbone features, the gap is not large, indicating that unsupervised or semi-supervised GEBD methods are worth researching in the future. PC clearly outperforms the others, indicating that event boundaries cannot be comprehensively represented by off-the-shelf features but can be better learned by fine-tuning the backbone. For the class-agnostic action proposal methods, directly detecting action proposals (i.e. BMN) is not a good GEBD approach, but accessing the probability of being a boundary (i.e. BMN-StartEnd) is effective. BMN-StartEnd is still worse than PC because it only detects action change boundaries while ignoring other generic event boundaries like subject change. For a similar reason, on Kinetics-GEBD, a GEBD model trained on TAPOS (i.e. TCN-TAPOS) underperforms the same model directly trained on Kinetics-GEBD (i.e. TCN).
These again confirm the challenging nature of generic event boundaries and the need for our new benchmark, Kinetics-GEBD.

## 6 Applications of Video Event Boundaries

Figure 5: To classify a video, it is difficult to tell what the optimal number of frames for uniform sampling is. Our event boundaries provide a cue about how many frames should be sampled.

### 6.1 Video-level classification

We test the classification accuracy on videos that receive at least 3 annotations in the Kinetics-GEBD Val set. We use the public implementation (https://github.com/mit-han-lab/temporal-shift-module) of the TSN [52] model, which uniformly samples $K$ frames, applies a ResNet-50 backbone to each frame, and finally averages the predictions to get the video-level prediction. Fig. 5 shows that the video-level classification accuracy for uniform sampling (blue curve) increases and then decreases as $K$ varies from 1 to 10. Thus, given a video, how can we determine $K$? Although GEBD is not designed to select discriminative frames, our boundaries provide a cue about how to set $K$ in uniform sampling in order to achieve high classification accuracy. Based on our annotated boundaries, we can break the video into segments, and each segment might only need one frame to be sampled. To validate this hypothesis, we sample the middle frame of each segment. Fig. 5 shows that this approach (the red dot) uses on average 5.5 frames per video and achieves accuracy close to the best achieved by uniform sampling. This is useful in practice when the video content is diverse and thus the best $K$ is unknown. In addition, we find that sampling the middle frame (64.4% accuracy) outperforms sampling boundary frames (63.0% accuracy). This implies that GEBD helps identify less-discriminative frames (the frames at boundaries are less discriminative), and is consistent with the cognitive finding that boundaries have less predictive power.
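The boundary-guided sampling described above reduces to taking the middle timestamp of each segment between consecutive boundaries; a minimal sketch, assuming boundaries are given in seconds:

```python
def middle_frame_times(boundaries, video_duration):
    """Break the video into segments at the annotated boundaries and
    return the middle timestamp of each segment (one frame per segment)."""
    cuts = [0.0] + sorted(boundaries) + [video_duration]
    return [0.5 * (start + end) for start, end in zip(cuts[:-1], cuts[1:])]
```

The number of sampled frames then equals the number of segments, removing the need to choose $K$ by hand.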
### 6.2 Video summarization

Our temporal boundaries provide a natural way to select keyframes for video summarization. We conduct the following two user study tasks to compare Ours (sampling the middle frame of each subpart) and Uniform (uniformly sampling the same number of frames as Ours). In Task 1, we randomly sample videos from Kinetics-GEBD Val. In Task 2, we select the videos for which the frame distance between Ours and Uniform is largest. Each task involves around 200-250 videos. For each video in both tasks, 20 users are asked “which set of keyframes better summarizes the video comprehensively?” and vote for one of three options: (1) Set 1 is better; (2) Set 2 is better; (3) Tie (both good/bad summarization). Table 6 shows the percentage of different options winning at the vote level and at the video level (e.g. out of 20 votes for the same video, if the #votes for (1) is the highest, Set 1 wins). We can see that for random samples Ours is clearly better than Uniform, and for samples of large disparity Ours significantly outperforms Uniform.

Percentage (%) | Uniform | Ours | Tie
---|---|---|---
Task 1 (random samples), vote-level | 33.9 | 40.9 | 25.1
Task 1 (random samples), video-level | 38.3 | 43.7 | 17.8
Task 2 (large disparity), vote-level | 12.6 | 73.0 | 14.3
Task 2 (large disparity), video-level | 6.0 | 90.0 | 4.0

Table 6: User study results for video summarization.

## 7 Conclusion and Future Work

In this paper, we have introduced the new task of GEBD and resolved ambiguities in the annotation process. A new benchmark, Kinetics-GEBD, has been created along with novel designs for annotation guidelines and quality assurance. We benchmarked supervised and unsupervised GEBD approaches on the TAPOS dataset and our Kinetics-GEBD, together with method design explorations that suggest future directions. We also showed the value of GEBD in downstream applications.
We believe our work is an important stepping stone towards long-form video understanding and hope it will enable future work on learning based on temporal event structure. In the future, we plan to address scene changes, which usually happen in much longer videos (e.g. moving from kitchen to bathroom in the 30-minute-long ADL [36] videos, or moving from street to restaurant in the hours-long UT-Ego [25] videos).

## 8 Acknowledgement

We thank Jitendra Malik for the insightful guidance. We appreciate the great support in annotation from our Product Data Operations team and Annotation Tooling team. Mike Shou and Stan Lei are supported by the National Research Foundation Singapore under its NRFF award NRF-NRFF13-2021-0008.

## 9 Supp: More Details of the GEBD Task

### 9.1 Other Candidate Evaluation Metrics

We attempted to adapt the Global/Local Consistency Error metric, which was proposed in [35, 34] to evaluate boundary segmentation of 2D images, from the 2D spatial domain to the 1D temporal domain for our task. This metric is designed for the scenario where no penalty is imposed if the detection and ground truth are of different granularities, i.e. one is a fine-grained, detailed segmentation of the other. During our experiments, we found that the variance of this metric between different detection results, regardless of their quality, is often quite small. This is because incorrect boundaries made by one detection can be mistakenly considered a correct, finer segmentation of a ground truth segmentation. Thus, during annotator training we cannot use this metric to quickly identify bad annotations and provide feedback to annotators, and during benchmarking we are not always able to effectively distinguish bad detections from good ones. Consequently, we did not end up using this metric. In line with this, our task is also designed to target a specific temporal granularity, i.e. 1 level deeper in semantics than the video-level event.
### 9.2 Discussion of Much Longer Videos and Scene Change Boundaries: Future Work

In this paper, since the video source for our benchmark is Kinetics [20], our target boundaries are event boundaries. This is because each video corresponds to one single, dominant event at the whole-video level. Its videos are usually 10s long, which is longer than the effective input clip duration of mainstream video models (i.e. 1-2s), and thus still presents the challenge of whole, long-form video modeling. For such 10s Kinetics videos, two types of boundaries are prevalent for our GEBD task: 1) shot change boundaries due to editing, and 2) event boundaries which break an event into temporal units. In addition to Kinetics, we initially also considered other data sources, e.g. longer videos such as the 30-minute-long videos in ADL [36] and the hours-long videos in UT-Ego [25]. We conducted preliminary studies and made two observations: (1) Compared to Kinetics-GEBD, these long videos contain too many event boundaries to be annotated completely. (2) Recall the principle stated in the main paper of targeting event boundaries 1 level deeper than the whole-video-level event. When adapting this principle to these very long videos, the temporal boundaries often occur at moments of scene/situational change (e.g. the moment of moving from kitchen to bathroom in ADL [36], or from street to restaurant in UT-Ego [25]). Thus, such videos are more suitable for studying scene/situational boundary detection. In practice, if one needs to detect event boundaries in such very long videos, the models developed based on our Kinetics-GEBD can also be applied, even recursively, to deal with event boundaries at different temporal granularities. In summary, we believe both scenarios are important. In this paper, we focus on the scenario where the video is at around the 10s timescale and the whole video presents one event.
The event can be segmented into temporal units based on shot boundaries and event boundaries, which are the main focus of this paper and benchmark. In the future, it would be interesting to specifically annotate and build a benchmark for the scenario that involves much longer videos and focuses on scene change boundaries; the event boundaries would not need to be annotated again.

## 10 Supp: More Details of the Kinetics-GEBD Annotation

### 10.1 Guideline: A Mixture of Descriptions and Visual Examples

A very useful practice we learned for eliminating ambiguity is to use both visual examples and textual descriptions when designing the annotation guideline. Examples are straightforward for illustrating the requirements, but they are too specific and cannot cover every case. Descriptions can cover the generic requirements but are often too abstract to be understood clearly.

### 10.2 The Annotation Format - Timestamps vs Time Ranges for Kinetics-GEBD

The term “Relatively” has been used multiple times in the cognitive survey [51] and turns out to be a good practice. For example, Kinetics [7] videos are usually 10s long and each video corresponds to one event. Hence, we ask the annotators to be very cautious whenever marking a Range whose possible duration is less than 0.3s. Note that on Kinetics, a boundary's duration will usually not exceed 2s; otherwise it is likely a meaningful temporal unit rather than just a transition between units. Therefore, we also request that annotators be very cautious whenever they want to mark a range longer than 2s.

### 10.3 Rejection Scenarios during Annotation

We provide the option for annotators to reject a video when 1) the video does not change significantly over time and thus does not contain a temporal boundary; 2) the video contains violating content, e.g. nudity or graphic violence; 3) the video has been edited to play at extreme speed; 4) the video cannot be understood, e.g.
too blurry all the time; 5) the video contains too many boundaries to be annotated accurately, i.e. more than 20 boundaries for a 10s-long video in Kinetics-GEBD.

### 10.4 Boundary Causes: Shot Change

Shot change boundaries correspond to changes caused by video editing or rapid camera behaviors. (1) Some shot change boundaries are always sudden and shall be annotated with one single Timestamp: Cut, Change from/to slow motion, Change from/to fast motion. (2) Some shot change boundaries are always gradual and shall be annotated with a short Range: Change due to Panning, Change due to Zooming, Change due to Fade/Dissolve/Gradual effects. Note that for Panning and Zooming, the goal is not simply to mark whenever the camera pans or zooms; these are just characteristics of some common shot change boundaries. The marked boundary has to be at the moment of a change that connects two temporal units, where the succeeding unit brings in new information and the change also happens to be a panning or zooming effect. For example, during the running period of the long jump in Fig. 1 in the main paper, after the scene cut (i.e. the period corresponding to the second frame), although the camera is panning, the dominant person remains in the center the whole time and there is no action change or other new information. Therefore, no temporal event boundary shall be marked during the period corresponding to the second frame.

### 10.5 Event Class vs. High-Level Cause

In contrast to high-level causes, we consider generic event classes to be low-level and to capture specific details, similar to the classes in conventional action datasets. Importantly, such a handful of high-level causes 1) are not exhaustive - the dropdown does have an Others (N/A) option for a boundary that cannot be characterized by any high-level cause; 2) are not orthogonal (Fig. 2 shows 8% of selections are “Multiple”); 3) are highly imbalanced (63.6% are change of action, which can cover numerous generic event classes).
That being said, the dropdown selection in the annotation interface was actually requested by the annotators: it serves as an abstract reminder of what event changes roughly look like at a high level, to help eliminate common “miss” errors.

### 10.6 Motivation for Annotation Post-processing

In Sec. 4.6, we introduced three steps to construct our Kinetics-GEBD benchmark. Our guiding motivation for post-processing is to have automatic ways to further enhance the data quality, e.g. removing very ambiguous videos (the human studies in Table 2 tell us that for a consistency score $<$ 0.3, the quality is typically bad) and merging very close boundaries (when two boundaries have an offset $<$ 0.1s, there are usually only 1-2 frames in between; through human inspection of these boundaries, we found they are typically due to annotation errors; we did not use higher thresholds such as 0.2s because we found quite a few reasonable cases at 0.2s, and boundaries further apart than 0.2s mostly make sense).

### 10.7 Quality Assurance Mechanism

To ensure annotation quality: (1) Before graduating a new annotator to work on real jobs, the annotator works on an exam batch of 100 videos and we audit their performance. If the performance is not satisfactory, the annotator is moved back to re-do the training curriculum. (2) During the real annotation process, we audit each annotator every week based on a random sample of the jobs they completed that week. If the performance is not satisfactory, the annotator is moved back to re-do the training curriculum. If an annotator requires re-training more than three times, the annotator is removed permanently. The detailed guideline of rating scores for quality assurance can be found in Sec. 12.

## 11 Supp: Additional Experimental Results

### 11.1 Complete Results of Precision, Recall, F1

Table 7 for TAPOS and Table 8 for Kinetics-GEBD respectively present the detailed numbers of precision, recall and F1 score for various methods.
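For readers reproducing these numbers, the tables below follow the Rel.Dis. protocol: a detected boundary counts as correct when its offset from a ground-truth boundary, normalized by the sequence duration, falls below the threshold. A minimal sketch, assuming a simple greedy one-to-one matching (the function name and toy inputs are illustrative; the exact matching protocol is defined in the main paper):

```python
def rel_dis_f1(detected, ground_truth, duration, threshold):
    """Precision/recall/F1 for boundary detection under a Rel.Dis. threshold.

    A detection matches a ground-truth boundary when their offset, divided
    by the sequence duration, is below `threshold`.  Greedy one-to-one
    matching: each ground-truth boundary can be consumed at most once.
    """
    matched_gt = set()
    tp = 0
    for d in detected:
        # find the closest still-unmatched ground-truth boundary
        candidates = [(abs(d - g), i) for i, g in enumerate(ground_truth)
                      if i not in matched_gt]
        if not candidates:
            break
        offset, idx = min(candidates)
        if offset / duration < threshold:
            matched_gt.add(idx)
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# toy example: a 10s video with ground-truth boundaries at 2.0s and 6.0s;
# detections at 2.2s and 6.1s match at threshold 0.05, the one at 5.0s does not
p, r, f = rel_dis_f1([2.2, 5.0, 6.1], [2.0, 6.0], duration=10.0, threshold=0.05)
```

As in the tables, sweeping the threshold from 0.05 to 0.5 and averaging gives a single summary score per method.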
It is worth noting that some methods, such as SceneDetect, achieve high precision yet low recall because they are designed to detect shot boundaries, which are easy to spot, while missing other event boundaries.

(a) Precision

| | Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unsuper. | SceneDetect | 0.391 | 0.506 | 0.532 | 0.576 | 0.596 | 0.608 | 0.621 | 0.628 | 0.641 | 0.647 | 0.575 |
| | PA - Random | 0.206 | 0.304 | 0.356 | 0.404 | 0.432 | 0.452 | 0.466 | 0.481 | 0.491 | 0.500 | 0.409 |
| | PA | 0.470 | 0.599 | 0.662 | 0.708 | 0.740 | 0.755 | 0.771 | 0.784 | 0.795 | 0.801 | 0.708 |
| Super. | ISBA | 0.119 | 0.185 | 0.230 | 0.268 | 0.301 | 0.329 | 0.356 | 0.379 | 0.392 | 0.405 | 0.296 |
| | TCN | 0.140 | 0.187 | 0.200 | 0.204 | 0.207 | 0.208 | 0.210 | 0.211 | 0.211 | 0.211 | 0.199 |
| | CTM | 0.154 | 0.197 | 0.212 | 0.221 | 0.228 | 0.233 | 0.237 | 0.242 | 0.244 | 0.245 | 0.221 |
| | TransParser | 0.230 | 0.302 | 0.345 | 0.377 | 0.398 | 0.410 | 0.420 | 0.427 | 0.432 | 0.437 | 0.378 |
| | PC | 0.650 | 0.741 | 0.782 | 0.805 | 0.821 | 0.829 | 0.836 | 0.842 | 0.846 | 0.851 | 0.800 |

(b) Recall

| | Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unsuper. | SceneDetect | 0.018 | 0.023 | 0.025 | 0.027 | 0.028 | 0.028 | 0.029 | 0.029 | 0.030 | 0.030 | 0.027 |
| | PA - Random | 0.128 | 0.189 | 0.221 | 0.252 | 0.269 | 0.281 | 0.290 | 0.299 | 0.305 | 0.311 | 0.255 |
| | PA | 0.292 | 0.372 | 0.412 | 0.440 | 0.460 | 0.470 | 0.480 | 0.488 | 0.494 | 0.498 | 0.441 |
| Super. | ISBA | 0.095 | 0.158 | 0.225 | 0.263 | 0.296 | 0.323 | 0.340 | 0.360 | 0.373 | 0.386 | 0.282 |
| | TCN | 0.757 | 0.940 | 0.974 | 0.985 | 0.989 | 0.990 | 0.994 | 0.994 | 0.994 | 0.994 | 0.961 |
| | CTM | 0.596 | 0.752 | 0.811 | 0.843 | 0.860 | 0.875 | 0.886 | 0.894 | 0.898 | 0.901 | 0.831 |
| | TransParser | 0.386 | 0.516 | 0.590 | 0.642 | 0.673 | 0.689 | 0.705 | 0.714 | 0.721 | 0.726 | 0.636 |
| | PC | 0.436 | 0.497 | 0.525 | 0.541 | 0.551 | 0.556 | 0.561 | 0.565 | 0.568 | 0.572 | 0.537 |

(c) F1

| | Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unsuper. | SceneDetect | 0.035 | 0.045 | 0.047 | 0.051 | 0.053 | 0.054 | 0.055 | 0.056 | 0.057 | 0.058 | 0.051 |
| | PA - Random | 0.158 | 0.233 | 0.273 | 0.310 | 0.331 | 0.347 | 0.357 | 0.369 | 0.376 | 0.384 | 0.314 |
| | PA | 0.360 | 0.459 | 0.507 | 0.543 | 0.567 | 0.579 | 0.592 | 0.601 | 0.609 | 0.615 | 0.543 |
| Super. | ISBA | 0.106 | 0.170 | 0.227 | 0.265 | 0.298 | 0.326 | 0.348 | 0.369 | 0.382 | 0.396 | 0.302 |
| | TCN | 0.237 | 0.312 | 0.331 | 0.339 | 0.342 | 0.344 | 0.347 | 0.348 | 0.348 | 0.348 | 0.330 |
| | CTM | 0.244 | 0.312 | 0.336 | 0.351 | 0.361 | 0.369 | 0.374 | 0.381 | 0.383 | 0.385 | 0.350 |
| | TransParser | 0.289 | 0.381 | 0.435 | 0.475 | 0.500 | 0.514 | 0.527 | 0.534 | 0.540 | 0.545 | 0.474 |
| | PC | 0.522 | 0.595 | 0.628 | 0.647 | 0.660 | 0.666 | 0.672 | 0.676 | 0.680 | 0.684 | 0.643 |

Table 7: GEBD results on TAPOS.

(a) Precision

| | Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unsuper. | SceneDetect | 0.731 | 0.792 | 0.819 | 0.837 | 0.847 | 0.856 | 0.862 | 0.867 | 0.870 | 0.872 | 0.835 |
| | PA - Random | 0.737 | 0.884 | 0.933 | 0.956 | 0.968 | 0.975 | 0.979 | 0.981 | 0.984 | 0.986 | 0.938 |
| | PA | 0.836 | 0.944 | 0.965 | 0.973 | 0.978 | 0.980 | 0.983 | 0.985 | 0.986 | 0.989 | 0.962 |
| Super. | BMN | 0.128 | 0.141 | 0.148 | 0.152 | 0.156 | 0.159 | 0.162 | 0.164 | 0.165 | 0.167 | 0.154 |
| | BMN-StartEnd | 0.396 | 0.479 | 0.509 | 0.525 | 0.534 | 0.540 | 0.544 | 0.547 | 0.549 | 0.551 | 0.517 |
| | TCN-TAPOS | 0.518 | 0.622 | 0.665 | 0.690 | 0.706 | 0.718 | 0.727 | 0.733 | 0.738 | 0.743 | 0.686 |
| | TCN | 0.461 | 0.519 | 0.538 | 0.547 | 0.553 | 0.557 | 0.559 | 0.561 | 0.563 | 0.564 | 0.542 |
| | PC | 0.624 | 0.753 | 0.794 | 0.816 | 0.828 | 0.836 | 0.841 | 0.844 | 0.846 | 0.849 | 0.803 |

(b) Recall

| | Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unsuper. | SceneDetect | 0.170 | 0.185 | 0.192 | 0.197 | 0.200 | 0.202 | 0.204 | 0.206 | 0.207 | 0.207 | 0.197 |
| | PA - Random | 0.218 | 0.289 | 0.326 | 0.350 | 0.364 | 0.374 | 0.381 | 0.386 | 0.389 | 0.393 | 0.347 |
| | PA | 0.259 | 0.329 | 0.355 | 0.368 | 0.377 | 0.382 | 0.386 | 0.390 | 0.392 | 0.395 | 0.363 |
| Super. | BMN | 0.338 | 0.369 | 0.385 | 0.397 | 0.407 | 0.414 | 0.420 | 0.426 | 0.430 | 0.434 | 0.402 |
| | BMN-StartEnd | 0.648 | 0.766 | 0.817 | 0.846 | 0.864 | 0.876 | 0.885 | 0.892 | 0.897 | 0.900 | 0.839 |
| | TCN-TAPOS | 0.420 | 0.508 | 0.550 | 0.576 | 0.594 | 0.609 | 0.619 | 0.627 | 0.633 | 0.639 | 0.577 |
| | TCN | 0.811 | 0.894 | 0.923 | 0.938 | 0.947 | 0.952 | 0.956 | 0.959 | 0.961 | 0.963 | 0.930 |
| | PC | 0.626 | 0.764 | 0.814 | 0.843 | 0.859 | 0.871 | 0.879 | 0.885 | 0.889 | 0.892 | 0.832 |

(c) F1

| | Rel.Dis. threshold | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unsuper. | SceneDetect | 0.275 | 0.300 | 0.312 | 0.319 | 0.324 | 0.327 | 0.330 | 0.332 | 0.334 | 0.335 | 0.318 |
| | PA - Random | 0.336 | 0.435 | 0.484 | 0.512 | 0.529 | 0.541 | 0.548 | 0.554 | 0.558 | 0.561 | 0.506 |
| | PA | 0.396 | 0.488 | 0.520 | 0.534 | 0.544 | 0.550 | 0.555 | 0.558 | 0.561 | 0.564 | 0.527 |
| Super. | BMN | 0.186 | 0.204 | 0.213 | 0.220 | 0.226 | 0.230 | 0.233 | 0.237 | 0.239 | 0.241 | 0.223 |
| | BMN-StartEnd | 0.491 | 0.589 | 0.627 | 0.648 | 0.660 | 0.668 | 0.674 | 0.678 | 0.681 | 0.683 | 0.640 |
| | TCN-TAPOS | 0.464 | 0.560 | 0.602 | 0.628 | 0.645 | 0.659 | 0.669 | 0.676 | 0.682 | 0.687 | 0.627 |
| | TCN | 0.588 | 0.657 | 0.679 | 0.691 | 0.698 | 0.703 | 0.706 | 0.708 | 0.710 | 0.712 | 0.685 |
| | PC | 0.625 | 0.758 | 0.804 | 0.829 | 0.844 | 0.853 | 0.859 | 0.864 | 0.867 | 0.870 | 0.817 |

Table 8: GEBD results on Kinetics-GEBD.

## 12 Supp: Detailed Quality Assurance Guideline

For simplicity in the rating instructions below, we use GT to stand for Ground Truth and refer to the specific annotator as the “rater”. As explained in the main paper, GT is not unique; it depends on various human perception behaviors. We bear this in mind during auditing: we judge each annotated boundary by first interpreting its perception and segmentation logic and then comparing it against its corresponding correct boundary.

### 12.1 Rating score definition

1 = accurate (does not have to be consistent with GT, but reasonable); 2 = minor error (not accurate, or an occasional miss; the main purpose of this intermediate score level is to reflect the need for improvement while acknowledging that the rater’s understanding of the annotation guideline is sound); 3 = bad, a clear error that should not happen (errors that were emphasized multiple times during training, e.g. marking a shot boundary range with a single timestamp, marking an unreasonable event boundary at the start or end of the video, or marking over-detailed annotations).

For the scenarios below, an annotation for a video may fit multiple scenarios and thus receive several scores; the worst score is taken as the final score.

### 12.2 Specific instructions for the scenario of “Rejection”

1, accurate: when the GT rejects the video while the rater marks 1 or 2 reasonable boundaries.
2, minor: when it would be desirable to annotate boundaries as the GT does but the video has some ambiguity, so the rater rejects it; or when the GT rejects the video while the rater marks 1 unreasonable boundary.

3, bad: when there are clear boundaries that should have been marked, e.g. shot boundaries; when the GT rejects the video while the rater marks more than 2 unreasonable boundaries; or when the rater looks at the wrong granularity and thus mistakenly considers the video to contain too many boundaries or no boundaries. Recall the long jump example: marking at every step is too detailed, while marking at the level of the whole long jump is too abstract.

### 12.3 Specific instructions for the scenario of “the same boundary marked by both GT and rater”

First, check the dropdown boundary cause selection, i.e. shot boundary or event boundary. Note that in the annotation guideline, we request the rater to select only shot boundary if a boundary is both a shot boundary and an event boundary. If this is violated, give 3, bad. Note that we do not judge the second dropdown selection for common event boundary causes like change of subject or change of action. For other cases, give 1, accurate.

Second, check the time difference (the rater marks a timestamp x seconds apart from the GT timestamp): 2, minor: x in the range of 0.3s - 0.6s. 3, bad: x is longer than 0.6s.

### 12.4 Specific instructions for the scenario of “a clear boundary marked by GT but the rater misses it”

Note again that if a GT boundary is reasonable to either keep or ignore, we do not consider it a clear boundary and do not penalize missing it.

2, minor: the rater misses 25%-50% of all very clear boundaries (e.g. for a video of 10 clear boundaries, missing 4 very clear boundaries).

3, bad: the video has only a few clear boundaries and the rater misses one of them (e.g. 3 shot boundaries, missing one); or the rater misses more than 50% of all very clear boundaries (e.g.
for a video of 10 clear boundaries, missing 6; for a video of 3 boundaries, missing 2).

### 12.5 Specific instructions for the scenario of “a boundary falsely marked by rater but GT would not mark it”

Note again that if a boundary marked by the rater does make sense, it can be considered reasonable.

2, minor: the rater marks unreasonable boundaries whose count is 25%-50% of all very clear GT boundaries (e.g. for a video of 10 clear boundaries, marking 4 additional, unreasonable boundaries).

3, bad: the rater marks unreasonable boundaries whose count is more than 50% of all very clear boundaries (e.g. for a video of 10 clear boundaries, marking 6 unreasonable boundaries; for a video of 3 boundaries, marking 2 unreasonable boundaries); or the video has only a few clear GT boundaries and the rater marks one more unreasonable boundary (e.g. 3 shot boundaries, and the rater marks one additional shot boundary which is not reasonable and shall not be marked).

## References

* [1] Activitynet challenge 2016. http://activity-net.org/challenges/2016/, 2016.
* [2] Sathyanarayanan N Aakur and Sudeep Sarkar. A perceptual prediction framework for self supervised event segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
* [3] Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, and Simon Lacoste-Julien. Joint discovery of object states and manipulation actions. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
* [4] Humam Alwassel, Fabian Caba Heilbron, and Bernard Ghanem. Action search: Spotting actions in videos and its application to temporal action localization. In Proceedings of the European Conference on Computer Vision, 2018.
* [5] Lorenzo Baraldi, Costantino Grana, and Rita Cucchiara. Shot and scene detection via hierarchical clustering for re-using broadcast video. In International Conference on Computer Analysis of Images and Patterns. Springer, 2015.
* [6] Roger G Barker and Herbert F Wright. Midwest and its children: The psychological ecology of an American town. 1955.
* [7] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
* [8] Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David A Ross, Jia Deng, and Rahul Sukthankar. Rethinking the faster r-cnn architecture for temporal action localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
* [9] Xiyang Dai, Bharat Singh, Guyue Zhang, Larry S Davis, and Yan Qiu Chen. Temporal context network for activity localization in videos. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
* [10] Li Ding and Chenliang Xu. Weakly-supervised action segmentation with iterative soft boundary assignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
* [11] Yazan Abu Farha and Jurgen Gall. Ms-tcn: Multi-stage temporal convolutional network for action segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
* [12] Christoph Feichtenhofer. X3d: Expanding architectures for efficient video recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
* [13] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE International Conference on Computer Vision, 2019.
* [14] Jiyang Gao, Zhenheng Yang, and Ram Nevatia. Cascaded boundary regression for temporal action detection. In the British Machine Vision Conference, 2017.
* [15] Michael Gygli. Ridiculously fast shot boundary detection with fully convolutional neural networks. In IEEE International Conference on Content-Based Multimedia Indexing, 2018.
* [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
* [17] De-An Huang, Li Fei-Fei, and Juan Carlos Niebles. Connectionist temporal modeling for weakly supervised action labeling. In Proceedings of the European Conference on Computer Vision, 2016.
* [18] Yifei Huang, Yusuke Sugano, and Yoichi Sato. Improving action segmentation via graph-based temporal reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
* [19] Y.-G. Jiang, J. Liu, A. R. Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. THUMOS challenge: Action recognition with a large number of classes. http://crcv.ucf.edu/THUMOS14/, 2014.
* [20] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
* [21] Hilde Kuehne, Ali Arslan, and Thomas Serre. The language of actions: Recovering the syntax and semantics of goal-directed human activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
* [22] Hilde Kuehne, Juergen Gall, and Thomas Serre. An end-to-end generative framework for video segmentation and recognition. In Proceedings of IEEE Winter Applications of Computer Vision Conference, 2016.
* [23] Christopher A Kurby and Jeffrey M Zacks. Segmentation in the perception and memory of events. Trends in cognitive sciences, 2008.
* [24] Colin Lea, Austin Reiter, René Vidal, and Gregory D. Hager. Segmental spatiotemporal cnns for fine-grained action segmentation. In Proceedings of the European Conference on Computer Vision, 2016.
* [25] Yong Jae Lee, Joydeep Ghosh, and Kristen Grauman. Discovering important people and objects for egocentric video summarization. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2012.
* [26] Peng Lei and Sinisa Todorovic. Temporal deformable residual networks for action segmentation in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
* [27] Jun Li, Peng Lei, and Sinisa Todorovic. Weakly supervised energy-based learning for action segmentation. In Proceedings of the IEEE International Conference on Computer Vision, 2019.
* [28] Ji Lin, Chuang Gan, and Song Han. Tsm: Temporal shift module for efficient video understanding. In Proceedings of the IEEE International Conference on Computer Vision, 2019.
* [29] Tianwei Lin, Xiao Liu, Xin Li, Errui Ding, and Shilei Wen. Bmn: Boundary-matching network for temporal action proposal generation. In Proceedings of the IEEE International Conference on Computer Vision, 2019.
* [30] Tianwei Lin, Xu Zhao, and Zheng Shou. Single shot temporal action detection. In Proceedings of the 2017 ACM on Multimedia Conference. ACM, 2017.
* [31] Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. Bsn: Boundary sensitive network for temporal action proposal generation. In Proceedings of the European Conference on Computer Vision, 2018.
* [32] Tony Lindeberg. Feature detection with automatic scale selection. International Journal of Computer Vision, 1998.
* [33] Fuchen Long, Ting Yao, Zhaofan Qiu, Xinmei Tian, Jiebo Luo, and Tao Mei. Gaussian temporal awareness networks for action localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
* [34] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of IEEE International Conference on Computer Vision, 2001.
* [35] David R Martin, Charless C Fowlkes, and Jitendra Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.
* [36] Hamed Pirsiavash and Deva Ramanan. Detecting activities of daily living in first-person camera views. In Proceedings of IEEE conference on Computer Vision and Pattern Recognition, 2012.
* [37] Hamed Pirsiavash and Deva Ramanan. Parsing videos of actions with segmental grammars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
* [38] Jeremy R Reynolds, Jeffrey M Zacks, and Todd S Braver. A computational model of event segmentation from perceptual prediction. Cognitive science, 2007.
* [39] Alexander Richard, Hilde Kuehne, and Juergen Gall. Action sets: Weakly supervised action segmentation without ordering constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
* [40] Dian Shao, Yue Zhao, Bo Dai, and Dahua Lin. Intra- and inter-action understanding via temporal action parsing. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2020.
* [41] Hong Shao, Yang Qu, and Wencheng Cui. Shot boundary detection algorithm based on hsv histogram and hog feature. In International Conference on Advanced Engineering Materials and Technology. Atlantis Press, 2015.
* [42] Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, and Shih-Fu Chang. Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
* [43] Zheng Shou, Junting Pan, Jonathan Chan, Kazuyuki Miyazawa, Hassan Mansour, Anthony Vetro, Xavier Giro-i Nieto, and Shih-Fu Chang. Online action detection in untrimmed, streaming videos-modeling and evaluation. In Proceedings of the European Conference on Computer Vision, 2018.
* [44] Zheng Shou, Dongang Wang, and Shih-Fu Chang. Temporal action localization in untrimmed videos via multi-stage cnns.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
* [45] Bharat Singh, Tim K Marks, Michael Jones, Oncel Tuzel, and Ming Shao. A multi-stream bi-directional recurrent neural network for fine-grained action detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
* [46] Tomáš Souček, Jaroslav Moravec, and Jakub Lokoč. Transnet: A deep network for fast detection of common shot transitions. arXiv preprint arXiv:1906.03363, 2019.
* [47] Sebastian Stein and Stephen J McKenna. Combining embedded accelerometers with computer vision for recognizing food preparation activities. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, pages 729–738, 2013.
* [48] Shitao Tang, Litong Feng, Zhanghui Kuang, Yimin Chen, and Wei Zhang. Fast video shot transition localization with deep structured models. In Asian Conference on Computer Vision. Springer, 2018.
* [49] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, 2015.
* [50] Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6450–6459, 2018.
* [51] Barbara Tversky and Jeffrey M Zacks. Event perception. Oxford handbook of cognitive psychology, 2013.
* [52] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In Proceedings of the European Conference on Computer Vision, 2016.
* [53] Zehuan Yuan, Jonathan C Stroud, Tong Lu, and Jia Deng. Temporal action localization by structured maximal sums.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. * [54] Jeffrey M Zacks, Nicole K Speer, Khena M Swallow, Todd S Braver, and Jeremy R Reynolds. Event perception: a mind-brain perspective. Psychological bulletin, 2007. * [55] Hang Zhao, Antonio Torralba, Lorenzo Torresani, and Zhicheng Yan. Hacs: Human action clips and segments dataset for recognition and temporal localization. In Proceedings of the IEEE International Conference on Computer Vision, 2019. * [56] Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. Temporal action detection with structured segment networks. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
# Does the Heisenberg uncertainty principle apply along the time dimension?

John Ashmead

Visiting Scholar, University of Pennsylvania <EMAIL_ADDRESS>

###### Abstract

Does the Heisenberg uncertainty principle (HUP) apply along the time dimension in the same way it applies along the three space dimensions? Relativity says it should; current practice says no. With recent advances in measurement at the attosecond scale, it is now possible to decide this question experimentally. The most direct test is to measure the time-of-arrival of a quantum particle: if the HUP applies in time, then the dispersion in the time-of-arrival will be measurably increased. We develop an appropriate metric of time-of-arrival in the standard case; extend this to include the case where there is uncertainty in time; then compare. There is, as expected, increased uncertainty in the time-of-arrival if the HUP applies along the time axis. The results are fully constrained by Lorentz covariance, therefore uniquely defined, therefore falsifiable. So we have an experimental question on our hands. Any definite resolution would have significant implications with respect to the role of time in quantum mechanics and relativity. A positive result would also have significant practical applications in the areas of quantum communication, attosecond physics (e.g. protein folding), and quantum computing.

## 1 Introduction

> “You can have as much junk in the guess as you like, provided that the consequences can be compared with experiment.” – Richard P. Feynman [1]

##### Heisenberg uncertainty principle in time

The Heisenberg uncertainty principle in space:

$$\Delta x\,\Delta p\geq 1 \tag{1}$$

is a foundational principle of quantum mechanics. From special relativity and the requirement of Lorentz covariance we expect that this should be extended to time:

$$\Delta t\,\Delta E\geq 1 \tag{2}$$

Both Bohr and Einstein regarded the uncertainty principle in time as essential to the integrity of quantum mechanics.
At the sixth Solvay Conference in 1930, Einstein devised the celebrated Clock-in-a-Box experiment to refute quantum mechanics by showing that the HUP in time could be broken. Consider a box full of photons with a small, fast door controlled by a clock. Weigh the box. Open the door for a time $\Delta t$, long enough for one photon to escape. Weigh the box again. From the equivalence of mass and energy, the change in the weight of the box gives you the _exact_ energy of the photon. If $\Delta E\to 0$, then you can make $\Delta t\Delta E<1$.

Bohr famously refuted the refutation by looking at the specifics of the experiment. If you weigh the box by adding and removing small weights until the box’s weight is again balanced, the process takes time. During this process the clock moves up and down in the gravitational field, and its time rate is affected by the gravitational redshift. Bohr showed that the resulting uncertainty is precisely what is required to restore the HUP in time [2, 3]. The irony of employing Einstein’s own General Relativity (via the gravitational redshift) to refute Einstein’s argument was presumably lost on neither Einstein nor Bohr.

In later work this symmetry between time and space was in general lost. This is clear in the Schrödinger equation:

$$\imath\frac{\partial}{\partial\tau}\psi_{\tau}\left(\vec{x}\right)=H\psi_{\tau}\left(\vec{x}\right) \tag{3}$$

Here the wave function is indexed by time: if we know the wave function at time $\tau$, we can use this equation to compute the wave function at time $\tau+\epsilon$. The wave function has in general non-zero dispersion in space, but is always treated as having zero dispersion in time. In quantum mechanics “time is a parameter not an operator” (Hilgevoord [4, 5]).
Or as Busch [6] puts it, “… different types of time energy uncertainty can indeed be deduced in specific contexts, but … there is no unique universal relation that could stand on equal footing with the position-momentum uncertainty relation.” See also Pauli, Dirac, and Muga [7, 8, 9, 10].

So we have a contradiction at the heart of quantum mechanics. Symmetry requires that the uncertainty principle apply in time; experience to date says this is not necessary. Both are strong arguments; neither is decisive on its own. The goal of this work is to reduce this to an experimental question, and to show that we can address the question with current technology.

##### Order of magnitude estimate

The relevant time scale is the time for a photon to cross an atom. This is of order attoseconds, small enough to explain why such an effect has not already been seen by chance. Therefore the argument from experience is not decisive. But recent advances in ultra-fast time measurements have been extraordinary; we can now do measurements even at the sub-attosecond time scale [11]. Therefore we should now be able to measure the effects of uncertainty in time, if they are present. Further, the principle of covariance strongly constrains the effects. In the same way that the inhabitants of Abbott’s Flatland [12] can infer the properties of a Sphere from the Sphere’s projection on Flatland, we should be able to predict the effects of uncertainty in time from the known uncertainties in space.

##### Operational meaning of uncertainty in time

In an earlier work [13] we used the path integral approach to do this. (Detailed references are provided in the earlier work; the initial ideas come from Stueckelberg and Feynman [14, 15, 16, 17, 18, 19], as further developed by Horwitz, Fanchi, Piron, Land, Collins, and others [20, 21, 22, 23, 24, 25, 26, 27].) We generalized the usual paths in the three space dimensions to extend in time as well.
No other change was required. The results were manifestly covariant by construction, uniquely defined, and consistent with existing results in the appropriate long-time limit. While there are presumably other routes to the same end, the requirement of Lorentz covariance is a strong one and implies, at a minimum, that the first-order corrections of any such route will be the same. We can therefore present a well-defined and falsifiable target to the experimentalists.

##### Time-of-arrival measurements

The most obvious line of attack is to use a time-of-arrival detector. We emit a particle at a known time, with known average velocity and dispersion in velocity, and measure the dispersion in time at a detector. The necessary uncertainty in space will be associated with uncertainty in time-of-arrival: for instance, if the wave function has an uncertainty in position of $\Delta x$ and an average velocity of $v$, there will be an uncertainty in time-of-arrival of order $\frac{\Delta x}{v}$. We refer to this as the extrinsic uncertainty. If there is an intrinsic uncertainty in time associated with the particle, we expect to see an additional uncertainty in time-of-arrival from that. If the intrinsic uncertainty in time is $\Delta t$, then we would expect a total uncertainty in time-of-arrival of order $\Delta t+\frac{\Delta x}{v}$. It is the difference between these two predictions that is the experimental target.

For brevity, we will refer to standard quantum mechanics, without uncertainty in time, as SQM; and to quantum mechanics with intrinsic uncertainty in time as TQM.

##### Time-of-arrival in SQM

We clearly need a solid measure of the extrinsic uncertainty to serve as a reference point. We look at a number of available measures but are forced to conclude that none of the established metrics is entirely satisfactory: they impose arbitrary conditions valid only in classical mechanics, have free parameters, or are physically unrealistic.
We argue that the difficulties are of a fundamental nature. In quantum mechanics there cannot really be a crisp boundary between the wave function detected and the equipment doing the detection: we must – as Bohr would surely insist – consider both simultaneously as part of a single system. However once this requirement is recognized and accepted, we can work out the rules for a specific setup and get reasonably solid predictions. ##### Comparison of time-of-arrival in SQM and TQM With this preliminary question dealt with, the actual comparison of the results with and without intrinsic uncertainty in time is straightforward. We first look at a generic particle detector setup in SQM, then look at the parallel results in TQM. By writing the latter as the direct product of a time part and a space part, we get a clean comparison. As expected, if there is intrinsic uncertainty in time then the uncertainty in time-of-arrival is increased and by a specific amount. In the non-relativistic limit, the relative increase in the uncertainty in time-of-arrival is small. But by running the initial wave function through a single slit in time, like a rapidly opening and closing camera shutter, we can make the relative increase arbitrarily great. _We therefore have falsifiability._ ##### Implications Since the predictions come directly from the principle of Lorentz covariance and the basic rules for quantum mechanics, any definite result – negative or positive – will have implications for foundational questions in relativity and quantum mechanics. If there is in fact intrinsic uncertainty in time, there will be practical implications for quantum communication, attosecond physics (e.g. protein folding), and quantum computing. This will open up new ways to carry information, additional kinds of interaction between quantum particles (forces of anticipation and regret, resulting from the extension in time), and a deeper understanding of the measurement problem.
## 2 Time-of-arrival measurements in SQM There does not appear to be a well-defined, generally accepted, and appropriate metric for the time-of-arrival in the case of SQM. The Kijowski metric [28] is perhaps the most popular metric for time-of-arrival. It has a long history, dating back to 1974, has seen a great deal of use, and has some experimental confirmation. However it is based on a deeply embedded classical assumption which is unjustifiable from a quantum mechanical viewpoint and which produces inconsistent results in some edge cases. We will nevertheless find it useful as a starting point. A conceptually sounder approach is supplied by Marchewka and Schuss, who use Feynman path integrals to compute the time-of-arrival [29, 30, 31, 32]. Unfortunately their approach has free parameters, which rules it out for use here. The difficulties with the Marchewka and Schuss approach are partly a function of an ad hoc treatment of the boundary between the incoming wave function and the plane of the detector. We are able to deal with this boundary in a systematic way by using a discrete approach along the space coordinate, then taking the continuum limit. Unfortunately while this discrete approach solves the problem of how to handle the discontinuity at the edge of the detector it does not address the more fundamental problem: the assumption of a crisp boundary between the quantum mechanical and classical parts of the problem in the first place. We address this problem by arguing that while the assumption of a crisp boundary between the quantum mechanical and classical parts of a problem is not in general justified it is also not in general necessary. If we work bottom-up, starting from a quantum mechanical perspective, we can use various ad hoc but reasonable heuristics to determine which parts may be described with acceptable accuracy using classical mechanics, and where only a fully quantum mechanical approach will suffice.
### 2.1 Kijowski time-of-arrival operator > “A common theme is that classical mechanics, deterministic or stochastic, is always a fundamental reference on which all of these approaches [to the time-of-arrival operator] are based.” – Muga and Leavens [33] #### 2.1.1 Definition The most popular time-of-arrival operator seems to be one developed by Kijowski in 1974 [28]. It and variations on it have seen much use since then [34, 35, 33, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. We used Kijowski as a starting point in our previous paper: it gives reasonable results in simple cases. Unfortunately its conceptual problems rule it out for use here. The main problem is that classical requirements play an essential role in Kijowski’s original derivation. In particular, Kijowski assumed that for the part of the wave function to the left of the detector (throughout we assume that the particle is going left to right along the $x$ axis; that the detector is at $x=0$; that the particle starts well to the left of the detector; and that we do not need to consider the $y$ and $z$ axes – these assumptions are conventional in the time-of-arrival literature) only components with $p>0$ will contribute; while for the part of the wave function to the right of the detector only components with $p<0$ will contribute. This condition (which we will refer to as the classical condition) is imposed on the basis of the corresponding requirement in classical mechanics, as noted by Muga and Leavens above. Figure 1: The classical condition for time-of-arrival measurements Focusing on the case of a single space dimension, with the detector at $x=0$, the Kijowski metric is (equation 122 in [33]):
$\rho_{d}\left(\tau\right)={\left|{\int\limits_{0}^{\infty}dp\sqrt{\frac{p}{{2\pi m}}}{e^{-\imath\frac{{{p^{2}}\tau}}{{2m}}}}{\varphi^{\left({left}\right)}}\left(p\right)}\right|^{2}}+{\left|{\int\limits_{-\infty}^{0}dp\sqrt{\frac{{-p}}{{2\pi m}}}{e^{-\imath\frac{{{p^{2}}\tau}}{{2m}}}}{\varphi^{\left({right}\right)}}\left(p\right)}\right|^{2}}$ (4) However while the restrictions on momentum may make sense classically (amusingly enough, this condition is sometimes violated even for classical waves: Rayleigh surface waves in shallow water, e.g. tsunamis approaching land, show retrograde motion of parts of the wave [46]), they are more troubling in quantum mechanics. It is as if we were thinking of the wave function as composed of a myriad of classical particles each of which travels like a tiny billiard ball. This would appear to be a hidden variable interpretation of quantum mechanics. This was shown to be inconsistent with quantum mechanics by Bell in 1965 [47] and has since been experimentally confirmed [48, 49, 50, 51, 52]. However, Kijowski’s metric has produced reasonable results, see for instance [53]. Therefore if we see problems with it, we need to explain the successes as well. #### 2.1.2 Application to Gaussian test functions We will use Gaussian test functions (GTFs) to probe this metric. We define GTFs as normalized Gaussian functions that are solutions to the free Schrödinger equation. We will look at the results of applying Kijowski’s metric to two different cases, which we refer to as the “bullet” and “wave” variations, following Feynman’s terminology in his discussion of the double-slit experiment [54]. By “bullet” we mean a wave function which is narrowly focused around the central axis of motion; by “wave” we mean a wave function which is widely spread around the central axis of motion.
##### Bullets For the bullet version we will take a GTF which starts at time $\tau=0$, centered at start position $x_{0}=-d$, with average momentum $p_{0}$, and dispersion $\sigma_{p}\ll p_{0}$. These conditions are met in the Muga, Baute, Damborenea, and Egusquiza [53] reference; they give: $\delta v=0.098\ \mathrm{cm/s},\quad{v_{0}}=10\ \mathrm{cm/s}$ (5) This is typical of high-energy particles, where the free particle wave functions will have essentially no negative momentum component. The starting wave function is: ${{\bar{\varphi}}_{0}}\left(x\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}{e^{\imath{p_{0}}x-\frac{1}{{2\sigma_{x}^{2}}}{{\left({x+d}\right)}^{2}}}}$ (6) This is taken to satisfy the free Schrödinger equation: $\imath\frac{\partial}{{\partial\tau}}{{\bar{\varphi}}_{\tau}}\left(x\right)=-\frac{1}{{2m}}\frac{{\partial^{2}}}{{\partial{x^{2}}}}{{\bar{\varphi}}_{\tau}}\left(x\right)$ (7) so is given as a function of time by: $\bar{\varphi}_{\tau}\left(x\right)=\sqrt[4]{\frac{1}{\pi\sigma_{x}^{2}}}\sqrt{\frac{1}{f_{\tau}^{\left(x\right)}}}e^{\imath p_{0}x-\frac{1}{2\sigma_{x}^{2}f_{\tau}^{\left(x\right)}}\left(x+d-\frac{p_{0}}{m}\tau\right)^{2}-\imath\frac{p_{0}^{2}}{2m}\tau}$ (8) with dispersion factor $f$ defined by: $f_{\tau}^{\left(x\right)}\equiv 1+\imath\frac{\tau}{m\sigma_{x}^{2}}$ (9) The momentum transform is: ${{\hat{\bar{\varphi}}}_{\tau}}\left(p\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{p}^{2}}}}}{e^{\imath pd-\frac{{{\left({p-{p_{0}}}\right)}^{2}}}{{2\sigma_{p}^{2}}}-\imath\frac{{p^{2}}}{{2m}}\tau}}$ (10) with $\sigma_{p}\equiv\frac{1}{\sigma_{x}}$. We will use this wave function as a starting point throughout this work, referring to it as the “bullet wave function”. In Kijowski’s notation, there is no right hand wave function since the particle started on the left.
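As a sanity check on equation 8 (with arbitrary test parameters of ours), the spreading packet stays normalized, as it must for unitary free evolution:

```python
import numpy as np

# Numerical check on the free Gaussian wave packet of equation (8): the
# dispersion factor f = 1 + i*tau/(m*sigma_x^2) must leave the norm at 1.
# All parameter values are arbitrary choices for the test.
m, sigma_x, p0, d = 1.0, 1.0, 2.0, 5.0

def phi(x, tau):
    f = 1 + 1j * tau / (m * sigma_x**2)
    return ((1 / (np.pi * sigma_x**2))**0.25 / np.sqrt(f)
            * np.exp(1j * p0 * x
                     - (x + d - p0 * tau / m)**2 / (2 * sigma_x**2 * f)
                     - 1j * p0**2 * tau / (2 * m)))

dx = 0.01
x = np.arange(-80.0, 80.0, dx)
norms = [np.sum(np.abs(phi(x, tau))**2) * dx for tau in (0.0, 3.0, 10.0)]
print(norms)  # each ~1.0 even as the packet spreads
```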
Further since the wave function is by construction narrow in momentum space we can extend the lower limit in the integral on the left from zero to negative infinity giving: ${\rho_{d}}\left(\tau\right)={\left|{\int\limits_{-\infty}^{\infty}dp\sqrt{\frac{p}{{2\pi m}}}\sqrt[4]{{\frac{1}{{\pi\sigma_{p}^{2}}}}}{e^{\imath pd-\frac{{{\left({p-{p_{0}}}\right)}^{2}}}{{2\sigma_{p}^{2}}}-\imath\frac{{p^{2}}}{{2m}}\tau}}}\right|^{2}}$ (11) Since the wave function is narrowly focused, we can replace the square root in the integral by the average: $\sqrt{\frac{{{p_{0}}+\delta p}}{m}}\approx\sqrt{\frac{{p_{0}}}{m}}$ (12) which we can then pull outside of the integral: ${\rho_{d}}\left(\tau\right)\approx\frac{{p_{0}}}{m}{\left|{\int\limits_{-\infty}^{\infty}{\frac{{dp}}{{\sqrt{2\pi}}}\sqrt[4]{{\frac{1}{{\pi\sigma_{p}^{2}}}}}{e^{\imath pd-\frac{{{\left({p-{p_{0}}}\right)}^{2}}}{{2\sigma_{p}^{2}}}-\imath\frac{{p^{2}}}{{2m}}\tau}}}}\right|^{2}}$ (13) We recognize the integral as simply the inverse Fourier transform of the momentum space form: ${{\bar{\varphi}}_{\tau}}\left(d\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}\sqrt{\frac{1}{{f_{\tau}^{\left(x\right)}}}}{e^{\imath{p_{0}}d-\frac{1}{{2\sigma_{x}^{2}f_{\tau}^{\left(x\right)}}}{{\left({d-\frac{{p_{0}}}{m}\tau}\right)}^{2}}-\imath\frac{{p_{0}^{2}}}{{2m}}\tau}}$ (14) so: ${\rho_{d}}\left(\tau\right)\approx\frac{{p_{0}}}{m}{\left|{{{\bar{\varphi}}_{\tau}}\left(d\right)}\right|^{2}}=\frac{{p_{0}}}{m}{{\bar{\rho}}_{\tau}}\left(d\right)$ (15) with the probability density being: ${{\bar{\rho}}_{\tau}}\left(d\right)=\sqrt{\frac{1}{{\pi\sigma_{x}^{2}\left({1+\frac{{\tau^{2}}}{{{m^{2}}\sigma_{x}^{4}}}}\right)}}}\exp\left({-\frac{{{\left({{\bar{x}}_{\tau}}\right)}^{2}}}{{\sigma_{x}^{2}\left({1+\frac{{\tau^{2}}}{{{m^{2}}\sigma_{x}^{4}}}}\right)}}}\right)$ (16) and with the average location in space: $\bar{x}_{\tau}\equiv-d+{v_{0}}\tau$ (17) We have the probability density as a function of $x$; we need it as a
function of $\tau$. We therefore make the variable transformation: $dx={v_{0}}\,d\tau=\frac{{p_{0}}}{m}d\tau$ (18) And rewrite $\tau$ as the average value of $\tau$ plus an offset: $\tau=\bar{\tau}+\delta\tau,\bar{\tau}\equiv\frac{d}{{v_{0}}}$ (19) We rewrite the numerator in the density function in terms of $\delta\tau$: $\bar{\rho}_{\tau}\left(d\right)\approx\sqrt{\frac{1}{\pi\sigma_{x}^{2}\left|f_{\tau}^{\left(x\right)}\right|^{2}}}e^{-\frac{\left(v_{0}\delta\tau\right)^{2}}{\sigma_{x}^{2}\left|f_{\tau}^{\left(x\right)}\right|^{2}}}$ (20) We may estimate the uncertainty in time by expanding around the average time. Since the numerator is already only of second order in $\delta\tau$ we need only keep the zeroth order in $\delta\tau$ in the denominator: $\sigma_{x}^{2}\left|f_{\tau}^{\left(x\right)}\right|^{2}=\sigma_{x}^{2}+\frac{\tau^{2}}{m^{2}\sigma_{x}^{2}}\approx\frac{\tau^{2}}{m^{2}\sigma_{x}^{2}}=\frac{\left(\bar{\tau}+\delta\tau\right)^{2}}{m^{2}\sigma_{x}^{2}}\approx\frac{\bar{\tau}^{2}}{m^{2}\sigma_{x}^{2}}$ (21) giving: $\bar{\rho}_{\delta\tau}\approx\sqrt{\frac{v_{0}^{2}m^{2}\sigma_{x}^{2}}{\pi\bar{\tau}^{2}}}e^{-\frac{v_{0}^{2}m^{2}\sigma_{x}^{2}}{\bar{\tau}^{2}}\left(\delta\tau\right)^{2}}$ (22) We define an effective dispersion in time: $\bar{\sigma}_{\tau}\equiv\frac{1}{mv_{0}\sigma_{x}}\bar{\tau}$ (23) So we have the probability of detection as roughly: $\bar{\rho}_{\delta\tau}\approx\sqrt{\frac{1}{\pi\bar{\sigma}_{\tau}^{2}}}e^{-\frac{\left(\delta\tau\right)^{2}}{\bar{\sigma}_{\tau}^{2}}}$ (24) This is normalized to one, centered on $\tau=\bar{\tau}$, and with uncertainty: $\Delta\tau=\frac{1}{\sqrt{2}}\bar{\sigma}_{\tau}$ (25) It is interesting that the classical condition has played no essential role in this derivation. In spite of that, the result is about what we would expect from a classical particle with probability distribution $\bar{\rho}$.
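The end result can be confirmed numerically; in the following sketch (all parameter values are arbitrary test choices of ours) the normalized arrival-time density built from $\left|{\bar{\varphi}}_{\tau}(d)\right|^{2}$ reproduces equation 25 to a few percent:

```python
import numpy as np

# Check of equation (25) for the "bullet": normalizing the arrival-time
# density rho(tau) ~ |phi_tau(d)|^2 should give spread ~ sigma_bar/sqrt(2),
# with sigma_bar = tau_bar/(m*v0*sigma_x). Parameters are arbitrary.
m, sigma_x, p0, d = 1.0, 1.0, 10.0, 200.0
v0 = p0 / m
tau_bar = d / v0
pred = tau_bar / (m * v0 * sigma_x) / np.sqrt(2)

dtau = 0.001
tau = np.arange(dtau, 2 * tau_bar, dtau)
f2 = 1 + (tau / (m * sigma_x**2))**2            # |f_tau|^2
rho = np.exp(-(d - v0 * tau)**2 / (sigma_x**2 * f2)) / np.sqrt(f2)
rho /= np.sum(rho) * dtau                       # normalize over tau

mean = np.sum(tau * rho) * dtau
std = np.sqrt(np.sum((tau - mean)**2 * rho) * dtau)
print(mean, std, pred)  # std matches the closed form to a few percent
```

The small residual difference comes from the zeroth-order treatment of the denominator in equation 21.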
Therefore the implication of the experimental results in Muga and Leavens is that they are confirmation of the reasonableness of the classical approximation in general – rather than of Kijowski’s classical condition in particular. ##### Waves Now consider a more wave-like Gaussian test function: slow and wide. Take $0<{p_{0}}\ll{\sigma_{p}}$ in equation 10, in fact make $p_{0}$ only very slightly greater than 0. Our GTF is still moving to the right but very slowly. If we wait long enough – and if there were no detector in the way – it will eventually get completely past the detector. Since there is a detector, the GTF will encounter this instead. We will first assume a perfect detector with no reflection. Use of the Kijowski distribution requires we drop the $p<0$ half of the wave function because we started on the left: ${\varphi^{\left({wave}\right)}}\left(p\right)\Rightarrow\sqrt[4]{{\frac{1}{{\pi\sigma_{p}^{2}}}}}{e^{\imath pd-\frac{{\left({p-{p_{0}}}\right)^{2}}}{{2\sigma_{p}^{2}}}}}\theta\left(p\right)$ (26) We can no longer justify extending the lower bound on the left hand integral from zero to negative infinity. We have instead: ${\rho_{d}}\left(\tau\right)={\left|{\int\limits_{0}^{\infty}dp\sqrt{\frac{p}{{2\pi m}}}\sqrt[4]{{\frac{1}{{\pi\sigma_{p}^{2}}}}}{e^{\imath pd-\frac{{{\left({p-{p_{0}}}\right)}^{2}}}{{2\sigma_{p}^{2}}}-\imath\frac{{p^{2}}}{{2m}}\tau}}}\right|^{2}}$ (27) This means we have dropped nearly half of the wave function and therefore expect a detection rate of only about 25%. The integral over the momentum gives a sum over Bessel functions of fractional order; the subsequent integrals over time are distinctly non-trivial. 
However in the limiting case $d\to 0,p_{0}\to 0$ (the wave function starts at the detector) the distribution simplifies drastically giving: $\rho_{0}\left(\tau\right)={\left|{\frac{{\sqrt[4]{m}\sqrt{\sigma_{p}^{2}}\Gamma\left({\frac{3}{4}}\right)}}{{{{(2\pi)}^{3/4}}{{(m+\imath\sigma_{p}^{2}\tau)}^{3/4}}}}}\right|^{2}}$ (28) And this in turn is simple enough to let us compute the norm analytically: $\int\limits_{0}^{\infty}{d\tau\rho_{0}\left(\tau\right)}=\frac{1}{4}$ (29) This is exactly as expected: if we throw out half of the wave function, the norm of what remains is the square of one half, or one quarter. This result is general. The GTF obeys the Schrödinger equation, which is norm preserving. If we throw out fraction $F$ of the wave function, the resulting wave function has norm ${\left({1-F}\right)^{2}}$. Assuming perfect detection, the Kijowski distribution will under-count the actual detections by this factor. It is arguable that the problem is the assumption of perfect detection. However if we drill down to the actual mechanism of detection, this will involve some sort of interaction of the incoming particles with various atoms, typically involving the calculation of matrix elements of the form (e.g. Sakurai [55]): $\left\langle{\varphi_{\tau}^{\left({n^{\prime}l^{\prime}m^{\prime}}\right)}\left({\vec{x}}\right)}\right|\sqrt{\frac{{n_{\vec{k},\alpha}}}{{2\omega V}}}{{\vec{\varepsilon}}^{\left(\alpha\right)}}\exp\left({\imath\vec{k}\cdot\vec{x}-\imath\omega\tau}\right)\left|{\varphi_{\tau}^{\left({nlm}\right)}\left({\vec{x}}\right)}\right\rangle$ (30) It is difficult to make sense of the classical condition in this context. Would we have to apply it to the interaction with each atom in turn? And how would the atoms know whether they are supposed to be detecting particles coming from the left or the right? And therefore how would the unfortunate atoms know whether they should drop the upper left and bottom right quadrants on the diagram or vice versa?
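The quarter norm of equation 29 is easy to confirm numerically. With $m=\sigma_{p}=1$, and using the substitution $\tau=\sinh s$ (our device, not from the text) to tame the slowly decaying tail:

```python
import math

# Check equation (29): the Kijowski distribution for the "wave" case with
# d -> 0, p0 -> 0 integrates to 1/4, not 1. Units m = sigma_p = 1.
m, sigma_p = 1.0, 1.0
coef = math.gamma(0.75)**2 * math.sqrt(m) * sigma_p**2 / (2 * math.pi)**1.5

def integrand(s):
    # rho_0(tau) dtau with tau = sinh(s): the tau^(-3/2) tail of
    # (1 + tau^2)^(-3/4) becomes an exponentially small one in s.
    return coef / math.sqrt(math.cosh(s))

ds, smax = 1e-3, 60.0
n = int(smax / ds)
total = ds * (0.5 * integrand(0.0) + 0.5 * integrand(smax)
              + sum(integrand(k * ds) for k in range(1, n)))
print(total)  # ~0.25: the metric keeps only (1/2)^2 of the probability
```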
#### 2.1.3 Requirements for a time-of-arrival metric The classical condition was introduced without justification in terms of quantum mechanics itself. As there are no cases where the rules of quantum mechanics have failed to produce correct results, this is not acceptable. Further it is unnecessary. The probability current, discussed below (subsubsection 3.1.1), provides a conceptually sound and purely quantum mechanical approach to the problem of detection. We therefore have to dispense entirely with the classical condition. As this is the foundation of the Kijowski class of metrics – per Muga and Leavens above – we have to dispense with the class as well. We have however learned a bit about our requirements: we are looking for a time-of-arrival metric which satisfies the requirements:

1. Comes out of a completely quantum mechanical analysis, with no ad hoc classical requirements imposed on it.
2. Includes an account of the interaction with the detector. As Bohr pointed out during the EPR debate [56, 57], an analysis of a quantum mechanical detection must include the specifics of the apparatus. There is no well-defined value of a quantum parameter in the abstract, but only in the context of a specific measurement.
3. Conserves probability.
4. And of course, is as simple as possible given the first three requirements.

### 2.2 Marchewka and Schuss path integral approach > “According to our assumptions, trajectories that propagate into the absorbing boundary never return into the interval [a, b] so that the Feynman integral over these trajectories is supported outside the interval. On the other hand, the Feynman integral over the bounded trajectories in the interval is supported inside the interval. Thus the two integrals are orthogonal and give rise to no interference.” – Marchewka and Schuss [31] We turn to an approach from Marchewka and Schuss [29, 30, 31, 32]. They use a path integral approach plus the assumption of an absorbing boundary.
To compute the wave function, they use a recursive approach: computing the wave function at each step, subtracting off the absorption at each step, and so to the end. Marchewka and Schuss start with the wave function defined over a general interval from $a\to b$. We can simplify their treatment slightly by taking $a=-\infty,b=0$. They break the time evolution up into steps of size $\epsilon$. Given the wave function at step $n$, they compute the wave function at step $n+1$ as: ${\psi_{n+1}}\left({x_{n+1}}\right)=\int\limits_{-\infty}^{0}{d{x_{n}}{K_{\varepsilon}}\left({{x_{n+1}};{x_{n}}}\right)}{\psi_{n}}\left({x_{n}}\right)$ (31) The kernel can be quite general. Marchewka and Schuss then introduce the assumption that the probability to be absorbed at each endpoint is proportional to the value of the probability to be at the end point times a characteristic length $\lambda$ (they refer to $\lambda$ as a “fudge parameter”; its value has to be found by running some reference experiments, after which the values are determined for “nearby” experiments): ${P_{n}}=\lambda{\left|{{\psi_{n}}\left(0\right)}\right|^{2}}$ (32) Using the kernel they compute the probability of absorption as: ${P_{n}}=\frac{{\varepsilon\lambda}}{{2\pi m}}{\left|{{{\left.{{\partial_{x}}{\psi_{n}}\left(x\right)}\right|}_{x=0}}}\right|^{2}}$ (33) They then shrink the wave function at the next step by an appropriate factor, the survival probability: ${\psi_{n+1}}\left(x\right)\to\sqrt{1-{P_{n}}}{\psi_{n+1}}\left(x\right)$ (34) This guarantees the total probability is constant, where the total probability is the probability to be detected plus the probability to survive. This overcomes the most obvious problem with the Kijowski approach. At the same time it represents an explicit loss of the phase information. There is nothing to keep the “characteristic length” from being complex; taking it as real is a choice.
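The stepwise scheme of equations 31–34 can be sketched numerically. The following is a loose cartoon, not their calculation: we use split-step free evolution on a periodic grid rather than their restricted kernel, and the grid, the packet, and the value of the characteristic length $\lambda$ are all illustrative choices of ours:

```python
import numpy as np

# Cartoon of the Marchewka-Schuss scheme: free split-step evolution with a
# "mini-collapse" each step that removes P_n = (eps*lam/(2 pi m))|d_x psi(0)|^2
# and rescales the surviving wave function by sqrt(1 - P_n).
m, lam, dt, steps = 1.0, 1.0, 0.01, 800
N, L = 4096, 80.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kick = np.exp(-1j * p**2 * dt / (2 * m))       # exact free propagator

psi = np.exp(2j * x - (x + 10.0)**2 / 2)       # packet at -10 moving right
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

i0 = N // 2                                    # detector site, x = 0
detected = 0.0
for _ in range(steps):                         # evolve out to tau = 8
    psi = np.fft.ifft(kick * np.fft.fft(psi))
    dpsi0 = (psi[i0 + 1] - psi[i0 - 1]) / (2 * dx)
    P_n = min(dt * lam / (2 * np.pi * m) * abs(dpsi0)**2, 1.0)
    detected += P_n * np.sum(np.abs(psi)**2) * dx   # probability removed
    psi = psi * np.sqrt(1 - P_n)                    # survival rescaling

survived = np.sum(np.abs(psi)**2) * dx
print(detected, survived, detected + survived)     # detected + survived = 1
```

Conservation of detected-plus-surviving probability holds by construction, which is exactly the point the text makes; what the sketch cannot fix is the dependence of `detected` on the arbitrary value of `lam`.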
Therefore we are seeing the loss of information normally associated with the act of measurement. So we have effectively broken the total measurement at the end points down into a series of time dependent “mini-measurements” or “mini-collapses” at each time step. Computing the product of all these infinitesimal absorptions they get the final corrected wave function as: $\psi_{\tau}^{\left({corrected}\right)}\left(x\right)=\exp\left({-\frac{\lambda}{{2\pi m}}\int\limits_{0}^{\tau}{d\tau^{\prime}{{\left|{{\partial_{x}}{\psi_{\tau^{\prime}}}\left(0\right)}\right|}^{2}}}}\right){\psi_{\tau}}\left(x\right)$ (35) They go on to apply this approach to a variety of cases, such as GTFs going from left to right. The Marchewka and Schuss approach is a significant advance over the Kijowski metric: it works at a purely quantum mechanical level, it includes the detector in the analysis, and it guarantees by construction overall conservation of probability. But at the same time the introduction of the characteristic length is a major problem for falsifiability: how do we know whether the same length should be used for SQM and for TQM? And the trick for estimating the loss of probability at each step implicitly assumes no interference in time. This is not a problem for SQM but a point we need to keep in mind when including TQM. We therefore further refine our requirements for a metric: we need to handle any discontinuity at the detector in a well-defined way. ### 2.3 Discrete approach Figure 2: Reflection principle We can handle the discontinuity at the boundary in a well-defined way by taking a discrete approach, by setting up a grid in $\tau,x$. We model quantum mechanics as a random walk on the Feynman checkerboard [58]. Feynman used steps taken at light speed; ours will travel a bit more slowly than that. This lets us satisfy the previous requirements. And it lets us take advantage of the extensive literature on random walks (e.g. [59, 60, 61, 62]).
We will start with a random walk model for detection, take the continuous limit as the grid spacing goes to zero, Wick rotate in time, and then apply the results to a GTF. This will give us a well-defined answer for the detection rate for a GTF. #### 2.3.1 Random Walk We start with a grid, indexed by $n\in\left[{0,1,\ldots,\infty}\right]$ in the time direction and by $m\in[-\infty,\ldots,-1,0,1,\ldots,\infty]$ in the space direction. We start with a simple coin tossing picture. We will take the probability to go left or right as equal, both $\frac{1}{2}$. $P_{nm}$ is the probability to be at position $m$ at step $n$. It is given recursively by: ${P_{n+1,m}}={P_{n,m-1}}/2+{P_{n,m+1}}/2$ (36) The number of paths from $\left(0,0\right)\to\left(n,m\right)$ is given by the binomial coefficient $\binom{n}{\frac{n+m}{2}}$. We will take this as zero if $n$ and $m$ do not have the same parity, that is, if they are not both even or both odd. If $\left|{m}\right|>n$ then $P_{nm}=0$. The total probability to get from $\left(0,0\right)\to\left(n,m\right)$ is given by the number of paths times the probability $\frac{1}{{2^{n}}}$ for each one. In our example: ${P_{nm}}=\binom{n}{\frac{n+m}{2}}\frac{1}{{2^{n}}}$ (37) If we wish to start at a position other than zero, say $m^{\prime}$, we replace $m$ with $m-m^{\prime}$. ##### First arrival We can model detection as the first arrival of the path at the position of the detector. We start the particle at step 0 at position $m=-d$ and position the detector at $m=0$. We take the probability of the first arrival at step $n$ when starting at position $-d$ as ${F_{n}}$. We define the function $G_{nm}$ as the probability to arrive at position $m$ at time $n$, _without_ having been previously detected. We can get this directly from the raw probabilities using the reflection principle (e.g.
[59]) as follows: The number of paths that start at $\left(0,-d\right)$ and get to $\left(n,m\right)$ is given by: $\binom{n}{\frac{n+m+d}{2}}$. Now consider all paths that get to $m$ at step $n$ having first touched the detector at some step $f$. By symmetry – reflecting the segment of each such path before $f$ in the line $m=0$ – the number of such paths is the same as the number of paths from the mirror starting point $d$ to $\left(n,m\right)$. The number of these paths is $\binom{n}{\frac{n+\left|{m-d}\right|}{2}}$. Therefore we have the number of paths that reach $m$ without first touching the line $m=0$ as ([63]): ${G_{nm}}=\left({\binom{n}{\frac{n+m+d}{2}}-\binom{n}{\frac{n+m-d}{2}}}\right)\frac{1}{{2^{n}}}$ (38) And of course, if $n$ and $m$ do not have the same parity, $G_{nm}=0$. To get to the detector at step $n$, the particle must have been one step to the left at the previous step. It will then have a probability of $\frac{1}{2}$ to reach the detector. So we have: ${F_{n}}=\frac{1}{2}{G_{n-1,-1}}$ (39) giving: $F_{n}=\frac{d}{n}\binom{n}{\frac{n+d}{2}}\frac{1}{{2^{n}}}=\frac{d}{n}{P_{nd}}$ (40) If the particle is already at position $0$ at step $0$ we declare that it has arrived at time zero so we have: $F_{0}={\delta_{d0}}$ (41) #### 2.3.2 Diffusion equation The passage to the continuum limit is well understood ([63, 62]).
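Before taking the continuum limit, the closed form of equation 40 can be checked against a direct simulation of the walk with an absorbing site (the parameter values are arbitrary):

```python
from math import comb

# Check the closed-form first-arrival probability of equation (40),
# F_n = (d/n) * C(n, (n+d)/2) / 2^n, against dynamic programming on the
# walk with an absorbing site at m = 0, starting from m = -d.
d, n_max, M = 3, 25, 60
prob = [0.0] * (M + 2)          # prob[k] = probability at site m = -k
prob[d] = 1.0

pairs = []
for n in range(1, n_max + 1):
    F_dp = prob[1] / 2          # a step right from m = -1 hits the detector
    new = [0.0] * (M + 2)
    for k in range(1, M + 1):
        new[k] = prob[k + 1] / 2            # from -(k+1), moving right
        if k >= 2:
            new[k] += prob[k - 1] / 2       # from -(k-1), moving left
    prob = new                  # mass stepping onto m = 0 is absorbed
    F_cf = (d / n * comb(n, (n + d) // 2) / 2**n
            if (n + d) % 2 == 0 and n >= d else 0.0)
    pairs.append((F_dp, F_cf))

print(max(abs(a - b) for a, b in pairs))  # ~0: the two agree exactly
```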
Define: $\begin{gathered}\tau=n\Delta\tau\hfill\\\ x=m\Delta x\hfill\end{gathered}$ (42) We get the diffusion equation if we take the limit so that both $\Delta x$ and $\Delta\tau$ become infinitesimal while we keep fixed the ratio: ${D_{0}}=\mathop{\lim}\limits_{\Delta\tau,\Delta x\to 0}\frac{{{\left({\Delta x}\right)}^{2}}}{{2\Delta\tau}}$ (43) This gives the diffusion equation for the probability $P$: $\frac{{\partial P}}{{\partial\tau}}={D_{0}}\frac{{{\partial^{2}}P}}{{\partial{x^{2}}}}$ (44) To further simplify we take $D_{0}=\frac{1}{2}$: $\frac{{\partial{P_{\tau}}}}{{\partial\tau}}=\frac{1}{2}\frac{{\partial^{2}}}{{\partial{x^{2}}}}{P_{\tau}}$ (45) If we start with the probability distribution ${P_{0}}\left(x-x^{\prime}\right)=\delta\left({x-x^{\prime}}\right)$ the probability distribution as a function of time is then: ${P_{\tau}}\left({x;x^{\prime}}\right)=\frac{1}{{\sqrt{2\pi\tau}}}\exp\left({-\frac{{{\left({x-x^{\prime}}\right)}^{2}}}{{2\tau}}}\right)$ (46) with the probability of a first arrival at $\tau$ being given by: ${F_{\tau}}\left({x;x^{\prime}}\right)=\frac{{\left|{x-x^{\prime}}\right|}}{{\sqrt{2\pi{\tau^{3}}}}}\exp\left({-\frac{{{\left({x-x^{\prime}}\right)}^{2}}}{{2\tau}}}\right)$ (47) To include the mass, scale by $\tau\to\tau^{\prime}\equiv m\tau$ giving: $\frac{{\partial{P_{\tau^{\prime}}}}}{{\partial\tau^{\prime}}}=\frac{1}{{2m}}\frac{{\partial^{2}}}{{\partial{x^{2}}}}{P_{\tau^{\prime}}}$ (48) and, renaming $\tau^{\prime}\to\tau$: ${P_{\tau}}\left({x;x^{\prime}}\right)=\frac{{\sqrt{m}}}{{\sqrt{2\pi\tau}}}\exp\left({-m\frac{{{\left({x-x^{\prime}}\right)}^{2}}}{{2\tau}}}\right)$ (49) To get the detection rate we have to handle the scaling of the time a bit differently, since it is a probability distribution in time.
The detection rate only makes sense when integrated over $\tau$, so the substitution implies: $\int{d\tau{F_{\tau}}\to\int{\frac{{d\tau^{\prime}}}{m}{F_{\tau^{\prime}}}}}\to\int{d\tau^{\prime}{D_{\tau^{\prime}}}}\Rightarrow{D_{\tau}}=\frac{1}{m}{F_{\tau}}$ (50) So we have: ${D_{\tau}}\left({x-x^{\prime}}\right)=\frac{{\left|{x-x^{\prime}}\right|}}{\tau}\frac{{\sqrt{m}}}{{\sqrt{2\pi\tau}}}\exp\left({-m\frac{{{\left({x-x^{\prime}}\right)}^{2}}}{{2\tau}}}\right)$ (51) As a double check on this, we derive the detection rate directly from the diffusion equation using, again, the method of images (see appendix A). #### 2.3.3 Wick rotation To go from the diffusion equation to the Schrödinger equation: $\imath\frac{\partial}{{\partial\tau}}\psi=-\frac{1}{{2{m}}}\frac{{\partial^{2}}}{{\partial{x^{2}}}}\psi$ (52) we use Wick rotation: $\tau\to-\imath\tau,P\to\psi$ (53) so we have the Wick-rotated kernel $K$: ${K_{\tau}}\left({x;x^{\prime}}\right)=\sqrt{\frac{m}{{2\pi\imath\tau}}}\exp\left({\imath m\frac{{{\left({x-x^{\prime}}\right)}^{2}}}{{2\tau}}}\right)$ (54) and for the kernel for first arrival $F$: ${F_{\tau}}\left({x;x^{\prime}}\right)=\frac{{\left|{x-x^{\prime}}\right|}}{\tau}{K_{\tau}}\left({x-x^{\prime}}\right)$ (55) As a second double check, we derive $F$ directly using Laplace transforms (see appendix B). 
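As one more numerical cross-check (ours, using a change of variable to handle the slowly decaying tail), the first-arrival density of equation 47 integrates to one, as it must: in one-dimensional diffusion every path eventually reaches the boundary:

```python
import math

# Check that the first-arrival density of equation (47),
# F_tau = |x - x'| / sqrt(2 pi tau^3) * exp(-(x-x')^2 / (2 tau)),
# integrates to 1 over tau in (0, inf); here |x - x'| = d = 1.
# Substituting tau = exp(s) makes both tails of the integrand negligible.
d, ds = 1.0, 1e-3
total = 0.0
for k in range(-20000, 80000):
    tau = math.exp(k * ds)
    F = d / math.sqrt(2 * math.pi * tau**3) * math.exp(-d**2 / (2 * tau))
    total += F * tau * ds            # F(tau) dtau, with dtau = tau ds
print(total)  # ~1.0
```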
#### 2.3.4 Application to a Gaussian test function We start with the “bullet” GTF from our analysis of the Kijowski metric (equation 6): ${\varphi_{0}}\left({x}\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}{e^{\imath{p_{0}}x-\frac{1}{{2\sigma_{x}^{2}}}{{\left({x+d}\right)}^{2}}}}$ (56) with: $\left|{d}\right|\gg{\sigma_{x}}$ (57) The wave function at the detector is then: ${\varphi_{\tau}}\left(0\right)=\int\limits_{-\infty}^{0}{dx^{\prime}{F_{\tau}}\left({0;x^{\prime}}\right)}{\varphi_{0}}\left({x^{\prime}}\right)=\int\limits_{-\infty}^{0}{dx^{\prime}\frac{{\left|{0-x^{\prime}}\right|}}{\tau}{K_{\tau}}\left({0;x^{\prime}}\right)}{\varphi_{0}}\left({x^{\prime}}\right)$ (58) We can solve this explicitly in terms of error functions. However since $\varphi_{0}$ is strongly centered on $-d$ we can use the same trick as we did in the analysis of the Kijowski metric. We extend the upper limit of integration to $\infty$: ${\varphi_{\tau}}\left(0\right)\approx-\frac{1}{\tau}\int\limits_{-\infty}^{\infty}{dx^{\prime}x^{\prime}{K_{\tau}}\left({0;x^{\prime}}\right)}{\varphi_{0}}\left({x^{\prime}}\right)$ (59) We see by inspection we can pull down the $x^{\prime}$ by taking the derivative of $\varphi_{0}$ with respect to $p_{0}$. 
This gives: ${\varphi_{\tau}}\left(0\right)\approx\imath\frac{1}{{\tau}}\frac{\partial}{{\partial{p_{0}}}}\int\limits_{-\infty}^{\infty}{dx^{\prime}{K_{\tau}}\left({0;x^{\prime}}\right)}{\varphi_{0}}\left({x^{\prime}}\right)$ (60) We recognize the integral as the integral that propagates the free wave function from $0\to\tau$: ${\varphi_{\tau}}\left(0\right)\approx\imath\frac{1}{{\tau}}\frac{\partial}{{\partial{p_{0}}}}\varphi_{\tau}^{\left({free}\right)}\left(0\right)$ (61) We have: $\varphi_{\tau}^{\left({free}\right)}\left(0\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}\sqrt{\frac{1}{{f_{\tau}^{\left(x\right)}}}}{e^{-\frac{1}{{2\sigma_{x}^{2}f_{\tau}^{\left(x\right)}}}{{\left({d-\frac{{p_{0}}}{m}\tau}\right)}^{2}}-\imath\frac{{p_{0}^{2}}}{{2m}}\tau}}$ (62) so the wave function for a first arrival at $\tau$ is: ${\varphi_{\tau}}\left(0\right)\approx\left({\imath\frac{1}{{m}}\frac{{d-{v_{0}}\tau}}{{\sigma_{x}^{2}f_{\tau}^{\left(x\right)}}}+v_{0}}\right)\varphi_{\tau}^{\left({free}\right)}\left(0\right)$ (63) #### 2.3.5 Metrics We can compute the probability density to a sufficient order by expanding around the average time of arrival: $\bar{\tau}\equiv\frac{d}{{v_{0}}},\delta\tau\equiv\tau-\bar{\tau}$ (64) The first term in the expression for the $\varphi_{\tau}$ scales as $1/{\bar{\tau}}$; the second as a constant so at large $\tau$ we have: ${\rho_{\tau}}\left(0\right)\approx v_{0}^{2}\rho_{\tau}^{\left({free}\right)}\left(0\right)$ (65) The result is the same as with Kijowski, except for the overall factor of $v_{0}^{2}$. Since the uncertainty is produced by normalizing this, the uncertainty in time is the same as with the bullet calculation for Kijowski. But the factor of $v_{0}^{2}$ is troubling, particularly when we recall we are using natural units so that at non-relativistic speeds this is hard to distinguish from zero. The immediate problem is our somewhat thin-skinned approximation. 
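Equation 63 follows from equation 61 by differentiation with respect to $p_{0}$; a quick finite-difference check (parameter values arbitrary):

```python
import numpy as np

# Finite-difference check of equation (63): the first-arrival amplitude
# equals (i/tau) * d/dp0 of the freely propagated wave function at the
# detector (equation 61). All parameters are arbitrary test values.
m, sigma, d, tau = 1.0, 1.0, 5.0, 2.0
f = 1 + 1j * tau / (m * sigma**2)

def phi_free(p0):
    # Equation (62): free wave function evaluated at the detector, x = 0.
    return ((1 / (np.pi * sigma**2))**0.25 / np.sqrt(f)
            * np.exp(-(d - p0 * tau / m)**2 / (2 * sigma**2 * f)
                     - 1j * p0**2 * tau / (2 * m)))

p0, h = 2.0, 1e-6
numeric = 1j / tau * (phi_free(p0 + h) - phi_free(p0 - h)) / (2 * h)
v0 = p0 / m
closed = (1j / m * (d - v0 * tau) / (sigma**2 * f) + v0) * phi_free(p0)
rel = abs(numeric - closed) / abs(closed)
print(rel)  # agrees to finite-difference accuracy
```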
The random walk can only penetrate to position $m=0$ in the grid before it is absorbed. And since the probability density is computed as the square, the result is doubly small. Given that we have taken this limit, it is perhaps surprising that we get a finite detection rate at all. A realistic approximation would have the paths penetrating some finite distance into the detector, with the absorption going perhaps with some characteristic length – as with the Marchewka and Schuss approach. In return for a _more_ detailed examination of the boundary we have gotten a _less_ physically realistic estimate. But the real problem with all three of the approaches we have looked at is their assumption of a crisp boundary between quantum and classical realms. ### 2.4 Implications for time-of-arrival measurements

> “What is it that we human beings ultimately depend on? We depend on our words. We are suspended in language. Our task is to communicate experience and ideas to others. We must strive continually to extend the scope of our description, but in such a way that our messages do not thereby lose their objective or unambiguous character.” – Niels Bohr as cited in [64]

In reality there can be no crisp boundary between quantum and classical realms. Consider the lifespan of an atom built on classical principles, as in, say, Bohr’s planetary model of the atom. A classical electron in orbit around the nucleus undergoes centripetal acceleration due to the nucleus; it therefore emits Larmor radiation and therefore decays in towards the nucleus. The estimated lifespan is of order $10^{-11}$ seconds [65]. What keeps this from happening in the case of a quantum atom is the uncertainty principle: as the electron spirals in towards the nucleus its uncertainty in position is reduced, so its uncertainty in momentum is increased. A larger $\Delta p$ means a larger kinetic energy.
In fact the ground state energy can be estimated as the minimum total energy – kinetic plus potential – consistent with the uncertainty principle. Since everything material is built of atoms, and since there are no classical atoms, there is no classical realm. And therefore it is incorrect to speak of a transition to a classical end state: no such state exists. However classical mechanics does work very well in practice. So what we have is a problem of description: which parts of our experimental setup – particle, source, detector, observers – can be described to acceptable accuracy using classical language, and where must we use a fully quantum description? This approach places us, with Bohr [66], firmly on the epistemological side of the measurement debate. One possible starting point is the program of decoherence [67, 68, 69, 70, 71, 72, 73, 74]. The key question for decoherence is not when a system goes from quantum to classical but rather when a classical description becomes possible. Another is Quantum Trajectory Theory (QTT) [75, 76, 77, 78]. QTT, as its name suggests, is a particularly good fit with the path integral approach, the formalism we are primarily focused on here. For the moment we will explore a more informal treatment, taking advantage of the conceptual simplicity of the path integral approach. We can describe the evolution of a quantum particle in terms of the various paths it takes. In the case of the problem at hand, we visualize the paths as those that enter the detector or do not, that are reflected or are not, that leave and do not return (backflow) or return again, and so on. Ultimately however each of the paths will escape permanently or else enter an entropy sink of some kind, a structure that is sufficiently macroscopic that it is no longer possible to analyze the process in terms of the individual paths. Its effects will live on, perhaps in a photo-electric cascade or a change of temperature.
But the specific path will be lost, like a spy in a crowd. The entropy sink is the one essential part needed for a measurement to take place. In a quantum mechanical system, information – per the no-cloning theorem – is neither created nor destroyed. But the heart of a measurement is the reduction of the complexity of the system under examination to a few numbers. Information is necessarily lost in doing this. In general it is fairly clear where this happens: typically in crossing the boundary from the microscopic to the macroscopic scale.

Figure 3: Global character of the measurement problem

Typical paths may cross the boundary multiple times, ultimately either being absorbed and registered (measured), lost in some way (detector inefficiency), or reflected (backflow). There will be some time delay associated with all of these cases. (A typical representation could be done in terms of a Laplace transform $\mathcal{D}\left(s\right)$ of the detection amplitude plus a Laplace transform $\mathcal{R}\left(s\right)$ of the reflection amplitude. The Marchewka and Schuss fudge parameter may be thought of as a useful first approximation of $\mathcal{D}$.) This is not too far from conventional practice. A macroscopic detector functions as the entropy sink, its efficiency is usually known, and reflections are possible but minimized. In many cases the response time associated with the process of detection may not require much attention – we may only care that there was a detection. Our case is a bit trickier than that: we have to care about the time delay associated with a detection, especially if the time delay introduces some uncertainty in time in its own right. This picture is enough to show that there can not be, in general, a local time-of-arrival operator. Suppose there is such an operator. Consider the paths leaving it on the right, continuing to an entropic sink of some kind.
If we arrange a quantum mirror of some kind (perhaps just a regular mirror), then some of the paths will return to their source. The detection rate will be correspondingly reduced. But this can only be predicted using knowledge of the global situation, of the interposed mirror. An operator local to the boundary cannot know about the mirror and therefore cannot give a correct prediction of the time-of-arrival distribution. Therefore the time-of-arrival cannot in general be given correctly by a purely local operator. Since the fundamental laws are those of quantum mechanics, the analysis must be carried out at a quantum mechanical level – except for those parts where we can show the classical approximation suffices, usually the end states. In fact, there is no guarantee that even the end states can be adequately described in classical terms; while there are no cases – so far – of a perdurable and macroscopic quantum system (but see the macroscopic wave functions in [79, 80]), there is nothing to rule that out. ## 3 Effects of dispersion in time on time-of-arrival measurements

> “Clearly,” the Time Traveller proceeded, “any real body must have extension in four directions: it must have Length, Breadth, Thickness, and — Duration. But through a natural infirmity of the flesh, which I will explain to you in a moment, we incline to overlook this fact. There are really four dimensions, three which we call the three planes of Space, and a fourth, Time. There is, however, a tendency to draw an unreal distinction between the former three dimensions and the latter, because it happens that our consciousness moves intermittently in one direction along the latter from the beginning to the end of our lives.” – H. G. Wells [81]

We now return to our original problem, first estimating the dispersion in time-of-arrival without dispersion in time, then with.
### 3.1 Time-of-arrival measurements without dispersion in time Our efforts to correctly define a time-of-arrival operator have led to the conclusion that the time-of-arrival – like measurement in general – is a system-level property; it is not something that can be correctly described by a local operator. We have therefore to a certain extent painted ourselves into a corner: we would like to compute the results for the general case, but we have shown there are only specific cases. We will deal with this by trading precision for generality. We will first assume a perfect detector, then justify the assumption. #### 3.1.1 Probability current in SQM We will first compute the detection rate by using the probability current, per Marchewka and Schuss [32]. We start with the Schrödinger equation: $\imath\frac{\partial}{{\partial\tau}}{\psi_{\tau}}\left(x\right)=-\frac{1}{{2m}}\frac{{\partial^{2}}}{{\partial{x^{2}}}}{\psi_{\tau}}\left(x\right)$ (66) We will assume we have in hand a wave function that includes the interaction with the detector. We define the probability of detection at clock time $\tau$ as $D_{\tau}$. The total probability of the wave function to be found to the left of the detector at time $\tau$ is the integral of the probability density from $-\infty\to 0$. The total probability to have been detected up to time $\tau$ is the integral of $D_{\tau}$ from $0\to\tau$.
Therefore by conservation of probability we have: $1=\int\limits_{-\infty}^{0}{dx}\psi_{\tau}^{*}\left(x\right){\psi_{\tau}}\left(x\right)+\int\limits_{0}^{\tau}{d\tau^{\prime}{D_{\tau^{\prime}}}}$ (67) We take the derivative with respect to $\tau$ to get: $D_{\tau}=-\int\limits_{-\infty}^{0}{dx\left({\frac{{\partial\psi_{\tau}^{*}\left(x\right)}}{{\partial\tau}}{\psi_{\tau}}\left(x\right)+\psi_{\tau}^{*}\left(x\right)\frac{{\partial{\psi_{\tau}}\left(x\right)}}{{\partial\tau}}}\right)}$ (68) We use the Schrödinger equation: ${D_{\tau}}=\frac{\imath}{{2m}}\int\limits_{-\infty}^{0}{dx\left({\frac{{{\partial^{2}}\psi_{\tau}^{*}\left(x\right)}}{{\partial{x^{2}}}}{\psi_{\tau}}\left(x\right)-\psi_{\tau}^{*}\left(x\right)\frac{{{\partial^{2}}{\psi_{\tau}}\left(x\right)}}{{\partial{x^{2}}}}}\right)}$ (69) And integrate by parts to get the equation for the detection rate: ${D_{\tau}}={\left.{\frac{\imath}{{2m}}\left({\frac{{\partial\psi_{\tau}^{*}\left(x\right)}}{{\partial x}}{\psi_{\tau}}\left(x\right)-\psi_{\tau}^{*}\left(x\right)\frac{{\partial{\psi_{\tau}}\left(x\right)}}{{\partial x}}}\right)}\right|_{x=0}}$ (70) We recognize the expression on the right as the probability current: ${J_{\tau}}\left(x\right)=-\frac{\imath}{{2m}}\left({\psi_{\tau}^{*}\left(x\right)\frac{{\partial{\psi_{\tau}}\left(x\right)}}{{\partial x}}-{\psi_{\tau}}\left(x\right)\frac{{\partial\psi_{\tau}^{*}\left(x\right)}}{{\partial x}}}\right)$ (71) or: ${J_{\tau}}=\psi_{\tau}^{*}\frac{p}{{2m}}{\psi_{\tau}}-{\psi_{\tau}}\frac{p}{{2m}}\psi_{\tau}^{*}$ (72) so: $D_{\tau}=J_{\tau}$ (73) Note that $D_{\tau}$ can, in the general case, be negative. For instance, if the detector includes – per the previous section – a mirror of some kind there may be backflow, probability current going from right to left. Ironically enough, this is in fact the first metric that Kijowski considered in his paper, only to reject it because it violated his classical condition 2.
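The chain from equation 67 to equation 73 can be confirmed numerically for a freely evolving Gaussian packet. A minimal sketch (ours, not from the text), comparing the loss rate of probability on $x<0$ against the current at $x=0$:

```python
import numpy as np

m, sigma, d, p0 = 1.0, 1.0, 20.0, 5.0

def phi(x, tau):
    """Freely evolved Gaussian, initially centered at -d and moving right."""
    f = 1 + 1j*tau / (m*sigma**2)
    return ((np.pi*sigma**2)**-0.25 / np.sqrt(f)
            * np.exp(1j*p0*x - (x + d - p0*tau/m)**2 / (2*sigma**2*f)
                     - 1j*p0**2*tau / (2*m)))

x = np.linspace(-80.0, 0.0, 32001)
tau, dtau, dx = 3.0, 1e-4, 1e-5

# Left side of eq. (68): rate of loss of probability from the region x < 0.
P = lambda t: np.trapz(np.abs(phi(x, t))**2, x)
D = -(P(tau + dtau) - P(tau - dtau)) / (2*dtau)

# Right side, eqs. (70)-(73): probability current J evaluated at x = 0.
dphi = (phi(dx, tau) - phi(-dx, tau)) / (2*dx)
J = np.imag(np.conj(phi(0.0, tau)) * dphi) / m

print(D, J)  # the two agree
```

Here $J=\frac{1}{m}\operatorname{Im}\left[\psi^{*}\partial_{x}\psi\right]$ is just equation 71 rewritten; the finite differences stand in for the analytic derivatives.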
#### 3.1.2 Black box detector Given the wave function we can compute the detection rate using the probability current. The real problem is to compute the wave function in the first place, given that this must in principle include the interaction of the particle with the detector. To treat the general case, we would like a general detector, one where the details of the interaction do not matter. We are looking for something which is a perfect black. We can use Einstein’s clock-in-a-box, but this time run it in reverse: have the detector be a box that has a small window open for a fraction of a second, then check how many particles are in the box afterward by weighing the box. The interior walls of the box provide the necessary entropy sink. We will refer to this as a black box detector. (A “perfect black” is apparently not quite as theoretical a concept as it sounds: the commercial material Vantablack can absorb 99.965% of incoming light [82]. And Vantablack even uses a similar mechanism: incoming light is trapped in vertically aligned nanotube arrays.) In fact, this is a reasonable model of the elementary treatment of detection. When, for instance, we compute the rate of scattering of particles into a bit of solid angle, we assume that the particles will be absorbed with some efficiency, but we do not generally worry about such subtleties as the time delay until a detection is registered or backflow.
#### 3.1.3 Application to a Gaussian test function We will take a GTF at the plane as above: ${\varphi_{\tau}}\left(x\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}\sqrt{\frac{1}{{f_{\tau}^{\left(x\right)}}}}{e^{\imath{p_{0}}x-\frac{1}{{2\sigma_{x}^{2}f_{\tau}^{\left(x\right)}}}{{\left({x+d-\frac{{p_{0}}}{m}{\tau}}\right)}^{2}}-\imath\frac{{p_{0}^{2}}}{{2m}}{\tau}}}$ (74) An elementary application of the probability current gives the detection rate as the velocity times the probability density: ${D_{\tau}}=\frac{{p_{0}}}{m}{\rho_{\tau}}\left(0\right)$ (75) This is the same as we had for the Kijowski bullet (equation 16). And it is, as noted there, also the classical rate for detection of a probability distribution impacting a wall. Our use of the Einstein box in reverse is intended to provide a lowest common denominator for quantum measurements; it is not surprising that this lowest common denominator should be the corresponding classical distribution. We have the uncertainty in time-of-arrival as before (equation 25): $\Delta\tau=\frac{1}{{\sqrt{2}}}{{\bar{\sigma}}_{\tau}}=\frac{1}{{\sqrt{2}}}\frac{1}{{m{v_{0}}{\sigma_{x}}}}\bar{\tau}$ (76) Since we have $\bar{p}=m{v_{0}}$ and – for a minimum uncertainty wave function – ${\sigma_{p}}=\frac{1}{{\sigma_{x}}}$ we can also write: $\frac{{\sigma_{\tau}}}{{\bar{\tau}}}=\frac{{\sigma_{p}}}{{\bar{p}}}$ (77) so the dispersions in momentum and in clock time are closely related, as one might expect. While the results are the same as with the Kijowski metric (for bullet wave functions), the advantage is that we now have a clearer sense of what is actually going on and therefore what corrections we might have to make in practice. These may include (but will probably not be limited to): (1) depletion, e.g. inefficiencies of the detector; (2) backflow, e.g. a reflection coefficient; (3) delay in the detection (not normally important, but significant here); (4)
self-interference of the wave function with itself; (5) edge effects – one can imagine the incoming part of the wave function skimming along the edge of the detector, for instance; and (6) the increasingly complex effects that result from increasingly complex models of the wave function-detector interaction. ### 3.2 Time-of-arrival measurements with dispersion in time #### 3.2.1 Quantum mechanics with uncertainty in time

Figure 4: Paths in three and in four dimensions

Path integrals provide a natural way to extend quantum mechanics to include time in a fully covariant way. In [13] we did this by first extending the usual three-dimensional paths to four dimensions. The rest of the treatment followed from this single assumption together with the twin requirements of covariance and of consistency with known results. But as one might expect, there are a fair number of questions to be addressed along the way – making the full paper both rather long and rather technical. To make this treatment self-contained, we summarize the key points here. ##### Feynman path integrals in four dimensions We start with clock time, defined operationally as what the laboratory clock measures. This is the parameter $\tau$ in the Schrödinger equation, Klein-Gordon equation, path integral sums, and so on. The normal three-dimensional paths are parameterized by clock time; we generalize them to four-dimensional paths, introducing the coordinate time $t$ to do so: ${\bar{\pi}_{\tau}}\left({x,y,z}\right)\to{\pi_{\tau}}\left({t,x,y,z}\right)$ (78) It can be helpful to think of coordinate time as another space dimension. We will review the relationship between clock time and coordinate time at the end, once the formalism has been laid out. (This is a case where it is helpful to let the formalism precede the intuition.) In path integrals we get the amplitude to go from one point to another by summing over all intervening paths, weighing each path by the corresponding action.
The path integral with four-dimensional paths is formally identical to the path integral with three, except that the paths take all possible routes in four rather than three dimensions. We define the kernel as: ${K_{\tau}}\left({x^{\prime\prime};x^{\prime}}\right)=\mathop{\lim}\limits_{N\to\infty}\int\mathcal{D}\pi{e^{\imath\int\limits_{0}^{\tau}{d\tau^{\prime}\mathcal{L}\left[{\pi,\dot{\pi}}\right]}}}$ (79) We use a Lagrangian which works for both the three and four dimensional cases (see [83]): $\mathcal{L}\left({{x^{\mu}},{{\dot{x}}^{\mu}}}\right)=-\frac{1}{2}m{{\dot{x}}^{\mu}}{{\dot{x}}_{\mu}}-q{{\dot{x}}^{\mu}}{A_{\mu}}\left(x\right)-\frac{m}{2}$ (80) As usual with path integrals, to actually do the integrals we discretize clock time, $\tau=N\varepsilon$, and replace the integral over clock time with a sum over steps: $\int\limits_{0}^{\tau}{d\tau^{\prime}\mathcal{L}}\to\sum\limits_{j=1}^{N}{\varepsilon\mathcal{L}_{j}}$ with: $\mathcal{L}_{j}\equiv-m\frac{\left(x_{j}-x_{j-1}\right)^{2}}{2\varepsilon^{2}}-q\frac{x_{j}-x_{j-1}}{\varepsilon}\frac{A\left(x_{j}\right)+A\left(x_{j-1}\right)}{2}-\frac{m}{2}$ (81) And measure: $\mathcal{D}\pi\equiv\left(-\imath\frac{m^{2}}{4\pi^{2}\varepsilon^{2}}\right)^{N}\prod\limits_{n=1}^{N-1}d^{4}x_{n}$ (82) This gives us the ability to compute the amplitude to get from one point to another in a way that includes paths that vary in time as well as space. ##### Schrödinger equation in four dimensions Usually we derive the path integral expressions from the Schrödinger equation. But here we derive the four-dimensional Schrödinger equation from the short time limit of the path integral kernel, running the usual derivation in reverse and with one extra dimension.
We get: $\imath\frac{\partial\psi_{\tau}}{\partial\tau}=-\frac{1}{{2m}}\left({\left({{p_{\mu}}-q{A_{\mu}}}\right)\left({{p^{\mu}}-q{A^{\mu}}}\right)-m^{2}}\right){\psi_{\tau}}$ (83) Note that here $p$ is an operator, $A$ is an external field, and $m$ is a constant. (In the extension to QED, $A$ becomes an operator as well.) This equation goes back to Stueckelberg [14, 15] with further development by Feynman, Fanchi, Land, and Horwitz [16, 23, 24, 27]. We need only the free Schrödinger equation here. With the vector potential $A$ set to zero we have: $\imath\frac{\partial}{{\partial\tau}}{\psi_{\tau}}=-\frac{{{E^{2}}-{\overrightarrow{p}^{2}}-m^{2}}}{{2m}}{\psi_{\tau}}$ (84) Or spelled out: $\imath\frac{\partial}{{\partial\tau}}{\psi_{\tau}}\left({t,\vec{x}}\right)=\frac{1}{{2m}}\left({\frac{{\partial^{2}}}{{\partial{t^{2}}}}-{\nabla^{2}}-{m^{2}}}\right){\psi_{\tau}}\left({t,\vec{x}}\right)$ (85) If the left hand side were zero, the right hand side would be the Klein-Gordon equation (with $t\to\tau$). Over longer times – femtoseconds or more – the left side will in general average to zero, giving the Klein-Gordon equation as the long time limit. More formally, we expect that if we average over times $\Delta\tau$ of femtoseconds or greater, we will have: $\int\limits_{\tau}^{\tau+\Delta\tau}{d\tau^{\prime}\int{{d^{4}}x}}\psi_{\tau}^{*}\left(x\right)\frac{{{E^{2}}-{{\vec{p}}^{2}}-{m^{2}}}}{{2m}}{\psi_{\tau}}\left(x\right)\approx 0$ (86) But at short times – attoseconds – we should see effects associated with uncertainty in time. This is in general how we get from a fully four-dimensional approach to the usual three-dimensional one: the fluctuations in coordinate time tend to average out over femtosecond and longer time scales, much as in SQM quantum effects in space average out over longer time and distance scales to give classical mechanics.
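As a consistency check (ours, not from the text), the Gaussian wave packet in coordinate time that appears below (equation 100) can be verified symbolically to satisfy the time part of equation 85, $\imath\partial_{\tau}\tilde{\varphi}=\frac{1}{2m}\partial_{t}^{2}\tilde{\varphi}$ – note that the sign of the second-derivative term is opposite to that of the space part:

```python
import sympy as sp

t, tau, t0 = sp.symbols('t tau t0', real=True)
m, sig, E0 = sp.symbols('m sigma_t E0', positive=True)

# Gaussian coordinate-time packet (the form of eq. 100), with
# dispersion factor f = 1 - i*tau/(m*sigma_t^2)
f = 1 - sp.I*tau / (m*sig**2)
phi = ((sp.pi*sig**2)**sp.Rational(-1, 4) / sp.sqrt(f)
       * sp.exp(-sp.I*E0*t - (t - t0 - E0*tau/m)**2 / (2*sig**2*f)
                + sp.I*E0**2*tau / (2*m)))

# Time part of eq. (85): i d(phi)/d(tau) - (1/2m) d^2(phi)/dt^2 should vanish.
residual = sp.I*sp.diff(phi, tau) - sp.diff(phi, t, 2) / (2*m)

# Spot-check at arbitrary parameter values (identically zero up to rounding).
sample = {t: sp.Rational(3, 10), tau: sp.Rational(7, 10),
          t0: sp.Rational(1, 10), m: 2, sig: 1, E0: 1}
print(sp.N(residual.subs(sample), 6))
```

The numerical spot-check is used rather than `sp.simplify(residual)` only to keep the run fast; the residual vanishes identically.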
TQM is to SQM – with respect to time – as SQM is to classical mechanics with respect to the three space dimensions. ##### Wave functions in coordinate time We need an estimate of the wave function at source, its evolution in clock time, and the rules for detection – the birth, life, and death of the wave function if you will. ###### Initial wave function We start with the free Klein-Gordon equation and use the method of separation of variables to break the wave function out into a space and a time part: $\varphi_{0}\left(t,x\right)=\tilde{\varphi}_{0}\left(t\right)\bar{\varphi}_{0}\left(x\right)$ (87) Or in energy/momentum space: $\hat{\varphi}_{0}\left(E,p\right)=\hat{\tilde{\varphi}}_{0}\left(E\right)\hat{\bar{\varphi}}_{0}\left(p\right)$ (88) We assume we already have the space part by standard methods. For instance, if this is a GTF in momentum space it will look like: ${{\hat{\bar{\varphi}}}_{\tau}}\left(p\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{p}^{2}}}}}{e^{-\imath p{x_{0}}-\frac{{{\left({p-{p_{0}}}\right)}^{2}}}{{2\sigma_{p}^{2}}}-\imath\frac{{p^{2}}}{{2m}}\tau}}$ (89) This plus the Klein-Gordon equation give us constraints on the time part: $\begin{array}[]{l}\left\langle 1\right\rangle=1\\\ \bar{E}\equiv\left\langle E\right\rangle=\sqrt{m^{2}+\left\langle\vec{p}\right\rangle^{2}}\\\ \left\langle E^{2}\right\rangle=\left\langle m^{2}+\vec{p}^{2}\right\rangle=m^{2}+\left\langle\vec{p}^{2}\right\rangle\end{array}$ (90) To get a robust estimate of the shape of the time part we assume it is the maximum entropy solution that satisfies the constraints. 
We use the method of Lagrange multipliers to find it, getting: $\hat{\tilde{\varphi}}_{0}\left(E\right)=\sqrt[4]{\frac{1}{\pi\sigma_{E}^{2}}}e^{\imath\left(E-E_{0}\right)t_{0}-\frac{\left(E-E_{0}\right)^{2}}{2\sigma_{E}^{2}}}$ (91) with values: $\sigma_{E}^{2}=\sigma_{p}^{2}$ (92) and: $E_{0}=\sqrt{m^{2}+\bar{p}^{2}}$ (93) To get the starting wave function in time we take the inverse Fourier transform: $\tilde{\varphi}_{0}\left(t\right)=\sqrt[4]{\frac{1}{\pi\sigma_{t}^{2}}}e^{-\imath E_{0}\left(t-t_{0}\right)-\frac{\left(t-t_{0}\right)^{2}}{2\sigma_{t}^{2}}}$ (94) $\sigma_{t}^{2}=\frac{1}{\sigma_{E}^{2}}$ (95) The value of $\sigma_{t}$ will normally be of order attoseconds. We set $t_{0}=0$, as the overall phase is already supplied by the space/momentum part. Since estimates from maximum entropy tend to be robust, we expect this approach will give estimates which are order-of-magnitude correct. ###### Propagation of a wave function The TQM form of the Klein-Gordon equation is formally identical to the non-relativistic Schrödinger equation with the additional coordinate time term. In momentum space one can use this insight to write the kernel out by inspection: ${{\hat{K}}_{\tau}}\left({p^{\prime\prime};p^{\prime}}\right)={\delta^{\left(4\right)}}\left({p^{\prime\prime}-p^{\prime}}\right)\exp\left({\imath\frac{{{{p^{\prime}}^{2}}-{m^{2}}}}{{2m}}\tau}\right)\theta\left(\tau\right)$ (96) The coordinate space form is: ${K_{\tau}}\left({x^{\prime\prime};x^{\prime}}\right)=-\imath\frac{{m^{2}}}{{4{\pi^{2}}{\tau^{2}}}}{e^{-\imath m\frac{{{\left({x^{\prime\prime}-x^{\prime}}\right)}^{2}}}{{2\tau}}-\imath\frac{m}{2}\tau}}\theta\left(\tau\right)$ (97) In both cases we have the product of a coordinate time part and the familiar space part.
Spelling this out in coordinate space (we use an over-tilde and an over-bar to distinguish between the coordinate time and the space parts, when this is useful): $\begin{gathered}{K_{\tau}}\left({x^{\prime\prime};x^{\prime}}\right)={{\tilde{K}}_{\tau}}\left({{t^{\prime\prime}};{t^{\prime}}}\right){{\bar{K}}_{\tau}}\left({{{\vec{x}}^{\prime\prime}};{{\vec{x}}^{\prime}}}\right)\exp\left({-\imath\frac{m}{2}\tau}\right)\theta\left(\tau\right)\hfill\\\ {{\tilde{K}}_{\tau}}\left({{t^{\prime\prime}};{t^{\prime}}}\right)=\sqrt{\frac{{\imath m}}{{2\pi\tau}}}\exp\left({-\imath m\frac{{{\left({{t^{\prime\prime}}-{t^{\prime}}}\right)}^{2}}}{{2\tau}}}\right)\hfill\\\ {{\bar{K}}_{\tau}}\left({{{\vec{x}}^{\prime\prime}};{{\vec{x}}^{\prime}}}\right)={\sqrt{-\frac{\imath m}{{2\pi\tau}}}^{3}}\exp\left({\imath m\frac{{{\left({\vec{x}^{\prime\prime}-\vec{x}^{\prime}}\right)}^{2}}}{{2\tau}}}\right)\hfill\end{gathered}$ (98) where the space part of the kernel is merely the familiar non-relativistic kernel (e.g. Merzbacher [84]). The additional factor of $\exp\left({-\imath\frac{m}{2}\tau}\right)$ contributes only an overall phase, which plays no role in single particle calculations. If the initial wave function can be written as the product of a coordinate time and a three-space part, then the propagation of the three-space part is the same as in non-relativistic quantum mechanics. In general, if the free wave function starts as a direct product in time and space it stays that way. Of particular interest here is the behavior of a GTF in time.
If at clock time zero it is given by: $\tilde{\varphi}_{0}\left(t\right)\equiv\sqrt[4]{\frac{1}{\pi\sigma_{t}^{2}}}e^{-\imath E_{0}\left(t-t_{0}\right)-\frac{\left(t-t_{0}\right)^{2}}{2\sigma_{t}^{2}}}$ (99) then as a function of clock time it is: $\tilde{\varphi}_{\tau}\left(t\right)=\sqrt[4]{\frac{1}{\pi\sigma_{t}^{2}}}\sqrt{\frac{1}{f_{\tau}^{\left(t\right)}}}e^{-\imath E_{0}t-\frac{1}{2\sigma_{t}^{2}f_{\tau}^{\left(t\right)}}\left(t-t_{0}-\frac{E_{0}}{m}\tau\right)^{2}+\imath\frac{E_{0}^{2}}{2m}\tau}$ (100) with dispersion factor $f_{\tau}^{\left(t\right)}\equiv 1-\imath\frac{\tau}{m\sigma_{t}^{2}}$ and with expectation, probability density, and uncertainty: $\begin{array}[]{l}\left\langle{t_{\tau}}\right\rangle={t_{0}}+\frac{E}{m}\tau={t_{0}}+\gamma\tau\\\ {{\tilde{\rho}}_{\tau}}\left(t\right)=\sqrt{\frac{1}{{\pi\sigma_{t}^{2}}\left({1+\frac{{\tau^{2}}}{{{m^{2}}\sigma_{t}^{4}}}}\right)}}\exp\left({-\frac{{{\left({t-\left\langle{t_{\tau}}\right\rangle}\right)}^{2}}}{{\sigma_{t}^{2}\left({1+\frac{{\tau^{2}}}{{{m^{2}}\sigma_{t}^{4}}}}\right)}}}\right)\\\ {\left({\Delta t}\right)^{2}}\equiv\left\langle t^{2}\right\rangle-{\left\langle t\right\rangle^{2}}=\frac{{\sigma_{t}^{2}}}{2}\left|{1+\frac{{\tau^{2}}}{{{m^{2}}\sigma_{t}^{4}}}}\right|\end{array}$ (101) The behavior of the time part is given by replacing $x\to t$ and taking the complex conjugate. As noted, coordinate time behaves like a fourth space dimension, albeit one that enters with the sign of $\imath$ flipped. ###### Detection of a wave function To complete the birth, life, death cycle we look at the detection of a wave function. This is obviously the place where things can get very tricky. However the general rule that coordinate time acts as a fourth space coordinate eliminates much of the ambiguity. 
If we were doing a measurement along the $y$ dimension and we registered a click in a detector located at $y_{detector}\pm\frac{\Delta y}{2}$ we would take this as meaning that we had measured the particle as being at position $y_{detector}$ with uncertainty $\Delta y$. Because of our requirement of the strictest possible correspondence between coordinate time and the space dimensions, the same rule necessarily applies in coordinate time. If we have a clock in the detector with time resolution $\Delta\tau$, then a click in the detector implies a measurement of the particle at coordinate time $t=\tau$ with uncertainty $\Delta\tau$. So the detector does an implicit conversion from coordinate time to clock time, because we have assumed we know the position of the detector in clock time. How do we justify this assumption? The justification is that we take the clock time of the detector as being itself the average of the coordinate time of the detector: ${\tau_{Detector}}\equiv\left\langle{t_{Detector}}\right\rangle$ (102) The detector will in general be made up of a large number of individual particles, each with a four-position in coordinate time and the three space dimensions. While in TQM the individual particles may be a bit in the future or past, the average will behave as a classical variable – specifically, as the clock time. This drops out of the path integral formalism, which supports an Ehrenfest’s theorem in time (again in [13]). If we take the coordinate time of a macroscopic object as the average of the coordinate times of its component particles, the suppression of fluctuations is stronger still: the motion in time of a macroscopic object will no more display quantum fluctuations in time than its motion in space displays quantum fluctuations in the $x,y,z$ directions. This explains the differences in the properties of the clock time and coordinate time.
The clock time is something we think of as only going forward; its corresponding energy operator is bounded below, usually by zero. But if it is really an expectation of the coordinate time of a macroscopic object, then it is not that the clock time cannot go backward, it is that it is statistically extremely unlikely to do so. The principle is the same as the argument that while the gas molecules in a room could suddenly pile up in one half of a room and leave the other half a vacuum, they are statistically extremely unlikely to do so. Further the conjugate variable for coordinate time ${E^{\left({op}\right)}}\equiv\imath\frac{\partial}{{\partial t}}$ is not subject to Pauli’s theorem [85]. The values of $E$ – the momentum conjugate to the coordinate time $t$ – can go negative. If we look at a non-relativistic GTF: ${{\hat{\tilde{\varphi}}}_{0}}\left(E\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{E}^{2}}}}}{e^{\imath E{t_{0}}-\frac{{{\left({E-{E_{0}}}\right)}^{2}}}{{2\sigma_{E}^{2}}}}},{\sigma_{E}}=\frac{1}{{\sigma_{t}}},{E_{0}}\approx m+\frac{{{\vec{p}}^{2}}}{{2m}}$ (103) the value of $m$ for an electron is about 500,000 eV, while for $\sigma_{t}$ of order attoseconds the $\sigma_{E}$ will be of order 6000 eV. Therefore the negative energy part is about $500000/6000$ or $83$ standard deviations away from the average. And therefore the likelihood of detecting a negative energy part of the wave function is zero to an excellent approximation. But not _exactly_ zero. So, clock time is the classical variable corresponding to the fully quantum mechanical coordinate time. They are two perspectives on the same universe. #### 3.2.2 Probability current in TQM We now compute the probability current in TQM, working in parallel to the Marchewka and Schuss derivation for SQM covered above (and with similar caveats about its applicability). 
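Before doing so, a brief numerical aside on the estimate above (our sketch; the tail bound is the standard large-$x$ asymptotic for the normal distribution): with the electron mass at about $500{,}000$ eV and $\sigma_{E}$ of order $6000$ eV, the Gaussian tail below $E=0$ is so small it underflows double precision, so we work with its logarithm:

```python
import math

# Electron rest mass and energy width from the text: m ~ 500,000 eV, sigma_E ~ 6000 eV.
m_e, sigma_E = 5.0e5, 6.0e3
n = m_e / sigma_E                      # standard deviations between E = 0 and the mean

# Gaussian upper-tail estimate Q(n) ~ exp(-n^2/2)/(n*sqrt(2*pi)) for large n;
# the value itself underflows, so compute its base-10 logarithm instead.
log10_Q = (-n**2 / 2 - math.log(n * math.sqrt(2*math.pi))) / math.log(10)
print(round(n, 1), round(log10_Q))     # -> 83.3 -1510
```

That is, the negative-energy probability is of order $10^{-1510}$: zero to an excellent approximation, but not exactly zero, as stated above.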
By conservation of probability the sum of the detections as of clock time $\tau$ plus the total probability remaining at $\tau$ must equal one: $1=\int\limits_{0}^{\tau}{d\tau^{\prime}\int\limits_{-\infty}^{\infty}{dt{D_{\tau^{\prime}}}\left(t\right)}}+\int\limits_{-\infty}^{0}{dx}\int\limits_{-\infty}^{\infty}{dt}\psi_{\tau}^{*}\left({t,x}\right){\psi_{\tau}}\left({t,x}\right)$ (104) Again we take the derivative with respect to clock time to get: $\int\limits_{-\infty}^{\infty}{dt{D_{\tau}}\left(t\right)}=-\int\limits_{-\infty}^{0}{dx}\int\limits_{-\infty}^{\infty}{dt}\left({\dot{\psi}_{\tau}^{*}\left({t,x}\right){\psi_{\tau}}\left({t,x}\right)+\psi_{\tau}^{*}\left({t,x}\right){{\dot{\psi}}_{\tau}}\left({t,x}\right)}\right)$ (105) From the Schrödinger equation: $\begin{gathered}{{\dot{\psi}}_{\tau}}=-\imath\frac{1}{{2m}}\left({\frac{{\partial^{2}}}{{\partial{t^{2}}}}-\frac{{\partial^{2}}}{{\partial{x^{2}}}}}\right){\psi_{\tau}}\hfill\\\ \dot{\psi}_{\tau}^{*}=\imath\frac{1}{{2m}}\left({\frac{{\partial^{2}}}{{\partial{t^{2}}}}-\frac{{\partial^{2}}}{{\partial{x^{2}}}}}\right)\psi_{\tau}^{*}\hfill\end{gathered}$ (106) We use integration by parts in coordinate time to show the terms in the second derivative of coordinate time cancel: $\imath\frac{1}{{2m}}\left({\frac{{\partial^{2}}}{{\partial{t^{2}}}}\psi_{\tau}^{*}}\right){\psi_{\tau}}-\imath\frac{1}{{2m}}\psi_{\tau}^{*}\frac{{\partial^{2}}}{{\partial{t^{2}}}}{\psi_{\tau}}\to-\imath\frac{1}{{2m}}\left({\frac{\partial}{{\partial t}}\psi_{\tau}^{*}}\right)\frac{\partial}{{\partial t}}{\psi_{\tau}}+\imath\frac{1}{{2m}}\left({\frac{\partial}{{\partial t}}\psi_{\tau}^{*}}\right)\frac{\partial}{{\partial t}}{\psi_{\tau}}=0$ (107) leaving: 
$\int\limits_{-\infty}^{\infty}{dt{D_{\tau}}\left(t\right)}=\frac{\imath}{{2m}}\int\limits_{-\infty}^{\infty}{dt}\int\limits_{-\infty}^{0}{dx}\left({\left({\frac{{{\partial^{2}}\psi_{\tau}^{*}\left({t,x}\right)}}{{\partial{x^{2}}}}}\right){\psi_{\tau}}\left({t,x}\right)-\psi_{\tau}^{*}\left({t,x}\right)\left({\frac{{{\partial^{2}}{\psi_{\tau}}\left({t,x}\right)}}{{\partial{x^{2}}}}}\right)}\right)$ (108) We have the probability current in the $x$ direction at each instant in coordinate time for fixed clock time: $j_{\tau}\left({t,x}\right)\equiv\frac{\imath}{{2m}}\frac{{\partial\psi_{\tau}^{*}\left({t,x}\right)}}{{\partial x}}{\psi_{\tau}}\left({t,x}\right)-\frac{\imath}{{2m}}\psi_{\tau}^{*}\left({t,x}\right)\frac{{\partial{\psi_{\tau}}\left({t,x}\right)}}{{\partial x}}$ (109) $j_{\tau}\left({t,x}\right)=\left({\frac{{p_{x}}}{{2m}}\psi_{\tau}^{*}\left({t,x}\right)}\right){\psi_{\tau}}\left({t,x}\right)+\psi_{\tau}^{*}\left({t,x}\right)\frac{{p_{x}}}{{2m}}{\psi_{\tau}}\left({t,x}\right)$ (110) What we are after is the full detection rate at a specific coordinate time: $D\left(t\right)=\int{d\tau{D_{\tau}}\left(t\right)}$ (111) If we accept that the equality in equation 108 applies at each instant in coordinate time (detailed balance) we have: ${D_{\tau}}\left(t\right)={\left.{{j_{\tau}}\left({t,x}\right)}\right|_{x=0}}$ (112) giving: $D\left(t\right)={\left.{\int{d\tau{j_{\tau}}\left({t,x}\right)}}\right|_{x=0}}$ (113) Recall that the duration in clock time of a path represents the length of the path – in the discrete case the number of steps. To get the sum over all paths ending at a specific coordinate time, we need to sum over all possible path lengths; that is what this integral does. As in SQM we are assuming that once the paths enter the detector they do not return. And as with SQM, this means we have at best a reasonable first approximation.
However as the SQM approximation works well in practice and as we are interested only in first corrections to SQM, this should be enough to achieve falsifiability. #### 3.2.3 Application to a Gaussian test function If we construct the wave function as the direct product of time and space parts: ${\varphi_{\tau}}\left({t,x}\right)={{\tilde{\varphi}}_{\tau}}\left(t\right){{\bar{\varphi}}_{\tau}}\left(x\right)$ (114) then the expression for the probability current simplifies: ${j_{\tau}}\left({t,x}\right)=\left({\left({\frac{{p_{x}}}{{2m}}\bar{\varphi}_{\tau}^{*}\left(x\right)}\right){{\bar{\varphi}}_{\tau}}\left(x\right)+\bar{\varphi}_{\tau}^{*}\left(x\right)\frac{{p_{x}}}{{2m}}{{\bar{\varphi}}_{\tau}}\left(x\right)}\right)\tilde{\varphi}_{\tau}^{*}\left(t\right){{\tilde{\varphi}}_{\tau}}\left(t\right)$ (115) or: ${D_{\tau}}\left(t\right)={{\bar{D}}_{\tau}}{{\tilde{\rho}}_{\tau}}\left(t\right)$ (116) where ${{\bar{D}}_{\tau}}$ is the detection rate computed from the space part alone. Using the time wave function from equation 100 we have for the non-relativistic probability density in time: $\tilde{\rho}_{\tau}\left(t\right)=\sqrt{\frac{1}{\pi\sigma_{t}^{2}\left|f_{\tau}^{\left(t\right)}\right|^{2}}}e^{-\frac{1}{\sigma_{t}^{2}\left|f_{\tau}^{\left(t\right)}\right|^{2}}\left(t-\tau\right)^{2}}$ (117) To get the probability of detection at coordinate time $t$ we convolve the clock-time detection rate with this coordinate-time density.
To solve we look at the limit in long time (as with equation 21): $\sigma_{t}^{2}\left|f_{\tau}^{\left(t\right)}\right|^{2}=\sigma_{t}^{2}+\frac{\tau^{2}}{m^{2}\sigma_{t}^{2}}=\sigma_{t}^{2}+\frac{\left(\bar{\tau}+\delta\tau\right)^{2}}{m^{2}\sigma_{t}^{2}}\approx\frac{\left(\bar{\tau}+\delta\tau\right)^{2}}{m^{2}\sigma_{t}^{2}}\approx\frac{\bar{\tau}^{2}}{m^{2}\sigma_{t}^{2}}$ (118) This gives the probability density: ${{\tilde{\rho}}_{\tau}}\left(t\right)\approx\sqrt{\frac{1}{{\pi\tilde{\sigma}_{\tau}^{2}}}}{e^{-\frac{1}{{\tilde{\sigma}_{\tau}^{2}}}{{\left({t-\tau}\right)}^{2}}}},{{\tilde{\sigma}}_{\tau}}\equiv\frac{{\bar{\tau}}}{{m{\sigma_{t}}}}$ (119) The convolution over $\tau$ is trivial giving: ${{\rho_{\bar{\tau}}}\left(t\right)=\sqrt{\frac{1}{{\pi\sigma_{\tau}^{2}}}}{e^{-\frac{{{\left({t-\bar{\tau}}\right)}^{2}}}{{\sigma_{\tau}^{2}}}}}}$ (120) with the total dispersion in clock time being the sum of the dispersions in coordinate time and in space: ${\sigma_{\tau}^{2}\equiv\tilde{\sigma}_{\tau}^{2}+\bar{\sigma}_{\tau}^{2}}$ (121) This is reasonably simple. The uncertainty is: $\Delta\tau=\frac{1}{{\sqrt{2}}}\sqrt{\tilde{\sigma}_{\tau}^{2}+\bar{\sigma}_{\tau}^{2}}$ (122) We collect the definitions for the two dispersions: $\begin{array}[]{c}\bar{\sigma}_{\tau}^{2}=\frac{{{\bar{\tau}}^{2}}}{{{m^{2}}{v^{2}}\sigma_{x}^{2}}}\hfill\\\ \tilde{\sigma}_{\tau}^{2}\approx\frac{{{\bar{\tau}}^{2}}}{{{m^{2}}\sigma_{t}^{2}}}\hfill\end{array}$ (123) As noted, we expect the particle wave functions to have initial dispersions in energy/time comparable to their dispersions in momentum/space: $\sigma_{t}\sim\sigma_{x}$ (124) In the non-relativistic case, $v\ll 1$, the total uncertainty will be dominated by the space part. While we have looked specifically at wave functions composed of a direct product of GTFs in time and space, the underlying arguments are general.
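The relative size of the two contributions in equation 123 can be checked with a few lines of arithmetic. This is a sketch in natural units; the specific values of $m$, $\bar{\tau}$, $v$, and $\sigma_{t}=\sigma_{x}$ are illustrative assumptions, not experimental inputs:

```python
import math

# Illustrative values in natural units (hbar = c = 1); assumptions for this sketch.
m = 1000.0        # mass
tau_bar = 1.0e6   # mean clock time of flight
v = 0.01          # non-relativistic velocity, v << 1
sigma_x = 1.0     # initial dispersion in space
sigma_t = sigma_x # take sigma_t ~ sigma_x, per equation 124

# Equation 123: the two contributions to the dispersion in clock time
sigma_bar_sq = tau_bar**2 / (m**2 * v**2 * sigma_x**2)  # space part
sigma_tilde_sq = tau_bar**2 / (m**2 * sigma_t**2)       # time part

# Equation 122: total uncertainty in time-of-arrival
delta_tau = math.sqrt(sigma_tilde_sq + sigma_bar_sq) / math.sqrt(2)

# With sigma_t = sigma_x the ratio of the two contributions is just 1/v**2,
# so the space part dominates by four orders of magnitude here.
print(sigma_bar_sq / sigma_tilde_sq)  # ratio = 1/v**2 ~ 1.0e4
```

With $\sigma_{t}\sim\sigma_{x}$ the ratio is exactly $1/v^{2}$, which is the sense in which the space part enters with a $\frac{1}{v}$ factor in the uncertainty itself.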
Therefore we expect that in the case of non-relativistic particles, the uncertainty in time-of-arrival will show evidence of dispersion in coordinate time, but will be dominated by the dispersion in space, because this enters with a $\frac{1}{v}$ factor and in the non-relativistic case $v$ is by definition small. ## 4 Applications ##### Single slit in time At non-relativistic velocities, we expect that the uncertainty in time-of-arrival will be dominated by the space part. To increase the effect of uncertainty in time we can run the wave function through a single slit in time, i.e. a camera shutter which opens and closes very rapidly. In SQM, the wave function will merely be clipped and the uncertainty in time at the detector correspondingly reduced. But in TQM the reduction in $\Delta t$ will increase $\Delta E$. The increase in uncertainty in energy will cause the wave function to be more spread out in time, making the uncertainty in time at the detector arbitrarily great. The case is analogous to the corresponding single slit in space, with $\Delta E,\Delta t$ substituting for $\Delta p,\Delta x$. In our previous paper we examined this case, but our analysis took as its starting point the Kijowski metric and associated approaches. In particular, we shifted back and forth between $q$ and $p$ in phase space in a way that we now recognize as suspect. This obscured an important subtlety in the case of non-relativistic particles. Consider a non-relativistic particle going through a time gate. If the gate is open for only a short time, the particle must have come from an area close to the time gate. Therefore its position in space is also fairly certain. If the particle is traveling with non-relativistic velocity $v\ll 1$ then $\Delta x\sim v\Delta t\Rightarrow\Delta x\ll\Delta t$.
This in turn drives up the uncertainty in momentum, $\Delta p\sim 1/\Delta x$, leading to increased uncertainty in time at the detector, even for SQM. To avoid this problem, we replace the gate with a time-dependent source. We let the particles propagate for a short time past the source, then compare the expected uncertainties without and with dispersion in time. The basic conclusion of the previous work is unchanged: we can make the difference between the two cases arbitrarily large. We work out the details in appendix C. ##### Double slit in time The double slit experiment is often cited as _the_ key experiment for quantum mechanics (most famously by Feynman [54]). And Lindner’s “Attosecond Double-Slit Experiment” [86] provided a key inspiration for the current work, in that it showed we could now explore interference effects at the necessary attosecond scale. Building on this, Palacios et al [87] have proposed an experiment using spin-correlated pairs of electrons which takes advantage of the electrons being indistinguishable to look for interference in the time-energy domain (unfortunately, as far as we know this experiment has not yet been performed). Both experiments use the standard quantum theory to do the analysis of interference in the time/energy dimension. Horwitz [25, 88, 27] notes that as time is a parameter in standard quantum mechanics it is difficult to fully justify this analysis. He provides an alternative analysis in terms of the Stueckelberg equation (our eq 83) which is not subject to this objection. He gets the same spacing for the fringes as Lindner found. Unfortunately for this investigation, since the fringe spacing is the same whether we use Lindner’s analysis or the one here, the fringe pattern does not let us distinguish between the two approaches; it contributes nothing to falsifiability.
We do expect, from the analysis of the single slit experiment, that the individual fringes will be smoothed out from the additional dispersion in time: the brights less bright, the darks less dark. However the contribution of this effect to falsifiability is already covered by the analysis of the single slit experiment. ##### Quantum electrodynamics While the above is sufficient to establish that we can detect the effects of uncertainty in time, it is clear that the strongest and most interesting effects will be seen at high energies and relativistic speeds, where the effects of quantum field theory (QFT) will need to be taken into account. In the previous paper, we looked at the QFT approach to systems of spinless, massive particles. We were able to extend TQM to QFT in a straightforward and unambiguous way. We also showed that we recover the usual Feynman rules in the appropriate limit, and that we will see various interesting effects from the additional uncertainty in time. One obvious concern was that the additional dimension in the loop integrals might make the theory still more divergent than QFT already is, perhaps even unrenormalizable. But in the spin 0 massive case we do not see the usual ultraviolet (UV) divergences: the combination of uncertainty in time, entanglement in time, and the use of finite initial wave functions (rather than plane waves or $\delta$ functions) suppresses the UV divergences without the need for regularization. The extension to spin 1/2 particles and to photons appears to be straightforward. We expect to see the various effects of time-as-observable for spin 1/2 particles and photons: interference in time, entanglement in time, correlations in time, and so on. We will also look at the implications for spin correlations, symmetry/anti-symmetry under exchange of identical particles, and the like.
One caveat: if we analyze the photon propagator using the familiar trick of letting the photon mass $\mu\to 0$[89] we can see from inspection of the $\frac{1}{2\mu}$ factor in the propagator in momentum space: ${K_{\tau}}\left({k;k^{\prime}}\right)=\exp\left({\imath\frac{{{k^{2}}-{\mu^{2}}}}{{2\mu}}\tau}\right){\delta^{4}}\left({k-k^{\prime}}\right)$ (125) that excursions off-shell will be severely penalized. This in turn will limit the size of the effects of TQM in experiments such as the single slit in time. Such experiments are therefore better run with massive particles, the more massive the better in fact. In this paper we have estimated the initial wave functions in time using dimensional and maximum entropy arguments. Once TQM has been extended to include QED, we should be able to combine the standard QED results with these to serve as the zeroth term in a perturbation expansion, then compute first and higher corrections using the standard methods of time-dependent perturbation theory. This should open up a wide variety of interesting experiments. ##### General Relativity As noted in the introduction, TQM is a part of the relativistic dynamics program so can draw on the extensive relativistic dynamics literature. In particular we can take advantage of Horwitz’s extension of relativistic dynamics to General Relativity for single particles, the classical many body problem, and the quantum many body problem [90, 91, 92]. There would appear to be an interesting complementarity between the work of Horwitz and this work. First, TQM supplies an explicit source for the invariant monotonic parameter $\tau$ so that we do not need to introduce this as an additional assumption. Second, TQM avoids the UV divergences which have created significant difficulties for work to extend QFT to General Relativity. Therefore an appropriate combination of the two approaches might provide useful insights into the extension of QFT to General Relativity. 
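Returning to the caveat about the photon propagator: the penalty on off-shell excursions can be illustrated numerically. The phase in equation 125 rotates at rate $\omega=\left(k^{2}-\mu^{2}\right)/2\mu$, so averaging the phase factor over any finite window of clock time suppresses off-shell contributions by a factor bounded by $2/\omega T$, which vanishes as $\mu\to 0$. This is a sketch; the window $T$ and the off-shell distance $k^{2}-\mu^{2}$ are arbitrary illustrative numbers:

```python
import cmath

def averaged_phase(omega, T, n=20000):
    """|(1/T) * integral_0^T exp(i*omega*tau) dtau|, by the midpoint rule."""
    dt = T / n
    total = sum(cmath.exp(1j * omega * (k + 0.5) * dt) for k in range(n)) * dt
    return abs(total / T)

T = 50.0         # clock-time averaging window (illustrative)
off_shell = 1.0  # fixed off-shell distance k^2 - mu^2 (illustrative)

results = {}
for mu in (0.5, 0.05, 0.005):
    omega = off_shell / (2.0 * mu)  # phase rate from equation 125
    bound = 2.0 / (omega * T)       # |e^{i w T} - 1| / (w T) <= 2 / (w T)
    results[mu] = (averaged_phase(omega, T), bound)
    print(f"mu={mu}: |average|={results[mu][0]:.1e}, bound={bound:.1e}")
```

The bound scales linearly with $\mu$: as the photon mass goes to zero the off-shell phase oscillates ever faster and averages away, which is the sense in which such experiments are better run with massive particles.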
## 5 Discussion > “It is not surprising that our language should be incapable of describing > the processes occurring within the atoms, for, as has been remarked, it was > invented to describe the experiences of daily life, and these consist only of > processes involving exceedingly large numbers of atoms. Furthermore, it is > very difficult to modify our language so that it will be able to describe > these atomic processes, for words can only describe things of which we can > form mental pictures, and this ability, too, is a result of daily > experience. Fortunately, mathematics is not subject to this limitation, and > it has been possible to invent a mathematical scheme — the quantum theory — > which seems entirely adequate for the treatment of atomic processes…” – > Werner Heisenberg [93] ###### Falsifiability The Heisenberg uncertainty principle (HUP) in time follows directly from quantum mechanics and relativity. This was clear to both Einstein and Bohr in 1930. However, quantum mechanics since then has not in general included it. Given that the relevant attosecond time scales have been too small to work with – until recently – this is reasonable enough. However it is now possible to look at time at the relevant scales; it is therefore necessary to do so, if we are to understand either time or quantum mechanics. The specific predictions made here are based only on covariance and the requirement of consistency with existing results; they are by construction without free parameters. They should therefore provide at a minimum a reasonable order-of-magnitude target. The time-of-arrival experiment we have focused on here provides merely the most obvious line of attack. Essentially any experiment looking at time-dependent phenomena is a potential candidate.
For instance in Auletta [94] there are about three hundred foundational experiments; most can be converted into a test of uncertainty in time by replacing an $x$ with a $t$. Examples include the single slit in time, the double slit in time, and so on. One may also look at variations intended to multiply normally small effects, such as experiments using resonance or diffraction effects. ###### Even a negative result would be striking From symmetry, one may argue that a positive result is the more likely, but even a negative result would be interesting. For instance, assume we have established there is no HUP in time in one frame. Consider the same experiment from another frame moving at high velocity relative to the first. Consider, say, a Gaussian test function (GTF) in $x$ in the first; make a Lorentz boost of this into the second frame. The Lorentz boost will transform $x\to\gamma\left(x-vt\right)$. This will turn the uncertainty in space into a mixture of uncertainty in space and in time. We then look for uncertainty in time in the boosted frame. If the HUP in time is also rejected in the second frame, how do we maintain the principle of covariance? If the HUP in time is present in the second frame, then we can define a preferred frame as the one in which the HUP in time is maximally falsified. Such a preferred frame is anathema to general relativity. Therefore exploring the precise character of the negative result – uniform across frames, more in some frames than others, and so on – would itself represent an interesting research program. ###### Positive result would have significant practical applications If, on the other hand, the wave function extends in time, this would not only be interesting from the point of view of fundamental physics, it would open up a variety of practical applications. For instance, there would be an additional channel for quantum communication systems to use. Memristors and other time-dependent circuit elements would show interesting effects.
In attosecond chemistry and biochemistry we would expect to see forces of anticipation and regret; if the wave function extends in time, it will cause interactions to start earlier and end later than would otherwise be the case. The mysteries of protein folding could be attacked from a fresh perspective and perhaps unexpected temporal subtleties found. The applications in quantum computing are particularly interesting. Quantum computers will need to compensate for the effects of decoherence along the time dimension. But they should also be able to take advantage of additional computing opportunities along it. And if we have a deeper understanding of the relationship between time and measurement, we may find opportunities to “cut across the diagonal” in the design of quantum computers. I thank my long-time friend Jonathan Smith for invaluable encouragement, guidance, and practical assistance. I thank Ferne Cohen Welch for extraordinary moral and practical support. I thank Martin Land, L. P. Horwitz, and the other organizers and participants of the International Association for Relativistic Dynamics (IARD) 2020 Conference for encouragement, useful discussions, and hosting a talk on this paper at the IARD 2020 conference. I thank the reviewer who drew my attention to Horwitz’s [90]. I thank Steven Libby for several useful conversations. I thank Larry Sorensen for many helpful references. I thank Ashley Fidler for helpful references to the attosecond physics literature. And I thank Avi Marchewka for an interesting and instructive conversation about various approaches to time-of-arrival measurements. And I note none of the above are in any way responsible for any errors of commission or omission in this work. ## Appendix A Direct computation of the detection rate in diffusion We use conservation of probability and the method of images to make a direct computation of the detection rate in the case of diffusion.
The approach is similar to the one used by Marchewka and Schuss to compute the probability current (subsection 3.1.1), although the context is classical rather than quantum. Define ${G_{\tau}}\left(x\right)$ as the probability to get from $d\to x$ without touching the detector at $0$. We get $G$ from the method of images by the same logic as in the discrete case: ${G_{\tau}}\left(x\right)={P_{\tau}}\left({x;-d}\right)-{P_{\tau}}\left({x;d}\right)$ (126) $G$ obeys the diffusion equation: $\frac{{\partial{G_{\tau}}}}{{\partial\tau}}=\frac{1}{{2m}}\frac{{\partial^{2}}}{{\partial{x^{2}}}}{G_{\tau}}$ (127) Take $D_{\tau}$ as the rate of detection at $\tau$. From conservation of probability we have: $1=\int\limits_{-\infty}^{0}{dx{G_{\tau}}\left(x\right)}+\int\limits_{0}^{\tau}{d\tau^{\prime}{D_{\tau^{\prime}}}}$ (128) Take the derivative with respect to $\tau$ and apply the diffusion equation: ${D_{\tau}}=-\int\limits_{-\infty}^{0}{dx\frac{{\partial{G_{\tau}}}}{{\partial\tau}}\left(x\right)}=-\int\limits_{-\infty}^{0}{dx\frac{1}{{2m}}\frac{{\partial^{2}}}{{\partial{x^{2}}}}G}$ (129) Since the term on the right is the integral of a derivative we have: ${D_{\tau}}=-\frac{1}{{2m}}{\left.{\frac{\partial}{{\partial x}}G}\right|_{x=0}}$ (130) By using the explicit form for $P$ (equation 49) we get: ${D_{\tau}}=\frac{d}{\tau}{P_{\tau}}\left(d\right)$ (131) as above (equation 51). ## Appendix B Alternate derivation of the detection amplitude Figure 5: Computation of amplitude for first arrival by using the Laplace transform We compute $F$ by using the Laplace transform. To get to any specific point $x$ we have to get to the point the first time, then return to it zero or more times. 
Therefore we can write the full kernel $K$ as the convolution of the kernel $F$ to arrive for the first time and the kernel $U$ to return zero or more times: ${K_{\tau}}\left({x;x^{\prime}}\right)=\int\limits_{0}^{\tau}{d\tau^{\prime}{U_{\tau-\tau^{\prime}}}\left(x\right){F_{\tau^{\prime}}}\left({x;x^{\prime}}\right)}$ (132) Since the kernel is invariant under space translation: $\begin{gathered}{K_{\tau}}\left({x;x^{\prime}}\right)={K_{\tau}}\left({x-x^{\prime}}\right)\hfill\\\ {F_{\tau}}\left({x;x^{\prime}}\right)={F_{\tau}}\left({x-x^{\prime}}\right)\hfill\end{gathered}$ (133) and: ${U_{\tau}}\left(x\right)={U_{\tau}}\left(0\right)={K_{\tau}}\left(0\right)$ (134) we can simplify the convolution to: ${K_{\tau}}\left({x-x^{\prime}}\right)=\int\limits_{0}^{\tau}{d\tau^{\prime}{U_{\tau-\tau^{\prime}}}F_{\tau^{\prime}}\left({x-x^{\prime}}\right)}$ (135) The Laplace transform of $K$ is the product of the Laplace transforms of $U$ and $F$: $\mathcal{L}\left[K\right]=\mathcal{L}\left[U\right]\mathcal{L}\left[F\right]$ (136) The free kernel is: ${K_{\tau}}\left(x\right)=\sqrt{\frac{m}{{2\pi\imath\tau}}}\exp\left({\imath m\frac{{x^{2}}}{{2\tau}}}\right)$ (137) with Laplace transform: $\mathcal{L}\left[K\right]=-{\left({-1}\right)^{3/4}}\frac{{e^{\left({-1+\imath}\right)\sqrt{ms}\left|x\right|}}}{{\sqrt{2}\sqrt{s}}}\sqrt{m}$ (138) To get the Laplace transform of $U$ take the $x=0$ case: $\mathcal{L}\left[U\right]=-\frac{{\left({-1}\right)^{3/4}}}{{\sqrt{2}\sqrt{s}}}\sqrt{m}$ (139) Therefore we have the Laplace transform for $F$: $\mathcal{L}\left[F\right]={e^{\left({-1+\imath}\right)\sqrt{ms}\left|x\right|}}$ (140) and we get $F_{\tau}$ from the inverse Laplace transform: ${F_{\tau}}\left(x\right)=\frac{{\left|x\right|}}{\tau}{K_{\tau}}\left(x\right)$ (141) ## Appendix C Single slit in time We compare the results of the single slit in time in SQM and TQM.
We model the single slit in time as a particle source located at $x=-d$, emitting particles with momentum $p_{0}$ in the $x$-direction, velocity ${v_{0}}=\frac{{p_{0}}}{m}$. In the SQM case, we assume that the wave function is emitted with probability $\bar{G}_{\tau}$: $\bar{G}\left(\tau\right)\equiv\frac{1}{{\sqrt{2\pi{W^{2}}}}}\exp\left({-\frac{{\tau^{2}}}{{2{W^{2}}}}}\right)$ (142) This is normalized to one, with uncertainty in time $\Delta\tau=W$. To extend to TQM we will replace this probability with an amplitude: ${{\tilde{\varphi}}_{0}}\left(t_{0}\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{t}^{2}}}}}\exp\left({-\imath E_{0}t_{0}-\frac{{t_{0}^{2}}}{{2\sigma_{t}^{2}}}}\right)$ (143) This has probability distribution and uncertainty: $\begin{gathered}{{\tilde{\rho}}_{0}}\left({t_{0}}\right)=\sqrt{\frac{1}{{\pi\sigma_{t}^{2}}}}\exp\left({-\frac{{t_{0}^{2}}}{{\sigma_{t}^{2}}}}\right)\\\ \Delta t=\frac{{\sigma_{t}}}{{\sqrt{2}}}\end{gathered}$ (144) We will take ${\sigma_{t}}=\sqrt{2}W$ so that the uncertainty from the gate is equal to the uncertainty from the initial distribution in coordinate time, to make the comparison as fair as possible. The detector will be positioned at $x=0$. The average time of arrival is $\bar{t}=\bar{\tau}=\frac{d}{{v_{0}}}$. We are interested in the uncertainty in time at the detector as defined above for SQM and TQM. ### C.1 Single slit in time in SQM Figure 6: Single slit in time in SQM In SQM the wave function extends in only the three space dimensions. If we break time up into slices, there is no interference across slices. At each time slice, there is an amplitude for paths from that slice to get to the detector. But if we look at the individual paths emitted during that time slice, some get to the detector at one time, some at an earlier or a later time. At the detector all paths that arrive at the same clock time interfere, constructively or destructively, as may happen. This is the picture used by Lindner et al in their analysis. 
The peaks in the incoming electric field ejected electrons at different times, but when electrons ejected at different times arrived at the detector at the same time they interfered. To make this specific, we will use a simple model. For convenience, we will assume that the source has an overall time dependency of $\exp\left({-\imath\frac{{p_{0}^{2}}}{{2m}}{\tau_{G}}}\right)$ so that our initial wave function is: ${{\bar{\varphi}}_{G}}\left({x_{G}}\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}{e^{\imath{p_{0}}{x_{G}}-\frac{1}{{2\sigma_{x}^{2}}}x_{G}^{2}-\imath\frac{{p_{0}^{2}}}{{2m}}{\tau_{G}}}}$ (145) This is normalized to one at the start: $1=\int{d{x_{G}}\bar{\varphi}_{G}^{*}\left({x_{G}}\right){{\bar{\varphi}}_{G}}\left({x_{G}}\right)}$ (146) So the particle will have a total probability of being emitted of one: $1=\int\limits_{-\infty}^{\infty}{d\tau{{\bar{G}}_{\tau}}\int{d{x_{G}}\bar{\varphi}_{G}^{*}\left({x_{G}}\right){{\bar{\varphi}}_{G}}\left({x_{G}}\right)}}$ (147) The amplitude at the detector from a single moment at the gate will be given by: ${{\bar{\varphi}}_{DG}}\left(x\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}\sqrt{\frac{1}{{f_{DG}^{\left(x\right)}}}}{e^{\imath{p_{0}}x-\frac{1}{{2\sigma_{x}^{2}f_{DG}^{\left(x\right)}}}{{\left({x+d-{v_{0}}{\tau_{DG}}}\right)}^{2}}-\imath\frac{{p_{0}^{2}}}{{2m}}{\tau_{D}}}}$ (148) with ancillary definitions: $\begin{gathered}{\tau_{DG}}\equiv{\tau_{D}}-{\tau_{G}}\hfill\\\ f_{DG}^{\left(x\right)}=1+\imath\frac{{\tau_{DG}}}{{m\sigma_{x}^{2}}}\hfill\end{gathered}$ (149) We have: $x+d-{v_{0}}\left({{\tau_{D}}-{\tau_{G}}}\right)=-{v_{0}}\left({\delta\tau-{\tau_{G}}}\right)$ (150) Both $\delta\tau$ and $\tau_{G}$ are expected to be small.
We can therefore justify taking: $f_{DG}^{\left(x\right)}=1+\imath\frac{{\tau_{DG}}}{{m\sigma_{x}^{2}}}\approx f_{\bar{\tau}}^{\left(x\right)}=1+\imath\frac{{\bar{\tau}}}{{m\sigma_{x}^{2}}}$ (151) Giving: ${{\bar{\varphi}}_{DG}}\left(x\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}\sqrt{\frac{1}{{f_{\bar{\tau}}^{\left(x\right)}}}}{e^{\imath{p_{0}}x-\frac{{v_{0}^{2}}}{{2\sigma_{x}^{2}f_{\bar{\tau}}^{\left(x\right)}}}{{\left({\delta\tau-{\tau_{G}}}\right)}^{2}}-\imath\frac{{p_{0}^{2}}}{{2m}}{\tau_{D}}}}$ (152) To get the full wave function at the detector we need to take the convolution of this with the gate function: ${{\bar{\psi}}_{D}}\left(x\right)=\int\limits_{-\infty}^{\infty}{d{\tau_{G}}\bar{G}\left({\tau_{G}}\right){{\bar{\varphi}}_{DG}}\left(x\right)}$ (153) giving: ${{\bar{\psi}}_{D}}\left(x\right)=\frac{{\sigma_{x}}}{{v_{0}}}\sqrt[4]{{\frac{1}{{\pi\sigma_{x}^{2}}}}}\frac{1}{{\sqrt{2\pi\left({\frac{{\sigma_{x}^{2}}}{{v_{0}^{2}}}+{W^{2}}+\imath\frac{{\bar{\tau}}}{{mv_{0}^{2}}}}\right)}}}\exp\left({\imath{p_{0}}x-\frac{{{\left({\delta\tau}\right)}^{2}}}{{2\left({\frac{{\sigma_{x}^{2}}}{{v_{0}^{2}}}+{W^{2}}+\imath\frac{{\bar{\tau}}}{{mv_{0}^{2}}}}\right)}}-\imath\frac{{p_{0}^{2}}}{{2m}}{\tau_{D}}}\right)$ (154) We can see that the effect of the gate is to increase the effective dispersion in space: $\frac{{\sigma_{x}^{2}}}{{v_{0}^{2}}}\to\frac{{\sigma_{x}^{2}}}{{v_{0}^{2}}}+{W^{2}}\Rightarrow\sigma_{x}^{2}\to\Sigma_{x}^{2}\equiv\sigma_{x}^{2}+v_{0}^{2}{W^{2}}$ (155) The gate effectively adds an uncertainty of $vW$ to the original uncertainty in space. As $vW$ is about the distance a particle would cross while the gate is open, this is reasonable. We get then as the associated uncertainty in time (eq: 76): $\Delta\tau=\frac{1}{{\sqrt{2}}}{{\bar{\Sigma}}_{\tau}}=\frac{1}{{\sqrt{2}}}\frac{1}{{m{v_{0}}{\Sigma_{x}}}}\bar{\tau}$ The longer the gate stays open the greater the resulting uncertainty in time at the detector.
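The variance addition in equation 155 is, in the classical limit, just the familiar rule for convolving Gaussians: the position offset at the detector is the packet's intrinsic Gaussian spread plus an independent shift $v_{0}\tau_{G}$ from the Gaussian-distributed emission time. A quick Monte Carlo check (a sketch; the widths are arbitrary illustrative numbers):

```python
import random
import statistics

random.seed(20240101)  # reproducible sketch

sigma_x = 1.0  # intrinsic spatial width of the packet (illustrative)
v0 = 0.5       # velocity (illustrative)
W = 3.0        # gate width in time (illustrative)

# Classical picture of equation 155: total offset = intrinsic spread
# plus an independent shift v0 * tau_G from the emission-time jitter.
n = 200_000
offsets = [random.gauss(0.0, sigma_x) + v0 * random.gauss(0.0, W)
           for _ in range(n)]

measured = statistics.pvariance(offsets)
expected = sigma_x**2 + v0**2 * W**2  # Sigma_x^2 from equation 155
print(measured, expected)  # measured should be close to expected
```

Independent Gaussian contributions add in variance, which is exactly the $\sigma_{x}^{2}\to\sigma_{x}^{2}+v_{0}^{2}W^{2}$ replacement above.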
The shorter the gate stays open the less the uncertainty in time at the detector, with the minimum being the uncertainty for a free GTF released at the time and location of the gate. ### C.2 Single slit in time in TQM Figure 7: Single slit in time in TQM In TQM the wave function extends in all four dimensions. There is interference across time so it is no longer legitimate to break time up into separate slices (except for purposes of analysis of course). We will take the source as centered at a specific moment in coordinate time, $t=0$. We can write the four dimensional wave function as the product of the time (eq: 143) and space parts: ${\varphi_{0}}\left({{t_{0}},{{\vec{x}}_{0}}}\right)={\tilde{\varphi}_{0}}\left({t_{0}}\right){\bar{\varphi}_{0}}\left({{\vec{x}}_{0}}\right)$ (156) The space part is as above for clock time $\tau=0$. The time part at clock time $\tau$ is: ${{\tilde{\varphi}}_{\tau}}\left(t\right)=\sqrt[4]{{\frac{1}{{\pi\sigma_{t}^{2}}}}}\sqrt{\frac{1}{{f_{\tau}^{\left(t\right)}}}}{e^{-\imath{E_{0}}t-\frac{1}{{2\sigma_{t}^{2}f_{\tau}^{\left(t\right)}}}{{\left({t-\frac{{E_{0}}}{m}\tau}\right)}^{2}}}}$ (157) For a non-relativistic particle $\frac{E}{m}\approx 1$.
The treatment in subsection 3.2.3 applies giving the uncertainty in time at the detector as: $\Delta\tau=\frac{1}{{\sqrt{2}}}\sqrt{\tilde{\sigma}_{\tau}^{2}+\bar{\sigma}_{\tau}^{2}}$ (158) with contributions from the space and time parts of: $\begin{gathered}\bar{\sigma}_{\tau}^{2}=\frac{{{\bar{\tau}}^{2}}}{{{m^{2}}{v^{2}}\sigma_{x}^{2}}}\hfill\\\ \tilde{\sigma}_{\tau}^{2}=\frac{{{\bar{\tau}}^{2}}}{{{m^{2}}\sigma_{t}^{2}}}=\frac{{{\bar{\tau}}^{2}}}{{2{m^{2}}{W^{2}}}}\hfill\end{gathered}$ (159) So we have the scaled uncertainty in time as: $\frac{{\Delta\tau}}{{\bar{\tau}}}=\frac{1}{{\sqrt{2}}}\frac{1}{m}\sqrt{\frac{1}{{{v^{2}}\sigma_{x}^{2}}}+\frac{1}{{2{W^{2}}}}}$ (160) We can see that when $W\sim v\sigma_{x}$ the effects of dispersion in time will be comparable to those from dispersion in space. And as $W\to 0$, the uncertainty in time at the detector will be dominated by the width of the gate in time, going as $\frac{1}{W}$. So the intrinsic uncertainty in space of a GTF creates a corresponding minimum uncertainty in time at the detector given by $\bar{\sigma}_{\tau}$. In SQM the effects of the gate drop to zero as $W\to 0$ while in TQM they go to infinity. In SQM the wave function is clipped in time; in TQM it is diffracted. We therefore have the unambiguous signal – even in the non-relativistic case – that we need to achieve practical falsifiability. 
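This contrast can be made concrete with a short numerical sketch, comparing the far-field SQM estimate (through the effective width $\Sigma_{x}$ of equation 155) with equation 160. All values are illustrative natural-unit numbers, not experimental inputs:

```python
import math

# Illustrative natural-unit values; assumptions for this sketch.
m = 100.0        # mass
v = 0.1          # non-relativistic velocity
sigma_x = 1.0    # intrinsic spatial width
tau_bar = 1.0e4  # mean time of flight

def delta_tau_sqm(W):
    """Far-field SQM estimate: the gate only widens the effective
    spatial dispersion, Sigma_x^2 = sigma_x^2 + v^2 W^2 (equation 155)."""
    Sigma_x = math.sqrt(sigma_x**2 + v**2 * W**2)
    return tau_bar / (math.sqrt(2) * m * v * Sigma_x)

def delta_tau_tqm(W):
    """Equation 160 with sigma_t = sqrt(2) W: diffraction in time adds a
    term going as 1/W^2 under the square root."""
    return (tau_bar / (math.sqrt(2) * m)) * math.sqrt(
        1.0 / (v**2 * sigma_x**2) + 1.0 / (2.0 * W**2))

for W in (100.0, 1.0, 0.01):
    print(f"W={W}: SQM {delta_tau_sqm(W):.1f}, TQM {delta_tau_tqm(W):.1f}")
```

As $W\to 0$ the SQM value settles at the free-GTF limit, so the gate's effect disappears, while the TQM value diverges as $1/W$: the unambiguous signal described above.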
## References * [1] Feynman R P 1965 The Character of Physical Law (Modern Library) ISBN 0 679 60127 9 * [2] Schilpp P A and Bohr N 1949 Discussion with Einstein on Epistemological Problems in Atomic Physics (Evanston: The Library of Living Philosophers) pp 200–241 * [3] Pais A 1982 Subtle is the Lord: The Science and Life of Albert Einstein (Oxford University Press) * [4] Hilgevoord J 1996 American Journal of Physics 64 1451–1456 * [5] Hilgevoord J 1998 American Journal of Physics 66 396–402 * [6] Busch P 2001 The Time Energy Uncertainty Relation (Berlin: Springer-Verlag) pp 69–98 Lecture Notes in Physics * [7] Pauli W 1980 General Principles of Quantum Mechanics (Springer-Verlag) URL http://www.springer.com/us/book/9783540098423 * [8] Dirac P A M 1958 The Principles of Quantum Mechanics 4th ed International series of monographs on physics (Oxford, Clarendon Press) * [9] Muga J G, Sala Mayato R and Egusquiza I L 2002 Time in Quantum Mechanics (Berlin; New York: Springer) ISBN 3540432949 * [10] Muga J G, Sala Mayato R and Egusquiza I L 2008 Time in Quantum Mechanics - Vol 2 (Berlin; New York: Springer-Verlag) ISBN 3540734724 9783540734727 * [11] Ossiander M, Siegrist F, Shirvanyan V, Pazourek R, Sommer A, Latka T, Guggenmos A, Nagele S, Feist J, Burgdörfer J, Kienberger R and Schultze M 2016 Nature Physics 13 280 * [12] Abbott E A 1884 Flatland: A Romance of Many Dimensions (Seeley and Co. of London) * [13] Ashmead J 2019 Journal of Physics: Conference Series 1239 012015 URL https://iopscience.iop.org/article/10.1088/1742-6596/1239/1/012015 * [14] Stueckelberg E C G 1941 Helv. Phys. Acta. 14 51 * [15] Stueckelberg E C G 1941 Helv. Phys. Acta. 14 322–323 * [16] Feynman R P 1948 Rev. Mod. Phys.
20(2) 367–387 * [17] Feynman R P 1949 Phys Rev 76 749–759 * [18] Feynman R P 1949 Phys Rev 76 769–789 * [19] Feynman R P 1950 Physical Review 80 440–457 * [20] Horwitz L P and Piron C 1973 Helvetica Physica Acta 46 * [21] Fanchi J R and Collins R E 1978 Found Phys 8 851–877 * [22] Fanchi J R 1993 Found. Phys. 23 * [23] Fanchi J R 1993 Parameterized Relativistic Quantum Theory (Fundamental Theories of Physics vol 56) (Kluwer Academic Publishers) * [24] Land M C and Horwitz L P 1996 ArXiv e-prints (Preprint hep-th/9601021v1) URL http://lanl.arxiv.org/abs/hep-th/9601021v1 * [25] Horwitz L P 2006 Physics Letters A 355 1–6 (Preprint quant-ph/0507044) URL http://arxiv.org/abs/quant-ph/0507044 * [26] Fanchi J R 2011 Found Phys 41 4–32 * [27] Horwitz L P 2015 Relativistic Quantum Mechanics Fundamental Theories of Physics (Springer Dordrecht Heidelberg New York London) * [28] Kijowski J 1974 Reports on Mathematical Physics 6 361–386 * [29] Marchewka A and Schuss Z 1998 Phys.Lett. A240 177–184 (Preprint quant-ph/9708034) URL https://arxiv.org/abs/quant-ph/9708034 * [30] Marchewka A and Schuss Z 1999 Phys. Lett. A 65 042112 (Preprint quant-ph/9906078) URL https://arxiv.org/abs/quant-ph/9906078 * [31] Marchewka A and Schuss Z 1999 ArXiv e-prints (Preprint quant-ph/9906003) URL https://arxiv.org/abs/quant-ph/9906003 * [32] Marchewka A and Schuss Z 2000 Phys. Rev. A 61 052107 (Preprint quant-ph/9903076) URL https://arxiv.org/abs/quant-ph/9903076 * [33] Muga J G and Leavens C R 2000 Physics Reports 338 353–438 * [34] Baute A D, Mayato R S, Palao J P, Muga J G and Egusquiza I L 2000 Phys.Rev.A 61 022118 (Preprint quant-ph/9904055) URL https://arxiv.org/abs/quant-ph/9904055 * [35] Baute A D, Egusquiza I L, Muga J G and Sala-Mayato R 2000 Phys. Rev. A 61 052111 (Preprint quant-ph/9911088) URL https://arxiv.org/abs/quant-ph/9911088 * [36] Baute A D, Egusquiza I L and Muga J G 2001 Phys. Rev. 
A 64 012501 (Preprint quant-ph/0102005) URL https://arxiv.org/abs/quant-ph/0102005 * [37] Ruggenthaler M, Gruebl G and Kreidl S 2005 Journal of Physics A: Mathematical and General 38 (Preprint quant-ph/0504185) URL https://arxiv.org/abs/quant-ph/0504185 * [38] Anastopoulos C and Savvidou N 2006 J. Math. Phys. 47 122106 (Preprint quant-ph/0509020) URL https://arxiv.org/abs/quant-ph/0509020 * [39] Yearsley J M 2010 ArXiv e-prints (Preprint 1012.2575v1) URL https://arxiv.org/abs/1012.2575v1 * [40] Kiukas J, Ruschhaupt A, Schmidt P O and Werner R F 2012 Journal of Physics A: Mathematical and Theoretical 45 (Preprint 1109.5087) URL https://arxiv.org/abs/1109.5087 * [41] Yearsley J M, Downs D A, Halliwell J J and Hashagen A K 2011 Phys. Rev. A 84 022109 (Preprint 1106.4767) URL https://arxiv.org/abs/1106.4767 * [42] Halliwell J J, Evaeus J, London J and Malik Y 2015 Physics Letters A 379 2445 (Preprint 1504.02509) URL https://arxiv.org/abs/1504.02509 * [43] Kijowski J, Waluk P and Senger K 2015 ArXiv e-prints (Preprint 1512.02867) URL https://arxiv.org/abs/1512.02867 * [44] Das S 2018 ArXiv e-prints (Preprint 1802.01863) URL https://arxiv.org/abs/1802.01863 * [45] Yearsley J M 2011 ArXiv e-prints (Preprint 1110.5790) URL https://arxiv.org/abs/1110.5790 * [46] Rayleigh L 1885 Proceedings of the London Mathematical Society s1-17 4–11 ISSN 0024-6115 (Preprint https://academic.oup.com/plms/article-pdf/s1-17/1/4/4368144/s1-17-1-4.pdf) * [47] Bell J S 1964 Physics 1 195–200 * [48] Clauser J F, Horne M A, Shimony A and Holt R A 1969 Phys. Rev. Lett. 23(15) 880–884 URL htt * [49] Aspect A, Grangier P and Roger G 1981 Phys. Rev. Lett. 47(7) 460–463 * [50] Aspect A, Grangier P and Roger G 1982 Phys. Rev. Lett. 49(2) 91–94 * [51] Aspect A, Dalibard J and Roger G 1982 Phys. Rev. Lett. 
49(25) 1804–1807 * [52] Hensen B, Bernien H, Dréau A E, Reiserer A, Kalb N, Blok M S, Ruitenberg J, Vermeulen R F L, Schouten R N, Abellán C, Amaya W, Pruneri V, Mitchell M W, Markham M, Twitchen D J, Elkouss D, Wehner S, Taminiau T H and Hanson R 2015 Nature 526 682–686 * [53] Muga J G, Baute A D, Damborenea J A and Egusquiza I L 2000 ArXiv e-prints (Preprint quant-ph/0009111) URL https://arxiv.org/abs/quant-ph/0009111 * [54] Feynman R P, Leighton R B and Sands M 1965 The Feynman Lectures on Physics vol Volumes I, II, and III (Addison-Wesley) * [55] Sakurai J J 1967 Advanced Quantum Mechanics (Addison-Wesley) ISBN 0 201 06710 2 * [56] Einstein A, Podolsky B and Rosen N 1935 Phys. Rev. 47 777–780 * [57] Bohr N 1935 Physical Review 48 696–702 * [58] Feynman R P, Hibbs A R and Styer D F 2010 Quantum Mechanics and Path Integrals (Mineola, N.Y.: Dover Publications) ISBN 9780486477220 0486477223 URL http://www.loc.gov/catdir/enhancements/fy1006/2010004550-d.html * [59] Feller W 1968 An Introduction to Probability Theory and Its Applications vol I (John Wiley and Sons, Inc.) * [60] Biggs N L 1989 Discrete Mathematics (Oxford Science Publications) * [61] Graham R L, Knuth D E and Patashnik O 1994 Concrete Mathematics: A Foundation for Computer Science (Addison-Wesley Professional) ISBN 0201558025 * [62] Ibe O C 2013 Elements of Random Walk and Diffusion Processes (Hoboken, N.J.: John Wiley & Sons, Inc.) ISBN 1-118-61793-2; 1-118-61805-X; 1-118-62985-X * [63] Grimmett G R and Stirzaker D R 2001 Probability and Random Processes 3rd ed (Oxford University Press) * [64] Petersen A 1963 The Bulletin of the Atomic Scientists 19 * [65] Olsen J D and McDonald K T 2017 URL reference (Preprint https://www.physics.princeton.edu/ mcdonald/examples/orbitdecay.pdf) URL https://www.physics.princeton.edu/ mcdonald/examples/orbitdecay.pdf * [66] Camilleri K and Schlosshauer M 2015 Stud. Hist. Phil. Mod. Phys. 
49, 73-83 (2015) (Preprint 1502.06547v1) URL http://arxiv.org/abs/1502.06547v1 * [67] Zurek W H 1991 Physics Today 44 33–44 URL https://arxiv.org/abs/quant-ph/0306072 * [68] Omnès R 1994 The Interpretation of Quantum Mechanics (Princeton: Princeton University Press) ISBN 0 691 03669 1 * [69] Giulini D, Joos E, Kiefer C, Kupsch J, Stamatescu I O and Zeh H D 1996 Decoherence and the Appearance of a Classical World in Quantum Theory (Berlin: Springer-Verlag) * [70] Zeh H D 1996 ArXiv e-prints (Preprint quant-ph/9610014) URL https://arxiv.org/abs/quant-ph/9610014 * [71] Zeh H D 2000 Lect.Notes Phys 538 19–42 URL https://arxiv.org/abs/quant-ph/9905004 * [72] Kim Y S and Noz M E 2003 Optics and Spectroscopy 94 (Preprint quant-ph/0206146) URL https://arxiv.org/pdf/quant-ph/0206146.pdf * [73] Joos E 2003 Decoherence and the Appearance of a Classical World in Quantum Theory 2nd ed (Berlin: Springer) ISBN 3540003908 (acid-free paper) URL http://www.loc.gov/catdir/toc/fy038/2003042813.html * [74] Schlosshauer M A 2007 Decoherence and the Quantum-to-Classical Transition (Berlin: Springer) ISBN 9783540357735 (hbk.) URL http://www.loc.gov/catdir/enhancements/fy0814/2007930038-b.html * [75] Carmichael H 1991 An Open Systems Approach to Quantum Optics: Lectures Presented at the Université Libre de Bruxelles, October 28 to November 4, 1991 (Springer) * [76] Dalibard J, Castin Y and Mølmer K 1992 Phys. Rev. Lett. 68(5) 580–583 * [77] Dum R, Zoller P and Ritsch H 1992 Phys. Rev. A 45(7) 4879–4887 * [78] Hegerfeldt G C and Wilser T S 1991 Classical and Quantum Systems: Foundations and Symmetries ed HD Doebner; W Scherer; F Schroeck J (World Scientific) * [79] Kovachy T, Asenbaum P, Overstreet C, Donnelly C A, Dickerson S M, Sugarbaker A, Hogan J M and Kasevich M A 2015 Nature 528 530–533 URL https://doi.org/10.1038/nature16155 * [80] Asenbaum P, Overstreet C, Kovachy T, Brown D D, Hogan J M and Kasevich M A 2017 Phys. Rev. Lett. 
118(18) 183602 URL https://link.aps.org/doi/10.1103/PhysRevLett.118.183602 * [81] Wells H G 1935 The Time Machine (Everyman) URL http://www.fourmilab.ch/etexts/www/wells/timemach/html/ * [82] Jackson J J, Puretzky A A, More K L, Rouleau C M, Eres G and Geohegan D B 2010 ACS Nano 4 7573–7581 URL https://doi.org/10.1021/nn102029y * [83] Goldstein H 1980 Classical Mechanics Second Edition (Reading, MA: Addison-Wesley) * [84] Merzbacher E 1998 Quantum Mechanics (New York: John Wiley and Sons, Inc.) * [85] Pashby T 2014 Time and the Foundations of Quantum Mechanics Ph.D. thesis Dietrich School of Arts and Sciences, University of Pittsburgh URL http://d-scholarship.pitt.edu/21183/1/Pashby2014TimeFoundationsQM.pdf * [86] Lindner F, Schätzel M G, Walther H, Baltuška A, Goulielmakis E, Krausz F, Milošević D B, Bauer D, Becker W and Paulus G G 2005 Physical Review Letters 95 040401 URL http://link.aps.org/doi/10.1103/PhysRevLett.95.040401 http://lanl.arxiv.org/abs/quant-ph/0503165 * [87] Palacios A, Rescigno T N and McCurdy C W 2009 Phys. Rev. Lett. 103(25) 253001 URL https://link.aps.org/doi/10.1103/PhysRevLett.103.253001 * [88] Horwitz L P 2007 Foundations of Physics 37 734–746 URL https://doi.org/10.1007/s10701-007-9127-7 * [89] Zee A 2010 Quantum Field Theory in a Nutshell (Princeton, N.J.: Princeton University Press) ISBN 9780691140346 (hardcover alk. 
paper) * [90] Horwitz L P 2019 The European Physical Journal Plus 134 313 (Preprint 1810.09248) URL https://arxiv.org/abs/1810.09248 * [91] Horwitz L P 2020 The European Physical Journal Plus 135 479 URL https://doi.org/10.1140/epjp/s13360-020-00446-0 * [92] Horwitz L P 2021 The European Physical Journal Plus 136 32 URL https://doi.org/10.1140/epjp/s13360-020-00967-8 * [93] Heisenberg W 1930 The Physical Principles of the Quantum Theory (Chicago: University of Chicago Press) * [94] Auletta G 2000 Foundations and Interpretation of Quantum Mechanics: In the Light of a Critical-Historical Analysis of the Problems and of a Synthesis of the Results (Singapore: World Scientific Publishing Co. Pte. Ltd.)
# Correspondence between the twisted $N=2$ super-Yang-Mills and conformal Baulieu-Singer theories

Octavio C. Junqueiraa,b and Rodrigo F. Sobreirob octavioj<EMAIL_ADDRESS>

a UFRJ — Universidade Federal do Rio de Janeiro, Instituto de Física, Caixa Postal 68528, Rio de Janeiro, Brasil

b UFF — Universidade Federal Fluminense, Instituto de Física, Av. Litoranea s/n, 24210-346, Niterói, RJ, Brasil

###### Abstract

We characterize the correspondence between the twisted $N=2$ super-Yang-Mills theory and the Baulieu-Singer topological theory quantized in the self-dual Landau gauges. While the first is based on an on-shell supersymmetry, the second is based on an off-shell Becchi-Rouet-Stora-Tyutin symmetry. Because of the equivariant cohomology, the twisted $N=2$ theory in the ultraviolet regime and the Baulieu-Singer theory share the same observables, the Donaldson invariants for 4-manifolds. The triviality of the Gribov copies in the Baulieu-Singer theory in these gauges shows that working in the instanton moduli space on the twisted $N=2$ side is equivalent to working in the self-dual gauges on the Baulieu-Singer one. After proving the vanishing of the $\beta$ function in the Baulieu-Singer theory, we conclude that the twisted $N=2$ theory in the ultraviolet regime, in any Riemannian manifold, corresponds to the Baulieu-Singer theory in the self-dual Landau gauges, a conformal gauge theory defined in Euclidean flat space.

###### Contents

- 1 Introduction
- 2 Topological quantum field theories
  - 2.1 Donaldson-Witten theory
    - 2.1.1 The twist transformation
    - 2.1.2 Donaldson polynomials in the weak coupling limit
  - 2.2 Baulieu-Singer off-shell approach
    - 2.2.1 BRST symmetry in topological gauge theories
    - 2.2.2 Geometric interpretation
    - 2.2.3 Doublet theorem and gauge fixing: Baulieu-Singer gauges
    - 2.2.4 Absence of gauge anomalies
    - 2.2.5 Equivariant cohomology and global observables
- 3 Quantum properties of BS theory in the self-dual Landau gauges
  - 3.1 Absence of radiative corrections
  - 3.2 Renormalization ambiguity
- 4 Perturbative $\beta$ functions
  - 4.1 Twisted $N=2$ super-Yang-Mills theory
  - 4.2 Baulieu-Singer topological theory
    - 4.2.1 Nonphysical character of the $\beta$ function in the off-shell approach
    - 4.2.2 Conformal structure of the self-dual gauges
- 5 Characterization of the DW/BS correspondence
  - 5.1 Quantum equivalence between DW and self-dual BS theories
  - 5.2 Considerations about the gauge dependence and possible generalizations
- 6 Conclusions
- A BS Ward identities in the self-dual Landau gauges

## 1 Introduction

Throughout the 1980s, based on the self-dual Yang-Mills equations introduced by A. Belavin, A. Polyakov, A. Schwartz, and Y. Tyupkin in their study of instantons [1], S. K. Donaldson discovered and described topological structures of polynomial invariants for smooth 4-manifolds [2, 3, 4]. The connection between the Floer theory for 3-manifolds [5, 6] and the Donaldson invariants for 4-manifolds with a nonempty boundary, i.e., invariants that take values in Floer groups, led to Atiyah's conjecture [7, 8], in which he proposed that the Floer homology must lead to a relativistic quantum field theory. This conjecture was the motivation for Witten's topological quantum field theory (TQFT) in four dimensions, as Witten himself admits [8]. In [7], Atiyah showed that Floer's results [6] can be seen as a version of a supersymmetric gauge theory. Answering Atiyah's conjecture, Witten found a relativistic formulation of [7], capable of reproducing the Donaldson polynomials in the weak coupling limit of the twisted $N=2$ SYM theory. This TQFT is commonly referred to as the Donaldson-Witten theory (DW) in the Wess-Zumino gauge [9]. In practice, TQFTs have the power to reproduce topological invariants of the base manifold as observables. The first to obtain topological invariants from a quantum field theory was A. S. Schwarz in 1978 [10].
He showed that the Ray-Singer analytic torsion [11] can be represented as a partition function of the Abelian Chern-Simons (CS) action, which is invariant under diffeomorphisms. The Schwarz topological theory was the prototype of the Witten theories of the 1980s. Indeed, the well-known Witten paper in which he reproduces the Jones polynomials of knot theory [12] is the non-Abelian generalization of Schwarz's results [10]. In this work, Witten is actually able to represent topological invariants of 3-manifolds as the partition function of the non-Abelian CS theory. After Witten's result [8], L. Baulieu and I. M. Singer (BS) showed in [13] that the same topological observables can be obtained from a gauge-fixed topological action. In such an approach, the Becchi-Rouet-Stora-Tyutin (BRST) symmetry [14, 15, 16] plays a fundamental role. Unlike Witten's TQFT, it is not built through a linear transformation of a supersymmetric gauge theory, but through a gauge-fixing procedure applied to a topologically invariant action, in such a way that the BRST operator naturally appears as nilpotent without requiring the use of the equations of motion. The geometric interpretation of the BS theory is that the non-Abelian topological theory lies in a universal space graded by the sum of the ghost number and the form degree, where the vertical direction of this double complex is determined by the ghost number and the horizontal one by the form degree. In this space, the topological BRST transformations are written in terms of a universal connection, whose curvature naturally explains the BS approach as a topological Yang-Mills theory with the same global observables as Witten's TQFT. From the physical point of view, the motivation to study TQFTs comes from the mathematical tools of such theories, capable of revealing topological structures of field theories that are independent of variations of the metric, and consequently of the background choice.
One of the major obstacles to constructing a quantum theory of gravity is the integration over all metrics. The introduction of a topological phase in gravity would have the power to make a theory of gravity arise from a symmetry breaking mechanism of a background-independent topological theory111We must say that the introduction of such a topological phase is one of the intricate problems in topological quantum field theories, since one should develop a mechanism to break the topological symmetry. [8, 17]. On the other hand, we can investigate conformal properties of field theories via topological models. In three dimensions, for instance, the connection between the three-dimensional Chern-Simons theory and two-dimensional conformal theories is well known [12]. In four dimensions, TQFTs are intimately connected with the AdS/CFT correspondence [18, 19]. More recently, motivated by string dualities, a topological gravity phase in the early Universe was proposed [20]. Such a phase could explain some puzzles concerning early Universe cosmology. The fact that the DW theory in the UV regime and the BS theory share the same observables is a well-known result in the literature [8, 13, 21, 22, 23]. In this paper, we characterize the correspondence between the DW TQFT and a conformal BS gauge theory at the quantum level. While Witten's theory is based on the twisted version of the $N=2$ super-Yang-Mills theory, the mentioned conformal theory is based on the Baulieu-Singer BRST gauge-fixing approach to a topological action [13]. In recent works [24, 25, 26], the existence of an extra bosonic symmetry was proved in the case of the self-dual Landau gauges222For simplicity we will refer to the (anti-)self-dual Landau gauges, defined by instanton and anti-instanton configurations, see gauge condition (3.3), simply as the self-dual gauges.. This bosonic symmetry relates the Faddeev-Popov and the topological ghost fields.
Together with the known vector supersymmetry [27] and the vanishing tree-level gauge propagator, one observes that the BS theory in the self-dual Landau gauges is indeed tree-level exact [26]. Essentially, the proof of this property is diagrammatic, with some help from algebraic renormalization techniques [16]. This remarkable property inevitably implies a vanishing $\beta$ function, since the theory receives no quantum corrections. Nevertheless, a fully algebraic proof was still lacking until now. It turns out that, for a complete proof of the vanishing of the $\beta$ function of the BS theory in the self-dual Landau gauges, one extra property must be considered: the fact that the Gribov copies are harmless to the self-dual BS theory [28]. This property establishes that the self-dual BS theory is conformal, as it allows one to recover some discrete symmetries. The use of these symmetries makes it possible to eliminate the renormalization ambiguities discussed in [25]. With this information, we were able to establish the correspondence between the self-dual BS theory (a conformal gauge theory defined in Euclidean space), for any value of the coupling constant, and the DW theory in the deep UV. The paper is organized as follows. Section 2 contains an overview of the main properties of the DW and BS theories. We introduce the main aspects of each approach, explaining how each one is constructed from different quantization schemes. As the quantum properties of Witten's TQFT are well known in the literature, we dedicate Section 3 to discussing the quantum properties of the BS theory in the self-dual Landau gauges. In Section 4, we analyze and compare the corresponding $\beta$ functions of each model, after proving that the self-dual BS theory is conformal. Finally, in Section 5, we describe the quantum correspondence between the Witten and self-dual BS topological theories. Section 6 contains our concluding remarks.
## 2 Topological quantum field theories

A topological quantum field theory on a smooth manifold is a quantum field theory which is independent of the metric on the base manifold. Such a theory has no dynamics, no local degrees of freedom, and is only sensitive to the topological invariants which describe the manifold on which the theory is defined. The observables of a TQFT are naturally metric independent. The latter statement leads to the main property of topological field theories, namely, the metric independence of the observable correlation functions of the theory, $\frac{\delta}{\delta g_{\mu\nu}}\langle\mathcal{O}_{\alpha_{1}}(\phi_{i})\mathcal{O}_{\alpha_{2}}(\phi_{i})\cdots\mathcal{O}_{\alpha_{p}}(\phi_{i})\rangle=0\;,$ (2.1) with $\langle\mathcal{O}_{\alpha_{1}}(\phi_{i})\mathcal{O}_{\alpha_{2}}(\phi_{i})\cdots\mathcal{O}_{\alpha_{p}}(\phi_{i})\rangle=\mathcal{N}\int[\mathcal{D}\phi_{i}]\,\mathcal{O}_{\alpha_{1}}(\phi_{i})\mathcal{O}_{\alpha_{2}}(\phi_{i})\cdots\mathcal{O}_{\alpha_{p}}(\phi_{i})\,e^{-S[\phi]}\;,$ (2.2) where $g_{\mu\nu}$ is the metric tensor, $\phi_{i}(x)$ are quantum fields, $\mathcal{O}_{\alpha}$ are functionals of the fields composing the global observables, $S[\phi]$ is the classical action, and $\mathcal{N}$ the appropriate normalization factor. A typical operator $\mathcal{O}_{\alpha}$ is integrated over the whole space in order to capture the global structure of the manifold. Since there are no particles, the only nontrivial observables are of a global nature [29, 30]. As a particular result of (2.1), the partition function of a topological theory is itself a topological invariant, $\frac{\delta}{\delta g_{\mu\nu}}Z[J]=0\;,$ (2.3) insofar as $Z[J]$ represents the vacuum expectation value in the presence of an external source, $Z[J]=\langle 0|0\rangle_{J}$.
As discussed in [31], if the action is explicitly independent of the metric, the topological theory is said to be of Schwarz type; otherwise, if the variation of the action with respect to the metric gives a “BRST-like”-exact term, one says the theory is of Witten type. More precisely, let $\delta$ be an infinitesimal transformation denoting the symmetry of the action $S$ which characterizes the observables of the model. If the following properties are satisfied, $\delta\mathcal{O}_{\alpha}(\phi_{i})=0\,,\quad T_{\mu\nu}(\phi_{i})=\delta G_{\mu\nu}(\phi_{i})\;,$ (2.4) where $T_{\mu\nu}$ is the energy-momentum tensor of the model, $\frac{\delta}{\delta g_{\mu\nu}}S=T_{\mu\nu}\;,$ (2.5) and $G_{\mu\nu}$ is some tensor, then the quantum field theory can be regarded as topological. Obviously, in this case eq. (2.3) is also satisfied, since the expectation value of the $\delta$-exact term vanishes333The nilpotent $\delta$-operator works precisely as a BRST operator, and it is well known that expectation values of BRST-exact terms vanish. For a further analysis concerning renormalization properties, and the definition of physical observables, see [16, 14, 32]. [8, 13]. In fact, by using (2.4) and (2.5), and assuming that the measure $[\mathcal{D}\phi_{i}]$ is invariant under $\delta$, $\displaystyle\frac{\delta}{\delta g_{\mu\nu}}\langle\mathcal{O}_{\alpha_{1}}(\phi_{i})\mathcal{O}_{\alpha_{2}}(\phi_{i})\cdots\mathcal{O}_{\alpha_{p}}(\phi_{i})\rangle$ $\displaystyle=$ $\displaystyle-\int[\mathcal{D}\phi_{i}]\,\mathcal{O}_{\alpha_{1}}(\phi_{i})\mathcal{O}_{\alpha_{2}}(\phi_{i})\cdots\mathcal{O}_{\alpha_{p}}(\phi_{i})T_{\mu\nu}\,e^{-S}$ (2.6) $\displaystyle=$ $\displaystyle\langle\delta[\mathcal{O}_{\alpha_{1}}(\phi_{i})\mathcal{O}_{\alpha_{2}}(\phi_{i})\cdots\mathcal{O}_{\alpha_{p}}(\phi_{i})G_{\mu\nu}]\rangle$ $\displaystyle=$ $\displaystyle 0\;.$ In the above equation we assumed that all $\mathcal{O}_{\alpha}$ are metric independent.
Nevertheless, this is not a requirement of the theory. It is also possible to have a more general theory in which $\frac{\delta}{\delta g_{\mu\nu}}\mathcal{O}_{\alpha}=\delta\mathcal{Q}_{\mu\nu}\neq 0\;,$ (2.7) which preserves the topological structure $\delta_{g_{\mu\nu}}\langle\mathcal{O}_{\alpha_{1}}\cdots\mathcal{O}_{\alpha_{p}}\rangle=\langle\delta(\cdots)\rangle=0$ [31]. Analogously to the BRST operator, eq. (2.6) only makes sense if the $\delta$ operator is nilpotent444In Donaldson-Witten theory, for instance, such an operator is on-shell nilpotent, i.e., $\delta^{2}=0$ upon use of the equations of motion..

### 2.1 Donaldson-Witten theory

As mentioned in the introduction, Witten constructed in [8] a four-dimensional generalization of [7], capable of reproducing the Donaldson invariants [2, 3, 4] in the weak coupling limit. Such a construction can be obtained from the twist transformation of the $N=2$ SYM theory. Let us briefly review some important features of this approach.

#### 2.1.1 The twist transformation

The eight supersymmetry charges ($Q^{i}_{\alpha},\,\bar{Q}_{j\dot{\alpha}}$) of $N=2$ SYM theories obey the SUSY algebra $\displaystyle\\{Q^{i}_{\alpha},\,\bar{Q}_{j\dot{\alpha}}\\}$ $\displaystyle=$ $\displaystyle\delta^{i}_{j}(\sigma_{\mu})_{\alpha\dot{\alpha}}\partial_{\mu}\;,$ $\displaystyle\\{Q^{i}_{\alpha},\,{Q}_{j{\alpha}}\\}$ $\displaystyle=$ $\displaystyle\\{\bar{Q}^{i}_{\dot{\alpha}},\,\bar{Q}_{j\dot{\alpha}}\\}=0\;,$ (2.8) where all indices $(i,j,\alpha,\dot{\alpha})$ run from one to two. The indices $(i,j)$ denote the internal $SU(2)$ symmetry of the $N=2$ SYM action, and $(\alpha,\dot{\alpha})$ are Weyl spinor indices: $\alpha$ denotes right-handed spinors, and $\dot{\alpha}$, left-handed ones.
The fact that both types of indices run from one to two suggests the identification between spinor and supersymmetry indices, $i\equiv\alpha\;.$ (2.9) The $N=2$ SYM theory possesses a global symmetry group given by $SU_{L}(2)\times SU_{R}(2)\times SU_{I}(2)\times U_{R}(1)\;,$ (2.10) where $SU_{L}(2)\times SU_{R}(2)$ is the rotation group, $SU_{I}(2)$ is the internal supersymmetry group labeled by $i$, and $U_{{R}}(1)$ the so-called $\mathcal{R}$-symmetry, under which the supercharges ($Q^{i}_{\alpha},\,\bar{Q}_{j\dot{\alpha}}$) are assigned eigenvalues $(+1$, $-1)$, respectively. The identification performed in eq. (2.9) amounts to a modification of the rotation group, $SU_{L}(2)\times SU_{R}(2)\rightarrow SU_{L}(2)\times SU_{R}(2)^{\prime}\;,$ (2.11) where $SU_{R}(2)^{\prime}$ is the diagonal sum of $SU_{R}(2)$ and $SU_{I}(2)$. The twisted global symmetry of $N=2$ SYM takes the form $SU_{L}(2)\times SU_{R}(2)^{\prime}\times U_{R}(1)$, with the corresponding twisted supercharges $Q^{i}_{\alpha}\rightarrow Q^{\beta}_{\alpha}\,,\quad\bar{Q}_{i\dot{\alpha}}\rightarrow\bar{Q}_{\alpha\dot{\alpha}}\;,$ (2.12) which can be rearranged as $\displaystyle\frac{1}{\sqrt{2}}\epsilon^{\alpha\beta}Q_{\alpha\beta}$ $\displaystyle\equiv$ $\displaystyle\delta\;,$ (2.13) $\displaystyle\frac{1}{\sqrt{2}}\bar{Q}^{\alpha\dot{\alpha}}(\sigma_{\mu})^{\dot{\alpha}\alpha}$ $\displaystyle\equiv$ $\displaystyle\delta_{\mu}\;,$ (2.14) $\displaystyle\frac{1}{\sqrt{2}}(\sigma_{\mu\nu})^{\dot{\alpha}\alpha}Q_{\dot{\alpha}\alpha}$ $\displaystyle\equiv$ $\displaystyle d_{\mu\nu}\;,$ (2.15) where we adopt the same conventions for $\epsilon^{\alpha\beta}$, $(\sigma^{\mu})^{\alpha\dot{\alpha}}$ and $(\sigma_{\mu\nu})^{\dot{\alpha}\alpha}$ as in [33]. The operator $d_{\mu\nu}$ is manifestly self-dual due to the structure of $\sigma_{\mu\nu}$, $d_{\mu\nu}=\frac{1}{2}\varepsilon_{\mu\nu\lambda\rho}d^{\lambda\rho}\;,$ (2.16) reducing the number of its independent components to three.
The operators $\delta$, $\delta_{\mu}$ and $d_{\mu\nu}$ possess eight independent components, into which the eight original supercharges $(Q_{\beta\alpha},\,\bar{Q}_{\alpha\dot{\alpha}})$ are mapped. These operators obey the following twisted supersymmetry algebra $\displaystyle\delta^{2}$ $\displaystyle=$ $\displaystyle 0\;,$ (2.17) $\displaystyle\\{\delta,\delta_{\mu}\\}$ $\displaystyle=$ $\displaystyle\partial_{\mu}\;,$ (2.18) $\displaystyle\\{\delta_{\mu},\delta_{\nu}\\}$ $\displaystyle=$ $\displaystyle\\{d_{\mu\nu},\delta\\}=\\{d_{\mu\nu},d_{\lambda\rho}\\}=0\;,$ (2.19) $\displaystyle\\{\delta_{\mu},d_{\lambda\rho}\\}$ $\displaystyle=$ $\displaystyle-(\varepsilon_{\mu\lambda\rho\sigma}\partial^{\sigma}+g_{\mu\lambda}\partial_{\rho}-g_{\mu\rho}\partial_{\lambda})\;.$ (2.20) The nilpotent scalar supersymmetry charge $\delta$ defines the cohomology of Witten's TQFT, as its observables appear as cohomology classes of $\delta$, which remain well defined on a generic differentiable manifold. The topological nature of the model is implicit in the anti-commutation relation (2.18), which allows one to write the ordinary derivative as a $\delta$-exact term. The gauge multiplet of the $N=2$ SYM theory in the Wess-Zumino gauge is given by the fields $(A_{\mu},\psi^{i}_{\alpha},\bar{\psi}^{i}_{\dot{\alpha}},\phi,\bar{\phi})\;,$ (2.21) where $\psi^{i}_{\alpha}$ is a Majorana spinor (the supersymmetric partner of the gauge connection $A_{\mu}$), and $\phi$ a scalar field, all of them belonging to the adjoint representation of the gauge group. The twist transformation is defined by the identification in eq. (2.9), and thus only acts on the fields $(\psi^{i}_{\alpha},\bar{\psi}^{i}_{\dot{\alpha}})$, leaving the bosonic fields $(A_{\mu},\phi,\bar{\phi})$ unaltered. Explicitly, the twist transformation is given by the linear transformations555Notation: $\Phi_{(\alpha\beta)}=\Phi_{\alpha\beta}+\Phi_{\beta\alpha}$ and $\Phi_{[\alpha\beta]}=\Phi_{\alpha\beta}-\Phi_{\beta\alpha}$.
$\displaystyle\psi^{i}_{\beta}\,\,$ $\displaystyle\rightarrow$ $\displaystyle\,\,\psi_{\alpha\beta}=\frac{1}{2}\left(\psi_{(\alpha\beta)}+\psi_{[\alpha\beta]}\right)\;,$ (2.22) $\displaystyle\bar{\psi}^{i}_{\dot{\alpha}}$ $\displaystyle\rightarrow$ $\displaystyle\bar{\psi}_{\alpha\dot{\alpha}}\,\,\rightarrow\,\,\psi_{\mu}=(\sigma_{\mu})^{\alpha\dot{\alpha}}\bar{\psi}_{\alpha\dot{\alpha}}\,,$ (2.23) together with $\displaystyle\psi_{(\alpha\beta)}\,\,$ $\displaystyle\rightarrow$ $\displaystyle\,\,\chi_{\mu\nu}=(\sigma_{\mu\nu})^{\alpha\beta}\psi_{(\alpha\beta)}\;,$ (2.24) $\displaystyle\psi_{[\alpha\beta]}\,\,$ $\displaystyle\rightarrow$ $\displaystyle\eta=\varepsilon^{\alpha\beta}\psi_{[\alpha\beta]}\;.$ (2.25) The twist consists of a mapping of degrees of freedom. The field $\bar{\psi}_{\alpha\dot{\alpha}}$ has four independent components, as $(\alpha,\dot{\alpha})=\\{1,2\\}$, and is mapped into the field $\psi_{\mu}$, which also has four independent components, as the Lorentz index $\mu=\\{1,2,3,4\\}$ in four dimensions. The same occurs in the other mappings: the symmetric part of $\psi_{\alpha\beta}$, i.e., $\psi_{(\alpha\beta)}$, has three independent components mapped into the self-dual field $\chi_{\mu\nu}$, and the antisymmetric part, $\psi_{[\alpha\beta]}$, with only one independent component, is mapped into the scalar field $\eta$. We must note that $(\psi_{\mu},\chi_{\mu\nu},\eta)$ are anti-commuting field variables due to their spinor origin. Because it is a linear transformation, the twist simply corresponds to a change of variables with trivial Jacobian that could be absorbed in the normalization factor; in other words, both theories (before and after the twist) are perturbatively indistinguishable.
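The component counting behind eq. (2.23) can be checked numerically. The sketch below is not part of the paper's derivation; it assumes a Euclidean-style basis $\sigma_{\mu}=(\mathbb{1},i\sigma_{k})$ (the paper's actual conventions follow [33]) and verifies that the map $\bar{\psi}_{\alpha\dot{\alpha}}\rightarrow\psi_{\mu}=(\sigma_{\mu})^{\alpha\dot{\alpha}}\bar{\psi}_{\alpha\dot{\alpha}}$ is an invertible linear change of variables between four spinor components and four Lorentz components.

```python
import numpy as np

# Assumed Euclidean convention: sigma_mu = (identity, i * sigma_k).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [np.eye(2, dtype=complex), 1j * s1, 1j * s2, 1j * s3]

# Flatten each sigma_mu into a row of a 4x4 matrix M, so that
# psi_mu = sum_{alpha, adot} (sigma_mu)^{alpha adot} psibar_{alpha adot}
# becomes the linear map  psi = M @ psibar.flatten().
M = np.array([s.flatten() for s in sigma])

# A nonzero determinant means the twist map is a bijection:
# four spinor components <-> four Lorentz components.
assert abs(np.linalg.det(M)) > 1e-12

# Round trip: invert the map on a random psibar.
rng = np.random.default_rng(0)
psibar = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi = M @ psibar
assert np.allclose(np.linalg.solve(M, psi), psibar)
```

The trivial (constant) Jacobian of this linear map is exactly what allows it to be absorbed into the normalization factor, as stated above.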
Finally, twisting the $N=2$ SYM action ($S^{N=2}_{SYM}$) [8, 34], in flat Euclidean space, we obtain the Witten four-dimensional topological Yang-Mills action ($S_{W}$), $S^{N=2}_{SYM}[A_{\mu},\psi^{i}_{\alpha},\bar{\psi}^{i}_{\dot{\alpha}},\phi,\bar{\phi}]\,\,\rightarrow\,\,S_{W}[A_{\mu},\psi_{\mu},\chi_{\mu\nu},\bar{\phi},\phi]\;,$ (2.26) where $\displaystyle S_{W}$ $\displaystyle=$ $\displaystyle\frac{1}{g^{2}}\text{Tr}\int d^{4}x\,\left(\frac{1}{2}F^{+}_{\mu\nu}{F^{+}}^{\mu\nu}-\chi_{\mu\nu}\left(D_{\mu}\psi_{\nu}-D_{\nu}\psi_{\mu}\right)^{+}+\eta D_{\mu}\psi^{\mu}\right.$ (2.27) $\displaystyle-$ $\displaystyle\left.\frac{1}{2}\bar{\phi}D_{\mu}D^{\mu}\phi+\frac{1}{2}\bar{\phi}\\{\psi_{\mu},\psi_{\mu}\\}-\frac{1}{2}\phi\\{\chi_{\mu\nu},\chi_{\mu\nu}\\}-\frac{1}{8}\left[\phi,\eta\right]\eta\right.$ $\displaystyle-$ $\displaystyle\left.\frac{1}{32}\left[\phi,\bar{\phi}\right]\left[\phi,\bar{\phi}\right]\right)\;,$ wherein $F^{+}_{\mu\nu}$ is the self-dual field666Following [8, 34], we are considering the positive sign, which corresponds to anti-instantons in the vacuum. A similar construction can be done for instantons, only by changing the sign. $F^{+}_{\mu\nu}=F_{\mu\nu}+\widetilde{F}_{\mu\nu}\,,\quad(\widetilde{F}^{+}_{\mu\nu}=F^{+}_{\mu\nu})\;,$ (2.28) with $\widetilde{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F_{\alpha\beta}$, and, analogously, $\left(D_{\mu}\psi_{\nu}-D_{\nu}\psi_{\mu}\right)^{+}=D_{\mu}\psi_{\nu}-D_{\nu}\psi_{\mu}+\frac{1}{2}\varepsilon_{\mu\nu\alpha\beta}\left(D_{\alpha}\psi_{\beta}-D_{\beta}\psi_{\alpha}\right)\;,$ (2.29) where $D_{\mu}\equiv\partial_{\mu}-g[A_{\mu},\,\cdot\,]$ is the covariant derivative in the adjoint representation of the Lie group $G$, with $g$ the coupling constant.
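The self-duality claim in eq. (2.28), $\widetilde{F}^{+}_{\mu\nu}=F^{+}_{\mu\nu}$, follows from $\widetilde{\widetilde{F}}=F$ in Euclidean signature. As a consistency check (not part of the paper's argument), the minimal sketch below builds the four-dimensional Levi-Civita symbol with $\varepsilon_{1234}=+1$ and verifies the projection numerically on a random antisymmetric tensor.

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation, computed by sorting with transpositions."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# Levi-Civita symbol in four Euclidean dimensions (eps[0,1,2,3] = +1).
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)

def dual(F):
    # F~_{mu nu} = (1/2) eps_{mu nu alpha beta} F_{alpha beta}
    return 0.5 * np.einsum('mnab,ab->mn', eps, F)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
F = A - A.T                      # a random antisymmetric "field strength"

assert np.allclose(dual(dual(F)), F)         # ** = +1 in Euclidean signature

F_plus = F + dual(F)                         # eq. (2.28)
assert np.allclose(dual(F_plus), F_plus)     # self-dual: F~+ = F+

F_minus = F - dual(F)
assert np.allclose(dual(F_minus), -F_minus)  # anti-self-dual part
```

The same decomposition splits the six components of an antisymmetric tensor into three self-dual plus three anti-self-dual ones, which is the counting used for $\chi_{\mu\nu}$ and $d_{\mu\nu}$ above.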
The Witten action777Technically, the Witten action (2.27) is the four-dimensional generalization of the non-relativistic topological quantum field theory [7], whose construction is based on the Floer theory for three-manifolds $\mathcal{M}_{3D}$, in which the Chern-Simons action is taken as a Morse function on $\mathcal{M}_{3D}$, see Floer’s original paper [5]. In a few words, the critical points of the CS action ($W_{CS}$) yield the curvature-free configurations, as $\frac{\delta W_{CS}}{\delta A^{a}_{i}}=-\frac{1}{2}\varepsilon^{ijk}F^{jk}$, where $F^{jk}$ is the 2-form curvature in three dimensions, which defines the gradient flows of a Morse function, see [17]. In the supersymmetric formulation of [7], the Hamiltonian (H) is obtained via the “supersymmetric charges” $d_{t}$ and $d^{*}_{t}$, from the well-known relation $d_{t}d_{t}^{*}+d_{t}^{*}d_{t}=2H$, see [35], whereby $d_{t}=e^{-tW_{CS}}de^{tW_{CS}}$ and $d_{t}^{*}=e^{tW_{CS}}d^{*}e^{-tW_{CS}}$, for a real number $t$, being $d$ the exterior derivative on the space of all connections $\mathcal{A}$, according to the transformation $\delta A^{a}_{i}=\psi^{a}_{i}$, and $d^{*}$ its dual. Before identifying the twist transformation, this formulation (in four dimensions) was employed by Witten in his original paper [12] to obtain the relativistic topological action (2.27). (2.27) possesses the usual Yang-Mills gauge invariance, denoted by888Implicit in this notation are the typical Yang-Mills transformations of all fields, where the gauge field transforms as $A_{\mu}^{\prime}=S^{-1}A_{\mu}S+S^{-1}\partial_{\mu}S$ with $S\in SU(N)$. $\delta^{\text{YM}}_{\text{gauge}}S_{W}=0\;.$ (2.30) Moreover, the theory does not possess gauge anomalies [36].
The symmetry that defines the cohomology of the theory, also known as equivariant cohomology, is the fermionic scalar supersymmetry $\displaystyle\delta A_{\mu}$ $\displaystyle=$ $\displaystyle-\varepsilon\psi_{\mu}\,,\quad\delta\phi=0\,,\quad\delta\bar{\phi}=2i\varepsilon\eta\,,\quad\delta\eta=\frac{1}{2}\varepsilon[\phi,\bar{\phi}]\;,$ $\displaystyle\delta\psi_{\mu}$ $\displaystyle=$ $\displaystyle-\varepsilon D_{\mu}\phi\,,\quad\delta\chi_{\mu\nu}=\varepsilon F^{+}_{\mu\nu}\,,$ (2.31) where $\varepsilon$ is the fermionic supersymmetry parameter, which carries no spin, ensuring that the propagating modes of commuting and anticommuting fields have the same helicities999Precisely, the propagating modes of $A_{\mu}$ have helicities $(1,-1)$, and those of $(\phi,\bar{\phi})$, $(0,0)$; while the fermionic fields $(\eta,\psi,\chi)$ carry helicities $(1,-1,0,0)$.. This symmetry relates bosonic and fermionic degrees of freedom, whose numbers are identical, an inheritance of the original supersymmetric action101010The action $S_{W}$ is also invariant under global scaling with dimensions $(1,0,2,2,1,2)$ for $(A,\phi,\bar{\phi},\eta,\psi,\chi)$, respectively; and preserves an additive $U$ symmetry for the assignments $(0,2,-2,-1,1,-1)$. In the BRST formalism, the latter is naturally recognized as ghost number, as we will see later on.. The price of working in the Wess-Zumino gauge is that, disregarding gauge transformations, one needs to use the equations of motion to recover the nilpotency of $\delta$ [30]. This characterizes the DW theory as an on-shell approach. One can easily verify that (see [8]) $\delta^{2}\Phi=0\;,\quad\text{for}\quad\Phi=\\{A,\psi,\phi,\bar{\phi},\eta\\}\;,$ (2.32) but $\delta^{2}\chi=\text{equations of motion}\;.$ (2.33) Considering the result of eq. (2.33), hereafter we will say that the Witten fermionic symmetry is on-shell nilpotent. 
This symmetry is associated with an on-shell nilpotent “BRST charge”, $\mathcal{Q}$, according to the definition of the $\delta$ variation of any functional $\mathcal{O}$ as a transformation on the space of all functionals of field variables, namely, $\delta\mathcal{O}=-i\varepsilon\cdot\\{\mathcal{Q},\mathcal{O}\\}\,,\quad\text{such that}\quad\mathcal{Q}^{2}|_{\textit{on-shell}}=0\;.$ (2.34) In order to verify that the Witten theory is valid in curved spacetimes, it is worth noting that the commutators of covariant derivatives always appear acting on the scalar field $\phi$, as in $\delta Tr\\{D_{\mu}\psi_{\nu}\cdot\bar{\chi}_{\mu\nu}\\}=\frac{1}{2}Tr([D_{\mu},D_{\nu}]\phi\cdot\bar{\chi}^{\mu\nu})$, so the Riemann tensor does not appear, and the theory can be extended to any Riemannian manifold. In practice one can simply take $\int d^{4}x\rightarrow\int d^{4}x\sqrt{g}\;,$ (2.35) in order to work in a curved spacetime. Such a theory has the property of being invariant under infinitesimal changes in the metric. This property characterizes the Witten model as a topological quantum field theory. Such a feature is verified by the fact that the energy-momentum tensor can be written as the anti-commutator $T_{\mu\nu}=\\{\mathcal{Q},V_{\mu\nu}\\}\;,$ (2.36) which means that $T_{\mu\nu}$ is an on-shell BRST-exact term, $T_{\mu\nu}=\delta V_{\mu\nu}\,,\quad\delta^{2}|_{\textit{on-shell}}=0\;,$ (2.37) with $\displaystyle V_{\mu\nu}$ $\displaystyle=$ $\displaystyle\frac{1}{2}\text{Tr}\\{F_{\mu\sigma}\chi_{\nu}^{\,\,\,\sigma}+F_{\nu\sigma}\chi_{\mu}^{\,\,\,\sigma}-\frac{1}{2}g_{\mu\nu}F_{\sigma\rho}\chi^{\sigma\rho}\\}+\frac{1}{4}g_{\mu\nu}\text{Tr}\eta[\phi,\bar{\phi}]$ (2.38) $\displaystyle+$ $\displaystyle\frac{1}{2}\text{Tr}\\{\psi_{\mu}D^{\nu}\bar{\phi}+\psi_{\nu}D^{\mu}\bar{\phi}-g_{\mu\nu}\psi_{\sigma}D^{\sigma}\bar{\phi}\\}\;.$ Equation (2.37), together with $\delta S_{W}=0$, means that the Witten theory satisfies (on-shell) the second condition displayed in eq. 
(2.4), which allows us to say that $S_{W}$ automatically leads to a four-dimensional topological field model. In other words, $\displaystyle\frac{\delta}{\delta g_{\mu\nu}}Z_{W}$ $\displaystyle=$ $\displaystyle\int\mathcal{D}\Phi(-\frac{\delta}{\delta g_{\mu\nu}}\mathcal{S}_{W})\text{exp}(-\mathcal{S}_{W})$ (2.39) $\displaystyle=$ $\displaystyle-\frac{1}{g^{2}}\langle\\{\mathcal{Q},\int_{M}d^{4}x\sqrt{g}V_{\mu\nu}\\}\rangle=0\;,$ as the expectation value of any $\delta$-exact term vanishes. It remains to determine which kinds of topological/differential invariants can be represented by the Feynman path integral of Witten’s TQFT. As we know, it will naturally reproduce the Donaldson invariants for four-manifolds. #### 2.1.2 Donaldson polynomials in the weak coupling limit An important feature of the twisted $N=2$ SYM is the fact that the theory can be interpreted as quantum fluctuations around classical instanton configurations. To find the nontrivial classical minima one must note that the pure gauge field terms in $S_{W}$ are $S^{gauge}_{W}[A]=\frac{1}{2}\text{Tr}\int d^{4}x\left(F_{\mu\nu}+\widetilde{F}_{\mu\nu}\right)\left(F^{\mu\nu}+\widetilde{F}^{\mu\nu}\right)\;,$ (2.40) which is positive semidefinite, and only vanishes if the field strength $F_{\mu\nu}$ is anti-self-dual, $F_{\mu\nu}=-\widetilde{F}_{\mu\nu}\;,$ (2.41) the same nontrivial vacuum configuration that minimizes the Yang-Mills action in the case of anti-instanton fields. Hence, Witten’s action has nontrivial classical minima for $F=-\widetilde{F}$ and $\Phi_{\text{other fields}}=0$. More precisely, the evaluation of the twisted $N=2$ SYM path integral computes quantum corrections to classical anti-instanton solutions. Another important property of the Witten theory is its invariance under infinitesimal changes in the coupling constant. 
The variation of $Z_{W}$ with respect to $g^{2}$ yields, for similar reasons to (2.39), $\delta_{g^{2}}Z_{W}=\delta_{g^{2}}\left(-\frac{1}{g^{2}}\right)\langle\left\\{\mathcal{Q},X\right\\}\rangle=0\;,$ (2.42) where $X=\frac{1}{4}\text{Tr}F_{\mu\nu}\chi^{\mu\nu}+\frac{1}{2}\text{Tr}\psi_{\mu}D^{\mu}\bar{\phi}-\frac{1}{4}\text{Tr}\eta[\phi,\bar{\phi}]\;.$ (2.43) The Witten partition function, $Z_{W}$, is independent of the gauge coupling $g^{2}$; therefore, we can evaluate $Z_{W}$ in the weak coupling limit, i.e., in the regime of very small $g^{2}$, in which $Z_{W}$ is dominated by the classical minima. The instanton moduli space $\mathcal{M}_{k,N}$ is defined to be the space of all solutions to $F=-\widetilde{F}$ for an instanton with a given winding number $k$ and gauge group $SU(N)$. Perturbing a solution $A_{\mu}$ of $F=-\widetilde{F}$ as $A_{\mu}\rightarrow A_{\mu}+\delta A_{\mu}$, we obtain the linearized self-duality equation $D_{\mu}\delta A_{\nu}-D_{\nu}\delta A_{\mu}+\varepsilon_{\mu\nu\alpha\beta}D^{\alpha}\delta A^{\beta}=0\;.$ (2.44) The solutions of the above equation are called zero modes. Requiring the orthogonal gauge fixing condition111111This condition is equivalent to the Landau gauge, as $D_{\mu}A_{\mu}=\partial_{\mu}A_{\mu}$. It is important to note that one can promote $\partial_{\mu}$ to $D_{\mu}$ in this case, in order to show that $A_{\mu}$ and $\psi_{\mu}$ obey the same equations., $D_{\mu}A_{\mu}=0$, one gets $D_{\mu}(\delta A_{\mu})=0\;.$ (2.45) The Atiyah-Singer index theorem [37, 38] counts the number of solutions to eq. (2.44) and eq. (2.45). In Euclidean spacetime, for instance, the index theorem gives (see [39]) $\text{dim}(\mathcal{M})=4kN\;,$ (2.46) where the modes due to global gauge transformations of the group are included. 
Turning to the fermion zero modes, the $\chi$ equation of motion for $S_{W}$ gives $D_{\mu}\psi_{\nu}-D_{\nu}\psi_{\mu}+\varepsilon_{\mu\nu\alpha\beta}D^{\alpha}\psi^{\beta}=0\;,$ (2.47) and from the $\eta$ equation, $D_{\mu}\psi^{\mu}=0\;.$ (2.48) These are the same equations obtained for the gauge perturbation around an instanton in the orthogonal gauge fixing, so the number of $\psi$ zero modes is also given by $\text{dim}(\mathcal{M}_{k,N})$121212As Witten himself admits in his paper [8], “this relation between the fermion equations and the instanton moduli problem was the motivation for introducing precisely this collection of fermions”.. In order to get a non-vanishing partition function, Witten assumed that the moduli space consists of discrete, isolated instantons. Precisely, he assumed that the dimension of the moduli space vanishes131313Otherwise, a net violation of the $U(1)$ global symmetry of $S_{W}$ occurs, and $Z_{W}$ vanishes due to the fermion zero modes, see [8, 40]. The dimension of the instanton moduli space depends on the bundle, $E$, and the manifold, $M$. In the $SU(2)$ gauge theory, it can be written as $\text{dim}(\mathcal{M})=8k(E)-\frac{3}{2}(\chi(M)+\sigma(M))\;,$ (2.49) where $k(E)$ is the first Pontryagin (or winding) number of the bundle $E$, and $\chi(M)$ and $\sigma(M)$ are the Euler characteristic and signature of $M$ [38]. (For $M=R^{4}$, $\chi(R^{4})=\sigma(R^{4})=0$.) Thus one can choose a suitable $E$ and $M$ in order to get a vanishing dimension, $\text{dim}(\mathcal{M})=0$.. Expanding around an isolated instanton, in the weak coupling limit $g^{2}\rightarrow 0$, the action reduces to its quadratic terms, $S^{(2)}_{W}=\int_{M}d^{4}x\sqrt{g}\left(\Phi^{(b)}D_{B}\Phi^{(b)}+i\Psi^{(f)}D_{F}\Psi^{(f)}\right)\;,$ (2.50) where $\Phi^{(b)}\equiv\\{A,\phi,\bar{\phi}\\}$ are the bosonic fields, and $\Psi^{(f)}\equiv\\{\eta,\psi,\chi\\}$, the fermionic ones. 
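The dimension counts (2.46) and (2.49) are simple arithmetic; the sketch below (the function names are ours) tabulates both formulas and checks that they agree for $SU(2)$ on a manifold with vanishing $\chi$ and $\sigma$:

```python
def dim_moduli_su_n(k, n):
    # eq. (2.46): dim(M) = 4kN for SU(N) instantons on Euclidean space,
    # global gauge modes included
    return 4 * k * n

def dim_moduli_su2(k, euler_char, signature):
    # eq. (2.49): SU(2) gauge theory on a four-manifold M with
    # Euler characteristic chi(M) and signature sigma(M)
    return 8 * k - 3 * (euler_char + signature) / 2

# with chi(M) = sigma(M) = 0, the two formulas agree for N = 2: 8k modes
assert dim_moduli_su_n(1, 2) == dim_moduli_su2(1, 0, 0) == 8
```

This also makes concrete the remark in footnote 13: to arrange $\text{dim}(\mathcal{M})=0$ one must choose $E$ and $M$ such that $8k(E)=\frac{3}{2}(\chi(M)+\sigma(M))$.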
The Gaussian integral over all fields gives $Z_{W}|_{g^{2}\rightarrow 0}=\frac{\text{Pfaff}(D_{F})}{\sqrt{\det(D_{B})}}\;,$ (2.51) where $\text{Pfaff}(D_{F})$ is the Pfaffian of $D_{F}$, i.e., the square root of the determinant of $D_{F}$ up to a sign. The supersymmetry relates the eigenvalues of the operators $D_{B}$ and $D_{F}$. This relation is a standard result in instanton calculus [41], which yields $Z_{W}|_{g^{2}\rightarrow 0}=\pm\prod_{i}\frac{\lambda_{i}}{|\lambda_{i}|}\;,$ (2.52) with $i$ running over all non-zero eigenvalues of $D_{B}$ $(D_{F})$. Therefore, for the $k^{th}$ isolated instanton, $Z^{(k)}_{W}=(-1)^{n_{k}}$, where $n_{k}=0$ or $1$ according to the orientation convention of the moduli space (Donaldson proved the orientability of the moduli space, i.e., that the definition of the sign of $\text{Pfaff}(D_{F})$ is consistent, without global anomalies [4, 8]). In the end, summing over all isolated instantons, $Z_{W}|_{g^{2}\rightarrow 0}=\sum_{k}(-1)^{n_{k}}\;,$ (2.53) which is precisely one of the topological invariants for four-manifolds described by Donaldson. The other metric-independent observables are constructed in the context of eq. (2.7). These observables can be generated by exploiting the descent equations defined by the equivariant cohomology, i.e., the supersymmetry $\delta$-cohomology. For that, denoting by $U_{i}$ the global charge of the operator $\mathcal{O}_{i}$ (see footnote 10 on page 8), it must be understood that, for the observable $\prod_{i}\mathcal{O}_{i}$, $\text{dim}(\mathcal{M})=\sum_{i}U_{i}$141414In order to construct topological invariants, the net $U$ charge must equal the dimension of the moduli space, see [8, 17].. 
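The sign structure of (2.51)-(2.52) can be illustrated with a finite-dimensional toy model: for an antisymmetric matrix in canonical block form, the Pfaffian is the product of the block parameters $\lambda_{i}$, so the ratio $\text{Pfaff}/\sqrt{\det}$ collapses to a product of signs. A sketch with invented inputs:

```python
from functools import reduce
import numpy as np

def block_antisym(lams):
    # antisymmetric matrix in canonical form: 2x2 blocks [[0, l], [-l, 0]];
    # its Pfaffian is the product of the block parameters lams
    n = 2 * len(lams)
    D = np.zeros((n, n))
    for i, l in enumerate(lams):
        D[2*i, 2*i+1] = l
        D[2*i+1, 2*i] = -l
    return D

lams = [1.5, -2.0, 0.5]          # toy non-zero "eigenvalue" parameters
D = block_antisym(lams)
pfaffian = reduce(lambda a, b: a * b, lams)
ratio = pfaffian / np.sqrt(np.linalg.det(D))
# as in eq. (2.52), the ratio is a pure sign; here one negative lambda gives -1
assert np.isclose(ratio, -1.0)
```

Summing such signs over the isolated points of the moduli space is exactly the content of (2.53).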
The simplest $\delta$-invariant operator that does not depend explicitly on the metric and cannot be written as $\delta(X)=\\{\mathcal{Q},X\\}$ (due to the scaling dimensions) is $W_{0}(x)=\frac{1}{2}\text{Tr}\phi^{2}(x)\,,\quad U(W_{0})=4\;.$ (2.54) Although $W_{0}$ is not a $\delta$-exact operator, taking the derivative of $W_{0}$ with respect to the coordinates, we find $\frac{\partial}{\partial x_{\mu}}W_{0}=i\\{\mathcal{Q},\text{Tr}\phi\psi_{\mu}\\}\;,$ (2.55) which is $\delta$-exact. Using the exterior derivative, $d$, we can rewrite (2.55) as $dW_{0}=i\\{\mathcal{Q},W_{1}\\}\;,$ (2.56) where $W_{1}$ is the 1-form $\text{Tr}(\phi\psi_{\mu})dx^{\mu}$. A straightforward calculation gives $\displaystyle dW_{1}$ $\displaystyle=$ $\displaystyle i\\{\mathcal{Q},W_{2}\\}\;,\quad dW_{2}=i\\{\mathcal{Q},W_{3}\\}\;,$ (2.57) $\displaystyle dW_{3}$ $\displaystyle=$ $\displaystyle i\\{\mathcal{Q},W_{4}\\}\;,\quad dW_{4}=0\;,$ (2.58) with $\displaystyle W_{2}$ $\displaystyle=$ $\displaystyle\text{Tr}(\frac{1}{2}\psi\wedge\psi+i\phi\wedge F)\;,$ (2.59) $\displaystyle W_{3}$ $\displaystyle=$ $\displaystyle i\text{Tr}\psi\wedge F\;,$ (2.60) $\displaystyle W_{4}$ $\displaystyle=$ $\displaystyle-\frac{1}{2}\text{Tr}F\wedge F\;,$ (2.61) where “$\wedge$” is the wedge product, the total charge is $U=4-k$ for each $W_{k}$, and $\phi,\psi$, and $F$ are zero-, one-, and two-forms on $M$, respectively. $F$ is the field strength in the $p$-form formalism151515For the definitions and conventions concerning the p-form formalism used here, see Section 2.2.2., defined in eq. (2.74). Considering now the integral $I(\gamma)=\int_{\gamma}W_{k}\;,$ (2.62) where $\gamma$ is a $k$-dimensional homology cycle on $M$, we have $\\{\mathcal{Q},I\\}=\int_{\gamma}\\{\mathcal{Q},W_{k}\\}=i\int_{\gamma}dW_{k-1}=0\;.$ (2.63) This proves that $I(\gamma)$ is $\delta$-invariant and thus a candidate observable. 
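As a short consistency check of the first descent step (2.55), one can use only the transformations (2.31), the definition $\delta\mathcal{O}=-i\varepsilon\\{\mathcal{Q},\mathcal{O}\\}$, and cyclicity of the trace (which gives $\text{Tr}\,\phi[A_{\mu},\phi]=0$, so that $\text{Tr}\,\phi\,\partial_{\mu}\phi=\text{Tr}\,\phi D_{\mu}\phi$): $\frac{\partial}{\partial x_{\mu}}W_{0}=\text{Tr}\,\phi\,\partial_{\mu}\phi=\text{Tr}\,\phi D_{\mu}\phi\,,\qquad\delta\,\text{Tr}(\phi\psi_{\mu})=\text{Tr}(\phi\,\delta\psi_{\mu})=-\varepsilon\,\text{Tr}\,\phi D_{\mu}\phi\;,$ so that $i\\{\mathcal{Q},\text{Tr}\phi\psi_{\mu}\\}=-\frac{1}{\varepsilon}\,\delta\,\text{Tr}(\phi\psi_{\mu})=\text{Tr}\,\phi D_{\mu}\phi=\frac{\partial}{\partial x_{\mu}}W_{0}$, which is precisely (2.55). The higher steps (2.57)-(2.58) follow by the same mechanism.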
To ensure that $I(\gamma)$ depends only on the homology class of $\gamma$, we just have to prove that $I(\gamma)$ is BRST exact whenever $\gamma$ is a boundary, which can be immediately verified by taking $\gamma=\partial\beta$ and applying Stokes’ theorem, $I(\gamma)=\int_{\partial\beta}W_{k}=\int_{\beta}dW_{k}=i\\{\mathcal{Q},\int_{\beta}W_{k+1}\\}\;.$ (2.64) We conclude, from equations (2.63) and (2.64), that the $I(\gamma)$ are the global observables of the model, as their expectation values produce metric-independent quantities, i.e., topological invariants for four-manifolds. Finally, the general path integral representation of Donaldson invariants in Witten’s TQFT takes the form $Z(\gamma_{1},\cdots,\gamma_{r})=\int\mathcal{D}\Phi\left(\prod_{i}\int_{\gamma_{i}}W_{k_{i}}\right)e^{-S_{W}}=\langle\prod_{i}\int_{\gamma_{i}}W_{k_{i}}\rangle\;,$ (2.65) with moduli space dimension $\text{dim}(\mathcal{M})=\sum_{i}^{r}(4-k_{i})\;.$ (2.66) One of the beautiful results is the appearance of $W_{4}$ in the descent equations. Up to a sign, the observable $\int_{\gamma}W_{4}=-\frac{1}{2}\int_{\gamma}F\wedge F$ (2.67) is the Pontryagin action written in the formalism of $p$-forms. The Pontryagin action, a well-known topological invariant of four-manifolds, naturally appears as one of the Donaldson polynomials, with a trivial winding number in this case, since $U(W_{4})=0$, and consequently the dimension of the moduli space vanishes. ### 2.2 Baulieu-Singer off-shell approach Let us now turn to the main properties of the Baulieu-Singer approach for TQFTs [13], which is based on an off-shell BRST symmetry, built from the gauge fixing of an original action composed of topological invariants. #### 2.2.1 BRST symmetry in topological gauge theories The four-dimensional spacetime is assumed to be Euclidean and flat161616Throughout this work we consider flat Euclidean spacetime. Although the topological action is background independent, the gauge-fixing term entails the introduction of a background. 
Ultimately, background independence is recovered at the level of correlation functions due to BRST symmetry [13, 42, 43].. The non-Abelian topological action $S_{0}[A]$ in four-dimensional spacetime representing the topological invariants is the Pontryagin action171717It is worth mentioning that the action $S_{0}[A]$ could encompass a wide range of topological gauge theories. The Pontryagin action is the most common case because it can be defined for all semi-simple Lie groups. Nevertheless, other cases can also be considered. For instance, Gauss-Bonnet and Nieh-Yan topological gravities can be formulated for orthogonal groups [44]., $S_{0}[A]=\frac{1}{2}\int d^{4}x\,F^{a}_{\mu\nu}\widetilde{F}^{a}_{\mu\nu}\;,$ (2.68) which labels topologically inequivalent field configurations, as $S_{0}[A]=32\pi^{2}k$, in which $k$ is the topological charge known as the winding number. We must note that the Pontryagin action has two different gauge symmetries to be fixed, namely: (i) the gauge field symmetry, $\delta A_{\mu}^{a}=D_{\mu}^{ab}\omega^{b}+\alpha_{\mu}^{a}\;;$ (2.69) (ii) the topological parameter symmetry, $\delta\alpha_{\mu}^{a}=D_{\mu}^{ab}\lambda^{b}\;,$ (2.70) where $D_{\mu}^{ab}\equiv\delta^{ab}\partial_{\mu}-gf^{abc}A^{c}_{\mu}$ are the components of the covariant derivative in the adjoint representation of the Lie group $G$, $f^{abc}$ are the structure constants of $G$, while $\omega^{a}$, $\alpha^{a}_{\mu}$ and $\lambda^{a}$ are the infinitesimal $G$-valued gauge parameters. 
As a consequence of (2.69), the field strength also transforms as a gauge field181818The antisymmetrization index notation employed here means that, for a generic tensor, $S_{[\mu\nu]}=S_{\mu\nu}-S_{\nu\mu}$., $\delta F_{\mu\nu}^{a}=-gf^{abc}\omega^{b}F_{\mu\nu}^{c}+D_{[\mu}^{ab}\alpha_{\nu]}^{b}\;.$ (2.71) The first parameter ($\omega^{a}$) reflects the usual Yang-Mills symmetry of $S_{0}[A]$, whereas the second one ($\alpha^{a}_{\mu}$) is the topological shift associated with the fact that $S_{0}[A]$ is a topological invariant, i.e., invariant under continuous deformations. The third gauge parameter ($\lambda^{a}$) is due to an internal ambiguity present in the gauge transformation of the gauge field (2.69): the transformation of the gauge field is composed of two independent symmetries. If the space has a boundary, the parameter $\alpha^{a}_{\mu}(x)$ must vanish at this boundary but $\omega^{a}(x)$ need not, which explains the internal ambiguity described by (2.70), in which $\alpha^{a}_{\mu}(x)$ is absorbed into $\omega^{a}(x)$, and not the other way around [13]. Following the BRST quantization procedure, the gauge parameters present in the gauge transformations (2.69)-(2.71) are promoted to ghost fields: $\omega^{a}\rightarrow c^{a}$, $\alpha^{a}_{\mu}\rightarrow\psi^{a}_{\mu}$, and $\lambda^{a}\rightarrow\phi^{a}$; $c^{a}$ is the well-known Faddeev-Popov (FP) ghost; $\psi^{a}_{\mu}$ is a topological fermionic ghost; and $\phi^{a}$ is a bosonic ghost. 
The corresponding BRST transformations are $\displaystyle sA_{\mu}^{a}$ $\displaystyle=$ $\displaystyle- D_{\mu}^{ab}c^{b}+\psi^{a}_{\mu},$ $\displaystyle sc^{a}$ $\displaystyle=$ $\displaystyle\frac{g}{2}f^{abc}c^{b}c^{c}+\phi^{a},$ $\displaystyle s\psi_{\mu}^{a}$ $\displaystyle=$ $\displaystyle gf^{abc}c^{b}\psi^{c}_{\mu}+D_{\mu}^{ab}\phi^{b},$ $\displaystyle s\phi^{a}$ $\displaystyle=$ $\displaystyle gf^{abc}c^{b}\phi^{c},$ (2.72) from which one can easily check the nilpotency of the BRST operator, $s^{2}=0\;,$ (2.73) by applying the BRST operator $s$ twice on the fields. Naturally, $S_{0}[A]$ is invariant under the BRST transformations (2.72). The nilpotency of $s$ defines the cohomology of the theory, which allows for the gauge fixing of the Pontryagin action in a BRST-invariant fashion. Furthermore, such a property is related to the geometric structure of the off-shell BRST transformations in non-Abelian topological gauge theories. #### 2.2.2 Geometric interpretation In order to simplify equations in the following sections, we will again employ the formalism of differential forms. 
In this formalism, the fields $c$ and $\phi$ are 0-forms, $\psi$ is the 1-form $\psi_{\mu}dx^{\mu}$, and $F$, the following 2-form: $F=dA+A\wedge A=\frac{1}{2}F_{\mu\nu}dx^{\mu}\wedge dx^{\nu}\;,$ (2.74) where $A$ is the 1-form $A_{\mu}dx^{\mu}$, “$\wedge$” is the wedge product, which indicates that the tensor product is completely antisymmetric, and $d$ is the exterior derivative191919The exterior derivative operation in the space of smooth $p$-forms, $\Lambda_{p}$, $d:\Lambda_{p}\rightarrow\Lambda_{p+1}$, on a generic $p$-form $\omega_{p}$, $\omega_{p}=\omega_{i_{1},i_{2},\dots,i_{p}}dx^{i_{1}}\wedge dx^{i_{2}}\cdots\wedge dx^{i_{p}}\;,$ (2.75) is locally defined by $d\omega_{p}=\frac{\partial\omega_{i_{1},i_{2},\dots,i_{p}}}{\partial x^{j}}dx^{j}\wedge dx^{i_{1}}\wedge dx^{i_{2}}\cdots\wedge dx^{i_{p}}\;;$ (2.76) since $\omega_{p}$ is a $p$-form, $d\omega_{p}$ is a $(p+1)$-form. It follows that the exterior derivative is nilpotent, $d^{2}=0$, due to the antisymmetry of the indices. One assumes that $s$ anticommutes with $d$, $\\{s,d\\}=0$.. 
With this we can then write the BRST transformations in the form $\displaystyle sA$ $\displaystyle=$ $\displaystyle Dc+\psi\;,$ $\displaystyle sc$ $\displaystyle=$ $\displaystyle\frac{1}{2}[c,c]+\phi\;,$ $\displaystyle s\psi$ $\displaystyle=$ $\displaystyle D\phi+[c,\psi]\;,$ $\displaystyle s\phi$ $\displaystyle=$ $\displaystyle[c,\phi]\;.$ (2.77) The geometric meaning of the topological BRST transformations (2.77) is revealed from the definition of the extended exterior derivative, $\widetilde{d}$, as the sum of the ordinary exterior derivative with the BRST operator, $\widetilde{d}=d+s\;,$ (2.78) and the generalized connection $\widetilde{A}=A+c\;.$ (2.79) By direct inspection one sees that the BRST transformations can be written in terms of the generalized curvature202020The nature of $\phi$ as the “curvature” in the instanton moduli space direction is implicit in the BRST transformation of the FP ghost, which can be rewritten in the geometric form $sc+\frac{1}{2}[c,c]=\phi$. $\mathcal{F}=F+\psi+\phi\;,$ (2.80) such that $\mathcal{F}=\widetilde{d}\widetilde{A}+\frac{1}{2}[\widetilde{A},\widetilde{A}]\;,$ (2.81) with the Bianchi identity $\widetilde{D}\mathcal{F}=\widetilde{d}\mathcal{F}+[\widetilde{A},\mathcal{F}]=0\;.$ (2.82) Here, the space is graded as a sum of form degree and ghost number, in which the BRST operator is the exterior differential operator in the moduli space direction $\mathcal{A}/\mathcal{G}$, where gauge fields that differ by a gauge transformation are identified. The whole space is then $M\times\mathcal{A}/\mathcal{G}$, where $M$ is a four-dimensional manifold. According to the gauges worked out in this paper, $M$ will be a flat Euclidean space. In the definition (2.79) and the following equations we are adding quantities with different form degrees and ghost numbers as though they were of the same nature. Of course, this is only a formal bookkeeping device. 
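As an illustration of how the off-shell nilpotency (2.73) works in this language, one can check $s^{2}=0$ directly from (2.77), using only the graded Leibniz rule and the graded Jacobi identity: $s^{2}c=s\left(\frac{1}{2}[c,c]+\phi\right)=[sc,c]+s\phi=\frac{1}{2}[[c,c],c]+[\phi,c]+[c,\phi]=0\;,$ since $[[c,c],c]=0$ by the graded Jacobi identity and $[\phi,c]=-[c,\phi]$ for the even field $\phi$ and the odd ghost $c$. Similarly, $s^{2}\phi=[sc,\phi]-[c,s\phi]=\frac{1}{2}[[c,c],\phi]-[c,[c,\phi]]=0$, using $\frac{1}{2}[[c,c],\phi]=[c,[c,\phi]]$. No equations of motion are needed, in sharp contrast with the on-shell algebra (2.33) of the Witten formulation.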
We must read equations (2.81) and (2.82) as an expansion in form degrees and ghost numbers, in which elements of the same nature on both sides are to be compared. The relevant cohomology is defined by the cohomology of $M\times\mathcal{A}/\mathcal{G}$, ${\widetilde{d}}^{2}=0$, which is valid without requiring the equations of motion. Such a geometric structure reveals the off-shell BRST character of the BS approach212121For a detailed study on the geometric interpretation of the universal fiber bundle and its curvature, we suggest for instance [45, 21].. We will discuss in Section 2.2.5 how the universal curvature $\mathcal{F}$ generates the same global observables as the Witten theory, i.e., the Donaldson polynomials. #### 2.2.3 Doublet theorem and gauge fixing: Baulieu-Singer gauges Let us recall the doublet theorem [16], which will be indispensable to fix the gauge ambiguities without changing the physical content of the theory. Consider a theory that contains a pair of fields or sources forming a doublet, i.e., $\displaystyle\hat{\delta}\mathcal{X}_{i}$ $\displaystyle=$ $\displaystyle\mathcal{Y}_{i}\;,$ $\displaystyle\hat{\delta}\mathcal{Y}_{i}$ $\displaystyle=$ $\displaystyle 0\;,$ (2.83) where $i$ is a generic index, and $\hat{\delta}$ is a fermionic nilpotent operator. The field (source) $\mathcal{X}_{i}$ is assumed to be fermionic. As the operator $\hat{\delta}$ increases the ghost number by one unit by definition, if $\mathcal{X}_{i}$ is an anticommuting quantity, $\mathcal{Y}_{i}$ is a commuting one. The doublet structure of $(\mathcal{X}_{i},\mathcal{Y}_{i})$ in eq. (2.83) implies that such fields (or sources) belong to the trivial part of the cohomology of $\hat{\delta}$. The proof is as follows. 
First, we define the operators $\displaystyle\hat{N}$ $\displaystyle=$ $\displaystyle\int dx\left(\mathcal{X}_{i}\frac{\partial}{\partial\mathcal{X}_{i}}+\mathcal{Y}_{i}\frac{\partial}{\partial\mathcal{Y}_{i}}\right)\;,$ (2.84) $\displaystyle\hat{A}$ $\displaystyle=$ $\displaystyle\int dx\,\mathcal{X}_{i}\frac{\partial}{\partial\mathcal{Y}_{i}}\;,$ (2.85) $\displaystyle\hat{\delta}$ $\displaystyle=$ $\displaystyle\int dx\,\mathcal{Y}_{i}\frac{\partial}{\partial\mathcal{X}_{i}}\;,$ (2.86) which obey the commutation relations $\displaystyle\\{\hat{\delta},\hat{A}\\}$ $\displaystyle=$ $\displaystyle\hat{N}\;,$ (2.87) $\displaystyle\left[\hat{\delta},\hat{N}\right]$ $\displaystyle=$ $\displaystyle 0\;,$ (2.88) where $\hat{\delta}$ is nilpotent as it is fermionic, $\hat{\delta}^{2}=0$. The operator $\hat{N}$ is the counting operator. For $\bigtriangleup$ a polynomial in the fields, sources and parameters, the cohomology of the nilpotent operator $\hat{\delta}$ is given by the solutions of $\hat{\delta}\bigtriangleup=0\;,$ (2.89) that are not exact, i.e., that cannot be written in the form $\bigtriangleup=\hat{\delta}\Sigma\;.$ (2.90) The general expression of $\bigtriangleup$ is then $\bigtriangleup=\widetilde{\bigtriangleup}+\hat{\delta}\Sigma\;,$ (2.91) where $\widetilde{\bigtriangleup}$ belongs to the non-trivial part of the cohomology; in other words, it is closed, $\hat{\delta}\widetilde{\bigtriangleup}=0$, but not exact, $\widetilde{\bigtriangleup}\neq\hat{\delta}\widetilde{\Sigma}$. One can expand $\bigtriangleup$ in eigenvectors of $\hat{N}$, $\bigtriangleup=\sum_{n\geq 0}\bigtriangleup_{n}\;,$ (2.92) such that $\hat{N}\bigtriangleup_{n}=n\bigtriangleup_{n}$, where $n$ is the total number of $\mathcal{X}_{i}$ and $\mathcal{Y}_{i}$ in $\bigtriangleup_{n}$. 
Such an expansion is consistent as each $\bigtriangleup_{n}$ is a polynomial in $\mathcal{X}_{i}$ and $\mathcal{Y}_{i}$, and $\hat{\delta}\bigtriangleup_{n}=0$ for all $n\geq 1$, according to (2.83) and the commuting properties of $\mathcal{X}_{i}$ and $\mathcal{Y}_{i}$. Finally, rewriting (2.92) as $\bigtriangleup=\bigtriangleup_{0}+\sum_{n\geq 1}\frac{1}{n}\hat{N}\bigtriangleup_{n}\;,$ (2.93) and then, using the commutation relation (2.87), we get $\bigtriangleup=\bigtriangleup_{0}+\hat{\delta}\left(\sum_{n\geq 1}\frac{1}{n}\hat{A}\bigtriangleup_{n}\right)\;,$ (2.94) which shows that all terms containing at least one field (source) of the doublet never enter the non-trivial part of the cohomology of $\hat{\delta}$, and are thus non-physical; for a more complete analysis, see for instance [16, 46]. In order to fix the three gauge symmetries of the non-Abelian topological theory (2.69)-(2.71), we introduce the following three BRST doublets: $\displaystyle s\bar{c}^{a}$ $\displaystyle=$ $\displaystyle b^{a}\;,\;\;\;\;\;\;\;\;sb^{a}\;=\;0\;,$ $\displaystyle s\bar{\chi}^{a}_{\mu\nu}$ $\displaystyle=$ $\displaystyle B_{\mu\nu}^{a}\;,\;\;sB_{\mu\nu}^{a}\;=\;0\;,$ $\displaystyle s\bar{\phi}^{a}$ $\displaystyle=$ $\displaystyle\bar{\eta}^{a}\;,\;\;\;\;\;\;\;s\bar{\eta}^{a}\;=\;0\;,$ (2.95) where $\bar{\chi}^{a}_{\mu\nu}$ and $B_{\mu\nu}^{a}$ are (anti-)self-dual fields according to the (negative) positive sign, see (2.98) below. The Lagrange multiplier fields $b^{a}$, $B^{a}_{\mu\nu}$ and $\bar{\eta}^{a}$ have, respectively, ghost numbers $0$, $0$, and $-1$; while the antighost fields $\bar{c}^{a}$, $\bar{\chi}^{a}_{\mu\nu}$ and $\bar{\phi}^{a}$ have ghost numbers $-1$, $-1$ and $-2$. (For completeness and further use, the quantum numbers of all fields are displayed in Table 1.) 
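A minimal illustration of the homotopy (2.94): for a single doublet $\hat{\delta}\mathcal{X}=\mathcal{Y}$, $\hat{\delta}\mathcal{Y}=0$, with $\mathcal{X}$ anticommuting, the closed monomial $\bigtriangleup_{n}=\mathcal{Y}^{n}$ ($n\geq 1$) is exact, $\mathcal{Y}^{n}=\hat{\delta}\left(\frac{1}{n}\hat{A}\,\mathcal{Y}^{n}\right)=\hat{\delta}\left(\mathcal{X}\mathcal{Y}^{n-1}\right)\;,$ so only the component $\bigtriangleup_{0}$, which contains neither $\mathcal{X}$ nor $\mathcal{Y}$, can survive in the cohomology of $\hat{\delta}$, exactly as the theorem asserts.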
Field | $A$ | $\psi$ | $c$ | $\phi$ | $\bar{c}$ | $b$ | $\bar{\phi}$ | $\bar{\eta}$ | $\bar{\chi}$ | $B$
---|---|---|---|---|---|---|---|---|---|---
Dimension | 1 | 1 | 0 | 0 | 2 | 2 | 2 | 2 | 2 | 2
Ghost number | 0 | 1 | 1 | 2 | -1 | 0 | -2 | -1 | -1 | 0

Table 1: Quantum numbers of the fields. Working in the Baulieu-Singer gauges amounts to imposing the constraints [13] $\displaystyle\partial_{\mu}A_{\mu}^{a}$ $\displaystyle=$ $\displaystyle-\frac{1}{2}\rho_{1}b^{a}\;,$ (2.96) $\displaystyle D_{\mu}^{ab}\psi_{\mu}^{b}$ $\displaystyle=$ $\displaystyle 0\;,$ (2.97) $\displaystyle F_{\mu\nu}^{a}\pm\widetilde{F}_{\mu\nu}^{a}$ $\displaystyle=$ $\displaystyle-\frac{1}{2}\rho_{2}B^{a}_{\mu\nu}\;,$ (2.98) where $\rho_{1}$ and $\rho_{2}$ are real gauge parameters. In a few words, besides the gauge fixing of the topological ghost (2.97), two extra gauge conditions are required because the gauge field possesses two independent gauge symmetries. In this sense, condition (2.96) fixes the usual Yang-Mills symmetry $\delta A^{a}_{\mu}=D_{\mu}^{ab}\omega^{b}$, and the second one, (2.98), the topological shift $\delta A^{a}_{\mu}=\alpha^{a}_{\mu}$. The (anti-)self-dual condition for the field strength (in the limit $\rho_{2}\rightarrow 0$) is convenient for identifying the well-known observables of topological theories for four-manifolds (see [17]) given by the Donaldson invariants [2, 3], which are described in terms of instantons. 
The partition function of the topological action in the BS gauges (2.96)-(2.98) takes the form $Z_{BS}=\int\mathcal{D}A_{\mu}\mathcal{D}c\mathcal{D}\bar{c}\mathcal{D}b\mathcal{D}\psi_{\mu}\mathcal{D}\bar{\chi}_{\mu\nu}\mathcal{D}B_{\mu\nu}\mathcal{D}\phi\mathcal{D}\bar{\phi}\mathcal{D}\bar{\eta}\,e^{-S_{BS}}\;,$ (2.99) where $S_{BS}=S_{0}[A]+S_{gf}^{BS}\;,$ (2.100) with $S_{gf}^{BS}$ the gauge-fixing action, which belongs to the trivial part of the cohomology, given by $\displaystyle S_{gf}^{BS}$ $\displaystyle=$ $\displaystyle s\,\text{Tr}\int d^{4}x\left[\bar{\chi}_{\mu\nu}\left(F_{\mu\nu}\pm\widetilde{F}_{\mu\nu}+\frac{1}{2}\rho_{2}B_{\mu\nu}\right)+\bar{\phi}D_{\mu}\psi_{\mu}+\bar{c}\left(\partial_{\mu}A_{\mu}-\frac{1}{2}\rho_{1}b\right)\right]$ (2.101) $\displaystyle=$ $\displaystyle\text{Tr}\int d^{4}x\left[B_{\mu\nu}\left(F_{\mu\nu}\pm\widetilde{F}_{\mu\nu}+\frac{1}{2}\rho_{2}B_{\mu\nu}\right)+\bar{\chi}_{\mu\nu}\left(D_{[\mu}\psi_{\nu]}\pm\frac{1}{2}\varepsilon_{\mu\nu\alpha\beta}D_{[\alpha}\psi_{\beta]}\right)\right.$ $\displaystyle-$ $\displaystyle\left.\bar{\chi}_{\mu\nu}\left[c,F_{\mu\nu}\pm\widetilde{F}_{\mu\nu}\right]+\bar{\eta}D_{\mu}\psi_{\mu}+\bar{\phi}\left[\psi_{\mu},\psi_{\mu}\right]+\bar{\phi}D_{\mu}D_{\mu}\phi-b\left(\partial_{\mu}A_{\mu}-\frac{1}{2}\rho_{1}b\right)\right.$ $\displaystyle-$ $\displaystyle\left.\bar{c}\partial_{\mu}D_{\mu}c-\bar{c}\partial_{\mu}\psi_{\mu}\right]\;.$ A key observation is that, for $\rho_{1}=\rho_{2}=1$, one can eliminate the topological term $S_{0}[A]$, i.e., the Pontryagin action, by integrating out the field $B_{\mu\nu}$, such that $\text{Tr}\\{B_{\mu\nu}\left(F_{\mu\nu}+\widetilde{F}_{\mu\nu}\right)+\frac{1}{2}B_{\mu\nu}B_{\mu\nu}\\}\rightarrow\text{Tr}\\{F_{\mu\nu}F_{\mu\nu}+F_{\mu\nu}\widetilde{F}_{\mu\nu}\\}\;.$ (2.102) In this case we obtain a classical topological action which is equivalent to a Yang-Mills action plus ghost interactions. 
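The elimination of the Pontryagin term in (2.102) is just a completion of squares in the Gaussian $B_{\mu\nu}$ integral. Schematically, suppressing indices and overall Euclidean sign conventions, for $\rho_{2}=1$ one has $\text{Tr}\left\\{B(F+\widetilde{F})+\frac{1}{2}B^{2}\right\\}=\frac{1}{2}\text{Tr}\left[B+(F+\widetilde{F})\right]^{2}-\frac{1}{2}\text{Tr}(F+\widetilde{F})^{2}\;,$ and after shifting $B$ the remaining term is $-\frac{1}{2}\text{Tr}(F+\widetilde{F})^{2}=-\text{Tr}(FF)-\text{Tr}(F\widetilde{F})$, where $\text{Tr}\,\widetilde{F}\widetilde{F}=\text{Tr}\,FF$ was used. The $F\widetilde{F}$ piece combines with $S_{0}[A]$ of (2.68), so that the topological term drops out, leaving a Yang-Mills-type $\text{Tr}\,FF$ term plus the ghost sector, as stated above.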
Such an action, however, does not produce local observables, as the cohomology of the theory remains the same, as we will discuss in more detail in Section 2.2.5. Another important property is that the Green functions of local observables in (2.99) do not depend on the choice of the background metric [13]. Let $S_{BS}^{g}$ be an action with metric choice $g_{\mu\nu}$, and $S_{BS}^{g+\delta g}$ the same action up to the change of $g_{\mu\nu}$ into $g_{\mu\nu}+\delta g_{\mu\nu}$. As the only terms depending on the metric belong to the trivial part of the cohomology, we conclude immediately that $S_{BS}^{g}$ and $S_{BS}^{g+\delta g}$ only differ by a BRST-exact term, $S_{BS}^{g}-S_{BS}^{g+\delta g}=s\int d^{4}x\bigtriangleup^{(-1)}\;,$ (2.103) where $\bigtriangleup^{(-1)}$ is a polynomial in the fields with ghost number $-1$. This means that the expectation values of local operators are the same whether computed with a background metric $g_{\mu\nu}$ or $g_{\mu\nu}+\delta g_{\mu\nu}$, $\frac{\delta}{\delta g_{\mu\nu}}\langle\prod_{p}\mathcal{O}_{\alpha_{p}}(\phi_{i})\rangle=0\;,$ (2.104) where $\mathcal{O}_{\alpha_{p}}(\phi_{i})$ are functional operators of the quantum fields $\phi_{i}(x)$, see eq. (2.6). An anomaly in the topological BRST symmetry would break the above equation. However, there is no $4$-form with ghost number 1, $\bigtriangleup^{(1)}_{4-\text{form}}$, defined modulo $s$- and $d$-exact terms, which obeys (cf. [13]) $s\bigtriangleup^{(1)}_{4-\text{form}}+d\bigtriangleup^{(2)}_{3-\text{form}}=0\;.$ (2.105) Therefore, radiative corrections which could break the topological property (2.104) at the quantum level are not expected. The formal proof of the absence of gauge anomalies to all orders in the topological BS theory is achieved by employing the isomorphism described in [47, 22]. 
#### 2.2.4 Absence of gauge anomalies

The proof of the absence of gauge anomalies for the Slavnov-Taylor identity, $\mathcal{S}(S)=0\;,$ (2.106) consists in proving that the cohomology of $\mathcal{S}$ is empty. In the equation above, $S$ is the classical action for a given gauge choice, and $\mathcal{S}=\int d^{4}x\,(s\Phi^{I})\frac{\delta}{\delta\Phi^{I}}\;,$ (2.107) where $\Phi^{I}$ represents all fields. As $\mathcal{S}$ is a Ward identity, in the absence of anomalies the symmetry (2.106) is also valid at the quantum level, i.e., $\mathcal{S}(\Gamma)=0$, where $\Gamma$ is the quantum action in the loop expansion. In eq. (2.107), $s\Phi^{I}$ represents the BRST transformation of each field $\Phi^{I}$. The fields $\bar{c}$, $b$, $\bar{\chi}_{\mu\nu}$, $B_{\mu\nu}$, $\bar{\phi}$ and $\bar{\eta}$ transform as doublets, cf. eq. (2.2.3). Changing the variables according to the redefinitions $\displaystyle\psi$ $\displaystyle\rightarrow$ $\displaystyle\psi^{\prime}=\psi-Dc\;,$ $\displaystyle\phi$ $\displaystyle\rightarrow$ $\displaystyle\phi^{\prime}=\phi-\frac{1}{2}[c,c]\;,$ (2.108) the BRST transformations (2.2.2) are reduced to the doublet transformations $\displaystyle sA$ $\displaystyle=$ $\displaystyle\psi^{\prime}\;,$ $\displaystyle s\psi^{\prime}$ $\displaystyle=$ $\displaystyle 0\;,$ $\displaystyle sc$ $\displaystyle=$ $\displaystyle\phi^{\prime}\;,$ $\displaystyle s\phi^{\prime}$ $\displaystyle=$ $\displaystyle 0\;.$ This defines a reduced transformation in which the non-linear part of the BRST transformations entering the Slavnov-Taylor identity has been eliminated. The complete transformation in this space is given by the reduced operator $\mathcal{S}_{doublet}=\int d^{4}x\,(s{\Phi^{\prime}}^{I})\frac{\delta}{\delta{\Phi^{\prime}}^{I}}\;,$ (2.110) where ${\Phi^{\prime}}=\\{A,\psi^{\prime},c,\phi^{\prime},\bar{c},b,\bar{\chi}_{\mu\nu},B_{\mu\nu},\bar{\phi},\eta\\}$, which is composed of five doublets.
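Note that the doublet structure of $(A,\psi^{\prime})$ and $(c,\phi^{\prime})$ is an immediate consequence of the nilpotency of $s$: once the redefinitions (2.108) give $sA=\psi^{\prime}$ and $sc=\phi^{\prime}$, no further computation is needed, since

```latex
s\psi^{\prime} \;=\; s(sA) \;=\; s^{2}A \;=\; 0\,,
\qquad
s\phi^{\prime} \;=\; s(sc) \;=\; s^{2}c \;=\; 0\,.
```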
This means that $\mathcal{S}_{doublet}$ has vanishing cohomology ($H$), $H(\mathcal{S}_{doublet})=\varnothing\;,$ (2.111) in other words, any polynomial of the fields $\Phi^{\prime}$, $\bigtriangleup(\Phi^{\prime})$, that satisfies $\mathcal{S}_{doublet}(\bigtriangleup(\Phi^{\prime}))=0\;,$ (2.112) belongs to the trivial part of the cohomology of $\mathcal{S}_{doublet}$ (see the doublet theorem in the previous section). The crucial point here is the fact that the cohomology of $\mathcal{S}$ in the space of local integrated functionals in the fields and sources is isomorphic to a subspace of $H(\mathcal{S}_{doublet})$. Consequently, $\mathcal{S}$ also has vanishing cohomology [47, 29, 22], $H(\mathcal{S})=\varnothing\;.$ (2.113) The result (2.113) shows that there is no room for an anomaly in the Slavnov-Taylor identity (2.106). All counterterms at the quantum level will belong to the trivial part of the cohomology, and the condition (2.105) for the existence of an anomaly capable of breaking the topological property (2.104) never occurs. Due to the algebraic structure of the theory, eq. (2.113) proves that all Ward identities are free of gauge anomalies, cf. [22]. As a consequence of this result, the background metric independence is valid to all orders in perturbation theory. The second, no less important, point is the conclusion that the BS theory has no local observables. Due to its vanishing cohomology (2.113), all BRST-invariant quantities must belong to the non-physical (or trivial) part of the cohomology of $s$, and the only possible observables are the global ones, i.e., topological invariants of four-manifolds. Such observables are characterized by the cohomology of $s$ [48, 29], in which the observables are globally defined, in agreement with the supersymmetric formulation of J. H. Horne [49].
A simple way to identify these observables is to study the cohomology in the extended space $M\times\mathcal{A}/\mathcal{G}$, where the metric-independent observables, known as Chern classes, are constructed in terms of the universal curvature $\mathcal{F}$ (2.80). The Donaldson polynomials are naturally recovered, characterized by the so-called equivariant cohomology, which relates the BS approach to Witten's theory at the level of observables.

#### 2.2.5 Equivariant cohomology and global observables

Witten's topological theory is constructed without fixing its remaining ordinary Yang-Mills gauge symmetry. The theory is developed in the instanton moduli space $\mathcal{A}/\mathcal{G}$. A generic observable of his theory, $\mathcal{O}^{(W)}_{\alpha_{i}}$, is naturally gauge invariant under Yang-Mills gauge transformations, $s_{YM}\mathcal{O}^{(W)}_{\alpha_{i}}=0\;,$ (2.114) where $s_{YM}$ is the nilpotent BRST operator related to the ordinary Yang-Mills symmetry, i.e., without including the topological shift, namely, $\displaystyle s_{YM}A_{\mu}$ $\displaystyle=$ $\displaystyle D_{\mu}c\;,$ $\displaystyle s_{YM}\Phi_{adj}$ $\displaystyle=$ $\displaystyle[c,\Phi_{adj}]\;,$ (2.115) where $\Phi_{adj}$ is a generic field in the adjoint representation. We conclude that we can add an ordinary Yang-Mills gauge transformation (in the $\mathcal{A}/\mathcal{G}$ direction) to Witten's fermionic symmetry based on the “topological shift” $\delta A_{\mu}\sim\psi_{\mu}$, $\delta\rightarrow\delta_{eq}=\delta+s_{YM}\;,$ (2.116) in such a way that the descent equations for $\delta\sim\\{\mathcal{Q},\,\cdot\,\\}$ will remain the same, see (2.34) and (2.57)-(2.61). The operator $\delta_{eq}$ is nilpotent when acting on quantities that are gauge invariant under YM transformations, thus defining a cohomology in a space where fields differing by a Yang-Mills gauge transformation are identified, known as equivariant cohomology.
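The claimed nilpotency of $\delta_{eq}$ on Yang-Mills-invariant quantities can be sketched with a standard equivariant-cohomology argument (precise signs and factors depend on the conventions adopted for $\delta$):

```latex
\delta_{eq}^{2}
 \;=\; \delta^{2} + \{\delta, s_{YM}\} + s_{YM}^{2}
 \;=\; \delta^{2} + \{\delta, s_{YM}\}\,,
\qquad s_{YM}^{2}=0\,.
```

Since, as in Witten's theory, $\delta_{eq}^{2}$ closes on an ordinary Yang-Mills gauge transformation with field-dependent parameter $\phi$, one finds $\delta_{eq}^{2}\,\mathcal{O}=\delta^{gauge}_{\phi}\,\mathcal{O}=0$ for any $\mathcal{O}$ with $s_{YM}\mathcal{O}=0$, i.e., $\delta_{eq}$ is nilpotent precisely on the space of Yang-Mills-invariant quantities.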
Such a property indicates that there is a link between the Witten theory and the BS approach, in which the BRST operator, $s$, is naturally defined taking into account the topological shift and the ordinary Yang-Mills transformation in a single formalism. To prove the link between both approaches, we must remember that the universal curvature in the space $M\times\mathcal{A}/\mathcal{G}$ is given by the sum $\mathcal{F}=F+\psi+\phi$. The difference between the on-shell BRST operator, $s$, and Witten's fermionic symmetry, $\delta$, for $\mathbb{X}=(F,\psi,\phi)$ is of the form $s\mathbb{X}=\delta\mathbb{X}+[\mathbb{X},c]\;,$ (2.117) in other words, in the space of the fields $(F,\psi,\phi)$, $s$ and $\delta$ differ by an ordinary Yang-Mills transformation, as $(F,\psi,\phi)$ transform in the adjoint representation of the gauge group. These fields are the only ones we need to obtain the Donaldson polynomials as the observables of the BS theory, since in the space $M\times\mathcal{A}/\mathcal{G}$ they are constructed in terms of $\mathcal{F}$. This allows for identifying the equivariant operator with the BRST one, $\delta_{eq}\equiv s$, according to the construction of the observables in the Witten and BS theories, respectively. To understand the above statement, we must invoke the $n$'th Chern class, $\widetilde{\mathcal{W}}_{n}$, defined in terms of the universal curvature by $\widetilde{\mathcal{W}}_{n}=\text{Tr}\,(\underbrace{\mathcal{F}\wedge\mathcal{F}\wedge\cdots\wedge\mathcal{F}}_{\mbox{n times}})$ (2.118) where $n=\\{1,2,3,\cdots\\}$ is the number of wedge products. $\widetilde{\mathcal{W}}_{n}$ represents the most general observables of the BS theory. (It is not possible to construct topological observables using the Hodge product, as it is metric dependent. For this reason we never obtain Yang-Mills terms of the type $\\{\text{Tr}(F_{\mu\nu}F^{\mu\nu}),\text{Tr}(F_{\mu\nu}F^{\nu\sigma}F^{\mu}{}_{\sigma}),\cdots\\}$, whose internal products are built with metric tensors instead of Levi-Civita tensors. Moreover, the Wilson loop $W_{P}^{(C)}=\text{Tr}\\{\mathcal{P}e^{i\oint_{C}A_{\mu}dx_{\mu}}\\}\;,$ (2.119) is not an observable in the non-Abelian topological BS case, as it is not gauge-invariant under the topological shift symmetry. In any case, it does not make sense to discuss confinement in the BS theory, as it is not confining at any energy scale. Hence, the only possibilities for topological invariants are the wedge products in $\widetilde{\mathcal{W}}_{n}$.) The Weyl theorem [21] ensures that $\widetilde{\mathcal{W}}_{n}$ is closed with respect to the extended differential operator $\widetilde{d}=d+s$ [13, 50], i.e., $\widetilde{d}\,\widetilde{\mathcal{W}}_{n}=0\;.$ (2.120) If we choose the first Chern class $\widetilde{\mathcal{W}}_{1}=\text{Tr}\,(\mathcal{F}\wedge\mathcal{F})\;,$ (2.121) the expansion in ghost numbers of equation (2.120) yields $\displaystyle s\text{Tr}\,(F\wedge F)$ $\displaystyle=$ $\displaystyle d\text{Tr}\,(-2\psi\wedge F)\;,$ (2.122) $\displaystyle s\text{Tr}\,(\psi\wedge F)$ $\displaystyle=$ $\displaystyle d\text{Tr}\,(-\frac{1}{2}\psi\wedge\psi-\phi F)\;,$ (2.123) $\displaystyle s\text{Tr}\,(\psi\wedge\psi+2\phi F)$ $\displaystyle=$ $\displaystyle d\text{Tr}\,(2\psi\phi)\;,$ (2.124) $\displaystyle s\text{Tr}\,(\psi\phi)$ $\displaystyle=$ $\displaystyle d\text{Tr}\,(-\frac{1}{2}\phi\phi)\;,$ (2.125) $\displaystyle s\text{Tr}\,(\phi\phi)$ $\displaystyle=$ $\displaystyle 0\;,$ (2.126) which are the same descent equations obtained in (2.57)-(2.61) following Witten's analysis, upon replacing $\delta$ (or $\delta_{eq}$) by $s$, proving that the Baulieu-Singer and Witten topological theories possess the same observables, given by the Donaldson invariants (2.65).
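The descent equations (2.122)-(2.126) can be read off directly from the ghost-number expansion of $\widetilde{\mathcal{W}}_{1}$. With $\mathcal{F}=F+\psi+\phi$, where $(F,\psi,\phi)$ carry (form degree, ghost number) $(2,0)$, $(1,1)$ and $(0,2)$ respectively, one has

```latex
\text{Tr}\,(\mathcal{F}\wedge\mathcal{F})
 = \underbrace{\text{Tr}(F\wedge F)}_{g=0}
 + \underbrace{2\,\text{Tr}(\psi\wedge F)}_{g=1}
 + \underbrace{\text{Tr}(\psi\wedge\psi+2\phi F)}_{g=2}
 + \underbrace{2\,\text{Tr}(\psi\phi)}_{g=3}
 + \underbrace{\text{Tr}(\phi\phi)}_{g=4}\,,
```

where each piece of ghost number $g$ has form degree $4-g$. Imposing $(d+s)\widetilde{\mathcal{W}}_{1}=0$ and collecting contributions of fixed ghost number and form degree gives, level by level, $s\,[\,g\text{-piece}\,]+d\,[\,(g{+}1)\text{-piece}\,]=0$, which reproduces the descent equations up to convention-dependent signs.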
It should not be surprising that the observables in the BS approach are naturally invariant under the ordinary Yang-Mills symmetry: the $n$'th Chern class (2.118) is itself Yang-Mills invariant, since $\mathcal{F}$ transforms in the adjoint representation of the gauge group. Equation (2.120) provides a powerful tool to obtain Donaldson polynomials for any ghost number. One must note that we do not have to worry about independence from the Faddeev-Popov ghosts when constructing the observables in the BS approach. Although the gauge-fixed BS action has FP ghosts due to the gauge fixing of the Yang-Mills ambiguity, the $(c,\bar{c})$ independence of $\widetilde{\mathcal{W}}_{n}$ is a direct consequence of the fact that the universal curvature of the space $M\times\mathcal{A}/\mathcal{G}$ does not depend on FP ghosts, but only on $F$ and the ghosts $\psi$ and $\phi$. In the weak coupling limit of the twisted $N=2$ theory, the observables of both theories are undoubtedly the same: the topological Donaldson invariants [21, 22, 23]. We might ask whether the quantum behaviors are also compatible, since the BS and Witten actions do not differ by a BRST-exact term, $S_{BS}-S_{W}=\Sigma_{\mathcal{G}}\neq s(\cdots)\;.$ (2.127) The relation above does not depend on the gauge choice. Consequently, we cannot say, in principle, that the BS and Witten partition functions are equivalent at the quantum level, since $Z_{BS}=\int\mathcal{D}\Phi e^{-S_{BS}}=\int\mathcal{D}\Phi e^{-S_{W}-\Sigma_{\mathcal{G}}}\;,$ (2.128) wherein $\Sigma_{\mathcal{G}}$ is not $s$-exact. At first sight, $Z_{BS}\neq Z_{W}=\int\mathcal{D}\Phi e^{-S_{W}}$. The fact that $\Sigma_{\mathcal{G}}\neq s(\cdots)$ opens the possibility for the two theories to have different quantum properties. The one-loop exactness of the twisted $N=2$ SYM $\beta$-function, for instance, is a well-known result in the literature [34].
We will now analyze the Ward identities of the BS theory in the self-dual Landau gauges, in order to compare the quantum properties of the DW and BS theories.

## 3 Quantum properties of BS theory in the self-dual Landau gauges

In this section we will summarize the quantum properties of the BS theory in the self-dual Landau (SDL) gauges (for simplicity, throughout the text we will refer to the Baulieu-Singer theory in the self-dual Landau gauges as the self-dual BS theory). Extra details can be found in [27, 51, 24, 26, 25].

### 3.1 Absence of radiative corrections

Working in the self-dual Landau gauges amounts to considering the constraints [27] $\displaystyle\partial_{\mu}A_{\mu}^{a}$ $\displaystyle=$ $\displaystyle 0\;,$ (3.1) $\displaystyle\partial_{\mu}\psi_{\mu}^{a}$ $\displaystyle=$ $\displaystyle 0\;,$ (3.2) $\displaystyle F_{\mu\nu}^{a}\pm\widetilde{F}_{\mu\nu}^{a}$ $\displaystyle=$ $\displaystyle 0\;.$ (3.3) Through the introduction of the three BRST doublets described in eq. (2.2.3), the complete gauge-fixed topological action in the SDL gauges takes the form $S[\Phi]=S_{0}[A]+S_{gf}[\Phi]\;,$ (3.4) with $S_{0}[A]$ standing for the Pontryagin action, and $\displaystyle S_{gf}\left[\Phi\right]$ $\displaystyle=$ $\displaystyle s\int d^{4}z\left[\bar{c}^{a}\partial_{\mu}A_{\mu}^{a}+\frac{1}{2}\bar{\chi}^{a}_{\mu\nu}\left(F_{\mu\nu}^{a}\pm\widetilde{F}_{\mu\nu}^{a}\right)+\bar{\phi}^{a}\partial_{\mu}\psi^{a}_{\mu}\right]$ (3.5) $\displaystyle=$ $\displaystyle\int d^{4}z\left[b^{a}\partial_{\mu}A_{\mu}^{a}+\frac{1}{2}B^{a}_{\mu\nu}\left(F_{\mu\nu}^{a}\pm\widetilde{F}_{\mu\nu}^{a}\right)+\left(\bar{\eta}^{a}-\bar{c}^{a}\right)\partial_{\mu}\psi^{a}_{\mu}+\bar{c}^{a}\partial_{\mu}D_{\mu}^{ab}c^{b}+\right.$ $\displaystyle-$
$\displaystyle\left.\frac{1}{2}gf^{abc}\bar{\chi}^{a}_{\mu\nu}c^{b}\left(F_{\mu\nu}^{c}\pm\widetilde{F}_{\mu\nu}^{c}\right)-\bar{\chi}^{a}_{\mu\nu}\left(\delta_{\mu\alpha}\delta_{\nu\beta}\pm\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}\right)D_{\alpha}^{ab}\psi_{\beta}^{b}+\bar{\phi}^{a}\partial_{\mu}D_{\mu}^{ab}\phi^{b}+\right.$ $\displaystyle+$ $\displaystyle\left.gf^{abc}\bar{\phi}^{a}\partial_{\mu}\left(c^{b}\psi^{c}_{\mu}\right)\right]\;.$ This action possesses a rich set of symmetries, see [24] and Appendix A. In order to control the non-linearity of the Slavnov-Taylor identity (Equation (A.1)) and the bosonic symmetry $\mathcal{T}$ (Equation (A.13)), we have to introduce external sources given by the following three BRST doublets [27] $\displaystyle s\tau^{a}_{\mu}$ $\displaystyle=$ $\displaystyle\Omega_{\mu}^{a}\;,\;\;\;\;\;\;\;\;s\Omega_{\mu}^{a}\;=\;0\;,$ $\displaystyle sE^{a}$ $\displaystyle=$ $\displaystyle L^{a}\;,\;\;\;\;\;\;\;\,\;sL^{a}\;=\;0\;,$ $\displaystyle s\Lambda^{a}_{\mu\nu}$ $\displaystyle=$ $\displaystyle K^{a}_{\mu\nu}\;,\quad\,\,\,\,\,sK^{a}_{\mu\nu}=0\;.$ (3.6) The respective external action, written as a BRST-exact contribution preserving the physical content of the theory, takes the form $\displaystyle S_{ext}$ $\displaystyle=$ $\displaystyle s\int d^{4}z\left(\tau_{\mu}^{a}D_{\mu}^{ab}c^{b}+\frac{g}{2}f^{abc}E^{a}c^{b}c^{c}+gf^{abc}\Lambda^{a}_{\mu\nu}c^{b}\bar{\chi}^{c}_{\mu\nu}\right)$ (3.7) $\displaystyle=$ $\displaystyle\int d^{4}z\left[\Omega_{\mu}^{a}D_{\mu}^{ab}c^{b}+\frac{g}{2}f^{abc}L^{a}c^{b}c^{c}+gf^{abc}K^{a}_{\mu\nu}c^{b}\bar{\chi}^{c}_{\mu\nu}+\tau^{a}_{\mu}\left(D_{\mu}^{ab}\phi^{b}+gf^{abc}c^{b}\psi_{\mu}^{c}\right)\right.$ $\displaystyle+$ $\displaystyle\left.gf^{abc}E^{a}c^{b}\phi^{c}+gf^{abc}\Lambda^{a}_{\mu\nu}c^{b}B^{c}_{\mu\nu}-gf^{abc}\Lambda^{a}_{\mu\nu}\phi^{b}\bar{\chi}^{c}_{\mu\nu}\right.$ $\displaystyle-$
$\displaystyle\left.\frac{g^{2}}{2}f^{abc}f^{bde}\Lambda^{a}_{\mu\nu}\bar{\chi}^{c}_{\mu\nu}c^{d}c^{e}\right]\;,$ with the corresponding quantum numbers of the external sources displayed in Table 2 below. Therefore, the full classical action to be quantized is $\Sigma[\Phi]=S_{0}[A]+S_{gf}[\Phi]+S_{ext}[\Phi]\;.$ (3.8) The introduction of the external action does not break the original symmetries, and the physical limit is obtained by setting the external sources to zero [16].

Source | $\tau$ | $\Omega$ | $E$ | $L$ | $\Lambda$ | $K$
---|---|---|---|---|---|---
Dim | 3 | 3 | 4 | 4 | 2 | 2
Ghost no | -2 | -1 | -3 | -2 | -1 | 0

Table 2: Quantum numbers of the external sources.

One of these symmetries is of particular interest to us: the vector supersymmetry described by eq. (A.12), cf. [27, 24]. By applying BRST-algebraic renormalization techniques [16], and disregarding Gribov ambiguities, it was proved in [24], with the help of Feynman diagrams, that all two-point functions are tree-level exact, as a consequence of the Ward identities of the model. In particular, as a consequence of the vector supersymmetry (A.11), the gauge field propagator vanishes to all orders in perturbation theory, $\displaystyle\langle A^{a}_{\mu}(p)A^{b}_{\nu}(q)\rangle=0\;.$ (3.9) In [26] this result was generalized: not only are the two-point functions of the self-dual BS theory tree-level exact, but all $n$-point Green functions of the model receive no radiative corrections. This is a direct consequence of the vanishing gauge propagator (3.9) together with the vertex structure of the full action (3.8).
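As a quick numerical sanity check (not part of the paper), the following script verifies the Euclidean duality identities underlying the constraint (3.3) and the (anti-)self-dual projector $\frac{1}{2}(\delta_{\mu\alpha}\delta_{\nu\beta}\pm\frac{1}{2}\epsilon_{\mu\nu\alpha\beta})$ appearing in the gauge-fixing action (3.5): the dual squares to the identity, $\widetilde{F}_{\mu\nu}\widetilde{F}_{\mu\nu}=F_{\mu\nu}F_{\mu\nu}$, and the projector is idempotent on antisymmetric tensors.

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# Levi-Civita symbol in 4D Euclidean space.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)

def dual(F):
    """Hodge dual of an antisymmetric rank-2 tensor: F~_{mn} = (1/2) eps_{mnab} F_{ab}."""
    return 0.5 * np.einsum("mnab,ab->mn", eps, F)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
F = A - A.T  # random antisymmetric "field strength"
F_dual = dual(F)

# In Euclidean signature the dual squares to the identity: (F~)~ = F.
assert np.allclose(dual(F_dual), F)

# Identity used when integrating out B in (2.102): F~_{mn} F~_{mn} = F_{mn} F_{mn}.
assert np.isclose(np.sum(F_dual * F_dual), np.sum(F * F))

# The (anti-)self-dual projector P±(F) = (1/2)(F ± F~) is idempotent on
# antisymmetric tensors, which makes the constraint F ± F~ = 0 consistent.
for sign in (+1, -1):
    P = lambda X: 0.5 * (X + sign * dual(X))
    assert np.allclose(P(P(F)), P(F))

print("all duality checks passed")
```

All three checks rely only on the contraction identity $\epsilon_{\mu\nu\alpha\beta}\epsilon_{\mu\nu\rho\sigma}=2(\delta_{\alpha\rho}\delta_{\beta\sigma}-\delta_{\alpha\sigma}\delta_{\beta\rho})$, valid in flat Euclidean space.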
Following the Feynman rules notation of [26], we represent the relevant propagators by their diagrammatic lines: $\langle AA\rangle$, $\langle c\bar{c}\rangle$, $\langle\bar{\chi}\psi\rangle$, $\langle AB\rangle$ and $\langle\phi\bar{\phi}\rangle$ (3.10). The relevant vertices, among them the vertex with an external $B$ leg, are likewise represented by diagrams. [The diagrammatic representations of the propagators (3.10) and vertices could not be recovered from the source.]
6313pt}{10.724pt}\pgfsys@curveto{16.8052pt}{11.66608pt}{17.1353pt}{12.94017pt}{16.78233pt}{13.76451pt}\pgfsys@lineto{19.1866pt}{16.16878pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ }}{ } {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{1.48134pt}{10.19633pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$A$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}} {}{}{}{}{{{}{}}}{}{{}}{}{}{}{{{}{}}}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{1.50891pt}{-1.50891pt}\pgfsys@curveto{1.15594pt}{-2.33325pt}{1.48602pt}{-3.60733pt}{2.4281pt}{-4.54941pt}\pgfsys@curveto{3.37018pt}{-5.49149pt}{4.64427pt}{-5.82158pt}{5.46861pt}{-5.46861pt}\pgfsys@curveto{6.29295pt}{-5.11565pt}{6.43423pt}{-4.31293pt}{5.96353pt}{-3.84222pt}\pgfsys@curveto{5.49284pt}{-3.37154pt}{4.69012pt}{-3.51282pt}{4.33716pt}{-4.33716pt}\pgfsys@curveto{3.98438pt}{-5.16168pt}{4.31445pt}{-6.43576pt}{5.25653pt}{-7.37784pt}\pgfsys@curveto{6.19861pt}{-8.31992pt}{7.4727pt}{-8.65001pt}{8.29704pt}{-8.29704pt}\pgfsys@curveto{9.12138pt}{-7.94408pt}{9.26266pt}{-7.14136pt}{8.79196pt}{-6.67065pt}\pgfsys@curveto{8.32127pt}{-6.19997pt}{7.51855pt}{-6.34125pt}{7.16559pt}{-7.16559pt}\pgfsys@curveto{6.8128pt}{-7.99011pt}{7.14288pt}{-9.26419pt}{8.08496pt}{-10.20627pt}\pgfsys@curveto{9.02704pt}{-11.14835pt}{10.30113pt}{-11.47844pt}{11.12547pt}{-11.12547pt}\pgfsys@curveto{11.94981pt}{-10.7725pt}{12.0911pt}{-9.96979pt}{11.62039pt}{-9.49908pt}\pgfsys@curveto{11.1497pt}{-9.0284pt}{10.34698pt}{-9.16968pt}{9.99402pt}{-9.99402pt}\pgfsys@curveto{9.64124pt}{-10.81854pt}{9.97131pt}{-12.09262pt}{10.91339pt}{-13.0347pt}\pgfsys@curveto{11.85547pt}{-13.97678pt}{13.12956pt}{-14.30687pt}{13.9539pt}{-13.9539pt}\pgfsys@curveto{14.77824pt}{-13.60094pt}{14.91953pt}{-12.79822pt}{14.44882pt}{-12.32751pt}\pgfsys@curveto{13.97813pt}{-11.85683pt}{13.17542pt}{-11.99811pt}{12.82245pt}{-12.82245pt}\pgfsys@curveto{12.46967pt}{-13.64697pt}{12.79974pt}{-14.92105pt}{13.74182pt}{-15.86313pt}\pgfsys@curveto{14.6839pt}{-16.8052pt}{15.958pt}{-17.1353pt}{16.78233pt}{-16.78233pt}\pgfsys@lineto{19.1866pt}{-19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{}}{} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{13.21416pt}{-7.48134pt}\pgfsys@invoke{ 
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$A$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}\quad,$ $\displaystyle\leavevmode\hbox to52.35pt{\vbox to41.03pt{\pgfpicture\makeatletter\hbox{\hskip 32.66676pt\lower-19.6866pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ } {{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{2.13394pt}{0.0pt}\pgfsys@curveto{2.13394pt}{1.17856pt}{1.17856pt}{2.13394pt}{0.0pt}{2.13394pt}\pgfsys@curveto{-1.17856pt}{2.13394pt}{-2.13394pt}{1.17856pt}{-2.13394pt}{0.0pt}\pgfsys@curveto{-2.13394pt}{-1.17856pt}{-1.17856pt}{-2.13394pt}{0.0pt}{-2.13394pt}\pgfsys@curveto{1.17856pt}{-2.13394pt}{2.13394pt}{-1.17856pt}{2.13394pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope 
}\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{{}{}}}{{}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-30.00035pt}{0.0pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{ {}{}{}} {{{{{}}{ {}{}}{}{}{{}{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{}{{{}{}}}{}{{}}{}{}{}{{{}{}}}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}} {{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}} {} }{{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} 
}}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {{}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-2.13394pt}{0.0pt}\pgfsys@moveto{-2.13394pt}{0.0pt}\pgfsys@lineto{-2.48116pt}{0.0pt}\pgfsys@lineto{-2.82837pt}{-0.375pt}\pgfsys@lineto{-3.17558pt}{-0.64952pt}\pgfsys@lineto{-3.5228pt}{-0.75pt}\pgfsys@lineto{-3.87001pt}{-0.64952pt}\pgfsys@lineto{-4.21722pt}{-0.375pt}\pgfsys@lineto{-4.56444pt}{0.0pt}\pgfsys@lineto{-4.91165pt}{0.375pt}\pgfsys@lineto{-5.25887pt}{0.64952pt}\pgfsys@lineto{-5.60608pt}{0.75pt}\pgfsys@lineto{-5.9533pt}{0.64952pt}\pgfsys@lineto{-6.3005pt}{0.375pt}\pgfsys@lineto{-6.64772pt}{0.0pt}\pgfsys@lineto{-6.99493pt}{-0.375pt}\pgfsys@lineto{-7.34215pt}{-0.64952pt}\pgfsys@lineto{-7.68936pt}{-0.75pt}\pgfsys@lineto{-8.03658pt}{-0.64952pt}\pgfsys@lineto{-8.38379pt}{-0.375pt}\pgfsys@lineto{-8.731pt}{0.0pt}\pgfsys@lineto{-9.07822pt}{0.375pt}\pgfsys@lineto{-9.42543pt}{0.64952pt}\pgfsys@lineto{-9.77264pt}{0.75pt}\pgfsys@lineto{-10.11986pt}{0.64952pt}\pgfsys@lineto{-10.46707pt}{0.375pt}\pgfsys@lineto{-10.81429pt}{0.0pt}\pgfsys@lineto{-11.1615pt}{-0.375pt}\pgfsys@lineto{-11.50871pt}{-0.64952pt}\pgfsys@lineto{-11.85593pt}{-0.75pt}\pgfsys@lineto{-12.20314pt}{-0.64952pt}\pgfsys@lineto{-12.55035pt}{-0.375pt}\pgfsys@lineto{-12.89757pt}{0.0pt}\pgfsys@lineto{-13.24478pt}{0.375pt}\pgfsys@lineto{-13.592pt}{0.64952pt}\pgfsys@lineto{-13.93921pt}{0.75pt}\pgfs
ys@lineto{-14.28642pt}{0.64952pt}\pgfsys@lineto{-14.63364pt}{0.375pt}\pgfsys@lineto{-14.98085pt}{0.0pt}\pgfsys@lineto{-15.32806pt}{-0.375pt}\pgfsys@lineto{-15.67528pt}{-0.64952pt}\pgfsys@lineto{-16.02249pt}{-0.75pt}\pgfsys@lineto{-16.3697pt}{-0.64952pt}\pgfsys@lineto{-16.71692pt}{-0.375pt}\pgfsys@lineto{-17.06413pt}{0.0pt}\pgfsys@lineto{-17.41135pt}{0.375pt}\pgfsys@lineto{-17.75856pt}{0.64952pt}\pgfsys@lineto{-18.10577pt}{0.75pt}\pgfsys@lineto{-18.45299pt}{0.64952pt}\pgfsys@lineto{-18.8002pt}{0.375pt}\pgfsys@lineto{-19.14742pt}{0.0pt}\pgfsys@lineto{-19.49463pt}{-0.375pt}\pgfsys@lineto{-19.84184pt}{-0.64952pt}\pgfsys@lineto{-20.18906pt}{-0.75pt}\pgfsys@lineto{-20.53627pt}{-0.64952pt}\pgfsys@lineto{-20.88348pt}{-0.375pt}\pgfsys@lineto{-21.2307pt}{0.0pt}\pgfsys@lineto{-21.57791pt}{0.375pt}\pgfsys@lineto{-21.92513pt}{0.64952pt}\pgfsys@lineto{-22.27234pt}{0.75pt}\pgfsys@lineto{-22.61955pt}{0.64952pt}\pgfsys@lineto{-22.96677pt}{0.375pt}\pgfsys@lineto{-23.31398pt}{0.0pt}\pgfsys@lineto{-23.6612pt}{-0.375pt}\pgfsys@lineto{-24.0084pt}{-0.64952pt}\pgfsys@lineto{-24.35562pt}{-0.75pt}\pgfsys@lineto{-24.70284pt}{-0.64952pt}\pgfsys@lineto{-25.05005pt}{-0.375pt}\pgfsys@lineto{-25.39726pt}{0.0pt}\pgfsys@lineto{-25.74448pt}{0.375pt}\pgfsys@lineto{-26.09169pt}{0.64952pt}\pgfsys@lineto{-26.4389pt}{0.75pt}\pgfsys@lineto{-27.13394pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-17.13394pt}{-8.54417pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\bar{\chi}$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}} 
{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{{{}{}}}{}{{}}{}{}{}{{{}{}}}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{1.50891pt}{1.50891pt}\pgfsys@curveto{2.33325pt}{1.15594pt}{3.60733pt}{1.48602pt}{4.54941pt}{2.4281pt}\pgfsys@curveto{5.49149pt}{3.37018pt}{5.82158pt}{4.64427pt}{5.46861pt}{5.46861pt}\pgfsys@curveto{5.11565pt}{6.29295pt}{4.31293pt}{6.43423pt}{3.84222pt}{5.96353pt}\pgfsys@curveto{3.37154pt}{5.49284pt}{3.51282pt}{4.69012pt}{4.33716pt}{4.33716pt}\pgfsys@curveto{5.16168pt}{3.98438pt}{6.43576pt}{4.31445pt}{7.37784pt}{5.25653pt}\pgfsys@curveto{8.31992pt}{6.19861pt}{8.65001pt}{7.4727pt}{8.29704pt}{8.29704pt}\pgfsys@curveto{7.94408pt}{9.12138pt}{7.14136pt}{9.26266pt}{6.67065pt}{8.79196pt}\pgfsys@curveto{6.19997pt}{8.32127pt}{6.34125pt}{7.51855pt}{7.16559pt}{7.16559pt}\pgfsys@curveto{7.99011pt}{6.8128pt}{9.26419pt}{7.14288pt}{10.20627pt}{8.08496pt}\pgfsys@curveto{11.14835pt}{9.02704pt}{11.47844pt}{10.30113pt}{11.12547pt}{11.12547pt}\pgfsys@curveto{10.7725pt}{11.94981pt}{9.96979pt}{12.0911pt}{9.49908pt}{11.62039pt}\pgfsys@curveto{9.0284pt}{11.1497pt}{9.16968pt}{10.34698pt}{9.99402pt}{9.99402pt}\pgfsys@curveto{10.81854pt}{9.64124pt}{12.09262pt}{9.97131pt}{13.0347pt}{10.91339pt}\pgfsys@curveto{13.97678pt}{11.85547pt}{14.30687pt}{13.12956pt}{13.9539pt}{13.9539pt}\pgfsys@curveto{13.60094pt}{14.77824pt}{12.79822pt}{14.91953pt}{12.32751pt}{14.44882pt}\pgfsys@curveto{11.85683pt}{13.97813pt}{11.99811pt}{13.17542pt}{12.82245pt}{12.82245pt}\pgfsys@curveto{13.64697pt}{12.46967pt}{14.92105pt}{12.79974pt}{15.86313pt}{13.74182pt}\pgfsys@curveto{16.8052pt}{14.6839pt}{17.1353pt}{15.958pt}{16.7
8233pt}{16.78233pt}\pgfsys@lineto{19.1866pt}{19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ }}{ } {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{1.48134pt}{13.21416pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$A$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{{{}{}}}{}{{}}{}{}{}{{{}{}}}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{1.50891pt}{-1.50891pt}\pgfsys@lineto{19.1866pt}{-19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{}}{} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{13.21416pt}{-7.48134pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$c$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}\quad,$ $\displaystyle\leavevmode\hbox to48.26pt{\vbox to42.43pt{\pgfpicture\makeatletter\hbox{\hskip 27.38394pt\lower-19.43661pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ 
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ } {{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{2.13394pt}{0.0pt}\pgfsys@curveto{2.13394pt}{1.17856pt}{1.17856pt}{2.13394pt}{0.0pt}{2.13394pt}\pgfsys@curveto{-1.17856pt}{2.13394pt}{-2.13394pt}{1.17856pt}{-2.13394pt}{0.0pt}\pgfsys@curveto{-2.13394pt}{-1.17856pt}{-1.17856pt}{-2.13394pt}{0.0pt}{-2.13394pt}\pgfsys@curveto{1.17856pt}{-2.13394pt}{2.13394pt}{-1.17856pt}{2.13394pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{{{}{}}}{}{{}}{}{}{}{{{}{}}}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} 
}}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-2.13394pt}{0.0pt}\pgfsys@curveto{-2.46725pt}{0.83249pt}{-3.60158pt}{1.5pt}{-4.93387pt}{1.5pt}\pgfsys@curveto{-6.26616pt}{1.5pt}{-7.4005pt}{0.83249pt}{-7.73381pt}{0.0pt}\pgfsys@curveto{-8.06712pt}{-0.83249pt}{-7.59941pt}{-1.5pt}{-6.93375pt}{-1.5pt}\pgfsys@curveto{-6.26808pt}{-1.5pt}{-5.80038pt}{-0.83249pt}{-6.1337pt}{0.0pt}\pgfsys@curveto{-6.46725pt}{0.83249pt}{-7.60158pt}{1.5pt}{-8.93387pt}{1.5pt}\pgfsys@curveto{-10.26616pt}{1.5pt}{-11.4005pt}{0.83249pt}{-11.73381pt}{0.0pt}\pgfsys@curveto{-12.06712pt}{-0.83249pt}{-11.59941pt}{-1.5pt}{-10.93375pt}{-1.5pt}\pgfsys@curveto{-10.26808pt}{-1.5pt}{-9.80038pt}{-0.83249pt}{-10.1337pt}{0.0pt}\pgfsys@curveto{-10.46725pt}{0.83249pt}{-11.60158pt}{1.5pt}{-12.93387pt}{1.5pt}\pgfsys@curveto{-14.26616pt}{1.5pt}{-15.4005pt}{0.83249pt}{-15.73381pt}{0.0pt}\pgfsys@curveto{-16.06712pt}{-0.83249pt}{-15.59941pt}{-1.5pt}{-14.93375pt}{-1.5pt}\pgfsys@curveto{-14.26808pt}{-1.5pt}{-13.80038pt}{-0.83249pt}{-14.1337pt}{0.0pt}\pgfsys@curveto{-14.46725pt}{0.83249pt}{-15.60158pt}{1.5pt}{-16.93387pt}{1.5pt}\pgfsys@curveto{-18.26616pt}{1.5pt}{-19.4005pt}{0.83249pt}{-19.73381pt}{0.0pt}\pgfsys@curveto{-20.06712pt}{-0.83249pt}{-19.59941pt}{-1.5pt}{-18.93375pt}{-1.5pt}\pgfsys@curveto{-18.26808pt}{-1.5pt}{-17.80038pt}{-0.83249pt}{-18.1337pt}{0.0pt}\pgfsys@curveto{-18.46725pt}{0.83249pt}{-19.60158pt}{1.5pt}{-20.93387pt}{1.5pt}\pgfsys@curveto{-22.26616pt}{1.5pt}{-23.4005pt}{0.83249pt}{-23.73381pt}{0.0pt}\pgfsys@lineto{-27.13394pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ 
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-17.63394pt}{-8.33305pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$A$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{{{}{}}}{}{{}}{}{}{}{{{}{}}}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}} {{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}} {} }{{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{1.50891pt}{1.50891pt}\pgfsys@moveto{1.50891pt}{1.50891pt}\pgfsys@lineto{1.75441pt}{1.75441pt}\pgfsys@lineto{1.73477pt}{2.26508pt}\pgfsys@lineto{1.78615pt}{2.7047pt}\pgfsys@lineto{1.96062pt}{3.02126pt}\pgfsys@lineto{2.27716pt}{3.19571pt}\pgfsys@lineto{2.7168pt}{3.2471pt}\pgfsys@lineto{3.22745pt}{3.22745pt}\pgfsys@lineto{3.73811pt}{3.20781pt}\pgfsys@lineto{4.17773pt}{3.25919pt}\pgfsys@lineto{4.4943pt}{3.43365pt}\pgfsys@lineto{4.66875pt}{3.7502pt}\pgfsys@lineto{4.72014pt}{4.18983pt}\pgfsys@lineto{4.70049pt}{4.70049pt}\pgfsys@lineto{4.68085pt}{5.21115pt}\pgfsys@lineto{4.73222pt}{5.65077pt}\pgfsys@lineto{4.9067pt}{5.96733pt}\pgfsys@lineto{5.22324pt}{6.14178pt}\pgfsys@lineto{5.66287pt}{6.19318pt}\pgfsys@lineto{6.17352pt}{6.17352pt}\pgfsys@lineto{6.68419pt}{6.15388pt}\pgfsys@lineto{7.12383pt}{6.20528pt}\pgfsys@lineto{7.44037pt}{6.37973pt}\pgfsys@lineto{7.61484pt}{6.69629pt}\pgfsys@lineto{7.66621pt}{7.13591pt}\pgfsys@lineto{7.64658pt}{7.64658pt}\pgfsys@lineto{7.62692pt}{8.15723pt}\pgfsys@lineto{7.67831pt}{8.59686pt}\pgfsys@lineto{7.85277pt}{8.9134pt}\pgfsys@lineto{8.16933pt}{9.08788pt}\pgfsys@lineto{8.60895pt}{9.13925pt}\pgfsys@lineto{9.11961pt}{9.11961pt}\pgfsys@lineto{9.63026pt}{9.09996pt}\pgfsys@lineto{10.0699pt}{9.15135pt}\pgfsys@lineto{10.38644pt}{9.3258pt}\pgfsys@lineto{10.56091pt}{9.64236pt}\pgfsys@lineto{10.61229pt}{10.08199pt}\pgfsys@lineto{10.59265pt}{10.59265pt}\pgfsys@lineto{10.573pt}{11.1033pt}\pgfsys@lineto{10.62439pt}{11.54294pt}\pgfsys@lineto{10.79886pt}{11.8595pt}\pgfsys@lineto{19.1866pt}{19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ }}{ } {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{2.27022pt}{14.76971pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ 
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\psi$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{{{}{}}}{}{{}}{}{}{}{{{}{}}}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}} {{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}} {} }{{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
[Feynman diagrams omitted: the interaction vertices of the theory, involving the fields $A$, $c$, $\bar{c}$, $\phi$, $\bar{\phi}$, $\psi$, and $\bar{\chi}$.] (3.11)

Using these diagrams, one identifies a kind of cascade effect in which the number of internal $A$-legs always increases when trying to construct loop Feynman diagrams, according to the diagram below,
$\begin{split}\leavevmode\hbox to123.87pt{\vbox to179.83pt{\pgfpicture\makeatletter\hbox{\hskip 2.38394pt\lower-88.41568pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ } {{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{2.13394pt}{0.0pt}\pgfsys@curveto{2.13394pt}{1.17856pt}{1.17856pt}{2.13394pt}{0.0pt}{2.13394pt}\pgfsys@curveto{-1.17856pt}{2.13394pt}{-2.13394pt}{1.17856pt}{-2.13394pt}{0.0pt}\pgfsys@curveto{-2.13394pt}{-1.17856pt}{-1.17856pt}{-2.13394pt}{0.0pt}{-2.13394pt}\pgfsys@curveto{1.17856pt}{-2.13394pt}{2.13394pt}{-1.17856pt}{2.13394pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} 
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{36.40182pt}{0.0pt}\pgfsys@curveto{36.40182pt}{1.17856pt}{35.44644pt}{2.13394pt}{34.26788pt}{2.13394pt}\pgfsys@curveto{33.08932pt}{2.13394pt}{32.13394pt}{1.17856pt}{32.13394pt}{0.0pt}\pgfsys@curveto{32.13394pt}{-1.17856pt}{33.08932pt}{-2.13394pt}{34.26788pt}{-2.13394pt}\pgfsys@curveto{35.44644pt}{-2.13394pt}{36.40182pt}{-1.17856pt}{36.40182pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{34.26788pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{34.26788pt}{0.0pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{ }}{ }{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{60.63287pt}{24.23105pt}\pgfsys@curveto{60.63287pt}{25.4096pt}{59.67749pt}{26.36499pt}{58.49893pt}{26.36499pt}\pgfsys@curveto{57.32037pt}{26.36499pt}{56.36499pt}{25.4096pt}{56.36499pt}{24.23105pt}\pgfsys@curveto{56.36499pt}{23.05249pt}{57.32037pt}{22.0971pt}{58.49893pt}{22.0971pt}\pgfsys@curveto{59.67749pt}{22.0971pt}{60.63287pt}{23.05249pt}{60.63287pt}{24.23105pt}\pgfsys@closepath\pgfsys@moveto{58.49893pt}{24.23105pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{58.49893pt}{24.23105pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{60.63287pt}{58.49893pt}\pgfsys@curveto{60.63287pt}{59.67749pt}{59.67749pt}{60.63287pt}{58.49893pt}{60.63287pt}\pgfsys@curveto{57.32037pt}{60.63287pt}{56.36499pt}{59.67749pt}{56.36499pt}{58.49893pt}\pgfsys@curveto{56.36499pt}{57.32037pt}{57.32037pt}{56.36499pt}{58.49893pt}{56.36499pt}\pgfsys@curveto{59.67749pt}{56.36499pt}{60.63287pt}{57.32037pt}{60.63287pt}{58.49893pt}\pgfsys@closepath\pgfsys@moveto{58.49893pt}{58.49893pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{58.49893pt}{58.49893pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{55.49893pt}{83.14928pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\vdots$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} 
{{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{94.90076pt}{24.23105pt}\pgfsys@curveto{94.90076pt}{25.4096pt}{93.94537pt}{26.36499pt}{92.76682pt}{26.36499pt}\pgfsys@curveto{91.58826pt}{26.36499pt}{90.63287pt}{25.4096pt}{90.63287pt}{24.23105pt}\pgfsys@curveto{90.63287pt}{23.05249pt}{91.58826pt}{22.0971pt}{92.76682pt}{22.0971pt}\pgfsys@curveto{93.94537pt}{22.0971pt}{94.90076pt}{23.05249pt}{94.90076pt}{24.23105pt}\pgfsys@closepath\pgfsys@moveto{92.76682pt}{24.23105pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{92.76682pt}{24.23105pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{112.81717pt}{22.23105pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ 
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\cdots$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{ }}{ }{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{60.63287pt}{-24.23105pt}\pgfsys@curveto{60.63287pt}{-23.05249pt}{59.67749pt}{-22.0971pt}{58.49893pt}{-22.0971pt}\pgfsys@curveto{57.32037pt}{-22.0971pt}{56.36499pt}{-23.05249pt}{56.36499pt}{-24.23105pt}\pgfsys@curveto{56.36499pt}{-25.4096pt}{57.32037pt}{-26.36499pt}{58.49893pt}{-26.36499pt}\pgfsys@curveto{59.67749pt}{-26.36499pt}{60.63287pt}{-25.4096pt}{60.63287pt}{-24.23105pt}\pgfsys@closepath\pgfsys@moveto{58.49893pt}{-24.23105pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{58.49893pt}{-24.23105pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} 
{{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{94.90076pt}{-24.23105pt}\pgfsys@curveto{94.90076pt}{-23.05249pt}{93.94537pt}{-22.0971pt}{92.76682pt}{-22.0971pt}\pgfsys@curveto{91.58826pt}{-22.0971pt}{90.63287pt}{-23.05249pt}{90.63287pt}{-24.23105pt}\pgfsys@curveto{90.63287pt}{-25.4096pt}{91.58826pt}{-26.36499pt}{92.76682pt}{-26.36499pt}\pgfsys@curveto{93.94537pt}{-26.36499pt}{94.90076pt}{-25.4096pt}{94.90076pt}{-24.23105pt}\pgfsys@closepath\pgfsys@moveto{92.76682pt}{-24.23105pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{92.76682pt}{-24.23105pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ 
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{112.81717pt}{-26.23105pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\cdots$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{60.63287pt}{-58.49893pt}\pgfsys@curveto{60.63287pt}{-57.32037pt}{59.67749pt}{-56.36499pt}{58.49893pt}{-56.36499pt}\pgfsys@curveto{57.32037pt}{-56.36499pt}{56.36499pt}{-57.32037pt}{56.36499pt}{-58.49893pt}\pgfsys@curveto{56.36499pt}{-59.67749pt}{57.32037pt}{-60.63287pt}{58.49893pt}{-60.63287pt}\pgfsys@curveto{59.67749pt}{-60.63287pt}{60.63287pt}{-59.67749pt}{60.63287pt}{-58.49893pt}\pgfsys@closepath\pgfsys@moveto{58.49893pt}{-58.49893pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{58.49893pt}{-58.49893pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope 
}\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{55.49893pt}{-84.14928pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$\vdots$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ } {{}} {{}} {{}} {{}} { {} {}{ } {} {} { } {} {} {{}}{}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{2.13394pt}{0.0pt}\pgfsys@curveto{2.46725pt}{-0.83249pt}{3.60158pt}{-1.5pt}{4.93387pt}{-1.5pt}\pgfsys@curveto{6.26616pt}{-1.5pt}{7.4005pt}{-0.83249pt}{7.73381pt}{0.0pt}\pgfsys@curveto{8.06712pt}{0.83249pt}{7.59941pt}{1.5pt}{6.93375pt}{1.5pt}\pgfsys@curveto{6.26808pt}{1.5pt}{5.80038pt}{0.83249pt}{6.1337pt}{0.0pt}\pgfsys@curveto{6.46725pt}{-0.83249pt}{7.60158pt}{-1.5pt}{8.93387pt}{-1.5pt}\pgfsys@curveto{10.26616pt}{-1.5pt}{11.4005pt}{-0.83249pt}{11.73381pt}{0.0pt}\pgfsys@curveto{12.06712pt}{0.83249pt}{11.59941pt}{1.5pt}{10.93375pt}{1.5pt}\pgfsys@curveto{10.26808pt}{1.5pt}{9.80038pt}{0.83249pt}{10.1337pt}{0.0pt}\pgfsys@curveto{10.46725pt}{-0.83249pt}{11.60158pt}{-1.5pt}{12.93387pt}{-1.5pt}\pgfsys@curveto{14.26616pt}{-1.5pt}{15.4005pt}{-0.83249pt}{15.73381pt}{0.0pt}\pgfsys@curveto{16.06712pt}{0.83249pt}{15.59941pt}{1.5pt}{14.93375pt}{1.5pt}\pgfsys@curveto{14.26808pt}{1.5pt}{13.80038pt}{0.83249pt}{14.1337pt}{0.0pt}\pgfsys@curveto{14.46725pt}{-0.83249pt}{15.60158pt}{-1.5pt}{16.93387pt}{-1.5pt}\pgfsys@curveto{18.26616pt}{-1.5pt}{19.4005pt}{-0.83249pt}{19.73381pt}{0.0pt}\pgfsys@lineto{20.13394pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} }{{}{}} {{}} {{}} {{}} {{}} { {} {}{ } {} {} {} { } {} {} {} {{}}{}{{}}{}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{}{{}} {}{}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{\pgfsys@beginscope\pgfsys@invoke{ }{}\pgfsys@moveto{20.13394pt}{0.0pt}\pgfsys@lineto{32.13394pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} }{{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}} {}{}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{\pgfsys@beginscope\pgfsys@invoke{ }{}\pgfsys@moveto{20.13394pt}{2.0pt}\pgfsys@lineto{32.13394pt}{2.0pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ } {{}} {{}} {{}} {{}} { {} {}{ } {} {} { } {} {} {{}}{}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{35.7768pt}{1.50891pt}\pgfsys@curveto{36.60114pt}{1.15594pt}{37.87521pt}{1.48602pt}{38.81729pt}{2.4281pt}\pgfsys@curveto{39.75937pt}{3.37018pt}{40.08946pt}{4.64427pt}{39.7365pt}{5.46861pt}\pgfsys@curveto{39.38353pt}{6.29295pt}{38.58081pt}{6.43423pt}{38.1101pt}{5.96353pt}\pgfsys@curveto{37.63942pt}{5.49284pt}{37.7807pt}{4.69012pt}{38.60504pt}{4.33716pt}\pgfsys@curveto{39.42957pt}{3.98438pt}{40.70364pt}{4.31445pt}{41.64572pt}{5.25653pt}\pgfsys@curveto{42.5878pt}{6.19861pt}{42.9179pt}{7.4727pt}{42.56493pt}{8.29704pt}\pgfsys@curveto{42.21196pt}{9.12138pt}{41.40924pt}{9.26266pt}{40.93854pt}{8.79196pt}\pgfsys@curveto{40.46785pt}{8.32127pt}{40.60913pt}{7.51855pt}{41.43347pt}{7.16559pt}\pgfsys@curveto{42.258pt}{6.8128pt}{43.53207pt}{7.14288pt}{44.47415pt}{8.08496pt}\pgfsys@curveto{45.41623pt}{9.02704pt}{45.74632pt}{10.30113pt}{45.39336pt}{11.12547pt}\pgfsys@curveto{45.04039pt}{11.94981pt}{44.23767pt}{12.0911pt}{43.76697pt}{11.62039pt}\pgfsys@curveto{43.29628pt}{11.1497pt}{43.43756pt}{10.34698pt}{44.2619pt}{9.99402pt}\pgfsys@curveto{45.08643pt}{9.64124pt}{46.3605pt}{9.97131pt}{47.30258pt}{10.91339pt}\pgfsys@curveto{48.24466pt}{11.85547pt}{48.57475pt}{13.12956pt}{48.2
2179pt}{13.9539pt}\pgfsys@lineto{48.50471pt}{14.23683pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} }{{}{}} {{}} {{}} {{}} {{}} { {} {}{ } {} {} {} { } {} {} {} {{}}{}{{}}{}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{}{{}} {}{}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{\pgfsys@beginscope\pgfsys@invoke{ }{}\pgfsys@moveto{48.50473pt}{14.23685pt}\pgfsys@lineto{56.99002pt}{22.72214pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} }{{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}} {}{}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{\pgfsys@beginscope\pgfsys@invoke{ }{}\pgfsys@moveto{47.09052pt}{15.65106pt}\pgfsys@lineto{55.5758pt}{24.13635pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ } {{}} {{}} {{}} {{}} { {} {}{ } {} {} { } {} {} {{}}{}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{58.49893pt}{26.36499pt}\pgfsys@curveto{59.33142pt}{26.6983pt}{59.99893pt}{27.83263pt}{59.99893pt}{29.16492pt}\pgfsys@curveto{59.99893pt}{30.49721pt}{59.33142pt}{31.63155pt}{58.49893pt}{31.96486pt}\pgfsys@curveto{57.66644pt}{32.29817pt}{56.99893pt}{31.83046pt}{56.99893pt}{31.1648pt}\pgfsys@curveto{56.99893pt}{30.49913pt}{57.66644pt}{30.03143pt}{58.49893pt}{30.36475pt}\pgfsys@curveto{59.33142pt}{30.6983pt}{59.99893pt}{31.83263pt}{59.99893pt}{33.16492pt}\pgfsys@curveto{59.99893pt}{34.49721pt}{59.33142pt}{35.63155pt}{58.49893pt}{35.96486pt}\pgfsys@curveto{57.66644pt}{36.29817pt}{56.99893pt}{35.83046pt}{56.99893pt}{35.1648pt}\pgfsys@curveto{56.99893pt}{34.49913pt}{57.66644pt}{34.03143pt}{58.49893pt}{34.36475pt}\pgfsys@curveto{59.33142pt}{34.6983pt}{59.99893pt}{35.83263pt}{59.99893pt}{37.16492pt}\pgfsys@curveto{59.99893pt}{38.49721pt}{59.33142pt}{39.63155pt}{58.49893pt}{39.96486pt}\pgfsys@curveto{57.66644pt}{40.29817pt}{56.99893pt}{39.83046pt}{56.99893pt}{39.1648pt}\pgfsys@curveto{56.99893pt}{38.49913pt}{57.66644pt}{38.03143pt}{58.49893pt}{38.36475pt}\pgfsys@curveto{59.33142pt}{38.6983pt}{59.99893pt}{39.83263pt}{59.99893pt}{41.16492pt}\pgfsys@curveto{59.99893pt}{42.49721pt}{59.33142pt}{43.63155pt}{58.49893pt}{43.96486pt}\pgfsys@lineto{58.49893pt}{44.36499pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} }{{}{}} {{}} {{}} {{}} {{}} { {} {}{ } {} {} {} { } {} {} {} {{}}{}{{}}{}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{}{{}} {}{}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{\pgfsys@beginscope\pgfsys@invoke{ }{}\pgfsys@moveto{58.49893pt}{44.36499pt}\pgfsys@lineto{58.49893pt}{56.36499pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} 
}{{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}} {}{}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{\pgfsys@beginscope\pgfsys@invoke{ }{}\pgfsys@moveto{56.49893pt}{44.36499pt}\pgfsys@lineto{56.49893pt}{56.36499pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{56.99002pt}{60.00784pt}\pgfsys@curveto{57.34299pt}{60.83218pt}{57.01291pt}{62.10626pt}{56.07083pt}{63.04834pt}\pgfsys@curveto{55.12875pt}{63.99042pt}{53.85466pt}{64.32051pt}{53.03032pt}{63.96754pt}\pgfsys@curveto{52.20598pt}{63.61458pt}{52.0647pt}{62.81186pt}{52.5354pt}{62.34116pt}\pgfsys@curveto{53.00609pt}{61.87047pt}{53.8088pt}{62.01175pt}{54.16177pt}{62.83609pt}\pgfsys@curveto{54.51456pt}{63.66061pt}{54.18448pt}{64.9347pt}{53.2424pt}{65.87677pt}\pgfsys@curveto{52.30032pt}{66.81885pt}{51.02623pt}{67.14894pt}{50.20189pt}{66.79597pt}\pgfsys@curveto{49.37755pt}{66.44301pt}{49.23627pt}{65.64029pt}{49.70697pt}{65.16959pt}\pgfsys@curveto{50.17766pt}{64.6989pt}{50.98038pt}{64.84018pt}{51.33334pt}{65.66452pt}\pgfsys@curveto{51.68613pt}{66.48904pt}{51.35605pt}{67.76312pt}{50.41397pt}{68.7052pt}\pgfsys@curveto{49.4719pt}{69.64728pt}{48.1978pt}{69.97737pt}{47.37346pt}{69.6244pt}\pgfsys@curveto{46.54912pt}{69.27144pt}{46.40784pt}{68.46872pt}{46.87854pt}{67.99802pt}\pgfsys@curveto{47.34923pt}{67.52733pt}{48.15195pt}{67.66861pt}{48.50491pt}{68.49295pt}\pgfsys@curveto{48.8577pt}{69.31747pt}{48.52762pt}{70.59155pt}{47.58554pt}{71.53363pt}\pgfsys@cur
$\big[\text{diagrammatic equation; original figure code unrecoverable}\big]\,.$ (3.12)

[Diagrams for (3.12) omitted: they depict the cascade of internal $A$-legs described below.]

This makes it impossible to close loops without using the $\langle AA\rangle$ propagator (the formal proof of this result can be found in [26]), which vanishes by means of (3.9). Note that, internally, the $A$-leg always propagates to the vertex $BAA$.
Looking at the full action (3.8), the only vertex that does possess $A$-legs is $\bar{\phi}c\psi$, but the $\bar{\phi}$-leg can only propagate to the vertex $\bar{\phi}A\phi$ through $\langle\bar{\phi}\phi\rangle$; the $c$-leg only to $\bar{c}Ac$ through $\langle\bar{c}c\rangle$; and the $\psi$-leg to the vertices $\bar{\chi}A\psi$, $\bar{\chi}cA$ or $\bar{\chi}cAA$ through $\langle\psi\bar{\chi}\rangle$ ($\langle\bar{\eta}\psi\rangle$ is not considered because there is no vertex containing $\bar{\eta}$). All possible branches produce at least one remaining internal $A$-leg, and the cascade effect is not avoided, as represented by the diagrams

[Diagrams omitted: each branch regenerates at least one internal $A$-leg, so the cascade never terminates.]
.6866pt}{37.45468pt}{18.35411pt}{36.98698pt}{19.1866pt}{37.3203pt}\pgfsys@curveto{20.01909pt}{37.65385pt}{20.6866pt}{38.78818pt}{20.6866pt}{40.12047pt}\pgfsys@curveto{20.6866pt}{41.45276pt}{20.01909pt}{42.5871pt}{19.1866pt}{42.92041pt}\pgfsys@lineto{19.1866pt}{46.32054pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{21.32054pt}{19.1866pt}\pgfsys@lineto{46.32054pt}{19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{21.32054pt}{-19.1866pt}\pgfsys@curveto{21.65385pt}{-20.01909pt}{22.78818pt}{-20.6866pt}{24.12047pt}{-20.6866pt}\pgfsys@curveto{25.45276pt}{-20.6866pt}{26.5871pt}{-20.01909pt}{26.92041pt}{-19.1866pt}\pgfsys@curveto{27.25372pt}{-18.35411pt}{26.78601pt}{-17.6866pt}{26.12035pt}{-17.6866pt}\pgfsys@curveto{25.45468pt}{-17.6866pt}{24.98698pt}{-18.35411pt}{25.3203pt}{-19.1866pt}\pgfsys@curveto{25.65385pt}{-20.01909pt}{26.78818pt}{-20.6866pt}{28.12047pt}{-20.6866pt}\pgfsys@curveto{29.45276pt}{-20.6866pt}{30.5871pt}{-20.01909pt}{30.92041pt}{-19.1866pt}\pgfsys@curveto{31.25372pt}{-18.35411pt}{30.78601pt}{-17.6866pt}{30.12035pt}{-17.6866pt}\pgfsys@curveto{29.45468pt}{-17.6866pt}{28.98698pt}{-18.35411pt}{29.3203pt}{-19.1866pt}\pgfsys@curveto{29.65385pt}{-20.01909pt}{30.78818pt}{-20.6866pt}{32.12047pt}{-20.6866pt}\pgfsys@curveto{33.45276pt}{-20.6866pt}{34.5871pt}{-20.01909pt}{34.92041pt}{-19.1866pt}\pgfsys@curveto{35.25372pt}{-18.35411pt}{34.78601pt}{-17.6866pt}{34.12035pt}{-17.6866pt}\pgfsys@curveto{33.45468pt}{-17.6866pt}{32.98698pt}{-18.35411pt}{33.3203pt}{-19.1866pt}\pgfsys@curveto{33.65385pt}{-20.01909pt}{34.78818pt}{-20.6866pt}{36.12047pt}{-20.6866pt}\pgfsys@curveto{37.45276pt}{-20.6866pt}{38.5871pt}{-20.01909pt}{38.92041pt}{-19.1866pt}\pgfsys@curveto{39.25372pt}{-18.35411pt}{38.78601pt}{-17.6866pt}{38.12035pt}{-17.6866pt}\pgfsys@curveto{37.45468pt}{-17.6866pt}{36.98698pt}{-18.35411pt}{37.3203pt}{-19.1866pt}\pgfsys@curveto{37.65385pt}{-20.01909pt}{38.78818pt}{-20.6866pt}{40.12047pt}{-20.6866pt}\pgfsys@curveto{41.45276pt}{-20.6866pt}{42.5871pt}{-20.01909pt}{42.92041pt}{-19.1866pt}\pgfsys@lineto{46.32054pt}{-19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{19.1866pt}{-21.32054pt}\pgfsys@lineto{19.1866pt}{-46.32054pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-28.64285pt}{-1.50891pt}\pgfsys@curveto{-29.4672pt}{-1.15594pt}{-30.74127pt}{-1.48602pt}{-31.68335pt}{-2.4281pt}\pgfsys@curveto{-32.62543pt}{-3.37018pt}{-32.95552pt}{-4.64427pt}{-32.60255pt}{-5.46861pt}\pgfsys@curveto{-32.24959pt}{-6.29295pt}{-31.44687pt}{-6.43423pt}{-30.97617pt}{-5.96353pt}\pgfsys@curveto{-30.50548pt}{-5.49284pt}{-30.64676pt}{-4.69012pt}{-31.4711pt}{-4.33716pt}\pgfsys@curveto{-32.29562pt}{-3.98438pt}{-33.5697pt}{-4.31445pt}{-34.51178pt}{-5.25653pt}\pgfsys@curveto{-35.45386pt}{-6.19861pt}{-35.78395pt}{-7.4727pt}{-35.43098pt}{-8.29704pt}\pgfsys@curveto{-35.07802pt}{-9.12138pt}{-34.2753pt}{-9.26266pt}{-33.8046pt}{-8.79196pt}\pgfsys@curveto{-33.33391pt}{-8.32127pt}{-33.47519pt}{-7.51855pt}{-34.29953pt}{-7.16559pt}\pgfsys@curveto{-35.12405pt}{-6.8128pt}{-36.39813pt}{-7.14288pt}{-37.34021pt}{-8.08496pt}\pgfsys@curveto{-38.28229pt}{-9.02704pt}{-38.61238pt}{-10.30113pt}{-38.25941pt}{-11.12547pt}\pgfsys@curveto{-37.90645pt}{-11.94981pt}{-37.10373pt}{-12.0911pt}{-36.63303pt}{-11.62039pt}\pgfsys@curveto{-36.16234pt}{-11.1497pt}{-36.30362pt}{-10.34698pt}{-37.12796pt}{-9.99402pt}\pgfsys@curveto{-37.95248pt}{-9.64124pt}{-39.22656pt}{-9.97131pt}{-40.16864pt}{-10.91339pt}\pgfsys@curveto{-41.11072pt}{-11.85547pt}{-41.44081pt}{-13.12956pt}{-41.08784pt}{-13.9539pt}\pgfsys@curveto
{-40.73488pt}{-14.77824pt}{-39.93216pt}{-14.91953pt}{-39.46146pt}{-14.44882pt}\pgfsys@curveto{-38.99077pt}{-13.97813pt}{-39.13205pt}{-13.17542pt}{-39.95639pt}{-12.82245pt}\pgfsys@curveto{-40.78091pt}{-12.46967pt}{-42.055pt}{-12.79974pt}{-42.99707pt}{-13.74182pt}\pgfsys@curveto{-43.93915pt}{-14.6839pt}{-44.26924pt}{-15.958pt}{-43.91628pt}{-16.78233pt}\pgfsys@lineto{-46.32054pt}{-19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{{}} {}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-26.0pt}{2.0pt}\pgfsys@lineto{-43.67769pt}{19.67769pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{{}} {}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-27.41422pt}{0.58578pt}\pgfsys@lineto{-45.0919pt}{18.26347pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}\quad,\quad\quad\quad\leavevmode\hbox to93.14pt{\vbox to93.39pt{\pgfpicture\makeatletter\hbox{\hskip 46.57054pt\lower-46.82054pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ } 
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{2.13394pt}{0.0pt}\pgfsys@curveto{2.13394pt}{1.17856pt}{1.17856pt}{2.13394pt}{0.0pt}{2.13394pt}\pgfsys@curveto{-1.17856pt}{2.13394pt}{-2.13394pt}{1.17856pt}{-2.13394pt}{0.0pt}\pgfsys@curveto{-2.13394pt}{-1.17856pt}{-1.17856pt}{-2.13394pt}{0.0pt}{-2.13394pt}\pgfsys@curveto{1.17856pt}{-2.13394pt}{2.13394pt}{-1.17856pt}{2.13394pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{-25.0pt}{0.0pt}\pgfsys@curveto{-25.0pt}{1.17856pt}{-25.95538pt}{2.13394pt}{-27.13394pt}{2.13394pt}\pgfsys@curveto{-28.3125pt}{2.13394pt}{-29.26788pt}{1.17856pt}{-29.26788pt}{0.0pt}\pgfsys@curveto{-29.26788pt}{-1.17856pt}{-28.3125pt}{-2.13394pt}{-27.13394pt}{-2.13394pt}\pgfsys@curveto{-25.95538pt}{-2.13394pt}{-25.0pt}{-1.17856pt}{-25.0pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{-27.13394pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-27.13394pt}{0.0pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{ }}{ }{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{21.32054pt}{19.1866pt}\pgfsys@curveto{21.32054pt}{20.36516pt}{20.36516pt}{21.32054pt}{19.1866pt}{21.32054pt}\pgfsys@curveto{18.00804pt}{21.32054pt}{17.05266pt}{20.36516pt}{17.05266pt}{19.1866pt}\pgfsys@curveto{17.05266pt}{18.00804pt}{18.00804pt}{17.05266pt}{19.1866pt}{17.05266pt}\pgfsys@curveto{20.36516pt}{17.05266pt}{21.32054pt}{18.00804pt}{21.32054pt}{19.1866pt}\pgfsys@closepath\pgfsys@moveto{19.1866pt}{19.1866pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{19.1866pt}{19.1866pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ 
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{{}{{{}}}{{ }}{ }{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{21.32054pt}{-19.1866pt}\pgfsys@curveto{21.32054pt}{-18.00804pt}{20.36516pt}{-17.05266pt}{19.1866pt}{-17.05266pt}\pgfsys@curveto{18.00804pt}{-17.05266pt}{17.05266pt}{-18.00804pt}{17.05266pt}{-19.1866pt}\pgfsys@curveto{17.05266pt}{-20.36516pt}{18.00804pt}{-21.32054pt}{19.1866pt}{-21.32054pt}\pgfsys@curveto{20.36516pt}{-21.32054pt}{21.32054pt}{-20.36516pt}{21.32054pt}{-19.1866pt}\pgfsys@closepath\pgfsys@moveto{19.1866pt}{-19.1866pt}\pgfsys@fillstroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{19.1866pt}{-19.1866pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{{ {}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} 
{{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}} {}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{-25.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{{}} {}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{0.0pt}{2.0pt}\pgfsys@lineto{-25.0pt}{2.0pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{{}} {{{{{}}{}{}{}{}{{}}}}}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}} {{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}} {} }{{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} 
{{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.2455pt}{0.2455pt}\pgfsys@lineto{0.22586pt}{0.75616pt}\pgfsys@lineto{0.27724pt}{1.19579pt}\pgfsys@lineto{0.4517pt}{1.51234pt}\pgfsys@lineto{0.76825pt}{1.6868pt}\pgfsys@lineto{1.20789pt}{1.73819pt}\pgfsys@lineto{1.71854pt}{1.71854pt}\pgfsys@lineto{2.2292pt}{1.6989pt}\pgfsys@lineto{2.66882pt}{1.75027pt}\pgfsys@lineto{2.98538pt}{1.92474pt}\pgfsys@lineto{3.15984pt}{2.24129pt}\pgfsys@lineto{3.21123pt}{2.68092pt}\pgfsys@lineto{3.19157pt}{3.19157pt}\pgfsys@lineto{3.17194pt}{3.70224pt}\pgfsys@lineto{3.22331pt}{4.14186pt}\pgfsys@lineto{3.39778pt}{4.45842pt}\pgfsys@lineto{3.71432pt}{4.63287pt}\pgfsys@lineto{4.15396pt}{4.68427pt}\pgfsys@lineto{4.66461pt}{4.66461pt}\pgfsys@lineto{5.17528pt}{4.64497pt}\pgfsys@lineto{5.61491pt}{4.69637pt}\pgfsys@lineto{5.93146pt}{4.87082pt}\pgfsys@lineto{6.10593pt}{5.18738pt}\pgfsys@lineto{6.1573pt}{5.627pt}\pgfsys@lineto{6.13766pt}{6.13766pt}\pgfsys@lineto{6.11801pt}{6.64832pt}\pgfsys@lineto{6.1694pt}{7.08795pt}\pgfsys@lineto{6.34386pt}{7.4045pt}\pgfsys@lineto{6.66042pt}{7.57896pt}\pgfsys@lineto{7.10004pt}{7.63034pt}\pgfsys@lineto{7.6107pt}{7.6107pt}\pgfsys@lineto{8.12135pt}{7.59105pt}\pgfsys@lineto{8.56099pt}{7.64244pt}\pgfsys@lineto{8.87753pt}{7.8169pt}\pgfsys@lineto{9.052pt}{8.13345pt}\pgfsys@lineto{9.10338pt}{8.57307pt}\pgfsys@lineto{9.08374pt}{9.08374pt}\pgfsys@lineto{9.06409pt}{9.59439pt}\pgfsys@lineto{9.11548pt}{10.03403pt}\pgfsys@lineto{9.28995pt}{10.35059pt}\pgfsys@lineto{9.60649pt}{10.52504pt}\pgfsys@lineto{10.04613pt}{10.57643pt}\pgfsys@lineto{10.55678pt}{10.55678
pt}\pgfsys@lineto{17.67767pt}{17.67767pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{{}} {{{{{}}{}{}{}{}{{}}}}}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{17.67769pt}{-17.67769pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{19.1866pt}{21.32054pt}\pgfsys@curveto{20.01909pt}{21.65385pt}{20.6866pt}{22.78818pt}{20.6866pt}{24.12047pt}\pgfsys@curveto{20.6866pt}{25.45276pt}{20.01909pt}{26.5871pt}{19.1866pt}{26.92041pt}\pgfsys@curveto{18.35411pt}{27.25372pt}{17.6866pt}{26.78601pt}{17.6866pt}{26.12035pt}\pgfsys@curveto{17.6866pt}{25.45468pt}{18.35411pt}{24.98698pt}{19.1866pt}{25.3203pt}\pgfsys@curveto{20.01909pt}{25.65385pt}{20.6866pt}{26.78818pt}{20.6866pt}{28.12047pt}\pgfsys@curveto{20.6866pt}{29.45276pt}{20.01909pt}{30.5871pt}{19.1866pt}{30.92041pt}\pgfsys@curveto{18.35411pt}{31.25372pt}{17.6866pt}{30.78601pt}{17.6866pt}{30.12035pt}\pgfsys@curveto{17.6866pt}{29.45468pt}{18.35411pt}{28.98698pt}{19.1866pt}{29.3203pt}\pgfsys@curveto{20.01909pt}{29.65385pt}{20.6866pt}{30.78818pt}{20.6866pt}{32.12047pt}\pgfsys@curveto{20.6866pt}{33.45276pt}{20.01909pt}{34.5871pt}{19.1866pt}{34.92041pt}\pgfsys@curveto{18.35411pt}{35.25372pt}{17.6866pt}{34.78601pt}{17.6866pt}{34.12035pt}\pgfsys@curveto{17.6866pt}{33.45468pt}{18.35411pt}{32.98698pt}{19.1866pt}{33.3203pt}\pgfsys@curveto{20.01909pt}{33.65385pt}{20.6866pt}{34.78818pt}{20.6866pt}{36.12047pt}\pgfsys@curveto{20.6866pt}{37.45276pt}{20.01909pt}{38.5871pt}{19.1866pt}{38.92041pt}\pgfsys@curveto{18.35411pt}{39.25372pt}{17.6866pt}{38.78601pt}{17.6866pt}{38.12035pt}\pgfsys@curveto{17.6866pt}{37.45468pt}{18.35411pt}{36.98698pt}{19.1866pt}{37.3203pt}\pgfsys@curveto{20.01909pt}{37.65385pt}{20.6866pt}{38.78818pt}{20.6866pt}{40.12047pt}\pgfsys@curveto{20.6866pt}{41.45276pt}{20.01909pt}{42.5871pt}{19.1866pt}{42.92041pt}\pgfsys@lineto{19.1866pt}{46.32054pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}} {{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}} {} }{{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} {}} {{{{}{}{{}} }}{{}} 
{}} {{{{}{}{{}} }}{{}} {{}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{21.32054pt}{19.1866pt}\pgfsys@moveto{21.32054pt}{19.1866pt}\pgfsys@lineto{21.66776pt}{19.1866pt}\pgfsys@lineto{22.01497pt}{19.5616pt}\pgfsys@lineto{22.36218pt}{19.83612pt}\pgfsys@lineto{22.7094pt}{19.9366pt}\pgfsys@lineto{23.05661pt}{19.83612pt}\pgfsys@lineto{46.32054pt}{19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{21.32054pt}{-19.1866pt}\pgfsys@curveto{21.65385pt}{-20.01909pt}{22.78818pt}{-20.6866pt}{24.12047pt}{-20.6866pt}\pgfsys@curveto{25.45276pt}{-20.6866pt}{26.5871pt}{-20.01909pt}{26.92041pt}{-19.1866pt}\pgfsys@curveto{27.25372pt}{-18.35411pt}{26.78601pt}{-17.6866pt}{26.12035pt}{-17.6866pt}\pgfsys@curveto{25.45468pt}{-17.6866pt}{24.98698pt}{-18.35411pt}{25.3203pt}{-19.1866pt}\pgfsys@curveto{25.65385pt}{-20.01909pt}{26.78818pt}{-20.6866pt}{28.12047pt}{-20.6866pt}\pgfsys@curveto{29.45276pt}{-20.6866pt}{30.5871pt}{-20.01909pt}{30.92041pt}{-19.1866pt}\pgfsys@curveto{31.25372pt}{-18.35411pt}{30.78601pt}{-17.6866pt}{30.12035pt}{-17.6866pt}\pgfsys@curveto{29.45468pt}{-17.6866pt}{28.98698pt}{-18.35411pt}{29.3203pt}{-19.1866pt}\pgfsys@curveto{29.65385pt}{-20.01909pt}{30.78818pt}{-20.6866pt}{32.12047pt}{-20.6866pt}\pgfsys@curveto{33.45276pt}{-20.6866pt}{34.5871pt}{-20.01909pt}{34.92041pt}{-19.1866pt}\pgfsys@curveto{35.25372pt}{-18.35411pt}{34.78601pt}{-17.6866pt}{34.12035pt}{-17.6866pt}\pgfsys@curveto{33.45468pt}{-17.6866pt}{32.98698pt}{-18.35411pt}{33.3203pt}{-19.1866pt}\pgfsys@curveto{33.65385pt}{-20.01909pt}{34.78818pt}{-20.6866pt}{36.12047pt}{-20.6866pt}\pgfsys@curveto{37.45276pt}{-20.6866pt}{38.5871pt}{-20.01909pt}{38.92041pt}{-19.1866pt}\pgfsys@curveto{39.25372pt}{-18.35411pt}{38.78601pt}{-17.6866pt}{38.12035pt}{-17.6866pt}\pgfsys@curveto{37.45468pt}{-17.6866pt}{36.98698pt}{-18.35411pt}{37.3203pt}{-19.1866pt}\pgfsys@curveto{37.65385pt}{-20.01909pt}{38.78818pt}{-20.6866pt}{40.12047pt}{-20.6866pt}\pgfsys@curveto{41.45276pt}{-20.6866pt}{42.5871pt}{-20.01909pt}{42.92041pt}{-19.1866pt}\pgfsys@lineto{46.32054pt}{-19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ 
}{}\pgfsys@moveto{19.1866pt}{-21.32054pt}\pgfsys@lineto{19.1866pt}{-46.32054pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}} {}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{\pgfsys@beginscope\pgfsys@invoke{ } {}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {}{}{} {}{}{} }{{{{}{}{{}} }}{{}}{{}} {{{}}} } \pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setlinewidth{0.5pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-28.64285pt}{-1.50891pt}\pgfsys@curveto{-29.4672pt}{-1.15594pt}{-30.74127pt}{-1.48602pt}{-31.68335pt}{-2.4281pt}\pgfsys@curveto{-32.62543pt}{-3.37018pt}{-32.95552pt}{-4.64427pt}{-32.60255pt}{-5.46861pt}\pgfsys@curveto{-32.24959pt}{-6.29295pt}{-31.44687pt}{-6.43423pt}{-30.97617pt}{-5.96353pt}\pgfsys@curveto{-30.50548pt}{-5.49284pt}{-30.64676pt}{-4.69012pt}{-31.4711pt}{-4.33716pt}\pgfsys@curveto{-32.29562pt}{-3.98438pt}{-33.5697pt}{-4.31445pt}{-34.51178pt}{-5.25653pt}\pgfsys@curveto{-35.45386pt}{-6.19861pt}{-35.78395pt}{-7.4727pt}{-35.43098pt}{-8.29704pt}\pgfsys@curveto{-35.07802pt}{-9.12138pt}{-34.2753pt}{-9.26266pt}{-33.8046pt}{-8.79196pt}\pgfsys@curveto{-33.33391pt}{-8.32127pt}{-33.47519pt}{-7.51855pt}{-34.29953pt}{-7.16559pt}\pgfsys@curveto{-35.12405pt}{-6.8128pt}{-36.39813pt}{-7.14288pt}{-37.34021pt}{-8.08496pt}\pgfsys@curveto{-38.28229pt}{-9.02704pt}{-38.61238pt}{-10.30113pt}{-38.25941pt}{-11.12547pt}\pgfsys@curveto{-37.90645pt}{-11.94981pt}{-37.10373pt}{-12.0911pt}{-36.63303pt}{-11.62039pt}\pgfsys@curveto{-36.16234pt}{-11.1497pt}{-36.30362pt}{-10.34698pt}{-37.12796pt}{-9.99402pt}\pgfsys@curveto{-37.95248pt}{-9.64124pt}{-39.22656pt}{-9.97131pt}{-40.16864pt}{-10.91339pt}\pgfsys@curveto{-41.11072pt}{-11.85547pt}{-41.44081pt}{-13.12956pt}{-41.08784pt}{-13.9539pt}\pgfsys@curveto
{-40.73488pt}{-14.77824pt}{-39.93216pt}{-14.91953pt}{-39.46146pt}{-14.44882pt}\pgfsys@curveto{-38.99077pt}{-13.97813pt}{-39.13205pt}{-13.17542pt}{-39.95639pt}{-12.82245pt}\pgfsys@curveto{-40.78091pt}{-12.46967pt}{-42.055pt}{-12.79974pt}{-42.99707pt}{-13.74182pt}\pgfsys@curveto{-43.93915pt}{-14.6839pt}{-44.26924pt}{-15.958pt}{-43.91628pt}{-16.78233pt}\pgfsys@lineto{-46.32054pt}{-19.1866pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{{}} {}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-26.0pt}{2.0pt}\pgfsys@lineto{-43.67769pt}{19.67769pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{{}} {}{}{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@setdash{1.0pt,2.0pt}{0.0pt}\pgfsys@invoke{ }\pgfsys@setlinewidth{1.0pt}\pgfsys@invoke{ }{}\pgfsys@moveto{-27.41422pt}{0.58578pt}\pgfsys@lineto{-45.0919pt}{18.26347pt}\pgfsys@stroke\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}\quad,\quad\quad\quad\leavevmode\hbox to93.14pt{\vbox to85.44pt{\pgfpicture\makeatletter\hbox{\hskip 46.57054pt\lower-46.82054pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@beginscope\pgfsys@invoke{ } 
[Diagrammatic equation omitted.] (3.13)

The only correlation functions that are apparently non-zero are of the type

$\langle BBB\ldots bb\rangle=\langle s\bar{\chi}BB\ldots bb\rangle=\langle s(\bar{\chi}BB\ldots bb)\rangle\;,$ (3.14)

i.e., those with external $B_{\mu\nu}^{a}$ or $b^{a}$ fields. But (3.14) automatically vanishes, as it is BRST-exact. In a few words, using perturbative techniques, one sees that the tree-level exactness of the BS theory in the self-dual gauges is a consequence of the vector supersymmetry and the BRST symmetry.

### 3.2 Renormalization ambiguity

Once we have at our disposal all the Ward identities, we are able to construct the most general counterterm $\Sigma^{c}$ that can absorb the divergences arising in the evaluation of Feynman graphs. Due to the triviality of the BRST cohomology [27, 24], $\Sigma^{c}$ belongs to the trivial part of the cohomology. The fact that the BS theory is quantum stable is a well-known result in the literature [51, 27, 24].
In [24], an extra non-linear bosonic symmetry was introduced which relates the topological ghost to the Faddeev-Popov one (among other transformations involving other fields) through the transformation

$\delta\psi^{a}_{\mu}\longmapsto D^{ab}_{\mu}c^{b}\;,$ (3.15)

described by the Ward identity $\mathcal{T}$ in Equation (A.13). Taking this extra symmetry into account, from the multiplicative redefinition of the fields, sources and parameters of the model,

$\Phi_{0}=Z^{1/2}_{\Phi}\Phi\;,\quad\Phi=\{A^{a}_{\mu},\psi^{a}_{\mu},c^{a},\bar{c}^{a},\phi^{a},\bar{\phi}^{a},b^{a},\bar{\eta}^{a},\bar{\chi}^{a}_{\mu\nu},B^{a}_{\mu\nu}\}\;,$
$\mathcal{J}_{0}=Z_{\mathcal{J}}\mathcal{J}\;,\quad\mathcal{J}=\{\tau^{a}_{\mu},\Omega^{a}_{\mu},E^{a},L^{a},\Lambda^{a}_{\mu\nu},K^{a}_{\mu\nu}\}\;,$
$g_{0}=Z_{g}g\;,$ (3.16)

one proves the quantum stability of the BS theory in the self-dual gauges with only one independent renormalization parameter, i.e., that the quantum action $\Gamma\equiv\Sigma(\Phi_{0},\mathcal{J}_{0},g_{0})$ at one loop is of the form

$\Sigma(\Phi_{0},\mathcal{J}_{0},g_{0})=\Sigma(\Phi,\mathcal{J},g)+\epsilon\Sigma^{c}(\Phi,\mathcal{J},g)\;,$ (3.17)

with

$\Sigma^{c}=a\int d^{4}x\,\left(B^{a}_{\mu\nu}F^{a}_{\mu\nu}-2\bar{\chi}^{a}_{\mu\nu}D^{ab}_{\mu}\psi^{b}_{\nu}-gf^{abc}\bar{\chi}^{a}_{\mu\nu}c^{b}F^{c}_{\mu\nu}\right)\;,$ (3.18)

where the resulting $Z$ factors obey the following system of equations:

$Z^{1/2}_{A}=Z_{b}^{-1/2}=Z_{g}^{-1}\;,$
$Z^{1/2}_{\bar{c}}=Z^{1/2}_{\bar{\eta}}=Z_{\psi}^{-1/2}=Z_{\Omega}=Z^{-1/2}_{c}\;,$
$Z_{\bar{\phi}}^{1/2}=Z_{\phi}^{-1/2}=Z_{\tau}=Z_{L}=Z^{-1}_{g}Z^{-1}_{c}\;,$
$Z_{E}=Z_{g}^{-2}Z^{-3/2}_{c}\;,$
$Z_{K}=Z_{g}^{-1}Z_{c}^{-1/2}Z_{\bar{\chi}}^{-1/2}\;,$
$Z_{\Lambda}=Z_{g}^{-2}Z_{c}^{-1}Z_{\bar{\chi}}^{-1/2}\;,$
$Z^{1/2}_{B}Z^{1/2}_{A}=Z^{1/2}_{\bar{\chi}}Z^{1/2}_{c}=1+\epsilon a\;,$ (3.19)

with the independent renormalization parameter denoted by $a$. Due to the recursive nature of algebraic renormalization [16], the results (3.19) show that the model is renormalizable to all orders in perturbation theory.

From the algebraic analysis so far, we cannot prove that $Z_{g}=1$, as suggested by the tree-level exactness obtained via the study of the Feynman diagrams. The system of $Z$ factors (3.19) is undetermined: the number of equations $n$ and the number of variables $z$ (the $Z$ factors) are related by $z=n+2$, indicating a freedom in the choice of two of the $Z$ factors. In [25], the origin of this ambiguity was explained: it is due to the absence of a kinetic gauge field term outside the trivial part of the BRST cohomology, and to the absence of discrete symmetries involving the ghost fields. The symmetries of the SDL gauges eliminate the kinetic term of the Faddeev-Popov ghost in the counterterm, i.e.,

$Z_{c}Z_{\bar{c}}=1\;.$ (3.20)

Moreover, from the gauge-ghost vertex ($\bar{c}Ac$), which is also absent in the counterterm, we obtain

$Z_{g}Z^{1/2}_{A}=1\;.$ (3.21)

The two relations (3.20) and (3.21) are decoupled; in other words, determining $Z_{c}$ or $Z_{\bar{c}}$ alone gives no information about $Z_{g}$ or $Z_{A}$. As there are no kinetic terms for the gauge field in the classical action (3.8), the independent determination of $Z_{A}$ becomes impossible. The same analysis can be performed for the bosonic and topological ghosts, see [25]. Extra information is then required in order to determine the system (3.19).
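The counting $z=n+2$ can be made concrete with a small sanity check (a sketch not contained in the original analysis): writing $z_{X}=\ln Z_{X}$ turns each multiplicative relation in (3.19) into a linear equation, and, under the assumption that the combination $1+\epsilon a$ is fixed by the one-loop counterterm, the last line of (3.19) contributes two equations. One then finds $n=15$ equations for $z=17$ logarithms, so exactly two $Z$ factors remain free.

```python
# Pure-Python check of the counting z = n + 2 for the system (3.19).
# Writing z_X = ln Z_X turns each multiplicative relation into a linear
# equation; 1 + eps*a is treated as fixed by the one-loop counterterm,
# so the last line of (3.19) contributes two equations.
from fractions import Fraction as F

V = ["A", "b", "g", "cbar", "etabar", "psi", "Omega", "c",
     "phibar", "phi", "tau", "L", "E", "K", "Lam", "chibar", "B"]
idx = {v: i for i, v in enumerate(V)}

def row(**coeffs):
    # One equation: sum_i coeffs[i] * z_i = const.  The constant is
    # irrelevant for counting the freedom, which depends only on the rank.
    r = [F(0)] * len(V)
    for v, c in coeffs.items():
        r[idx[v]] = F(c)
    return r

rows = [
    row(A=F(1, 2), b=F(1, 2)),                 # Z_A^{1/2} = Z_b^{-1/2}
    row(A=F(1, 2), g=1),                       # Z_A^{1/2} = Z_g^{-1}
    row(cbar=F(1, 2), etabar=F(-1, 2)),        # Z_cbar^{1/2} = Z_etabar^{1/2}
    row(cbar=F(1, 2), psi=F(1, 2)),            #             = Z_psi^{-1/2}
    row(cbar=F(1, 2), Omega=-1),               #             = Z_Omega
    row(cbar=F(1, 2), c=F(1, 2)),              #             = Z_c^{-1/2}
    row(phibar=F(1, 2), phi=F(1, 2)),          # Z_phibar^{1/2} = Z_phi^{-1/2}
    row(phibar=F(1, 2), tau=-1),               #                = Z_tau
    row(phibar=F(1, 2), L=-1),                 #                = Z_L
    row(phibar=F(1, 2), g=1, c=1),             #                = Z_g^{-1} Z_c^{-1}
    row(E=1, g=2, c=F(3, 2)),                  # Z_E = Z_g^{-2} Z_c^{-3/2}
    row(K=1, g=1, c=F(1, 2), chibar=F(1, 2)),  # Z_K = Z_g^{-1} Z_c^{-1/2} Z_chibar^{-1/2}
    row(Lam=1, g=2, c=1, chibar=F(1, 2)),      # Z_Lambda = Z_g^{-2} Z_c^{-1} Z_chibar^{-1/2}
    row(B=F(1, 2), A=F(1, 2)),                 # Z_B^{1/2} Z_A^{1/2} = 1 + eps*a
    row(chibar=F(1, 2), c=F(1, 2)),            # Z_chibar^{1/2} Z_c^{1/2} = 1 + eps*a
]

def rank(matrix):
    # Gauss-Jordan elimination over exact rationals.
    m = [r[:] for r in matrix]
    rk = 0
    for col in range(len(V)):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

freedom = len(V) - rank(rows)
print(freedom)  # 2: two Z factors remain free, matching z = n + 2
```

Tracking the elimination by hand shows the two free parameters can be taken to be $z_{g}$ and $z_{c}$, consistent with the discussion of (3.20) and (3.21) above.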
In ordinary Yang-Mills theories (quantized in the Landau gauge), $Z_{c}=Z_{\bar{c}}$, which relies on the discrete symmetry

$c^{a}\longrightarrow\bar{c}^{a}\;,\quad\bar{c}^{a}\longrightarrow-{c}^{a}\;.$ (3.22)

This condition, together with the Faddeev-Popov ghost kinetic term, is sufficient to determine $Z_{c}$ and $Z_{\bar{c}}$. It is easy to see that the action (3.8) does not obey such a symmetry. Discrete symmetries between the other ghosts of topological Yang-Mills theories ($\phi^{a}$ and $\bar{\phi}^{a}$; $\psi^{a}_{\mu}$ and $\bar{\chi}^{a}_{\mu\nu}$) are also absent in (3.8), which explains the second ambiguity. In Witten’s theory, such an ambiguity does not appear by this reasoning, since Witten’s action contains discrete symmetries ensured by the time-reversal symmetry (3.2) in the Landau gauge, together with

$\phi\rightarrow\bar{\phi}\;,\quad\bar{\phi}\rightarrow\phi\;,\qquad\psi_{\mu}\rightarrow\chi_{\mu}\;,\quad\chi_{\mu}\rightarrow\psi_{\mu}\;,$ (3.23)

where the components of $\chi_{\mu}$ are defined as

$\chi_{0}\equiv\eta\;,\quad\chi_{i}\equiv\chi_{0i}=\frac{1}{2}\varepsilon_{ijk}\chi_{jk}\;,$ (3.24)

implying a “particle-antiparticle” relationship between $\bar{c}$ and $c$, $\bar{\phi}$ and $\phi$, and $\psi_{\mu}$ and $\chi_{\mu}$, as demonstrated in [52]. This ambiguity is also present in a generalized class of renormalizable gauges [25]. In fact, one could relate this ambiguity to the fact that all local degrees of freedom are non-physical (e.g. the gauge field propagator is totally gauge dependent). In the self-dual Landau gauges, where the vector supersymmetry is present, the Feynman diagram structure indicates that imposing $Z_{c}=Z_{\bar{c}}$ and $Z_{\phi}=Z_{\bar{\phi}}$ is consistent with the model.
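Spelled out explicitly (a short chain of substitutions into (3.19), taking positive square roots; this chain is not written out in the text):

```latex
% Second line of (3.19) gives Z_{\bar c}^{1/2} = Z_c^{-1/2}; imposing Z_c = Z_{\bar c}:
Z_{c}=Z_{\bar{c}}
  \;\Longrightarrow\; Z_{c}=Z_{\bar{c}}=1\;,
% Third line of (3.19) gives Z_{\bar\phi}^{1/2} = Z_\phi^{-1/2} = Z_g^{-1} Z_c^{-1};
% imposing Z_\phi = Z_{\bar\phi}:
Z_{\phi}=Z_{\bar{\phi}}
  \;\Longrightarrow\; Z_{\phi}=Z_{\bar{\phi}}=1
  \;\Longrightarrow\; Z_{g}^{-1}Z_{c}^{-1}=1
  \;\Longrightarrow\; Z_{g}=1\;.
```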
Hence the $Z$-factor system (3.19) would naturally yield $Z_{g}=1$, in accordance with the absence of radiative corrections in this gauge choice. However, without recovering the discrete symmetries between the ghosts, such an imposition seems artificial. As we will see later, the renormalization ambiguity can be resolved in the SDL gauges, i.e., the discrete symmetries can be reconstructed, thanks to the triviality of the Gribov copies [28], which allows for a non-local transformation with trivial Jacobian capable of recovering such symmetries.

## 4 Perturbative $\beta$ functions

Our aim in this section is to compare the DW and BS $\beta$-functions in order to prove that these topological gauge theories are not completely equivalent at the quantum level, and then to identify in which energy regimes the correspondence could occur. The DW $\beta$-function is well known [52, 34], as we briefly describe below; it remains to determine the self-dual BS one in order to perform the comparison.

### 4.1 Twisted $N=2$ super-Yang-Mills theory

In [52], the authors computed the one-loop $\beta$-function of the DW theory. Later, the authors of [34] employed algebraic renormalization techniques to study the DW theory as well, and proved that the $\beta$-function of twisted $N=2$ SYM ($\beta^{N=2}_{g}$) is one-loop exact. The reason is that the composite operator $\text{Tr}\phi^{2}(x)$ does not renormalize [53]. To show this, they used the fact that the operator $d_{\mu\nu}$, defined in expression (2.15), is redundant [54]. Hence an extended BRST operator, namely

$\mathcal{S}=s_{YM}+\omega\delta+\varepsilon_{\mu}\delta_{\mu}\;,$ (4.1)

could be employed. In expression (4.1), $\omega$ and $\varepsilon_{\mu}$ are global ghosts, and $\delta$ and $\delta_{\mu}$ were defined in equations (2.13) and (2.14).
The relevant property of the operator $\mathcal{S}$ is that it is on-shell nilpotent in the space of integrated local functionals, since

$\mathcal{S}^{2}=\omega\varepsilon_{\mu}\partial_{\mu}+\text{equations of motion}\;.$ (4.2)

We point out that this extended BRST construction requires the equations of motion in order to obtain a nilpotent BRST operator, a standard feature of Witten’s theory, which represents a quantization scheme different from that of the BS theory. Considering the non-renormalization of $\text{Tr}\phi^{2}$ and the on-shell cohomology of the operator defined in eq. (4.2), the result is that the $\beta$-function only receives contributions at one-loop order, and is given by

$\beta_{g}^{N=2}=-Kg^{3}\;,$ (4.3)

with $K$ a constant. The computation of $\beta_{g}^{N=2}$ via Feynman diagrams was performed in [52] by evaluating the one-loop contributions to the gauge field propagator (where the Landau gauge was used to fix the Yang-Mills symmetry of the Witten action (2.30)). The one-loop exactness of the $N=2$ $\beta$-function has usually been understood in terms of the analogous Adler-Bardeen theorem for the $U(1)$ axial current in $N=2$ SYM [9].

Despite the independence of the Witten partition function of the coupling constant, the result (4.3) should not be surprising. In the twisted version, we can see that the trace of the energy-momentum tensor is not zero, but given by (see [8])

$g_{\mu\nu}T^{\mu\nu}=\text{Tr}\left\{D_{\mu}\phi D^{\mu}\bar{\phi}-2iD_{\mu}\eta\psi^{\mu}+2i\bar{\phi}[\psi_{\mu},\psi^{\mu}]+2i\phi[\eta,\eta]+\frac{1}{2}[\phi,\bar{\phi}]^{2}\right\}\;,$ (4.4)

meaning that $S_{W}$ is not conformally invariant under the transformation

$\delta g_{\mu\nu}=h(x)g_{\mu\nu}\;,$ (4.5)

for an arbitrary real function $h(x)$ on $M$.
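As an aside not contained in the original derivation, the one-loop-exact flow (4.3) integrates in closed form: writing $t=\ln(\mu/\mu_{0})$, the equation $dg/dt=-Kg^{3}$ gives $g^{2}(t)=g^{2}(0)/\left(1+2Kg^{2}(0)\,t\right)$, so for $K>0$ the coupling runs logarithmically to zero in the ultraviolet. A minimal numerical check of this solution ($K$ and $g_{0}$ are illustrative values only, not taken from the theory):

```python
# Integrate mu dg/dmu = -K g^3 (i.e. dg/dt = -K g^3 with t = ln(mu/mu0))
# with a simple Euler scheme and compare against the closed-form solution
# g(t) = g0 / sqrt(1 + 2 K g0^2 t).  K and g0 are illustrative values.
import math

K, g0 = 0.1, 1.0
t_end, n = 2.0, 200_000
dt = t_end / n

g = g0
for _ in range(n):
    g += -K * g**3 * dt

closed = g0 / math.sqrt(1.0 + 2.0 * K * g0**2 * t_end)
print(abs(g - closed) < 1e-4)  # True: the Euler result matches the closed form
```

The closed form makes (4.7) below transparent: the weak coupling regime $g^{2}\rightarrow 0$ is approached as $\mu\rightarrow\infty$ whenever $K>0$.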
Nonetheless, the trace of the energy-momentum tensor can be written as a total covariant divergence,

$g_{\mu\nu}T^{\mu\nu}=D_{\mu}\left[\text{Tr}(\bar{\phi}D^{\mu}\phi-2i\eta\psi^{\mu})\right]\;,$ (4.6)

which means that $S_{W}$ is invariant under a global rescaling of the metric, $\delta g_{\mu\nu}=wg_{\mu\nu}$, with $w$ constant [8]. The $N=2$ $\beta$-function vanishes only in the weak coupling limit $g^{2}\rightarrow 0$,

$\beta_{g}^{N=2}(g^{2}\rightarrow 0)=0\;.$ (4.7)

In this limit, the possibility of loop corrections to the effective action is eliminated, and the Donaldson polynomials are reproduced as the observables of the theory. There is no Ward identity, or particular property of the vertices and propagators of $S_{W}$, capable of eliminating these quantum corrections in an arbitrary energy regime; this situation is distinct from that of the BS theory in the self-dual Landau gauges.

### 4.2 Baulieu-Singer topological theory

As suggested by the tree-level exactness of the BS theory in the self-dual Landau gauges, established by the analysis of the Feynman diagrams performed in Section 3, we will formally prove that the self-dual BS theory is conformal. Before proving the vanishing of the BS $\beta$-function in this gauge, we first discuss the non-physical character of the coupling constant in this off-shell approach, since $g$ is introduced in the BS theory as a gauge parameter, in the trivial part of the BRST cohomology.

#### 4.2.1 Nonphysical character of the $\beta$ function in the off-shell approach

In [52], Brooks et al. argued that only one counterterm is required in the on-shell Witten theory, specifically for the YM term $\text{Tr}\,F^{2}_{\mu\nu}$. In any case, the Donaldson invariants are described by the DW theory in the weak coupling limit $g^{2}\rightarrow 0$, where the theory is dominated by the classical minima. On the other hand, it is evident that the BS theory is distinct from the Brooks et al.
construction, because their methods are based on different BRST quantization schemes, with different cohomological properties. We do not expect a similar result in the BS theory. According to the cohomology of the BS model, if $\beta^{BS}_{g}$ is not zero, we should find that it is $\text{Tr}\,(F_{\mu\nu}\pm\widetilde{F}_{\mu\nu})^{2}$ rather than $\text{Tr}\,F^{2}_{\mu\nu}$ which is renormalized (see [55], where Birmingham et al. employed the Batalin-Vilkovisky algorithm [56], a quantization similar to the BS approach, i.e., with similar cohomological properties). In this way, the minima of the effective action preserve the instanton configuration at the quantum level, according to the global degrees of freedom of the instantons, which define the observables of the BS theory: the Donaldson invariants.

A possible discrepancy between $\beta$-functions of the BS approach in different gauge choices cannot be attributed to a gauge anomaly, since such anomalies are forbidden in these models due to the trivial BRST cohomology [55], cf. equation (2.105). For instance, if we had chosen the gauge $D^{ab}_{\mu}\psi^{b}_{\mu}=0$ for the topological ghost, with the covariant derivative instead of the ordinary one, the vector supersymmetry would be broken, and the gauge propagator would no longer vanish to all orders.

In ordinary Yang-Mills theories, the $\beta$-function is an on-shell gauge-invariant physical quantity. Nonetheless, in gauge-fixed BRST topological theories of BS type, the coupling constant is non-physical, introduced in the trivial part of the cohomology together with the gauge-fixing action. In these terms, it is not contradictory that the $\beta$-function is gauge dependent, as it computes the running of a non-physical parameter. We must observe that the physical observables of the theory, the Donaldson invariants, naturally do not depend on the gauge coupling.
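Concerning the operator $\text{Tr}\,(F_{\mu\nu}\pm\widetilde{F}_{\mu\nu})^{2}$ mentioned above: in Euclidean signature one has $\widetilde{\widetilde{F}}_{\mu\nu}=F_{\mu\nu}$ and $\widetilde{F}_{\mu\nu}\widetilde{F}_{\mu\nu}=F_{\mu\nu}F_{\mu\nu}$, so $(F\pm\widetilde{F})^{2}=2F^{2}\pm 2F\widetilde{F}$ differs from the YM term only by the topological density. A quick numerical check of these two identities on a random antisymmetric tensor (a side illustration with color indices suppressed, not part of the argument above):

```python
# Check the Euclidean duality identities  dual(dual F) = F  and
# dual(F).dual(F) = F.F  for a random antisymmetric 4x4 tensor.
import itertools, random

def levi_civita(i, j, k, l):
    # Sign of the permutation (i, j, k, l) of (0, 1, 2, 3), or 0 if repeated.
    p = [i, j, k, l]
    if len(set(p)) < 4:
        return 0
    sign = 1
    for a in range(4):
        for b in range(a + 1, 4):
            if p[a] > p[b]:
                p[a], p[b] = p[b], p[a]
                sign = -sign
    return sign

random.seed(0)
F = [[0.0] * 4 for _ in range(4)]
for mu in range(4):
    for nu in range(mu + 1, 4):
        F[mu][nu] = random.uniform(-1, 1)
        F[nu][mu] = -F[mu][nu]

def dual(F):
    # Hodge dual: dual(F)_{mu nu} = (1/2) eps_{mu nu rho sigma} F_{rho sigma}.
    Fd = [[0.0] * 4 for _ in range(4)]
    for mu, nu in itertools.product(range(4), repeat=2):
        Fd[mu][nu] = 0.5 * sum(levi_civita(mu, nu, r, s) * F[r][s]
                               for r, s in itertools.product(range(4), repeat=2))
    return Fd

Fd = dual(F)
Fdd = dual(Fd)
dot = lambda X, Y: sum(X[m][n] * Y[m][n]
                       for m, n in itertools.product(range(4), repeat=2))
print(all(abs(Fdd[m][n] - F[m][n]) < 1e-12
          for m, n in itertools.product(range(4), repeat=2)))  # True: dual(dual F) = F
print(abs(dot(Fd, Fd) - dot(F, F)) < 1e-12)                    # True: ~F.~F = F.F
```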
Thus there is no inconsistency in the fact that the observables of this kind of theory, described by topological invariants, i.e., exact numbers, do not depend on the coupling constant, and consequently on its running, $g$ being an unobservable quantity. As the DW and BS theories possess the same observables, we should then regard the instanton configuration not as a gauge-fixing condition, but as a physical requirement needed to obtain the correct degrees of freedom corresponding to the description of all global observables. Furthermore, the Atiyah-Singer index theorem [37] determines the dimension of the instanton moduli space, in which the Donaldson invariants are defined; see [57, 58] for some exact instanton solutions, whose properties cannot be attributed to gauge artifacts.

#### 4.2.2 Conformal structure of the self-dual gauges

To prove that the algebraic renormalization is in harmony with the Feynman diagram analysis in the self-dual Landau gauges, which shows that the BS model receives no radiative corrections in this gauge, we must invoke a result recently published in [28]. In that work, it was demonstrated that the Gribov ambiguities [59, 60] are harmless in the self-dual BS theory (the result was proved to be valid to all orders in perturbation theory by making use of Zwanziger’s approach [61] to the Gribov problem [59]). The quantization of this model in a local section of the field space where the eigenvalues of the Faddeev-Popov determinant are positive is equivalent to its quantization in the whole field space. In other words, the introduction of the Gribov horizon does not affect the dynamics of the BS theory in the SDL gauges, as the corresponding gap equation forbids the introduction of a Gribov massive parameter in the gauge field propagator. This result also suggests that the fiber bundle structure of the BS theory is trivial [62]. Let us quickly recall the Gribov procedure in the quantization of non-Abelian gauge theories [59, 60].
It essentially consists in eliminating remaining gauge ambiguities usually present in non-Abelian gauge theories, known as Gribov copies, which are not eliminated by the Faddeev-Popov (FP) procedure [63, 64]. In Yang-Mills theories, the FP gauge-fixing procedure results in the well-known functional generator $Z_{YM}=\mathcal{N}\int\mathcal{D}A|\det[-\partial_{\mu}D_{\mu}^{ab}]|\delta(\partial_{\mu}A_{\mu})e^{-S_{YM}}=\mathcal{N}\int\mathcal{D}A\mathcal{D}\bar{c}\mathcal{D}ce^{-(S_{YM}+S_{gf})}\;,$ (4.8) whereby $S_{gf}$ is the well-known gauge-fixing action given by $S_{gf}=\int d^{4}x\left(\bar{c}^{a}\partial_{\mu}D_{\mu}^{ab}c^{b}-\frac{1}{2\alpha}(\partial_{\mu}A_{\mu}^{a})^{2}\right)\;.$ (4.9) In (4.9), the limit $\alpha\longrightarrow 0$ must be taken in order to reach the Landau gauge, $\partial_{\mu}A_{\mu}=0\;.$ (4.10) Consider a gauge orbit (the equivalence class of gauge field configurations that only differ by a gauge transformation, thus representing the same physics according to the gauge principle), $A_{\mu}^{U}=UA_{\mu}U^{\dagger}-\frac{i}{g}(\partial_{\mu}U)U^{\dagger}\;,$ (4.11) with $U=e^{-igT^{a}\theta^{a}(x)}\;\Big{|}\;U\in SU(N)$, where $\theta^{a}(x)$ are the local gauge parameters of the non-Abelian symmetry, and $T^{a}$ are the generators of the gauge group. The FP hypothesis [63, 64] is that there is only one gauge configuration in the orbit (4.11) obeying the Landau gauge condition (4.10). In his seminal work [59], V. N. Gribov demonstrated that this hypothesis fails in the low-energy regime of YM theory, because one can always find two configurations $\tilde{A}$ and $A$ obeying the Landau gauge condition and yet being related by a gauge transformation. At the infinitesimal level, the condition for a configuration $A$ to have a Gribov copy $\tilde{A}$ is that the FP operator develops zero-modes through $-\partial_{\mu}D_{\mu}\theta=0\,,$ (4.12) with $\theta^{a}$ taken as an infinitesimal parameter, $U\approx 1-ig\,\theta^{a}T^{a}$.
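To make the origin of (4.12) explicit, one may expand the orbit (4.11) for infinitesimal $\theta^{a}$ and impose the Landau gauge on both configurations; a short check (in adjoint components, consistent with the conventions above) reads:

```latex
% Infinitesimal expansion of the orbit (4.11):
A^{U,a}_{\mu} = A^{a}_{\mu} - (D_{\mu}\theta)^{a} + \mathcal{O}(\theta^{2})\,,
\qquad
(D_{\mu}\theta)^{a} = \partial_{\mu}\theta^{a} + gf^{abc}A^{b}_{\mu}\theta^{c}\,.
% Demanding that both A and A^U satisfy the Landau gauge (4.10):
\partial_{\mu}A^{U,a}_{\mu}
= \underbrace{\partial_{\mu}A^{a}_{\mu}}_{=\,0} - \partial_{\mu}(D_{\mu}\theta)^{a} = 0
\quad\Longrightarrow\quad
-\partial_{\mu}D^{ab}_{\mu}\theta^{b} = 0\,,
```

which is precisely the zero-mode condition (4.12).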
Equation (4.12) is recognized as the Gribov copies equation in the Landau gauge (and also in linear covariant gauges; see [65, 66, 67, 68]). Equation (4.12) can be seen as an eigenvalue equation for the FP operator, where $\theta$ is a zero mode of the operator. In the Landau gauge, this operator is Hermitian, and thus its eigenvalues are real. For values of $A_{\mu}$ sufficiently small, the eigenvalues of the FP operator will be positive, as $-\partial^{2}$ only has positive eigenvalues (in Abelian theories, such as QED, $-\partial^{2}$ is the “FP operator”, and the copy equation only possesses trivial solutions in the thermodynamic limit, meaning that the Gribov copies are inoffensive in this case). As $A_{\mu}$ increases, the FP operator will attain a first zero mode (4.12). The region in which the FP operator develops its first vanishing eigenvalue is called the Gribov horizon (see also [60]). Gribov's proposal was to restrict the path integral domain to the region $\Omega$ (the Gribov region) defined by $\Omega=\\{A^{a}_{\mu};\;\partial_{\mu}A_{\mu}=0,\;-\partial D>0\\}\,.$ (4.13) Such a restriction ensures the elimination of all infinitesimal copies and also guarantees that no independent field is eliminated [69]. The restriction to the Gribov region $\Omega$ is implemented by the introduction of a step-function $\Theta(-\partial D)$ in the Feynman path integral, which leads to the well-known no-pole condition for the FP ghost propagator $\langle(\partial D)^{-1}\rangle$, which diverges when a zero mode is attained. The main result of restricting the Feynman path integral domain to the Gribov region is a modified gluon propagator, due to the emergence of a massive parameter for the gauge field, the so-called Gribov parameter $\gamma$.
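The Hermiticity of the FP operator in the Landau gauge, invoked above, can be verified directly. Using the transversality condition $\partial_{\mu}A^{a}_{\mu}=0$, a one-line computation shows that the ordinary and covariant derivatives commute inside the operator:

```latex
\partial_{\mu}(D_{\mu}\theta)^{a}
= \partial^{2}\theta^{a} + gf^{abc}\,\partial_{\mu}\!\left(A^{b}_{\mu}\theta^{c}\right)
= \partial^{2}\theta^{a} + gf^{abc}A^{b}_{\mu}\partial_{\mu}\theta^{c}
= (D_{\mu}\partial_{\mu}\theta)^{a}\,,
```

so that $-\partial_{\mu}D_{\mu}=-D_{\mu}\partial_{\mu}$ on transverse configurations. Integrating by parts then shows that $\int d^{4}x\,\omega^{a}\left(-\partial_{\mu}D_{\mu}\theta\right)^{a}=\int d^{4}x\,\left(-\partial_{\mu}D_{\mu}\omega\right)^{a}\theta^{a}$, i.e., the operator is symmetric, and hence its spectrum is real.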
In the presence of the Gribov horizon, the gluon propagator takes the form $\langle A^{a}_{\mu}(k)A_{\nu}^{b}(p)\rangle=\delta^{ab}\delta(p+k)\frac{k^{2}}{k^{4}+\gamma^{4}}P_{\mu\nu}(k)\;,$ (4.14) where $P_{\mu\nu}(k)=\delta_{\mu\nu}-k_{\mu}k_{\nu}/k^{2}$, and $\gamma$ is fixed by the gap equation [61, 70], $\frac{\partial\Gamma}{\partial\gamma^{2}}=0\;.$ (4.15) According to Zwanziger's generalization [61], the gap equation above is valid to all orders in perturbation theory; see [71, 72], where the all-order proof of the equivalence between the Gribov and Zwanziger methods was worked out. An important feature of the Gribov parameter is that it only affects the infrared dynamics. The matching between the Gribov-Zwanziger theory and recent lattice data is achieved through the introduction of dimension-two condensates, see [73]. The introduction of the Gribov horizon in the action explicitly breaks the BRST symmetry. This is usually an unwanted result, as the BRST symmetry is necessary to prove unitarity, to ensure renormalizability to all orders, and to define the physical gauge-invariant observables of the theory [74, 75, 76]. This breaking, however, brought to light the physical meaning of the infrared parameter $\gamma$ and its intrinsically non-perturbative character. One can prove that the BRST breaking is proportional to $\gamma^{2}$; in other words, the BRST symmetry is restored in the perturbative regime. One says that the BRST symmetry is only softly broken, cf. [77, 78, 76, 79]. Only more recently, a universal, gauge-independent, (non-perturbative) BRST-invariant way to introduce the Gribov horizon was developed [67, 68, 80, 81, 82]. In the self-dual topological BS theory, it was proved in [28] that all topological gauge copies associated with the gauge ambiguities (2.69) and (2.70) are eliminated through the introduction of the usual Gribov restriction, in which the path integral domain is restricted to the region $\Omega$; see eq. (4.13).
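As a quick numerical illustration (a sketch, not part of the paper's analysis; the function name is ours), the scalar form factor of (4.14), $D(k^{2})=k^{2}/(k^{4}+\gamma^{4})$, makes the infrared suppression explicit: it vanishes at $k=0$ and peaks at $k^{2}=\gamma^{2}$, in contrast to an ordinary massive propagator $1/(k^{2}+m^{2})$, which tends to $1/m^{2}$ in the infrared:

```python
import numpy as np

def gribov_form_factor(k2, gamma):
    """Scalar part of the Gribov gluon propagator (4.14): k^2 / (k^4 + gamma^4)."""
    return k2 / (k2**2 + gamma**4)

gamma = 1.0
k2 = np.linspace(0.0, 10.0, 100001)  # grid in k^2, in units of gamma^2
D = gribov_form_factor(k2, gamma)

# Infrared suppression: D(0) = 0, unlike 1/(k^2 + m^2).
assert D[0] == 0.0
# The maximum sits at k^2 = gamma^2, with value 1/(2 gamma^2).
k2_max = k2[np.argmax(D)]
assert abs(k2_max - gamma**2) < 1e-3
assert abs(D.max() - 1.0 / (2.0 * gamma**2)) < 1e-6
print(k2_max, D.max())
```

The vanishing of $D$ at the origin is the momentum-space signature of the horizon restriction: the would-be massless gluon pole is removed.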
Moreover, due to the triviality of the gap equation, it was verified that the Gribov copies do not affect the infrared dynamics of the self-dual BS theory, because $\gamma_{BS}=0$ is the only possible solution of the gap equation (a similar situation occurs in $N=4$ SYM, which possesses a vanishing $\beta$-function; as with the conformal structure of the self-dual BS theory, the absence of an invariant scale makes it impossible to attach a dynamical meaning to the Gribov parameter [83]). Thus, no mass parameter seems to emerge in the BS theory, preserving its conformal character at the quantum level. Specifically, the tree-level exactness in SDL gauges is preserved. Such a behavior can be inferred from the absence of radiative corrections, which ensures the semi-positivity of all two-point functions. The FP ghost propagator, for instance, reads $\langle\bar{c}^{a}(k)c^{b}(k)\rangle=\delta^{ab}\frac{1}{k^{2}}\;,$ (4.16) which is valid to all orders, demonstrating that the FP operator remains positive-definite at the quantum level, consistent with the inverse of the FP propagator being positive, thus proving that we are inside the Gribov region. Moreover, the gauge two-point function remains trivial, i.e., $\langle A^{a}_{\mu}(k)A_{\nu}^{b}(k)\rangle=0$ to all orders. Exploiting the positive-definiteness of the FP ghost propagator, we are able to safely perform the following shifts: $\displaystyle\bar{\eta}^{a}$ $\displaystyle\longmapsto$ $\displaystyle\bar{\eta}^{a}+\bar{c}^{a}\;,$ $\displaystyle\phi^{b}$ $\displaystyle\longmapsto$ $\displaystyle\phi^{b}-gf^{cde}(\partial_{\nu}D_{\nu}^{bc})^{-1}\partial_{\mu}\left(c^{d}\psi^{e}_{\mu}\right)\;,$ $\displaystyle\bar{c}^{a}$ $\displaystyle\longmapsto$ $\displaystyle\bar{c}^{a}-\frac{1}{2}gf^{cde}\bar{\chi}^{d}_{\mu\nu}(F_{\pm})^{e}_{\mu\nu}(\partial_{\nu}D_{\nu}^{ca})^{-1}\;.$ (4.17) It is worth noting that these shifts generate a trivial Jacobian.
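The triviality of the Jacobian can be seen from the structure of (4.17): each shift adds to a field a functional of other fields only, never of the shifted field itself, and the dependencies contain no cycles. Schematically (our notation),

```latex
\Phi_{i} \;\longmapsto\; \Phi_{i} + F_{i}\big[\Phi_{j\neq i}\big]
\quad\Longrightarrow\quad
\det\!\left(\frac{\delta\Phi'_{i}}{\delta\Phi_{j}}\right)
= \det\!\left(\mathbb{1} + N\right) = 1\,,
```

where $N$ is strictly triangular (hence nilpotent) in field space, so the (super)determinant reduces to that of the identity.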
Setting $\frac{1}{2}\rho_{1}=\alpha$ and $\frac{1}{2}\rho_{2}=\beta$, implementing the BS gauge constraints (2.96) and (2.98) together with $\partial_{\mu}\psi_{\mu}=0$, and integrating out the auxiliary fields $b$ and $B$ in the action (2.101), the final gauge-fixing action is $\displaystyle S_{gf}(\alpha,\beta)$ $\displaystyle=$ $\displaystyle\int d^{4}x\left[-\frac{1}{2\alpha}(\partial A)^{2}-\frac{1}{4\beta}F_{\pm}^{2}\right]-\int d^{4}x\left[\left(\bar{\eta}^{a}-\bar{c}^{a}\right)\partial_{\mu}\psi^{a}_{\mu}\right.$ (4.18) $\displaystyle+$ $\displaystyle\left.\bar{c}^{a}\partial_{\mu}D_{\mu}^{ab}c^{b}-\frac{1}{2}gf^{abc}\bar{\chi}^{a}_{\mu\nu}c^{b}\left(F_{\mu\nu}^{c}\pm\widetilde{F}_{\mu\nu}^{c}\right)-\bar{\chi}^{a}_{\mu\nu}\left(\delta_{\mu\alpha}\delta_{\nu\beta}\pm\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}\right)D_{\alpha}^{ab}\psi_{\beta}^{b}\right.$ $\displaystyle+$ $\displaystyle\left.\bar{\phi}^{a}\partial_{\mu}D_{\mu}^{ab}\phi^{b}+gf^{abc}\bar{\phi}^{a}\partial_{\mu}\left(c^{b}\psi^{c}_{\mu}\right)\right]\;,$ where $F_{\pm}=F\pm\tilde{F}$ and $D_{\pm}\equiv\left(\delta_{\mu\alpha}\delta_{\nu\beta}-\delta_{\nu\alpha}\delta_{\mu\beta}\pm\epsilon_{\mu\nu\alpha\beta}\right)D_{\alpha}^{ab}$. The self-dual Landau gauges are recovered by setting $\alpha,\beta\rightarrow 0$.
Then, applying the shifts (4.17) to the action $S_{gf}(\alpha,\beta)$, one gets $\displaystyle S_{gf}^{shifted}(\alpha,\beta)$ $\displaystyle=$ $\displaystyle\int d^{4}x\left[-\frac{1}{2\alpha}(\partial A)^{2}-\frac{1}{4\beta}F_{\pm}^{2}\right]-\int d^{4}x\left[\bar{\eta}^{a}\partial_{\mu}\psi^{a}_{\mu}+\bar{c}^{a}\partial_{\mu}D_{\mu}^{ab}c^{b}\right.$ (4.19) $\displaystyle-$ $\displaystyle\left.\bar{\chi}^{a}_{\mu\nu}\left(\delta_{\mu\alpha}\delta_{\nu\beta}\pm\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}\right)D_{\alpha}^{ab}\psi_{\beta}^{b}+\bar{\phi}^{a}\partial_{\mu}D_{\mu}^{ab}\phi^{b}\right]\;.$ As the Jacobian of the shifts that perform $S_{gf}(\alpha,\beta)\rightarrow S_{gf}^{shifted}(\alpha,\beta)$ is trivial, the quantization of both actions is perturbatively equivalent, cf. [34]. Such a Jacobian only generates a number that can be absorbed by the normalization factor. This shows that the discrete symmetries (3.2) and (3.2) present in the Witten theory can be recovered, which naturally impose the relations $Z_{c}=Z_{\bar{c}}\quad\text{and}\quad Z_{\phi}=Z_{\bar{\phi}}$ (4.20) to be valid in the BS theory. Hence, combining (4.20) with the $Z$-factor system (3.19), one obtains $Z_{g}=1\;,$ (4.21) which proves that the algebraic analysis is in harmony with the result obtained via the study of the Feynman diagrams in the presence of the vector symmetry, i.e., that the topological BS theory (in the self-dual Landau gauges) is conformal, in accordance with the absence of radiative corrections. The algebraic renormalization and the Feynman diagram analysis consist of two independent methods. In the loop expansion, used to construct the diagrams in Section 3.1, we expand around the trivial $A=0$ sector, i.e., around an instanton with winding number zero. One may wonder whether this is physically relevant, given the importance of instanton configurations in the topological theory for constructing the Donaldson invariants.
It is exactly the topological nature of the off-shell BS theory that saves the day here. Let us first remark that it is possible to write down a BRST-invariant version of the Gribov restriction, relevant if $\gamma$ were to be nonzero, whilst preserving equivalence with the above formalism (in the sense that all correlation functions are identical); see [67, 80, 82] for details. As already mentioned, the topological partition function does not depend on the coupling $g$. This means all observables can be computed in the $g\to 0$ limit. Expanding around a nontrivial instanton background rather than around $A=0$ would introduce corrections of the type $e^{-1/g^{2}}$ into the effective action, but these vanish exponentially fast as $g\to 0$; they represent only a liberty of the theory, i.e., they would not affect the global observables, see [28]. As such, we can a priori work around $A=0$ without losing the generality of the result, which remains unaltered for a generic instanton background. ## 5 Characterization of the DW/BS correspondence We will characterize in this section the quantum correspondence between the twisted $N=2$ SYM in the ultraviolet regime and the conformal Baulieu-Singer theory in the SDL gauges. ### 5.1 Quantum equivalence between DW and self-dual BS theories The result obtained in (4.21) in the SDL gauges proves that the self-dual BS $\beta$-function vanishes. This result is completely different from the twisted $N=2$ SYM, which receives one-loop corrections and possesses a non-vanishing $\beta$-function given by (4.3). The correspondence between the BS and $N=2$ $\beta$-functions occurs when we take the weak coupling limit ($g^{2}\rightarrow 0$) on the $N=2$ side. In this limit, $\beta^{N=2}_{g}\rightarrow 0$.
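The $e^{-1/g^{2}}$ suppression quoted above follows from the standard Yang-Mills instanton action (a textbook result, stated here for completeness): for a configuration of winding number $n$,

```latex
S_{\text{inst}} = \frac{8\pi^{2}}{g^{2}}\,|n|
\quad\Longrightarrow\quad
e^{-S_{\text{inst}}} = e^{-8\pi^{2}|n|/g^{2}}
\;\xrightarrow{\;g\to 0\;}\; 0\,,
```

so the contribution of any $|n|>0$ background vanishes faster than any power of $g$, and the perturbative expansion around $A=0$ captures all power-series information.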
On the BS side, however, the vanishing of the $\beta$-function is valid for an arbitrary coupling constant, and not only for weak coupling, the conformal property being a consequence of the vector supersymmetry, which forbids radiative corrections. In DW theory, such a property is obtained by taking $g^{2}$ as small as we want (as long as $g^{2}\neq 0$), as a consequence of the fact that the observables of DW theory are insensitive to changes of $g$. That is how Witten computed his partition function, which naturally reproduces the Donaldson invariants for four-manifolds, i.e., by eliminating the influence of the vertices at $g^{2}\rightarrow 0$ and keeping only the quadratic part of the action. The BS theory is also invariant under changes of $g$, as $g$ only appears in the trivial part of the BRST cohomology, but the tree-level exactness is a general property of the BS theory in the self-dual Landau gauges, i.e., it is valid in an arbitrary perturbative regime. The characterization of the correspondence between the twisted $N=2$ SYM and a conformal field theory is now complete. The fact that the twisted $N=2$ SYM in the weak coupling limit and the BS theory share the same global observables is a well-known result in the literature [22, 84, 23]. In the DW theory, the Donaldson invariants are defined by the $\delta$-supersymmetry (2.1.1) according to the bi-descent equations encoded in (2.65). In the BS theory, the same bi-descent equations appear, constructed from the $n$-th Chern class $\widetilde{\mathcal{W}}_{n}$ defined in terms of the universal curvature in the extended space $M\times\mathcal{A}/\mathcal{G}$. Such an equivalence is ensured by the equivariant cohomology, which allows for the replacement $s\rightarrow\delta$, as $\widetilde{\mathcal{W}}_{n}$ is invariant under ordinary Yang-Mills transformations.
We now define in which energy regimes such an equivalence occurs when we employ the self-dual Landau gauges in the BS theory, formally proving the correspondence between the twisted $N=2$ SYM and a conformal gauge theory. The fact that the observables are the same, as a consequence of the equivariant cohomology, does not characterize the correspondence at the quantum level (we will provide a counterexample in the next section). The correspondence between the DW and BS observables, given by the equivalence $\langle\mathcal{O}^{DW}_{\alpha_{1}}(\phi_{i})\mathcal{O}^{DW}_{\alpha_{2}}(\phi_{i})\cdots\mathcal{O}^{DW}_{\alpha_{p}}(\phi_{i})\rangle_{g^{2}\rightarrow 0}=\langle\mathcal{O}^{BS}_{\alpha_{1}}(\phi^{\prime}_{i})\mathcal{O}^{BS}_{\alpha_{2}}(\phi^{\prime}_{i})\cdots\mathcal{O}^{BS}_{\alpha_{p}}(\phi^{\prime}_{i})\rangle_{SDL}\;,$ (5.1) is independent of the gauge choice. The field content that defines the observables is the same in both theories, $\phi_{i}\equiv\phi^{\prime}_{i}$, since the observables are independent of the FP ghosts (which appear in the gauge-fixed BS action). In a few words, $\mathcal{O}^{DW}_{\alpha}(\phi_{i})\equiv\mathcal{O}^{BS}_{\alpha}(\phi^{\prime}_{i})$, represented by the product in eq. (2.65). As demonstrated in Section 2.2.5, the BS observables naturally do not depend on $(c,\bar{c})$, due to the invariance of $\widetilde{\mathcal{W}}_{n}$ under $s_{YM}$ (the Yang-Mills BRST operator). The BS theory reproduces the Donaldson polynomials precisely as a consequence of the structure of the off-shell BRST transformations (2.72). Witten works exclusively in the moduli space $\mathcal{A}/\mathcal{G}$, i.e., without fixing the gauge, his observables being naturally independent of the FP ghosts. Another crucial point is that the gauge-fixing term in which the FP ghosts are introduced in the self-dual BS theory does not allow for the influence of Gribov copies.
For this reason, working in the moduli space $\mathcal{A}/\mathcal{G}$ in the DW theory is completely equivalent to working in the BS theory in SDL gauges for an arbitrary $g$, since $\gamma^{4}=0$. Fixing the remaining YM gauge symmetry of Witten's action (2.30), instead of working in $\mathcal{A}/\mathcal{G}$, would not break such a correspondence, since the Gribov copies could only affect the non-perturbative regime, being inoffensive in the ultraviolet limit $g^{2}\rightarrow 0$. The quantum equivalence is illustrated by the agreement between the $\beta$-functions, $\beta^{N=2}_{g}(g^{2}\rightarrow 0)=\beta_{g}^{BS}(g)=0$. Finally, due to the property (2.35) of Witten theory, which ensures that it can be extended to any Riemannian manifold, the DW/BS correspondence is characterized as follows: the twisted $N=2$ SYM at $g^{2}\rightarrow 0$, in any Riemannian manifold (among those that can be continuously deformed into each other, including $\mathbb{R}^{4}$; this is the only requirement guaranteeing that the observables on both sides correspond, as the conformal BS theory is defined in Euclidean spaces: in the DW theory, spaces that can be continuously deformed into one another yield the same Donaldson invariants for a class of manifolds, since a continuous variation of the metric is equivalent to adding a $\delta$-exact term to the action, which does not alter the observables), defined in the instanton moduli space $\mathcal{A}/\mathcal{G}$, corresponds to the topological BS theory in the self-dual Landau gauges in Euclidean spaces, in an arbitrary perturbative regime. Such a BS theory consists of a conformal field theory, where the gauge copies are inoffensive in the infrared, since the massive infrared parameter originating from the gauge copies vanishes in this gauge; see Table 3 below.
Twisted $N=2$ SYM | BS in self-dual Landau gauges
---|---
On-shell $\delta$-supersymmetry | Off-shell BRST + vector supersymmetry
Donaldson invariants ($\delta$) | Donaldson invariants ($s\rightarrow\delta$)
$g^{2}\rightarrow 0$ | arbitrary $g$
Any Riemannian manifold, $g_{\mu\nu}$ | Euclidean spaces, $\delta_{\mu\nu}$
$\mathcal{A}/\mathcal{G}$ | gauge-fixed $|$ $\gamma^{4}_{Gribov}=0$
$\beta_{g}^{N=2}\rightarrow 0$ | $\beta_{g}^{BS}(g)=0$

Table 3: Characterization of the DW/BS correspondence.

We emphasize that we use perturbative techniques to prove the conformal property of the self-dual BS theory. The fact that the self-dual BS theory in the strong coupling limit $g^{2}\rightarrow\infty$ is also correspondent to Witten's TQFT defined at $g^{2}\rightarrow 0$ can be conjectured by means of the cohomological structure of the off-shell BRST symmetry. Changing $g$ in the BS theory is equivalent to adding a BRST-exact term to the action, i.e., it is equivalent to performing a change in the gauge choice. Moreover, the global observables of the BS theory, described by the Chern classes $\widetilde{\mathcal{W}}_{n}$, do not depend on the gauge choice, and reproduce the Donaldson invariants for four-manifolds. Also, the Gribov ambiguities are irrelevant to the BS model (at least in the self-dual Landau gauges), a property that should remain valid in the strong regime. ### 5.2 Considerations about the gauge dependence and possible generalizations Due to the exact nature of the topological Donaldson invariants, which are given by exact numbers, one might suppose that quantum corrections should not affect the tree-level results, and that the description of the Donaldson invariants by the gauge-fixed BS approach should not depend on the gauge choice. Although intuitive, this argument is neither sufficient nor complete. As a counterexample, we invoke the $\beta$-function obtained by Birmingham et al.
in [55], where the Batalin-Vilkovisky (BV) algorithm [56] was employed. Such a model possesses cohomological properties similar to those of the BS theory. For a particular configuration of auxiliary fields used in [55], the BV method recovers the BS gauges with $\rho_{1}=\rho_{2}=0$, together with the constraint $D_{\mu}\psi_{\mu}=0$; see eq. (2.96). This constraint on the topological ghost, with the covariant derivative instead of the ordinary one, breaks the vector supersymmetry, allowing for quantum corrections. Consequently, the $\beta$-function computed by Birmingham et al. is not zero. As noted by the authors of [55], it is $\text{Tr}\,(F_{\mu\nu}\pm\widetilde{F}_{\mu\nu})^{2}$ rather than $\text{Tr}\,F_{\mu\nu}^{2}$ which is renormalized, meaning that the vacuum configurations are preserved. As expected, the structure of the instanton moduli space, which defines the Donaldson invariants, is not altered. Regarding the gauge dependence of the $\beta$-function in off-shell topological gauge models, see Sec. 4.2.1. The coupling constant in this model is non-physical, belonging to the trivial part of the cohomology. Any change in the unobservable coupling constant only leads to a BRST-exact variation. The only observables are the global ones, and we must expect that, for different gauge choices, the global observables are not affected. According to the result of Birmingham et al. in [55], it is possible to obtain non-trivial quantum corrections without destroying the topological properties of the underlying theory, preserving the observables. Analogously, we can consider the possibility that the fields could also be consistently renormalized, i.e., in such a way that the bi-descent equations that define the Donaldson invariants are not altered. This reasoning shows that the renormalization of topological gauge theories, consistent with the global observables, is not a trivial issue.
The vector supersymmetry, which leads to the conformal BS theory, is a particular symmetry of the self-dual Landau gauges. One must note that $\partial_{\mu}A_{\mu}=D_{\mu}A_{\mu}$, due to the antisymmetry of $f^{abc}$. Imposing $\partial_{\mu}\psi_{\mu}=0$ or $D_{\mu}\psi_{\mu}=0$ automatically preserves the instanton moduli space, where $A_{\mu}$ and $\psi_{\mu}$ obey the same equations of motion, according to the Atiyah-Singer theorem that correctly defines its dimension. The preservation of the instanton moduli space is then the only requirement of the topological theory, the conformal property being a particular feature of the self-dual Landau gauges. The dimension of the instanton moduli space should not depend on the gauge choice, as it is protected by the Atiyah-Singer theorem. The second generalization that can be worked out is in the direction of developing a model in which the BS theory can be constructed for a generic $g_{\mu\nu}$. Again, any change of the Euclidean metric to a generic one corresponds to the addition of a BRST-exact term in the BS theory. This means that such a variation can be interpreted as a change in the gauge choice, and the previous discussion also applies here. The vector supersymmetry is easily defined in flat spaces. In order to reproduce the conformal properties of the SDL gauges in Euclidean space for a generic $g_{\mu\nu}$, we will face the task of finding a class of metrics whose corresponding action possesses a rich set of Ward identities ($\mathbf{W}_{I}$), capable of reproducing the same effect as the self-dual ones (see Appendix A), given by the 11 functional operators $\mathbf{W}^{BS}_{I}\equiv\\{\mathcal{S},\mathcal{W}_{\mu},\mathcal{W}^{a}_{1},\mathcal{W}^{a}_{2},\mathcal{W}^{a}_{3},\mathcal{W}^{a}_{4},\mathcal{G}^{a}_{\phi},\mathcal{G}^{a}_{1},\mathcal{G}^{a}_{2},\mathcal{T},\mathcal{G}_{3}\\}$.
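The identity $\partial_{\mu}A_{\mu}=D_{\mu}A_{\mu}$ invoked above is immediate in components:

```latex
(D_{\mu}A_{\mu})^{a}
= \partial_{\mu}A^{a}_{\mu} + gf^{abc}A^{b}_{\mu}A^{c}_{\mu}
= \partial_{\mu}A^{a}_{\mu}\,,
\qquad\text{since}\quad
f^{abc}A^{b}_{\mu}A^{c}_{\mu} = 0
```

by the antisymmetry of $f^{abc}$ under $b\leftrightarrow c$ contracted with a symmetric product of fields.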
Besides that, we would face the highly nontrivial task of studying the Gribov copies in curved spacetimes. The vanishing of the Gribov parameter in the self-dual BS theory in Euclidean spaces ensures that the DW/BS correspondence is valid for a generic coupling constant on the BS side. ## 6 Conclusions We performed a comparative study between the Donaldson-Witten TQFT [8] and the Baulieu-Singer approach [13]. While DW theory is obtained via the twist transformation of the $N=2$ SYM, BS theory is based on the BRST gauge fixing of an action composed of topological invariants; see Sections 2.1 and 2.2, respectively. Besides that, Witten theory is defined by an on-shell supersymmetry, according to the fermionic symmetry, see eq. (2.1.1), whose associated charge is only nilpotent if we use the equations of motion. Such a symmetry defines the observables of the theory, given by the Donaldson polynomials. The BS approach, in turn, consists of an off-shell BRST construction, which enjoys the topological BRST symmetry (2.72), whose observables are also given by the Donaldson invariants, due to the equivariant cohomology (defined by Witten's fermionic symmetry), which also characterizes the BS observables that can be written as Chern classes of the curvature in the extended space $M\times\mathcal{A}/\mathcal{G}$, where $M$ is a Riemannian manifold and $\mathcal{A}/\mathcal{G}$ is the instanton moduli space, see Section 2.2.5. Despite sharing the same observables, we note that these theories do not necessarily have the same quantum properties, as the Witten and BS actions do not differ by a BRST-exact term, cf. (2.127). In a few words, the BRST quantization schemes of the Witten and BS theories are not equivalent. In harmony with the quantum properties of the BS approach in the self-dual Landau gauges, see Section 3, we formally proved that the topological self-dual BS theory is conformal.
According to the Feynman diagram analysis performed in [26], we proved the absence of quantum corrections in the BS theory in the presence of the vector supersymmetry. In Section 4.2.1, we discussed the nonphysical character of the coupling constant in the off-shell BS approach. Then, to construct an algebraic proof that the self-dual BS theory is conformal, we first solved the renormalization ambiguity in topological Yang-Mills theories described in [25], using a nonlocal transformation which recovers discrete symmetries between ghost and antighost fields. Such nonlocal transformations turn out to be a freedom of the self-dual BS theory, due to the triviality of the Gribov copies in the SDL gauges [28], see Section 4.2.2. As a consequence of this triviality, using the Ward identities of the model, together with the symmetry between the topological and Faddeev-Popov ghosts introduced in [24], and employing algebraic renormalization techniques, we concluded that $Z_{g}=1$, i.e., that the self-dual BS $\beta$-function indeed vanishes. We observed that these theories do not possess the same quantum structure by comparing the $\beta$-function of each model, see Section 4. From this analysis, we characterized the correspondence between the twisted $N=2$ SYM and BS theories at the quantum level, defining in which regimes such a correspondence occurs, see Section 5. In spite of having distinct BRST constructions, we conclude that working in the instanton moduli space $\mathcal{A}/\mathcal{G}$ on the DW side is completely equivalent to working in the self-dual Landau gauges on the BS side, since the Gribov copies do not affect its infrared dynamics.
In a few words, the twisted $N=2$ SYM in any Riemannian manifold (that can be continuously deformed into $M=\mathbb{R}^{4}$), in the weak coupling limit $g^{2}\rightarrow 0$, corresponds to the BS theory in the self-dual Landau gauges in an arbitrary perturbative regime, which consists of a conformal gauge theory defined in Euclidean flat space, see Table 3. Such a characterization could shed some light on the connection between supersymmetry, topology, off-shell BRST symmetry, and non-Abelian conformal gauge theories in four dimensions. ## Acknowledgments We would like to thank A. D. Pereira, G. Sadovski, and A. A. Tomaz for enlightening discussions, which were indispensable for the development of this work. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) – Finance Code 001 – and the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq-Brazil) – Finance Code 159928/2019-2. ## Appendix A BS Ward identities in the self-dual Landau gauges The BS action in the self-dual Landau gauges (3.8) enjoys the following Ward identities: (i) The Slavnov-Taylor identity, which expresses the BRST invariance of the action (3.8): $\mathcal{S}(\Sigma)=0\;,$ (A.1) where $\displaystyle\mathcal{S}(\Sigma)$ $\displaystyle=$ $\displaystyle\int d^{4}z\left[\left(\psi^{a}_{\mu}-\frac{\delta\Sigma}{\delta\Omega^{a}_{\mu}}\right)\frac{\delta\Sigma}{\delta A^{a}_{\mu}}+\frac{\delta\Sigma}{\delta\tau^{a}_{\mu}}\frac{\delta\Sigma}{\delta\psi^{a}_{\mu}}+\left(\phi^{a}+\frac{\delta\Sigma}{\delta L^{a}}\right)\frac{\delta\Sigma}{\delta c^{a}}+\frac{\delta\Sigma}{\delta E^{a}}\frac{\delta\Sigma}{\delta\phi^{a}}+\right.$ (A.2) $\displaystyle+$ $\displaystyle\left.b^{a}\frac{\delta\Sigma}{\delta\bar{c}^{a}}+\bar{\eta}^{a}\frac{\delta\Sigma}{\delta\bar{\phi}^{a}}+B^{a}_{\mu\nu}\frac{\delta\Sigma}{\delta\bar{\chi}^{a}_{\mu\nu}}+\Omega^{a}_{\mu}\frac{\delta\Sigma}{\delta\tau^{a}_{\mu}}+L^{a}\frac{\delta\Sigma}{\delta
E^{a}}+K^{a}_{\mu\nu}\frac{\delta\Sigma}{\delta\Lambda^{a}_{\mu\nu}}\right]\;.$ (ii) Ordinary Landau gauge fixing and Faddeev-Popov anti-ghost equation: $\displaystyle\mathcal{W}^{a}_{1}\Sigma$ $\displaystyle=$ $\displaystyle\frac{\delta\Sigma}{\delta b^{a}}=\partial_{\mu}A_{\mu}^{a}\;,$ $\displaystyle\mathcal{W}^{a}_{2}\Sigma$ $\displaystyle=$ $\displaystyle\frac{\delta\Sigma}{\delta\bar{c}^{a}}-\partial_{\mu}\frac{\delta\Sigma}{\delta\Omega^{a}_{\mu}}=-\partial_{\mu}\psi_{\mu}^{a}\;.$ (A.3) (iii) Topological Landau gauge fixing and bosonic anti-ghost equation: $\displaystyle\mathcal{W}^{a}_{3}\Sigma$ $\displaystyle=$ $\displaystyle\frac{\delta\Sigma}{\delta\bar{\eta}^{a}}=\partial_{\mu}\psi_{\mu}^{a}\;,$ $\displaystyle\mathcal{W}^{a}_{4}\Sigma$ $\displaystyle=$ $\displaystyle\frac{\delta\Sigma}{\delta\bar{\phi}^{a}}-\partial_{\mu}\frac{\delta\Sigma}{\delta\tau^{a}_{\mu}}=0\;.$ (A.4) (iv) Bosonic ghost equation: $\mathcal{G}^{a}_{\phi}\Sigma=\Delta_{\phi}^{a}\;,$ (A.5) where $\displaystyle\mathcal{G}^{a}_{\phi}$ $\displaystyle=$ $\displaystyle\int d^{4}z\left(\frac{\delta}{\delta\phi^{a}}-gf^{abc}\bar{\phi}^{b}\frac{\delta}{\delta b^{c}}\right)\;,$ $\displaystyle\Delta_{\phi}^{a}$ $\displaystyle=$ $\displaystyle gf^{abc}\int d^{4}z\left(\tau_{\mu}^{b}A_{\mu}^{c}+E^{b}c^{c}+\Lambda_{\mu\nu}^{b}\bar{\chi}_{\mu\nu}^{c}\right)\;.$ (A.6) (v) Ordinary Faddeev-Popov ghost equation: $\mathcal{G}^{a}_{1}\Sigma=\Delta^{a}\;,$ (A.7) where $\displaystyle\mathcal{G}^{a}_{1}$ $\displaystyle=$ $\displaystyle\int d^{4}z\left[\frac{\delta}{\delta c^{a}}+gf^{abc}\left(\bar{c}^{b}\frac{\delta}{\delta b^{c}}+\bar{\phi}^{b}\frac{\delta}{\delta\bar{\eta}^{c}}+\bar{\chi}^{b}_{\mu\nu}\frac{\delta}{\delta B^{c}_{\mu\nu}}+\Lambda^{b}_{\mu\nu}\frac{\delta}{\delta K^{c}_{\mu\nu}}\right)\right]\;,$ $\displaystyle\Delta^{a}$ $\displaystyle=$ $\displaystyle gf^{abc}\int 
d^{4}z\left(E^{b}\phi^{c}-\Omega_{\mu}^{b}A_{\mu}^{c}-\tau_{\mu}^{b}\psi_{\mu}^{c}-L^{b}c^{c}+\Lambda_{\mu\nu}^{b}B_{\mu\nu}^{c}-K_{\mu\nu}^{b}\bar{\chi}_{\mu\nu}^{c}\right)\;.$ (A.8) (vi) Second Faddeev-Popov ghost equation: $\mathcal{G}^{a}_{2}\Sigma=\Delta^{a}\;,$ (A.9) where $\mathcal{G}^{a}_{2}=\int d^{4}z\left[\frac{\delta}{\delta c^{a}}-gf^{abc}\left(\bar{\phi}^{b}\frac{\delta}{\delta\bar{c}^{c}}+A^{b}_{\mu}\frac{\delta}{\delta\psi^{c}_{\mu}}+c^{b}\frac{\delta}{\delta\phi^{c}}-\bar{\eta}^{b}\frac{\delta}{\delta b^{c}}+E^{b}\frac{\delta}{\delta L^{c}}\right)\right]\;.$ (A.10) (vii) Vector supersymmetry: $\mathcal{W}_{\mu}\Sigma=0\;,$ (A.11) where $\displaystyle\mathcal{W}_{\mu}$ $\displaystyle=$ $\displaystyle\int d^{4}z\left[\partial_{\mu}A_{\nu}^{a}\frac{\delta}{\delta\psi_{\nu}^{a}}+\partial_{\mu}c^{a}\frac{\delta}{\delta\phi^{a}}+\partial_{\mu}\bar{\chi}_{\nu\alpha}^{a}\frac{\delta}{\delta B_{\nu\alpha}^{a}}+\partial_{\mu}\bar{\phi}^{a}\left(\frac{\delta}{\delta\bar{\eta}^{a}}+\frac{\delta}{\delta\bar{c}^{a}}\right)+\right.$ (A.12) $\displaystyle+$ $\displaystyle\left.\left(\partial_{\mu}\bar{c}^{a}-\partial_{\mu}\bar{\eta}^{a}\right)\frac{\delta}{\delta b^{a}}+\partial_{\mu}\tau_{\nu}^{a}\frac{\delta}{\delta\Omega_{\nu}^{a}}+\partial_{\mu}E^{a}\frac{\delta}{\delta L^{a}}+\partial_{\mu}\Lambda^{a}_{\nu\alpha}\frac{\delta}{\delta K_{\nu\alpha}^{a}}\right]\;.$ (viii) Bosonic non-linear symmetry: $\mathcal{T}(\Sigma)=0\;,$ (A.13) where $\mathcal{T}(\Sigma)=\int d^{4}z\left[\frac{\delta\Sigma}{\delta\Omega^{a}_{\mu}}\frac{\delta\Sigma}{\delta\psi^{a}_{\mu}}-\frac{\delta\Sigma}{\delta L^{a}}\frac{\delta\Sigma}{\delta\phi^{a}}-\frac{\delta\Sigma}{\delta K^{a}_{\mu\nu}}\frac{\delta\Sigma}{\delta B^{a}_{\mu\nu}}+\left(\bar{c}^{a}-\bar{\eta}^{a}\right)\left(\frac{\delta\Sigma}{\delta\bar{c}^{a}}+\frac{\delta\Sigma}{\delta\bar{\eta}^{a}}\right)\right]\;.\\\ $ (ix) Global ghost supersymmetry: $\mathcal{G}_{3}\Sigma=0\;,$ (A.14) where $\mathcal{G}_{3}=\int 
d^{4}z\left[\bar{\phi}^{a}\left(\frac{\delta}{\delta\bar{\eta}^{a}}+\frac{\delta}{\delta\bar{c}^{a}}\right)-c^{a}\frac{\delta}{\delta\phi^{a}}+\tau^{a}_{\mu}\frac{\delta}{\delta\Omega^{a}_{\mu}}+2E^{a}\frac{\delta}{\delta L^{a}}+\Lambda^{a}_{\mu\nu}\frac{\delta}{\delta K^{a}_{\mu\nu}}\right]\;.$ (A.15) The last two symmetries are the new ones introduced in [24]. The non-linear bosonic symmetry (viii) is precisely the one discussed in Section 3.2, see eq. (3.15), which relates the FP and topological ghosts. We remark that the Faddeev-Popov ghost equations (A.7) and (A.9) can be combined to obtain an exact global supersymmetry, $\Delta\mathcal{G}^{a}\Sigma=0\;,$ (A.16) where $\displaystyle\Delta\mathcal{G}^{a}$ $\displaystyle=$ $\displaystyle\mathcal{G}_{1}^{a}-\mathcal{G}_{2}^{a}\;=\;\int d^{4}z\;f^{abc}\left[\left(\bar{c}^{b}-\bar{\eta}^{b}\right)\frac{\delta}{\delta b^{c}}+\bar{\phi}^{b}\left(\frac{\delta}{\delta\bar{\eta}^{c}}+\frac{\delta}{\delta\bar{c}^{c}}\right)+A_{\mu}^{b}\frac{\delta}{\delta\psi_{\mu}^{c}}+\right.$ (A.17) $\displaystyle+$ $\displaystyle\left.\bar{\chi}^{b}_{\mu\nu}\frac{\delta}{\delta B_{\mu\nu}^{c}}+c^{b}\frac{\delta}{\delta\phi^{c}}+\Lambda^{b}_{\mu\nu}\frac{\delta}{\delta K_{\mu\nu}^{c}}+\tau_{\mu}^{b}\frac{\delta}{\delta\Omega_{\mu}^{c}}+E^{b}\frac{\delta}{\delta L^{c}}\right]\;.$ We observe the similarity of equation (A.16) with the vector supersymmetry (A.11). It is also worth mentioning that, even though the ghost number of the operator (A.17) is $-1$, resembling an anti-BRST symmetry, it is not a genuine anti-BRST symmetry—see for instance [85] for the explicit anti-BRST symmetry in topological gauge theories. ## References * [1] A. A. Belavin, A. M. Polyakov, A. S. Schwartz, and Y. S. Tyupkin, “Pseudoparticle solutions of the Yang-Mills equations”. Physics Letters B 59 no. 1, (1975) 85–87. * [2] S. Donaldson, “Polynomial invariants for smooth four-manifolds”. Topology 29 no. 3, (1990) 257 – 315.
http://www.sciencedirect.com/science/article/pii/004093839090001Z. * [3] S. K. Donaldson, “An application of gauge theory to four-dimensional topology”. Journal of Differential Geometry 18 no. 2, (1983) 279–315. * [4] S. K. Donaldson, “The orientation of yang-mills moduli spaces and 4-manifold topology”. J. Differential Geom. 26 no. 3, (1987) 397–428. https://doi.org/10.4310/jdg/1214441485. * [5] A. Floer, “Morse theory for fixed points of symplectic diffeomorphisms”. Bull. Amer. Math. Soc. (N.S.) 16 no. 2, (04, 1987) 279–281. https://projecteuclid.org:443/euclid.bams/1183553837. * [6] A. Floer, “An instanton-invariant for $3$-manifolds”. Comm. Math. Phys. 118 no. 2, (1988) 215–240. https://projecteuclid.org:443/euclid.cmp/1104161987. * [7] M. Atiyah, “NEW INVARIANTS OF THREE-DIMENSIONAL AND FOUR-DIMENSIONAL MANIFOLDS”. Proc. Symp. Pure Math. 48 (1988) 285–299. * [8] E. Witten, “Topological quantum field theory”. Communications in Mathematical Physics 117 no. 3, (9, 1988) 353–386. * [9] P. West, Introduction to Supersymmetry and Supergravity. World Scientific, 5, 1990. * [10] A. S. Schwarz, “The Partition Function of Degenerate Quadratic Functional and Ray-Singer Invariants”. Lett. Math. Phys. 2 (1978) 247–252. * [11] D. Ray and I. Singer, “Analytic torsion for complex manifolds”. Annals Math. 98 (1973) 154–177. * [12] E. Witten, “Quantum field theory and the Jones polynomial”. Communications in Mathematical Physics 121 no. 3, (9, 1989) 351–399. * [13] L. Baulieu and I. Singer, “Topological Yang-Mills symmetry”. Nuclear Physics B \- Proceedings Supplements 5 no. 2, (12, 1988) 12–19. * [14] C. Becchi, A. Rouet, and R. Stora, “Renormalization of gauge theories”. Annals of Physics 98 no. 2, (6, 1976) 287–321. * [15] I. V. Tyutin, “Gauge Invariance in Field Theory and Statistical Physics in Operator Formalism”. * [16] O. Piguet and S. P. Sorella, Algebraic Renormalization, vol. 28 of Lecture Notes in Physics Monographs. 
Springer Berlin Heidelberg, Berlin, Heidelberg, 1995. * [17] P. Van Baal, “An introduction to Topological Yang-Mills Theory”. Acta Physica Polonica B21 no. 2, (1990) 73. * [18] E. Witten, “AdS / CFT correspondence and topological field theory”. JHEP 12 (1998) 012. * [19] P. Benetti Genolini, P. Richmond, and J. Sparks, “Topological AdS/CFT”. JHEP 12 (2017) 039. * [20] P. Agrawal, S. Gukov, G. Obied, and C. Vafa, “Topological Gravity as the Early Phase of Our Universe”. * [21] M. Weis, “Topological Aspects of Quantum Gravity”. hep-th/9806179. * [22] F. Delduc, N. Maggiore, O. Piguet, and S. Wolf, “Note on constrained cohomology”. Phys. Lett. B 385 (1996) 132–138. * [23] J. L. Boldo, C. P. Constantinidis, F. Gieres, M. Lefrancois, and O. Piguet, “Observables in Topological Yang-Mills Theories”. International Journal of Modern Physics A 19 no. 17n18, (3, 2003) 2971–3004, hep-th/0303053. * [24] O. C. Junqueira, A. D. Pereira, G. Sadovski, R. F. Sobreiro, and A. A. Tomaz, “Topological Yang-Mills theories in self-dual and anti-self-dual Landau gauges revisited”. Physical Review D 96 no. 8, (10, 2017) 085008, 1707.06666. * [25] O. C. Junqueira, A. D. Pereira, G. Sadovski, R. F. Sobreiro, and A. A. Tomaz, “More about the renormalization properties of topological Yang-Mills theories”. Physical Review D 98 no. 10, (11, 2018) 105017, 1807.01517. * [26] O. C. Junqueira, A. D. Pereira, G. Sadovski, R. F. Sobreiro, and A. A. Tomaz, “Absence of radiative corrections in four-dimensional topological Yang-Mills theories”. Physical Review D 98 no. 2, (7, 2018) 21701, 1805.01850. * [27] A. Brandhuber, O. Moritsch, M. de Oliveira, O. Piguet, and M. Schweda, “A renormalized supersymmetry in the topological Yang-Mills field theory”. Nuclear Physics B 431 no. 1-2, (12, 1994) 173–190, hep-th/9407105. * [28] D. Dudal, C. Felix, O. Junqueira, D. Montes, A. Pereira, G. Sadovski, R. Sobreiro, and A.
Tomaz, “Infinitesimal Gribov copies in gauge-fixed topological Yang-Mills theories”. Phys. Lett. B 807 (2020) 135531. * [29] S. Sorella, “Algebraic Characterization of the Topological $\sigma$ Model”. Phys. Lett. B 228 (1989) 69–74. * [30] A. Blasi and R. Collina, “Basic Cohomology of Topological Quantum Field Theories”. Phys. Lett. B 222 (1989) 419–424. * [31] J. M. F. Labastida and C. Lozano, “Lectures in Topological Quantum Field Theory”. hep-th/9709192. * [32] T. Kugo and I. Ojima, “Local Covariant Operator Formalism of Nonabelian Gauge Theories and Quark Confinement Problem”. Prog. Theor. Phys. Suppl. 66 (1979) 1–130. * [33] J. Wess and J. Bagger, Supersymmetry and supergravity. Princeton University Press, Princeton, NJ, USA, 1992. * [34] A. Blasi, V. Lemes, N. Maggiore, S. Sorella, A. Tanzini, O. Ventura, and L. Vilar, “Perturbative beta function of N=2 superYang-Mills theories”. JHEP 05 (2000) 039. * [35] E. Witten, “Supersymmetry and Morse theory”. J. Diff. Geom. 17 no. 4, (1982) 661–692. * [36] N. Maggiore, “Algebraic renormalization of N=2 superYang-Mills theories coupled to matter”. Int. J. Mod. Phys. A 10 (1995) 3781–3802. * [37] M. F. Atiyah and I. M. Singer, “The index of elliptic operators on compact manifolds”. Bull. Am. Math. Soc. 69 (1969) 422–433. * [38] M. F. Atiyah, N. J. Hitchin, and I. M. Singer, “Self-Duality in Four-Dimensional Riemannian Geometry”. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 362 no. 1711, (9, 1978) 425–461. * [39] D. Tong, “TASI lectures on solitons: Instantons, monopoles, vortices and kinks”. in Theoretical Advanced Study Institute in Elementary Particle Physics: Many Dimensions of String Theory. 6, 2005. * [40] G. ’t Hooft, “Computation of the quantum effects due to a four-dimensional pseudoparticle”. Physical Review D 14 no. 12, (12, 1976) 3432–3450. * [41] A. D’Adda and P. Di Vecchia, “Supersymmetry and Instantons”. Phys. Lett. B 73 (1978) 162. * [42] M. Blau and G. 
Thompson, “Do metric independent classical actions lead to topological field theories?”. Physics Letters B 255 no. 4, (2, 1991) 535–542. * [43] M. Abud and G. Fiore, “Batalin-Vilkovisky approach to the metric independence of TQFT”. Physics Letters B 293 no. 1-2, (10, 1992) 89–93. * [44] A. Mardones and J. Zanelli, “Lovelock-Cartan theory of gravity”. Classical and Quantum Gravity 8 no. 8, (8, 1991) 1545–1558. * [45] M. Daniel and C. M. Viallet, “The geometrical setting of gauge theories of the Yang-Mills type”. Reviews of Modern Physics 52 no. 1, (1, 1980) 175–197. * [46] N. Vandersickel, A study of the Gribov-Zwanziger action: from propagators to glueballs. PhD thesis, Ghent University, 4, 2011. 1104.1315. * [47] J. A. Dixon, “COHOMOLOGY AND RENORMALIZATION OF GAUGE THEORIES. 2.”. * [48] S. Ouvry, R. Stora, and P. Van Baal, “On the algebraic characterization of Witten’s topological Yang-Mills theory”. Physics Letters B 220 no. 1-2, (1989) 159–163. * [49] J. H. Horne, “Superspace Versions of Topological Theories”. Nucl. Phys. B 318 (1989) 22–52. * [50] H. Kanno, “Weyl Algebra Structure and Geometrical Meaning of BRST Transformation in Topological Quantum Field Theory”. Z. Phys. C 43 (1989) 477. * [51] M. Werneck de Oliveira, “Algebraic renormalization of the topological Yang-Mills field theory”. Phys. Lett. B 307 (1993) 347–352. * [52] R. Brooks, D. Montano, and J. Sonnenschein, “Gauge fixing and renormalization in topological quantum field theory”. Physics Letters B 214 no. 1, (11, 1988) 91–97. * [53] V. Lemes, N. Maggiore, M. Sarandy, S. Sorella, A. Tanzini, and O. Ventura, “Nonrenormalization theorems for N=2 super-Yang-Mills”. * [54] F. Fucito, A. Tanzini, L. C. Q. Vilar, O. S. Ventura, C. A. G. Sasaki, and S. P. Sorella, “Algebraic Renormalization: perturbative twisted considerations on topological Yang-Mills theory and on N=2 supersymmetric gauge theories”. 
1st School on Field Theory and Gravitation, Vitoria, Brazil, April 15-19, 1997. hep-th/9707209. * [55] D. Birmingham, M. Rakowski, and G. Thompson, “Renormalization of topological field theory”. Nuclear Physics B 329 no. 1, (1, 1990) 83–97. * [56] I. Batalin and G. Vilkovisky, “Quantization of Gauge Theories with Linearly Dependent Generators”. Phys. Rev. D 28 (1983) 2567–2582. [Erratum: Phys.Rev.D 30, 508 (1984)]. * [57] E. Witten, “Some Exact Multi-Instanton Solutions of Classical Yang-Mills Theory”. Phys. Rev. Lett. 38 (1977) 121–124. * [58] R. Jackiw, C. Nohl, and C. Rebbi, “Conformal Properties of Pseudoparticle Configurations”. Phys. Rev. D15 (1977) 1642. * [59] V. Gribov, “Quantization of non-Abelian gauge theories”. Nuclear Physics B 139 no. 1-2, (6, 1978) 1–19. * [60] R. F. Sobreiro and S. P. Sorella, “Introduction to the Gribov Ambiguities In Euclidean Yang-Mills Theories”. hep-th/0504095. * [61] D. Zwanziger, “Action From the Gribov Horizon”. Nucl. Phys. B 321 (1989) 591–604. * [62] I. M. Singer, “Some remarks on the Gribov ambiguity”. Communications in Mathematical Physics 60 no. 1, (2, 1978) 7–12. * [63] L. Faddeev and V. Popov, “Feynman diagrams for the Yang-Mills field”. Physics Letters B 25 no. 1, (7, 1967) 29–30. * [64] C. Itzykson and J. Zuber, Quantum Field Theory. International Series In Pure and Applied Physics. McGraw-Hill, New York, 1980. * [65] R. Sobreiro and S. Sorella, “A Study of the Gribov copies in linear covariant gauges in Euclidean Yang-Mills theories”. JHEP 06 (2005) 054. * [66] M. Capri, A. Pereira, R. Sobreiro, and S. Sorella, “Non-perturbative treatment of the linear covariant gauges by taking into account the Gribov copies”. Eur. Phys. J. C 75 no. 10, (2015) 479. * [67] M. Capri, D. Dudal, D. Fiorentini, M. Guimaraes, I. Justo, A. Pereira, B. Mintz, L. Palhares, R. Sobreiro, and S.
Sorella, “Exact nilpotent nonperturbative BRST symmetry for the Gribov-Zwanziger action in the linear covariant gauge”. Phys. Rev. D 92 no. 4, (2015) 045039. * [68] M. Capri, D. Fiorentini, M. Guimaraes, B. Mintz, L. Palhares, S. Sorella, D. Dudal, I. Justo, A. Pereira, and R. Sobreiro, “More on the nonperturbative Gribov-Zwanziger quantization of linear covariant gauges”. Phys. Rev. D 93 no. 6, (2016) 065019. * [69] G. Dell’Antonio and D. Zwanziger, “Every gauge orbit passes inside the Gribov horizon”. Communications in Mathematical Physics 138 no. 2, (5, 1991) 291–299. * [70] N. Vandersickel, D. Dudal, O. Oliveira, and S. P. Sorella, “From propagators to glueballs in the Gribov-Zwanziger framework”. AIP Conf. Proc. 1343 (2011) 155–157. * [71] A. J. Gomez, M. S. Guimaraes, R. F. Sobreiro, and S. P. Sorella, “Equivalence between Zwanziger’s horizon function and Gribov’s no-pole ghost form factor”. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics 683 no. 2-3, (10, 2009) 217–221, 0910.3596. * [72] M. Capri, D. Dudal, M. Guimaraes, L. Palhares, and S. Sorella, “An all-order proof of the equivalence between Gribov’s no-pole and Zwanziger’s horizon conditions”. Physics Letters B 719 no. 4-5, (2, 2013) 448–453, 1212.2419. * [73] D. Dudal, J. A. Gracey, S. P. Sorella, N. Vandersickel, and H. Verschelde, “A Refinement of the Gribov-Zwanziger approach in the Landau gauge: Infrared propagators in harmony with the lattice results”. Phys. Rev. D 78 (2008) 065047. * [74] A. A. Slavnov, “Physical Unitarity in the {BRST} Approach”. Phys. Lett. B 217 (1989) 91–94. * [75] S. A. Frolov and A. A. Slavnov, “Construction of the Effective Action for General Gauge Theories via Unitarity”. Nucl. Phys. B 347 (1990) 333–346. * [76] D. Dudal, S. P. Sorella, N. Vandersickel, and H. Verschelde, “Gribov no-pole condition, Zwanziger horizon function, Kugo-Ojima confinement criterion, boundary conditions, BRST breaking and all that”. Phys. Rev. D 79 (2009) 121701. 
* [77] L. Baulieu and S. P. Sorella, “Soft breaking of BRST invariance for introducing non-perturbative infrared effects in a local and renormalizable way”. Phys. Lett. B671 no. 4-5, (8, 2008) 481–485, 0808.1356. * [78] S. P. Sorella, “Gribov horizon and BRST symmetry: A Few remarks”. Phys. Rev. D 80 (2009) 025013. * [79] S. P. Sorella, D. Dudal, M. S. Guimaraes, and N. Vandersickel, “Features of the Refined Gribov-Zwanziger theory: Propagators, BRST soft symmetry breaking and glueball masses”. PoS FACESQCD (2010) 022. * [80] M. A. L. Capri, D. Dudal, D. Fiorentini, M. S. Guimaraes, I. F. Justo, A. D. Pereira, B. W. Mintz, L. F. Palhares, R. F. Sobreiro, and S. P. Sorella, “Local and BRST-invariant Yang-Mills theory within the Gribov horizon”. Phys. Rev. D94 no. 2, (2016) 025035. * [81] A. D. Pereira, R. F. Sobreiro, and S. P. Sorella, “Non-perturbative BRST quantization of Euclidean Yang–Mills theories in Curci–Ferrari gauges”. Eur. Phys. J. C 76 no. 10, (2016) 528. * [82] M. A. L. Capri, D. Dudal, M. S. Guimaraes, A. D. Pereira, B. W. Mintz, L. F. Palhares, and S. P. Sorella, “The universal character of Zwanziger’s horizon function in Euclidean Yang–Mills theories”. Phys. Lett. B781 (2018) 48–54. * [83] M. A. L. Capri, M. S. Guimaraes, I. F. Justo, L. F. Palhares, and S. P. Sorella, “On the irrelevance of the Gribov issue in $\mathcal{N}=4$ Super Yang-Mills in the Landau gauge”. Physics Letters B 735 (4, 2014) 277–281, 1404.7163. * [84] J. L. Boldo, C. P. Constantinidis, O. Piguet, F. Gieres, and M. Lefrançois, “Topological Yang-Mills Theories and their Observables: A Superspace Approach”. International Journal of Modern Physics A 18 no. 12, (5, 2003) 2119–2125, hep-th/0303084. * [85] N. R. Braga and C. F. Godinho, “Extended BRST invariance in topological Yang-Mills theory revisited”. Phys. Rev. D 61 (2000) 125019.
# Autonomous Vehicle-to-Grid Design for Provision of Frequency Control Ancillary Service and Distribution Voltage Regulation

(This work was supported in part by Japan Science and Technology Agency, Core Research for Evolutional Science and Technology (JST-CREST) Program #JP-MJCR15K3. Corresponding author: Yoshihiko Susuki<EMAIL_ADDRESS>)

Shota Yumiki, Yoshihiko Susuki, Yuta Oshikubo, Yutaka Ota, Ryo Masegi, Akihiko Kawashima, Atsushi Ishigame, Shinkichi Inagaki, and Tatsuya Suzuki

(During this work, S. Yumiki, Y. Susuki, and A. Ishigame were with the Department of Electrical and Information Systems, Osaka Prefecture University, Japan; Y. Oshikubo and Y. Ota were with the Department of Electrical and Electronics Engineering, Tokyo City University, Japan, with Y. Ota presently at the Mobility Systems Design Joint Research Laboratory, Osaka University, Japan; and R. Masegi, A. Kawashima, S. Inagaki, and T. Suzuki were with the Department of Mechanical Science and Engineering, Nagoya University, Japan, with S. Inagaki presently at the Department of Mechatronics, Nanzan University, Japan.)

###### Abstract

We develop a system-level design for the provision of Ancillary Service (AS) for control of electric power grids by in-vehicle batteries, suitably applied to Electric Vehicles (EVs) operated in a sharing service. An architecture for cooperation between transportation and energy management systems is introduced that enables us to design an autonomous Vehicle-to-Grid (V2G) scheme for the provision of multi-objective AS: primary frequency control in a transmission grid and voltage amplitude regulation in a distribution grid connected to EVs. The design is based on the ordinary differential equation model of distribution voltage, which has been recently introduced as a new physics-based model, and is utilized in this paper for assessing and regulating the impact of spatiotemporal charging/discharging of a large population of EVs on a distribution grid.
Effectiveness of the autonomous V2G design is evaluated with numerical simulations of realistic models for transmission and distribution grids with synthetic operation data on EVs in a sharing service. In addition, we present a hardware-in-the-loop test for evaluating its feasibility in a situation where inevitable latency is involved due to power, control, and communication equipment.

## 1 Introduction

The coordinated use of batteries in electric vehicles (EVs) has attracted much interest for the control of power grids. In a large-scale power transmission grid, a large population of in-vehicle batteries has been investigated for the provision of ancillary service (AS) to the so-called transmission system operator (TSO), coined in [1, 2] as the vehicle-to-grid (V2G). It aims to shift the peak load (called valley filling) and to provide reserves for primary, secondary, and tertiary frequency controls of the transmission grid: see, e.g., [3, 4] for technological aspects and [5] for a recent implementation. The V2G for the reserve of primary frequency control (PFC) is called frequency response in PJM (a regional transmission organization that coordinates the movement of wholesale electricity in all or part of 13 states and the District of Columbia in the United States of America) [6] and frequency-controlled normal operation reserve in the Nordic energy region [7]. It is required to achieve fast responsiveness of several seconds to several minutes [1]. In-vehicle batteries are capable of responding faster than the synchronous generators used in large thermal power plants and are hence suitable for the PFC provision. A large body of literature exists on the provision of PFC reserve by in-vehicle batteries: for papers including experiments, see [8, 9, 10, 11]. On the other hand, in a small-scale distribution grid, the so-called distribution system operator (DSO) [12] has been investigated for procuring AS from EVs [13, 14].
The AS is intended for compensating, with in-vehicle batteries, some of the services provided by the DSO, including congestion prevention and voltage magnitude regulation (see [13] for details). The coordinated use of in-vehicle batteries for the DSO has been reported recently; see the references in the comprehensive review [14]. The situation where a large population of EVs is connected to the distribution grid has motivated a research direction in power grids for the past ten years [15]. One major concern in this situation is how the impact of EVs' charging/discharging on the distribution voltage is assessed and regulated. An EV can move anywhere and thus conduct charging and discharging anywhere (precisely, at any location such as households and charging stations) in the distribution grid. Uncoordinated charging of EVs in the grid can lead to larger voltage variations and lower power quality. As one of the early studies, the authors of [16, 17] studied the impact of EVs' charging on the distribution voltage based on the power-flow equation and proposed an optimization-based scheduling of EVs' charging considering the voltage impact. The authors of [18] presented a review of the status and implementation of V2G technologies on distribution grids up to 2012. The assessment and regulation problems are still actively studied by many groups of researchers; see, e.g., [19, 20] and the references in [14]. As stated in [14], since EVs are mainly connected to the distribution grid, the provision of AS to the TSO may affect the distribution grid, which has been scarcely studied until now. One early study is found in [21], where PFC and voltage regulation in a small power grid are simultaneously achieved with in-vehicle batteries. To the best of our survey, there is no comprehensive research on methodology and tools for managing the charging/discharging of EVs in which they provide the PFC reserve while their physical impact on the distribution voltage is regulated.
Such research is of technological significance since EVs have become increasingly popular as new service providers in recent years [14]. Our research team has developed theory and algorithms for solving this problem by means of EVs in a _sharing service_ [22, 23, 24, 25, 26]. An EV-sharing system has a function of tracking the trajectories of EVs to allocate them upon users' requests [27, 28] and has attracted interest as a last-mile transportation option [29]. It is capable of monitoring and managing the status of in-vehicle batteries, such as state-of-charge and degree-of-health, and hence it can enable their effective use not only for its primary concern of transportation but also for providing AS to existing power grids. The work in [22], which motivates the present paper, shows a possibility of cooperative transportation-energy management, in which the EV-sharing system and the energy management system (EMS) work consistently. A computational method was developed in [23, 24, 25] for synthesizing a spatial pattern of EV operation in terms of charging/discharging modes, which was referred to as the synthesis of a _charging/discharging pattern_. The method builds upon a new physics-based model of the distribution voltage profile, called the nonlinear ordinary differential equation (ODE) model, which was originally derived in [30] and utilized in [31, 26] for the impact assessment of shared EVs in a distribution grid. The method in [23, 24, 25] determines the amount of charging/discharging power of _local_ charging stations, where individual EVs are connected to a distribution grid, so that the _global_ objectives of AS for frequency control and of voltage regulation are achieved. This idea is referred to in [25] as the provision of _multi-objective AS_ by in-vehicle batteries (this implies the achievement of multiple control objectives by provision of a single AS, not the use of any technique of multi-objective optimization in the computational method).
Here, it should be mentioned that, independently of us, the authors of [32, 33] investigated the use of automated, shared EVs for the V2G and stated in [32]: “This allows for a direct connection to the high voltage electricity transmission in designated points without overloading the low-voltage distribution network.” Similar arguments can be found in the recent review [14] and the recent paper [34]. These indicate that cooperative transportation-energy management, possibly combined with emerging automated driving, is worth pursuing. The cooperative management was conceptually reported in [22], but its detailed architecture, including a connection to the multi-objective AS and its control system, was not developed in [23, 24, 25, 26]. We devote the present paper to showing what type of architecture is possible in the cooperation of EV-sharing operator and DSO, and how it enables the provision of multi-objective AS. The cooperation implies the exchange of information between the two operators so that they simultaneously achieve multiple functions in the EV-sharing system and distribution grid: see Section 2. The architecture in this paper aims to provide the multi-objective AS—PFC reserve for a transmission grid and voltage amplitude regulation support for a distribution grid. For this, we propose an autonomous V2G design in the architecture based on the preceding papers [37, 35]. The authors of [37] proposed an autonomous scheme for the provision of PFC reserve by distributed EVs, in which its impact on the distribution voltage is not considered. In [35], a computational technique was developed for determining an upper bound for the synthesis of charging/discharging patterns in terms of the voltage regulation. The technique is based on the nonlinear ODE model in a similar manner as in [23, 24, 25].
In the design proposed here, the charging or discharging power of EVs distributed over multiple stations is regulated with the _local_ measurement of frequency for the provision of PFC reserve, while the maximum amount of charging or discharging power is determined _globally_ for the voltage regulation. Effectiveness of the design is evaluated with numerical simulations of realistic models for transmission and distribution grids with synthetic operation data on EVs in a sharing service. In addition to the effectiveness, we use the hardware-in-the-loop (HIL) testbed partly developed in [25] in order to evaluate the practical feasibility of the design. The main contributions of this paper are three-fold:

* • The architecture for the cooperation between EV-sharing operator and DSO is introduced (see Section 2). This sort of architecture was originally reported in [3, 13], where the cooperation is addressed between the DSO and a PEV (plug-in EV) aggregator, which plays a key role in the so-called direct control [3] that does not actively involve vehicle owners in the control actions imposed on PEVs. Our novelty in this paper is therefore to introduce as a PEV aggregator the EV-sharing manager, which can monitor and manage the status of in-vehicle batteries in a centralized manner. Note that the architecture introduced in this paper is an extended version of [35], in which no AS requirement was introduced.

* • The autonomous V2G design considering the voltage regulation is introduced and proven to be effective with numerical simulations. The mechanism for the PFC provision is based on the so-called droop-type characteristic [21, 38, 37, 39] and is therefore not new. In this paper, by combining the mechanism with the technique in [35], we propose the design for providing the PFC reserve by in-vehicle batteries in a decentralized manner while regulating (precisely, guaranteeing by design) their impact on the distribution voltage. This design is novel to the best of our survey.
This contribution is not limited to EVs and is applicable to general battery devices connected to distribution grids, whose state of the art is presented in the literature, e.g., [36].

* • The practical feasibility of the autonomous V2G design is established with the Power-HIL testbed. The Power-HIL is utilized for testing the coupling of transportation and energy systems, especially the V2G: see, e.g., [8, 9, 10, 40]. The importance of HIL for validating the AS provision of power-electronics-interfaced distributed generation like EV batteries is discussed in [41]. Our Power-HIL testing shows that the dynamics of frequency and voltage under the autonomous V2G design are consistently simulated. The simulation is done in a connection of multiple components occurring in practice, such as hardware including EV batteries, software (a digital simulator), communication lines, and measurement devices. It is then shown that the autonomous V2G design works in a practical situation where inevitable latency due to physical, control, and communication equipment is involved. Establishing the feasibility of the design is novel.

It should be noted that the conference proceeding [35], a preliminary work of this paper, does not contain the autonomous V2G design in Sections 4 to 7, which is the main contribution of the present paper.

The rest of this paper is organized as follows. In Section 2 we propose the architecture for the cooperation between EV-sharing operator and DSO. Section 3.1 introduces the nonlinear ODE model of distribution voltage, and Section 3.2 reviews its use for the computation of the upper bound reported in [35]. In Section 4, we describe the autonomous V2G design for the provision of multi-objective AS that works in the proposed architecture. Effectiveness of the proposed design is evaluated in Section 5. Its feasibility testing is presented in Sections 6 and 7. Section 8 concludes the paper with a brief summary and future directions.
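To make the decentralized mechanism of the second contribution concrete, the droop-type response with a globally determined cap can be sketched in a few lines of Python. This is an illustration only: the function name, the sign convention (discharging positive, charging negative), and all numerical values are hypothetical and not taken from the paper, and the cap here simply saturates the command at the bound received from the DSO.

```python
def droop_command(f_measured, f_nominal, droop_gain, p_baseline, p_max):
    """Return a charging (<0) / discharging (>0) power command in kW.

    The station responds to the locally measured frequency with a
    droop-type characteristic, while the magnitude of the command is
    capped by the upper bound p_max determined globally by the DSO
    for voltage regulation.
    """
    df = f_measured - f_nominal           # local frequency deviation [Hz]
    p = p_baseline - droop_gain * df      # low frequency -> inject more power
    # Saturate at the globally determined bound.
    return max(-p_max, min(p_max, p))

# Under-frequency event: a station that was charging at 10 kW flips to
# discharging, providing PFC reserve within the cap.
print(round(droop_command(49.8, 50.0, droop_gain=100.0,
                          p_baseline=-10.0, p_max=15.0), 3))  # → 10.0
```

For a large deviation the command simply saturates at the cap, so the station's contribution to the distribution voltage stays bounded by design.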
## 2 Overview of Cooperative Transportation-Energy Management

This section introduces an architecture for the cooperation between EV-sharing operator and DSO, as shown in Figure 1, on which the autonomous V2G design works. The basic function of the EV-sharing operator is based on [42]. The operator receives operation data of users (including reservations), vehicles, and charging stations. Based on the data, the EV-sharing operator determines the assignment of vehicles to users and their reallocation among stations in the system. Moreover, it sends to the DSO the prediction data of how the number of EVs changes in time at each charging station. In our study, the data include the number of EVs that can contribute to the PFC reserve in AS. Using the prediction data, the DSO manages a distribution grid spanning a district where the shared vehicles are operated. The basic function of the DSO is discussed in [12, 14]. According to these references, the DSO is responsible for realizing a reliable grid operation, keeping the quality of electricity, minimizing the loss of energy, and responding to normal and emergency situations correctly. In our study, by using the prediction data, the DSO takes on a new function of predicting how the charging and discharging of EVs affect the grid. In addition to this, the DSO utilizes the method described in Section 3.2 and determines an upper bound for the synthesis of charging/discharging patterns in order to regulate the voltage impact. The determined bound is sent from the DSO to the EV-sharing operator as shown in the upper part of Figure 1. With this cooperation, we design the autonomous V2G scheme for the shared vehicles in order to provide the PFC reserve to the upstream TSO. The TSO has the function of keeping the balance of demand and supply of electric power in the grid [6].
The EV-sharing operator sends the upper bound to each station, and then the station manages the charging or discharging of the multiple EVs connected there (see the lower part of Figure 1) such that the total amount of charged or discharged power is within the received upper bound. Thereby, the charging and discharging of power can provide the PFC reserve to the TSO, while their impact on the distribution voltage can be regulated (precisely, they can support the voltage regulation by the DSO). Throughout the paper, we will contend that the exchange of information between EV-sharing operator and DSO in the upper part of Figure 1 is a key enabler for the provision of multi-objective AS: PFC reserve and voltage regulation.

Figure 1: Proposed architecture for cooperation between electric vehicle (EV)-sharing operator and distribution system operator (DSO), on which an autonomous vehicle-to-grid (V2G) is designed for the provision of multi-objective ancillary service (AS).

## 3 ODE-Based Analysis of Distribution Voltage Profile

### 3.1 ODE Model

This section introduces the ODE model of the distribution voltage profile based on [30]. Figure 2 shows a single feeder model, a straight line that starts at a bank (transformer), where we introduce the origin of the displacement (location) $x\in\mathbb{R}$ as $x=0$. The voltage phasor at the location $x$ is represented by $v(x)\exp\\{{\mathrm{i}}\theta(x)\\}$ where ${\mathrm{i}}$ is the imaginary unit, $v(x)$ the _voltage amplitude_ in volt [V], and $\theta(x)$ the _voltage phase_ in radian. Then, the following nonlinear ODE is derived in [30] to determine the functions $v(x)$ and $\theta(x)$: $\left.\begin{aligned} \frac{d^{2}v}{dx^{2}}&=v\left(\frac{d\theta}{dx}\right)^{2}-\frac{Gp(x)+Bq(x)}{v(G^{2}+B^{2})}\\\ -\frac{d}{dx}\left(v^{2}\frac{d\theta}{dx}\right)&=\displaystyle\frac{Bp(x)-Gq(x)}{G^{2}+B^{2}}\end{aligned}\right\\}.$ (1) The constant parameters $G$ and $B$ stand for the per-unit-length conductance and susceptance [S/km].
Also, the function $p(x)$ (or $q(x)$) is the active (or reactive) power flowing into the feeder (note that $p(x)>0$ indicates positive active power flowing into the feeder at $x$). We will call $p(x)$ and $q(x)$ the _power density functions_, whose units are [W/km] and [Var/km], respectively. Also, as an important auxiliary quantity in this paper, the _voltage gradient_ [V/km] is defined as

$w(x):=\frac{d}{dx}v(x).$ (2)

Figure 2: Single-line representation of a balanced three-phase distribution feeder that starts at a bank (transformer), runs through a finite-length line with length $L$, and ends at a non-loading terminal.

The ODE poses a nonlinear boundary-value problem and thus cannot be solved analytically. The authors of [23, 24] propose an approximate solution of the problem. For this approximation, consider again the simple feeder model in Figure 2, where $N$ charging stations and loads are located at $x=\xi_{i}\in(0,L)$ ($i=1,\ldots,N$) satisfying $\xi_{i+1}<\xi_{i}$. It is assumed that the in-vehicle batteries and loads are operated under unity power factor; this assumption can be relaxed as in [24]. Thus, by denoting as $P_{i}$ the active power discharged ($P_{i}>0$) or charged (consumed) ($P_{i}<0$) at $x=\xi_{i}$, the power density functions $p(x)$ and $q(x)$ are represented as

$p(x)=\sum^{N}_{i=1}P_{i}\delta(x-\xi_{i}),\qquad q(x)=0,$ (3)

where $\delta(x-\xi_{i})$ is the Dirac delta function supported at $x=\xi_{i}$. Then, the following approximation of the solution for $w(x)$ is derived in [23, 24]:

$w(x)\sim\sum_{j\in\mathcal{I}_{x}}P_{j}\frac{G}{Y^{2}},$ (4)

where $Y:=\sqrt{G^{2}+B^{2}}$ and $\mathcal{I}_{x}\subseteq\{1,2,\ldots,N\}$ is the set of all indexes $i\in\{1,\ldots,N\}$ of the locations of charging stations and loads satisfying $x<\xi_{i}$, with $\xi_{N+1}:=0$.
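As a numerical sketch, the approximation (4) amounts to summing the contributions of all stations and loads downstream of $x$. The feeder constants and station data below are hypothetical values chosen only for illustration; they are not taken from the paper's simulations.

```python
# Illustrative evaluation of the voltage-gradient approximation (4).
G = 4.0   # per-unit-length conductance [S/km] (assumed value)
B = 2.0   # per-unit-length susceptance [S/km] (assumed value)
Y2 = G**2 + B**2

# Locations xi_i [km] and active powers P_i [W] of stations/loads
# (P_i > 0: discharging, P_i < 0: charging); purely illustrative.
xi = [3.0, 2.0, 1.0]
P = [-1000.0, 500.0, -2000.0]

def voltage_gradient(x):
    """Approximate w(x) via (4): sum P_j * G / Y^2 over indexes with x < xi_j."""
    return sum(Pj * G / Y2 for Pj, xj in zip(P, xi) if x < xj)

print(voltage_gradient(0.0))   # all three stations are downstream of x = 0
print(voltage_gradient(2.5))   # only the station at xi = 3.0 contributes
```

Because the sum only includes stations with $x<\xi_{j}$, the gradient is piecewise constant along the feeder, consistent with the point-source model (3).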
This implies that the voltage gradient $w(x)$ (and hence the voltage amplitude $v(x)$ through (2)) can be controlled by regulating the charging/discharging power $P_{j}$ of EVs. Based on this observation, we will introduce a systematic method of determining the values of charging and discharging power of in-vehicle batteries.

### 3.2 Computation of Upper Bound for Charging/Discharging Power

This section reviews [35] and summarizes the computation of the upper bound for the synthesis of charging/discharging patterns. For this, let us assume that all charging stations consume non-negative power (i.e., are operated in the charging mode), so that $P_{i}:=P_{\mathrm{EVs},i}\leq 0$ for all stations $i\in\{1,\ldots,N_{\mathrm{sta}}\}$, where $N_{\mathrm{sta}}$ is their total number. The location of the $i$-th station is denoted as $\xi_{\mathrm{sta},i}\in\{\xi_{1},\ldots,\xi_{N}\}$. Then, from (4), the deviation of the voltage amplitude at the end of the feeder ($x=L$) due to the charging of EVs is approximately estimated in [35] as

$dv(L)=\sum_{i=1}^{N_{\mathrm{sta}}}\frac{G}{Y^{2}}|P_{\mathrm{EVs},i}|\xi_{\mathrm{sta},i}.$ (5)

See [35] for its derivation. This shows that $dv(L)$ is determined by the amounts of charging power and the locations of the stations. The evaluation formula (5) also applies to the case where all stations are operated in the discharging mode (i.e., $P_{\mathrm{EVs},i}\geq 0$ for all $i$). We now describe the method for computing an upper bound for the synthesis of charging (or discharging) patterns [35]. The method is based on the simple evaluation (5) and is effective for regulating $dv(L)$ at a pre-defined acceptable value, which we denote by $dV_{\mathrm{limit}}$. The method takes effect only if $dv(L)>dV_{\mathrm{limit}}$.
For this, we use a common parameter for all stations, denoted by $\alpha\in[0,1]$, and refine (5) as

$dv(L)_{\alpha}:=\sum_{i=1}^{N_{\mathrm{sta}}}\frac{G}{Y^{2}}|\alpha P_{\mathrm{EVs},i}^{\mathrm{max}}|\xi_{\mathrm{sta},i}=\alpha\times dv(L).$ (6)

This $dv(L)_{\alpha}$ represents a modification of the measure (5) obtained by uniformly regulating (decreasing) the maximum power of every station. Then, by taking $dv(L)_{\alpha}=dV_{\mathrm{limit}}$ as the physical specification of voltage, $\alpha$ is determined as

$\alpha=\frac{dV_{\mathrm{limit}}}{dv(L)}.$ (7)

For computing the upper bound, $\alpha_{\mathrm{cha}}$ (or $\alpha_{\mathrm{discha}}$) is calculated with (7) and the pre-defined value $dV_{\mathrm{cha,limit}}$ in the charging mode (or $dV_{\mathrm{discha,limit}}$ in the discharging mode). Hence, it is possible to compute the upper bound as $\{-{\alpha_{\mathrm{cha}}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$ (or $\{{\alpha_{\mathrm{discha}}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$) so that the deviation of the voltage amplitude at the end point of the feeder due to charging (or discharging) of EVs can be bounded at $dV_{\mathrm{cha,limit}}$ (or $dV_{\mathrm{discha,limit}}$). Here, it should be emphasized that the computation of the upper bound becomes feasible only when the cooperative management between DSO and the EV-sharing operator works, because it requires information on both the distribution grid and the mobility system. As shown in Figure 1, DSO receives from the EV-sharing operator prediction data on the number of EVs at the charging stations in a distribution feeder. By using (5), DSO judges whether the deviation of the voltage amplitude at the end point of the feeder is smaller than $dV_{\mathrm{limit}}$ or not.
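The judgment and the subsequent bound computation, Eqs. (5)–(7), can be sketched as follows. All numerical values (feeder constants, station locations, and maximum powers) are hypothetical illustration values, not those of the simulations later in the paper.

```python
# Sketch of the upper-bound computation in Section 3.2 (Eqs. (5)-(7)).
G, B = 4.0, 2.0                    # [S/km] (assumed values)
Y2 = G**2 + B**2
xi_sta = [1.0, 2.0, 3.0]           # station locations [km] (illustrative)
P_max = [3000.0, 3000.0, 2000.0]   # |maximum charging power| per station [W]
dV_limit = 120.0                   # acceptable end-of-feeder deviation [V]

# Eq. (5): estimated voltage deviation at x = L when every station
# charges (or discharges) at its maximum power.
dv_L = sum(G / Y2 * p * x for p, x in zip(P_max, xi_sta))

# Eq. (7): uniform scaling factor alpha, needed only if the limit is exceeded.
alpha = min(1.0, dV_limit / dv_L)

# Resulting per-station upper bounds for the charging mode: -alpha * P_max.
bounds = [-alpha * p for p in P_max]
print(dv_L, alpha, bounds)
```

With these illustrative numbers, $dv(L)$ exceeds the limit, so every station's maximum power is scaled down by the same factor $\alpha$ before being sent to the EV-sharing operator.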
For this judgment, it is first supposed that all in-vehicle batteries connected to every station charge (or discharge) with the maximum power $-P_{\mathrm{EVs},i}^{\mathrm{max}}\,(<0)$ (or $P_{\mathrm{EVs},i}^{\mathrm{max}}\,(>0)$). If $dv(L)\leq dV_{\mathrm{limit}}$, then the in-vehicle batteries under the sharing service can charge or discharge with their maximum power at every station. Otherwise, i.e., if $dv(L)>dV_{\mathrm{limit}}$, DSO computes the upper bound as shown above and sends it to the EV-sharing operator in order to keep the voltage level.

## 4 Proposed Autonomous Vehicle-to-Grid Design

This section describes the main idea of this paper: the novel control system referred to as the autonomous V2G design for providing the multi-objective AS, namely PFC reserve and voltage regulation. In this design, the charging or discharging power of EVs distributed over multiple stations is regulated with the _local_ measurement of frequency, while the maximum amount of charging or discharging power is determined _globally_ using the method in Section 3.2. In this sense, the distributed EVs are cooperative with the voltage regulation of a distribution grid. For the multi-objective AS, in the rest of this paper we consider the conventional hierarchical structure of transmission and distribution grids: in the high-level transmission grid, or the associated TSO, the frequency deviation is determined in a dynamic manner; and in a low-level distribution feeder, or DSO, the voltage profile is determined in a static manner. For the distribution feeder, we utilize the ODE model in order to assess and regulate the voltage profile. For the transmission grid, we introduce the block diagram for frequency dynamics in Figure 3, which is based on [43, 25].
The deviation $\Delta\omega$ of the grid's angular frequency from the nominal value, $2\pi\times 50\,\mathrm{Hz}$ in this paper, is determined by the net imbalance $\Delta P$ of supply and demand in the entire grid, the inertia constant $M$ of the grid, and its damping coefficient $D$. The net imbalance $\Delta P$ is calculated from the output of the thermal plant determined by economic dispatch control (EDC) [44] and load frequency control (LFC) [44], the load in the hierarchical grids, the power generation by photovoltaic (PV) generation units, and the charging/discharging power of EVs. The model of the thermal power plant with turbine and governor is also based on [43, 25].

Figure 3: Block diagram for frequency dynamics of a transmission grid.

Now, we are in a position to describe the autonomous V2G design for determining the active power output of EVs. Specifically, we adopt from [37] the autonomous scheme based on distributed EVs for realizing fast responsiveness to TSO. Here, it is supposed that the target distribution feeder has $N_{\mathrm{sta}}$ charging stations labeled by the integer $i$ as introduced in Section 3.2. Then, DSO computes the upper bound of charging (or discharging) power as $-\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}$ (or $\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}$) at the $i$-th station connected to the feeder at $x=\xi_{\mathrm{sta},i}$. In this paper, we propose that the charging/discharging power $P_{\mathrm{EVs},i}$ at the $i$-th station is regulated with droop-type characteristics as in [37].
To do this, the frequency deviation $\Delta f:=\Delta\omega/(2\pi)$ is locally measured at the connection terminal, and $P_{\mathrm{EVs},i}$ is then determined as shown in Figure 4:

$P_{\mathrm{EVs},i}=\left\{\begin{array}{rll}-\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}},&\textrm{if}&\Delta f_{1}\leq\Delta f\\ K_{\mathrm{cha}}\Delta f,&\textrm{else if}&0\leq\Delta f<\Delta f_{1}\\ K_{\mathrm{discha}}\Delta f,&\textrm{else if}&-\Delta f_{1}\leq\Delta f<0\\ \alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}},&\textrm{else},\end{array}\right.$ (8)

where $K_{\mathrm{cha}}$ (or $K_{\mathrm{discha}}$) is calculated from ${\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}}$ (or ${\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}}$) and the new parameter $\Delta f_{1}$. The regulation scheme (8) indicates that EVs connected to the $i$-th station charge and discharge within the pre-defined range of frequency, $[50\,\mathrm{Hz}-\Delta f_{1},\,50\,\mathrm{Hz}+\Delta f_{1}]$, and thus behave in a similar manner to a thermal power plant with PFC [44]. The scheme is based on the local measurement of the grid's frequency, while the maximum amount of the active power output is managed globally using the upper bound in Section 3.2. Thus, the EVs distributed over multiple stations are capable of providing the PFC reserve while being cooperative with the voltage regulation.

Figure 4: Charging/discharging regulation with droop-type characteristics against the frequency deviation for the autonomous vehicle-to-grid design.

Here, we discuss the capability of providing the PFC reserve to TSO.
The capability measure in the distribution feeder in the charging (or discharging) mode, denoted as $\Delta P_{\mathrm{cha/Hz}}$ (or $\Delta P_{\mathrm{discha/Hz}}$), is defined in this paper as follows:

$\left.\begin{aligned} \Delta P_{\mathrm{cha/Hz}}&:=-\frac{\alpha_{\mathrm{cha}}}{\Delta f_{1}}\sum_{i=1}^{N_{\mathrm{sta}}}P_{\mathrm{EVs},i}^{\mathrm{max}}\\ \Delta P_{\mathrm{discha/Hz}}&:=\frac{\alpha_{\mathrm{discha}}}{\Delta f_{1}}\sum_{i=1}^{N_{\mathrm{sta}}}P_{\mathrm{EVs},i}^{\mathrm{max}}\end{aligned}\right\}.$ (9)

Under a fixed value of $\alpha_{\mathrm{cha}}$ (or $\alpha_{\mathrm{discha}}$), the capability measure $\Delta P_{\mathrm{cha/Hz}}$ (or $\Delta P_{\mathrm{discha/Hz}}$) is inversely proportional to the parameter $\Delta f_{1}$. The measures (9) indicate the regulation reserve of active power by EVs in the distribution feeder per unit frequency deviation and are closely related to the frequency characteristic requirement in [6] as a parameter of PFC, which is referred to as the frequency characteristic of a control area. It is valuable to compare the scheme proposed in [37] with that in (8). In [37], distributed EVs at the $i$-th station can charge (or discharge) up to the maximum determined by the EV-sharing operator, i.e., $-P_{\mathrm{EVs},i}^{\mathrm{max}}$ (or $P_{\mathrm{EVs},i}^{\mathrm{max}}$). In the proposed method, distributed EVs at the $i$-th station can charge (or discharge) up to the upper bound determined cooperatively by the EV-sharing operator and DSO, i.e., $-{\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}}$ (or ${\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}}$). Because $\alpha_{\mathrm{cha}}$ and $\alpha_{\mathrm{discha}}$ are less than or equal to unity, the capability measure of providing the PFC reserve in [37] is larger than that in (8). However, as described in Section 3.2, the proposed scheme provides a cooperative operation of distributed EVs for the voltage regulation.
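The droop law (8) and the capability measures (9) can be sketched for a single station as follows. The numerical values are illustrative assumptions, and the slopes $K$ are set, as one consistent choice, so that the droop is continuous at $|\Delta f|=\Delta f_{1}$.

```python
# Sketch of the droop law (8) for one station and the capability measures (9).
P_max = 4000.0                 # station's maximum power [W] (assumed value)
alpha_cha, alpha_discha = 0.7, 0.4   # illustrative upper-bound factors
df1 = 0.2                      # droop saturation frequency [Hz]

# Slopes chosen so the droop curve is continuous at |df| = df1.
K_cha = -alpha_cha * P_max / df1
K_discha = -alpha_discha * P_max / df1

def p_ev(df):
    """Power command per Eq. (8): charging < 0 (over-frequency), discharging > 0."""
    if df >= df1:
        return -alpha_cha * P_max
    elif 0.0 <= df < df1:
        return K_cha * df
    elif -df1 <= df < 0.0:
        return K_discha * df
    else:
        return alpha_discha * P_max

# Capability measures (9), here for a single station (N_sta = 1).
dP_cha_per_Hz = -alpha_cha * P_max / df1
dP_discha_per_Hz = alpha_discha * P_max / df1
print(p_ev(0.1), p_ev(-0.3), dP_discha_per_Hz)
```

Halving $\Delta f_{1}$ doubles both measures in (9), matching the inverse proportionality noted in the text.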
To the best of our survey, no measure for the multi-objective AS has been reported, and its unified definition remains an open problem for our future research. This is related to the current interest in the cooperation of DSO and TSO: see, e.g., [45]. To conclude this section, we summarize the autonomous V2G design for providing the multi-objective AS, consisting of the following three steps T1 to T3:

* T1 DSO updates the prediction data on the number of EVs at charging stations from the EV-sharing operator every pre-defined period. One example of the period is 15 minutes, as in [42], where the EV-sharing operator receives the information on reservations by users every 15 minutes.

* T2 By using the operational data on EVs and the method described in Section 3.2, DSO computes the upper bound on charging/discharging of EVs at each station in terms of the voltage management. In this paper, we assume that DSO determines the value of $dV_{\mathrm{cha,limit}}$ (or $dV_{\mathrm{discha,limit}}$) and thereby the upper bound for the synthesis of charging (or discharging) patterns as $-{\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}}\,(<0)$ (or ${\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}}\,(>0)$).

* T3 EVs at each station charge and discharge within the upper bound set by DSO. This operation is conducted only with the local measurement of the grid's frequency; no exchange of information between charging stations is required.

## 5 Effectiveness Evaluation

The aim of this section is to evaluate the effectiveness of the proposed design using fully digital simulations of both the frequency dynamics and the distribution voltage profile.

### 5.1 Simulation Setting

First, we describe the simulation setting of the power grids and mobility system.
The setting of the transmission grid in Figure 3 is based on [25], where we assume that the transmission grid spans a geographical region with approximately 9 million customers and that the grid's capacity is about $8.3\,\mathrm{GVA}$. It is also assumed that PV generation is installed as mega solar plants amounting to 20% of the grid's capacity. Here, for simplicity of the analysis, the smoothing effect of the introduction of PV is not considered. The PV generation as well as the load consumption changes in an aperiodic manner as time goes on. These time-varying inputs cause the frequency dynamics of the grid, and no event or test case is considered in the following simulations. The nominal frequency of the grid is $50\,\mathrm{Hz}$, the inertia constant and damping coefficient are $9\,\mathrm{s}$ and $2\,\mathrm{s}$, and the other parameters of the plant model are the same as in [25]. As the distribution grid, the model of multiple feeders shown in Figure 5 is used as in [24]; it is based on a practical distribution grid of a residential area in western Japan. We assume that the secondary voltage at the bank is regulated at $6.6\,\mathrm{kV}$. The rated capacity of the bank is set at $12\,\mathrm{MVA}$, and the feeder's resistance (or reactance) at $0.227\,\mathrm{\Omega/km}$ (or $0.401\,\mathrm{\Omega/km}$). See [24] for details. It is supposed that 600 identical feeders of the distribution grid are connected to the transmission grid. For the mobility system, we use synthetic operation data on EVs in a sharing service for the numerical evaluation: see [35] for details. The numbers of EVs at the 8 stations, which do not change during the 15 minutes from 12 o'clock, are shown in Figure 6. These constant numbers are used in the numerical simulations over 60 minutes under time-varying PV generation and load consumption.
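For reference, the setting stated above can be collected in one place, e.g. as a parameter dictionary (the values are those given in the text; the key names are our own):

```python
# Simulation setting of Section 5.1, gathered for reference.
SETTING = {
    "transmission": {
        "nominal_frequency_Hz": 50.0,
        "capacity_GVA": 8.3,
        "pv_penetration": 0.20,        # mega solar, 20% of grid capacity
        "inertia_constant_s": 9.0,
        "damping_coefficient_s": 2.0,
    },
    "distribution": {
        "bank_secondary_voltage_kV": 6.6,
        "bank_rated_capacity_MVA": 12.0,
        "resistance_ohm_per_km": 0.227,
        "reactance_ohm_per_km": 0.401,
        "num_identical_feeders": 600,  # feeders connected to the transmission grid
        "num_stations": 8,
    },
}
print(SETTING["transmission"]["capacity_GVA"])
```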
Figure 5: Model of multiple feeders based on a practical distribution grid of a residential area in Japan. The 8 charging stations are installed and denoted by the circled numbers: see [24] for details.

Figure 6: Number of electric vehicles (EVs) at the 8 charging stations in the distribution feeder of Figure 5.

(a) $(dV_{\mathrm{cha,limit}},dV_{\mathrm{discha,limit}})=(80\,\mathrm{V},\,50\,\mathrm{V})$ (b) $(dV_{\mathrm{cha,limit}},dV_{\mathrm{discha,limit}})=(80\,\mathrm{V},\,30\,\mathrm{V})$

Figure 7: Numerical evaluation of the autonomous vehicle-to-grid design for multi-objective ancillary service–I: (top) synthesis results of upper bounds for charging/discharging patterns of in-vehicle batteries; (middle) time responses of total output power from electric vehicles (EVs); (bottom) time responses of the grid's frequency with/without EVs.

(a) $(dV_{\mathrm{cha,limit}},dV_{\mathrm{discha,limit}})=(80\,\mathrm{V},\,50\,\mathrm{V})$ (b) $(dV_{\mathrm{cha,limit}},dV_{\mathrm{discha,limit}})=(80\,\mathrm{V},\,30\,\mathrm{V})$

Figure 8: Numerical evaluation of the autonomous V2G design for multi-objective ancillary service–II: (top) time series of distribution voltage at $x=L$ and (bottom) spatial profile of distribution voltage at time $2\,\mathrm{s}$ for (a) and at $40\,\mathrm{s}$ for (b).

(a) Whole (b) Zoom-up of $[27\,\mathrm{s},29\,\mathrm{s}]$ in (a)

Figure 9: Parameter ($\Delta f_{1}$) dependence of total output power from EVs and associated frequency responses under $(dV_{\mathrm{cha,limit}},dV_{\mathrm{discha,limit}})=(80\,\mathrm{V},\,50\,\mathrm{V})$.

For the evaluation, direct numerical simulations of the nonlinear ODE (1) are conducted.
In the same manner as in [35], to evaluate $dv(L)$ for the model of multiple feeders in Figure 5, we focus on the horizontal straight-line feeder with length $L=4.63\,\mathrm{km}$ and measure the voltage amplitude at its end point, the rightmost part of the model. The estimation is done with $N_{\mathrm{sta}}=8$. In this simulation, we set $dV_{\mathrm{limit}}$ at two conditions: $(dV_{\mathrm{cha,limit}},dV_{\mathrm{discha,limit}})=(80\,\mathrm{V},\,50\,\mathrm{V})$ and $(80\,\mathrm{V},\,30\,\mathrm{V})$. For the frequency control, we set $\Delta f_{1}$ at $0.2\,\mathrm{Hz}$ at first and will change its value for the performance evaluation. It is also assumed that there are no charging/discharging restrictions due to SoC (state of charge); this assumption is reasonable for the time scale considered in the numerical simulations. For comparison, we consider two cases for the regulation of active power for the PFC reserve. One case is based on the proposed method; namely, distributed EVs at each station can charge (or discharge) up to the upper bound $\{-\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$ (or $\{\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$). The other case is based on the method of [37]; in this case, distributed EVs at each station can charge (or discharge) up to their maximum power, which we will refer to as _single-objective AS_.

### 5.2 Simulation Results

Figure 7 shows the numerical evaluation of the proposed design under $\Delta f_{1}=0.2\,\mathrm{Hz}$. The top two figures in Figure 7 show the upper bounds (_blue_ lines) at each station. The group of EVs at each station charges and discharges within the range (_blue_ line) so that the deviation of the voltage amplitude at the end point of the feeder stays within the acceptable upper limit.
In figures (a) and (b), we set $dV_{\mathrm{limit}}$ at the two conditions $(dV_{\mathrm{cha,limit}},dV_{\mathrm{discha,limit}})=(80\,\mathrm{V},\,50\,\mathrm{V})$ and $(80\,\mathrm{V},\,30\,\mathrm{V})$, respectively. In figure (a), we use the computed values $(\alpha_{\mathrm{cha}},\alpha_{\mathrm{discha}})=(0.7335,0.4585)$; in figure (b), we use $(\alpha_{\mathrm{cha}},\alpha_{\mathrm{discha}})=(0.7335,0.2751)$. The middle two figures show the results on the total active power output by EVs. The _red, solid_ line shows the active power output for single-objective AS, while the _blue, dashed_ line shows the active power output for multi-objective AS. The bottom two figures show the time responses of the frequency of the transmission grid. The responses are originally caused by the time-varying PV generation and load consumption. (The deviation is relatively large by comparison with standard bounds of the grid's frequency, $[50\,\mathrm{Hz}-0.2\,\mathrm{Hz},\,50\,\mathrm{Hz}+0.2\,\mathrm{Hz}]$ for eastern Japan. This is mainly because, as stated in Section 5.1, the introduction rate of PV is large in the current setting and its smoothing effect is not considered.) The _black, solid_ line shows the time response of frequency without EVs, while the _red, solid_ (or _blue, dashed_) line shows the time response for single-objective (or multi-objective) AS. By comparing the frequency responses with/without EVs, we see that the frequency approaches the nominal value, $50\,\mathrm{Hz}$, by the charging/discharging of EVs. Here, we compare the frequency responses with EVs for single- and multi-objective AS. The _red, solid_ line (single-objective AS) is closer to the nominal value than the _blue, dashed_ line (multi-objective AS).
This is because the charging/discharging of EVs over $[0\,\mathrm{s},7\,\mathrm{s}]$ and $[28\,\mathrm{s},36\,\mathrm{s}]$ in the middle of Figure 7(a) (or $[39\,\mathrm{s},53\,\mathrm{s}]$ in the middle of Figure 7(b)) is bounded by the proposed design. This implies that the total amount of power supplied by EVs in multi-objective AS is smaller than that in single-objective AS, and that the single-objective AS shows better performance for the frequency control, as expected. The associated data on distribution voltage for the numerical evaluation are shown in Figure 8. The voltage amplitude was computed with the nonlinear ODE (1). The top two figures show the time series of distribution voltage at $x=L$ in Figure 5. The two _black, dashed_ lines show the upper/lower limits of voltage determined by $dV_{\mathrm{cha,limit}}$ or $dV_{\mathrm{discha,limit}}$. The _red, solid_ (or _blue, dashed_) line shows the time series of distribution voltage for single-objective (or multi-objective) AS. Figure (a) (or (b)) indicates that the voltage deviation at the end point of the feeder is mitigated by considering the upper bound with $\alpha_{\mathrm{cha}}$ (or $\alpha_{\mathrm{discha}}$). The bottom two figures show the spatial profiles of distribution voltage along the horizontal straight-line feeder in Figure 5. Figure (a) shows the voltage profile at time $2\,\mathrm{s}$, and figure (b) shows the voltage profile at $40\,\mathrm{s}$. In figure (a), the _black, solid_ line shows the voltage for the loads only, and the _red, solid_ (or _blue, dashed_) line shows that for both the loads and the EVs in the charging mode with the unregulated $-P_{\mathrm{EVs},i}^{\mathrm{max}}$ (or regulated $-\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}$).
In figure (b), the _black, solid_ line shows the voltage drop due to the loads, and the _red, solid_ (or _blue, dashed_) line shows that for both the loads and the EVs in the discharging mode with the unregulated $P_{\mathrm{EVs},i}^{\mathrm{max}}$ (or regulated $\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}$). From figures (a) and (b), we see that the distribution voltage profile is regulated to be within the determined range by the proposed design. Here, the deviations estimated with the nonlinear ODE (1) were $83.5\,\mathrm{V}$ in figure (a) and $29.4\,\mathrm{V}$ in figure (b), which were slightly different from the fixed $dV_{\mathrm{cha,limit}}$ and $dV_{\mathrm{discha,limit}}$. This is mainly because the proposed design is based on the approximate solution of the nonlinear ODE (1), which will be discussed in the next subsection.

### 5.3 Discussion

In the above subsection we numerically demonstrated the performance of the proposed autonomous V2G design. The capability for providing the PFC reserve to TSO is evaluated with the measures in (9). As mentioned above, the capability depends on the choice of the parameter $\Delta f_{1}$. Thus, the parameter dependence of the total output power from EVs and the associated frequency responses are shown in Figure 9. The value of $\Delta f_{1}$ varies from $0.2\,\mathrm{Hz}$ to $0.8\,\mathrm{Hz}$. This figure shows that the PFC reserve is consistently provided for the different values of the parameter $\Delta f_{1}$. In Figure 9(b), we see that the frequency approaches the nominal value more rapidly as the value of $\Delta f_{1}$ becomes smaller. This is clearly characterized by the capability measure $\Delta P_{\mathrm{discha/Hz}}$ in (9), which is inversely proportional to $\Delta f_{1}$. Hence, the frequency characteristic requirement for the PFC reserve (if posed by TSO) can be fulfilled with a sufficiently small choice of $\Delta f_{1}$.
Figure 10: Numerical results on distribution voltage profiles for 10% and 40% of the loading in Figure 5.

Figure 11: Numerical results on the control error for the direct numerical simulations in Figure 10.

At the end of the last subsection, we mentioned a small error in the voltage regulation by the proposed design. The control is parameterized with the parameter $dV_{\mathrm{limit}}$, which is assigned by DSO. Here, we discuss how the control error depends on the choice of $dV_{\mathrm{limit}}$ and the loading condition of the distribution feeder. The total amount of the loads is set at two conditions: 10% and 40% of the bank's rated capacity in Figure 5. The associated direct numerical results on distribution voltage profiles are presented in Figure 10, where the _red, solid_ (or _blue, dashed_) line shows the voltage for 10% loading (or 40% loading). Note that the voltage drop stops in the part of the feeder beyond 2 km because the loading center is located in the feeder before 2 km. Here we define the control error as the absolute value of the difference between the deviation obtained by direct simulation at the end of the feeder and $dV_{\mathrm{cha,limit}}$. The $dV_{\mathrm{cha,limit}}$-dependence of the control error is shown in Figure 11. In Figure 11, for each of the loading conditions, the control error increases with $dV_{\mathrm{cha,limit}}$. This basically comes from the approximation contained in the ODE-based modeling: the approximate solution of the nonlinear ODE (1) is derived under the condition that all the voltage amplitudes $v$ on the right-hand side of (1) are close to unity. Here, we can see in Figure 11 that the control error behaves in a linear manner with respect to $dV_{\mathrm{cha,limit}}$ and the loading condition. This is helpful because the control error is then predictable from the loading condition and can hence be compensated in the application.
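One hypothetical way to exploit the observed linearity for compensation is sketched below: if, for a given loading condition, the error is well fitted by $\mathrm{error}\approx a\cdot dV_{\mathrm{limit}}+b$, the commanded limit can be shrunk so that limit plus predicted error hits the desired target. The fit coefficients here are invented for illustration and are not from the paper.

```python
# Hypothetical linear compensation of the control error (illustrative only).
a, b = 0.05, 0.5   # assumed linear-fit coefficients: error ~ a*dV_limit + b [V]

def compensated_limit(dV_target):
    """Choose dV_cmd so that dV_cmd + predicted error equals the target."""
    # Solve dV_cmd + (a*dV_cmd + b) = dV_target for dV_cmd.
    return (dV_target - b) / (1.0 + a)

dV_cmd = compensated_limit(80.0)
print(dV_cmd, dV_cmd + a * dV_cmd + b)   # the second value recovers the target
```

This presumes the error adds to the commanded limit (as in the charging-mode case above, where 83.5 V was observed against an 80 V limit); the sign of the correction would flip where the simulation undershoots the limit.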
## 6 Power-HIL (Hardware-In-the-Loop) Testbed

This section briefly reviews the development of the Power-HIL testbed for experimental validation of the autonomous V2G design for the multi-objective AS. The testbed is built as a laboratory facility comprising a real-time digital simulator and physical devices including an EV battery.

### 6.1 Overview

Figure 12: Overview of the Power-HIL (Hardware-In-the-Loop) testbed built in this paper. Digital and analog components in this testbed are connected via power and communication lines.

First, we describe the Power-HIL testing for this study. Figure 12 shows the overview of the Power-HIL testbed for validating the multi-objective AS. The testbed includes a real-time digital simulator and the physical devices of PCS, controller, power amplifier, and EV battery. In the real-time digital simulator, the frequency dynamics of a transmission grid and the voltage dynamics of a distribution grid are simulated with mathematical models, denoted by “Power System Model” in Figure 12. That is, the frequency deviation $\Delta f$ and the distribution voltage at the stations are calculated with the models introduced in Section 6.2 and sent to “Charging Station Model” in Figure 12. The block “Charging Station Model” computes the AS signals for the stations according to the control logic (8) in Section 4, which provides the regulation law of the active power output for the provision of the multi-objective AS. In addition to the digital simulator, the AS signal is sent from “Charging Station Model” through a communication line to “Controller” in Figure 12.
The block “Controller” is used for commanding the AS signal (including the information on $\Delta f$) to “PCS,” namely “EV Battery.” The Power-HIL testbed is therefore closed in a loop by sending the signal to “PCS,” measuring the actual value of charging/discharging power, and receiving the measured signal at “Power System Model.” In addition, the frequency deviation $\Delta f$ and the distribution voltage in “Power System Model” are sent to “Power Amplifier,” which simulates in the physical domain the temporal change of distribution voltage at the one station where the real “EV Battery” is connected.

### 6.2 Power System Model

Figure 13: Change of the block diagram in Figure 3 for Power-HIL testing. It contains multiple distribution grids connected to electric vehicles that provide the multi-objective ancillary service.

Figure 14: Model of one distribution feeder used for experimental validation. The first charging station “Station 1” is simulated with the Power Conditioning System (PCS), and the other stations with the real-time digital simulator.

In this section, we describe the mathematical models of the transmission and distribution grids denoted by “Power System Model” in Figure 12. The setting of the models and parameters basically follows [25]. Figure 3 is used for the model of the transmission grid, where the block “EV” is replaced with the block model of multiple distribution grids as shown in Figure 13. It is here assumed that 400 identical feeders of the distribution grid are connected to the transmission grid. Figure 14 shows the model of one distribution feeder, which is a single, straight-line feeder with length $4.5\,\mathrm{km}$. The setting is used in [25]. The feeder has five loads and four charging stations located at a common interval of $0.5\,\mathrm{km}$.
All electrical components in the distribution grid, including the voltage sources in the distribution substation, distribution lines, pole transformers, customer loads, and charging stations, are modeled with SimPowerSystems in MATLAB/Simulink. The customer loads and the charging stations are modeled as constant power sources. Electrical transients of the inductors of lines and transformers are also considered. This is why the dynamical characteristics of the distribution grid can be incorporated, unlike the nonlinear ODE model in Section 5, which represents a steady profile of distribution voltage. As the contribution from transportation, we use synthetic operation data on the number of EVs at each charging station shown in Figure 15, where each EV has a rated power output of $4\,\mathrm{kW}$. This is because the maximum active power output of the real EV battery we use in the Power-HIL simulation is $6\,\mathrm{kW}$, as described in the next section. Figure 15 is based on the practical data of the EV-sharing demonstration project used in Section 5. The loads in the distribution feeder are represented with the constant power model: $350\,\mathrm{kW}$ for Load1 and Load3; $300\,\mathrm{kW}$ for Load2, Load4, and Load5. In Figure 14, “Station 1” is built as the PCS, and the other stations are built on the digital simulator.

Figure 15: Temporal change of the number of electric vehicles (EVs) at the 4 charging stations in the distribution feeder of Figure 14.

Figure 16: Dynamic model of the input/output response for power conversion. The input is the ancillary-service (AS) signal $P_{\mathrm{EV}}^{\mathrm{ref}}$, and the output is the active-power output $P_{\mathrm{EV}}$ of the PCS. The model is applied in Figure 14 to the three stations built in the digital simulator.

Figure 17: Configuration of the Power-HIL (Hardware-In-the-Loop) testbed.
It contains the real-time digital simulator by OPAL-RT Technologies (Model #OP5600), the power amplifier by AMETEK (Model #MX-15-1pi), the controller (dSPACE, MicroAutoBox II), the Power Conditioning System (PCS) (Mitsubishi Electric Corporation, Model #EVP-SS60B3-M7-R), and the EV battery (Nissan Motor Co., Ltd., Nissan Leaf). The left picture shows the experimental setup of Power-HIL, and the right picture shows the EV battery. The EV battery in the right picture is connected to the PCS in the left picture, as shown by the red line between the two pictures. Finally, we consider latency in the real system, namely, time-delays due to the local measurement and communication in “Controller” of Figure 12 and a time-lag inherent in the physical PCS. These effects were not considered in the preceding paper [25]. A time-delay can occur when “Controller” in Figure 12 receives the AS signal from “Power System Model” and commands it to “PCS” and “EV Battery.” In addition to the measurement, because the multi-objective AS is intended for a practical large-scale grid, another time-lag due to long-distance communication is inevitable and should be taken into account for the feasibility testing. Furthermore, the dynamics of the grid’s frequency develop on a time scale of several seconds, and the PFC reserve by EVs is thus intended to work on the same time scale. This time scale is also relevant to the dynamic characteristics of the PCS, specifically the time-lag from input to output in the PCS. The digital simulator without consideration of time-delay and time-lag is described in [46]. To incorporate these effects in the digital simulator, we use the simple-to-implement model in Figure 16 for the input/output response of power conversion. The input is the AS signal $P_{\mathrm{EV}}^{\mathrm{ref}}$, and the output is the active-power output $P_{\mathrm{EV}}$ of the PCS. The model is applied in Figure 14 to the three stations built in the digital simulator.
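Our reading of the input/output model of Figure 16 — a pure time-delay followed by a first-order lag — can be simulated with a few lines. This is a sketch, not the testbed implementation: the delay-plus-lag structure, the discretization, and the step input are our own illustrative assumptions; the parameter values are those used in this study.

```python
# Minimal sketch (assumed structure, not the authors' code): AS signal delayed
# by T1, then passed through a first-order lag with time constant T2.
DT = 1e-4          # simulation time step: 100 us, as in the testbed
T1 = 0.30          # time-delay [s]
T2 = 0.43          # time-lag [s]

def respond(p_ref, t1=T1, t2=T2, dt=DT):
    """Return the PCS output power for a sampled AS-signal sequence p_ref."""
    delay_steps = int(round(t1 / dt))
    out, p = [], 0.0
    for k in range(len(p_ref)):
        u = p_ref[k - delay_steps] if k >= delay_steps else 0.0
        p += dt / t2 * (u - p)   # forward-Euler step of the first-order lag
        out.append(p)
    return out

# Step input of 4 kW over 3 s: the output stays at zero during the delay,
# then rises toward 4 kW with time constant T2.
n_steps = int(3.0 / DT)
y = respond([4.0e3] * n_steps)
```

With this shape, the output lags the input AS signal by roughly $T_1$ and then approaches it exponentially, which is the qualitative behavior compared against the real PCS in Figure 19.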
In this study, we set the time-delay parameter $T_{1}$ to $0.30\,\mathrm{s}$ and the time-lag parameter $T_{2}$ to $0.43\,\mathrm{s}$. These values are based on the measurement of the physical devices, which will be discussed through comparison in Figure 19. It should be remarked that the simple-to-implement model is essential for the real-time platform: conventional, full models of power-electronics-interfaced equipment, which are multi-scale dynamical systems with many nonlinearities, are impractical for real-time execution. It is noted here that no voltage regulation device is considered in the following analysis because the effectiveness of the proposed autonomous V2G design is evaluated in a relatively simple setting. Related to this, the time-varying PV generation is considered in the model of the transmission grid, not the distribution one. ### 6.3 Experimental Setup Figure 17 shows two pictures of the Power-HIL testbed. The testbed contains the real-time digital simulator manufactured by OPAL-RT Technologies (Model #OP5600), located in the lower part of the left picture in Figure 17. The time step for the digital simulator is $100\,\mathrm{\mu s}$ for calculation of the frequency deviation, the distribution voltage, and the charging/discharging power of the charging stations. This time step is sufficient for tractable simulation of the distribution grid, which exhibits the fastest transient phenomena in this testing. The frequency deviation $\Delta f$ is calculated every time step, namely every $100\,\mathrm{\mu s}$. Figure 17 also shows the configuration of Power-HIL. The power amplifier is manufactured by AMETEK (Model #MX-15-1pi; rated AC output power is $15\,\mathrm{kVA}$), is located in the left part of the left picture in Figure 17, and is used for the physical simulation of the frequency dynamics and the distribution voltage.
The power amplifier outputs the distribution voltage to the PCS, which is manufactured by Mitsubishi Electric Corporation (Model #EVP-SS60B3-M7-R; $6\,\mathrm{kVA}$), is located in the upper part of the left picture in Figure 17, and is used for the physical simulation of charging/discharging of “Station 1” in Figure 14. The equipment of “Controller” is manufactured by dSPACE (MicroAutoBox II), and the EV battery is manufactured by Nissan Motor Co., Ltd. (Nissan Leaf; $30\,\mathrm{kWh}$). We here set the values of $dV_{\mathrm{cha,limit}}$ and $dV_{\mathrm{discha,limit}}$ at $80\,\mathrm{V}$. By using the synthetic operation data on EVs in Figure 15 and the method in Section 3.2, the upper bound for charging/discharging patterns is pre-computed as $\{-\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$ (or $\{\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$) with $N_{\mathrm{sta}}=4$ in Figure 14. The parameter $\Delta f_{1}$ is set at $0.2\,\mathrm{Hz}$ based on the simulation result in the last part of Section 5. ## 7 Feasibility Testing The aim of this section is to establish the practical feasibility of the proposed autonomous V2G design. For this, we show a series of experimental results from the Power-HIL testing and their implications. ### 7.1 Primary Frequency Control Figure 18: Generated ancillary-service (AS) signal to one distribution feeder as the primary frequency control (PFC) reserve and the associated frequency response of the transmission grid. The _blue, dashed_ line shows the AS signal, and the _red, solid_ line shows the grid’s frequency. (a) Station 1 (built on the Power Conditioning System (PCS)) (b) Station 2 (built on the digital simulator) Figure 19: Input/output responses of the two charging stations associated with Figure 18.
The _red, solid_ lines show the input ancillary-service (AS) signals, the _blue, dashed_ lines show the output power, and the two _black, dashed_ lines show the determined upper/lower limits. The left and right figures show the whole time series during $[0\,\mathrm{s},320\,\mathrm{s}]$ and a zoomed-in view during $[160\,\mathrm{s},200\,\mathrm{s}]$, respectively. Figure 20: Generated ancillary-service (AS) signal and associated frequency response when distributed electric vehicles can charge/discharge up to the maximum power without consideration of the upper/lower limits. The result is derived by the Power-HIL simulation. Figure 21: Time series of sampled distribution voltage associated with Figure 18. (Upper) The voltage sampled at the six locations $x=0\,\mathrm{km}$ (bank), $1.0\,\mathrm{km}$, $2.0\,\mathrm{km}$, $3.0\,\mathrm{km}$, $4.0\,\mathrm{km}$, and $4.5\,\mathrm{km}$ (end) is plotted. (Lower) The voltage sampled at location $4.5\,\mathrm{km}$ (end) is plotted again with the determined upper/lower bounds. First, we show the frequency dynamics under the design and see how the physical device affects the performance. Figure 18 shows the Power-HIL simulation of the AS signal and the associated frequency dynamics. The _blue, dashed_ line shows the time series of the AS signal for one feeder. A positive (negative) AS signal implies a discharging (charging) command. Here, we focus on the difference in time-delay and time-lag between the digital simulator and the PCS. For this, we show in Figure 19 the associated time series of the input AS signal $P_{\mathrm{EV}}^{\mathrm{ref}}$ and the output power $P_{\mathrm{EV}}$ in Figure 16 at Station 1 and Station 2. The _red, solid_ lines in Figure 19 represent the AS signals, and the _blue, dashed_ lines represent the output power.
The two _black, dashed_ lines show the upper and lower limits given by $\{-\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$ and $\{\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$. The upper/lower limits in Figure 19 change only when the operation data on EVs at the four charging stations exhibit the stepwise changes in Figure 15 (although this is not clearly visible in the left figures of Figure 19, at about $120\,\mathrm{s}$). This is because $dV_{\mathrm{cha,limit}}$ and $dV_{\mathrm{discha,limit}}$ are fixed during the whole simulation, and the parameters $\alpha_{\mathrm{cha}}$ and $\alpha_{\mathrm{discha}}$ depend on the number of EVs only. In Figure 19(a) for Station 1, built with the PCS, we see that the time series of the input AS signal and the output power are not the same. This difference is mainly due to the time-delay of “Controller” in Figure 12 and the conversion characteristics of the real PCS. In Figure 19(b) for Station 2 on the digital simulator, the time series of the input AS signal and the output power are also not the same because the digital simulator considers time-delay and time-lag as shown in Figure 16. The other two stations (Stations 3 and 4) show qualitatively the same input/output responses as those in Figure 19(b). From these, we conclude that latency quantitatively similar to the real one is reproduced in the software-based stations. Finally, we validate the proposed autonomous V2G design for the provision of PFC reserve. The frequency response in Figure 18 is kept close to the nominal $50\,\mathrm{Hz}$, and thus the PFC reserve by EVs works effectively for the frequency control. For comparison, we conduct the Power-HIL simulation for a case in which EVs are fully distributed and can charge or discharge with their maximum power, as described in Section 5. Figure 20 shows the Power-HIL simulation of the AS signal and the associated frequency response.
In this case, because the distributed EVs at each station can charge or discharge for the frequency control only, the grid’s frequency approaches the nominal value ($50\,\mathrm{Hz}$) more closely than in Figure 18: see $[170\,\mathrm{s},200\,\mathrm{s}]$ in Figures 18 and 20. This was expected at the design stage and was indeed shown numerically in Section 5. From the above simulations, we experimentally show that the autonomous V2G design works for the provision of PFC reserve, namely, the frequency control of the transmission grid. ### 7.2 Distribution Voltage Regulation Figure 21 shows the distribution voltage for the Power-HIL simulation associated with Figures 18 and 19. In this figure, the voltage sampled at the six locations including the end ($x=4.5\,\mathrm{km}$) is plotted. The temporal deviations of voltage occur due to the AS signal to the four stations, the power consumption by the five loads, and the electric characteristics of the feeder. (a) Station 1 (b) Station 2 Figure 22: Input/output responses of the two charging stations when distributed electric vehicles can charge/discharge up to the maximum power _without_ consideration of the upper/lower limits, as in Figure 20. The _red, solid_ lines show the input signals, the _blue, dashed_ lines show the output power, and the _black, dashed_ lines show the determined (but not used here) limits for the voltage regulation. (a) 6 locations including the end ($x=4.5\,\mathrm{km}$) (b) End ($x=4.5\,\mathrm{km}$) with upper/lower limits Figure 23: Time series of sampled distribution voltage associated with Figure 22. As the last task in this paper, we validate the proposed autonomous V2G design for the voltage regulation. In the same manner as Section 7.1, we consider the case in which distributed EVs can charge or discharge up to their maximum power _without_ consideration of the upper/lower limits.
Figure 22 presents the input/output responses of the two charging stations in this case. The _red, solid_ lines in Figure 22 show the AS signals, and the _blue, dashed_ lines show the output power. The two _black, dashed_ lines show the upper and lower limits given by $\{-\alpha_{\mathrm{cha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$ and $\{\alpha_{\mathrm{discha}}P_{\mathrm{EVs},i}^{\mathrm{max}}:i=1,\ldots,N_{\mathrm{sta}}\}$. Unlike Figure 19 with the consideration of the upper limit, we see that the magnitude of the output power goes over the limits in Figure 22: see the responses during $[170\,\mathrm{s},200\,\mathrm{s}]$ and $[260\,\mathrm{s},280\,\mathrm{s}]$. The associated temporal deviations of voltage are shown in Figure 23. Here, the charging/discharging power at the two charging stations is not kept within the range determined for the voltage regulation in Figure 22. We see that the voltage in Figure 23(b) deviates substantially from the upper/lower limits compared with the bottom panel of Figure 21. This implies that the autonomous V2G design using the upper/lower limits is effective for reducing the deviation of distribution voltage at the end. It should be mentioned that the voltage in Figure 21 goes over the lower limit because of the intrinsic approximation error in the computation of the upper bound. This error is studied substantially in Section 5, which shows that it can be reduced by appropriately choosing the grid’s condition. As suggested in Figure 11, it would be effective to decrease the loading level to reduce the degree of violation. The Power-HIL simulation thus shows that the autonomous V2G design also works for the voltage regulation. ## 8 Conclusion This paper proposed a new design of autonomous V2G for the provision of multi-objective AS.
The design is computationally simple (no need for optimization), easily implemented (as demonstrated in Section 6), and relevant because it is guided by physics-based models of power grids and a solid mathematical approach (as shown in Section 3.1). We numerically showed that the proposed design works effectively not only for the provision of PFC reserve but also for the regulation of distribution voltage. It was also experimentally shown that the design works in a practical situation where inevitable latency is involved, which establishes the practical feasibility of the design. This study builds upon the recent development of the coupling of transportation and energy systems and is another step in that direction. Several research directions are possible. For utilization of the proposed design in practical distribution grids, conventional metrics of their operation, such as losses, should be included. In this paper, we considered the control by the DSO, and we did not consider the operational strategy of the EV-sharing operator, e.g., as reported in [22], for assignment and reallocation of EVs. It is left for future work to explore a detailed design of the architecture in Section 2 by taking both operators into account. It is also important to develop a platform for the architecture in terms of communication, data integration, and cybersecurity. In addition to these technological studies, it is necessary to conduct an economic evaluation of the architecture. The social benefit resulting from the architecture will be clarified through such an evaluation. By combining the technological and economic studies, we expect to establish explicit guidelines for the successful implementation of the proposed architecture. In this, specific requirements on the amount of reserve and its ramping capability to be remunerated as an AS should be clarified. ## Acknowledgement The authors appreciate the valuable suggestions of the reviewers while preparing the manuscript.
This work was supported in part by the Japan Science and Technology Agency, Core Research for Evolutional Science and Technology (JST-CREST) Program [grant number JP-MJCR15K3]. ## References * [1] W. Kempton and J. Tomic, “Vehicle-to-grid power fundamentals: Calculating capacity and net revenue,” J. Power Sources, vol. 144, no. 1, pp. 268–279, 2005. https://doi.org/10.1016/j.jpowsour.2004.12.025 * [2] J. Tomić and W. Kempton, “Using fleets of electric-drive vehicles for grid support,” J. Power Sources, vol. 168, issue 2, pp. 459–468, 2007. https://doi.org/10.1016/j.jpowsour.2007.03.010 * [3] M.S. Galus, M.G. Vayá, T. Krause, and G. Andersson, “The role of electric vehicles in smart grids,” WIRES Energy Environ., vol. 2, pp. 384–40, 2013. https://doi.org/10.1002/wene.56 * [4] J. Hu, H. Morais, T. Sousa, and M. Lind, “Electric vehicle fleet management in smart grids: A review of services, optimization and control aspects,” Renew. Sust. Energ. Rev., vol. 56, pp. 1207–1226, 2016. http://dx.doi.org/10.1016/j.rser.2015.12.014 * [5] C. Gschwendtner, S.R. Sinsel, and A. Stephan, “Vehicle-to-X (V2X) implementation: An overview of predominate trial configurations and technical, social and regulatory challenges,” Renew. Sust. Energ. Rev., vol. 145, 110977, 2021. https://doi.org/10.1016/j.rser.2021.110977 * [6] Y.G. Rebours, D.S. Kirschen, M. Trotignon, and S. Rossignol, “A survey of frequency and voltage control ancillary services—Part I: Technical features,” IEEE Trans. Power Syst., vol. 22, no. 1, pp. 350–357, 2007. https://doi.org/10.1109/TPWRS.2006.888963 * [7] “Ancillary services to be delivered in Denmark: Tender conditions,” 44 pages, https://en.energinet.dk/Electricity/Rules-and-Regulations/Approval-as-supplier-of-ancillary-services—requirements; 2019. [accessed 9 October 2020] * [8] Y. Ota, H. Taniguchi, J. Baba, and A. Yokoyama, “Implementation of autonomous distributed V2G to electric vehicle and DC charging system,” Electr. Pow. Syst. Res., vol. 120, pp. 177–183, 2015.
https://doi.org/10.1016/j.epsr.2014.05.016 * [9] M. Marinelli, S. Martinenas, K. Knezović, and P.B. Andersen, “Validating a centralized approach to primary frequency control with series-produced electric vehicles,” J. Energy Storage, vol. 7, pp. 63–73, 2016. https://doi.org/10.1016/j.est.2016.05.008 * [10] K. Knezović, S. Martinenas, P.B. Andersen, A. Zecchino, and M. Marinelli, “Enhancing the role of electric vehicles in the power grid: Field validation of multiple ancillary services,” IEEE Trans. Transport. Electrific., vol. 3, no. 1, pp. 201–209, 2017. https://doi.org/10.1109/TTE.2016.2616864 * [11] N.B. Arias, S. Hashemi, P.B. Andersen, C. Træholt, and R. Romero, “V2G enabled EVs providing frequency containment reserves: Field results,” Proc. IEEE International Conference on Industrial Technology, pp. 1814–1819, 2018. https://doi.org/10.1109/ICIT.2018.8352459 * [12] D. Apostolopoulou, S. Bahramirad, and A. Khodaei, “The interface of power: Moving toward distribution system operators,” IEEE Power Energy Mag., vol. 14, no. 3, pp. 46–51, 2016. https://doi.org/10.1109/MPE.2016.2524960 * [13] K. Knezović, M. Marinelli, A. Zecchino, P.B. Andersen, and C. Træholt, “Supporting involvement of electric vehicles in distribution grids: Lowering the barriers for a proactive integration,” Energy, vol. 134, pp. 458–468, 2017. https://doi.org/10.1016/j.energy.2017.06.075 * [14] N.B. Arias, S. Hashemi, P.B. Andersen, C. Træholt, and R. Romero, “Distribution system services provided by electric vehicles: Recent status, challenges, and future prospects,” IEEE Trans. Intell. Transp. Syst., vol. 20, no. 12, pp. 4277–4296, 2019. https://doi.org/10.1109/TITS.2018.2889439 * [15] A. Ipakchi and F. Albuyeh, “Grid of the future,” IEEE Power Energy Mag., vol. 7, issue 2, pp. 52–62, 2009. https://doi.org/10.1109/MPE.2008.931384 * [16] K. Clement-Nyns, E. Haesen, and J. Driesen, “The impact of charging plug-in hybrid electric vehicles on a residential distribution grid,” IEEE Trans. Power Syst., vol. 25, no. 1, pp. 371–380, 2010. https://doi.org/10.1109/TPWRS.2009.2036481 * [17] K. Clement-Nyns, E. Haesen, and J. Driesen, “The impact of vehicle-to-grid on the distribution grid,” Electr. Pow. Syst. Res., vol. 81, pp. 185–192, 2011. https://doi.org/10.1016/j.epsr.2010.08.007 * [18] M. Yilmaz and P.T. Krein, “Review of the impact of vehicle-to-grid technologies on distribution system and utility functions,” IEEE Trans. Power Electr., vol. 28, no. 12, pp. 5673–5689, 2013. https://doi.org/10.1109/TPEL.2012.2227500 * [19] J. Wang, G.R. Bharati, S. Paudyal, O. Ceylan, B.P. Bhattarai, and K.S. Myers, “Coordinated electric vehicle charging with reactive power support to distribution grids,” IEEE Trans. Ind. Info., vol. 15, no. 1, pp. 54–63, 2019. https://doi.org/10.1109/TII.2018.2829710 * [20] J. Dixon and K. Bell, “Electric vehicles: Battery capacity, charger power, access to charging and the impacts on distribution networks,” eTransportation, vol. 4, article no. 100059, 2020. https://doi.org/10.1016/j.etran.2020.100059 * [21] M. Tokudome, K. Tanaka, T. Senju, A. Yona, T. Funabashi, and C.-H. Kim, “Frequency and voltage control of small power systems by decentralized controllable loads,” Proc. Int. Conf. Power Electron. Drive Syst. (PEDS), pp. 666–671, 2009. https://doi.org/10.1109/PEDS.2009.5385834 * [22] A. Kawashima, N. Makino, S. Inagaki, T. Suzuki, and O. Shimizu, “Simultaneous optimization of assignment, reallocation and charging of electric vehicles in sharing services,” Proc. IEEE 1st Conference on Control Technology and Applications, pp. 1070–1076, 2017. https://doi.org/10.1109/CCTA.2017.8062601 * [23] N. Mizuta, Y. Susuki, Y. Ota, and A. Ishigame, “An ODE-based design of spatial charging/discharging patterns of in-vehicle batteries for provision of ancillary service,” Proc. IEEE 1st Conference on Control Technology and Applications, pp. 193–198, 2017. https://doi.org/10.1109/CCTA.2017.8062462 * [24] N. Mizuta, Y. Susuki, Y. Ota, and A.
Ishigame, “Synthesis of spatial charging/discharging patterns of in-vehicle batteries for provision of ancillary service and mitigation of voltage impact,” IEEE Syst. J., vol. 13, no. 3, pp. 3443–3453, 2019. https://doi.org/10.1109/JSYST.2018.2883974 * [25] N. Mizuta, S. Kamo, H. Toda, Y. Susuki, Y. Ota, and A. Ishigame, “A Hardware-in-the-loop test on the multi-objective ancillary service by in-vehicle batteries: Primary frequency control and distribution voltage support,” IEEE Access, vol. 7, no. 1, pp. 161246–161254, 2019. https://doi.org/10.1109/ACCESS.2019.2951748 * [26] T. Suzuki, S. Inagaki, Y. Susuki, and A.T. Tran (editors), “Design and Analysis of Distributed Energy Management Systems: Integration of EMS, EV, and ICT,” Springer Nature, Switzerland, 2020. * [27] P. Fairley, “Car sharing could be the EV’s killer app,” IEEE Spectrum, vol. 50, issue 9, pp. 14–15, 2013. https://doi.org/10.1109/MSPEC.2013.6587173 * [28] D. F. Bignami, A. C. Vitale, A. Lué, R. Nocerino, M. Rossi, and S. M. Savaresi (editors), “Electric Vehicle Sharing Services for Smarter Cities: The Green Move Project for Milan: From Service Design to Technological Deployment,” Springer International Publishing, 2017. * [29] K.K. Tan, K.K.K. Htet, and A.S. Narayanan, “Mitigation of vehicle distribution in an EV sharing scheme for last mile transportation,” IEEE Trans. Intell. Transp. Syst., vol. 16, no. 5, pp. 2631–2641, 2015. https://doi.org/10.1109/TITS.2015.2413356 * [30] M. Chertkov, S. Backhaus, K. Turitsyn, V. Chernyak, and V. Lebedev, “Voltage collapse and ODE approach to power flows: Analysis of a feeder line with static disorder in consumption/production,” Preprint arXiv:1106.5003, 2011. * [31] Y. Susuki, N. Mizuta, A. Kawashima, Y. Ota, A. Ishigame, S. Inagaki, and T. Suzuki, “A continuum approach to assessing the impact of spatio-temporal EV charging to power distribution grids,” Proc. IEEE 20th International Conference on Intelligent Transportation Systems, pp. 2372–2377, 2017.
https://doi.org/10.1109/ITSC.2017.8317625 * [32] R. Iacobucci, B. McLellan, and T. Tezuka, “Modeling shared autonomous electric vehicles: Potential for transport and power grid integration,” Energy, vol. 158, pp. 148–163, 2018. https://doi.org/10.1016/j.energy.2018.06.024 * [33] R. Iacobucci, B. McLellan, and T. Tezuka, “Optimization of shared autonomous electric vehicles operations with charge scheduling and vehicle-to-grid,” Transp. Res. Part C, vol. 100, pp. 34–52, 2019. https://doi.org/10.1016/j.trc.2019.01.011 * [34] T. Chen, K.-F. Chu, A.Y.S. Lan, D.J. Hill, and V.O.K. Li, “Electric autonomous vehicle charging and parking coordination for vehicle-to-grid voltage regulation with renewable energy,” Proc. IEEE PES General Meeting, 2020. https://doi.org/10.1109/PESGM41954.2020.9281414 * [35] S. Yumiki, Y. Susuki, R. Masegi, A. Kawashima, A. Ishigame, S. Inagaki, and T. Suzuki, “Computing an upper bound for charging/discharging patterns of in-vehicle batteries towards cooperative transportation-energy management,” Proc. IEEE 22nd International Conference on Intelligent Transportation Systems, pp. 655–660, 2019. https://doi.org/10.1109/ITSC.2019.8916971 * [36] M. Ben-Idris, M. Brown, M. Egan, Z. Huang, and J. Mitra, “Utility-scale shared energy storage: Business models for utility-shared energy storage systems and customer participation,” IEEE Electrification Mag., vol. 9, no. 4, pp. 47–54, 2021. https://doi.org/10.1109/MELE.2021.3115558 * [37] Y. Ota, H. Taniguchi, T. Nakajima, K.M. Liyanage, J. Baba, and A. Yokoyama, “Autonomous distributed V2G (vehicle-to-grid) satisfying scheduled charging,” IEEE Trans. Smart Grid, vol. 3, no. 1, pp. 559–564, 2012. https://doi.org/10.1109/TSG.2011.2167993 * [38] J.A.P. Lopes, F.J. Soares, and P.M.R. Almeida, “Integration of electric vehicles in the electric power system,” Proc. IEEE, vol. 99, no. 1, pp. 168–183, 2011. https://doi.org/10.1109/JPROC.2010.2066250 * [39] H. Liu, Z. Hu, Y. Song, and J.
Lin, “Decentralized vehicle-to-grid control for primary frequency regulation considering charging demands,” IEEE Trans. Power Syst., vol. 28, no. 3, pp. 3480–3489, 2013. https://doi.org/10.1109/TPWRS.2013.2252029 * [40] S. Martinenas, K. Knezović, and M. Marinelli, “Management of power quality issues in low voltage networks using electric vehicles: Experimental validation,” IEEE Trans. Power Del., vol. 32, no. 2, pp. 971–979, 2017. https://doi.org/10.1109/TPWRD.2016.2614582 * [41] P.C. Kotsampopoulos, F. Lehfuss, G.F. Lauss, B. Bletterie, and N.D. Hatziargyriou, “The limitations of digital simulations and the advantages of PHIL testing in studying distributed generation provision of ancillary services,” IEEE Trans. Ind. Electron., vol. 62, no. 9, pp. 5502–5515, 2015. https://doi.org/10.1109/TIE.2015.2414899 * [42] R. Masegi, A. Kawashima, N. Mizuta, Y. Susuki, S. Inagaki, and T. Suzuki, “Integrated operation planning for EV sharing service and distribution voltage support,” Proc. 62nd Annual Conference of the Institute of Systems, Control, and Information Engineers, 6 pages, May 2018 (in Japanese). * [43] H. Toda, Y. Ota, T. Nakajima, K. Kawabe, and A. Yokoyama, “HIL test of power system frequency control by electric vehicles,” Proc. 1st E-Mobility Power System Integration Symposium, pp. 1–6, 2017. * [44] G. Andersson, “Dynamics and Control of Electric Power Systems,” Lecture 227-0528-00, ITET, ETH, 2012. * [45] P.D. Martini, “Operational coordination architecture: New models and approaches,” IEEE Power Energy Mag., vol. 17, no. 5, pp. 29–39, 2019. https://doi.org/10.1109/MPE.2019.2921740 * [46] S. Kamo, Y. Ota, K. Kawabe, A. Yokoyama, C. Kashiwa, and T. Nakajima, “Autonomous voltage and frequency control by smart inverters of photovoltaic generation and electric vehicle,” Proc. 2nd E-Mobility Power System Integration Symposium, pp. 1–5, 2018.
# PC4 at Age 40 Michael Freedman Microsoft Research, Station Q, and Department of Mathematics, University of California, Santa Barbara, Santa Barbara, CA 93106 This article is not a proof of the Poincaré conjecture but a discussion of the proof, its context, and some of the people who played a prominent role. It is a personal, anecdotal account. There may be omissions or transpositions, as these recollections are 40 years old and not supported by contemporaneous notes, but the memories feel surprisingly fresh. I have not looked up old papers to check details of statements; this article is merely a download from my current mental state.1 This is a written version of a CMSA lecture delivered at Harvard on September 28, 2020. Poincaré liked to argue in many ways: analytically, combinatorially, and topologically. He seemed averse to even fixing a definition for the term “manifold,” so one should not impose modern notions like TOP, PL, and DIFF on his famous conjecture. He himself seems to have thought little about his own conjecture, since it asked if simply connected manifolds were spheres; clearly he meant also to specify that the homology should vanish (except in the highest and lowest dimensions). The modern statement, now known to be true in every dimension, is: a closed topological manifold $M$ that is homotopy equivalent to a sphere is homeomorphic to that sphere. One cannot replace homeomorphism with diffeomorphism throughout the statement because of examples like Milnor’s exotic 7-spheres. In all dimensions you still experience most of the excitement if you presume from the start that the manifold $M^{n}$ has a smooth structure and the goal is to prove it homeomorphic to $S^{n}$.
We will take this perspective.2 In dimensions $>3$, work of Lashof [lashof70] smooths a connected, contractible, topological $M$, and this can be used to bridge to the strongest statement, but the cost is that one must use a technically more difficult (proper) version of the work of Smale discussed below. In dimensions $<4$ there are canonical smooth structures [moise52a, moise52b]. When the dimension $n$ is 0 or 1 there is not much to prove. The $n=2$ result is a special case of uniformization. The first modern work is Smale’s [smale56], in which he proved PC$n$, $n\geq 5$. I will need to say something about his proof since the case of PC4 starts the same way but encounters a special problem when $n=4$. Solving this problem brings in the topological category in a way not present in Smale’s work. The reader might think Smale must also have labored in the topological category since the conclusion is only homeomorphism, not diffeomorphism. But 99% of Smale’s work is in a smooth setting; his great achievement, the $h$-cobordism theorem, is in the smooth category. The step from the $h$-cobordism theorem to PC$n$ is small and in some dimensions involves a gluing along an $S^{n-1}$ which might not extend to a diffeomorphism over $D^{n}$. So PC4 amounts to Smale’s outline with a topological twist. But here the tail wags the dog. When you delve into this detail the twist expands to fill your entire field of view. The final case (historically), dimension 3, was proved by Perelman [perelman03a, perelman03b] using Hamilton’s theory of Ricci flow. It is entirely different in outline, more like Beethoven’s 9th than a conventional proof, and still stands as the greatest accomplishment of 21st-century mathematics. I worked towards PC4 from 1974 to 1981, roughly half time. I had many mental pictures but no notes when it came to writing the proof. I realized I had no letters (in my mind) for any of the spaces, maps, or relations.
Apparently as a youngster I had not yet learned to think in or with symbols. When written, the proof was hard for others to understand; when I tried to explain it, I did not know where to begin. To this day I regret that in my lectures at Berkeley, with Smale in the audience, I never even mentioned that it was his proof, with a twist, that I was presenting. From the perspective of a youth in 1981, 1959 might as well have been 1859; I did not feel the historical connections. Now they seem obvious and I will try to capture them here. Three streams of thought enter the proof and I will personify them with the names of three mathematicians: Steve Smale, Andrew Casson, and Bob Edwards. Of course, each represents a field and there are other names as well, but in all three cases the power of their individual ideas is so strong and so determinative of the form of the final proof that little is lost by identifying them with their fields of work. To put a name to their work (as relevant here): * $\bullet$ Smale: Classical smooth topology and dynamics. * $\bullet$ Casson: How to get started in 4D: “finger moves” simplify $\pi_{1}$, leading eventually to “Casson handles.” * $\bullet$ Edwards: Manifold factors, decomposition spaces, shrinking. I will explain, in order, the input from each of these three streams and recall the occasional anecdote. By the time (1959) of Smale’s paper, Morse theory had already moved on to infinite dimensions. People knew the Morse inequalities and that seemed to be regarded as a satisfactory link between critical points and the homology of manifolds. Smale took a much closer look. Let us recall his setup. Henceforth, all manifolds are assumed to be compact and smooth. Let $W^{n+1}$ be an $(n+1)$-dimensional manifold with two boundary components, $M_{0}$ and $M_{1}$. Let us assume all three fundamental groups are trivial, and the relative homology vanishes: $H_{\ast}(W,M_{0};\mathbb{Z})=0$.
(Of course by duality, the relative homology groups also vanish if we replace $M_{0}$ with $M_{1}$.) With these hypotheses $W$ is called a (simply connected) $h$-cobordism.

###### Theorem 1 (Smale's $h$-cobordism theorem (hCT)).

A simply connected $h$-cobordism $W^{n+1}$ is diffeomorphic to a product $M_{0}\times I$, provided $n>4$. The idea of the proof is to relentlessly match the algebra of the Morse complex of a smooth Morse function $f:W\rightarrow[0,1]$ to the geometry, or as Smale would say, the dynamics of the gradient ($\nabla f$) flow. The Morse complex has critical points of index $k$ as its chain group generators in dimension $k$, and the differential of the complex records in matrix form the algebraic number $\\#_{i,j}$ of times the boundary of the $i$th $k$-cell wraps over the $j$th $(k-1)$-cell. The Morse cancelation lemma states that if the geometric count $\\#_{i,j}=\pm 1$, the two critical points can be cancelled and the Morse function simplified. Smale realized that in the simply connected case, the vanishing of the relative homology groups was the only obstruction to performing a series of deformations of $f$ (called “handle slides,” or in the algebra, “row operations”) that lead to cancelation of _all_ critical points. Once there are no critical points left, the gradient flow lines define the desired product structure. Matching the geometry to the algebra consists of turning the algebraic information that the signed sum of intersection points $\\#_{i,j}=1\in\mathbb{Z}$ into the stronger geometric information that the $i$th descending sphere truly meets the $j$th ascending sphere in one transverse point, not for example three points in a $+,-,+$ pattern. To accomplish this he employed Whitney's trick [whitney44] for removing pairs of oppositely signed double points. These double points live in an $n$-dimensional cross-section $L^{n}$ of $W$, often called the middle level, and Whitney's method requires $n>4$.
The dimension restriction arises because a 2-dimensional disk, the “Whitney disk,” guides the cancellation of the extra $(+,-)$ pair and we need that disk to be embedded; an easy thing when $\dim(L)=n>4$, and a difficult thing otherwise. The proof of PC4 follows Smale up to this point; from there forward it is all about how to locate the Whitney disks necessary to make $\\{A\\}$ and $\\{D\\}$, the ascending and descending 2-spheres in the middle level $L$, intersect each other _geometrically_ in single transverse points, as the algebra would indicate. So Smale has gotten us started, but we are left with the problem of simplifying, say, a canonical picture in the middle level $L^{4}$ of two embedded 2-spheres $A$ and $D$ which meet in three points $+,-,+$. It is here that Casson's work begins. Before turning to Casson, this is a good place to explain how the hCT implies various PC$n$. There are actually two overlapping techniques (both due to Smale). Suppose $\Sigma^{n}$ is a homotopy $n$-sphere. We can cut out two disjoint balls from $\Sigma^{n}$ to obtain an $n$-dimensional $h$-cobordism $W^{n}$. If $n$ is at least 6, the hCT recognizes $W$ as diffeomorphic to $S^{n-1}\times I$. $\Sigma^{n}$ is now obtained by gluing back the two balls, but the gluing diffeomorphisms reflect the complexity of the product structure output by the hCT and we do not know that the result is diffeomorphic to $S^{n}$. We see from this picture that $\Sigma$ is two balls glued together by a homeomorphism of their boundaries. It is elementary (via Alexander's coning trick) that the union is homeomorphic to $S^{n}$. To handle the lowest dimensional case treated by Smale, $\Sigma^{5}$, a different strategy is required. Some algebraic topology and a bit of surgery theory are used to construct a 6D $h$-cobordism $W^{6}$ between $\Sigma^{5}$ and $S^{5}$, and the hCT is applied to $W^{6}$. Interestingly, this proves more: that $\Sigma^{5}$ is diffeomorphic, not just homeomorphic, to $S^{5}$.
This approach using surgery only succeeds in finding $W$ in some dimensions, but when it works the conclusion is stronger. In our case $n=4$, the surgery approach actually does work and we find a smooth $h$-cobordism $W^{5}$ from $\Sigma^{4}$ to $S^{4}$. The reason that our proof concludes only a homeomorphism at the end of the day is that there is a terrific struggle, involving lots of limits and point-set topology, to find the Whitney disks, and they end up being merely topological-category objects. The Whitney disks we find spoil the smooth-category context of the rest of the proof. It is an astonishing historical coincidence that within 6 months of PC4, Simon Donaldson had figured out enough about which four-dimensional homotopy types do not contain smooth manifolds to know for sure that the Whitney disks I built cannot generally be smoothed. These Whitney disks are all that stand in the way of realizing the quadratic form $E_{8}\oplus E_{8}$ as the intersection form of a closed smooth 4-manifold, which is excluded by Donaldson's “Theorem C” [donaldson86]. So the excursion into the topological category is necessary if one speaks generally in terms of 4-manifold constructions. Of course there may be some as yet unimagined workaround—or entirely new method—suited to the study of a smooth homotopy sphere $\Sigma^{4}$. I think it safe to say that the greatest open problem in topology (“The last man standing”) is the smooth-category PC4: Is every homotopy 4-sphere diffeomorphic to $S^{4}$? Now to Casson. Casson's first great insight was that there was some hope for Whitney disks in 4D. He realized that the problem of finding an embedded Whitney disk is qualitatively different from finding a slice disk for a classical knot in $S^{3}$. That prototypical 2-disk embedding problem had been studied starting in the 50s by Fox and Milnor [fox66]: Given a knot $K$ in $S^{3}$, does it bound a smoothly embedded disk in the 4-ball $B^{4}$?
This problem is infinitely rich, and generally has a negative answer. But Casson realized that Whitney disks in all applications would have homological duals, so by standing farther back, and taking more of the manifold topology into account, the embedding problem might turn out to be less obstructed than in knot theory. This hope bore fruit. It is easy to show that these three spaces are simply connected: the middle level $L$, and the complements $L\setminus A$ and $L\setminus D$. To look for a Whitney disk $W$ in $L$ meeting $A\cup D$ only where it is supposed to, along the boundary of $W$, we would also like $L\setminus(A\cup D)$ to be simply connected. The duality Casson observed implies that $\pi_{1}(L\setminus(A\cup D))$ is perfect. He realized that it might help to make things worse before trying to make them better. If one takes one's finger and pushes a generic patch of $A$ along an arc and through a generic point of $D$, this “finger move” will create two new $+,-$ points of intersection. Each such point will microscopically look like the intersection of two complex lines in $\mathbb{C}^{2}$, and linking these complex lines is a “Clifford torus.” The algebraic point is that the top cell of each of these Clifford tori is a relation saying that the two loops linking the two sheets (complex lines in the model) commute in $\pi_{1}$ of their joint complement. If you take a perfect group and start forcing pairs of elements to commute, pretty soon you have a trivial group. In this way Casson got off the ground, by doing enough finger moves between $A$ and $D$ to kill $\pi_{1}(L\setminus(A\cup D))$. This enabled him to at least immerse the Whitney disk $W$ (with the required normal framing data) he was looking for [casson86]. From here the finger moves simply explode in number. Fearlessly, Casson decides to cure the problem of the Whitney disk double points by capping them off with subsidiary Whitney disks in a hierarchy later called a Casson tower.
When taken to its infinite limit, the open, tapered, regular neighborhood is called a “Casson handle.” I draw a dimensionally reduced schematic of a Casson handle in Figure 1.

Figure 1. Casson handles. (Schematic: the attaching region at the bottom, with branched and unbranched towers of Whitney disks stacked above it.)

Casson (1974) showed that his Casson handles $C$ had the correct proper homotopy type to serve as neighborhoods of embedded Whitney disks, $C\overset{P}{\simeq}(D^{2}\times\mathbb{R}^{2},\partial D^{2}\times\mathbb{R}^{2})$. My contribution seven years later was to replace proper homotopy equivalence—which was not strong enough for most geometric applications—with homeomorphism. As I mentioned, I spent those seven years staring at his handles about half time. I think this was the correct fraction. I was young and needed to do some projects likely to work in order to get tenure, raise a family, and preserve sanity. Those years I was either climbing rocks, writing pleasant papers, or struggling with Casson handles. Not much else. My mother had taught me that everything must be fun, “fun first, pleasure second” as she put it. My father, who was always at work on something (and always enjoyed it), told me that there is “your work and, then, the world's work.” The idea of course was to do both. My papers were the “world's work.” I enjoyed them greatly and would surely have lost confidence in myself if I was not publishing regularly. Casson handles were “my work.” The climbing was sheer pleasure. I had a reputation among my friends for “not thinking on lead”; I would just tear ahead and see what happened. That is the way I liked it: when I did math I tried to think (a bit); when I climbed I just let it rip.
Many of my friends asked me variations of, “Since Casson is obviously so much smarter than you, why was it that he didn't analyze his own handles?” The answer, I think, is “opportunity costs”; he was so full of brilliant ideas: secondary obstructions to knot cobordism (with Gordon), the Casson invariant, what became “weak Heegaard reduction” (also with Gordon), that he was simply too busy. Let us turn to the third stream, Bob Edwards and the world of decomposition space theory, the Bing shrinking criterion (BSC), and manifold factors. In 1980, Bob was the undisputed heavyweight champ in what might variously be called Texas topology, Bing topology, the R. L. Moore–R. H. Bing school. It is an area at the wild end of manifold topology, with roots and emphasis in low dimensions. The Alexander horned sphere is what you should picture. It drew on point-set topology but was a very pictorial, concrete, and vivid undertaking. It also had an enjoyably sportive feel that allows me to call Bob “Champ,” whereas it would never occur to me to call Hironaka or Deligne the “Champ” of algebraic geometry. During those seven years, I had discovered something, now called re-imbedding. If you have a Casson tower of some critical minimum height (at first 7, later reduced to 5; see [cha16, gs84]), you can leave the lowest stage alone and build a new extension, inside a neighborhood of the original construction, which has any, arbitrarily large, number of stages. This quickly allows geometric control to be added to Casson handles. The original construction just kind of flops around; the higher stages are in no sense smaller than the lower ones. But with re-imbedding it is only a small step to build Casson towers in which the higher stages converge beautifully to a Cantor set. Now if one takes a tapered regular neighborhood and completes it with this limiting Cantor set, one obtains a compactification $\overline{C}$ which can stand in wherever a Casson handle $C$ was formerly present.
A handle has its boundary divided into the attaching region and the co-attaching or “belt” region. See Figure 2. In early fall of 1980 I used the Kirby calculus to draw an exact picture of the belt region for the standard unbranched $\overline{C}$. For the everyday closed 2-handle $(D^{2}\times D^{2},S^{1}\times D^{2})$, the belt region is $D^{2}\times S^{1}$, a solid torus. What I drew for belt($\overline{C}$) was what I later learned was a decomposition space $D^{2}\times S^{1}/\operatorname{Wh}$, the solid torus with a compactum called the Whitehead continuum crushed to a point (and endowed with the quotient topology). A few weeks later I was sitting at a pizza restaurant with Bob Edwards after a session of the recurring Southern California Topology meeting. Apropos of nothing in particular, he was putting pen to napkin to explain the “Andrews–Rubin” [andrews65] shrinking argument, which proves that the space $D^{2}\times S^{1}/\operatorname{Wh}$ is a manifold factor. Even the concept was astonishing to me: $D^{2}\times S^{1}/\operatorname{Wh}$ is most certainly not a manifold, but it turns out that its product with the real line, $(D^{2}\times S^{1}/\operatorname{Wh})\times\mathbb{R}$, is homeomorphic to $D^{2}\times S^{1}\times\mathbb{R}$. Edwards understood this phenomenon well, as he had just completed his proof that the double suspension of the homology sphere bounding the Mazur manifold was $S^{5}$. The single suspension is of course not a manifold. My excitement was overwhelming. The frontier of my controlled Casson handle $\overline{C}$, while not a manifold, was close to one: if it turned out to have a product collar neighborhood, that neighborhood would, by Andrews–Rubin, be a manifold and would harbor the Whitney disk I had been seeking for six years. In the end, the proof proceeded somewhat differently, but the fact that the belt region, or “frontier,” of the controlled Casson handle was a manifold factor convinced me that PC4 was likely true.
Figure 2. $(D^{2}\times S^{1}/\operatorname{Wh})\times\mathbb{R}$ is homeomorphic to $D^{2}\times S^{1}\times\mathbb{R}$. (Schematic: the belt region is $D^{2}\times S^{1}/\operatorname{Wh}$, where $\operatorname{Wh}=\cap_{i=0}^{\infty}(D^{2}\times S^{1})_{i}$, each solid torus nested in its predecessor in the same way.)

To appreciate why $D^{2}\times S^{1}/\operatorname{Wh}$ is a manifold factor we introduce the Bing Shrinking Criterion (BSC), as exposited by Edwards [edwards80]. Let $X$ and $Y$ be compact metric spaces. A map $f:X\rightarrow Y$ is approximable by homeomorphisms (ABH) if, for every $\epsilon>0$, there exists a homeomorphism $g:X\rightarrow Y$ with $\operatorname{dist}_{Y}(f(x),g(x))<\epsilon$ for all $x\in X$. Bing Shrinking Criterion (BSC): $f:X\rightarrow Y$ is ABH iff for all $\epsilon>0$ there is a homeomorphism $h:X\rightarrow X$ so that

1. (1) $\operatorname{dist}_{Y}(f\circ h(x),f(x))<\epsilon$ for all $x\in X$ and
2. (2) $\operatorname{diam}_{X}(h(f^{-1}(y)))<\epsilon$ for all $y\in Y$.

Andrews and Rubin found self-homeomorphisms $h_{\epsilon}$ of $(D^{2}\times S^{1})\times\mathbb{R}$ so that $\pi\circ h_{\epsilon}$ is $\epsilon$-close to $\pi$ and $\operatorname{diam}(h_{\epsilon}(\pi^{-1}(y)))<\epsilon$ for all $y$, where $\pi:(D^{2}\times S^{1})\times\mathbb{R}\rightarrow(D^{2}\times S^{1}/\operatorname{Wh})\times\mathbb{R}$ is the projection crushing each $\operatorname{Wh}\times r$ to a point. Their idea is to lift, at any deep stage $i$, the self-clasp of the Whitehead double $(D^{2}\times S^{1})_{i}\times r$ to an infinite chain of clasps and then apply a global twist making each link of the chain of small diameter. Figure 3 gives a hint of how this works.

Figure 3. (Schematic: the $i$th-stage clasp is lifted to an infinite chain in the $\mathbb{R}$ direction and then twisted; after the lift and twist the $i$th-stage solid tori appear small and contain $\operatorname{Wh}\times r$.)
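As a warm-up, one can check the BSC in the simplest non-trivial quotient. The following toy verification is our illustration, not from the lecture: crushing a straight arc in the interior of the disk.

```latex
% Toy check of the BSC: crush the arc J = [0,1/2] x {0} in the
% interior of D^2 and let f : D^2 -> D^2/J be the quotient map.
% Given \epsilon > 0, choose a homeomorphism h : D^2 -> D^2 supported
% in the \epsilon-neighborhood N of J which compresses J to a set of
% diameter < \epsilon (slide points of N toward one endpoint of J).
\text{(1)}\quad \operatorname{dist}_{Y}\bigl(f\circ h(x),f(x)\bigr)<2\epsilon
  \quad\text{since } h \text{ moves points only inside } N,
  \text{ and } f(N) \text{ lies within } \epsilon \text{ of the point } f(J);
\text{(2)}\quad \operatorname{diam}_{X}\bigl(h(f^{-1}(y))\bigr)<\epsilon
  \quad\text{since the only non-degenerate point-preimage is } J \text{ itself.}
% So f is ABH, recovering the familiar fact that D^2/J is again a 2-disk.
```

Note that condition (1) holds because distances are measured in the quotient metric of $Y=D^{2}/J$, where all of $J$ has been identified to a single point.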
The next step in the proof is what I called the design, but a better word would be a “topographic map with some holes in it,” perhaps singed in a campfire—but still useful. The map can be thought of as a certain closed subset $D$ of the standard (open) 2-handle $(D^{2}\times\mathbb{R}^{2},S^{1}\times\mathbb{R}^{2})$, but, using the re-imbedding lemma and its concomitant geometric control, it is also a subset of the (general) Casson handle $C$. Actually this topographic map varies, in ways that are not terribly important, with different choices of $C$, but in all cases the design $D$ includes into both $C$ and the standard (open) handle $H$. If this map were fully extensive and covered both $C$ and $H$ it would by itself describe the desired homeomorphism, but unfortunately it has a countable number of bits missing, which we call “holes” in $H$ and “gaps” in $C$. The gaps are bits of $C$ which remain terra incognita after the exploration that led to the design. To sketch the rest of the proof, I need to tell you how the design is built, why it has these gaps, and finally what to do about them. There will be a lot of branching going on in this paragraph, and I hope to keep two quite different types straight as they are described. Let me call them “bulk” and “radial” branching. In Figure 1 we already met bulk branching; this comes from the fact that every Whitney disk that Casson installs to kill a double point has itself many double points and requires that many Whitney disks at the next level. Bulk branching, in the presence of geometric control, limits to a Cantor set (which plays the role of the singular points in the Andrews–Rubin argument). This branching is really only a distraction and does not materially affect the proof. The reader would be well served by the (false) fantasy that all Casson handles are unbranched (even when obtained via the re-imbedding theorem) and can just forget about bulk branching. The more interesting branching is what I'm calling radial.
Recall that a 7-stage tower contains a 14-stage tower (Casson handles are alive and can replicate!). We can think of $14=7+7$, with a second 7-stage tower mounted on top of the first. But the same combinatorial technology that allows raising height also allows us to build two alternative choices, $T_{7}$ and $T^{\prime}_{7}$, for the second tower, with $T_{7}^{\prime}$ contained in $T_{7}$. This binary choice now occurs countably often, as we may now identify 14-stage towers within both $T_{7}^{\prime}$ and $T_{7}$, and again create a binary choice (move in or stay out) in attaining the final 7 stages of these 14-ers. Continuing in this way we realize each of two choices, “in or out,” every 7th stage, always while maintaining geometric control so that the limits are as we expect. We call this branching “radial” because when $D$ is embedded in $H$, at every such branch the primed choice has a smaller $\rho$ coordinate in the polar coordinates of the transverse $\mathbb{R}^{2}$. The design carries a singular foliation. At radial coordinates which are not in the standard Cantor set the leaf is a compact 3-manifold with boundary; at a Cantor set radius the leaf is an Andrews–Rubin-like decomposition space. Corresponding to the “middle thirds” are the holes or gaps where this procedure has not succeeded in corresponding points of $H$ to points of $C$. There is now a technical point: all but one of the holes are homeomorphic to $S^{1}\times\mathbb{R}^{3}$ (the outlier is homeomorphic to $\mathbb{R}^{4}$). At this point we do not know what the topology of the gaps is; but long after this proof is finished we do learn enough to know that they all along had the topology of their corresponding holes. Our plan for the $\\{$holes$\\}$ and the corresponding $\\{$gaps$\\}$ is eventually to crush them to points and analyze the results. It is much better, in applying the Bing/Edwards mathematics, that we crush only closed “cell-like” [edwards80] sets.
Cell-like means that when the set is embedded in a Euclidean space (or Hilbert space), for any neighborhood $U$ there is a smaller neighborhood $V$ that contracts within $U$. Our holes/gaps are neither closed nor cell-like. The first deficiency is easily corrected by taking closures, the second by locating certain 2-disks to add to the holes or gaps. We use the notations hole+ and gap+ for holes or gaps that have been closed and augmented with an additional 2-disk to become cell-like. In a sense adding the + is taking a step back, because we are giving up some of the “explored region” of $C$ covered by the design, but sometimes to take two steps forward one must take a step backwards. This is an example. Now comes the key diagram, which will divide what remains of the proof into two separate theorems: 1) the Edwards shrink and 2) the “Sphere to Sphere” theorem.

Figure 4. The common quotient: the map $\alpha:H\rightarrow Q$ crushes $\\{$holes+$\\}$ and $\beta:C\rightarrow Q$ crushes $\\{$gaps+$\\}$; both maps are ABH.

Tautologically $H$ and $C$ have a common quotient $Q$, obtained by crushing the holes+, respectively gaps+, to points. This is the benefit of having found the design $D$ on both sides. It turns out both quotient maps can be shown to be approximable by homeomorphisms (ABH). This, of course, would finish the proof that Casson handles $C$ are homeomorphic to the standard open handle $H$, by composing one approximating homeomorphism with the inverse of the other. And with this result in hand the Poincaré Conjecture PC4, and much more within 4D topology, follows. Edwards's analysis of the first map $\alpha$ is a tour de force of Bing topology. Many of the best ideas in that field enter: it encompasses and generalizes the Andrews–Rubin shrinking technology and also contains a compound use of the principle that “countable, null, star-like equivalent” decompositions are shrinkable [bean67]. It is a beautiful and pure application of the Bing shrinking criterion.
The arrow $\beta$ might be called the “blindfold shrink.” Ric Ancel [ancel84], Jim Cannon, Frank Quinn, and Mike Starbird helped me revise this argument and express it as an extension of Brown's proof of the Schoenflies theorem, a proof that I learned from them. The notation is a bit daunting but the idea is simple. After the Edwards shrink we know that both the domain and range of $\beta$ are manifolds, and ones that embed in $\mathbb{R}^{4}$; in fact one readily reduces to the case that both domain and range are standard 4-spheres. Then the idea is to look at the 8D graph of the function $\beta$ and try to improve it to a homeomorphism by systematically modifying it to remove its largest “flat spots.” The data (from the Edwards shrink) that the quotient space has nice local and global topology allows you to insert a local “drawing” of the decomposition space structure into the target, within a small neighborhood of the image of these largest decomposition elements. A natural procedure, see Figure 5, uses this inserted drawing to resolve these most singular points and make the function one-to-one over them. There is a blending problem: the resolution creates countably many small vertical spots; that is, the resolution is only a relation, not a function. But one should not worry, we are on the right track. One goes back and forth: sanding down first the largest horizontal spots, then flipping the workpiece over and sanding down the largest of these newly created vertical spots, flipping it back to sand down somewhat smaller horizontal spots, etc. One moves back and forth sanding and polishing until the limiting relation has neither horizontal nor vertical thickness, i.e. becomes a homeomorphism.

Figure 5. The neighborhood of the largest gap+, $X$, is drawn on the $y$-axis near $\beta(X)$ to facilitate a resolution of $\beta$ to a homeomorphism on $X$. (Schematic: the graph of $\beta$, with its flat spots, mapping the Casson handle $C$ to $\mathbb{R}^{4}\supseteq Q\cong H$.)
This is good, but small vertical spots inevitably result from pulling points where $\beta$ is already a homeomorphism over other, smaller elements of $\\{$gaps+$\\}$. This deficit is corrected later. I thank Arunima Ray and Mark Powell for including this lecture in their book; it is hoped that these brief recollections may add context or at least amusement. There is much left to be understood in four dimensions—the best of luck!
# El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing

Arash Einolghozati Abhinav Arora Lorena Sainz-Maza Lecanda Anuj Kumar Sonal Gupta
Facebook
<EMAIL_ADDRESS>

###### Abstract

Being able to parse code-switched (CS) utterances, such as Spanish+English or Hindi+English, is essential to democratize task-oriented semantic parsing systems for certain locales. In this work, we focus on Spanglish (Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances alongside their semantic parses. We examine the CS generalizability of various cross-lingual (XL) models and exhibit the advantage of pre-trained XL language models when data for only one language is present. As such, we focus on improving the pre-trained models for the case when only an English corpus alongside either zero or a few CS training instances is available. We propose two data augmentation methods for the zero-shot and the few-shot settings: fine-tune using translate-and-align and augment using a generation model followed by match-and-filter. Combining the few-shot setting with the above improvements decreases the initial $30$-point accuracy gap between the zero-shot and the full-data settings by two thirds.

## 1 Introduction

Code-switching (CS) is the alternation of languages within an utterance or a conversation Poplack (2004). It occurs under certain linguistic constraints but can vary from one locale to another Joshi (1982). We envision two usages of CS for virtual assistants. First, CS is very common in locales where there is a heavy influence of a foreign language (usually English) on the native “substrate” language (e.g., Hindi or Latin-American Spanish). Second, for other native languages, the prevalence of English-related tech words (e.g., Internet, screen) or media vocabulary (e.g., movie names) is very common.
While in the second case a model using contextual understanding should be able to parse the utterance, the first form of CS, which is our focus in this paper, needs cross-lingual (XL) capabilities in order to infer the meaning. There are various challenges for CS semantic parsing. First, collecting CS data is hard because it needs bilingual annotators. This gets even worse considering that the number of CS pairs grows quadratically. Moreover, CS is very dynamic and changes significantly by occasion and over time Poplack (2004). As such, we need extensible solutions that require little or no CS data while having the more commonly-accessible English data available. In this paper, we first focus on the zero-shot setup, for which we only use EN data for the same task domains (we call this in-domain EN data). We show that by translating the utterances to ES and aligning the slot values, we can achieve high accuracy on the CS data. Moreover, we show that having a limited amount of CS data alongside augmentation with synthetically generated data can significantly improve the performance. Our contributions are as follows: 1) We release a code-switched task-oriented dialog dataset, CSTOP (the dataset can be downloaded from https://fb.me/cstop_data), containing 5800 Spanglish utterances and a corresponding parsing task. To the best of our knowledge, this is the first code-switched parsing dataset of such size that contains utterances for both training and testing. 2) We evaluate strong baselines under various resource constraints. 3) We introduce two data augmentation techniques that improve the code-switching performance using monolingual data.
Figure 1: Example CS sentence and its annotation for the sequence [IN:GET_WEATHER Dime el clima [SL:DATE_TIME para next Friday]]

## 2 Task

In task-oriented dialog, the language understanding task consists of classifying the intent of an utterance, i.e., sentence classification, alongside tagging the slots, i.e., sequence labeling. We use the Task-Oriented Parsing dataset released by Schuster et al. (2019a) as our EN monolingual dataset. We release a similar dataset, CSTOP, of around 5800 Spanglish utterances over two domains, Weather and Device, which are collected and annotated by native Spanglish speakers. An example from CSTOP alongside its annotation is shown in Fig. 1. Note that the intent and slot labels start with $IN:$ and $SL:$, respectively. Our task is to classify the sentence intent, here IN:GET_WEATHER, as well as the label and value of the slots, here SL:DATE_TIME corresponding to the span para next Friday. Moreover, other words are classified as having no label, i.e., the O class. We discuss the details of this dataset in the next section. One of the unique challenges of this task, compared with common NER and language identification CS tasks, is the constant evolution of CS data. Since the task is concerned with spoken language, the nature of CS is very dynamic and keeps evolving from domain to domain and from one community to another. Furthermore, cross-lingual data for this task is also very rare. Most of the existing techniques either combine monolingual representations Winata et al. (2019a) or combine the datasets to synthesize code-switched data Liu et al. (2019). Lack of monolingual data for the substrate language (very realistic if you replace ES with a less common language) would make those techniques inapplicable. In order to evaluate the model in a task-oriented dialog setting, we use the exact-match accuracy (from now on, accuracy) as the primary metric.
This is simply defined as the percentage of utterances for which the full parse, i.e., the intent and all the slots, has been correctly predicted.

## 3 CSTOP Dataset

In this section, we provide details of the CSTOP dataset. We originally collected around 5800 CS utterances over two domains: Weather and Device. We picked these two domains as they represent complementary behavior. While Weather contains slot-heavy utterances (an average of 1.6 slots per utterance), Device is an intent-heavy domain with an average of only 0.8 slots per utterance. We split the data into 4077, 1167, and 559 utterances for training, testing, and validation, respectively. CS data collection proceeded in the following steps:

1. One of the authors, who is a native speaker of Spanish and uses Spanglish on a daily basis, generated a small set of CS utterances for the Weather and Device domains. Additionally, we also recruited bilingual EN/ES speakers who met our Spanglish speaker criteria guidelines, established following Escobar and Potowski (2015).
2. We wrote Spanglish data creation instructions and asked participants to produce Spanish-English CS utterances for each intent (i.e. ask for the weather, set device brightness, etc).
3. Next, we filtered out utterances from this pool to only retain those that exhibited true intra-sentential CS.
4. The collected utterances were labeled by two annotators, who identified the intent and slot spans. If the two annotators disagreed on the annotation for an utterance, a third annotator would resolve the disagreement to provide a final annotation for it.

Table 1 shows the number of distinct intents and slots for each domain and the number of utterances in CSTOP for each domain. We have also shown the 15 most common intents in the training set, alongside a representative Spanglish example and its slot values for those intents, in Table 2. The first value in a slot tuple is the slot label and the second is the slot value.
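The bracketed annotations of Figure 1 can be decoded into an intent plus (slot label, slot value) tuples of the kind shown in Table 2. A minimal decoder sketch (the bracket format comes from the dataset; the function name and the flat-parse restriction are ours):

```python
import re

def parse_top(annotation):
    """Decode a flat bracketed parse such as
    [IN:GET_WEATHER Dime el clima [SL:DATE_TIME para next Friday]]
    into (intent, [(slot_label, slot_value), ...]).

    Handles a single intent with non-nested slots, which covers the
    CSTOP examples shown here.
    """
    tokens = re.findall(r"\[|\]|[^\[\]\s]+", annotation)
    intent, slots = None, []
    label, span = None, []
    it = iter(tokens)
    for tok in it:
        if tok == "[":
            tag = next(it)             # the tag immediately follows "["
            if tag.startswith("IN:"):
                intent = tag
            else:                      # an SL: tag opens a slot span
                label, span = tag, []
        elif tok == "]":
            if label is not None:      # close the current slot
                slots.append((label, " ".join(span)))
                label = None
        elif label is not None:        # token inside a slot span
            span.append(tok)
    return intent, slots
```

For the Figure 1 example this yields the intent IN:GET_WEATHER and the single slot tuple (SL:DATE_TIME, "para next Friday").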
We can see that while most of the verbs and stop words are in Spanish, nouns and slot values are mostly in English. We further calculate the prevalence of Spanish and English words by using a vocabulary file of 20k words for each language. Each token in the CSTOP training set is assigned to the language for which that token has the lower rank. The ratio of Spanish to English tokens is around $1.34$, which matches our previous anecdotal observation. This ratio was consistent when increasing the vocabulary size even to 40k.

Domain | # intents | # slots | # utterances
---|---|---|---
Weather | 2 | 4 | 3692
Device | 17 | 6 | 2112

Table 1: CSTOP statistics

intent | utterance | slots
---|---|---
GET_WEATHER | ¿cómo estará el clima en Miami este weekend? | (LOCATION, Miami), (DATE_TIME, este weekend)
UNSUPPORTED_WEATHER | how many centimeters va a llover hoy | (DATE_TIME, hoy)
OPEN_RESOURCE | Abreme el gallery | (RESOURCE, el gallery)
CLOSE_RESOURCE | Cierra maps | (RESOURCE, maps)
TURN_ON | Prende el privacy mode | (COMPONENT, el privacy mode)
TURN_OFF | Desactiva el speaker | (COMPONENT, el speaker)
WAKE_UP | Quita sleep mode | -
SLEEP | prende el modo sleep | -
OPEN_HOMESCREEN | Go to pagina de inicio | -
MUTE_VOLUME | Desactiva el sound | -
UNMUTE_VOLUME | Prende el sound | -
SET_BRIGHTNESS | subir el brigtness al 80 | (PERCENT, 80)
INCREASE_BRIGHTNESS | Ponlo mas bright | -
DECREASE_BRIGHTNESS | baja el brightness | -
SET_VOLUME | Turn the volumen al nivel 10 | (PRECISE_AMOUNT, 10)
INCREASE_VOLUME | aumenta el volumen a little bit | -
DECREASE_VOLUME | Bájale a la music | -

Table 2: Examples from CSTOP intents

## 4 Model

Our base model is a bidirectional LSTM with separate projections for the intent and slot tagging Yao et al. (2013). We use the aligned word embeddings MUSE Lample et al. (2018) with a vocabulary size of 25k for both EN and ES.
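The rank-based language assignment used above to compute the Spanish-to-English token ratio can be sketched as follows (the toy vocabularies here are illustrative; the paper uses real 20k-word frequency-ranked vocabulary files per language):

```python
def assign_language(token, es_rank, en_rank):
    """Assign a token to the language in whose frequency-ranked
    vocabulary it has the lower (i.e. more frequent) rank; tokens
    absent from both vocabularies come out as "unk"."""
    r_es = es_rank.get(token, float("inf"))
    r_en = en_rank.get(token, float("inf"))
    if r_es == r_en:
        return "unk"
    return "es" if r_es < r_en else "en"

# Toy frequency-ranked vocabularies (illustrative only).
es_rank = {"el": 0, "clima": 1, "dime": 2, "para": 3}
en_rank = {"the": 0, "weather": 1, "next": 2, "friday": 3, "para": 10}

tokens = ["dime", "el", "clima", "para", "next", "friday"]
langs = [assign_language(t, es_rank, en_rank) for t in tokens]
# "para" appears in both vocabularies but ranks lower (more frequent)
# in Spanish, so it is counted as Spanish.
```

The corpus-level ratio is then simply the count of "es" assignments divided by the count of "en" assignments over the training tokens.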
Our experiments showed that XL generalization is best when the word embeddings are frozen if the training data contains only EN or ES utterances. We refer to this model simply as MUSE. We also use SOTA pre-trained XL models: XLM Conneau and Lample (2019) and XLM-R Conneau et al. (2020). These models are pre-trained via Masked Language Modeling (MLM) Devlin et al. (2019) on massive multilingual data. They share a sub-word token representation across languages, BPE Sennrich et al. (2016) and SentencePiece Kudo and Richardson (2018) respectively, as well as a common MLM transformer. Moreover, while XLM is pre-trained on Wikipedia, XLM-R is trained on crawled web data, which contains more non-English and possibly CS data. In order to adapt these models for the joint intent classification and slot tagging task, we use the method described in Chen et al. (2019). For classification, we add a linear classifier on top of the first hidden state of the Transformer. A typical slot tagging model feeds the hidden state corresponding to each token to a CRF layer Mesnil et al. (2015). To make this compatible with XLM and XLM-R, we use the hidden states corresponding to the first sub-word of every token as the input to the CRF layer. Table 3 shows the accuracy of the above models on CSTOP. We have also listed the performance when the models were first fine-tuned on the EN data (CS+EN). We observe that in-domain fine-tuning can almost halve the gap between XLM-R and XLM; the latter is around $50\%$ faster than XLM-R during inference. The training details for all our models and the validation results are listed in the Appendix.

## 5 Zero-shot performance

The bottom part of Table 3 shows the CS test accuracy when using only the in-domain monolingual data. Our EN dataset is the task-oriented parsing dataset Schuster et al. (2019a) described in the previous section.
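The first-sub-word selection used to build the CRF input in Section 4 can be sketched in a framework-agnostic way. The `word_ids` convention (each sub-word mapped to the index of its original token, `None` for special tokens) follows common sub-word tokenizer APIs, and all names here are illustrative.

```python
def first_subword_states(hidden_states, word_ids):
    """Keep the hidden state of the first sub-word of every token.
    `word_ids[i]` is the index of the original token that sub-word i
    belongs to, or None for special tokens."""
    selected, seen = [], set()
    for state, wid in zip(hidden_states, word_ids):
        if wid is not None and wid not in seen:
            seen.add(wid)
            selected.append(state)
    return selected

# "Prende el privacy mode" -> hypothetical sub-words:
# [Pren, de, el, privacy, mo, de]; strings stand in for hidden vectors.
states = ["h0", "h1", "h2", "h3", "h4", "h5"]
word_ids = [0, 0, 1, 2, 3, 3]
print(first_subword_states(states, word_ids))  # ['h0', 'h2', 'h3', 'h4']
```

The CRF layer then sees exactly one state per original token, so the slot tag sequence has the same length as the token sequence.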
Since the original TOP dataset did not include any utterances belonging to the Device domain, we also release a dataset of around a thousand EN Device utterances for the experiments using the EN data. In order to showcase the effect of monolingual ES data, we also experiment with an in-domain ES dataset, i.e., ES Weather and Device queries.

Lang/Model | MUSE | XLM | XLM-R
---|---|---|---
CS | 87.0 | 86.6 | 94.4
CS + EN | 88.1 | 93.0 | 95.4
EN | 39.2 | 54.8 | 66.6
ES | 69.9 | 78.3 | 88.1
EN+ES | 88.2 | 87.8 | 91.2

Table 3: Full-training (top) and zero-shot (bottom) accuracy of XL models when using different monolingual corpora. ES is an internal dataset used to showcase the effect of having a large Spanish corpus.

We observe that having monolingual data for both languages yields very high accuracy, only a few points shy of training directly on the CS data. Moreover, in this setting, even simpler models such as MUSE can yield results competitive with XLM-R while being much faster. However, the advantage of XL pre-training becomes evident when only one of the languages is present. As such, having only the substrate language (i.e., ES) is almost as good as having both languages for XLM-R. Note that we do not use the ES data for the other results in this paper. Obtaining a semantic parsing dataset in another language is expensive, and often only EN data is available. Our experiments show a huge performance gap when only using the EN data, and thus in this paper we focus on using the EN data alongside zero or a few CS instances.

### 5.1 Effect of XL Embeddings

Here, we explore how much of the zero-shot performance can be attributed to the XL embeddings as opposed to the shared XL representation. To this end, we experiment with replacing the MUSE embeddings with other embeddings in the LSTM model described in the previous section.
We experiment with the following strategies: (1) Random embeddings: the ES and EN word embeddings are learned from scratch. (2) Randomly-initialized SentencePiece Kudo and Richardson (2018) (RSP): words are represented by word-piece tokens learned from a huge unlabeled multilingual corpus. (3) Pre-trained XLM-R SentencePiece (XLSP): the 250k embedding vectors learned during the pre-training of XLM-R. We show the effects of the aforementioned embeddings in the zero-shot setting in Table 4. We can see that with monolingual datasets from both languages, even random embeddings can yield high performance. By removing one of the languages, unsurprisingly, the codeswitching generalizability drops sharply for all strategies, but much less so for XLSP and MUSE. Moreover, even though the XLSP embeddings, unlike MUSE, are not constrained to only EN and ES, they yield results comparable to the word-based MUSE embeddings. We can also see that when ES data is available, RSP provides some codeswitching generalizability compared with the Random strategy, but not when only EN data is available. We hypothesize that the shared sub-word tokens are more helpful for generalizing over the slot values (which in the codeswitched data are mostly in EN) than over the non-slot parts of queries, which are more commonly in ES. This is also supported by the observation that most of the gains of RSP over Random in the ES-only scenario come from the slot tagging accuracy rather than from intent detection. As a final note, we observe that between $20-30\%$ of the XLM-R gains can be captured by using the pre-trained sentence-piece embeddings, while the rest come from the shared XL representation pre-trained on massive unlabeled data. In the rest of the paper, we focus on the XLM-R model.
 | Random | RSP | XLSP | MUSE
---|---|---|---|---
EN | 13.5 | 12.2 | 30.3 | 39.2
ES | 38.2 | 48.0 | 70.5 | 69.9
EN+ES | 81.1 | 84.3 | 89.0 | 88.2

Table 4: Zero-shot accuracy for the simple LSTM model when using different monolingual corpora and different embedding strategies.

Figure 2: An example comparison between the two methods of slot label projection. The image on the left shows attention alignment, where every source token gets projected to a single target token. As a result, percent in EN is aligned only with ciento in ES. The image on the right shows fast-align, which allows a many-to-many alignment. Hence percent is correctly aligned with por ciento.

## 6 Data Augmentation Approaches

In this section, we discuss two data augmentation approaches. The first operates in a zero-shot setting and only uses EN data to improve performance on the Spanglish test set. In the second approach, we assume a limited number of Spanglish utterances and use the EN data to augment the few-shot setting.

### 6.1 Translate and Align

We explore creating synthetic ES data from the EN dataset using machine translation. Since our task is a joint intent and slot tagging task, creating a synthetic ES corpus consists of two parts: a) obtaining a parallel EN-ES corpus by machine-translating utterances from EN to ES, and b) projecting gold annotations from the EN utterances to their ES counterparts via word alignment Tiedemann et al. (2014); Lee et al. (2019b). Once the words in both languages are aligned, the slot annotations are simply copied over from EN to ES. For word alignment, we explore the two methods explained below.
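Once an alignment is available, the annotation-copy step amounts to mapping each labeled EN position to its aligned ES positions. A minimal sketch follows; the token-level slot representation and the hand-made alignment are ours, chosen to mirror the "set volume to 10" example of Figure 2.

```python
def project_slots(en_slots, alignment, n_es_tokens):
    """Project EN slot labels onto ES tokens through a word alignment.
    `en_slots[i]` is the slot label of EN token i (None if unlabeled);
    `alignment` is a set of (en_index, es_index) pairs."""
    es_slots = [None] * n_es_tokens
    for en_i, es_i in alignment:
        if en_slots[en_i] is not None:
            es_slots[es_i] = en_slots[en_i]
    return es_slots

# EN: "set volume to 10"  ->  ES: "ajuste el volumen a 10"
en_slots = [None, None, None, "SL:AMOUNT"]
alignment = {(0, 0), (1, 2), (2, 3), (3, 4)}
print(project_slots(en_slots, alignment, 5))
# [None, None, None, None, 'SL:AMOUNT']
```

With a many-to-many alignment (as produced by fast-align after symmetrization), a single EN token may label several ES tokens, which is exactly what the one-to-one attention method cannot do.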
In some cases, word alignment may produce discontinuous slot tokens in ES, which we handle by introducing new slots of the same type for all discontinuous slot fragments. Our first method leverages the attention scores Bahdanau et al. (2015) obtained from an existing EN-to-ES NMT model. We adopt the simplifying assumption that each source word is aligned to one target-language word Brown et al. (1993). For every slot token in the source language, we select the highest attention score to align it with a word in the target language. Our second approach to annotation projection makes use of unsupervised word alignment from statistical machine translation. Specifically, we use the fast-align toolkit Dyer et al. (2013) to obtain alignments between EN and ES tokens. Since fast-align generates asymmetric alignments, we generate two sets of alignments, EN-to-ES and ES-to-EN, and symmetrize them using the grow-diag-final-and heuristic Koehn et al. (2003) to obtain the final alignments. In Table 5, we show the CS zero-shot accuracy when fine-tuning on the newly generated ES data (called $\textrm{ES}^{*}$) alongside the original EN data. We can see that unsupervised alignment results in around 2.5 absolute points of accuracy improvement. On the other hand, using attention alignment ends up hurting the accuracy, which is perhaps due to the slot noise that it introduces. The assumption that a single source token aligns with a single target token leads to incorrect annotations when the length of a translated slot differs between EN and ES. Figure 2 shows an example utterance where attention alignment produces an incorrect annotation compared to unsupervised alignment.

EN | EN+$\textrm{ES}^{*}$ Attn | EN+$\textrm{ES}^{*}$ aligned
---|---|---
66.6 | 65.8 | 69.2

Table 5: Zero-shot accuracy when fine-tuning XLM-R on EN monolingual data as well as the auto-translated and aligned ES data (called ES*).
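The attention-based projection reduces to an argmax over each source token's attention row. A toy sketch with invented scores shows how the one-to-one assumption keeps only a single target word per source token:

```python
def attention_align(attn):
    """One target position per source token: for each source row,
    pick the target index with the highest attention score."""
    return [max(range(len(row)), key=row.__getitem__) for row in attn]

# Source "percent" vs. targets ("por", "ciento", ...): a hard argmax keeps
# only the single best-scoring target, so "por" would be dropped from the slot.
attn = [
    [0.1, 0.8, 0.1],   # source token 0 -> target 1
    [0.3, 0.3, 0.4],   # source token 1 -> target 2
]
print(attention_align(attn))  # [1, 2]
```

This is exactly the failure mode of Figure 2: a one-word EN slot token aligned against a two-word ES translation loses part of the span, injecting slot noise into the synthetic data.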
Model/Training Data | Few Shot | Few Shot + Generate and Filter augmentation
---|---|---
XLM-R | 61.2 | 70.3
XLM-R fine-tuned on EN | 82.6 | 83.7
XLM-R fine-tuned on EN+$\textrm{ES}^{*}$ | 84.1 | 84.8

Table 6: Accuracy when only a few CS instances are available during training, with and without the data augmentation. ES* is the auto-translated and aligned data.

[IN:GET_WEATHER show me the weather [SL:DATE_TIME for next Monday ] ]
[IN:GET_WEATHER Dime el clima [SL:DATE_TIME para next Friday ] ]
[IN:GET_WEATHER Quiero saber el clima [SL:DATE_TIME para next Monday ] ]
[IN:GET_WEATHER Dime el clima esperado [SL:DATE_TIME para next Friday ] ]
[IN:GET_WEATHER Dime el pronóstico [SL:DATE_TIME hasta el 15 ] ]

Figure 3: Match-and-Filter data augmentation: (1) for each CS utterance (target), find the closest EN neighbor (source); (2) learn a generative model from source to target; (3) perform beam search to generate more targets from the source utterances.

### 6.2 Generate by Match-and-Filter in the Few-shot Setting

Here, we assume access to a limited number of high-quality in-domain CS utterances, and accordingly construct the $\textrm{CSTOP}_{\textrm{100}}$ dataset of around $100$ utterances from the original CSTOP training set. We make sure that every individual slot and intent (but not necessarily their combination) is present in $\textrm{CSTOP}_{\textrm{100}}$ and randomly sample the rest. We perform this sampling three times and report few-shot results as the average performance. This setting is of paramount importance for bringing up a domain in a new locale when the EN data is already available. The first column of Table 6 shows the CS Few-Shot (FS) performance, alongside fine-tuning on the EN data and the aligned translated data, averaged over the three samplings of $\textrm{CSTOP}_{\textrm{100}}$. In order to improve the FS performance, we perform data augmentation on the $\textrm{CSTOP}_{\textrm{100}}$ dataset. Unlike methods such as Pratapa et al.
(2018), we seek generic methods that do not need extra resources such as constituency parsers. Instead, we explore using pre-trained generative models while taking advantage of the EN data. We use BART Lewis et al. (2020), a denoising autoencoder trained on a massive amount of web data, as the generative model. Our goal is to generate diverse Spanglish data from the EN data. Even though BART was trained on English, we found it very effective for this task. We hypothesize this is due to the abundance of Spanish text among EN web data and the proximity of the word-piece tokens between the two languages. We also experimented with multilingual BART Liu et al. (2020a) but found it very challenging to fine-tune for this task. First, we convert the data to a bracket format Vinyals et al. (2015), called the seqlogical form in Gupta et al. (2018). Examples of this format are shown in Fig. 3. In the seqlogical form, we include the intent (i.e., the sentence label) at the beginning, and for each slot we include the label and text in brackets. We perform our data augmentation technique in the following steps:

1. Find the top-K closest EN neighbors to every CS query in $\textrm{CSTOP}_{\textrm{100}}$. We require the neighbors to have the same parse as the CS utterance, i.e., the same intent and the same slot labels, and use the Levenshtein distance to rank the EN sequences.
2. Treating this parallel corpus, i.e., the top-K EN neighbors as the source and the original CS query as the target, fine-tune the BART model. We use K=10 in our experiments to increase the parallel data size to around $650$.
3. During inference, use a beam size of 5 to decode CS utterances from the same EN source data. Since both the source and target sequences are in the seqlogical form, the generated CS sequences are already annotated.

In Fig. 3, we have shown the closest EN neighbor corresponding to the original CS example in Fig. 1.
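Step 1 above, ranking EN candidates by edit distance, can be sketched as follows. The Levenshtein routine is a standard dynamic program, not the paper's code, and the same-parse constraint (matching intent and slot labels) is assumed to have been applied before ranking.

```python
def levenshtein(a, b):
    """Standard character-level edit-distance dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[-1] + 1,                  # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def top_k_neighbors(cs_utt, en_utts, k):
    """Return the k EN utterances closest to the CS utterance.
    (Candidates are assumed to already share its intent and slot labels.)"""
    return sorted(en_utts, key=lambda en: levenshtein(cs_utt, en))[:k]

print(levenshtein("kitten", "sitting"))  # 3
candidates = [
    "show me the weather for next Monday",
    "tell me the weather for next Friday",
    "will it rain tomorrow",
]
print(top_k_neighbors("dime el clima para next Friday", candidates, k=1))
```

Running this over $\textrm{CSTOP}_{\textrm{100}}$ with K=10 yields the roughly 650 source-target pairs used to fine-tune BART.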
The CS utterance can be seen as a rough translation of the EN sentence. We have also shown the top three generated CS utterances for the EN example. In order to reduce noise, we filter out generated sequences that already exist in $\textrm{CSTOP}_{\textrm{100}}$, are not valid trees, or have a semantic parse different from the original utterance. We augment $\textrm{CSTOP}_{\textrm{100}}$ with this data and fine-tune the XLM-R baseline. In the second column of Table 6, we show the average data augmentation improvement over the three $\textrm{CSTOP}_{\textrm{100}}$ samples for the few-shot setting. We can see that even after fine-tuning on the EN monolingual data (the second row), the augmentation technique improves this strong baseline. In the last row, we first use the translation alignment of the previous section to obtain $\textrm{ES}^{*}$. After fine-tuning on this set combined with the EN data, we further fine-tune on $\textrm{CSTOP}_{\textrm{100}}$. We can see that the best model enjoys improvements from both the zero-shot (translate-and-align) and the few-shot (generate-and-filter) augmentation techniques. We also note that the p-values corresponding to the second- and third-row gains are 0.018 and 0.055, respectively.

## 7 Related Work

### 7.1 XL Pre-training

Most of the initial work on pre-trained XL representations focused on embedding alignment Xing et al. (2015); Zhang et al. (2017); Lample et al. (2018). Recent developments in this area have focused on context-aware XL alignment of contextual representations Schuster et al. (2019b); Aldarmaki and Diab (2019); Wang et al. (2019); Cao et al. (2020). Recently, pre-trained multilingual language models such as mBERT Devlin et al. (2019), XLM Conneau and Lample (2019), and XLM-R Conneau et al. (2020) have been introduced, and Pires et al. (2019) demonstrate their effectiveness on sequence labeling tasks. Separately, Liu et al.
(2020a) introduce mBART, a sequence-to-sequence denoising autoencoder pre-trained on monolingual corpora in many languages using the BART objective (Lewis et al., 2020).

### 7.2 Code-Switching

Following the ACL shared tasks, CS is mostly discussed in the context of word-level language identification Molina et al. (2016) and NER Aguilar et al. (2018). Techniques such as curriculum learning Choudhury et al. (2017) and attention over different embeddings Wang et al. (2018); Winata et al. (2019a) have been among the successful approaches. CS parsing and the use of monolingual parsers are discussed in Sharma et al. (2016); Bhat et al. (2017, 2018). Sharma et al. (2016) introduce a Hinglish test set for a shallow parsing pipeline. In Bhat et al. (2017), the outputs of two monolingual dependency parsers are combined to obtain a CS parse. Bhat et al. (2018) extend this test set by including training data and transfer knowledge from monolingual treebanks. Duong et al. (2017) introduce a CS test set for semantic parsing curated by combining utterances from two monolingual datasets. In contrast, CSTOP is procured independently of any monolingual data and exhibits much more linguistic diversity. In Pratapa et al. (2018), linguistic rules are used to generate CS data, which has been shown to be effective in reducing the perplexity of a CS language model. In contrast, our augmentation techniques are generic and do not require rules or constituency parsers.

### 7.3 XL Data Augmentation

Most approaches to cross-lingual data augmentation use machine translation and slot projection for sequence labeling tasks (Jain et al., 2019). Wei and Zou (2019) use simple operations such as synonym replacement, and Lee et al. (2019a) use phrase replacement from a parallel corpus to augment the training data. Singh et al. (2019) present XLDA, which augments data by replacing segments of the input text with their translations in other languages.
Some recent approaches (Chang et al., 2019; Winata et al., 2019b) also train generative models to artificially generate CS data. More recently, Kumar et al. (2020) study data augmentation using pre-trained transformer models by incorporating label information during fine-tuning. Concurrent to our work, Bari et al. (2020) introduce Multimix, where data augmentation from pre-trained multilingual language models and self-learning are used for semi-supervised learning. Liu et al. (2019, 2020b) generate CS data by translating keywords picked based on attention scores from a monolingual model.

### 7.4 Task-oriented Dialog

The intent/slot framework is the most common way of performing language understanding for task-oriented dialog. A bidirectional LSTM for the sentence representation, alongside separate projection layers for intent and slot tagging, is the typical architecture for the joint task Yao et al. (2013); Mesnil et al. (2015); Hakkani-Tür et al. (2016). Such representations can accommodate trees of depth up to two, as is the case in CSTOP. More recently, extensions of this framework have been introduced to fit deeper trees Gupta et al. (2018); Rongali et al. (2020).

## 8 Conclusion

In this paper, we propose a new task for code-switched semantic parsing and release a dataset, CSTOP, containing 5800 Spanglish utterances over two domains. We hope this fosters further research on the code-switching phenomenon, which has been held back by a paucity of sizeable curated datasets. We show that cross-lingual pre-trained models generalize better than traditional models to the code-switched setting when monolingual data from only one language is available. In the presence of only EN data, we introduce generic augmentation techniques based on translation and generation. In particular, we show that translating and aligning the EN data can significantly improve the zero-shot performance.
Moreover, generating code-switched data using a generation model and a match-and-filter approach leads to improvements in a few-shot setting. We leave exploring and combining other augmentation techniques to future work. ## References * Aguilar et al. (2018) Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task. In _Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching_ , pages 138–147, Melbourne, Australia. Association for Computational Linguistics. * Aldarmaki and Diab (2019) Hanan Aldarmaki and Mona Diab. 2019. Context-aware cross-lingual mapping. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3906–3911, Minneapolis, Minnesota. Association for Computational Linguistics. * Aly et al. (2018) Ahmed Aly, Kushal Lakhotia, Shicong Zhao, Mrinal Mohit, Barlas Oguz, Abhinav Arora, Sonal Gupta, Christopher Dewan, Stef Nelson-Lindall, and Rushin Shah. 2018. Pytext: A seamless path from NLP research to production. _CoRR_ , abs/1812.08729. * Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_. * Bari et al. (2020) M. Saiful Bari, Muhammad Tasnim Mohiuddin, and Shafiq R. Joty. 2020. Multimix: A robust data augmentation strategy for cross-lingual NLP. _CoRR_ , abs/2004.13240. * Bhat et al. (2017) Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2017. Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data. 
In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers_ , pages 324–330, Valencia, Spain. Association for Computational Linguistics. * Bhat et al. (2018) Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2018. Universal dependency parsing for Hindi-English code-switching. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 987–998, New Orleans, Louisiana. Association for Computational Linguistics. * Brown et al. (1993) Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. _Computational Linguistics_ , 19(2):263–311. * Cao et al. (2020) Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net. * Chang et al. (2019) Ching-Ting Chang, Shun-Po Chuang, and Hung-yi Lee. 2019. Code-switching sentence generation by generative adversarial networks and its application to data augmentation. In _Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019_ , pages 554–558. ISCA. * Chen et al. (2019) Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling. _ArXiv_ , abs/1902.10909. * Choudhury et al. (2017) Monojit Choudhury, Kalika Bali, Sunayana Sitaram, and Ashutosh Baheti. 2017. Curriculum design for code-switching: Experiments with language identification and language modeling with deep neural networks. In _Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017)_ , pages 65–74, Kolkata, India. NLP Association of India. 
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , pages 8440–8451. Association for Computational Linguistics. * Conneau and Lample (2019) Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In _Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada_ , pages 7057–7067. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. * Duong et al. (2017) Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2017. Multilingual semantic parsing and code-switching. In _Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)_ , pages 379–389, Vancouver, Canada. Association for Computational Linguistics. * Dyer et al. (2013) Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In _Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. * Escobar and Potowski (2015) Anna Maria Escobar and Kim Potowski. 2015. 
_El Español de los Estados Unidos_. Cambridge University Press. * Gupta et al. (2018) Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018_ , pages 2787–2792. Association for Computational Linguistics. * Hakkani-Tür et al. (2016) Dilek Hakkani-Tür, Gökhan Tür, Asli Çelikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In _Interspeech 2016, 17th Annual Conference of the International Speech Communication Association, San Francisco, CA, USA, September 8-12, 2016_ , pages 715–719. ISCA. * Jain et al. (2019) Alankar Jain, Bhargavi Paranjape, and Zachary C. Lipton. 2019. Entity projection via machine translation for cross-lingual NER. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1083–1092, Hong Kong, China. Association for Computational Linguistics. * Joshi (1982) Aravind K. Joshi. 1982. Processing of sentences with intra-sentential code-switching. In _Coling 1982: Proceedings of the Ninth International Conference on Computational Linguistics_. * Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_. * Koehn et al. (2003) Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In _Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics_ , pages 127–133. 
* Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 66–71, Brussels, Belgium. Association for Computational Linguistics. * Kumar et al. (2020) Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained transformer models. _CoRR_ , abs/2003.02245. * Lample et al. (2018) Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In _6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings_. OpenReview.net. * Lee et al. (2019a) Grandee Lee, Xianghu Yue, and Haizhou Li. 2019a. Linguistically motivated parallel data augmentation for code-switch language modeling. In _Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019_ , pages 3730–3734. ISCA. * Lee et al. (2019b) Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang, and Juho Lee. 2019b. Learning with limited data for multilingual reading comprehension. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2840–2850, Hong Kong, China. Association for Computational Linguistics. * Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. 
## Appendix A

Here, we describe the training details as well as the validation results.

### A.1 Model and Training Parameters

In Table 7, we show the training details for all our models. We use ADAM Kingma and Ba (2015) with the Learning Rate (LR), Weight Decay (WD), and Batch Size (BSz) listed for each model. We also report the number of epochs and the average training time for the full CS data using $8$ V100 Nvidia GPUs. For all our XLM-R experiments, we use XLM-R large from PyText (https://pytext.readthedocs.io/en/master/xlm_r.html) Aly et al. (2018), which is pre-trained on 100 languages. For the XLM experiments, we use XLM-20, pre-trained on 20 languages, with the same fine-tuning parameters as XLM-R but run for more epochs. For the LSTM models, we use a two-layer LSTM with a hidden dimension of $256$ and dropout of $0.3$ for all connections. We use a one-layer MLP of dimension $200$ for both slot tagging and intent classification. We also use an ensemble of five models for all the LSTM experiments to reduce variance. The LSTM model with SentencePiece embeddings in Table 4 was trained with an embedding dimension of $1024$, similar to the XLM-R model.

### A.2 Validation Results

In Table 9, we show the validation results when using the full CS training data. We do not show the corresponding results for the zero-shot experiments, as no validation data was used and the monolingual models were tested off the shelf. In Table 8, we show the validation results for the few-shot setting.

Model | BSz | LR | WD | Epoch | Avg Time
---|---|---|---|---|---
XLM-R (pronoun) | 8 | 0.000005 | 0.0001 | 15 | 5 hr
XLM (pronoun) | 8 | 0.000005 | 0.0001 | 20 | 1 hr
LSTM (pronoun+question) | 64 | 0.03 | 0.00001 | 45 | 45 min

Table 7: Training Parameters

Model/Training Data | Few shot | Few shot + Generate and Filter Augmentation
---|---|---
XLM-R | 61.7 | 70.4
XLM-R fine-tuned on EN | 83.3 | 83.9
XLM-R fine-tuned on EN+$\textrm{ES}^{*}$ | 83.5 | 84.9

Table 8: Validation accuracy when only a few CS instances (FS) are available during training. FS+G refers to augmenting the few-shot instances with generated CS data. ES* is the auto-translated and aligned data.

Lang/Model | MUSE | XLM | XLM-R
---|---|---|---
CS | 87.8 | 90.7 | 95.0
CS + EN | 89.0 | 92.9 | 95.5

Table 9: Validation results for full training on the CS data
# Self-Calibrating Indoor Localization with Crowdsourcing Fingerprints and Transfer Learning Chenlu Xiang†, Shunqing Zhang†, Shugong Xu†, and George C. Alexandropoulos‡ † Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China ‡ Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Greece Email: {xcl, shunqing<EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Precise indoor localization is one of the key requirements for fifth Generation (5G) and beyond wireless communication systems, whose applications span various vertical sectors. Although many highly accurate fingerprint-based localization methods have been proposed recently, the vast majority of them suffer degraded performance when deployed in indoor systems where the propagation environment changes rapidly. To address this issue, the crowdsourcing approach has been adopted, according to which the fingerprints are frequently updated in the respective database via user reporting. However, existing crowdsourcing techniques require precise indoor floor plans and fail to provide satisfactory accuracy. In this paper, we propose a low-complexity self-calibrating indoor crowdsourcing localization system that combines historical with frequently updated fingerprints for high-precision user positioning. We present a multi-kernel transfer learning approach which exploits the inner relationship between the original and updated channel measurements. Our indoor laboratory experiments with the proposed approach, using Nexus 5 smartphones at 2.4 GHz with 20 MHz bandwidth, demonstrate the feasibility of about one-meter-level accuracy with a reasonable fingerprint update overhead. ###### Index Terms: Indoor localization, crowdsourcing, channel state information, fingerprint database, transfer learning.
## I Introduction Among the technical requirements for fifth Generation (5G) wireless communications, and the even more demanding provisions of 6G, is precise indoor localization, whose representative applications include accurate navigation [1] in shopping malls and seamless tracking in smart factories [2]. Different from outdoor localization, where the combination of the global navigation satellite system (GNSS) with the inertial navigation system (INS) [3] provides satisfactory accuracy, the indoor case still admits diversified solutions, such as those based on low-cost Bluetooth Low Energy (BLE) [4], the increasingly popular 3GPP LTE/5G [5], and the widely deployed WiFi technologies [6, 7, 8, 9, 10, 11]. Nevertheless, signal fingerprinting approaches, based on the large-scale received signal strength indicator (RSSI) [6], reference signal received power (RSRP) [5], or small-scale channel state information (CSI) [9, 11], are usually recognized as the most efficient solutions for high-accuracy indoor localization. Many methods based on signal fingerprints, e.g., [6, 5] and [9], have achieved record-breaking localization results. These methods aim to extract the intrinsic features of wireless signals in the training phase, which are then used in the online operating phase to predict the user location in conjunction with real-time measurements. The RSSI [6] and RSRP [5] metrics have been shown to be highly correlated with spatial location, and have thus been deployed to improve localization accuracy to the meter level in indoor and outdoor scenarios, respectively. Furthermore, CSI measurements, frequently reported in [7, 8, 9, 10, 11], have further improved localization accuracy. These measurements have a more complex structure and more dimensions than traditional fingerprints.
The probabilistic models between the collected CSI and the candidate locations are established through classifiers, such as deterministic k-nearest neighbor (KNN) algorithms [7], probabilistic Bayes rule algorithms [8], and deep learning based algorithms [9, 12]. The above fingerprint based solutions have been shown to achieve sub-meter, or even decimeter-level, accuracy if CSI from multiple APs [10], multiple frequency bands [11], and/or multiple antennas [8] can be fused together. The availability of timely and accurate signal fingerprint maps is of paramount importance for the aforementioned highly accurate localization approaches. However, the collection of fingerprint maps is often a labor-intensive task. Furthermore, fingerprint maps can be easily corrupted by fluctuations in the wireless channels due to human movement or time-varying scattering and reflections. It was shown in [13] that those factors can gradually degrade the localization accuracy over time. In order to solve this problem, [14] proposed to regularly update the fingerprint database so as to maintain the positioning accuracy without deterioration. However, the associated system maintenance costs were inevitably high. Recently, a low-cost alternative scheme named _crowdsourcing_ [15] has been proposed to keep the fingerprint database fresh via collaborative user reporting. With the aid of floor plans, existing crowdsourcing systems [16] can update the fingerprint database by matching mobile users' zigzag routes, estimated from inertial measurement unit (IMU) measurements over a period of time, with a pre-acquired floor plan. However, the above method suffers from the inaccurate location information of crowdsourcing users, which often results in error-accumulation events, as shown in [16].
In addition, the localization approaches of [17, 18] only update the fingerprint database with newly collected fingerprints using conventional multi-dimensional scaling (MDS) [17] or marginalized particle filtering (MPF) [18] schemes, which have been shown to exhibit poor localization accuracy. To address the aforementioned issues with the crowdsourcing approaches, we present in this paper a novel self-calibrating indoor localization system that is based on a WiFi fingerprint database. By efficiently utilizing the dynamically updated fingerprints from crowdsourcing, we introduce the maximum mean discrepancy (MMD) to describe the differences between their distributions, which is further extended to the multi-kernel MMD (MK-MMD) by incorporating the conditional CSI distributions. Furthermore, a combined loss function for the proposed deep transfer learning framework is designed to balance the localization accuracy and the MK-MMD distances. The proposed scheme keeps the WiFi fingerprint database as well as the neural network models updated, in order to efficiently track the variations of the wireless channel over long time periods, while exhibiting reasonable implementation complexity. Our experimental results in an indoor laboratory environment showcase that the proposed localization scheme can achieve mean localization errors as low as one meter. The rest of this paper is organized as follows. In Section II, we introduce our proposed crowdsourcing indoor localization system. We then present the considered problem formulation and the implementation details of the proposed deep transfer learning approach in Section III. Our experimental results are discussed in Section IV, while Section V concludes this paper. ## II System Model In this section, we present the proposed crowdsourcing system architecture, which is depicted in Fig. 1, and discuss the initialization of the architecture's fingerprint database as well as its updating procedure via crowdsourcing.
Figure 1: The proposed crowdsourcing system includes one or more reporters, a server, and a requester. The system operator designs the initial fingerprint database at the server, and the reporter frequently provides updates of its fingerprints with location labels. Based on this information, the server keeps updating the mapping between the stored fingerprints and the reporter locations. In the online phase, the requester leverages this updated mapping to obtain its precise location. ### II-A Fingerprint Database Initialization The initial fingerprint database, whose content is denoted as $\mathcal{DB}_{0}$, stores the CSI samples from $N_{R}$ discrete grid locations, i.e., $\\{\mathcal{A}_{m},\forall m\in[1,\ldots,N_{R}]\\}$. We make use of the notation $\mathbf{H}(\mathcal{A}_{m},T_{0})\in\mathbb{C}^{N_{sc}\times N_{s}}$ for the aggregated channel response of $N_{sc}$ subcarriers and $N_{s}$ consecutive orthogonal frequency division multiplexing (OFDM) symbols in grid area $\mathcal{A}_{m}$ at the initial time instant $T_{0}$. The entire fingerprint database is initialized as follows: $\displaystyle\mathcal{DB}_{0}=\left\\{\left(m,\mathbf{H}(\mathcal{A}_{m},T_{0})\right),\forall m\in[1,\ldots,N_{R}]\right\\}.$ (1) In practice, it is in general hard to obtain the channel responses of the entire grid area. For our fingerprint database initialization, the CSI samples at the center locations $\\{\mathcal{L}_{m},\forall m\in[1,\ldots,N_{R}]\\}$ of each grid area (see the red points in our laboratory experimental setup, illustrated later in Fig. 3) are considered as the regional CSI $\mathbf{H}(\mathcal{A}_{m},T_{0})$, and the CSI at the reference point $\mathcal{L}_{m}$, i.e., $\mathbf{H}(\mathcal{L}_{m},T_{0})$, can be obtained via a standard MMSE detection algorithm, as discussed in [19].
### II-B Fingerprint Database Update During each interval between the time instants $T_{n}$ and $T_{n+1}$, we assume that $N^{r}_{n+1}$ reporters transmit their respective location information $\\{\mathcal{L}^{r}_{n+1}(i)\\}$ and collected CSI $\\{\mathbf{H}(\mathcal{L}^{r}_{n+1}(i),T_{n+1})\\}$, where $i\in[1,\ldots,N^{r}_{n+1}]$, to the central server. In order to obtain the accurate location $\mathcal{L}^{r}_{n+1}(i)$, we utilize the WiFi, GPS receiver, and IMU sensors, as shown in Fig. 1. Let us denote by $\Omega_{i}(\mathcal{A}_{m})$ the set of reports in the area $\mathcal{A}_{m}$, i.e., $\Omega_{i}(\mathcal{A}_{m})=\\{i,\forall i\ \textrm{satisfies}\ \mathcal{L}^{r}_{n+1}(i)\in\mathcal{A}_{m}\\}$. We can update the fingerprint $\mathbf{H}(\mathcal{A}_{m},T_{n+1})$ by averaging the collected CSI within the area $\mathcal{A}_{m}$, in order to eliminate occasional measurement errors, as follows: $\displaystyle\mathbf{H}(\mathcal{A}_{m},T_{n+1})=\qquad\qquad\qquad\qquad\qquad\qquad$ $\displaystyle\left\\{\begin{array}[]{cc}\frac{1}{|\Omega_{i}(\mathcal{A}_{m})|}\sum_{i\in\Omega_{i}(\mathcal{A}_{m})}\mathbf{H}(\mathcal{L}^{r}_{n+1}(i),T_{n+1}),&\Omega_{i}(\mathcal{A}_{m})\neq\emptyset,\\\ \mathbf{H}(\mathcal{A}_{m},T_{n}),&\Omega_{i}(\mathcal{A}_{m})=\emptyset,\end{array}\right.$ (2) where $|\cdot|$ denotes set cardinality and $\emptyset$ denotes the empty set. The fingerprint database at $T_{n+1}$, i.e., $\mathcal{DB}_{n+1}$, is thus given by $\displaystyle\mathcal{DB}_{n+1}=\left\\{\left(m,\mathbf{H}(\mathcal{A}_{m},T_{n+1})\right),\forall m\in[1,\ldots,N_{R}]\right\\}.$ (3) It is well known that GPS is currently the most widely applied GNSS service in the world, and is deployed on most mobile terminals. However, GPS signals received indoors are usually considered too weak for indoor localization.
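For illustration, the averaging update rule of Eqs. (2)-(3) can be sketched as follows; this is a minimal numpy sketch with hypothetical names, assuming each report has already been mapped to its grid-area index:

```python
import numpy as np

def update_fingerprint_db(db_prev, reports, num_areas):
    """Sketch of the crowdsourcing update: for each grid area A_m, average
    the CSI of all reports falling in it (Omega_i(A_m) non-empty); areas
    without reports keep their previous fingerprint H(A_m, T_n).

    db_prev : dict mapping area index m -> CSI matrix H(A_m, T_n)
    reports : list of (area_index, csi_matrix) pairs from reporters
    """
    db_next = {}
    for m in range(num_areas):
        in_area = [csi for area, csi in reports if area == m]
        if in_area:
            # average the reports to eliminate occasional measurement errors
            db_next[m] = np.mean(in_area, axis=0)
        else:
            # no reports for this area: carry the old fingerprint over
            db_next[m] = db_prev[m]
    return db_next
```

The returned dictionary plays the role of $\mathcal{DB}_{n+1}$ in Eq. (3).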
However, smartphones often receive strong GPS signals and acquire accurate position coordinates in non-enclosed places, such as locations by the windows. In this way, opportunistic GPS localization can be used to annotate the newly collected fingerprints. In order to make sure that the received signals are strong enough to provide accurate locations, we have designed detectors that help assess the signal quality (more details about the detectors will be provided in the extended version of this paper). Only when the signal quality meets the requirements in several tests do we record the prior position $\mathcal{L}_{0}$. To further obtain the user traces, we apply an offline pedestrian dead reckoning (PDR) algorithm to the collected IMU sensor data, including accelerometer, gyroscope, and magnetometer readings, as mentioned in [20]. The whole process consists of step detection $N_{L}$, step length estimation $L_{k}$, and heading direction estimation $\alpha_{k}$, and the PDR displacement over a very short time period is described as $\displaystyle\Delta\mathcal{L}=\sum_{k=1}^{N_{L}}\alpha_{k}\cdot L_{k},$ (4) where $L_{k}$ is the $k^{th}$ step length of the reporter. A particle filter [18] is utilized to reduce IMU distance errors during this process. Through this approach, we can obtain the reporter's location as $\mathcal{L}^{r}_{n+1}(i)=\mathcal{L}_{0}+\Delta\mathcal{L}$ and the corresponding CSI, $\mathbf{H}(\mathcal{L}^{r}_{n+1}(i),T_{n+1})$. ## III Proposed Deep Transfer Learning Scheme In this section, we present the considered crowdsourcing localization problem formulation and devise a deep transfer learning scheme for the fingerprint database update. In particular, we present an MK-MMD minimization optimization framework, based on which a novel neural network structure and loss function are designed to exploit the inner relationship between the original and updated CSI measurements.
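As a concrete reading of the PDR step of Eq. (4) in Section II-B, the sketch below accumulates per-step displacement vectors; it makes the assumption explicit that each heading $\alpha_{k}$ is an angle converted into a unit direction vector, so that step $k$ contributes $L_{k}[\cos\alpha_{k},\sin\alpha_{k}]$:

```python
import numpy as np

def pdr_displacement(step_lengths, headings):
    """Sketch of the PDR displacement: sum of per-step vectors, where each
    detected step of length L_k is taken along the estimated heading a_k
    (interpreted here as an angle in radians)."""
    dx = sum(L * np.cos(a) for L, a in zip(step_lengths, headings))
    dy = sum(L * np.sin(a) for L, a in zip(step_lengths, headings))
    return np.array([dx, dy])
```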
### III-A Problem Formulation We apply a general optimization framework to describe the localization problem. Let $\mathcal{L}_{n}^{k}$ and $\hat{\mathcal{L}}_{n}^{k}$ denote the ground-truth and predicted locations of the $k^{th}$ target at $T_{n}$, respectively; the corresponding mean distance error (MDE) performance over $K$ sampling positions is given by $\frac{1}{K}\sum_{k=1}^{K}\|\hat{\mathcal{L}}_{n}^{k}-\mathcal{L}_{n}^{k}\|_{2}$, where $\|\cdot\|_{2}$ represents the vector $l_{2}$ norm as defined in [21]. With the above notation, we can describe the MDE minimization problem using the following optimization framework. ###### Problem 1 (MDE Minimization). $\displaystyle\underset{\\{g_{n}(\cdot)\\}}{\textrm{minimize}}$ $\displaystyle\frac{1}{N}\frac{1}{K}\sum_{n=1}^{N}\sum_{k=1}^{K}\|\hat{\mathcal{L}}_{n}^{k}-\mathcal{L}_{n}^{k}\|_{2}$ (5) subject to $\displaystyle\hat{\mathcal{L}}_{n}^{k}=\left(\mathbf{p}_{n}^{k}\right)^{T}\cdot\mathcal{L}_{c},$ (6) $\displaystyle\mathbf{p}_{n}^{k}=g_{n}\left(\mathbf{H}(\mathcal{L}_{n}^{k}),\mathcal{DB}_{n}\right),\forall n,$ (7) $\displaystyle\mathbf{p}_{n}^{k}\in[0,1]^{N_{R}},\forall n,k,$ (8) where $N$ is the total number of localization time instants, while $\mathcal{L}_{c}=[\mathcal{L}_{c}^{1},\ldots,\mathcal{L}_{c}^{m},\ldots,\mathcal{L}_{c}^{N_{R}}]$ and $\mathbf{p}_{n}^{k}$ denote the central grid positions and the position likelihood distribution of the $k^{th}$ target at $T_{n}$ over all possible $\mathcal{A}_{m}$, respectively. The function $g_{n}$ represents the unknown mapping between the measured CSI $\mathbf{H}(\mathcal{L}_{n}^{k})$ and $\mathbf{p}_{n}^{k}$.
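The soft location estimate and the MDE objective of Problem 1 can be sketched as follows (a minimal numpy illustration with hypothetical names):

```python
import numpy as np

def predict_location(p, grid_centers):
    """Soft location estimate: probability-weighted sum of the N_R grid
    center coordinates, i.e. L_hat = p^T L_c.
    p: shape (N_R,), grid_centers: shape (N_R, 2)."""
    return p @ grid_centers

def mean_distance_error(pred, true):
    """MDE over K sampling positions: average l2 distance between the
    predicted and ground-truth locations, each of shape (K, 2)."""
    return np.mean(np.linalg.norm(pred - true, axis=1))
```

For instance, a distribution split evenly over two grid centers yields an estimate at their midpoint, which is what makes the estimate finer-grained than the grid itself.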
In order to minimize the localization errors, it is necessary to estimate $\mathbf{p}_{n}^{k}$ by fitting the function $g_{n}(\cdot)$, and to further obtain a recursive relation with the new function $g_{n+1}(\cdot)$, which is represented as $\displaystyle\mathbf{p}_{n+1}^{k}=g_{n+1}\left(\mathbf{H}(\mathcal{L}_{n+1}^{k}),\mathcal{DB}_{n+1}\right).$ (9) Considering that only part of the fingerprints in $\mathcal{DB}_{n+1}$ are updated compared with $\mathcal{DB}_{n}$, it is unnecessary to fit the function $g_{n+1}(\cdot)$ using the entire $\mathcal{DB}_{n+1}$. In order to reduce the computational complexity of the fingerprint database update, we propose to apply transfer learning for the fingerprint transfer. Due to the difference between $\textrm{Pr}(\mathbf{H}(\mathcal{A}_{m},T_{n}))$ and $\textrm{Pr}(\mathbf{H}(\mathcal{A}_{m},T_{n+1}))$, the self-calibrating localization system needs to utilize a transfer mapping function $\Phi(\cdot)$ in order to model the difference in a reproducing kernel Hilbert space (RKHS) instead, where $\textrm{Pr}(\cdot)$ represents the stochastic properties of the channel distributions. The corresponding MMD measure is thus given by $\displaystyle\mathcal{D}_{\textrm{MMD}}(\mathcal{DB}_{n},\mathcal{DB}_{n+1})$ $\displaystyle=$ $\displaystyle\sum_{m=1}^{N_{R}}\left\|\Phi\left(\mathbf{H}(\mathcal{A}_{m},T_{n})\right)-\Phi\left(\mathbf{H}(\mathcal{A}_{m},T_{n+1})\right)\right\|_{\mathcal{H}}^{2},$ where $\|\cdot\|_{\mathcal{H}}$ denotes the vector norm in the RKHS. With the above manipulation, we are required to fit the optimal mapping function $\Phi(\cdot)$ to update the function $g_{n+1}(\cdot)$, that is, to minimize the MMD. We can thus transform the original MDE minimization problem into the following MMD minimization problem. ###### Problem 2 (MMD Minimization).
$\displaystyle\underset{\Phi(\cdot)}{\textrm{minimize}}$ $\displaystyle\mathcal{D}_{\textrm{MMD}}(\mathcal{DB}_{n},\mathcal{DB}_{n+1})$ subject to the constraints of Problem 1. Although the aforementioned MMD considers the distribution differences of the CSI between $\textrm{Pr}(\mathbf{H}(\mathcal{A}_{m},T_{n}))$ and $\textrm{Pr}(\mathbf{H}(\mathcal{A}_{m},T_{n+1}))$, the conditional probability distributions of specific areas, i.e., $\textrm{Pr}(\mathbf{H}(\mathcal{A}_{m},T_{n})|\mathcal{A}_{m})$ and $\textrm{Pr}(\mathbf{H}(\mathcal{A}_{m},T_{n+1})|\mathcal{A}_{m})$, are ignored. Since this feature provides additional correlation information about the different areas, we propose to use an improved multi-kernel solution, namely the MK-MMD [22], defined as follows: $\displaystyle\mathcal{D}_{\textrm{MK-MMD}}(\mathcal{DB}_{n},\mathcal{DB}_{n+1})$ $\displaystyle=\sum_{m=1}^{N_{R}}\lambda\left\|\Phi\left(\mathbf{H}(\mathcal{A}_{m},T_{n})\right)-\Phi\left(\mathbf{H}(\mathcal{A}_{m},T_{n+1})\right)\right\|_{\mathcal{H}}^{2}$ $\displaystyle+\mu\left\|\Phi\left(\mathbf{H}(\mathcal{A}_{m},T_{n})|\mathcal{A}_{m}\right)-\Phi\left(\mathbf{H}(\mathcal{A}_{m},T_{n+1})|\mathcal{A}_{m}\right)\right\|_{\mathcal{H}}^{2},$ where $\lambda,\mu\in[0,1]$ are fine-tuning coefficients indicating the data similarity between $\mathcal{DB}_{n}$ and $\mathcal{DB}_{n+1}$, for which $\lambda+\mu=1$ holds. With the proposed MK-MMD metric, we define the MK-MMD minimization problem, which better fits the optimal mapping function $\Phi(\cdot)$ and updates the function $g_{n+1}(\cdot)$, as follows. ###### Problem 3 (MK-MMD Minimization).
$\displaystyle\underset{\Phi(\cdot)}{\textrm{minimize}}$ $\displaystyle\mathcal{D}_{\textrm{MK-MMD}}(\mathcal{DB}_{n},\mathcal{DB}_{n+1})$ subject to $\lambda+\mu=1$ and the constraints of Problem 1. It is noted that in conventional methods, such as joint distribution adaptation (JDA) [23], the objective is usually to find a transformation matrix to represent the transfer mapping function $\Phi(\cdot)$. In the above MK-MMD formulation, a simple transformation matrix is insufficient, as mentioned in [22], which motivates us to apply deep neural networks for the transfer function representation, as described in the sequel. ### III-B Neural Network To address Problem 3, we design a deep transfer learning network for fingerprint adaptation. As shown in Fig. 2, we start with a deep convolutional neural network (CNN), which is a common structure for fulfilling complex signal feature extraction and dimension reduction tasks. Figure 2: The architecture of the deep transfer learning network, which consists of convolutional layers, FC layers, and average pooling layers. The inputs of the network are the channel responses at the time instants $T_{n}$ and $T_{n+1}$ with respect to the grid area $\mathcal{A}_{m}$, i.e., $\mathbf{H}(\mathcal{A}_{m},T_{n})$ and $\mathbf{H}(\mathcal{A}_{m},T_{n+1})$, respectively. The outputs of the network are the estimated position likelihood distributions at $T_{n}$ and $T_{n+1}$, i.e., $\mathbf{p}_{n}^{k}$ and $\mathbf{p}_{n+1}^{k}$, respectively. Since the convolutional layers only learn generic features from the related data sets, we impose the MK-MMD domain adaptation only on the final fully connected (FC) layer. This is because the unique characteristics of different data sets begin to appear when the structure of the neural network is deep enough [22].
Motivated by this fact, we design the neural network structure with five convolutional layers and one average pooling layer to obtain the feature vectors. An FC layer with softmax output [24] is used to provide the normalized probabilities. The detailed configuration and parameters of the proposed neural network are listed in Table I, which is designed with reference to commonly used transfer learning neural networks. Meanwhile, in the neural network design, we also propose to use a joint loss measure, which considers both the cross-entropy measure, describing the differences between the normalized output probability $\hat{\mathbf{p}}_{n}^{k}$ and the ground-truth label vector $\mathbf{p}_{n}^{k}$, and the MK-MMD based loss, $\mathcal{D}_{\textrm{MK-MMD}}(\mathcal{DB}_{n},\mathcal{DB}_{n+1})$. The parameters of the network can be adjusted by minimizing the loss function, which can be written as $\displaystyle\mathbb{L}=-\sum_{m=1}^{N_{R}}\mathbf{p}_{n,m}^{k}\log\hat{\mathbf{p}}_{n,m}^{k}+\mathcal{D}_{\textrm{MK-MMD}}(\mathcal{DB}_{n},\mathcal{DB}_{n+1}),$ (10) where $\mathbf{p}_{n,m}^{k}$ and $\hat{\mathbf{p}}_{n,m}^{k}$ are the normalized probabilities for the $m^{th}$ grid area. In addition, we train the parameters of the deep neural network with the Adam optimizer to minimize the above loss function in the training stage.

Table I: An Overview of the Considered Network Configuration and Parameters.

Module | Layers | Parameters
---|---|---
CNNs | conv-1 | $112\times 112\times 1$
CNNs | conv-2 | $56\times 56\times 64$
CNNs | conv-3 | $28\times 28\times 128$
CNNs | conv-4 | $14\times 14\times 256$
CNNs | conv-5 | $7\times 7\times 512$
| average pooling | $1\times 1\times 512$
| FC | $1\times 1\times 512$
Output | Softmax | $15\times 1$

In the operating stage, users ask for their position information by reporting their real-time CSI to the server.
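As an illustration of the joint loss in Eq. (10), the sketch below pairs a cross-entropy term with an MMD term. Here the MMD is estimated with a Gaussian kernel via the kernel trick, a standard surrogate for the explicit feature map $\Phi$, which in the paper is realized by the network itself; all names are illustrative:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between two sample sets, using
    the kernel trick in place of an explicit feature map Phi."""
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

def joint_loss(p_true, p_hat, d_mk_mmd, eps=1e-12):
    """Eq. (10): cross-entropy between the label distribution and the
    softmax output, plus the MK-MMD adaptation term."""
    return -np.sum(p_true * np.log(p_hat + eps)) + d_mk_mmd
```

A $\lambda$/$\mu$-weighted sum of such `mmd2` terms over the marginal and per-area conditional sample sets gives the MK-MMD term of Section III-A.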
The trained neural network with the updated parameters outputs a probability vector $\hat{\mathbf{p}}_{n+1}^{k}$, which is utilized to obtain the final location estimate $\hat{\mathcal{L}}_{n+1}^{k}$. The corresponding mathematical expression is given by $\displaystyle\hat{\mathcal{L}}_{n+1}^{k}=\sum_{m=1}^{N_{R}}\hat{\mathbf{p}}^{k}_{n+1,m}\cdot\mathcal{L}_{c}^{m}.$ (11) ## IV Experiment Results In this section, we provide numerical results to show the necessity of the fingerprint database update and the effectiveness of the proposed deep transfer learning based approach for the neural network update. To be more specific, we compare the proposed scheme with two baselines, i.e., Baseline 1: the non-updated KNN based scheme, and Baseline 2: the JDA based transfer scheme, in terms of effectiveness and complexity. We implement our localization system using a TP-LINK wireless router as the AP and two Nexus 5 smartphones as the reporter and requester, respectively. The whole system works at 2.4 GHz with a bandwidth of 20 MHz. A Nexus 5 with the Nexmon [25] software installed overhears the user datagram protocol (UDP) frames transmitted by the AP and then extracts the CSI from them. We verify the proposed scheme in a laboratory environment, where the layout of the testing scenario is shown in Fig. 3. With laboratory equipment, furniture, and people moving around in a real situation, the tested wireless fading conditions cover most daily indoor scenarios with mixed LOS and NLOS paths. Figure 3: A sketch map of the experimental environment, in which the red/green/black spots represent the locations of the reference points of the initial fingerprint database, the test points of the requesters, and the access point, respectively. The distance between two adjacent reference points is about 1.2 m. We collect 1500 CSI samples for the training dataset and 750 CSI samples for the test dataset. ### IV-A Localization Accuracy In this experiment, we investigate the effect of the fingerprint database update.
We compare the proposed scheme with the above baselines by measuring the cumulative distribution function (CDF) [26] of the distance error in the test scenario. Fig. 4 depicts the CDF of the localization distance error during the operating stage. The proposed regression-based algorithms show superior localization accuracy over Baseline 1 and Baseline 2. By comparing the JDA based approach (red solid curve) and the deep transfer learning based approach (black solid curve) with the non-updated KNN based scheme, we find that the fingerprint transfer based schemes achieve better localization performance. Comparing the JDA based approach with the deep transfer learning based approach, the latter achieves a mean error of 1.08 m in the test scenarios, better than the 1.37 m of the former. This is due to the fact that the deep transfer learning based approach is able to utilize the complex structure of the neural network to better minimize the MK-MMD and build the relationship between the updated fingerprint database and the locations, while the transfer ability of the transformation matrix in the JDA based approach is very limited. Figure 4: CDF of the localization distance errors for the different algorithms in the test scenarios. The proposed deep transfer learning based approach is compared with two baselines to test the algorithm effectiveness. ### IV-B Fingerprint Database Update In this experiment, we investigate the test localization accuracy with different percentages of newly collected samples in the original dataset, considering that not all the CSI samples can be updated through crowdsourcing. Based on this consideration, we can determine how often a fingerprint transfer should be performed to keep the neural network updated. To this end, the percentages of newly collected samples are set to 30%, 50%, and 70%, respectively.
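The CDF curves reported in this section can be computed from the per-sample distance errors; a minimal sketch:

```python
import numpy as np

def empirical_cdf(errors):
    """Empirical CDF of per-sample localization errors: returns the sorted
    error values and, for each, the fraction of samples with error at most
    that value (the y-coordinates of a CDF curve)."""
    x = np.sort(np.asarray(errors, dtype=float))
    y = np.arange(1, x.size + 1) / x.size
    return x, y
```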
Figure 5: CDF of localization errors for different percentages of newly collected samples, used to explore the time period of fingerprint transfer. Localization errors under different percentages of new CSI samples are illustrated in Fig. 5, where the corresponding average localization errors are 1.36 m (blue solid curves), 1.08 m (black solid curves) and 0.85 m (red solid curves), respectively. It is worth noting that when the percentage is changed from 30% to 70%, the localization error is roughly halved, at the cost of approximately twice the data collection and labelling effort. Based on the above results, we believe that if half of the fingerprint database can be updated with newly collected samples, the crowdsourcing localization system can provide an accuracy of about one meter.

### IV-C Computational Complexity

Although the proposed scheme provides satisfactory localization accuracy, its implementation complexity remains to be assessed. Therefore, in this experiment, we show the efficiency of the proposed scheme by comparing its computational time cost with the above baseline schemes.

Table II: Total Running Time Comparison for Different Methods.

Samples Number | Baseline 1 | Baseline 2 | Proposed Method
---|---|---|---
1 | 0.01s | 13.9s | 0.6s
100 | 0.02s | 15.9s | 4.7s
1000 | 0.1s | 31.6s | 5.61s

In Table II, we compare the total running time of the different schemes for different numbers of test samples on the same experimental platform. As shown in Table II, the KNN based scheme (Baseline 1) costs the least time regardless of the number of samples, because only the matching and classification processes are conducted, without the complex fingerprint transfer process. For the JDA based scheme (Baseline 2), most of the calculation time is spent on the iterative process of finding a suitable transfer matrix. That is why the JDA algorithm costs the most running time regardless of the number of samples.
Thanks to the parameter storage capacity of the neural network architecture, the trained network helps to quickly calculate the position information of the test signal in the online phase. Compared with the two baseline methods, our proposed deep transfer learning based method provides high positioning accuracy with low computational complexity, which reduces the computational load of the localization system.

## V Conclusion

In this paper, we presented a self-calibrating crowdsourcing localization system based on multi-kernel deep transfer learning for efficiently exploiting the availability of frequently updated fingerprinting signals. The proposed system simultaneously updates the fingerprint database and the corresponding localization function, enabling the utilization of both historical and updated fingerprint information for high precision localization. The presented indoor laboratory experimental results, obtained with the proposed system using two Nexus 5 smartphones at 2.4 GHz with 20 MHz bandwidth, have shown that the proposed framework achieves about one-meter-level accuracy with relatively low computational complexity. This performance is achieved with 50% of the fingerprint updates needed in the conventional transfer matrix based method.

## Acknowledgement

This work was supported in part by the National Natural Science Foundation of China (NSFC) Grants under No. 61701293 and No. 61871262, the National Science and Technology Major Project (Grant No. 2018ZX03001009), the National Key Research and Development Program of China (Grant No. 2017YFE0121400), the Huawei Innovation Research Program (HIRP), and research funds from Shanghai Institute for Advanced Communication and Data Science (SICS).

## References

* [1] B. Gozick, K. P. Subbu, R. Dantu, and T. Maeshiro, “Magnetic maps for indoor navigation,” _IEEE Trans. Instrum. Meas._ , vol. 60, no. 12, pp. 3883–3891, Dec. 2011. * [2] E. Ahmed, I. Yaqoob, A. Gani, M. Imran, and M.
Guizani, “Internet-of-things-based smart environments: state of the art, taxonomy, and open research challenges,” _IEEE Wireless Commun._ , vol. 23, no. 5, pp. 10–16, Oct. 2016. * [3] B. Hofmann-Wellenhof, H. Lichtenegger, and E. Wasle, _GNSS–Global navigation satellite systems: GPS, GLONASS, Galileo, and more_. Springer Science & Business Media, 2007. * [4] R. Faragher and R. Harle, “Location fingerprinting with Bluetooth low energy beacons,” _IEEE J. Sel. Areas in Commun._ , vol. 33, no. 11, pp. 2418–2428, Nov. 2015. * [5] H. Zhang, Z. Zhang, S. Zhang, S. Xu, and S. Cao, “Fingerprint-based localization using commercial LTE signals: A field-trial study,” in _IEEE Veh. Technol. Conf._ IEEE, 2019, pp. 1–5. * [6] A. S. Paul and E. A. Wan, “RSSI-based indoor localization and tracking using Sigma-point Kalman smoothers,” _IEEE J. of Sel. Topics Signal Process._ , vol. 3, no. 5, pp. 860–873, Oct. 2009. * [7] S. Sen, B. Radunovic, R. R. Choudhury, and T. Minka, “You are facing the Mona Lisa: spot localization using PHY layer information,” in _ACM Proc. MobiSys’12_ , 2012, pp. 183–196. * [8] Y. Chapre, A. Ignjatovic, A. Seneviratne, and S. Jha, “CSI-MIMO: An efficient Wi-Fi fingerprinting using channel state information with MIMO,” _Perv. Mobile Comput._ , vol. 23, pp. 89–103, 2015. * [9] C. Xiang, S. Zhang, S. Xu, X. Chen, S. Cao, G. C. Alexandropoulos, and V. K. Lau, “Robust sub-meter level indoor localization with a single WiFi access point—regression versus classification,” _IEEE Access_ , vol. 7, pp. 146309–146321, 2019. * [10] M. Kotaru, K. Joshi, D. Bharadia, and S. Katti, “SpotFi: Decimeter level localization using WiFi,” in _Proc. ACM Conf. Special Interest Group Data Commun._ ACM, 2015, pp. 269–282. * [11] D. Vasisht, S. Kumar, and D. Katabi, “Decimeter-level localization with a single WiFi access point,” in _Proc. Usenix Conf. Netw. Syst. Des. Implementation_. ACM, 2016, pp. 165–178. * [12] C. Huang, G. C. Alexandropoulos, C. Yuen, and M.
Debbah, “Indoor signal focusing with deep learning designed reconfigurable intelligent surfaces,” in _Proc. Int. Workshop Signal Process. Adv. Wireless Commun._ IEEE, 2019, pp. 1–5. * [13] B. Wang, Q. Chen, L. T. Yang, and H.-C. Chao, “Indoor smartphone localization via fingerprint crowdsourcing: challenges and approaches,” _IEEE Wireless Commun._ , vol. 23, no. 3, pp. 82–89, Jun. 2016. * [14] A. A. C. M. Kalker and J. A. Haitsma, “Fingerprint database updating method, client and server,” Apr. 2009, U.S. Patent 7,523,312. * [15] Y. Liu, L. Kong, and G. Chen, “Data-oriented mobile crowdsensing: A comprehensive survey,” _IEEE Commun. Surv. Tut._ , vol. 21, no. 3, pp. 2849–2885, 3rd Quarter 2019. * [16] M. Murata, D. Ahmetovic, D. Sato, H. Takagi, K. M. Kitani, and C. Asakawa, “Smartphone-based localization for blind navigation in building-scale indoor environments,” _Perv. Mobile Comput._ , vol. 57, pp. 14–32, 2019. * [17] C. Wu, Z. Yang, and Y. Liu, “Smartphones based crowdsourcing for indoor localization,” _IEEE Trans. Mobile Comput._ , vol. 14, no. 2, pp. 444–457, Feb. 2014. * [18] B. Huang, Z. Xu, B. Jia, and G. Mao, “An online radio map update scheme for WiFi fingerprint-based localization,” _IEEE Int. Things J._ , vol. 6, no. 4, pp. 6909–6918, Aug. 2019. * [19] B. Yang, Z. Cao, and K. B. Letaief, “Analysis of low-complexity windowed DFT-based MMSE channel estimator for OFDM systems,” _IEEE Trans. Commun._ , vol. 49, no. 11, pp. 1977–1987, 2001. * [20] Z. Li, X. Zhao, F. Hu, Z. Zhao, J. L. C. Villacrés, and T. Braun, “SoiCP: A seamless outdoor–indoor crowdsensing positioning system,” _IEEE Int. Things J._ , vol. 6, no. 5, pp. 8626–8644, 2019. * [21] S. Boyd, S. P. Boyd, and L. Vandenberghe, _Convex optimization_. Cambridge University Press, 2004. * [22] M. Long, Y. Cao, J. Wang, and M. I. Jordan, “Learning transferable features with deep adaptation networks,” in _Proc. Int. Conf. Mach. Learn._ ACM, 2015. * [23] M. Long, J. Wang, G. Ding, J. Sun, and P. S.
Yu, “Transfer feature learning with joint distribution adaptation,” in _Proc. Int. Conf. Comput. Vision_. IEEE, 2013, pp. 2200–2207. * [24] E. Jang, S. Gu, and B. Poole, “Categorical reparameterization with Gumbel-softmax,” _arXiv preprint arXiv:1611.01144_ , 2016. * [25] F. Gringoli, M. Schulz, J. Link, and M. Hollick, “Free your CSI: A channel state information extraction platform for modern Wi-Fi chipsets,” in _Proc. Int. Workshop Wireless Net. Testbeds, Exper. Eval. Character._ , 2019, pp. 21–28. * [26] M. P. Deisenroth, A. A. Faisal, and C. S. Ong, _Mathematics for machine learning_. Cambridge University Press, 2020.
# Hyperspectral Image Classification: Artifacts of Dimension Reduction on Hybrid CNN

Muhammad Ahmad, Sidrah Shabbir, Rana Aamir Raza, Manuel Mazzara, Salvatore Distefano, Adil Mehmood Khan

M. Ahmad is with the Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Chiniot-Faisalabad Campus, Chiniot 35400, Pakistan and Dipartimento di Matematica e Informatica—MIFT, University of Messina, Messina 98121, Italy; E-mail: <EMAIL_ADDRESS>. S. Shabbir is with the Department of Computer Engineering, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan, 64200, Pakistan; E-mail: <EMAIL_ADDRESS>. R. A. Raza is with the Department of Computer Science, Bahauddin Zakariya University, Multan 66000, Pakistan; E-mail: <EMAIL_ADDRESS>. M. Mazzara is with Innopolis University, Innopolis 420500, Russia; E-mail: <EMAIL_ADDRESS>. S. Distefano is with Dipartimento di Matematica e Informatica—MIFT, University of Messina, Messina 98121, Italy; E-mail: <EMAIL_ADDRESS>. A. M. Khan is with Innopolis University, Innopolis 420500, Russia; E-mail: <EMAIL_ADDRESS>.

###### Abstract

Convolutional Neural Networks (CNNs) have been extensively studied for Hyperspectral Image Classification (HSIC); more specifically, 2D and 3D CNN models have proved highly efficient in exploiting the spatial and spectral information of Hyperspectral Images. However, 2D CNNs only consider the spatial information and ignore the spectral information, whereas 3D CNNs jointly exploit spatial-spectral information at a high computational cost. Therefore, this work proposes a lightweight CNN (3D followed by 2D-CNN) model which significantly reduces the computational cost by distributing spatial-spectral feature extraction across a lighter model, alongside a preprocessing step carried out to improve the classification results. Five benchmark Hyperspectral datasets (i.e., SalinasA, Salinas, Indian Pines, Pavia University, Pavia Center, and Botswana) are used for experimental evaluation.
The experimental results show that the proposed pipeline outperforms state-of-the-art 2D/3D CNN models in terms of generalization performance, statistical significance, and computational complexity, without relying on commonly used computationally expensive design choices.

###### Index Terms: Dimension Reduction; Hybrid 3D/2D CNN; Hyperspectral Image Classification; Spectral-Spatial Information.

## I Introduction

Hyperspectral Imaging (HSI) records the reflectance values of a target object across a wide range of the electromagnetic spectrum instead of just the visible range (red, green, blue (RGB) channels) [1]. Light interacting with each pixel is fractionated into hundreds of contiguous spectral bands in order to provide sufficient information about the target. The high spectral resolution of HSI allows us to distinguish various objects based on their spectral signatures. The conventional classifiers used for hyperspectral image classification (HSIC) are SVM [2], k-Nearest Neighbors (KNN) [3, 4], Maximum Likelihood [5], Logistic Regression [6], Random Forests (RF) [7], Multinomial Logistic Regression (MLR), Extreme Learning Machine (ELM) [8], and 3D Convolutional Neural Networks (CNN) [9]. These traditional classification methods classify the HSI based on spectral signatures only, and therefore underperform owing to the redundancy present in the spectral information and the high correlation among spectral bands. Moreover, these methods fail to account for the spatial variability of HSI, which also hinders HSIC performance. HSIC performance can be enhanced by considering two main aspects: dimensionality reduction and utilization of the spatial information contained in HSI. Dimensionality reduction is an important preprocessing step in HSIC to reduce the spectral redundancy of HSI, which subsequently results in less processing time and enhanced classification accuracy.
Dimensionality reduction methods transform the high-dimensional data into a low-dimensional space whilst preserving the potential spectral information [10]. Spatial information improves the discriminative power of the classifier by considering the neighboring pixels’ information. Generally, spectral-spatial classification approaches can be categorized into two groups. The first type extracts spectral and spatial features individually: spatial information is extracted in advance using various methods such as morphological operations [11], attribute profiles [12], and entropy [13, 14], and is then concatenated with spectral features for pixel-wise classification. The other type combines spectral and spatial information to acquire joint features; for example, Gabor filters and wavelets [15, 16] are constructed at various scales to simultaneously extract spectral-spatial features for HSIC. Traditional feature extraction approaches are based on handcrafted features and usually extract shallow HSI features. Moreover, such approaches rely on a high level of domain knowledge for feature design [17]. To overcome the limitations of traditional feature extraction techniques, Deep Learning (DL) has been widely used to automatically learn low- to high-level representations of HSI in a hierarchical manner [18, 19]. DL based HSIC frameworks present enhanced generalization capabilities and improved predictive performance [20]. Recently, the Convolutional Neural Network (CNN) has proven to be a powerful feature extraction tool that can learn effective features of HSI through a network of hidden layers [21]. CNN based HSIC architectures have attracted widespread attention due to their substantial performance gains. Generally, 2D CNNs are utilized for efficient spatial feature exploitation, and many variants of 3D CNNs have been proposed to extract both spectral and spatial features of HSI [22, 23, 24, 25].
However, 3D CNN is a computationally complex model and 2D CNN alone cannot efficiently extract discriminating spectral features. To overcome these challenges, a hybrid 3D/2D CNN model that splices together 3D CNN components with 2D CNN components/layers is proposed. The aim is to synergize the efficacies of 3D and 2D CNNs to obtain important discriminating spectral-spatial features of HSI for HSIC. Moreover, we incorporated dimensionality reduction as a preprocessing step and investigated the impact of various state-of-the-art dimensionality reduction approaches on the performance of the hybrid 3D/2D CNN model. We also evaluated the impact of different input window sizes on the performance of the proposed framework. A comparative study is also carried out against several state-of-the-art CNN based HSIC frameworks proposed in recent literature. Experimental results on five benchmark datasets revealed that our proposed hybrid 3D/2D CNN model outperformed the compared HSIC frameworks. The rest of the paper is structured as follows: Section II presents the proposed pipeline, including the details of each component implemented in this paper. Section III describes the important experimental settings and evaluation metrics. Section IV contains information regarding the experimental datasets and results. Section V presents an important discussion of the obtained results. Section VI describes the experimental comparisons with state-of-the-art methods. Finally, Section VII concludes the paper with possible future research directions.

## II Proposed Methodology

Let us assume that the HSI cube can be represented as $\textbf{X}=[x_{1},x_{2},x_{3},\dots,x_{S}]^{T}\in\mathcal{R}^{S\times(C\times D)}$, where $S$ denotes the total number of spectral bands, $(C\times D)$ are the samples per band belonging to $Y$ classes, and $x_{i}=[x_{1,i},~{}x_{2,i},~{}x_{3,i},~{}\dots,x_{S,i}]^{T}$ is the $i^{th}$ sample in the HSI cube.
Suppose $(x_{i},y_{i})\in(\mathcal{R}^{S\times(C\times D)},\mathcal{R}^{Y})$, where $y_{i}$ is the class label of the $i^{th}$ sample. However, due to the spectral mixing effect, which induces high intra-class variability and high inter-class similarity in HSI, it becomes difficult to classify various materials based on their spectral signatures. To combat the aforesaid issue, we use dimensionality reduction as a preprocessing step to eliminate the spectral redundancy of HSI, which reduces the number of spectral bands $(S\to B$, where $B\ll S)$ while keeping the spatial dimensions unimpaired. Subsequently, this also results in a reduced computational overhead owing to the lower-dimensional feature subspace. We evaluated our proposed model with several dimensionality reduction approaches, defined in Section II-A.

### II-A Dimensionality Reduction Methods

Dimensionality reduction is an essential preprocessing step in HSI analysis that can be categorized as either feature extraction or feature selection. Feature extraction transforms the high-dimensional HSI data into a low-dimensional subspace by extracting a suitable feature representation that can give classification performance comparable to a model trained on the original set of spectral bands, while band selection approaches select a subset of discriminative bands to reduce spectral redundancy. In this work, we evaluated the effectiveness of the following dimensionality reduction methods for our proposed hybrid 3D/2D CNN model.

#### II-A1 Principal Component Analysis (PCA)

Principal Component Analysis (PCA) is a feature extraction technique based on an orthogonal transformation that computes linearly uncorrelated variables, known as principal components (PCs), from possibly correlated data. The first PC is the projection onto the direction of highest variance, and the captured variance gradually decreases as we move towards the last PC.
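The eigendecomposition view of PCA described here can be illustrated with a toy two-band case, where the 2x2 eigenproblem has a closed form. This is only a sketch of the principle over synthetic pixel values, not the full spectral reduction used in the pipeline:

```python
import math

# PCA over the spectral dimension via eigendecomposition of the
# covariance matrix, shown for S = 2 bands with synthetic pixel values.

def pca_first_component(pixels):
    """Leading eigenvector and eigenvalue of the covariance matrix of
    mean-centered 2-band pixel vectors (the first PC direction)."""
    n = len(pixels)
    m0 = sum(p[0] for p in pixels) / n
    m1 = sum(p[1] for p in pixels) / n
    centered = [(p[0] - m0, p[1] - m1) for p in pixels]
    c00 = sum(a * a for a, _ in centered) / n
    c11 = sum(b * b for _, b in centered) / n
    c01 = sum(a * b for a, b in centered) / n
    # Largest eigenvalue of [[c00, c01], [c01, c11]].
    lam = 0.5 * (c00 + c11 + math.hypot(c00 - c11, 2 * c01))
    # Corresponding eigenvector, normalized to unit length.
    v = (c01, lam - c00) if abs(c01) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm), lam

# Two perfectly correlated bands: the first PC points along (1, 1),
# so projecting onto it loses no information.
pixels = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (4.0, 4.0)]
direction, variance = pca_first_component(pixels)
```

Projecting every pixel vector onto the leading eigenvectors is exactly the band reduction $S\to B$ applied before the CNN; in practice this is done over all $S$ bands with a numerical eigensolver.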
The transformation of the original image to PCs is obtained through the eigendecomposition of the covariance matrix of the mean-centered HSI data. The eigendecomposition of the covariance matrix, i.e., finding the eigenvalues along with their corresponding eigenvectors, is as follows: $E=ADA^{T}$ where $A=\{a_{1},a_{2},a_{3},\dots,a_{S}\}$ is a transformation matrix and $D=\mathrm{diag}\{\lambda_{1},\lambda_{2},\lambda_{3},\dots,\lambda_{S}\}$ is a diagonal matrix of the eigenvalues of the covariance matrix. The linear HSI transformation is defined as: $H=AX$

#### II-A2 Incremental PCA (iPCA)

Generally, PCA is performed in batch mode, i.e., all the training data is simultaneously available to compute the projection matrix. In order to find the updated PCs after incorporating new data into the existing training set, PCA needs to be retrained on the complete training data. To combat this limitation, an incremental PCA (iPCA) approach is used, which can be categorized as either a covariance-based or a covariance-free iPCA method. Covariance-based methods are further divided into two approaches. In the first approach, the covariance matrix is computed using the existing training data and then updated whenever new data samples are added. In the second approach, a reduced covariance matrix is computed using the previous PCs and the new training data. Covariance-free iPCA methods update the PCs without computing the covariance matrix; however, such methods usually face convergence problems in the case of high-dimensional data [26].

#### II-A3 Sparse PCA (SPCA)

Conventional PCA has the limitation that the PCs are linear combinations of all input features/predictors; in other words, all the coefficients are nonzero and direct interpretation becomes difficult. Therefore, to improve interpretability, it is desirable to use sparsity-promoting regularizers. In this regard, sparse PCA (SPCA) has emerged as an effective technique that finds linear combinations of a few input features, i.e.
only a few active (nonzero) coefficients. SPCA works well in scenarios where input features are redundant, that is, where they do not contribute to identifying the underlying rational model structure.

#### II-A4 Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) is a mathematical technique that decomposes a matrix into three different matrices; it is known as truncated SVD when used for dimensionality reduction. This matrix decomposition is represented as: $X=PSQ^{T}$ where $P$ and $Q$ are orthogonal matrices of left and right singular vectors and $S$ is a diagonal matrix having the singular values as its diagonal entries. An SVD-reduced $X$ is obtained by taking into account the contribution of only the first $k$ eigenimages, computed as follows: $X_{SVD}=\sum_{i=1}^{k}P_{i}S_{i}Q_{i}^{T}$

#### II-A5 Independent Component Analysis (ICA)

Independent Component Analysis (ICA) is a popular dimensionality reduction approach that extracts statistically independent components (ICs) through a linear or non-linear transformation that minimizes the mutual information between ICs, or maximizes the likelihood or non-Gaussianity of the ICs. It transforms the HSI into a lower-dimensional feature space by comparing the average absolute weight coefficients for each spectral band of the HSI and retains only those independent bands which contain maximum information [27]. Given $n$-dimensional data $X$, the main task of ICA is to find the linear transformation $W$ such that: $H=WX$ where $H$ has statistically independent components.

### II-B Hybrid 3D/2D CNN

In order to pass the HSI data cube to our hybrid CNN model, it is divided into multiple small overlapping 3D patches, and the class label of each patch is decided based on the label of its central pixel. The 3D neighboring patches $P\in\mathcal{R}^{(W\times W)\times B}$ are formed centered at spatial position $(a,b)$, covering $W\times W$ windows.
The total number of 3D spatial patches created from $X$ is given by $(C-W+1)\times(D-W+1)$. A 3D patch centered at location $(a,b)$, denoted $P_{(a,b)}$, covers the width from $a-\frac{W-1}{2}$ to $a+\frac{W-1}{2}$, the height from $b-\frac{W-1}{2}$ to $b+\frac{W-1}{2}$, and all $B$ spectral bands obtained after the dimensionality reduction step. In a 2D CNN, the input data is convolved with a 2D kernel function that computes the sum of the dot products between the input and the 2D kernel. The kernel is strided over the input in order to cover the whole spatial dimension. These convolved features are then processed through an activation function, which helps to learn non-linear features of the data by introducing non-linearity into the model. In the case of 2D convolution, the activation value of the $j^{th}$ feature map at spatial location $(x,y)$ in the $i^{th}$ layer, denoted by $v^{x,y}_{i,j}$, can be formulated as follows: $v^{x,y}_{i,j}=\mathcal{F}\Big(b_{i,j}+\sum_{\tau=1}^{d_{l-1}}\sum_{\rho=-\gamma}^{\gamma}\sum_{\sigma=-\delta}^{\delta}w_{i,j,\tau}^{\sigma,\rho}\times v_{i-1,\tau}^{x+\sigma,y+\rho}\Big)$ where $\mathcal{F}$ is the activation function, $d_{l-1}$ is the number of feature maps at the $(l-1)^{th}$ layer and the depth of the kernel $w_{i,j}$ for the $j^{th}$ feature map at the $i^{th}$ layer, $b_{i,j}$ denotes the bias parameter for the $j^{th}$ feature map at the $i^{th}$ layer, and $2\gamma+1$ and $2\delta+1$ are the width and height of the kernel. The 3D convolutional process first computes the sum of the dot products between input patches and a 3D kernel function, i.e., the 3D input patches are convolved with the 3D kernel [18, 19]. These feature maps are then passed through an activation function to induce non-linearity. Our proposed model generates the feature maps of the 3D convolutional layer by applying the 3D kernel over the $B$ spectral bands, extracted after dimensionality reduction, in the input layer.
In the 3D convolutional process of the proposed model, the activation value at spatial location $(x,y,z)$ at the $i^{th}$ layer and $j^{th}$ feature map can be formulated as: $v^{x,y,z}_{i,j}=\mathcal{F}\Big(b_{i,j}+\sum_{\tau=1}^{d_{l-1}}\sum_{\lambda=-v}^{v}\sum_{\rho=-\gamma}^{\gamma}\sum_{\sigma=-\delta}^{\delta}w_{i,j,\tau}^{\sigma,\rho,\lambda}\times v_{i-1,\tau}^{x+\sigma,y+\rho,z+\lambda}\Big)$ where all the parameters are the same as defined for the 2D case, except $2v+1$, which is the depth of the 3D kernel along the spectral dimension. In the proposed framework, the details of the 3D convolutional kernels are as follows: $3D\\_conv\\_layer\\_1=8\times 3\times 3\times 7\times 1$ i.e. $K_{1}^{1}=3,K_{2}^{1}=3$ and $K_{3}^{1}=7$; $3D\\_conv\\_layer\\_2=16\times 3\times 3\times 5\times 8$ i.e. $K_{1}^{2}=3,K_{2}^{2}=3$ and $K_{3}^{2}=5$; $3D\\_conv\\_layer\\_3=32\times 3\times 3\times 3\times 16$ i.e. $K_{1}^{3}=3,K_{2}^{3}=3$ and $K_{3}^{3}=3$. The details of the $2D$ convolutional kernel are: $2D\\_conv\\_layer\\_1=64\times 3\times 3\times 96$ i.e. $K_{1}^{4}=3$ and $K_{2}^{4}=3$. Three $3D$ convolutional layers are employed to increase the number of spectral-spatial feature maps, and one $2D$ convolutional layer is used to discriminate the spatial features within different spectral bands while preserving the spectral information. Further details regarding the hybrid 3D/2D CNN architecture in terms of the types of layers, the dimensions of the output feature maps, and the number of trainable parameters are given in Table I, and the layer-wise hierarchy is shown in Figure 1. The total number of tunable weight parameters of our proposed model is $127,104$ for the Salinas Full Scene dataset. Initially, the weights are randomized and then optimized using backpropagation with the Adam optimizer and the softmax loss function. The network is trained for $50$ epochs with a mini-batch size of $256$, without any batch normalization or data augmentation.
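As a sanity check, the kernel sizes and layer shapes stated above determine the trainable-parameter count directly (kernel volume x input channels x filters, plus one bias per filter; units-in x units-out plus biases for dense layers). A pure-Python tally reproducing the stated total:

```python
# Trainable-parameter tally for the hybrid 3D/2D CNN
# (input 9x9x15x1, the Salinas configuration described above).

def conv_params(kernel_dims, in_channels, filters):
    """Weights (kernel volume x in_channels x filters) plus one bias per filter."""
    vol = 1
    for d in kernel_dims:
        vol *= d
    return vol * in_channels * filters + filters

def dense_params(in_units, out_units):
    """Fully connected layer: weights plus biases."""
    return in_units * out_units + out_units

layers = [
    conv_params((3, 3, 7), 1, 8),    # 3D_conv_layer_1 -> 512
    conv_params((3, 3, 5), 8, 16),   # 3D_conv_layer_2 -> 5776
    conv_params((3, 3, 3), 16, 32),  # 3D_conv_layer_3 -> 13856
    conv_params((3, 3), 96, 64),     # 2D_conv_layer_1, after reshape to 3x3x96 -> 55360
    dense_params(64, 256),           # Dense_1 -> 16640
    dense_params(256, 128),          # Dense_2 -> 32896
    dense_params(128, 16),           # Dense_3 -> 2064
]
print(sum(layers))  # 127104, matching the total reported above
```

Dropout, flatten, and reshape layers contribute no parameters, which is why the sum over the convolutional and dense layers alone recovers the full count.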
TABLE I: Layer-wise detailed summary of the Hybrid 3D/2D CNN architecture on the Salinas Full Scene dataset with window size $9\times 9$ and $15$ bands.

Layer | Output Shape | $\#$ of Parameters
---|---|---
Input Layer | (9, 9, 15, 1) | 0
Conv3D_1 (Conv3D) | (7, 7, 9, 8) | 512
Conv3D_2 (Conv3D) | (5, 5, 5, 16) | 5776
Conv3D_3 (Conv3D) | (3, 3, 3, 32) | 13856
Reshape (Reshape) | (3, 3, 96) | 0
Conv2D_1 (Conv2D) | (1, 1, 64) | 55360
Flatten_1 (Flatten) | (64) | 0
Dense_1 (Dense) | (256) | 16640
Dropout_1 (Dropout) | (256) | 0
Dense_2 (Dense) | (128) | 32896
Dropout_2 (Dropout) | (128) | 0
Dense_3 (Dense) | (16) | 2064

In total, 127,104 trainable parameters are required.

Figure 1: Hybrid 3D/2D CNN framework for HSIC

## III Experimental Settings

The experimental datasets used in this manuscript are used for a variety of unmixing and classification tasks, with differing numbers of bands, samples, and classes. Experimental validation is carried out using $K$-fold cross-validation as a base case to split the train/validation/test samples, with $K$ set to $5$. To validate the experimental results and our proposed pipeline, we conduct several statistical tests: Precision ($P_{r}$), Recall ($R_{c}$), F1 Score, Overall accuracy (OA, computed as the number of correctly classified examples out of the total test examples), Average accuracy (AA, the average class-wise classification performance), and Kappa ($\kappa$, a statistical metric that considers the mutual agreement between the classification and ground-truth maps), reported with a $95\%$ confidence interval to validate the claims made in this manuscript.
The said metrics are computed using the following mathematical expressions: $P_{r}=\frac{1}{Y}\sum_{i=1}^{Y}\frac{TP_{i}}{TP_{i}+FP_{i}}$, $R_{c}=\frac{1}{Y}\sum_{i=1}^{Y}\frac{TP_{i}}{TP_{i}+FN_{i}}$, $F1=\frac{2\times(R_{c}\times P_{r})}{(R_{c}+P_{r})}$, $OA=\frac{1}{N}\sum_{i=1}^{Y}TP_{i}$, and $\kappa=\frac{P_{o}-P_{e}}{1-P_{e}}$, where $P_{e}=P^{+}+P^{-}$, $P^{+}=\frac{TP+FN}{TP+FN+FP+TN}\times\frac{TP+FP}{TP+FN+FP+TN}$, $P^{-}=\frac{FN+TN}{TP+FN+FP+TN}\times\frac{FP+TN}{TP+FN+FP+TN}$, and $P_{o}=\frac{TP+TN}{TP+FN+FP+TN}$. Here $FP$ and $TP$ are the false and true positives, $FN$ and $TN$ are the false and true negatives computed from the confusion matrix, and $N$ is the total number of test samples. All the experiments are performed on Colab [28]. Colab provides the option to execute code in a Python 3 notebook with a GPU, $25$ GB of RAM, and $358.27$ GB of cloud storage for computations. In all the experiments, each dataset is initially divided in a $50/50\%$ ratio into training and test sets, and the training set is then further split in a $50/50\%$ ratio into training and validation samples. In all the experiments, the learning rate is set to $0.001$ and the activation function used for all layers is $relu$, except the last layer where $softmax$ is applied. The spatial dimensions of the 3D input patches for all datasets are set as $9\times 9\times 15$, $11\times 11\times 15$, $9\times 9\times 18$, $11\times 11\times 18$, $9\times 9\times 21$, $11\times 11\times 21$, $9\times 9\times 24$, $11\times 11\times 24$, and $9\times 9\times 27$, $11\times 11\times 27$, where $15,18,21,24$ and $27$ are the numbers of most informative bands extracted by PCA, iPCA, SPCA, ICA, SVD, and GRP.

## IV Experimental Results

The effectiveness of our proposed hybrid 3D/2D CNN is confirmed on five benchmark HSI datasets that are publicly available and were acquired by two different sensors, i.e., the Reflective Optics System Imaging Spectrometer (ROSIS) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
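For binary confusion counts, the metrics above reduce to a few ratios. A minimal pure-Python sketch; the counts are hypothetical, and the chance-agreement term follows the standard Cohen's kappa form $P^{+}=\frac{(TP+FN)(TP+FP)}{N^{2}}$:

```python
# Precision, recall, F1 and Cohen's kappa from binary confusion counts.
# The counts used in the example are illustrative, not experimental data.

def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    p_o = (tp + tn) / total                                 # observed agreement
    p_plus = ((tp + fn) / total) * ((tp + fp) / total)      # chance agreement, positive class
    p_minus = ((fn + tn) / total) * ((fp + tn) / total)     # chance agreement, negative class
    p_e = p_plus + p_minus
    kappa = (p_o - p_e) / (1 - p_e)
    return precision, recall, f1, kappa

p, r, f, k = metrics(tp=40, fp=10, fn=5, tn=45)  # hypothetical counts
```

With these counts, $P_{o}=0.85$ and $P_{e}=0.5$, so $\kappa=0.7$: kappa discounts the agreement expected by chance, which is why it is preferred over raw accuracy for imbalanced ground-truth maps.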
These datasets are Salinas-A (SA), Salinas full scene (SFS), Indian Pines (IP), and Pavia University (PU).

### IV-A Salinas-A Dataset (SA)

Salinas-A (SA) is a subset of the Salinas full scene (SFS) dataset consisting of $86\times 83$ samples per band with a total of $224$ bands, and has six classes as shown in Table IV. In the full Salinas scene, SA is located at $158-240$ and $591-676$. Detailed experimental results on the Salinas-A dataset are shown in Table II and Figure 2. Moreover, the statistical significance is shown in Table IV. The convergence loss and accuracy of our proposed Hybrid $3D/2D$ CNN over $50$ epochs with two different patch sizes are illustrated in Figure 3. From these accuracy and loss curves, one can deduce that our proposed model converges at around epoch $7$ for both the $9\times 9$ and $11\times 11$ window sizes. TABLE II: Kappa, Overall and Average accuracy for the Salinas-A dataset with different numbers of bands (i.e., $15$, $18$, $21$, $24$, $27$) and different patch sizes (i.e., $9\times 9$ and $11\times 11$).
Method | 15 Bands (9$\times$9 / 11$\times$11) | 18 Bands (9$\times$9 / 11$\times$11) | 21 Bands (9$\times$9 / 11$\times$11) | 24 Bands (9$\times$9 / 11$\times$11) | 27 Bands (9$\times$9 / 11$\times$11)
---|---|---|---|---|---
PCA ($\kappa$) | 99.95 / 100.00 | 100.00 / 100.00 | 99.53 / 100.00 | 100.00 / 100.00 | 100.00 / 100.00
PCA (OA) | 99.96 / 100.00 | 100.00 / 100.00 | 99.63 / 100.00 | 100.00 / 100.00 | 100.00 / 100.00
PCA (AA) | 99.95 / 100.00 | 100.00 / 100.00 | 99.49 / 100.00 | 100.00 / 100.00 | 100.00 / 100.00
iPCA ($\kappa$) | 90.76 / 99.81 | 100.00 / 90.76 | 99.95 / 100.00 | 99.95 / 99.81 | 99.95 / 99.95
iPCA (OA) | 92.67 / 99.85 | 100.00 / 92.67 | 99.96 / 100.00 | 99.96 / 99.85 | 99.96 / 99.96
iPCA (AA) | 83.33 / 99.75 | 100.00 / 83.33 | 99.98 / 100.00 | 99.91 / 99.70 | 99.95 / 99.95
SPCA ($\kappa$) | 85.29 / 99.06 | 99.77 / 99.91 | 90.76 / 99.91 | 99.86 / 99.95 | 99.91 / 100.00
SPCA (OA) | 88.48 / 99.25 | 99.81 / 99.93 | 92.67 / 99.93 | 99.89 / 99.96 | 99.93 / 100.00
SPCA (AA) | 83.33 / 99.12 | 99.64 / 99.95 | 83.33 / 99.89 | 99.93 / 99.95 | 99.95 / 100.00
ICA ($\kappa$) | 99.86 / 99.95 | 100.00 / 99.95 | 99.91 / 100.00 | 100.00 / 100.00 | 100.00 / 100.00
ICA (OA) | 99.89 / 99.96 | 100.00 / 99.96 | 99.93 / 100.00 | 100.00 / 100.00 | 100.00 / 100.00
ICA (AA) | 99.84 / 99.95 | 100.00 / 99.95 | 99.83 / 100.00 | 100.00 / 100.00 | 100.00 / 100.00
SVD ($\kappa$) | 99.67 / 0.00 | 0.00 / 74.69 | 100.00 / 75.13 | 99.81 / 84.81 | 75.35 / 49.97
SVD (OA) | 99.74 / 28.50 | 28.50 / 79.99 | 100.00 / 80.67 | 99.85 / 88.03 | 80.93 / 59.99
SVD (AA) | 99.65 / 16.67 | 16.67 / 66.56 | 100.00 / 66.28 | 99.81 / 82.47 | 66.52 / 66.67

Figure 2: Classification results for Salinas-A for different numbers of bands ($15,18,21,24,27$) selected using PCA, with different patch sizes ($9\times 9$ and $11\times 11$); panels (a)-(j) correspond to the (bands, window) pairs.
Figure 3: Accuracy and loss for the training and validation sets on the Salinas-A dataset over $50$ epochs with two different spatial dimensions ($9\times 9$ and $11\times 11$) and $15$ bands; panels show (a) accuracy and (b) loss for $9\times 9\times 15$, and (c) accuracy and (d) loss for $11\times 11\times 15$. ### IV-B Salinas Full Scene Dataset (SFS) The Salinas full scene (SFS) dataset was acquired over Salinas Valley, California, through the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. It comprises $512\times 217$ pixels per band over a total of $224$ bands with a $3.7$ meter spatial resolution. It consists of vineyard fields, vegetables, and bare soils, and contains sixteen classes as shown in Table V. A few water absorption bands ($108-112$, $154-167$ and $224$) are removed from the dataset before analysis. Detailed experimental results on the Salinas full scene dataset are shown in Table III and Figure 4, and the statistical significance is reported in Table V. The convergence of the loss and accuracy of our proposed Hybrid $3D/2D$ CNN over $50$ epochs with two different patch sizes is illustrated in Figure 5. From these accuracy and loss curves, one can deduce that the accuracy and loss of the proposed model converge at around epochs $8$ and $10$, respectively, for both the $9\times 9$ and $11\times 11$ window sizes. TABLE III: Kappa, Overall and Average accuracy for the Salinas dataset with different numbers of bands ($15,18,21,24,27$) and patch sizes ($9\times 9$ and $11\times 11$).
Method | Metric | 15 Bands | 18 Bands | 21 Bands | 24 Bands | 27 Bands
---|---|---|---|---|---|---
 | | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11
PCA | Kappa | 99.86 | 99.89 | 99.93 | 99.96 | 99.98 | 99.58 | 99.91 | 99.86 | 99.93 | 99.99
PCA | OA | 99.87 | 99.90 | 99.93 | 99.97 | 99.99 | 99.62 | 99.92 | 99.88 | 99.93 | 99.99
PCA | AA | 99.90 | 99.97 | 99.97 | 99.97 | 99.98 | 99.68 | 99.95 | 99.90 | 99.97 | 99.98
iPCA | Kappa | 97.42 | 21.48 | 91.66 | 42.17 | 97.70 | 1.57 | 93.46 | 84.19 | 84.19 | 91.70
iPCA | OA | 97.69 | 33.73 | 92.53 | 49.45 | 97.93 | 21.91 | 94.13 | 85.82 | 85.84 | 92.54
iPCA | AA | 93.35 | 18.66 | 90.33 | 33.55 | 97.61 | 7.37 | 91.46 | 72.51 | 70.23 | 85.30
SPCA | Kappa | 95.97 | 33.53 | 96.57 | 88.31 | 99.31 | 83.12 | 96.65 | 69.92 | 97.04 | 81.52
SPCA | OA | 96.38 | 39.80 | 96.92 | 89.51 | 99.38 | 85.00 | 97.00 | 73.45 | 97.34 | 83.77
SPCA | AA | 92.13 | 30.96 | 97.25 | 87.04 | 99.32 | 74.04 | 92.26 | 52.74 | 97.46 | 72.27
ICA | Kappa | 99.08 | 99.56 | 99.17 | 99.93 | 99.07 | 99.77 | 99.72 | 99.75 | 99.83 | 99.93
ICA | OA | 99.17 | 99.60 | 99.25 | 99.93 | 99.17 | 99.80 | 99.75 | 99.78 | 99.85 | 99.94
ICA | AA | 99.46 | 99.81 | 99.58 | 99.94 | 99.42 | 99.90 | 99.87 | 99.87 | 99.89 | 99.91
SVD | Kappa | 98.02 | 0.00 | 97.14 | 0.00 | 82.59 | 94.16 | 96.22 | 16.91 | 71.79 | 0.00
SVD | OA | 98.22 | 20.82 | 97.43 | 13.43 | 84.44 | 94.76 | 96.61 | 32.28 | 74.82 | 20.82
SVD | AA | 98.40 | 6.25 | 98.54 | 6.25 | 73.60 | 91.05 | 91.93 | 12.50 | 68.44 | 6.25
Figure 4: Classification results for Salinas for different numbers of bands ($15,18,21,24,27$) selected using PCA, with different patch sizes ($9\times 9$ and $11\times 11$); panels (a)-(j) correspond to the (bands, patch size) pairs $(15,9)$, $(15,11)$, $(18,9)$, $(18,11)$, $(21,9)$, $(21,11)$, $(24,9)$, $(24,11)$, $(27,9)$, $(27,11)$.
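The band-reduction step that produces the $15$-$27$ "bands" fed to the network can be sketched with plain NumPy. This is a minimal PCA via SVD (the paper's pipeline could equally use a library implementation such as scikit-learn's; the function name and sizes here are illustrative):

```python
import numpy as np

def pca_reduce_bands(cube, k):
    """Reduce an (H, W, B) hyperspectral cube to (H, W, k) via PCA."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    X -= X.mean(axis=0)                          # mean-center each band
    # Right singular vectors of the pixel matrix = principal axes in band space
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:k].T).reshape(H, W, k)

# Example: a toy 224-band cube reduced to 15 components
cube = np.random.default_rng(0).random((8, 8, 224))
reduced = pca_reduce_bands(cube, 15)
```

Because the singular values are sorted, the first output "band" always carries the most variance, which matches the usual practice of keeping the leading components.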
(a) Accuracy, $9\times 9\times 15$ (b) Loss, $9\times 9\times 15$ (c) Accuracy, $11\times 11\times 15$ (d) Loss, $11\times 11\times 15$ Figure 5: Accuracy and Loss for Training and Validation sets on Salinas for $50$ number of epochs with two different spatial dimensions ($9\times 9$ and $11\times 11$) and $15$ number of bands. TABLE IV: Class Names, Total Samples, Train, Validation and Test Sample numbers (Tr, Val, Te) along with the Statistical Test (Precision ($P_{r}$), Recall ($R_{c}$) & F1-Score (F1)) results for Salinas-A dataset with PCA as dimensional reduction method over two window sizes ($W_{1}=9\times 9$ and $W_{2}=11\times 11$). Class | Tr, Val, Te | 15 Bands | 18 Bands | 21 Bands | 24 Bands | 27 Bands ---|---|---|---|---|---|--- $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ Brocoli green weeds 1 | 98, 97, 196 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.98/1.00 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Corn senesced green weeds | 336, 336, 671 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce romaine 4wk | 154, 154, 308 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce romaine 5wk | 381, 382, 762 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce romaine 6wk | 168, 169, 337 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 
1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce romaine 7wk | 200, 199, 400 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 TABLE V: Class Names, Total Samples, Train, Validation and Test Sample numbers (Tr, Val, Te) along with the Statistical Test (Precision ($P_{r}$), Recall ($R_{c}$) & F1-Score (F1)) results for Salinas with PCA as dimensional reduction method over two window sizes ($W_{1}=9\times 9$ and $W_{2}=11\times 11$). Class | Tr, Val, Te | 15 Bands | 18 Bands | 21 Bands | 24 Bands | 27 Bands ---|---|---|---|---|---|--- $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ Brocoli_green_weeds_1 | 502, 503, 1004 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1 .00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Brocoli_green_weeds_2 | 932, 931, 1863 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Fallow | 494 , 494 , 988 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Fallow_rough_plow | 348, 349, 697 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Fallow_smooth | 669, 670, 1339 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Stubble | 990, 989, 1980 | 1.00/1.00 | 1.00/1.00 | 
1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Celery | 895, 894, 1790 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Grapes_untrained | 2818, 2817, 5636 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Soil_vinyard_develop | 1550, 1551, 3102 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Corn_senesced_green_weeds | 820, 819, 1639 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce_romaine_4wk | 267, 267, 534 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce_romaine_5wk | 482, 482, 963 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce_romaine_6wk | 229, 229, 458 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.98 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Lettuce_romaine_7wk | 267, 268, 535 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.98 | 1.00/0.99 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Vinyard_untrained | 1817, 1817, 3634 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 
1.00/1.00 | 1.00/0.98 | 1.00/0.99 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Vinyard_vertical_trellis | 452, 452, 903 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 ### IV-C Indian Pines Dataset (IP) The Indian Pines (IP) dataset was collected over northwestern Indiana's test site, Indian Pines, by the AVIRIS sensor and comprises $145\times 145$ pixels and $224$ bands in the wavelength range $0.4-2.5\times 10^{-6}$ meters. The scene consists of two-thirds agricultural area and one-third forest or other naturally evergreen vegetation. A railway line, two dual-lane highways, low-density buildings and housing, and small roads are also part of this scene. Furthermore, some crops in the early stages of growth are present, covering less than $5\%$ of the total area. The ground truth comprises a total of $16$ classes, not all of which are mutually exclusive. The details of the ground truth classes are shown in Table VII. The number of spectral bands is reduced from $224$ to $200$ by removing the water absorption bands. The convergence of the loss and accuracy of our proposed Hybrid $3D/2D$ CNN over $50$ epochs with two different patch sizes is illustrated in Figure 7. From these accuracy and loss curves, one can deduce that the proposed model converges at around epoch $35$ for both the $9\times 9$ and $11\times 11$ window sizes. Detailed experimental results on the Indian Pines dataset are shown in Table VI and Figure 6, and the statistical significance is reported in Table VII. TABLE VI: Kappa, Overall and Average accuracy for Indian Pines with different numbers of bands ($15,18,21,24,27$) and patch sizes ($9\times 9$ and $11\times 11$).
Method | Metric | 15 Bands | 18 Bands | 21 Bands | 24 Bands | 27 Bands
---|---|---|---|---|---|---
 | | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11
PCA | Kappa | 96.75 | 96.84 | 96.75 | 97.00 | 97.55 | 96.95 | 97.53 | 97.24 | 91.54 | 97.00
PCA | OA | 97.15 | 97.23 | 97.15 | 97.37 | 97.85 | 97.33 | 97.83 | 97.58 | 92.57 | 97.37
PCA | AA | 97.54 | 95.57 | 97.51 | 93.39 | 97.37 | 96.98 | 97.49 | 95.96 | 91.53 | 96.95
iPCA | Kappa | 62.30 | 35.54 | 18.81 | 66.48 | 83.29 | 73.33 | 38.91 | 86.50 | 0.00 | 84.72
iPCA | OA | 66.87 | 47.47 | 36.27 | 70.71 | 85.40 | 76.82 | 49.89 | 88.18 | 23.96 | 86.63
iPCA | AA | 56.35 | 25.41 | 12.49 | 46.91 | 67.58 | 50.32 | 31.16 | 81.44 | 6.25 | 75.68
SPCA | Kappa | 73.20 | 75.66 | 76.12 | 81.86 | 68.95 | 11.82 | 73.96 | 79.81 | 0.00 | 75.88
SPCA | OA | 76.72 | 78.87 | 79.08 | 84.16 | 72.88 | 28.45 | 77.23 | 82.44 | 23.96 | 78.79
SPCA | AA | 65.43 | 56.02 | 64.46 | 74.97 | 59.91 | 12.27 | 65.39 | 60.81 | 6.25 | 53.97
ICA | Kappa | 68.52 | 71.24 | 65.13 | 71.36 | 79.84 | 84.42 | 91.83 | 90.23 | 79.94 | 90.55
ICA | OA | 72.72 | 75.10 | 69.76 | 75.36 | 82.42 | 86.40 | 92.84 | 91.41 | 82.48 | 91.75
ICA | AA | 60.79 | 59.92 | 57.74 | 61.50 | 70.80 | 81.28 | 86.59 | 85.93 | 74.69 | 85.49
SVD | Kappa | 14.77 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 29.94 | 0.00 | 0.00
SVD | OA | 30.79 | 23.96 | 23.96 | 23.96 | 23.96 | 23.96 | 23.96 | 43.28 | 23.96 | 23.96
SVD | AA | 12.34 | 6.25 | 6.25 | 6.25 | 6.25 | 6.25 | 6.25 | 18.70 | 6.25 | 6.25
Figure 6: Classification results for Indian Pines for different numbers of bands ($15,18,21,24,27$) with $9\times 9$ and $11\times 11$ patch sizes; panels (a)-(j) correspond to the (bands, patch size) pairs $(15,9)$, $(15,11)$, $(18,9)$, $(18,11)$, $(21,9)$, $(21,11)$, $(24,9)$, $(24,11)$, $(27,9)$, $(27,11)$.
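The patch-based inputs used throughout ($9\times 9$ and $11\times 11$ windows centred on each labelled pixel) can be extracted as follows. This is a generic sketch, not the authors' code; it pads the cube by reflection so border pixels also get full-size patches, and follows the common convention that label $0$ marks unlabelled background:

```python
import numpy as np

def extract_patches(cube, labels, window=9):
    """Return (N, window, window, B) patches and (N,) zero-based targets."""
    m = window // 2
    padded = np.pad(cube, ((m, m), (m, m), (0, 0)), mode="reflect")
    patches, targets = [], []
    for r, c in zip(*np.nonzero(labels)):       # labelled pixels only (0 = unlabelled)
        patches.append(padded[r:r + window, c:c + window, :])
        targets.append(labels[r, c] - 1)        # shift class ids to start at 0
    return np.stack(patches), np.array(targets)

cube = np.random.default_rng(1).random((10, 10, 5))
labels = np.zeros((10, 10), dtype=int)
labels[2, 3] = 1
labels[5, 5] = 4
patches, targets = extract_patches(cube, labels, window=9)
```

Because the pad width equals half the window, indexing the padded cube at `[r:r+window, c:c+window]` is always in bounds, whatever the centre pixel.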
(a) Accuracy, $9\times 9\times 15$ (b) Loss, $9\times 9\times 15$ (c) Accuracy, $11\times 11\times 15$ (d) Loss, $11\times 11\times 15$ Figure 7: Accuracy and Loss for Training and Validation sets on Indian Pines for $50$ number of epochs with two different spatial dimensions ($9\times 9$ and $11\times 11$) and $15$ number of bands. TABLE VII: Class Names, Total Samples, Train, Validation and Test Sample numbers (Tr, Val, Te) along with the Statistical Test (Precision ($P_{r}$), Recall ($R_{c}$) & F1-Score (F1)) results for Indian Pines dataset with PCA as dimensional reduction method over two window sizes i.e., $W_{1}=9\times 9$ and $W_{2}=11\times 11$. Class | Tr, Val, Te | 15 Bands | 18 Bands | 21 Bands | 24 Bands | 27 Bands ---|---|---|---|---|---|--- $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ Alfalfa | 11, 12, 23 | 1.00/1.00 | 0.96/0.91 | 0.98/0.95 | 1.00/1.00 | 0.96/0.91 | 0.98/0.95 | 1.00/1.00 | 0.96/0.87 | 0.98/0.93 | 1.00/1.00 | 1.00/0.96 | 1.00/0.98 | 1.00/0.96 | 1.00/0.96 | 1.00/0.96 Corn-notill | 357, 357, 714 | 0.96/0.96 | 0.96/0.98 | 0.96/0.97 | 0.95/0.95 | 0.95/0.96 | 0.95/0.96 | 0.98/0.96 | 0.96/0.97 | 0.97/0.96 | 0.99/0.98 | 0.96/0.97 | 0.97/0.97 | 0.94/0.98 | 0.85/0.94 | 0.89/0.96 Corn-mintill | 208, 207, 415 | 0.96/0.97 | 0.98/0.98 | 0.97/0.97 | 0.94/0.95 | 0.99/0.98 | 0.96/0.96 | 0.95/0.95 | 0.99/0.97 | 0.97/0.96 | 0.96/0.95 | 0.95/0.98 | 0.97/0.96 | 0.86/0.94 | 0.97/0.98 | 0.91/0.96 Corn | 59, 60, 118 | 0.99/1.00 | 0.95/0.89 | 0.97/0.94 | 0.97/1.00 | 0.89/0.86 | 0.93/0.92 | 0.94/0.96 | 0.93/0.92 | 0.94/0.94 | 1.00/0.94 | 0.92/0.96 | 0.96/0.95 | 0.96/1.00 | 0.84/0.93 | 0.90/0.96 Grass-pasture | 121, 120, 242 | 1.00/0.99 | 0.98/0.95 | 0.99/0.97 | 0.99/0.99 | 0.96/0.99 | 0.97/0.99 | 1.00/0.99 | 0.97/0.98 | 0.98/0.99 | 0.97/0.99 | 0.97/0.95 | 0.97/0.97 | 1.00/0.98 | 0.93/0.95 | 0.97/0.97 
Grass-trees | 183, 182, 365 | 1.00/0.99 | 0.99/1.00 | 0.99/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 0.98/0.99 | 1.00/1.00 | 0.99/1.00 | 0.98/1.00 | 0.99/1.00 | 0.99/1.00 Grass-pasture-mowed | 7, 7, 14 | 1.00/0.92 | 1.00/0.86 | 1.00/0.89 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.93/0.93 | 1.00/1.00 | 0.97/0.97 | 0.64/0.93 | 1.00/1.00 | 0.78/0.97 | 0.82/0.74 | 1.00/1.00 | 0.90/0.85 Hay-windrowed | 120, 119, 239 | 1.00/0.99 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Oats | 5, 5, 10 | 0.91/0.90 | 1.00/0.90 | 0.95/0.90 | 0.83/1.00 | 1.00/0.40 | 0.91/0.57 | 1.00/1.00 | 0.90/1.00 | 0.95/1.00 | 1.00/1.00 | 0.90/0.70 | 0.95/0.82 | 0.83/0.90 | 0.50/0.90 | 0.62/0.90 Soybean-notill | 243, 243, 486 | 0.96/0.95 | 0.93/0.94 | 0.94/0.95 | 0.99/0.98 | 0.93/0.93 | 0.96/0.95 | 0.99/0.95 | 0.95/0.94 | 0.97/0.95 | 0.96/0.99 | 0.96/0.93 | 0.96/0.96 | 0.78/0.97 | 0.90/0.94 | 0.83/0.95 Soybean-mintill | 614, 613, 1228 | 0.97/0.98 | 0.98/0.97 | 0.97/0.98 | 0.98/0.98 | 0.97/0.98 | 0.98/0.98 | 0.98/0.98 | 0.99/0.98 | 0.98/0.98 | 0.98/0.98 | 0.99/0.99 | 0.98/0.98 | 0.94/0.97 | 0.91/0.99 | 0.93/0.98 Soybean-clean | 148, 148, 297 | 0.94/0.92 | 0.94/0.96 | 0.94/0.94 | 0.93/0.94 | 0.98/0.98 | 0.95/0.96 | 0.96/0.95 | 0.98/0.94 | 0.97/0.94 | 0.96/0.95 | 0.99/0.97 | 0.97/0.96 | 0.92/0.95 | 0.86/0.97 | 0.89/0.96 Wheat | 51, 52, 102 | 1.00/1.00 | 0.98/1.00 | 0.99/1.00 | 0.97/1.00 | 1.00/0.98 | 0.99/0.99 | 1.00/0.96 | 0.99/0.99 | 1.00/0.98 | 1.00/1.00 | 0.98/0.99 | 0.99/1.00 | 1.00/1.00 | 0.98/0.98 | 0.99/0.99 Woods | 316, 316, 633 | 0.99/0.99 | 0.99/0.99 | 0.99/0.99 | 0.99/0.99 | 0.99/1.00 | 0.99/0.99 | 0.99/1.00 | 0.99/1.00 | 0.99/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 0.97/0.99 | 1.00/1.00 | 0.98/1.00 Buildings-Grass-Trees-Drives | 96, 97, 193 | 0.93/0.97 | 0.97/0.96 | 0.95/0.97 | 0.93/0.97 | 0.98/0.98 | 0.95/0.97 | 0.93/0.99 | 0.98/0.95 
| 0.95/0.97 | 1.00/0.92 | 0.99/0.98 | 0.99/0.95 | 0.96/0.95 | 0.91/0.98 | 0.94/0.96 Stone-Steel-Towers | 23, 24, 46 | 0.92/0.92 | 1.00/1.00 | 0.96/0.96 | 0.92/0.94 | 1.00/1.00 | 0.96/0.97 | 0.90/0.92 | 1.00/1.00 | 0.95/0.96 | 0.98/0.92 | 1.00/1.00 | 0.99/0.96 | 0.78/0.96 | 1.00/1.00 | 0.88/0.98 ### IV-D Pavia University Dataset (PU) The Pavia University (PU) dataset was gathered over Pavia in northern Italy by the Reflective Optics System Imaging Spectrometer (ROSIS) optical sensor during a flight campaign. It consists of $610\times 610$ pixels and $103$ spectral bands with a spatial resolution of $1.3$ meters. Some samples in this dataset provide no information and are removed before analysis. There are $9$ ground truth classes in total, as shown in Table IX. The convergence of the loss and accuracy of our proposed Hybrid $3D/2D$ CNN over $50$ epochs with two different patch sizes is illustrated in Figure 9. From these accuracy and loss curves, one can deduce that the proposed model converges at around epoch $10$ for both the $9\times 9$ and $11\times 11$ window sizes. Detailed experimental results on the Pavia University dataset are shown in Table VIII and Figure 8, and the statistical significance is reported in Table IX. TABLE VIII: Kappa, Overall and Average accuracy for Pavia University with different numbers of bands ($15,18,21,24,27$) and patch sizes ($9\times 9$ and $11\times 11$).
Method | Metric | 15 Bands | 18 Bands | 21 Bands | 24 Bands | 27 Bands
---|---|---|---|---|---|---
 | | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11 | 9 $\times$ 9 | 11 $\times$ 11
PCA | Kappa | 56.97 | 99.61 | 58.56 | 99.69 | 56.77 | 99.76 | 59.40 | 99.77 | 64.80 | 99.68
PCA | OA | 61.93 | 99.71 | 63.39 | 99.77 | 61.74 | 99.82 | 63.62 | 99.83 | 68.73 | 99.76
PCA | AA | 44.55 | 99.54 | 47.32 | 99.55 | 46.56 | 99.69 | 55.41 | 99.64 | 57.55 | 99.59
iPCA | Kappa | 39.35 | 0.00 | 67.85 | 99.11 | 48.34 | 99.40 | 55.18 | 98.77 | 60.45 | 98.93
iPCA | OA | 46.43 | 43.59 | 71.34 | 99.33 | 54.30 | 99.55 | 60.02 | 99.07 | 64.47 | 99.20
iPCA | AA | 31.76 | 11.11 | 65.84 | 98.72 | 37.98 | 99.11 | 43.91 | 98.59 | 51.85 | 98.98
SPCA | Kappa | 51.37 | 19.39 | 46.13 | 99.18 | 62.65 | 99.70 | 56.53 | 99.45 | 60.21 | 99.44
SPCA | OA | 56.45 | 41.50 | 52.53 | 99.38 | 66.50 | 99.78 | 61.28 | 99.59 | 64.74 | 99.57
SPCA | AA | 42.69 | 15.70 | 34.69 | 98.95 | 55.65 | 99.64 | 46.94 | 99.25 | 47.46 | 99.18
ICA | Kappa | 0.00 | 98.78 | 0.00 | 98.96 | 0.00 | 99.18 | 0.00 | 99.57 | 0.00 | 99.30
ICA | OA | 17.81 | 99.08 | 17.81 | 99.21 | 17.81 | 99.38 | 17.81 | 99.67 | 17.81 | 99.47
ICA | AA | 7.69 | 98.28 | 7.69 | 98.85 | 7.69 | 99.11 | 7.69 | 99.53 | 7.69 | 99.25
SVD | Kappa | 45.29 | 0.00 | 50.72 | 97.13 | 56.33 | 99.08 | 49.49 | 0.00 | 54.06 | 98.57
SVD | OA | 52.57 | 43.59 | 57.06 | 97.83 | 61.17 | 99.30 | 56.10 | 43.59 | 59.59 | 98.92
SVD | AA | 33.28 | 11.11 | 40.52 | 97.48 | 50.77 | 98.73 | 37.56 | 11.11 | 45.42 | 98.51
Figure 8: Classification results for the Pavia University dataset for different numbers of bands ($15,18,21,24,27$) selected using PCA, with different patch sizes ($9\times 9$ and $11\times 11$); panels (a)-(j) correspond to the (bands, patch size) pairs $(15,9)$, $(15,11)$, $(18,9)$, $(18,11)$, $(21,9)$, $(21,11)$, $(24,9)$, $(24,11)$, $(27,9)$, $(27,11)$.
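The Tr/Val/Te columns of Tables IV, V, VII and IX follow a roughly $25/25/50\%$ per-class split (a $50/50$ train/test split whose training half is split again $50/50$ into train/validation, as Table X's caption states). One way to produce such a stratified split (a sketch with an illustrative function name, not the authors' code):

```python
import numpy as np

def stratified_split(targets, test_frac=0.5, val_frac=0.5, seed=0):
    """Per-class split: hold out test_frac for testing, then take val_frac of
    the remainder for validation (50/50 then 50/50 -> ~25/25/50 overall)."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for c in np.unique(targets):
        idx = rng.permutation(np.nonzero(targets == c)[0])
        n_test = int(round(test_frac * len(idx)))
        n_val = int(round(val_frac * (len(idx) - n_test)))
        test.extend(idx[:n_test])
        val.extend(idx[n_test:n_test + n_val])
        train.extend(idx[n_test + n_val:])
    return np.array(train), np.array(val), np.array(test)

# Example: two classes of 100 and 40 samples
targets = np.array([0] * 100 + [1] * 40)
tr, va, te = stratified_split(targets)
```

Splitting per class keeps the class proportions identical in all three subsets, which matters on imbalanced scenes where a random global split could starve small classes of training samples.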
(a) Accuracy, $9\times 9\times 15$ (b) Loss, $9\times 9\times 15$ (c) Accuracy, $11\times 11\times 15$ (d) Loss, $11\times 11\times 15$ Figure 9: Accuracy and Loss for Training and Validation sets on Pavia University for $50$ number of epochs with two different spatial dimensions ($9\times 9$ and $11\times 11$) and $15$ number of bands. TABLE IX: Class Names, Total Samples, Train, Validation and Test Sample numbers (Tr, Val, Te) along with the Statistical Test (Precision ($P_{r}$), Recall ($R_{c}$) & F1-Score (F1)) results for Pavia University with PCA as dimensional reduction method over two window sizes ($W_{1}=9\times 9$ and $W_{2}=11\times 11$). Class | Tr, Val, Te | 15 Bands | 18 Bands | 21 Bands | 24 Bands | 27 Bands ---|---|---|---|---|---|--- $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 | $P_{r}$ | $R_{c}$ | F1 $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ | $W_{1}/W_{2}$ Asphalt | 1657, 1658, 3316 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Meadows | 4662, 4663, 9324 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Gravel | 525, 525, 1049 | 0.99/0.98 | 0.99/0.98 | 0.99/0.98 | 1.00/0.99 | 0.88/0.99 | 0.93/0.99 | 0.99/1.00 | 0.96/0.98 | 0.98/0.99 | 0.98/1.00 | 0.99/0.99 | 0.99/0.99 | 0.99/0.99 | 0.99/0.98 | 0.99/0.99 Trees | 766, 766, 1532 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/0.99 | 0.99/1.00 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Painted metal sheets | 336, 336, 673 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 
Bare Soil | 1258, 1257, 2514 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 Bitumen | 333, 332, 665 | 1.00/0.99 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/0.99 | 1.00/1.00 | 0.99/1.00 | 0.99/0.99 | 1.00/1.00 | 1.00/1.00 Self-Blocking Bricks | 921, 920, 1841 | 0.99/0.99 | 0.99/0.99 | 0.99/0.99 | 0.93/0.99 | 1.00/0.99 | 0.96/0.99 | 0.98/0.99 | 0.99/1.00 | 0.99/0.99 | 1.00/0.99 | 0.98/1.00 | 0.99/0.99 | 0.99/0.99 | 0.99/0.99 | 0.99/0.99 Shadows | 236, 237, 474 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.99/0.99 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 ## V Results Discussion In the experiments discussed above, we evaluated the proposed model along two axes. First, we compared several dimensionality reduction approaches for the Hybrid 3D/2D CNN, assessing performance for different numbers of spectral bands ($15,18,21,24$ and $27$) extracted through the PCA, iPCA, SPCA, SVD, and ICA methods. Second, we examined the effect of the input window size on the classification performance of the proposed model by choosing two different patch sizes ($9\times 9$ and $11\times 11$). The experimental results on the five benchmark datasets (described in Section III) are presented in Tables II, III, VI and VIII and Figures 3-8. From these results, one can conclude that, for all the datasets, the proposed model performed significantly better with PCA than with the other well-known dimensionality reduction methods. One can also observe that the $\kappa$, OA, and AA values remain almost the same as the number of extracted spectral bands increases.
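The core data flow of the hybrid design, in which 3D convolutions first extract joint spectral-spatial features and their spectral feature maps are then folded into channels for 2D convolutions, can be illustrated with a naive NumPy sketch. The loops are for clarity only, and the kernel sizes and filter counts are illustrative, not the paper's exact architecture:

```python
import numpy as np

def conv3d_valid(x, w):
    """x: (H, W, B), w: (kh, kw, kb, F) -> (H-kh+1, W-kw+1, B-kb+1, F)."""
    kh, kw, kb, F = w.shape
    H, W, B = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, B - kb + 1, F))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for b in range(out.shape[2]):
                # Contract the (kh, kw, kb) patch against each 3D filter
                out[i, j, b] = np.tensordot(x[i:i + kh, j:j + kw, b:b + kb], w, axes=3)
    return out

def conv2d_valid(x, w):
    """x: (H, W, C), w: (kh, kw, C, F) -> (H-kh+1, W-kw+1, F)."""
    kh, kw, C, F = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, F))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.tensordot(x[i:i + kh, j:j + kw], w, axes=3)
    return out

rng = np.random.default_rng(0)
patch = rng.random((11, 11, 15))                    # one 11x11 patch, 15 PCA bands
f3d = conv3d_valid(patch, rng.random((3, 3, 3, 4))) # -> (9, 9, 13, 4)
flat = f3d.reshape(9, 9, 13 * 4)                    # fold spectral maps into channels
f2d = conv2d_valid(flat, rng.random((3, 3, 52, 8))) # -> (7, 7, 8)
```

The reshape between the two stages is the key step: it lets the cheap 2D stage refine spatial features from all spectral feature maps at once, which is why the hybrid is lighter than a fully 3D network of similar depth.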
The classification performance of CNN-based HSIC models also relies on the input window size. If the patch size is too small, it reduces the spatial context and class diversity captured within a sample; if it is set too large, a patch may take in pixels from several classes; both cases result in misclassification. We evaluated the proposed framework with two window sizes (i.e., $W_{1}=9\times 9$ and $W_{2}=11\times 11$). From the experimental results on the Salinas-A, Salinas full scene, and Indian Pines datasets, it can be observed that there is a slight improvement in the classification results with the increased window size. However, on the Pavia University and Botswana datasets, one can notice a considerable enhancement in classification accuracy with the $11\times 11$ patch compared to $9\times 9$. ## VI Comparison with State-of-the-art The proposed method is compared with various state-of-the-art frameworks published in recent years. From the experimental results presented in Table X, one can see that the proposed Hybrid $3D/2D$ CNN obtained results comparable to, and in some cases better than, the state-of-the-art frameworks. The comparative frameworks used in this work are Multi-scale-3D-CNN [25] and 3D/2D-CNN [24, 29]. All comparative models were trained with the settings reported in their respective papers. The experimental results listed in Table X show that the proposed Hybrid $3D/2D$ CNN significantly improved the classification results compared to the other methods while using fewer convolutional layers, fewer filters, fewer epochs, a smaller number of training samples, and, above all, less computational time. TABLE X: Comparative evaluation against state-of-the-art methods using $11\times 11$ spatial dimensions and an even smaller number of training samples (i.e., $50/50\%$ train/test, with the training portion further split $50/50\%$ into train/validation).
Dataset | 2D-CNN | 3D-CNN | MS-3D-CNN | Proposed
---|---|---|---|---
 | OA | AA | Kappa | OA | AA | Kappa | OA | AA | Kappa | OA | AA | Kappa
Salinas Full | 96.34 | 94.36 | 95.93 | 85.00 | 89.63 | 83.20 | 94.20 | 96.66 | 93.61 | 99.89 | 99.90 | 99.97
Indian Pines | 80.27 | 68.32 | 75.26 | 82.62 | 76.51 | 79.25 | 81.39 | 75.22 | 81.20 | 96.84 | 97.23 | 95.57
Botswana | 82.54 | 80.23 | 79.62 | 88.26 | 86.15 | 87.52 | 83.93 | 83.22 | 82.56 | 97.73 | 97.91 | 98.20
Pavia University | 96.63 | 94.84 | 95.53 | 96.34 | 97.03 | 94.90 | 95.95 | 97.52 | 93.40 | 99.61 | 99.71 | 99.54
## VII Conclusion Hyperspectral Image Classification (HSIC) is a challenging task due to the spectral mixing effect, which induces high intra-class variability and high inter-class similarity in HSI. 2D CNNs are commonly utilized for efficient spatial feature exploitation, and several variants of 3D CNNs are used for joint spectral-spatial HSIC. However, 3D CNNs are computationally complex, and 2D CNNs alone cannot efficiently extract discriminating spectral features. To overcome these challenges, this work presented a Hybrid 3D/2D CNN model that provides outstanding HSI classification results on five benchmark datasets in a computationally efficient manner. In summary, our end-to-end trained Hybrid 3D/2D CNN framework significantly improved the classification results compared to other state-of-the-art methods while being computationally less complex. ## Reproducibility The running code is available on GitHub at https://github.com/mahmad00. ## References * [1] M. Ahmad, S. Shabbir, D. Oliva, M. Mazzara, and S. Distefano, “Spatial-prior generalized fuzziness extreme learning machine autoencoder-based active learning for hyperspectral image classification,” _Optik_ , vol. 206, p. 163712, 2020. * [2] Y. Wang, W. Yu, and Z. Fang, “Multiple kernel-based svm classification of hyperspectral images by combining spectral, spatial, and semantic information,” _Remote Sensing_ , vol.
12, no. 1, p. 120, 2020. * [3] J. Ma, B. Xiao, and C. Deng, “Graph based semi-supervised classification with probabilistic nearest neighbors,” _Pattern Recognition Letters_ , vol. 133, pp. 94–101, 2020. * [4] M. Ahmad, M. Mazzara, R. A. Raza, S. Distefano, M. Asif, M. S. Sarfraz, A. M. Khan, and A. Sohaib, “Multiclass non-randomized spectral–spatial active learning for hyperspectral image classification,” _Applied Sciences_ , vol. 10, no. 14, p. 4739, 2020. * [5] A. Alcolea, M. E. Paoletti, J. M. Haut, J. Resano, and A. Plaza, “Inference in supervised spectral classifiers for on-board hyperspectral imaging: An overview,” _Remote Sensing_ , vol. 12, no. 3, p. 534, 2020. * [6] M. Ahmad, A. Khan, A. M. Khan, M. Mazzara, S. Distefano, A. Sohaib, and O. Nibouche, “Spatial prior fuzziness pool-based interactive classification of hyperspectral images,” _Remote Sensing_ , vol. 11, no. 9, p. 1136, 2019\. * [7] J. Xia, P. Du, X. He, and J. Chanussot, “Hyperspectral remote sensing image classification based on rotation forest,” _IEEE Geoscience and Remote Sensing Letters_ , vol. 11, no. 1, pp. 239–243, 2013. * [8] M. Ahmad, “Fuzziness-based spatial-spectral class discriminant information preserving active learning for hyperspectral image classification,” _arXiv preprint arXiv:2005.14236_ , 2020. * [9] M. Ahmad, A. M. Khan, M. Mazzara, S. Distefano, M. Ali, and M. S. Sarfraz, “A fast and compact 3-d cnn for hyperspectral image classification,” _IEEE Geoscience and Remote Sensing Letters_ , pp. 1–5, 2020. * [10] M. Ahmad, M. A. Alqarni, A. M. Khan, R. Hussain, M. Mazzara, and S. Distefano, “Segmented and non-segmented stacked denoising autoencoder for hyperspectral band reduction,” _Optik_ , vol. 180, pp. 370 – 378, 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0030402618316644 * [11] J. A. Benediktsson, J. A. Palmason, and J. R. 
# How Thermal Annealing Process Determines the Inherent Structure Evolution in Amorphous Silicon: An Investigation from Atomistic Time Scales to Experimental Time Scales Yanguang Zhou<EMAIL_ADDRESS>Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (August 28, 2024) ###### Abstract The annealing treatment in advanced manufacturing processes, e.g., laser-assisted manufacturing, determines the final state of glasses, which is critical to their thermal, electrical, and mechanical properties. Energy barrier analysis based on the potential energy surface offers an effective way to study the microscopic evolution of the inherent structures during the annealing process over a broad timescale range, i.e., from the atomistic timescale ($\sim$ ps) to the experimental timescale ($\sim$ s). Here, we find that the distribution of activation energy barriers in the potential energy surface can be divided into three regimes: 1, the distribution mainly follows the Rayleigh distribution when the annealing rate $\dot{R}<5\times{{10}^{15}}\ \text{K/s}$; 2, two different modes, i.e., an exponentially decaying mode and a Rayleigh distribution mode, are found in the spectra when the annealing rate lies between $5\times{{10}^{15}}\ \text{K/s}$ and the instant quench; 3, the spectra almost entirely follow the exponentially decaying mode when the system undergoes an instant quench. In contrast, the spectra of relaxation energy barriers only show an exponentially decaying mode with a decreasing decay parameter. A multi-timescale model for any specific annealing rate, beyond the limit of conventional atomistic simulations, i.e., molecular dynamics simulations, is then proposed based on the distribution of the energy barriers. Such a model enables quantitative explanations and predictions of the heat release during the annealing process in nanocalorimetry measurements or laser-assisted manufacturing.
Structural defects such as dislocations and grain boundaries, which are well defined in crystals, cannot be readily characterized in amorphous materials because the topology of atomic connectivity varies locally. For this reason, amorphous materials, e.g., amorphous silicon, have many promising physical, chemical, and mechanical properties compared to their crystalline counterparts, e.g., extremely low thermal conductivity Goldsmid1983 ; Larkin2014 ; Kwon2016 ; He2011 ; Liu2009 ; Moon2018 ; Saaskilahti2016 ; Zhou2016 ; Zhou2015 ; Zhou20162 , exceptional charge capacity Ding2015 ; Ding2017 and high strength Patinet2016 ; Fan2013 . In amorphous materials, many properties are strongly related to the degree of relaxation of the system, which can be quantified via the inherent structure energy (ISE), i.e., $E_{\text{IS}}$, calculated by removing the kinetic energy Kallel2010 . Therefore, properties such as damage evolution Patinet2016 ; Fan2015 and flow stress Fan20132 of amorphous materials can be modulated through the ISE. One of the most popular approaches to varying the ISE of amorphous materials is to adjust the dynamic annealing process Liu2020 ; Fan2017 ; Ramachandramoorthy2016 . Recent developments in ultrafast laser pulse techniques Shou2019 enable annealing rates as high as $10^{6}$ K/s, which to date remains an unresolved challenge for traditional modelling tools, i.e., molecular dynamics (MD) techniques Fan2017 ; Deringer2018 ; Fan2014 ; Chowdhury2016 . The potential energy surface (PES) generated from the inherent structure energy has been proven capable of characterizing the complex phenomenology of amorphous materials Fan2015 ; Liu2020 ; Fan2017 ; Fan2014 ; Yan2016 , and more importantly it allows focusing on the energy distribution rather than the dynamic process, making it possible to capture experimental-timescale annealing rates, i.e., $10^{-2}$ $\sim$ $10^{6}$ K/s Shou2019 ; Wan2011 ; Yuan2017 .
To generate the PES, the elementary process is the hopping between neighboring local minima. This process consists of two stages: 1, the activation stage, from the initial state to the connecting saddle state with activation barrier ${{E}_{\text{A}}}$; and 2, the relaxation stage, i.e., from the saddle state to the final state with energy barrier ${{E}_{\text{R}}}$. Then, the time evolution of ${{E}_{\text{IS}}}$ can be expressed in the form Liu2020 ; Fan2017 ; Derlet2013 $\displaystyle\frac{d{{{\bar{E}}}_{\text{IS}}}}{dt}=$ $\displaystyle\ \frac{dT}{dt}\frac{d{{{\bar{E}}}_{\text{IS}}}}{dT}\approx\dot{R}\left[\frac{{{{\bar{E}}}_{\text{IS}}}(T+\Delta T)-{{{\bar{E}}}_{\text{IS}}}(T)}{\Delta T}\right]$ (1) $\displaystyle=$ $\displaystyle f\cdot\exp\left(-\frac{{{{\bar{E}}}_{\text{A}}}}{{{k}_{b}}T}\right)\left({{{\bar{E}}}_{\text{A}}}-{{{\bar{E}}}_{\text{R}}}\right)$ in which $f$ is the jump frequency, which includes the entropy effects Goldstein1976 ; Johari1977 ; Berthier2011 , ${{k}_{b}}$ is the Boltzmann constant, $T$ is the temperature of the current inherent structure, $\dot{R}$ is the annealing rate, and ${{\left[f\cdot\exp\left(-{{{\bar{E}}}_{\text{A}}}/{{k}_{b}}T\right)\right]}^{-1}}$ represents the average residence time in the current inherent structure.
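As a concrete reading of Eq. (1), the hopping rate term and the average residence time can be written as a short Python sketch; the jump frequency, barriers, and temperature used below are illustrative values, not taken from the paper:

```python
import math

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def ise_rate(f, e_a, e_r, T):
    """Right-hand side of Eq. (1): dE_IS/dt (eV/s) for mean barriers e_a, e_r (eV)."""
    return f * math.exp(-e_a / (KB * T)) * (e_a - e_r)

def residence_time(f, e_a, T):
    """Average residence time in the current inherent structure (s),
    i.e., the inverse of the Arrhenius hopping rate f * exp(-E_A / k_B T)."""
    return 1.0 / (f * math.exp(-e_a / (KB * T)))
```

With a relaxation barrier larger than the activation barrier, `ise_rate` is negative, i.e., the inherent structure energy decreases as the system relaxes, and the residence time grows exponentially with the activation barrier.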
${{\bar{E}}_{\text{A}}}$ and ${{\bar{E}}_{\text{R}}}$ are the average activation and relaxation energy barriers, which account for the effect of multiple transition pathways in the PES of amorphous materials, and can be calculated via Fan2017 ; Derlet2013 ${{\bar{E}}_{\text{A}}}=-{{k}_{b}}T\cdot\ln\left[\int{P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right){{e}^{-\frac{{{E}_{\text{A}}}}{{{k}_{b}}T}}}d{{E}_{\text{A}}}}\right]$ (2) ${{\bar{E}}_{\text{R}}}=\int{{{E}_{\text{R}}}\cdot P\left({{E}_{\text{R}}}|{{E}_{\text{IS}}}\right)d{{E}_{\text{R}}}}$ (3) where $P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)$ and $P\left({{E}_{\text{R}}}|{{E}_{\text{IS}}}\right)$ are the activation and relaxation energy spectra, which are calculated via the activation-relaxation technique (ART) Cancs2009 ; Barkema1996 – an atomistic simulation method known to be capable of capturing saddle-point states and providing standard PES samples of amorphous materials, including high-activation-energy-barrier events that are inaccessible to MD simulations. Here, a crystalline silicon system of 1728 atoms, with interactions described by the Tersoff potential Tersoff1989 , is first equilibrated in a high-temperature (4000 K) liquid state, and then cooled to 10 K at nine annealing rates ranging from $10^{10}$ K/s up to $10^{16}$ K/s, plus an instant quench, using MD simulations. The size of the system is 3.26 nm$\times$3.26 nm$\times$3.26 nm, with periodic boundary conditions applied in all three directions. A system with 4096 atoms is used to verify that size effects in our results can be ignored. Figure 1 shows the evolution of the ISE at various temperatures in MD simulations at annealing rates of $10^{10}$ K/s up to $10^{16}$ K/s.
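With a discrete set of barriers sampled by ART, the integrals in Eqs. (2) and (3) reduce to a Boltzmann-weighted log-average and a plain mean, respectively. A minimal Python sketch (the sample barriers in any usage are illustrative, not ART output):

```python
import math

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def avg_activation_barrier(e_a_samples, T):
    """Eq. (2): E_A_bar = -k_B T ln <exp(-E_A / k_B T)> over the sampled spectrum.
    The Boltzmann weight means low barriers dominate the average."""
    n = len(e_a_samples)
    boltz = sum(math.exp(-e / (KB * T)) for e in e_a_samples) / n
    return -KB * T * math.log(boltz)

def avg_relaxation_barrier(e_r_samples):
    """Eq. (3): the relaxation spectrum enters as a simple unweighted mean."""
    return sum(e_r_samples) / len(e_r_samples)
```

Because of the Boltzmann weighting, $\bar{E}_{\text{A}}$ lies close to the lowest sampled activation barriers rather than to their arithmetic mean, which is exactly why the low-energy part of the spectrum controls the annealing dynamics.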
In the high annealing rate process, e.g., $\dot{R}={{10}^{16}}\ \text{K/s}$, the system falls out of the equilibrium state and eventually turns into a glassy state with an almost constant ISE (black dots in Figure 1), since the cooling timescale cannot keep up with the intrinsic timescale of the supercooled liquid. With decreasing annealing rate, e.g., $\dot{R}<{{10}^{16}}\ \text{K/s}$, the systems at temperatures above 1800 K reach the equilibrium state easily and their corresponding ISEs decrease quickly; they then freeze into glassy states with different ISEs as the temperature decreases below 1800 K. Next, the ISs generated in the MD runs are used as input for ART simulations to calculate the ${{E}_{\text{A}}}$ and ${{E}_{\text{R}}}$ spectra. For each of the ten input structures, 5000 ART searches with different random perturbation directions are employed. The magnitude of the perturbation displacement is fixed at 0.5 Å, and the system is relaxed to the saddle point following the Lanczos algorithm Barkema1996 ; Malek2000 when the curvature of the PES is less than -0.01 eV$\cdot$Å$^{-2}$ and the force of the system is less than 0.05 eV$\cdot$Å$^{-1}$. After removing the failed and redundant searches, around 3,000 distinct searches are left for each IS. Figure 1: The inherent energy of the systems during the annealing process with different annealing rates, i.e., $10^{10}$ K/s up to $10^{16}$ K/s, in molecular dynamics simulations. The solid dots are results from the models with 1728 atoms, and the open dots are calculated for the system of 4096 atoms. Our results show that the system with 1728 atoms is large enough for size effects to be ignored. Figure 2: The raw histogram data on the activation energy barrier spectra of samples generated at various annealing rates (a) $10^{10}$ K/s, (b) $10^{14}$ K/s, (c) $10^{15}$ K/s, (d) 5$\times$$10^{15}$ K/s, (e) $10^{16}$ K/s and (f) instant quench obtained by ART.
The spectra can be fitted with two modes: a Rayleigh distribution (M1 mode, blue lines in the figure) with an obvious peak and an exponential decay mode (M2 mode, purple lines in the figure), using Eq. (4). We then focus on the activation energy spectra $P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)$ and relaxation energy spectra $P\left({{E}_{\text{R}}}|{{E}_{\text{IS}}}\right)$. Figure 2 shows the activation energy spectra obtained for these samples. It is not surprising to find that faster annealing rates yield a narrower activation energy range, since the intrinsic timescale is larger than the cooling timescale and it is therefore difficult for the system to reach the high saddle energy states Fan2017 . More interestingly, it is found that $P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)$ can be divided into three regimes: 1, the spectrum dominantly follows the Rayleigh distribution with a clear peak when $\dot{R}<5\times{{10}^{15}}\ \text{K/s}$ (M1 mode in our manuscript, Figure 2a-c); 2, the spectrum can be decomposed into two different modes, an exponentially decaying mode (M2 mode) and a Rayleigh distribution mode (M1 mode), when the annealing rate lies between $5\times{{10}^{15}}\ \text{K/s}$ and the instant quench (Figure 2d-e); 3, the spectrum mainly follows the M2 mode when the system is under the instant annealing process (Figure 2f).
Based on the probability distribution of the spectra, the spectra can be represented phenomenologically using Liu2020 ; Fan2017 : $\displaystyle P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)=$ $\displaystyle{{P}_{\text{{M}1}}}\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)+{{P}_{\text{{M}2}}}\left({{E}_{\text{{A}}}}|{{E}_{\text{{IS}}}}\right)$ (4) $\displaystyle=$ $\displaystyle{{W}_{1}}\cdot{{E}_{\text{{A}}}}\cdot\exp\left[-\frac{{{({{E}_{\text{{A}}}}-\mu({{E}_{\text{{IS}}}}))}^{2}}}{2{{\sigma}^{2}}}\right]+$ $\displaystyle{{W}_{2}}\cdot\frac{1}{{{\xi}_{\text{{A}}}}}\cdot\exp\left(-\frac{{{E}_{\text{{A}}}}}{{{\xi}_{\text{{A}}}}}\right)$ where ${{W}_{1}}$ and ${{W}_{2}}$ are the weight factors setting the amplitudes of the M1 and M2 modes, respectively, $\mu$ is the location parameter of the M1 mode, $\sigma$ is the width of the M1 mode, and ${{\xi}_{\text{A}}}$ is the decay parameter of the M2 mode. It is worth noting that the integral of $P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)$ should be equal to one due to the normalization condition of the spectra. As seen in Figure 2, when the inherent structure energy becomes higher, i.e., at higher annealing rates, the amplitude of M1 becomes smaller and eventually disappears (Figure 2), while the amplitude of M2 shifts to a larger value and becomes the main mode when the system is under instant annealing (Figure 2f). In a similar way, we find the spectra of relaxation energy $P\left({{E}_{\text{R}}}|{{E}_{\text{IS}}}\right)$ can be fitted with the phenomenological formula Kallel2010 ; Liu2020 ; Fan2017 ; Derlet2013 $P\left({{E}_{\text{R}}}|{{E}_{\text{IS}}}\right)=\frac{1}{{{\xi}_{\text{R}}}}\cdot\exp\left(-\frac{{{E}_{\text{R}}}}{{{\xi}_{\text{R}}}}\right)$ (5) in which ${{\xi}_{\text{R}}}$ is the decay parameter.
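As a sketch of how such a decomposition can be performed in practice, the two-mode form of Eq. (4) can be fitted to a spectrum with `scipy.optimize.curve_fit`; the "observed" data and parameter values below are synthetic and purely illustrative, not the paper's fits:

```python
import numpy as np
from scipy.optimize import curve_fit

def activation_spectrum(e_a, w1, mu, sigma, w2, xi_a):
    """Eq. (4): Rayleigh-like M1 mode plus exponentially decaying M2 mode."""
    m1 = w1 * e_a * np.exp(-((e_a - mu) ** 2) / (2.0 * sigma ** 2))
    m2 = (w2 / xi_a) * np.exp(-e_a / xi_a)
    return m1 + m2

# Synthetic spectrum generated from known parameters (illustrative values only).
e_grid = np.linspace(0.05, 5.0, 200)
true = (0.4, 2.0, 0.5, 0.3, 0.4)   # W1, mu, sigma, W2, xi_A
p_obs = activation_spectrum(e_grid, *true)

# Recover the parameters from the spectrum; p0 is a rough initial guess.
popt, _ = curve_fit(activation_spectrum, e_grid, p_obs,
                    p0=(0.3, 1.8, 0.4, 0.2, 0.5))
```

The same call with the single-exponential model of Eq. (5) recovers the relaxation decay parameter $\xi_{\text{R}}$; for real histogram data, one would additionally weight the fit by the bin counts.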
Unlike previous studies, which found the relaxation spectra to be independent of the annealing rate Kallel2010 , our results show that the relaxation energy barrier distribution varies with the annealing rate (Figure 3). As discussed in Refs. Fan2015 ; Fan2014 , a higher annealing rate leads to a faster exponential decay of the relaxation energy barrier distribution, indicating a localized relaxation process found in various systems; with decreasing annealing rate, the relaxation energy barrier distribution becomes broader due to the combination of localized and cascade (i.e., self-organized criticality, which has been observed universally in many different systems both experimentally Krisponeit2014 and numerically Salerno2012 ) relaxations of ${{E}_{\text{R}}}$ in the PES. Figure 3: The raw data on the relaxation energy barrier spectra of samples generated at various annealing rates (a) $10^{10}$ K/s, (b) $10^{14}$ K/s, (c) 5$\times$$10^{10}$ K/s and (d) instant quench obtained by ART. Lines are fitted using Eq. (5). To solve Eq. (1) at any given ISE, we need to prepare functional forms of ${{W}_{1}}$, $\mu$, $\sigma$, ${{W}_{2}}$, ${{\xi}_{\text{A}}}$ and ${{\xi}_{\text{R}}}$. In our study, we have nine samples, which provide nine data points for each of the parameters mentioned above. It is worth noting that a wider range of ISEs, particularly those at slower annealing rates, e.g., $\dot{R}<{{10}^{10}}\ \text{K/s}$, would make the fitting of the parameters more accurate, but obtaining them would require prohibitive computing time or even lie beyond the power of MD simulations. In this study, we first fit these parameters using appropriate functions and then compare the resulting average saddle-to-initial and saddle-to-final minimum energy differences directly with the MD results.
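The fit-then-integrate workflow just described can be sketched as a simple temperature march of Eq. (1): each 1 K step lasts $\Delta T/\dot{R}$ seconds, during which $E_{\text{IS}}$ drifts at the Arrhenius rate. The stand-in barrier functions `e_a_of` and `e_r_of` below are hypothetical placeholders, not the fitted formulas of Figure 4:

```python
import math

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def anneal(e_is0, t_hi, t_lo, rate, f, e_a_of, e_r_of, dT=1.0):
    """March Eq. (1) down in temperature: each dT step lasts dT / rate seconds,
    during which E_IS drifts by f * exp(-E_A / k_B T) * (E_A - E_R) per second."""
    e_is, T = e_is0, t_hi
    while T > t_lo:
        e_a, e_r = e_a_of(e_is), e_r_of(e_is)
        e_is += f * math.exp(-e_a / (KB * T)) * (e_a - e_r) * (dT / rate)
        T -= dT
    return e_is

# Illustrative stand-in barrier models (assumptions, not the paper's fits):
# activation barriers grow as the structure relaxes toward deeper minima,
# which makes the relaxation self-limiting.
e_a_of = lambda e_is: 0.5 - 0.2 * (e_is + 4.4)
e_r_of = lambda e_is: e_a_of(e_is) + 0.05  # E_R > E_A, so E_IS decreases

final = anneal(-4.0, 2000.0, 300.0, rate=1e12, f=1e12,
               e_a_of=e_a_of, e_r_of=e_r_of)
```

Calling `anneal` with a larger `rate` leaves less time per temperature step, so the system freezes at a higher inherent structure energy, reproducing the qualitative trend of Figure 1.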
Based on the data generated from the ART results, we can find ${{W}_{1}}=\ -92.4+97.6\cdot\exp({{E}_{\text{IS}}}/81.42)$ (Figure 4a), $\sigma\ =\ 0.233+(2.234-0.233)/\left[1+\exp(({{E}_{\text{IS}}}+4.367)/0.019)\right]$ (Figure 4b), $\mu=2.224-17.25\times{{10}^{6}}\cdot\exp({{E}_{\text{IS}}}/0.25)$ (Figure 4c), ${{W}_{2}}=10\cdot\exp(-48.61-28.30{{E}_{\text{IS}}}-4.19E_{\text{IS}}^{2})$ (Figure 4d), ${{\xi}_{\text{A}}}=1.308+1.370\times{{10}^{-28}}\cdot\exp(-{{E}_{\text{IS}}}/0.066)+1.242\times{{10}^{-28}}\cdot\exp(-{{E}_{\text{IS}}}/0.065)$ (Figure 4e), ${{\xi}_{R}}=3.836-1.075{{E}_{\text{IS}}}-0.985E_{\text{IS}}^{2}-0.125E_{\text{IS}}^{3}$ (Figure 4f). The average saddle-initial or final energy difference is then calculated for validating our fitting formulas via $\displaystyle{{\bar{E}}_{{saddle}-initial\ or\ final}}=$ (6) $\displaystyle\int{{{E}_{\text{A}\ or\ \text{R}}}\cdot P\left({{E}_{\text{A}\ or\ \text{R}}}|{{E}_{\text{IS}}}\right)d{{E}_{\text{A}\ or\ \text{R}}}}$ Figure 4: The magnitude of (a) ${{W}_{1}}$, (b) $\sigma$, (c) $\mu$, (d) ${{W}_{2}}$, (e) ${{\xi}_{\text{A}}}$ in Eq. (4) and (f) ${{\xi}_{\text{R}}}$ in Eq. (5) v.s. the inherent energy $E_{\text{IS}}$. To make Eq. (1) to be used at any specific annealing rate, we fit the discrete data using empirical equations: (a) ${{W}_{1}}=-92.4+97.6\cdot\exp({{E}_{\text{IS}}}/81.42)$, (b) $\sigma=0.233+(2.234-0.233)/\left[1+\exp(({{E}_{\text{IS}}}+4.367)/0.019)\right]$, (c) $\mu=2.224-17.25\times{{10}^{6}}\cdot\exp({{E}_{\text{IS}}}/0.25)$, (d) ${{W}_{2}}=10\cdot\exp(-48.61-28.30{{E}_{\text{IS}}}-4.19E_{\text{IS}}^{2})$, (e) ${{\xi}_{\text{A}}}=1.308+1.370\times{{10}^{-28}}\cdot\exp(-{{E}_{\text{IS}}}/0.066)+1.242\times{{10}^{-28}}\cdot\exp(-{{E}_{\text{IS}}}/0.065)$ and (f) ${{\xi}_{R}}=3.836-1.075{{E}_{\text{IS}}}-0.985E_{\text{IS}}^{2}-0.125E_{\text{IS}}^{3}$. The lines in Figure 5a are calculated using Eq. 
(6) with the functions of the parameters fitted above (Figure 4), and the dots in Figure 5a are the results obtained from the discrete ART data. The good agreement between the lines and the dots in Figure 5a indicates the rationality of the fitted forms of ${{W}_{1}}$, $\mu$, $\sigma$, ${{W}_{2}}$, ${{\xi}_{\text{A}}}$ and ${{\xi}_{\text{R}}}$. To fully obtain the dynamic evolution of the ISE during the annealing or heating process at an arbitrary annealing rate, or equivalently to solve Eq. (1), a hypothesis on the jump frequency is still necessary. As first proposed by Goldstein Goldstein1976 and Johari Johari1977 , the jump frequency $f$ depends on the entropy of the system, which is related to the temperature and the ISE. Since the ISE is itself a function of temperature, we assume that $f$ depends only on the temperature and can be expressed as ${{f}_{0}}\exp[-1.25\times{{10}^{-4}}(T-300)]/{{(9.6T)}^{2}}$ Fan2017 , where ${{f}_{0}}$ is the zero-temperature jump frequency and is set to $1\times{{10}^{9}}$ THz Fan2013 ; Yan2016 ; Fan2012 . Eq. (1) can then be solved at an arbitrary annealing rate $\dot{R}$, and good agreement between our theoretical predictions and MD simulations is found (Figure 5b). The model developed above is then used to study the heat release of the system $Q$ at experimental annealing rates, i.e., ${{10}^{-2}}\sim{{10}^{6}}\ \text{K/s}$ Shou2019 ; Wan2011 ; Yuan2017 , which can be written as: $\displaystyle Q=$ $\displaystyle\frac{d\bar{E}}{dT}=\frac{d{{{\bar{E}}}_{\text{IS}}}+d{{{\bar{E}}}_{kinetic}}}{dT}$ (7) $\displaystyle\approx$ $\displaystyle\left[\frac{{{{\bar{E}}}_{\text{IS}}}(T+\Delta T)-{{{\bar{E}}}_{\text{IS}}}(T)}{\Delta T}+1.5{{k}_{b}}\right]$ Combining Eqs.
(1) and (7), we obtain $\displaystyle Q=\frac{f}{{\dot{R}}}\cdot\exp\left(-\frac{{{{\bar{E}}}_{\text{A}}}}{{{k}_{b}}T}\right)\left({{{\bar{E}}}_{\text{A}}}-{{{\bar{E}}}_{\text{R}}}\right)+1.5{{k}_{b}}$ (8) Figure 5: (a) The average relaxation energy calculated using ART data and the empirical formulas fitted in Figure 4. (b) The inherent energy as a function of temperature at various annealing rates; dots are from MD simulations and lines are calculated using Eq. (1). Figure 6a shows the heat release of the system during the annealing process of amorphous Si at rates of ${{10}^{-2}}\ \sim{{10}^{6}}\ \text{K/s}$ using Eq. (8). In the high temperature range ($\geq$ 200 K), the system can be equilibrated on a very short timescale and abundant heat is therefore released. However, in the low temperature range ($\leq$ 200 K), the annealing rate is too fast to equilibrate the system, and the system undergoes normal ageing behavior Liu2020 ; Fan2017 . We now compare our predictions with the experimental measurements Karmouch2007 . In the experiments, Si ions are implanted into the amorphous samples, and the released heat of the system is then measured. By comparing the results in Figures 3 and 4 of Ref. Karmouch2007 , we can assume the number of atoms in the ion-implanted region to be about $10^{16}$. By choosing the ISE of the system and the annealing rate carefully, our theoretical predictions agree very well with the experimental measurements (Figure 6b). It is worth noting that a higher density of implanted ions causes a higher ISE of the system, as we assume here. Meanwhile, the annealing rate can be as high as ${{10}^{6}}\ \text{K/s}$ in laser flash experiments Shou2019 and as low as ${{10}^{-2}}\ \text{K/s}$ in furnace lamp annealing experiments Shrestha2018 ; here we use an intermediate annealing rate of 10 K/s in our model. Figure 6: (a) the predicted heat release during the annealing process with different annealing rates using Eq.
(8), where we assume ${{E}_{\text{IS}}}=-4.51\ \text{eV}$. (b) The heat release when the annealing rate is 10 K/s with appropriate inherent energy; experimental measurements are from Ref. Karmouch2007 . Excellent agreement between the theoretical predictions and the experimental data is found. We also note that in Ref. Kallel2010 , the potential energy surface of activation and relaxation energy barriers in amorphous Si is almost independent of the inherent structures. The reason for this may be that the original structures used in their study are the same, with the inherent energy varied by deleting atoms from the system. In our paper, we generate the inherent structures by mimicking the real experimental annealing process; we find that the distribution of activation energy can be decomposed into two modes, as found in other systems such as amorphous Cu$_{56}$Zr$_{44}$ Fan2017 , and that the distribution of relaxation energy varies with the annealing rate, which is also observed in other systems, e.g., Stillinger-Weber silicon Middleton2001 . We would also like to emphasize that the accuracy of our theoretical predictions depends on the accuracy of $P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)$ and $P\left({{E}_{\text{R}}}|{{E}_{\text{IS}}}\right)$ over the entire range of inherent structure energies. Molecular dynamics simulations ensure that the parameters of $P\left({{E}_{\text{A}}}|{{E}_{\text{IS}}}\right)$ and $P\left({{E}_{\text{R}}}|{{E}_{\text{IS}}}\right)$ in our theoretical model are reasonable at high annealing rates, from ${{10}^{10}}\ \text{K/s}$ to ${{10}^{16}}\ \text{K/s}$ and the instant quench, i.e., for inherent structure energies from -4.44 eV/atom to -3.97 eV/atom, while there is no guarantee that their extrapolations remain suitable at experimental annealing rates, i.e., ${{10}^{-1}}\sim{{10}^{4}}\ \text{K/s}$. However, the underlying mechanism behind the theoretical model proposed here is still valid since Eq.
(8) describes the physical process of hopping on the potential energy surface. In conclusion, the distributions of activation and relaxation energy barriers are extracted based on extensive sampling of the potential energy surface of a-Si. We find that the distribution of the activation energy barriers can be divided into three regimes: 1, the spectrum dominantly follows the Rayleigh distribution when $\dot{R}<5\times{{10}^{15}}\ \text{K/s}$; 2, the spectrum can be decomposed into two different modes, an exponentially decaying mode and a Rayleigh distribution mode, when the annealing rate lies between $5\times{{10}^{15}}\ \text{K/s}$ and the instant quench; 3, the spectrum mainly follows the exponentially decaying mode when the system is under the instant annealing process. The distribution of relaxation energy barriers is found to follow the exponentially decaying mode with a decreasing decay parameter over the whole annealing rate range. Then, a long-timescale model, beyond the limit of conventional atomistic simulations, is proposed based on the distributions of the energy barriers to capture the annealing process observed in experiments generating a-Si. The heat release during the annealing process is calculated using our proposed long-timescale model and compared with nanocalorimetry measurements. The excellent agreement between our numerical results and the experimental measurements shows that the present model not only unravels the microscopic evolution of the inherent structure but is also applicable to many thermodynamic processes such as laser-assisted manufacturing. Y.Z. thanks the startup fund from the Hong Kong University of Science and Technology (HKUST). Y. Z. gratefully acknowledges Mr. Zhitong Bai (University of Michigan-Ann Arbor) for valuable discussions. ## References * (1) H. J. Goldsmid, M. M. Kaila, and G. L. Paul, Phys. Status Solidi 76, K31 (1983). * (2) J. M. Larkin and A. J. H. McGaughey, Phys. Rev. B 89, 1 (2014). * (3) S. Kwon, J. Zheng, M. C. Wingert, S. Cui, and R.
Chen, ACS Nano 11, 2470 (2016). * (4) Y. He, D. Donadio, and G. Galli, Appl. Phys. Lett. 98, 3 (2011). * (5) X. Liu, J. L. Feldman, D. G. Cahill, R. S. Crandall, N. Bernstein, D. M. Photiadis, M. J. Mehl, and D. A. Papaconstantopoulos, Phys. Rev. Lett. 102, 1 (2009). * (6) J. Moon, B. Latour, and A. J. Minnich, Phys. Rev. B 97, 1 (2018). * (7) K. Sääskilahti, J. Oksanen, J. Tulkki, S. Volz, J. and A. J. H. Mcgaughey, AIP Adv. 6, 121904 (2016). * (8) Y. Zhou and M. Hu, Nano Lett. 16, 6178 (2016). * (9) Y. Zhou and M. Hu, Phys. Rev. B 92, 195205 (2015). * (10) Y. Zhou, X. Zhang, and M. Hu, Nanoscale 8, 1994 (2016). * (11) B. Ding, X. Li, X. Zhang, H. Wu, Z. Xu, and H. Gao, Nano Energy 18, 89 (2015). * (12) B. Ding, H. Wu, Z. Xu, X. Li, and H. Gao, Nano Energy 38, 486 (2017). * (13) S. Patinet, D. Vandembroucq, and M. L. Falk, Phys. Rev. Lett. 117, 1 (2016). * (14) F. Fan, S. Huang, H. Yang, M. Raju, D. Datta, V. B. Shenoy, A. C. T. Van Duin, S. Zhang, and T. Zhu, Model. Simul. Mater. Sci. Eng. 21, 074002 (2013). * (15) H. Kallel, N. Mousseau, and F. Schiettekatte, Phys. Rev. Lett. 105, 045503 (2010). * (16) Y. Fan, T. Iwashita, and T. Egami, Phys. Rev. Lett. 115, 1 (2015). * (17) Y. Fan, Y. N. Osetskiy, S. Yip, and B. Yildiz, Proc. Natl. Acad. Sci. 110, 17756 (2013). * (18) C. Liu, X. Yan, P. Sharma, and Y. Fan, Comput. Mater. Sci. 172, 109347 (2020). * (19) Y. Fan, T. Iwashita, and T. Egami, Nat. Commun. 8, 1 (2017). * (20) R. Ramachandramoorthy, W. Gao, R. Bernal, and H. Espinosa, Nano Lett. 16, 255 (2016). * (21) W. Shou, B. Ludwig, L. Wang, X. Gong, X. Yu, C. P. Grigoropoulos, and H. Pan, ACS Appl. Mater. Interfaces11, 34416 (2019). * (22) V. L. Deringer, N. Bernstein, A. P. Bartók, M. J. Cliffe, R. N. Kerber, L. E. Marbella, C. P. Grey, S. R. Elliott, and G. Csányi, J. Phys. Chem. Lett. 9, 2879 (2018). * (23) Y. Fan, T. Iwashita, and T. Egami, Nat. Commun. 5, 1 (2014). * (24) S. C. Chowdhury, B. Z. Haque, and J. W. Gillespie, J. Mater. Sci. 51, 10139 (2016). 
* (25) X. Yan and P. Sharma, Nano Lett. 16, 3487 (2016). * (26) Z. Wan, S. Huang, M. A. Green, and G. Conibeer, Nanoscale Res. Lett. 12, 478 (2011). * (27) Z. Yuan, C. Wang, K. Chen, Z. Ni, and Y. Chen, Nanoscale Res. Lett. 12, 1 (2017). * (28) P. M. Derlet and R. Maa$\beta$, Philos. Mag. 93, 4232 (2013). * (29) M. Goldstein, J. Chem. Phys. 64, 4767 (1976). * (30) G. P. Johari, Philos. Mag. 35, 1077 (1977). * (31) L. Berthier and G. Biroli, Rev. Mod. Phys. 83, 587 (2011). * (32) E. Cancs, F. Legoll, M. C. Marinica, K. Minoukadeh, and F. Willaime, J. Chem. Phys. 130, (2009). * (33) G. T. Barkema and N. Mousseau, Phys. Rev. Lett. 77, 4358 (1996). * (34) J. Tersoff, Phys. Rev. B 39, 5566 (1989). * (35) R. Malek and N. Mousseau, Phys. Rev. E 62, 7723 (2000). * (36) J. O. Krisponeit, S. Pitikaris, K. E. Avila, S. Küchemann, A. Krüger, and K. Samwer, Nat. Commun. 5, 3616 (2014). * (37) K. M. Salerno, C. E. Maloney, and M. O. Robbins, Phys. Rev. Lett. 109, 1 (2012). * (38) Y. Fan, Y. N. Osetsky, S. Yip, and B. Yildiz, Phys. Rev. Lett. 109, 1 (2012). * (39) R. Karmouch, Y. Anahory, J. F. Mercure, D. Bouilly, M. Chicoine, G. Bentoumi, R. Leonelli, Y. Q. Wang, and F. Schiettekatte, Phys. Rev. B 75, 075304 (2007). * (40) M. Shrestha, K. Wang, B. Zheng, L. Mokrzycki, and Q. H. Fan, Coatings 8, 97 (2018). * (41) T. F. Middleton and D. J. Wales, Phys. Rev. B 64, 024205 (2001).
††thanks: These authors contributed equally to this work.††thanks: These authors contributed equally to this work. # Probing Multiple Electric Dipole Forbidden Optical Transitions in Highly Charged Nickel Ions Shi-Yong Liang State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Key Laboratory of Atomic Frequency Standards, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China University of Chinese Academy of Sciences, Beijing 100049, China Ting-Xian Zhang State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China University of Chinese Academy of Sciences, Beijing 100049, China Hua Guan<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Key Laboratory of Atomic Frequency Standards, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Qi-Feng Lu Shanghai EBIT Laboratory, Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai 200433, China Jun Xiao <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>Shanghai EBIT Laboratory, Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai 200433, China Shao-Long Chen State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Key Laboratory of Atomic 
Frequency Standards, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Max-Planck-Institut für Kernphysik, Heidelberg 69117, Germany Yao Huang State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Key Laboratory of Atomic Frequency Standards, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Yong-Hui Zhang State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Cheng-Bin Li<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Ya-Ming Zou Shanghai EBIT Laboratory, Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai 200433, China Ji-Guang Li Institute of Applied Physics and Computational Mathematics, Beijing 100088, China Zong-Chao Yan Department of Physics, University of New Brunswick, Fredericton, New Brunswick, Canada E3B 5A3 State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Andrei Derevianko Department of Physics, University of Nevada, Reno, Nevada 89557, USA Ming-Sheng Zhan State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Ting-Yun Shi State Key Laboratory of Magnetic Resonance and Atomic and Molecular 
Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Ke-Lin Gao <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China Key Laboratory of Atomic Frequency Standards, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China ###### Abstract Highly charged ions (HCIs) are promising candidates for the next generation of atomic clocks, owing to their tightly bound electron cloud, which significantly suppresses the common environmental disturbances to the quantum oscillator. Here we propose and pursue an experimental strategy that, while focusing on various HCIs of a single atomic element, keeps the number of candidate clock transitions as large as possible. Following this strategy, we identify four adjacent charge states of nickel HCIs that offer as many as six optical transitions. Experimentally, we demonstrated the essential capability of producing these ions in the low-energy compact Shanghai-Wuhan Electron Beam Ion Trap. We measured the wavelengths of four magnetic-dipole ($M$1) and one electric-quadrupole ($E$2) clock transitions with an accuracy of several ppm with a novel calibration method; two of these lines were observed and characterized for the first time in controlled laboratory settings. Compared to the earlier determinations, our measurements improved wavelength accuracy by an order of magnitude. Such measurements are crucial for constraining the range of laser wavelengths for finding the “needle in a haystack” narrow lines. 
In addition, we calculated the frequencies and quality factors of these six transitions, and evaluated their sensitivity to a hypothetical variation of the electromagnetic fine structure constant $\alpha$, as needed for fundamental physics applications. We argue that all six transitions in nickel HCIs offer intrinsic immunity to all common perturbations of quantum oscillators, and that one of them has a projected fractional frequency uncertainty down to the remarkable level of $10^{-19}$. ## I INTRODUCTION Quantum metrology of atomic time-keeping has seen dramatic improvements over the past decade, with novel applications spanning from chronometric geodesy [1, 2] to fundamental physics, such as dark matter searches [3, 4] and multi-messenger astronomy [5]. Currently, optical atomic clocks using neutral atoms or singly charged ions have demonstrated fractional frequency uncertainties at the level of $10^{-18}$ or even $10^{-19}$ [6, 7, 8, 9]. These uncertainties reflect the ability to protect the quantum oscillator from environmental perturbations, such as stray magnetic and electric fields. As these existing technologies mature, they are reaching the stage where numerous sources of environmental perturbation must be understood in great detail. In some cases, the perturbations cannot be fully eliminated, and significant effort must be devoted to measuring and characterizing them; this leads to non-universal systematic corrections to the clock frequency that are specific to the experimental realization of the clock. Novel classes of atomic clocks must therefore start with quantum oscillators that offer far greater inherent immunity to environmental perturbations than the more mature technologies.
One such system is the nuclear clock, based on a unique property of the 229Th nucleus – the existence of a nuclear transition in a laser-accessible range [10, 11]; unfortunately, despite substantial world-wide efforts [12, 13], this transition is yet to be observed directly. The suppression of environmental perturbations for the nuclear oscillator comes from the nuclear size being $\sim 10^{4}$ times smaller than the size of a neutral atom. Alternative novel systems are highly charged ions (HCIs) [14, 15]. Similar to the nuclear clock, here the oscillator size is also substantially reduced, owing to the electronic cloud shrinking as $1/Z$ with increasing ion charge $Z$. HCIs were proposed as promising candidates for the next generation of atomic clocks [14]. In addition, beyond improved time-keeping, HCIs open intriguing opportunities for probing new physics beyond the standard model of particle physics [16, 17]. Compared to the single, yet to be spectroscopically found nuclear transition, there is a plethora of suitable HCIs (see the review [18] for a sample of proposals). A detailed analysis [19] indicates that with certain HCIs, atomic clocks “can have projected fractional accuracies beyond the $10^{-20}-10^{-21}$ level for all common systematic effects, such as blackbody radiation, Zeeman, ac-Stark, and quadrupolar shifts”. Moreover, compared to the nuclear clock, where direct observation of the clock transition remains elusive, the clock transitions in HCIs can be found by conventional spectroscopy or from atomic-structure computations. Indeed, here we report spectrographic measurements of the wavelengths of five clock transitions with an accuracy of several ppm (see Table 1), setting the stage for more accurate laser spectroscopy. Table 1: Observed and calculated wavelengths, in nm, of magnetic-dipole ($M$1) and electric-quadrupole ($E$2) transitions; lines $a$ through $f$ are candidate clock transitions.
| Line | Ion | Transition | Type | NIST (vacuum) | Air (observed) | Vacuum (this work) | Theory (this work) |
|---|---|---|---|---|---|---|---|
| $a$ | Ni11+ | $3s^{2}3p^{5}\;{}^{2}\!P_{1/2}-{}^{2}\!P_{3/2}$ | $M$1 | 423.2 | 423.104(2) | 423.223(2) | 423.0(6) |
| $b$ | Ni12+ | $3s^{2}3p^{4}\;{}^{3}\!P_{1}-{}^{3}\!P_{2}$ | $M$1 | 511.724 | 511.570(2) | 511.713(2) | 511.8(6) |
| $c$ | Ni14+ | $3s^{2}3p^{2}\;{}^{3}\!P_{1}-{}^{3}\!P_{0}$ | $M$1 | 670.36 | 670.167(2) | 670.352(2) | 671.1(14) |
| $d$ | Ni15+ | $3s^{2}3p\;{}^{2}\!P_{3/2}-{}^{2}\!P_{1/2}$ | $M$1 | 360.22 | 360.105(2) | 360.207(2) | 359.9(9) |
| $e$ | Ni12+ | $3s^{2}3p^{4}\;{}^{3}\!P_{0}-{}^{3}\!P_{2}$ | $E$2 | 498.50(249) | – | – | 496.9(24) |
| $f$ | Ni14+ | $3s^{2}3p^{2}\;{}^{3}\!P_{2}-{}^{3}\!P_{0}$ | $E$2 | 365.277 | – | 365.278(1) | 365.0(3) |
| $g$ | Ni14+ | $3s^{2}3p^{2}\;{}^{3}\!P_{2}-{}^{3}\!P_{1}$ | $M$1 | 802.63 | 802.419(2) | 802.639(2) | 800.3(25) |

Despite the lack of suitable electric-dipole ($E$1) transitions for direct laser cooling, recent successes in sympathetic cooling and quantum logic spectroscopy of HCIs have paved the way for precision spectroscopic measurements with HCIs [20, 21]. It is worth emphasizing that these newly demonstrated technologies can be applied universally to a wide range of HCIs. The multitude of suitable clock HCI candidates is a mixed blessing, as one must eventually commit to building the infrastructure for a specific ion. As with any new endeavor, one would like to mitigate the risk of picking a “wrong” ion. Here we propose and pursue a straddling strategy that allows one to explore several clock transitions using not only the same HCI production system but also ions of the same atomic element. A suitable HCI has to possess a number of properties enabling precision spectroscopy and compatibility with operating an atomic clock.
Generally, one may distinguish three classes of visible or near-visible optical forbidden transitions in HCIs that can be used for developing optical clocks:

1. Magnetic-dipole ($M$1) transitions between two hyperfine-structure levels of the same electronic state [22, 19].
2. Forbidden transitions between level-crossing electronic states, which tend to be sensitive to the variation of the fine structure constant [16, 23, 24, 25].
3. Forbidden transitions between the ground-state fine structure levels [14, 19, 26, 27].

Type 1 transitions occur in few-electron heavy HCIs [19] that are challenging to produce and trap. Type 2 transitions involve a complex energy structure that can impede initialization and read-out of the clock states. Here we focus on type 3 transitions, which offer simplicity in both producing the ions and operating the clock. More specifically, we choose HCIs of nickel in various charge states [19, 26, 27]: Ni11+, Ni12+, Ni14+, and Ni15+. The clock transitions are shown in Fig. 1. All the traditional clock perturbations are strongly suppressed for these ions due to charge-scaling arguments [28, 14, 19]. As pointed out in Ref. [14], the major issue with HCI clocks is the so-called quadrupolar shift of the clock transition, which arises when the quadrupole ($Q$) moment of the clock state couples to the ever-present $E$-field gradients in ion traps. While the $Q$-moment of an electronic cloud does scale as $1/Z^{2}$, this reduction is not sufficient to suppress the quadrupolar shift below the desired level of accuracy. Thus, it is beneficial to select clock states with either vanishing or strongly suppressed $Q$-moments. There are four $M$1 and two $E$2 optical transitions in Ni11+, Ni12+, Ni14+, and Ni15+ that offer the desired flexibility. These ions have a varying number of electrons in the $3p$ shell, see Fig. 1. The clock transitions are between the fine structure components of the ground electronic state.
There are four stable isotopes, 58Ni, 60Ni, 62Ni, and 64Ni, without nuclear spin; these can be used to search for new physics with isotope-shift measurements [29, 30, 31] and for initial spectroscopic measurements. These spin-0 isotopes, however, are susceptible to the quadrupolar shifts of the clock transitions. The quadrupolar shifts can be either strongly suppressed or completely removed by using the 61Ni isotope (nuclear spin $I=3/2$, natural abundance 1.14%) and employing the following clock transitions between hyperfine states (see Fig. 1):

* ${}^{2}\!P_{3/2}\,F=0$ and ${}^{2}\!P_{1/2}\,F=1$ or $F=2$ for Ni11+ and Ni15+,
* ${}^{3}\!P_{1}\,F=1/2$ and ${}^{3}\!P_{2}\,F=1/2$ for Ni12+,
* ${}^{3}\!P_{1}\,F=1/2$ and ${}^{3}\!P_{0}\,F=3/2$ for Ni14+,
* ${}^{3}\!P_{0}\,F=3/2$ and ${}^{3}\!P_{2}\,F=1/2$ for Ni12+ and Ni14+.

This selection is based on the following reasoning [19]: the $Q$-moments (rank-2 tensors) of the $F=0$ and $F=1/2$ states are zero due to selection rules. For the ${}^{2}\!P_{1/2}\,F=1,2$ and ${}^{3}\!P_{0}\,F=3/2$ states, the electronic $Q$-moments vanish due to selection rules for the electronic angular momentum $J$. Thus, their $Q$-moments are determined by the nuclear $Q$-moments or by hyperfine mixing [11] and, as such, are strongly suppressed. As an indication of the attainable accuracy, Refs. [26, 27] evaluated the relevant properties of the clock transitions in 61Ni15+ and 58Ni12+ and estimated the common systematic uncertainties to be below $10^{-19}$, in line with the more general estimates of Ref. [19]. The second-order Doppler shift induced by the excess micromotion of the trapped ion is expected to be suppressed to below $10^{-19}$ by compensating the stray electric field to a level below 0.1 V/m [32, 33].
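The angular-momentum argument above reduces to the triangle rule for a rank-2 tensor: a diagonal quadrupole matrix element can be nonzero only if $2F\geq 2$. A minimal sketch (the function name is ours, introduced for illustration):

```python
from fractions import Fraction

def quadrupole_allowed(F):
    """A diagonal matrix element of a rank-2 (electric-quadrupole) tensor
    operator in a state of angular momentum F is nonzero only if the
    triangle condition |F - F| <= 2 <= F + F holds, i.e. 2F >= 2.
    States with F = 0 or F = 1/2 therefore carry no quadrupole moment."""
    return 2 * F >= 2

# Hyperfine states singled out above for suppressing the quadrupolar shift:
for F in (Fraction(0), Fraction(1, 2)):
    assert not quadrupole_allowed(F)   # Q-moment vanishes identically

# Generic hyperfine states are not protected by this rule:
for F in (Fraction(1), Fraction(3, 2), Fraction(2)):
    assert quadrupole_allowed(F)
```

The same rule applied to the electronic angular momentum $J$ explains why the ${}^{3}\!P_{0}$ and ${}^{2}\!P_{1/2}$ levels carry no electronic $Q$-moment.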
In a cryogenic trap, the heating rate of the trapped ions caused by collisions with the background gas and by anomalous motional heating is reduced, and hence the second-order Doppler shift induced by the secular motion is also expected to be sufficiently small [18]. Based on these arguments, we expect the attainable fractional systematic uncertainty of all six clock transitions in Ni HCIs to be at the $10^{-19}$ level. Figure 1: Partial energy-level diagrams for highly charged nickel ions. Clock transitions are explicitly drawn. Magnetic-dipole ($M$1) transitions are shown in magenta and electric-quadrupole ($E$2) transitions in green. The labeling of transitions is the same as in Table 1. As the first essential step towards realizing the Ni HCI clocks, we produced the target ions in our newly built low-energy compact Shanghai-Wuhan Electron Beam Ion Trap (SW-EBIT) [34]. The wavelengths of four $M$1 and one $E$2 clock transitions between the ground-state fine structure levels in these ions were measured to an accuracy of several ppm using a spectrograph. In particular, the three $M$1 lines $b$, $c$, and $g$ (listed in Table 1) in Ni12+ and Ni14+ are observed and characterized for the first time in the laboratory. We also carried out calculations for these ions using an ab initio relativistic method of atomic structure, the multi-configuration Dirac-Hartree-Fock (MCDHF) method [35, 36]. We evaluated the relevant spectroscopic properties, such as transition wavelengths and natural linewidths. We also estimated the sensitivity to a hypothetical variation of the fine structure constant $\alpha$ and found that all the considered clock transitions in Ni HCIs are more susceptible to this variation than most of the commonly employed singly charged ions or neutral atoms. Thus, Ni HCIs can be used to place stringent constraints on the spatial or temporal variation of $\alpha$.
## II EXPERIMENTAL METHOD AND RESULTS ### II.1 Production of Ni HCIs To produce Ni HCIs, we injected a Ni(C5H5)2 (nickelocene) molecular beam into the trap center. The charge-state distribution of the Ni HCIs was then measured using an electron-beam current of 6 mA and an electron-beam energy of 500 eV, which is higher than the ionization energies of 319.5 eV, 351.6 eV, 429.3 eV, and 462.8 eV needed to produce Ni11+, Ni12+, Ni14+, and Ni15+, respectively. The extraction period was 0.3 s and the magnetic flux density was 0.16 T. As shown in Fig. 2, the target ions Ni11+, Ni12+, Ni14+, and Ni15+ were produced, and the ions of two distinct isotopes, 60Ni and 58Ni, were resolved. The techniques for measuring the charge-state distribution are described in Ref. [34]. Figure 2: Charge-state distribution of the Ni HCIs, obtained by averaging 3 measurements. ### II.2 Spectral measurements The spectra of the trapped Ni HCIs were recorded with a Czerny-Turner spectrograph (Andor Kymera 328i) equipped with an Electron Multiplying Charge-Coupled Device (EMCCD, Andor Newton 970, 1600×200 pixels, 16 $\mu$m pixel size) and a 1200 l/mm grating blazed at 500 nm. To maximize the number of Ni HCIs in a specific charge state, different electron-beam energies were used: 370 eV, 400 eV, 500 eV, and 540 eV for Ni11+, Ni12+, Ni14+, and Ni15+, respectively. As illustrated in Fig. 3, the fluorescence emitted by the Ni HCIs was focused by a single N-BK7 bi-convex lens (focal length $f=10$ cm at 633 nm) onto the spectrograph entrance slit. The distance between the center of the trap (DT2, drift tube 2 in SW-EBIT [34]) and the front principal plane of the lens was fixed at 197 mm, about twice the focal length. Before setting up the spectrograph, a Charge-Coupled Device (CCD) was placed on the image plane to image the two inner edges of DT2 (1 mm slit width), which were illuminated by the hot cathode.
To ensure that the lens was aligned with the optical axis, we adjusted the angle and position of the lens until the edge image became mirror-symmetric. Because of the dispersion of the lens, and to ensure that the spectrograph slit always lay precisely on the image plane, the distance $L$ between the slit of the spectrograph and the back principal plane of the lens was calculated and adjusted for every central wavelength of the measured spectra. The grating was set to zero order to image the inner edges of DT2 through the spectrograph, with its maximum slit width and the minimum iris aperture behind the slit. Similarly, the angle and position of the spectrograph were adjusted until the image of the edges became mirror-symmetric, ensuring the alignment of the spectrograph with the optical axis. A one-inch aperture was placed before the lens to block stray light. For calibration, a conjugate optical system was installed on the opposite side of the spectrograph. A diffuser with a 0.5 mm slit attached was placed on the object plane. A low-pressure gas-discharge lamp filled with Kr illuminated the slit, and the slit was imaged onto the trap center to overlap with the trapped ion cloud. During the spectral exposure time of 10 to 60 minutes, the Kr lamp serving as the calibration light source was flashed with a period of 1 to 3 minutes. The slit of the spectrograph was set to 30 $\mu$m, and the iris aperture in the spectrograph was set to 40 steps to obtain an F/7.6 aperture. The focal length of the spectrograph was tuned to minimize the linewidth in each spectral range. All the spectra were binned along the non-dispersive direction after removing cosmic-ray noise, as shown in Fig. 4 (a). The dispersion function was obtained by fitting the NIST-tabulated Ritz in-air wavelengths of the calibration lines as a cubic polynomial in the column numbers of their line centroids. The residuals of the calibration lines and the 1-$\sigma$ fitting confidence band are shown in Fig. 4 (b).
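The calibration fit just described can be sketched as follows; the pixel columns, dispersion coefficients, and noise level here are synthetic stand-ins for illustration, not the actual Kr line list:

```python
import numpy as np

# Sketch of the dispersion calibration: fit a cubic dispersion function
# lambda(column) to known calibration wavelengths, then evaluate it at the
# centroid column of the measured line. All numbers below are synthetic.
rng = np.random.default_rng(0)
true_coeffs = [1e-9, -2e-6, 0.05, 410.0]            # cubic coefficients, nm per pixel^k
cal_columns = np.linspace(100, 1500, 12)            # centroid columns of calibration lines
cal_wavelengths = (np.polyval(true_coeffs, cal_columns)
                   + rng.normal(0, 2e-4, 12))       # ~0.2 pm centroid noise

fit = np.polyfit(cal_columns, cal_wavelengths, deg=3)
residuals = cal_wavelengths - np.polyval(fit, cal_columns)
assert np.all(np.abs(residuals) < 1e-3)             # residuals at the sub-pm level

line_column = 837.4                                 # centroid of a measured line (synthetic)
line_wavelength = np.polyval(fit, line_column)      # calibrated wavelength, in nm
```

In the actual analysis the residuals of the calibration lines themselves set the systematic dispersion-function uncertainty (Fig. 4 (b)).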
To determine the line centroids, the measured lines and calibration lines were fitted with a Gaussian or a multi-Gaussian profile, as shown in Fig. 4 (c). Figure 3: Scheme of observation and calibration of the measured lines. The Ni HCIs are trapped at the center of DT2 in SW-EBIT. Figure 4: (a) A spectrum of line $a$ from Ni11+ and its calibration lines from the Kr atom, whose approximate wavelengths are labeled in the figure in nm. (b) Residuals of the cubic polynomial fits of the calibration lines. The gray band is the 1-$\sigma$ confidence band. The uncertainties in the calibration lines are dominated by the line-centroid uncertainties of the Gaussian fits. (c) Spectrum of line $a$ and its Gaussian fit. ### II.3 Observed wavelengths Previously, the five $M$1 lines in Table 1 were observed in the solar corona emission [37] with a wavelength uncertainty of tens of picometers. Lines $a$ and $d$ have also been measured in tokamaks [38, 39], but experimental observation of the other three lines $b$, $c$, and $g$ has not been reported in the literature. In this work, we observed and identified all five $M$1 lines emitted from the nickel plasma in the SW-EBIT in a controlled laboratory setting. The measured wavelengths agree with the Ritz wavelengths in the NIST database [40], as shown in Table 1, where wavelengths were converted between air and vacuum using an empirical equation [41]. However, for line $d$, our result of 360.105(2) nm deviates substantially from the value of 360.123(2) nm observed in a tokamak plasma by Hinnov _et al._ [39]. To test our result, two lines from Ar+ were measured without any change to the optical system relative to the measurement of line $d$; the measured in-air wavelengths were 357.660(2) nm and 358.843(2) nm, in good agreement with the Ritz wavelengths in the NIST database, 357.661538 nm and 358.844021 nm. For the $E$2 lines $e$ and $f$, the transition rates are too small to be observable with our technique.
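Two wavelength manipulations underlie Table 1: the air-vacuum conversion (the empirical equation of Ref. [41] is not reproduced here; the standard Edlén 1966 dispersion formula for air is substituted as an assumption) and the Ritz combination principle applied to line $f$. A sketch:

```python
def refractive_index_air(lam_vac_nm):
    """Edlen (1966) refractive index of standard air; wavelength in nm (vacuum).
    Substituted here for the empirical equation of Ref. [41] (an assumption)."""
    s2 = (1e3 / lam_vac_nm) ** 2  # squared vacuum wavenumber, in um^-2
    return 1 + 1e-8 * (8342.13 + 2406030 / (130 - s2) + 15997 / (38.9 - s2))

def air_to_vacuum(lam_air_nm):
    lam = lam_air_nm
    for _ in range(5):            # fixed-point iteration converges quickly
        lam = lam_air_nm * refractive_index_air(lam)
    return lam

# Line a: observed 423.104(2) nm in air -> 423.223(2) nm in vacuum (Table 1).
print(round(air_to_vacuum(423.104), 3))   # -> 423.223

# Line f (E2, Ni14+) via the Ritz combination principle: transition energies
# are additive, E(3P2-3P0) = E(3P2-3P1) + E(3P1-3P0), so 1/lam_f = 1/lam_c + 1/lam_g.
lam_c = air_to_vacuum(670.167)            # line c, Ni14+ (observed in air)
lam_g = air_to_vacuum(802.419)            # line g, Ni14+ (observed in air)
lam_f = 1 / (1 / lam_c + 1 / lam_g)
print(f"{lam_f:.2f}")                     # -> 365.28 (Table 1: 365.278(1))
```

The fixed-point loop is needed because the refractive index is a function of the vacuum wavelength being solved for.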
However, we deduced the wavelength of line $f$ in Table 1 from those of lines $c$ and $g$ via the Rydberg-Ritz combination principle. ### II.4 Measurement uncertainties Line centroid uncertainty. The line centroids of the measured lines and their calibration lines were determined from the centers of the fitted Gaussian profiles. Since the statistical uncertainty of the line centroid was mainly caused by the low signal-to-noise ratio, we evaluated this uncertainty by performing at least 20 measurements of each line of interest, as shown in Fig. 5. For all five measured lines, this uncertainty was smaller than 0.4 pm. The systematic uncertainty of the line centroid is mainly caused by the non-ideal Gaussianity of the line, due to optical aberrations and the Zeeman components. In this work, since the measured lines and their calibration lines shared a similar profile, the optical aberration effect was largely offset. In the trap center, the magnetic flux density was $\sim$0.16 T, resulting in a $\sim$2 pm splitting between the Zeeman components of the clock transitions, which was small (unresolved) compared to the $\sim$90 pm linewidth. Furthermore, the Zeeman effect would not alter the line centroid because the Zeeman components were symmetrically distributed; in addition, the Zeeman effect was negligible for the Kr lamp due to its low magnetic field of 0.4 mT. Figure 5: The calibrated in-air wavelengths of line $a$ derived from a series of 26 measurements. The wavelength uncertainty of each single spectrum was calculated as the quadrature sum of the line-centroid uncertainty and the 1-$\sigma$ confidence interval of the fitted dispersion function. The weighted average wavelength is represented by the solid purple line and its uncertainty by the lilac band. Dispersion function uncertainty.
The statistical uncertainty of the dispersion function was caused by the centroid statistical uncertainties of the calibration lines, and was reduced by averaging over many line centroids. The systematic dispersion-function uncertainty of a measured line was estimated by averaging the absolute values of the fitted residuals of its calibration lines over all the measured spectra. Calibration systematic uncertainty. Since the image of the calibration light source might not overlap exactly with the trapped ion cloud, spatial deviation and misalignment could cause a wavelength offset between the measured lines and their calibration lines. In this work, a spatial deviation of less than 2 mm would result in a wavelength uncertainty of less than 1 pm. Misalignment could cause a wavelength uncertainty of less than 1 pm, as estimated from five measurements of the Ar9+ 553 nm line, with the optical system reset each time. Thus, the overall systematic uncertainty caused by our calibration scheme was expected to be less than 2 pm. Other uncertainties. In this work, the calibration light source and the fluorescence of the trapped ions were exposed to the spectrograph almost simultaneously, so that the temperature drift and the mechanical drift canceled out. The shifts due to the Stark effect and collisions can also be neglected at this level of accuracy. The wavelengths of the selected calibration lines are reliable because their uncertainties in the NIST database are all less than 0.3 pm. Table 2 gives the uncertainty budget for lines $a$-$d$ and $g$ in air. The total uncertainty was calculated as the quadrature sum of all the uncertainties and is dominated by the calibration systematic uncertainty.
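The quadrature combination used for the totals, together with the inverse-variance weighting behind the Fig. 5 average, can be sketched as follows (function names are ours):

```python
import math

def quadrature(*components):
    """Combine independent 1-sigma uncertainties in quadrature."""
    return math.sqrt(sum(c * c for c in components))

def weighted_mean(values, sigmas):
    """Inverse-variance weighted average and its 1-sigma uncertainty, as used
    for the series of line-a measurements shown in Fig. 5."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# Worst-case entries of Table 2 (line g, in pm): the total is dominated by
# the 2 pm calibration systematic.
total = quadrature(0.3, 0.4, 2.0)
print(round(total, 2))   # -> 2.06, quoted as 2 pm in Table 2
```

Because the 2 pm calibration systematic dominates, the per-line centroid and dispersion terms barely change the totals, which is why every "Total" entry in Table 2 rounds to 2 pm.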
To test the reliability of this uncertainty estimate, the in-air wavelengths of three lines from Ar HCIs were measured: Ar9+ 553.327(2) nm, Ar10+ 691.689(2) nm, and Ar13+ 441.255(2) nm, consistent with the previously measured values of 553.3265(2) nm, 691.6878(12) nm, and 441.255919(6) nm, respectively [42, 43]. The total wavelength uncertainty of this observation and calibration scheme was approximately 2 pm. This is comparable to the uncertainty of a scheme in which the measured lines are calibrated against lines from the buffer gas observed with a spectrograph of similar resolution [44, 45], but larger than that of a scheme in which the calibration source is overlapped with the real image of the ion cloud observed with a higher-resolution spectrograph [42]. Compared to these two schemes, ours is more convenient and flexible. The uncertainty may be reduced by using a higher-resolution spectrograph, which is less sensitive to the calibration optical system. Table 2: Uncertainty budget of the measured lines, in pm.

| Source | $a$ | $b$ | $c$ | $d$ | $g$ |
|---|---|---|---|---|---|
| Line centroid | 0.2 | 0.2 | 0.1 | 0.2 | 0.3 |
| Dispersion function | 0.1 | 0.3 | 0.3 | 0.2 | 0.4 |
| Calibration systematic | 2 | 2 | 2 | 2 | 2 |
| Total | 2 | 2 | 2 | 2 | 2 |

## III THEORETICAL METHOD AND RESULTS ### III.1 MCDHF calculations In the MCDHF method, an atomic wave function $\Psi$ is constructed as a linear combination of configuration state functions (CSFs) $\Phi$ of the same parity $P$, total angular momentum $J$, and projection $M_{J}$, i.e., $\Psi(\Gamma PJM_{J})=\sum_{i=1}^{N_{\mathrm{CSF}}}{c_{i}\Phi(\gamma_{i}PJM_{J})}.$ (1) Here $c_{i}$ is a mixing coefficient and $\gamma_{i}$ stands for the remaining quantum numbers of the CSFs. Each CSF is itself a linear combination of products of one-electron Dirac orbitals. Both the mixing coefficients and the orbitals are optimized in the self-consistent field calculation.
After a set of orbitals is obtained, relativistic configuration interaction (RCI) calculations are used to capture additional electron correlations. In addition to the Coulomb interactions, our RCI calculations include the Breit interaction in the low-frequency approximation and the quantum electrodynamic (QED) corrections. In order to obtain high-quality atomic wave functions, we designed an elaborate computational model as follows. In the first step, the self-consistent field (SCF) calculations were performed successively to generate virtual orbitals. The virtual orbitals were employed to form CSFs that account for certain electron correlations. More specifically, CSFs were produced through single (S)- and double (D)-electron excitations from the occupied Dirac-Hartree-Fock orbitals to the virtual orbitals, but double excitations from the atomic core $1s^{2}2s^{2}2p^{6}$ orbitals were excluded at this stage. The virtual orbitals were augmented layer by layer up to $n_{\rm{max}}=12$ and $l_{\rm{max}}=6$, where $n_{\rm{max}}$ and $l_{\rm{max}}$ denote, respectively, the maximum principal quantum number and the maximum angular quantum number of the virtual orbitals. In the second step, single-reference configuration RCI calculations were performed with the configuration space constructed from SD excitations from all occupied orbitals to the set of virtual orbitals. In other words, the correlations between electrons in the atomic core, which were neglected in the first step, were now captured. In the last step, we considered part of the contributions from the triple- and quadruple-excitation CSFs. In order to limit the number of CSFs, the MR-SD approach was adopted to produce the corresponding CSFs [46, 47].
The multi-reference (MR) configuration sets were created as $\{3s3p^{5}3d$, $3s^{2}3p^{3}3d^{2}\}$ for Ni11+, $\{3s3p^{4}3d$, $3s^{2}3p^{2}3d^{2}$, $3p^{6}\}$ for Ni12+, $\{3s3p^{2}3d$, $3s^{2}3d^{2}$, $3p^{4}\}$ for Ni14+, and $\{3p^{3}$, $3s3p3d$, $3p3d^{2}\}$ for Ni15+. Additionally, the Breit interaction and the QED corrections were included in the RCI computation. Once the atomic wave functions are obtained, we can calculate the physical quantities under investigation, namely, the reduced matrix elements of the corresponding rank-$k$ irreducible tensor operators between two atomic states, $\langle\Psi(\Gamma PJ)\|O^{(k)}\|\Psi(\Gamma^{{}^{\prime}}P^{{}^{\prime}}J^{{}^{\prime}})\rangle\,.$ The magnetic-dipole and electric-quadrupole transition operators are rank-1 and rank-2 operators, respectively. In practice, we performed the calculations using the GRASP2018 package [48]. ### III.2 Calculated wavelengths As shown in Table 1, the calculated wavelengths of the $M$1 transitions of lines $a$ through $d$ and line $g$ agree with our measured values. The wavelengths of the two $E$2 transitions, lines $e$ and $f$ in Ni12+ and Ni14+, were also calculated. Neither of these two lines has been directly observed to date. Our calculated wavelengths for these two transitions are in agreement with the NIST recommended values. The calculated wavelength of line $f$ also agrees with our indirect measurement, see Table 1. ### III.3 Properties of the clock transitions The design of an atomic clock relies on knowledge of the atomic parameters of the quantum oscillator. We have therefore computed wavelengths, spontaneous emission rates $A$, lifetimes $\tau$, linewidths $\Gamma$ ($2\pi\Gamma=1/\tau$), and other parameters for all six candidate clock transitions; the results are listed in Table 3. As one of the key parameters for clock stability, the quality factor ($Q$-factor) is also given in this table.
The $Q$-factor is defined as the ratio of the clock frequency $\nu_{\mathrm{clk}}$ to the linewidth $\Gamma$ of the clock transition, i.e., $Q=\nu_{\mathrm{clk}}/\Gamma$. Among the four $M$1 clock transitions, the ${}^{3}\!P_{1}-{}^{3}\!P_{0}$ transition in Ni14+ is the narrowest, with a linewidth of less than $10\,\mathrm{Hz}$, while the linewidths of the other three $M$1 transitions are about $30\,\mathrm{Hz}$. The corresponding $Q$-factors of these four $M$1 transitions are $\sim 10^{13}$. There are two decay channels from ${}^{3}\!P_{0}$ in Ni12+ and from ${}^{3}\!P_{2}$ in Ni14+ to the lower states; to determine the linewidths of the $E$2 clock transitions, both decay channels must be taken into account. For ${}^{3}\!P_{0}$ in Ni12+, the decay rate is 0.037 s$^{-1}$ for the $E$2 (${}^{3}\!P_{0}-{}^{3}\!P_{2}$) channel and 0.011 s$^{-1}$ for the $M$1 (${}^{3}\!P_{0}-{}^{3}\!P_{1}$) channel. For ${}^{3}\!P_{2}$ in Ni14+, the $E$2 (${}^{3}\!P_{2}-{}^{3}\!P_{0}$) and $M$1 (${}^{3}\!P_{2}-{}^{3}\!P_{1}$) transition rates are 0.03 s$^{-1}$ and 22.5 s$^{-1}$, respectively. Therefore, the linewidths of the $E$2 clock transitions are $3.6~\mathrm{Hz}$ for ${}^{3}\!P_{2}-{}^{3}\!P_{0}$ in Ni14+ and $8~\mathrm{mHz}$ for ${}^{3}\!P_{0}-{}^{3}\!P_{2}$ in Ni12+, respectively smaller than the linewidths of the $M$1 transition lines $c$ and $b$ marked in Fig. 1. The $E$2 transition in Ni12+ is particularly attractive for a stable clock [27] because of its high $Q$-factor of $7.5\times 10^{16}$, meaning that the statistical uncertainty limited by the quantum projection noise [49, 50, 18] of this transition can reach the level of $10^{-19}$ by averaging over a few days. Table 3: Theoretical spectral properties of the clock transitions. Here $A$ is the Einstein coefficient for spontaneous decay, $\tau$ is the lifetime of the upper clock state, $\Gamma$ is the natural linewidth, and $Q$ is the transition quality factor.
Also, $q$ and $K$ are, respectively, the sensitivity coefficient and the enhancement factor for the variation of the fine structure constant. Numbers in square brackets stand for powers of 10, i.e., $x[y]\equiv x\times 10^{y}$.

| Transition | Type | $A$ (s$^{-1}$) | $\tau$ (ms) | $\Gamma$ (Hz) | $Q$ | $q$ (cm$^{-1}$) | $K$ | Ref. |
|---|---|---|---|---|---|---|---|---|
| **Ni11+ $3s^{2}3p^{5}$** | | | | | | | | |
| ${}^{2}\!P_{1/2}-{}^{2}\!P_{3/2}$ | $M$1 | 238(2) | 4.2(1) | 38 | 1.9[13] | 24820 | 2.1 | |
| | | 236.31(3) | 4.23(2) | | | 24464 | | [51] |
| **Ni12+ $3s^{2}3p^{4}$** | | | | | | | | |
| ${}^{3}\!P_{1}-{}^{3}\!P_{2}$ | $M$1 | 157(1) | 6.3(1) | 25 | 2.3[13] | 22473 | 2.3 | |
| | | 154 | 6.5 | | | | | [27] |
| ${}^{3}\!P_{1}-{}^{3}\!P_{2}$ | $E$2 | 0.02 | | | | | | |
| ${}^{3}\!P_{0}-{}^{3}\!P_{2}$ | $E$2 | 0.037(4) | 21(3)[3] | 0.008 | 7.5[16] | 14982 | 1.5 | |
| | | 0.03 | 19[3] | | 1.1[16] | | | [27] |
| ${}^{3}\!P_{0}-{}^{3}\!P_{1}$ | $M$1 | 0.011(2) | | | | | | |
| **Ni14+ $3s^{2}3p^{2}$** | | | | | | | | |
| ${}^{3}\!P_{1}-{}^{3}\!P_{0}$ | $M$1 | 56.1(5) | 17.8(2) | 9 | 5.0[13] | 20340 | 2.7 | |
| ${}^{3}\!P_{2}-{}^{3}\!P_{0}$ | $E$2 | 0.030(1) | 44(1) | 3.6 | 2.3[14] | 28197 | 2.1 | |
| ${}^{3}\!P_{2}-{}^{3}\!P_{1}$ | $M$1 | 22.5(4) | | | | | | |
| ${}^{3}\!P_{2}-{}^{3}\!P_{1}$ | $E$2 | 0.001 | | | | | | |
| **Ni15+ $3s^{2}3p$** | | | | | | | | |
| ${}^{2}\!P_{3/2}-{}^{2}\!P_{1/2}$ | $M$1 | 193(2) | 5.2(1) | 31 | 2.7[13] | 29204 | 2.1 | |
| | | 190.99 | 5 | 30.38 | 2.73[13] | 89391 | | [26] |

From the perspective of searching for new physics, we anticipate that by monitoring the Ni HCI clock transition frequencies, stringent constraints could be placed on a possible time variation of the fine structure constant $\alpha$. Following Refs. [52, 17], one can introduce the “sensitivity coefficient” $q$, defined by $\omega(x)=\omega_{0}+qx$, where $x\equiv(\alpha/\alpha_{0})^{2}-1$ and $\omega_{0}$ is the clock transition frequency at the nominal value of the fine structure constant $\alpha_{0}$.
The sensitivity coefficient $q$ characterizes the linear response of the clock frequency $\omega(x)$ to the variation of $\alpha$, and can be calculated numerically as $q\approx[\omega(+x)-\omega(-x)]/{(2x)}$. Another commonly used quantity is the dimensionless enhancement factor [17] $K=\partial\ln\omega/\partial\ln\alpha\approx 2q/\omega_{0}$. As shown in Table 3, our computed $K$ values for the relevant transitions in nickel HCIs are about $2$, higher than for most current optical clocks. For example [53], Al+ has $K=0.008$. Of the $\sim 10$ species currently used in the optical clock community, only the heavy Yb+ and Hg+ ions have $|K|>2$ [53]. Therefore, we expect that, even at their initial projected accuracy of $10^{-19}$, quantum clocks based on the relatively light Ni HCIs will have greater potential for exploring new physics than most current atomic clocks. Recently, an improved constraint of $\dot{\alpha}/\alpha=1.0(1.1)\times 10^{-18}/$year was reported based on a comparison of the ${}^{2}S_{1/2}(F=0)-{{}^{2}D}_{3/2}(F=2)$ ($E$2, $K=1.00$) and ${}^{2}S_{1/2}(F=0)-{{}^{2}F}_{7/2}(F=3)$ ($E$3, $K=-5.95$) transitions of the 171Yb+ clock [54]. The constraint on the temporal variation of $\alpha$ is expected to be improved further by comparing two clocks based on the $E$2 transition of Ni12+ and the $E$3 transition of Yb+, because the $E$2 transition in Ni12+ has a larger $K$ value and smaller projected systematic and statistical uncertainties than the $E$2 transition in Yb+. Nandy and Sahoo [51] determined the sensitivity coefficient for the $\alpha$-variation of the ${{}^{2}\!P_{1/2}}-{{}^{2}\!P_{3/2}}$ transition in the Ni11+ ion. In their work, the transition rate and the lifetime of the ${{}^{2}\!P_{1/2}}$ state were calculated using the relativistic coupled-cluster (RCC) method.
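These figures of merit can be cross-checked numerically; a sketch using the Table 3 values for the Ni12+ $E$2 clock line, with the 498.50 nm NIST wavelength supplying the clock frequency:

```python
import math

C = 2.99792458e8                   # speed of light, m/s

# Both decay channels of the upper 3P0 state of Ni12+ contribute to its
# linewidth (rates from this work, Table 3):
A_E2, A_M1 = 0.037, 0.011          # s^-1
tau = 1 / (A_E2 + A_M1)            # lifetime, ~21 s
gamma = 1 / (2 * math.pi * tau)    # natural linewidth in Hz (2*pi*Gamma = 1/tau)
print(round(gamma * 1e3, 1))       # -> 7.6 mHz, quoted as 8 mHz in the text

nu = C / 498.50e-9                 # clock frequency from the 498.50 nm wavelength
Q = nu / gamma                     # quality factor, ~7.9e16 (Table 3 lists 7.5[16],
                                   # obtained with the rounded 8 mHz linewidth)

# Enhancement factor K = 2q/omega_0, with omega_0 expressed in cm^-1:
q = 14982.0                        # sensitivity coefficient, cm^-1 (Table 3)
omega0 = 1e7 / 498.50              # transition energy in cm^-1
K = 2 * q / omega0
print(round(K, 1))                 # -> 1.5
```

The same arithmetic applied to line $a$ ($q=24820$ cm$^{-1}$, $\lambda=423.223$ nm) reproduces the tabulated $K=2.1$.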
Yu and Sahoo [26, 27] calculated some atomic parameters for the ${{}^{2}\\!P_{3/2}}-{{}^{2}\\!P_{1/2}}$ transition in Ni15+ and the ${{}^{3}\\!P_{0}}-{{}^{3}\\!P_{2}}$ transition in Ni12+ with the RCC and MCDHF methods. Their results are also listed in Table 3 for comparison. For lines $a$, $d$, and $e$, our calculated values agree well with other theoretical results [51, 26, 27], except for a factor of 3 difference in the sensitivity coefficient $q$ of line $d$. There is also a factor of 6 difference in the value of the $Q$-factor of line $e$, which we traced back to a factor of $2\pi$ missing in the linewidth definition in Ref. [27]. Previous theoretical work on nickel HCIs has focused on atomic properties relevant to emission from solar, astrophysical, and laboratory plasmas. In Tables 4 and 5, we present a comparison with the literature values for the spontaneous decay rates and lifetimes. Overall, our MCDHF values agree well with the results from other theoretical methods, such as the RCC method and the multi-reference Møller-Plesset perturbation theory. Moreover, the lifetimes of the ${}^{2}\\!P_{1/2}$ state in the Ni11+ ion, the ${}^{3}\\!P_{1}$ state in the Ni12+ ion, and the ${}^{2}\\!P_{3/2}$ state in the Ni15+ ion were measured at the heavy-ion storage ring [55, 56, 57]. We find good agreement between theory and experiment.

Table 4: Spontaneous emission rates in Ni HCIs, in s-1.
Line | This work | Other theory
---|---|---
$a$ | 238(2) | 235 [58], 260 [59], 236.31(3) [51], 237 [60], 213.1 [61]
$b$ | 157(1) | 154 [27], 156.9 [62], 157.4 [63, 64], 157 [60, 65], 156 [66]
$c$ | 56.1(5) | 56.08 [67], 57 [68], 52.7 [69], 56.45 [62], 56.42 [70], 56.5 [60], 54.66 [71], 56 [66]
$d$ | 193(2) | 192.2 [72], 190.99 [26]
$e$ | 0.037(4) | 0.034 [27], 0.03622 [62], 0.037 [64], 0.03702 [63], 0.0355 [65], 0.048 [66]
$f$ | 0.030(1) | 0.03 [67], 0.029 [69], 0.03044 [62], 0.0157 [70], 0.031 [71], 0.028 [66]

Table 5: Lifetimes (in ms) of upper clock states in Ni11+, Ni12+, Ni14+, and Ni15+. Numbers in square brackets stand for the powers of 10, i.e., $x[y]\equiv x\times 10^{y}$.

Ion | State | This work | Other theory | Experiment
---|---|---|---|---
Ni11+ | ${{}^{2}\\!P_{1/2}}$ | 4.2(1) | 4.25 [58], 4.23(2) [51] | 4.166(60) [55]
Ni12+ | ${}^{3}\\!P_{1}$ | 6.3(1) | 6.5 [27], 6.55 [73], 6.59 [73] | 7.3(2)∗, 6.50(15)∗∗ [56]
Ni12+ | ${}^{3}\\!P_{0}$ | 21(3)[3] | 22.1[3] [73], 19.5[3] [73], 19[3] [27] |
Ni14+ | ${}^{3}\\!P_{1}$ | 17.8(2) | 17.8 [67], 17.7 [70] |
Ni14+ | ${}^{3}\\!P_{2}$ | 44(1) | 44.6 [67], 45.1 [70] |
Ni15+ | ${{}^{2}\\!P_{3/2}}$ | 5.2(1) | 5.2 [72], 5 [26], 5.184 [74] | 5.90(1)∗, 5.27(7)∗∗ [57]

∗ Single-exponential evaluation. ∗∗ Multi-exponential evaluation.

### III.4 Computational uncertainties

The computational uncertainties in our work include the neglected correlation contributions, such as the triple- and quadruple-electron excitations involving the $1s$ orbital. The upper limit on these effects was estimated from the double excitations of the core orbitals in the single-reference configuration RCI calculations. The “truncation” uncertainties due to the finite number of virtual orbitals were evaluated based on the convergence trends in the above-mentioned three steps. For the wavelengths, all the uncertainties were summed together in quadrature.
For the $M$1 transition rates, in addition to these uncertainties, we also included the frequency-dependent Breit interaction contribution as another source of error. For the $E$2 transition rates, the difference in results between the Babushkin and Coulomb gauges [75] was treated as an additional contribution to the combined theoretical uncertainty.

## IV CONCLUSIONS

In summary, the quantum clockwork we explored here provides an intriguing possibility for achieving high accuracy on multiple transitions in HCIs of the same element. Our strategy offers important flexibility in the pursuit of multiple candidate clock transitions. In particular, the $E$2 transition in 61Ni12+ has a projected fractional uncertainty of $10^{-19}$. We demonstrated the key experimental capabilities of using our SW-EBIT facility to generate and extract Ni11+, Ni12+, Ni14+, and Ni15+ ions. We measured the wavelengths of four $M$1 and one $E$2 clock transitions in these ions with uncertainties of about 2 pm. The measured wavelengths establish an important reference for precision laser spectroscopy in future clock transition measurements. We also calculated spectroscopic properties of the relevant $M$1 and $E$2 clock transitions. The calculated wavelengths are consistent with our experimental results and with previous determinations. The calculated properties indicate that these ions are suitable for precision quantum metrology and for exploring new physics beyond the standard model of particle physics.

###### Acknowledgements.

The authors thank Xin Tong, José R. Crespo López-Urrutia, and Yan-Mei Yu for their help and for fruitful discussions. This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB21030300), the National Natural Science Foundation of China (Grant Nos. 11934014, 11622434, 11974382, 11604369, 11974080, 11704398, and 11874090), the National Key Research and Development Program of China under Grant No.
2017YFA0304402, the CAS Youth Innovation Promotion Association (Grant Nos. Y201963 and 2018364), the Hubei Province Science Fund for Distinguished Young Scholars (Grant No. 2017CFA040), and the K. C. Wong Education Foundation (Grant No. GJTD-2019-15). ZCY was supported by NSERC of Canada. Work of A.D. was supported in part by the U.S. National Science Foundation. ## References * Chou _et al._ [2010] C. W. Chou, D. B. Hume, T. Rosenband, and D. J. Wineland, Science 329, 1630 (2010). * Bondarescu _et al._ [2012] R. Bondarescu, M. Bondarescu, G. Hetényi, L. Boschi, P. Jetzer, and J. Balakrishna, Geophys. J. Int. 191, 78 (2012). * Derevianko and Pospelov [2014] A. Derevianko and M. Pospelov, Nat. Phys. 10, 933 (2014). * Arvanitaki _et al._ [2015] A. Arvanitaki, J. Huang, and K. Van Tilburg, Phys. Rev. D 91, 015015 (2015). * Dailey _et al._ [2020] C. Dailey, C. Bradley, D. F. J. Kimball, I. Sulai, S. Pustelny, A. Wickenbrock, and A. Derevianko, arXiv:2002.04352 (2020). * Brewer _et al._ [2019] S. M. Brewer, J. S. Chen, A. M. Hankin, E. R. Clements, C. W. Chou, D. J. Wineland, D. B. Hume, and D. R. Leibrandt, Phys. Rev. Lett. 123, 33201 (2019). * Huntemann _et al._ [2016] N. Huntemann, C. Sanner, B. Lipphardt, C. Tamm, and E. Peik, Phys. Rev. Lett. 116, 063001 (2016). * McGrew _et al._ [2018] W. F. McGrew, X. Zhang, R. J. Fasano, S. A. Schäffer, K. Beloy, D. Nicolodi, R. C. Brown, N. Hinkley, G. Milani, M. Schioppo, T. H. Yoon, and A. D. Ludlow, Nature 564, 87 (2018). * Oelker _et al._ [2019] E. Oelker, R. B. Hutson, C. J. Kennedy, L. Sonderhouse, T. Bothwell, A. Goban, D. Kedar, C. Sanner, J. M. Robinson, G. E. Marti, D. G. Matei, T. Legero, M. Giunta, R. Holzwarth, F. Riehle, U. Sterr, and J. Ye, Nat. Photonics 13, 714 (2019). * Peik and Tamm [2003] E. Peik and C. Tamm, Europhysics Letters (EPL) 61, 181 (2003). * Campbell _et al._ [2012] C. J. Campbell, A. G. Radnaev, A. Kuzmich, V. A. Dzuba, V. V. Flambaum, and A. Derevianko, Phys. Rev. Lett. 108, 120802 (2012). 
* Von Der Wense _et al._ [2016] L. von der Wense, B. Seiferle, M. Laatiaoui, J. B. Neumayr, H. J. Maier, H. F. Wirth, C. Mokry, J. Runke, K. Eberhardt, C. E. Düllmann, N. G. Trautmann, and P. G. Thirolf, Nature 533, 47 (2016). * Seiferle _et al._ [2019] B. Seiferle, L. von der Wense, P. V. Bilous, I. Amersdorffer, C. Lemell, F. Libisch, S. Stellmer, T. Schumm, C. E. Düllmann, A. Pálffy, and P. G. Thirolf, Nature 573, 243 (2019). * Derevianko _et al._ [2012] A. Derevianko, V. A. Dzuba, and V. V. Flambaum, Phys. Rev. Lett. 109, 180801 (2012). * Safronova _et al._ [2014] M. S. Safronova, V. A. Dzuba, V. V. Flambaum, U. I. Safronova, S. G. Porsev, and M. G. Kozlov, Phys. Rev. Lett. 113, 030801 (2014). * Berengut _et al._ [2010] J. C. Berengut, V. A. Dzuba, and V. V. Flambaum, Phys. Rev. Lett. 105, 120801 (2010). * Safronova _et al._ [2018] M. S. Safronova, D. Budker, D. DeMille, D. F. Kimball, A. Derevianko, and C. W. Clark, Rev. Mod. Phys. 90, 25008 (2018). * Kozlov _et al._ [2018] M. G. Kozlov, M. S. Safronova, J. R. Crespo López-Urrutia, and P. O. Schmidt, Rev. Mod. Phys. 90, 045005 (2018). * Yudin _et al._ [2014] V. I. Yudin, A. V. Taichenachev, and A. Derevianko, Phys. Rev. Lett. 113, 233003 (2014). * Micke _et al._ [2020] P. Micke, T. Leopold, S. A. King, E. Benkler, L. J. Spieß, L. Schmöger, M. Schwarz, J. R. Crespo López-Urrutia, and P. O. Schmidt, Nature 578, 60 (2020). * Schmöger _et al._ [2015] L. Schmöger, O. O. Versolato, M. Schwarz, M. Kohnen, A. Windberger, B. Piest, S. Feuchtenbeiner, J. Pedregosa-Gutierrez, T. Leopold, P. Micke, A. K. Hansen, T. M. Baumann, M. Drewsen, J. Ullrich, P. O. Schmidt, and J. R. Crespo López-Urrutia, Science 347, 1233 (2015). * Schiller [2007] S. Schiller, Phys. Rev. Lett. 98, 180801 (2007). * Berengut _et al._ [2011] J. C. Berengut, V. A. Dzuba, V. V. Flambaum, and A. Ong, Phys. Rev. Lett. 106, 210802 (2011). * Bekker _et al._ [2019] H. Bekker, A. Borschevsky, Z. Harman, C. H.
Keitel, T. Pfeifer, P. O. Schmidt, J. R. Crespo López-Urrutia, and J. C. Berengut, Nat. Commun. 10, 5651 (2019). * Dzuba _et al._ [2012] V. A. Dzuba, A. Derevianko, and V. V. Flambaum, Phys. Rev. A 86, 054501 (2012). * Yu and Sahoo [2016] Y. M. Yu and B. K. Sahoo, Phys. Rev. A 94, 062502 (2016). * Yu and Sahoo [2018] Y. M. Yu and B. K. Sahoo, Phys. Rev. A 97, 041403(R) (2018). * Berengut _et al._ [2012] J. C. Berengut, V. A. Dzuba, V. V. Flambaum, and A. Ong, Phys. Rev. A 86, 022517 (2012). * Counts _et al._ [2020] I. Counts, J. Hur, D. P. L. Aude Craik, H. Jeon, C. Leung, J. C. Berengut, A. Geddes, A. Kawasaki, W. Jhe, and V. Vuletić, Phys. Rev. Lett. 125, 123002 (2020). * Solaro _et al._ [2020] C. Solaro, S. Meyer, K. Fisher, J. C. Berengut, E. Fuchs, and M. Drewsen, Phys. Rev. Lett. 125, 123003 (2020). * Berengut _et al._ [2020] J. C. Berengut, C. Delaunay, A. Geddes, and Y. Soreq, arXiv:2005.06144 (2020). * Huber _et al._ [2014] T. Huber, A. Lambrecht, J. Schmidt, L. Karpa, and T. Schaetz, Nat. Commun. 5, 5587 (2014). * Huang _et al._ [2019] Y. Huang, H. Guan, M. Zeng, L. Tang, and K. Gao, Phys. Rev. A 99, 011401(R) (2019). * Liang _et al._ [2019] S. Liang, Q. Lu, X. Wang, Y. Yang, K. Yao, Y. Shen, B. Wei, J. Xiao, S. Chen, P. Zhou, W. Sun, Y. Zhang, Y. Huang, H. Guan, X. Tong, C. Li, Y. Zou, T. Shi, and K. Gao, Rev. Sci. Instrum. 90, 093301 (2019). * Grant [2007] I. P. Grant, _Relativistic Quantum Theory of Atoms and Molecules: Theory and Computation_ (Springer Science & Business Media, New York, 2007). * Fischer _et al._ [2016] C. F. Fischer, M. Godefroid, T. Brage, P. Jönsson, and G. Gaigalas, J. Phys. B At. Mol. Opt. Phys. 49, 182004 (2016). * Jefferies _et al._ [1971] J. T. Jefferies, F. Q. Orrall, and J. B. Zirker, Sol. Phys. 16, 103 (1971). * Behringer _et al._ [1986] K. H. Behringer, P. G. Carolan, B. Denne, G. Decker, W. Engelhardt, M. J. Forrest, R. Gill, N. Gottardi, N. C. Hawkes, E. Kallne, H. Krause, G. Magyar, M. Mansfield, F. Mast, P. Morgan, M.
J. Forrest, M. F. Stamp, and H. P. Summers, Nucl. Fusion 26, 751 (1986). * Hinnov _et al._ [1990] E. Hinnov, B. Denne, A. Ramsey, B. Stratton, and J. Timberlake, J. Opt. Soc. Am. B 7, 2002 (1990). * Kramida _et al._ [2019] A. Kramida, Yu. Ralchenko, J. Reader, and NIST ASD Team, NIST Atomic Spectra Database (ver. 5.7.1), https://physics.nist.gov/asd (2019). * Peck and Reeder [1972] E. R. Peck and K. Reeder, J. Opt. Soc. Am. 62, 958 (1972). * Draganić _et al._ [2003] I. Draganić, J. R. Crespo López-Urrutia, R. DuBois, S. Fritzsche, V. M. Shabaev, R. S. Orts, I. I. Tupitsyn, Y. Zou, and J. Ullrich, Phys. Rev. Lett. 91, 183001 (2003). * Egl _et al._ [2019] A. Egl, I. Arapoglou, M. Höcker, K. König, T. Ratajczyk, T. Sailer, B. Tu, A. Weigel, K. Blaum, W. Nörtershäuser, and S. Sturm, Phys. Rev. Lett. 123, 123001 (2019). * Kimura _et al._ [2019a] N. Kimura, R. Kodama, K. Suzuki, S. Oishi, M. Wada, K. Okada, N. Ohmae, H. Katori, and N. Nakamura, Plasma and Fusion Research 14, 1 (2019a). * Kimura _et al._ [2019b] N. Kimura, R. Kodama, K. Suzuki, S. Oishi, M. Wada, K. Okada, N. Ohmae, H. Katori, and N. Nakamura, Phys. Rev. A 100, 052508 (2019b). * Li _et al._ [2012] J. Li, P. Jönsson, M. Godefroid, C. Dong, and G. Gaigalas, Phys. Rev. A 86, 052523 (2012). * Li _et al._ [2016] J. Li, M. Godefroid, and J. Wang, J. Phys. B: At., Mol. Opt. Phys. 49, 115002 (2016). * Froese Fischer _et al._ [2019] C. Froese Fischer, G. Gaigalas, P. Jönsson, and J. Bieroń, Comput. Phys. Commun. 237, 184 (2019). * Itano _et al._ [1993] W. M. Itano, J. C. Bergquist, J. J. Bollinger, J. M. Gilligan, D. J. Heinzen, F. L. Moore, M. G. Raizen, and D. J. Wineland, Phys. Rev. A 47, 3554 (1993). * Peik _et al._ [2006] E. Peik, T. Schneider, and C. Tamm, J. Phys. B At. Mol. Opt. Phys. 39, 145 (2006). * Nandy and Sahoo [2013] D. K. Nandy and B. K. Sahoo, Phys. Rev. A 88, 052512 (2013). * Dzuba _et al._ [1999] V. A. Dzuba, V. V. Flambaum, and J. K. Webb, Phys. Rev. A 59, 230 (1999).
* Flambaum and Dzuba [2009] V. V. Flambaum and V. A. Dzuba, Can. J. Phys. 87, 25 (2009). * Lange _et al._ [2021] R. Lange, N. Huntemann, J. M. Rahm, C. Sanner, H. Shao, B. Lipphardt, C. Tamm, S. Weyers, and E. Peik, Phys. Rev. Lett. 126, 011102 (2021). * Träbert _et al._ [2004a] E. Träbert, G. Saathoff, and A. Wolf, J. Phys. B At. Mol. Opt. Phys. 37, 945 (2004a). * Träbert _et al._ [2004b] E. Träbert, G. Saathoff, and A. Wolf, Eur. Phys. J. D 30, 297 (2004b). * Träbert _et al._ [2009] E. Träbert, J. Hoffmann, C. Krantz, A. Wolf, Y. Ishikawa, and J. A. Santana, J. Phys. B At. Mol. Opt. Phys. 42, 025002 (2009). * Bilal _et al._ [2017] M. Bilal, R. Beerwerth, A. V. Volotka, and S. Fritzsche, Mon. Not. R. Astron. Soc. 469, 4620 (2017). * Del Zanna and Badnell [2016] G. Del Zanna and N. R. Badnell, Astron. Astrophys. 585, A118 (2016). * Kaufman and Sugar [1986] V. Kaufman and J. Sugar, J. Phys. Chem. Ref. Data 15, 321 (1986). * Huang _et al._ [1983] K.-N. Huang, Y.-K. Kim, K. Cheng, and J. Desclaux, At. Data Nucl. Data Tables 28, 355 (1983). * Fischer [2010] C. F. Fischer, J. Phys. B At. Mol. Opt. Phys. 43, 074020 (2010). * Biémont and Hansen [1986] E. Biémont and J. E. Hansen, Phys. Scr. 34, 116 (1986). * Bhatia and Doschek [1998] A. Bhatia and G. Doschek, At. Data Nucl. Data Tables 68, 49 (1998). * Mendoza and Zeippen [1983] C. Mendoza and C. J. Zeippen, Mon. Not. R. Astron. Soc. 202, 981 (1983). * Malville and Berger [1965] J. M. Malville and R. A. Berger, Planet. Space Sci. 13, 1131 (1965). * Jönsson _et al._ [2016] P. Jönsson, L. Radžiūtė, G. Gaigalas, M. R. Godefroid, J. P. Marques, T. Brage, C. Froese Fischer, and I. P. Grant, Astron. Astrophys. 585, A26 (2016). * Del Zanna _et al._ [2014] G. Del Zanna, P. J. Storey, and H. E. Mason, Astron. Astrophys. 567, A18 (2014). * Landi and Bhatia [2012] E. Landi and A. Bhatia, At. Data Nucl. Data Tables 98, 862 (2012). * Ishikawa and Vilkas [2001] Y. Ishikawa and M. J. Vilkas, Phys. Rev.
A 63, 042506 (2001). * Huang [1985] K.-N. Huang, At. Data Nucl. Data Tables 32, 503 (1985). * Ekman _et al._ [2018] J. Ekman, P. Jönsson, L. Radžiūtė, G. Gaigalas, G. Del Zanna, and I. Grant, At. Data Nucl. Data Tables 120, 152 (2018). * Nazir _et al._ [2017] R. T. Nazir, M. A. Bari, M. Bilal, S. Sardar, M. H. Nasim, and M. Salahuddin, Chin. Phys. B 26, 023102 (2017). * Santana _et al._ [2009] J. A. Santana, Y. Ishikawa, and E. Träbert, Phys. Scr. 79, 065301 (2009). * Grant [1974] I. P. Grant, J. Phys. B: At. Mol. Phys. 7, 1458 (1974).
# Lensing of Dirac monopole in Berry’s phase

Kazuo Fujikawa 1 and Koichiro Umetsu 2
1 Interdisciplinary Theoretical and Mathematical Sciences Program, RIKEN, Wako 351-0198, Japan
2 Laboratory of Physics, College of Science and Technology, and Junior College, Funabashi Campus, Nihon University, Funabashi, Chiba 274-8501, Japan

###### Abstract

Berry’s phase, which is associated with slow cyclic motion with a finite period, looks like a Dirac monopole when seen from far away but smoothly changes to a dipole near the level crossing point in the parameter space in an exactly solvable model. This topology change of Berry’s phase is visualized as the result of a lensing effect: the monopole, supposed to be located at the level crossing point, appears at a displaced point when the variables of the model deviate from precisely adiabatic movement. The effective magnetic field generated by Berry’s phase is determined by a simple geometrical consideration of the magnetic flux coming from the displaced Dirac monopole.

## 1 Monopole in Berry’s phase

The notions of topology and topological phenomena have become common in various fields of physics. Among them, topological Berry’s phase arises when one analyzes level crossing phenomena in quantum mechanics by a careful use of the adiabatic theorem [1, 2, 3]. The basic mechanism of the phenomenon is very simple and it is ubiquitous in quantum physics. It is thus surprising that one encounters Dirac’s magnetic monopole-like topological phase [4] essentially at each level crossing point for sufficiently slow cyclic motion in quantum mechanics [2, 5]. The general aspects of the monopole-like topological Berry’s phase in the adiabatic limit and the smooth change of Berry’s phase to a dipole in the nonadiabatic limit have been analyzed in [6] using an exactly solvable version of Berry’s model [5].
We here report on a more quantitative description of the magnetic field generated by Berry’s phase, which is essential to understand the motion of a particle placed in the monopole-like field, together with a surprising connection of the topology change of Berry’s phase with the formal geometrical movement of Dirac’s monopole in the parameter space caused by the nonadiabatic variation of parameters. This movement is characterized as an analogue of the lensing effect of Dirac’s monopole in Berry’s phase. We first briefly summarize the essential setup of the problem for the sake of completeness. Berry originally analyzed the Schrödinger equation [2] $\displaystyle i\hbar\partial_{t}\psi(t)=\hat{H}\psi(t)$ (1) for the Hamiltonian $\hat{H}=-\mu\hbar\vec{\sigma}\cdot\vec{B}(t)$ describing the motion of a magnetic moment $\mu\hbar\vec{\sigma}$ placed in a rotating magnetic field $\displaystyle\vec{B}(t)=B(\sin\theta\cos\varphi(t),\sin\theta\sin\varphi(t),\cos\theta)$ (2) with $\vec{\sigma}$ standing for Pauli matrices. The level crossing takes place at the vanishing external field $B=0$. It is explained later that this parameterization (2) describes the essence of Berry’s phase. It has been noted that the equation (1) is exactly solved if one restricts the movement of the magnetic field to the form $\varphi(t)=\omega t$ with constant $\omega$, and constant $B$ and $\theta$ [5]. 
The exact solution is then written as $\displaystyle\psi_{\pm}(t)$ $\displaystyle=$ $\displaystyle w_{\pm}(t)\exp\left[-\frac{i}{\hbar}\int_{0}^{t}dtw_{\pm}^{\dagger}(t)\big{(}\hat{H}-i\hbar\partial_{t}\big{)}w_{\pm}(t)\right]$ (3) $\displaystyle=$ $\displaystyle w_{\pm}(t)\exp\left[-\frac{i}{\hbar}\int_{0}^{t}dtw_{\pm}^{\dagger}(t)\hat{H}w_{\pm}(t)\right]\exp\left[-\frac{i}{\hbar}\int_{0}^{t}{\cal A}_{\pm}(\vec{B})\cdot\frac{d\vec{B}}{dt}dt\right]$ where $\displaystyle w_{+}(t)$ $\displaystyle=$ $\displaystyle\left(\begin{array}[]{c}\cos\frac{1}{2}(\theta-\alpha)e^{-i\varphi(t)}\\\ \sin\frac{1}{2}(\theta-\alpha)\end{array}\right),\ \ \ w_{-}(t)=\left(\begin{array}[]{c}\sin\frac{1}{2}(\theta-\alpha)e^{-i\varphi(t)}\\\ -\cos\frac{1}{2}(\theta-\alpha)\end{array}\right).$ (8) It is important that these solutions differ from the so-called instantaneous solutions used in the adiabatic approximation, which are given by setting $\alpha=0$; the following analysis of topology change is not feasible using the instantaneous solutions. The parameter $\alpha(\theta,\eta)$ is defined by $\mu\hbar B\sin\alpha=(\hbar\omega/2)\sin(\theta-\alpha)$ or equivalently [5] $\displaystyle\cot\alpha(\theta,\eta)=\frac{\eta+\cos\theta}{\sin\theta}$ (9) with $\eta=2\mu\hbar B/\hbar\omega$ for $0\leq\theta\leq\pi$, which specifies the branch of the cotangent function. The second term in the exponential of the exact solution (3) is customarily called Berry’s phase which is defined by a potential-like object (or connection) $\displaystyle{\cal A}_{\pm}(\vec{B})\equiv w_{\pm}^{\dagger}(t)(-i\hbar\frac{\partial}{\partial\vec{B}})w_{\pm}(t).$ (10) This potential describes an azimuthally symmetric static magnetic monopole- like object in the present case. 
The solution (3) is confirmed by evaluating $\displaystyle i\hbar\partial_{t}\psi_{\pm}(t)$ $\displaystyle=$ $\displaystyle\\{i\hbar\partial_{t}w_{\pm}(t)+w_{\pm}(t)[w_{\pm}^{\dagger}(t)\big{(}\hat{H}-i\hbar\partial_{t}\big{)}w_{\pm}(t)]\\}$ (11) $\displaystyle\times\exp\left[-\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}w_{\pm}^{\dagger}(t^{\prime})\big{(}\hat{H}-i\hbar\partial_{t^{\prime}}\big{)}w_{\pm}(t^{\prime})\right]$ $\displaystyle=$ $\displaystyle\\{i\hbar\partial_{t}w_{\pm}(t)+w_{\pm}(t)[w_{\pm}^{\dagger}(t)\big{(}\hat{H}-i\hbar\partial_{t}\big{)}w_{\pm}(t)]$ $\displaystyle\ +w_{\mp}(t)[w_{\mp}^{\dagger}(t)\big{(}\hat{H}-i\hbar\partial_{t}\big{)}w_{\pm}(t)]\\}$ $\displaystyle\times\exp\left[-\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}w_{\pm}^{\dagger}(t^{\prime})\big{(}\hat{H}-i\hbar\partial_{t^{\prime}}\big{)}w_{\pm}(t^{\prime})\right]$ $\displaystyle=$ $\displaystyle\hat{H}\psi_{\pm}(t)$ where we used $w_{\mp}^{\dagger}\big{(}\hat{H}-i\hbar\partial_{t}\big{)}w_{\pm}=0$ by noting (9), and the completeness relation $w_{+}w_{+}^{\dagger}+w_{-}w_{-}^{\dagger}=1$. The parameter $\eta\geq 0$ is written as $\displaystyle\eta=\frac{2\mu\hbar B}{\hbar\omega}=\frac{\mu BT}{\pi}$ (12) when one defines the period $T=2\pi/\omega$. The parameter $\eta$ is a ratio of the two different energy scales appearing in the model, namely, the static energy $2\mu\hbar B$ of the dipole moment in an external magnetic field and the kinetic energy (rotation energy) $\hbar\omega$: $\eta\gg 1$ (for example, $T\rightarrow\infty$ for any finite $B$) corresponds to the adiabatic limit, and $\eta\ll 1$ (for example, $T\rightarrow 0$ for finite $B$) corresponds to the nonadiabatic limit. In a mathematical treatment of the adiabatic theorem, the precise adiabaticity is defined by $T\rightarrow\infty$ with fixed $B$ [3]. The parameter $\alpha(\theta,\eta)$ in (9) is normalized as $\alpha(0,\eta)=0$ by definition. 
Then the topology of the monopole-like object is specified by the value $\displaystyle\lim_{\theta\rightarrow\pi}\alpha(\theta,\eta)=0,\frac{1}{2}\pi,\pi,$ (13) for $\eta>1$, $\eta=1$ and $\eta<1$, respectively, as is explained later. The extra phase factor for one period of motion is written as $\displaystyle\exp\left[-\frac{i}{\hbar}\oint{\cal A}_{\pm}(\vec{B})\cdot\frac{d\vec{B}}{dt}dt\right]$ $\displaystyle=$ $\displaystyle\exp\\{-i\oint\frac{-1\mp\cos(\theta-\alpha(\theta,\eta))}{2}d\varphi\\}$ (14) $\displaystyle=$ $\displaystyle\exp\\{-i\oint\frac{1\mp\cos(\theta-\alpha(\theta,\eta))}{2}d\varphi+2i\pi\\}$ $\displaystyle=$ $\displaystyle\exp\\{-\frac{i}{\hbar}\Omega_{\pm}\\},$ with the monopole-like integrated flux $\displaystyle\Omega_{\pm}$ $\displaystyle=$ $\displaystyle\hbar\oint\frac{[1\mp\cos(\theta-\alpha(\theta,\eta))]}{2B\sin\theta}B\sin\theta d\varphi.$ (15) In (14), we adjusted the trivial phase $2\pi i$ for the convenience of the later analysis; this is related to a gauge transformation of Wu and Yang [7, 6]. The corresponding energy eigenvalues are $\displaystyle E_{\pm}=w_{\pm}^{\dagger}(t)\hat{H}w_{\pm}(t)=\mp(\mu\hbar B\cos\alpha).$ (16) From now on, we concentrate on $\Omega_{+}$ associated with the energy eigenvalue $E_{+}$; the monopole $\Omega_{-}$ associated with the energy eigenvalue $E_{-}$ is described by $-\Omega_{+}$ up to a gauge transformation of Wu and Yang. We then have an azimuthally symmetric monopole-like potential [6] $\displaystyle{\cal A}_{\varphi}=\frac{\hbar}{2B\sin\theta}[1-\cos\Theta(\theta,\eta)]$ (17) and ${\cal A}_{\theta}={\cal A}_{B}=0$, where we defined $\displaystyle\Theta(\theta,\eta)=\theta-\alpha(\theta,\eta).$ (18) The standard Dirac monopole [4] is recovered when one sets $\alpha(\theta,\eta)=0$ (or in the ideal adiabatic limit $\eta=\infty$ in (9)), namely, $\Theta=\theta$ in (17) and when $B$ is identified with the radial coordinate $r$ in the real space.
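The three limiting values in (13) follow from the branch condition (9) and can be confirmed numerically; in this minimal sketch (helper names ours) the branch with $\alpha(0,\eta)=0$ is realized by `atan2`:

```python
import math

def alpha(theta, eta):
    # Branch of cot(alpha) = (eta + cos(theta)) / sin(theta), Eq. (9),
    # normalized so that alpha(0, eta) = 0, for 0 <= theta <= pi.
    return math.atan2(math.sin(theta), eta + math.cos(theta))

def alpha_at_pi(eta, eps=1e-8):
    """Approach theta -> pi from below."""
    return alpha(math.pi - eps, eta)
```

Evaluating `alpha_at_pi` for $\eta=2$, $1$, and $0.5$ reproduces the three values $0$, $\pi/2$, and $\pi$ of (13).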
The crucial parameter $\Theta(\theta,\eta)$ is shown in Fig.1 [6]. Figure 1: The relation between $\theta$ and $\Theta(\theta,\eta)=\theta-\alpha(\theta,\eta)$ parameterized by $\eta$. We have the exact relations $\Theta(\theta,\infty)=\theta$, $\Theta(\theta,1)=\theta/2$ and $\Theta(\theta,0)=0$, respectively, for $\eta=\infty$, $\eta=1$ and $\eta=0$. Topologically, $\eta>1$ corresponds to a monopole, $\eta=1$ to a half-monopole, and $\eta<1$ to a dipole [6]. Note that $\cos\theta_{0}=-\eta$ with $\eta<1$, for which $\partial\Theta(\theta,\eta)/\partial\theta=0$. The Dirac string appears at the singularity of the potential (17). There exists no singularity at $\theta=0$ since $\Theta(\theta,\eta)\rightarrow 0$ for $\theta\rightarrow 0$. The singularity does not appear at the origin $B=0$ with any fixed $T$ since $\alpha(\theta,\eta)\rightarrow\theta$ for $B\rightarrow 0$, namely, if one uses $\Theta(\theta,\eta)\rightarrow 0$ for $\eta=\mu BT/\pi\rightarrow 0$ in (9). In fact the potential vanishes at $B=0$ for any finite $T$; we have a useful relation in the non-adiabatic domain $\eta=\mu BT/\pi\ll 1$ [6], $\displaystyle{\cal A}_{\varphi}$ $\displaystyle\simeq$ $\displaystyle\frac{\hbar}{4B}(\mu TB/\pi)^{2}\sin\theta$ (19) which has no singularity associated with the Dirac string at $\theta=\pi$ near $B=0$ and vanishes at $B=0$. Thus the Dirac string can appear only at $\theta=\pi$ and only when $\Theta(\pi,\eta)\neq 0$, namely, $\eta\geq 1$ in Fig.1 or equivalently $\displaystyle B\geq\frac{\pi}{\mu T}$ (20) for any fixed finite $T$ [6]; the end of the Dirac string is located at $B=\frac{\pi}{\mu T}$ and $\theta=\pi$. The total magnetic flux passing through a small circle $C$ around the Dirac string at the point $B$ and $\theta=\pi$ is given by the potential (17), $\displaystyle\oint_{C}{\cal A}_{\varphi}B\sin\theta d\varphi=\frac{e_{M}}{2}(1-\cos\Theta(\pi,\eta))$ (21) with $e_{M}=2\pi\hbar$.
This flux agrees, by Stokes’ theorem, with the integrated flux outgoing from a sphere with radius $B$ covering the monopole, since no singularity appears except for the Dirac string. For $\eta>1$, one sees from Fig.1 that the above flux is given by $e_{M}=2\pi\hbar$ and thus Dirac’s quantization condition is satisfied in the sense $\exp[-ie_{M}/\hbar]=1$. On the other hand, the flux vanishes for $\eta<1$ (i.e., $B<\frac{\pi}{\mu T}$) and thus the object changes to a dipole [6].

### 1.1 Fixed $T$ configurations

We analyze the behavior of the magnetic monopole-like object (17) for fixed $T$ and varying $B$; this is close to the description of a monopole in the real space if one identifies $B$ with the radial variable $r$ of the real space. The topology and topology change of Berry’s phase, when regarded as a magnetic monopole defined in the space of $\vec{B}$, are specified by the parameter $\eta$, as is suggested by the discrete jump of the end point $\lim_{\theta\rightarrow\pi}\Theta(\theta,\eta)$ in Fig.1 [6]. Using the exact potential (17) we have an analogue of the magnetic flux in the parameter space $\vec{B}=B(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$, $\displaystyle{\cal B}\equiv\nabla\times{\cal A}$ $\displaystyle=$ $\displaystyle\frac{\hbar}{2}\frac{\frac{\partial\Theta(\theta,\eta)}{\partial\theta}\sin\Theta(\theta,\eta)}{\sin\theta}\frac{1}{B^{2}}{\bf e}_{B}-\frac{\hbar}{2}\frac{\frac{\partial\Theta(\theta,\eta)}{\partial B}\sin\Theta(\theta,\eta)}{B\sin\theta}{\bf e}_{\theta}$ (22) for $\theta\neq\pi$ and $B\neq 0$, with ${\bf e}_{B}=\frac{\vec{B}}{B}$, and ${\bf e}_{\theta}$ a unit vector in the direction $\theta$ in the spherical coordinates.
We have $\displaystyle\frac{\partial\Theta(\theta,\eta)}{\partial\theta}=\frac{\eta(\eta+\cos\theta)}{1+\eta^{2}+2\eta\cos\theta},$ (23) by noting $\frac{\partial\alpha(\theta,\eta)}{\partial\theta}=\frac{1+\eta\cos\theta}{(\eta+\cos\theta)^{2}+\sin^{2}\theta}$ in (9), and thus $\frac{\partial\Theta(\theta,\eta)}{\partial\theta}=0$ at $\cos\theta_{0}=-\eta$ for $\eta<1$. The factor in the second term in (22) is given by recalling $\eta=\mu TB/\pi$, $\displaystyle\frac{\partial\Theta(\theta,\eta)}{\partial B}$ $\displaystyle=$ $\displaystyle\frac{\mu T}{\pi}\frac{\partial\Theta(\theta,\eta)}{\partial\eta}$ (24) $\displaystyle=$ $\displaystyle\frac{\eta}{B}\frac{\sin\theta}{1+\eta^{2}+2\eta\cos\theta}$ using (9) and (18). Thus we have (by setting $e_{M}=2\pi\hbar$) $\displaystyle{\cal B}$ $\displaystyle\equiv$ $\displaystyle\nabla\times{\cal A}$ (25) $\displaystyle=$ $\displaystyle\frac{e_{M}}{4\pi}\frac{\sin\Theta(\theta,\eta)}{\sin\theta}\frac{\eta}{B^{2}}\frac{1}{1+\eta^{2}+2\eta\cos\theta}[(\eta+\cos\theta){\bf e}_{B}-\sin\theta{\bf e}_{\theta}]$ We also have from (9), $\displaystyle\cos\alpha=\frac{\eta+\cos\theta}{\sqrt{1+\eta^{2}+2\eta\cos\theta}},\ \ \ \sin\alpha=\frac{\sin\theta}{\sqrt{1+\eta^{2}+2\eta\cos\theta}},$ (26) and thus $\displaystyle\sin\Theta(\theta,\eta)$ $\displaystyle=$ $\displaystyle\sin(\theta-\alpha)$ (27) $\displaystyle=$ $\displaystyle\sin\theta\cos\alpha-\cos\theta\sin\alpha$ $\displaystyle=$ $\displaystyle\frac{\eta\sin\theta}{\sqrt{1+\eta^{2}+2\eta\cos\theta}},$ and similarly $\cos\Theta(\theta,\eta)=[1+\eta\cos\theta]/\sqrt{1+\eta^{2}+2\eta\cos\theta}$. We finally have the azimuthally symmetric magnetic field from (25) $\displaystyle{\cal B}$ $\displaystyle=$ $\displaystyle\frac{e_{M}}{4\pi}\frac{\eta^{2}}{B^{2}}\frac{1}{(1+\eta^{2}+2\eta\cos\theta)^{3/2}}[(\eta+\cos\theta){\bf e}_{B}-\sin\theta{\bf e}_{\theta}].$ (28) We note that $B/\eta=\pi/\mu T$ and $\theta=\pi$ define the end point of the Dirac string in the fixed $T$ picture. 
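The closed forms (26)-(27) for $\sin\Theta$ and $\cos\Theta$ can be verified against a direct evaluation of $\Theta=\theta-\alpha$ from the branch condition (9) (a small self-contained check; helper names ours):

```python
import math

def alpha(theta, eta):
    # Branch of cot(alpha) = (eta + cos(theta)) / sin(theta), Eq. (9).
    return math.atan2(math.sin(theta), eta + math.cos(theta))

def sin_cos_Theta_direct(theta, eta):
    """Evaluate Theta = theta - alpha(theta, eta) directly, Eq. (18)."""
    Th = theta - alpha(theta, eta)
    return math.sin(Th), math.cos(Th)

def sin_cos_Theta_closed(theta, eta):
    """Closed forms following from (26): sin(Theta) and cos(Theta)."""
    r = math.sqrt(1.0 + eta**2 + 2.0 * eta * math.cos(theta))
    return eta * math.sin(theta) / r, (1.0 + eta * math.cos(theta)) / r
```

The two evaluations agree to machine precision over the whole range $0<\theta<\pi$ and $\eta>0$.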
The magnetic field ${\cal B}$ is not singular at $\theta=\pi$ for $\eta>1$ which shows that the Dirac string is not observable if it satisfies the Dirac quantization condition. In the adiabatic limit $\eta\rightarrow\infty$ ($\pi/\mu T\rightarrow 0$ with fixed $B$) in (28), the outgoing magnetic flux agrees with that of the Dirac monopole $\displaystyle{\cal B}=\frac{e_{M}}{4\pi}\frac{1}{B^{2}}{\bf e}_{B}$ (29) located at the origin (level crossing point) in the parameter space. This is the common magnetic monopole field associated with Berry’s phase in the precise adiabatic approximation. At the origin $B=0$ with fixed finite $T$, which corresponds to the nonadiabatic limit $\eta=\mu BT/\pi\rightarrow 0$, the magnetic field (28) approaches a constant field parallel to the z-axis $\displaystyle{\cal B}=\frac{e_{M}}{4\pi}(\frac{\mu T}{\pi})^{2}[\cos\theta{\bf e}_{B}-\sin\theta{\bf e}_{\theta}].$ (30) A view of the magnetic flux generated by the monopole-like object (28) is shown in Fig.2. In passing, we comment on the notational conventions: $\vec{B}(t)$ stands for the externally applied magnetic field to define the original Hamiltonian in (1) and $\vec{B}$ is used to specify the parameter space to define Berry’s phase, and ${\cal B}$ stands for the “magnetic field” generated by Berry’s phase in the parameter space. The calligraphic symbols ${\cal A},\ {\cal B}$, ${\cal\nabla}$ and the bold ${\bf e}$ stand for vectors without arrows. Figure 2: Arrows indicating the direction and magnitude of the magnetic flux from the azimuthally symmetric monopole-like object associated with Berry’s phase (28) in the fixed $T$ picture. Two spheres with radii $B>\pi/\mu T$ (i.e., $\eta>1$) and $B<\pi/\mu T$ (i.e., $\eta<1$) are shown. The wavy line stands for the Dirac string with the end located at $B=\pi/\mu T$ and $\theta=\pi$ from which the magnetic flux is imported. 
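Both limits quoted above can be confirmed numerically from the closed form (28), in units where $e_{M}/4\pi=1$ (the helper name is ours): for large $\eta$ the components approach the radial Dirac-monopole field (29), and for $\eta=\mu BT/\pi\rightarrow 0$ with $\mu T/\pi$ fixed they approach the constant field (30).

```python
import math

def berry_flux(B, theta, eta):
    """(e_B, e_theta) components of the flux (28), in units e_M/(4*pi) = 1."""
    r2 = 1.0 + eta**2 + 2.0 * eta * math.cos(theta)
    pref = eta**2 / (B**2 * r2**1.5)
    return pref * (eta + math.cos(theta)), -pref * math.sin(theta)
```

With $\mu T/\pi$ set to 1 one has $\eta=B$, so the nonadiabatic limit is reached by sending $B\rightarrow 0$ in both arguments.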
Only in the ideal adiabatic limit $T\rightarrow\infty$ do the end of the Dirac string and the geometrical center of Berry’s phase, which is located at the origin, coincide. ### 1.2 Lensing of Dirac monopole in Berry’s phase We show that the monopole associated with Berry’s phase is mathematically regarded as a Dirac monopole moving away from the level crossing point of the parameter space, driven by the force generated by the nonadiabatic rotating external field with finite period $T=2\pi/\omega<\infty$ in Berry’s model. We consider the configuration in Fig.3. Figure 3: A geometric picture in the 3-dimensional parameter space $\vec{B}$ with a sphere centered at $O$ and the radius $B$, assuming azimuthal symmetry. We suppose that a genuine azimuthally symmetric Dirac monopole is located at the point $O^{\prime}$ in the parameter space. The distance between $O$ and $O^{\prime}$ is chosen to be $\overline{OO^{\prime}}=B/\eta$. The three angles $\theta$, $\alpha$ and $\Theta=\theta-\alpha$ are shown. The observer is located at the point $P$. The wavy line indicates the Dirac string. We then have $\displaystyle\overline{O^{\prime}P}^{2}$ $\displaystyle=$ $\displaystyle B^{2}+(\frac{B}{\eta})^{2}-2B(\frac{B}{\eta})\cos(\pi-\theta)$ (31) $\displaystyle=$ $\displaystyle\frac{B^{2}}{\eta^{2}}[1+\eta^{2}+2\eta\cos\theta],$ and the unit vector ${\bf e}$ in the direction of $\vec{O^{\prime}P}$ is $\displaystyle{\bf e}$ $\displaystyle=$ $\displaystyle\cos\alpha{\bf e}_{B}-\sin\alpha{\bf e}_{\theta}$ (32) with ${\bf e}_{B}=\vec{B}/B$, where ${\bf e}_{\theta}$ is a unit vector in the direction of $\theta$ in the spherical coordinates.
Then the magnetic flux of Dirac’s monopole located at $O^{\prime}$ when observed at the point $P$ is given by $\displaystyle{\cal B}^{\prime}$ $\displaystyle=$ $\displaystyle\frac{e_{M}}{4\pi}\frac{1}{\overline{O^{\prime}P}^{2}}{\bf e}$ (33) $\displaystyle=$ $\displaystyle\frac{e_{M}}{4\pi}\frac{\eta^{2}}{B^{2}}\frac{1}{1+\eta^{2}+2\eta\cos\theta}(\cos\alpha{\bf e}_{B}-\sin\alpha{\bf e}_{\theta}).$ Next we determine the parameter $\alpha$. We have $(B/\eta)^{2}=B^{2}+\overline{O^{\prime}P}^{2}-2B\overline{O^{\prime}P}\cos\alpha$, which gives $\displaystyle\cos\alpha$ $\displaystyle=$ $\displaystyle\frac{1}{2B(B/\eta)\sqrt{1+\eta^{2}+2\eta\cos\theta}}[B^{2}+(\frac{B}{\eta})^{2}(1+\eta^{2}+2\eta\cos\theta)-(\frac{B}{\eta})^{2}]$ (34) $\displaystyle=$ $\displaystyle\frac{\eta+\cos\theta}{\sqrt{1+\eta^{2}+2\eta\cos\theta}}$ and from the geometrical relation $\frac{B\sin\alpha}{B\sin\theta}=\frac{B/\eta}{(B/\eta)\sqrt{1+\eta^{2}+2\eta\cos\theta}}$, $\displaystyle\sin\alpha=\frac{\sin\theta}{\sqrt{1+\eta^{2}+2\eta\cos\theta}}.$ (35) The parameter $\alpha$ agrees with the parameter in (26). The azimuthally symmetric flux (33) is thus given by $\displaystyle{\cal B}^{\prime}$ $\displaystyle=$ $\displaystyle\frac{e_{M}}{4\pi}\frac{\eta^{2}}{B^{2}}\frac{1}{(1+\eta^{2}+2\eta\cos\theta)^{3/2}}[(\eta+\cos\theta){\bf e}_{B}-\sin\theta\,{\bf e}_{\theta}]$ (36) which agrees with the flux given by Berry’s phase (28). This agreement of the two expressions (28) and (36) shows that the Dirac monopole originally at the level crossing point in the parameter space formally appears to drift away by the distance $B/\eta=\pi/\mu T$ in the parameter space when the precise adiabaticity condition $T=\infty$ [3] is spoiled by a finite $T$.
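The agreement of (36) with (28) can also be confirmed numerically by placing a unit monopole at the displaced point $O^{\prime}$ in Cartesian coordinates and projecting its Coulomb field onto $({\bf e}_{B},{\bf e}_{\theta})$. The following Python sketch (an illustration with the normalization $e_{M}/4\pi=1$) does this for several values of $\eta$ and $\theta$:

```python
import math

def berry_field(B, theta, eta):
    """Components of (28) along (e_B, e_theta), in units with e_M/(4*pi) = 1."""
    D = 1 + eta**2 + 2*eta*math.cos(theta)
    pref = eta**2/(B**2*D**1.5)
    return pref*(eta + math.cos(theta)), -pref*math.sin(theta)

def displaced_monopole(B, theta, eta):
    """Coulomb field of a unit monopole at O' = (0, 0, -B/eta), observed at P (Fig.3)."""
    Px, Pz = B*math.sin(theta), B*math.cos(theta)      # observer P, in the (x, z) plane
    vx, vz = Px - 0.0, Pz + B/eta                      # the vector O'P
    r3 = (vx*vx + vz*vz)**1.5
    fx, fz = vx/r3, vz/r3                              # e/|O'P|^2, i.e. (33) with e_M/(4*pi) = 1
    eB = (math.sin(theta), math.cos(theta))            # radial unit vector at P
    eT = (math.cos(theta), -math.sin(theta))           # polar unit vector at P
    return fx*eB[0] + fz*eB[1], fx*eT[0] + fz*eT[1]

for eta in (0.5, 1.0, 2.5):
    for theta in (0.3, 1.2, 2.7):
        for u, v in zip(berry_field(2.0, theta, eta), displaced_monopole(2.0, theta, eta)):
            assert math.isclose(u, v, rel_tol=1e-9, abs_tol=1e-12)
```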
It is interesting that two dynamical parameters, the strength of the external magnetic field and the period in Berry’s model, are converted to very different geometrical parameters in Berry’s phase, namely, the shape of the monopole and the distance of the deviation of the monopole from the level crossing point. The observed magnetic field on the sphere with a radius $B$, which is controlled by the observer, thus changes when one changes the parameter $T$ that determines the end of the Dirac string located at $\pi/\mu T$ in the parameter space. This geometrical picture is useful when one draws the precise magnetic flux from the monopole-like object for finite $T$ as in Fig.4 and it is essential when one attempts to understand the motion of a particle in the magnetic field. Figure 4: Arrows indicating the direction and magnitude of the magnetic flux observed at the point P with fixed $B$ and $\theta$ when the end of Dirac string at $\pi/\mu T$ is varied from the point $\pi/\mu T<B$ (Fig.4a) to the boundary $\pi/\mu T=B$ (Fig.4b) and then to the point $\pi/\mu T>B$ (Fig.4c), which correspond to the change of the basic parameter $\eta=\mu BT/\pi$ from the adiabatic domain $\eta=2.5>1$ to the boundary $\eta=1$ and then to the nonadiabatic domain $\eta=0.5<1$, respectively. The wavy line stands for the Dirac string with the end at $B=\pi/\mu T$ and $\theta=\pi$. These figures after a suitable rescaling may also be interpreted as the results with the end of the Dirac string kept fixed at $\pi/\mu T$ and $\theta=\pi$ and varying the distance $B$, starting with a large $B>\pi/\mu T$ (Fig.4a) toward a small $B<\pi/\mu T$ (Fig.4c) in the parameter space, such as two spheres in Fig.2. 
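The flux pattern of Fig.4 can be quantified by integrating the radial component of (28) over a spherical cap; by Stokes’ theorem (21) the result must equal $2\pi(1-\cos\Theta(\theta,\eta))$ in units with $e_{M}=4\pi$. A numerical illustration (not from the original text) for the three domains $\eta=2.5,\ 1,\ 0.5$ of Fig.4, reproducing the full monopole flux $4\pi$, the half-monopole flux $2\pi$, and the vanishing net flux of a dipole:

```python
import math

def Theta(theta, eta):
    # Theta = theta - alpha, with alpha fixed by (26)
    return theta - math.atan2(math.sin(theta), eta + math.cos(theta))

def flux_through_cap(theta_max, eta, steps=100000):
    """Flux of the field (28) through the cap [0, theta_max], midpoint rule, e_M = 4*pi."""
    total, h = 0.0, theta_max/steps
    for k in range(steps):
        th = (k + 0.5)*h
        D = 1 + eta**2 + 2*eta*math.cos(th)
        total += eta**2*(eta + math.cos(th))*math.sin(th)/D**1.5 * h
    return 2*math.pi*total

for eta in (0.5, 1.0, 2.5):
    # Stokes' theorem (21): cap flux equals 2*pi*(1 - cos Theta(theta, eta))
    assert abs(flux_through_cap(2.0, eta) - 2*math.pi*(1 - math.cos(Theta(2.0, eta)))) < 1e-6

# eta > 1: full monopole flux 4*pi; eta = 1: half monopole 2*pi; eta < 1: dipole, zero net flux
assert abs(flux_through_cap(math.pi, 2.5) - 4*math.pi) < 1e-3
assert abs(flux_through_cap(math.pi, 1.0) - 2*math.pi) < 1e-3
assert abs(flux_through_cap(math.pi, 0.5)) < 1e-3
```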
In terms of the original physical setting of a magnetic dipole placed in a given rotating magnetic field described by the Hamiltonian (1), the cone drawn by the dipole becomes sharper compared to the cone of the given magnetic field, which subtends the solid angle $\Omega=2\pi(1-\cos\theta)$, when the rotating speed of the external magnetic field becomes larger and the dipole moment is left behind, namely, [5] $\displaystyle\psi_{+}^{\dagger}(t)\vec{\sigma}\psi_{+}(t)$ $\displaystyle=$ $\displaystyle w_{+}^{\dagger}(t)\vec{\sigma}w_{+}(t)$ (37) $\displaystyle=$ $\displaystyle\left(\sin\Theta\cos\varphi(t),\sin\Theta\sin\varphi(t),\cos\Theta\right)$ $\displaystyle=$ $\displaystyle-\psi_{-}^{\dagger}(t)\vec{\sigma}\psi_{-}(t)$ that subtends the solid angle $\Omega=2\pi(1-\cos\Theta)$ with $\Theta=\theta-\alpha$ ; this sharper cone is effectively recognized as the drifting monopole in Berry’s phase by an observer located at the point $P$ in Fig.3. We note that the agreement of the solid angle drawn by the spinor solution (8) with Berry’s phase is known to be generally valid in the two-component spinor. The general orthonormal spinor bases are parameterized as $\displaystyle v_{+}(t)$ $\displaystyle=$ $\displaystyle\left(\begin{array}[]{c}\cos\frac{1}{2}\theta(t)e^{-i\varphi(t)}\\\ \sin\frac{1}{2}\theta(t)\end{array}\right),\ \ \ v_{-}(t)=\left(\begin{array}[]{c}\sin\frac{1}{2}\theta(t)e^{-i\varphi(t)}\\\ -\cos\frac{1}{2}\theta(t)\end{array}\right)$ (42) that give the spin vector $\displaystyle v_{+}^{\dagger}(t)\vec{\sigma}v_{+}(t)$ $\displaystyle=$ $\displaystyle\left(\sin\theta(t)\cos\varphi(t),\sin\theta(t)\sin\varphi(t),\cos\theta(t)\right)=-v_{-}^{\dagger}(t)\vec{\sigma}v_{-}(t)$ subtending the solid angle $\tilde{\Omega}_{\pm}=\oint(1\mp\cos\theta(t))d\varphi(t)$ for a closed movement. 
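The spin vectors quoted above follow from a one-line evaluation of $v^{\dagger}\vec{\sigma}v$ for a two-component spinor. A short numerical check (illustrative only, using the standard Pauli-matrix expectation values):

```python
import cmath, math

def pauli_expect(v):
    """<v|sigma|v> for a two-component complex spinor v = (a, b)."""
    a, b = v
    return (2*(a.conjugate()*b).real,      # sigma_x
            2*(a.conjugate()*b).imag,      # sigma_y
            abs(a)**2 - abs(b)**2)         # sigma_z

theta, phi = 1.234, 0.777
vp = (math.cos(theta/2)*cmath.exp(-1j*phi), math.sin(theta/2))   # v_+ of (42)
vm = (math.sin(theta/2)*cmath.exp(-1j*phi), -math.cos(theta/2))  # v_- of (42)
target = (math.sin(theta)*math.cos(phi), math.sin(theta)*math.sin(phi), math.cos(theta))

# v_+ gives (sin cos, sin sin, cos), and v_- gives exactly the opposite vector
assert all(abs(s - t) < 1e-12 for s, t in zip(pauli_expect(vp), target))
assert all(abs(s + t) < 1e-12 for s, t in zip(pauli_expect(vm), target))
```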
On the other hand, the “holonomy”, which is related to Berry’s phase, satisfies [5] $\displaystyle\oint dtv^{\dagger}_{\pm}(t)i\partial_{t}v_{\pm}(t)=-\frac{1}{2}\oint(1\mp\cos\theta(t))d\varphi(t)+2\pi=-\frac{1}{2}\tilde{\Omega}_{\pm}+2\pi.$ These two quantities thus agree up to the factor $1/2$ and up to trivial phase $2\pi$ in the case of spinor bases. The important fact is that our exact solution of the Schrödinger equation (8) has this structure of $v_{+}(t)$ and $v_{-}(t)$ with $\theta(t)=\theta-\alpha(\theta)$. One may thus prefer to understand that Fig.3 implies an analogue of the effect of lensing of Dirac’s monopole, since the movement of the monopole in the parameter space is a mathematical one. In the precise adiabatic limit with $T=\infty$ [3], the monopole is located at the level crossing point $O$, but when the effect of nonadiabatic rotation with finite $T<\infty$ is turned on, the image of the monopole is displaced to the point $O^{\prime}$ located at $\pi/\mu T$ by keeping the topology and strength of the point-like monopole intact. In this picture, it is important that the topological monopole itself is not resolved in the nonadiabatic domain but it disappears from observer’s view located at the point P for fixed $B$ when $\pi/\mu T=B/\eta\rightarrow{\rm large}$ with fixed $B$ (i.e., $\eta\rightarrow$ small). In the middle, the formal topology change takes place when $\pi/\mu T$ touches the sphere with the fixed radius $B$ (i.e., $\eta=1$). Even in the picture of lensing, the “magnetic flux” generated by Berry’s phase measured at the point in the parameter space specified by $(B,\theta)$ is the real flux. It will be interesting to examine the possible experimental implications of these aspects of Berry’s phase, which is expressed by the magnetic field (28), in a wider area of physics. 
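The holonomy relation can be verified numerically for a circular sweep $\varphi(t)=\omega t$ at fixed $\theta$ (an illustrative check, not part of the paper; the period is set to $T=1$):

```python
import cmath, math

def holonomy(theta, steps=2000):
    """Numerical closed-path integral of v_+^dagger(t) i d/dt v_+(t) over one period
    for phi(t) = omega*t at fixed theta, with T = 1."""
    omega, dt, total = 2*math.pi, 1.0/steps, 0.0
    c = math.cos(theta/2)
    for k in range(steps):
        t = (k + 0.5)*dt
        v0 = c*cmath.exp(-1j*omega*t)      # upper component of v_+; the lower one is constant
        dv0 = -1j*omega*v0
        total += (v0.conjugate()*1j*dv0).real*dt
    return total

theta = 1.1
solid = 2*math.pi*(1 - math.cos(theta))    # the solid angle Omega-tilde_+ for fixed theta
# the holonomy equals -(1/2)*Omega-tilde_+ + 2*pi, as stated above
assert abs(holonomy(theta) - (-0.5*solid + 2*math.pi)) < 1e-9
```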
As for the smooth transition from a monopole to a dipole, it appears in the process of shrinking of the sphere with a radius $B$ covering the end of the Dirac string located at $\pi/\mu T$ to a smaller sphere for which $B<\pi/\mu T$ as in Fig.2. When the sphere touches the end of the Dirac string (at $\eta=1$) in the middle, one encounters a half monopole with the outgoing flux which is half of the full monopole $e_{M}/2=\pi\hbar$. See Stokes’ theorem (21) with $\Theta(\pi,\eta=1)=\pi/2$ in Fig.1. At this specific point, the Dirac string becomes observable [6], corresponding to the Aharonov-Bohm effect [8] of the electron in the magnetic flux generated by the superconducting Cooper pair [9]. It is then natural to attach the end of the Dirac string to an infinitesimally small opening on the sphere forming a closed sphere and thus leading to the vanishing net outgoing flux, which corresponds to a dipole. The idea of the half monopole at $\eta=1$ is interesting, but it is natural to incorporate it as a part of a dipole. The monopole-like object (17) is always a dipole if one counts the Dirac string as in Fig.2 and Stokes’ theorem (21) always holds. In this sense, no real topology change takes place for the movement of $B$, from large $B$ to small $B$, except for the fact that the unobservable Dirac string becomes observable at $B=\pi/\mu T$ and triggers the topology change from a monopole to a dipole. ## 2 Discussion and conclusion The topology or singularity in Berry’s phase arises from the well-known adiabatic theorem [14, 15], namely, no level crossing takes place in the precise adiabatic limit $T\rightarrow\infty$. This theorem implies the appearance of some kind of obstruction or barrier to the level crossing in the precise adiabatic limit; the appearance of Dirac’s monopole singularity in the adiabatic limit may be regarded as a manifestation of this obstruction or barrier in the parameter space. 
Away from the precise adiabatic limit, with finite $T$, which is physically relevant for the applications of Berry’s phase as was noted by Berry [2], no obstruction to the level crossing appears any longer. This is the basis of our expectation of the topology change in Berry’s phase in the nonadiabatic domain. The topology change in Berry’s phase in the exactly solvable model has been analyzed in detail, including the appearance of a half-monopole, in [6]. The analysis is essentially based on Fig.1, which is a result of solving the relation (9), which is in turn a result of the Schrödinger equation (1). Because of this complicated logical procedure, the exact “magnetic field” generated by Berry’s phase was not very transparent. In the present paper, we remedied this shortcoming of [6] by giving a more explicit representation of the magnetic field. In this attempt, we recognized that the magnetic field is in fact given by a very simple geometrical picture in Fig.3. We thus encountered an interesting mathematical description of the topology change in Berry’s phase in terms of a geometrical movement of Dirac’s monopole caused by the nonadiabatic variations of parameters in Berry’s model. It is remarkable that the monopole remains intact without being resolved even in the nonadiabatic domain. We analyzed the monopole-like object and its topological property in Berry’s phase by treating $\vec{B}$ as a given classical parameter. If one adds other physical considerations, there appear some conditions on the parameters of the exactly solvable model (1). For example, the two levels in (16) cross at $\alpha(\theta;\eta)=\pi/2$, which is related to the topology change from a monopole to a dipole in an intricate way if one remembers $\alpha(\theta;\eta)=\theta-\Theta$ and Fig.1. Traditionally, we are accustomed to understanding the topology change in terms of the winding and unwinding of some topological obstruction.
The present geometrical description of topology change in terms of the moving monopole is a hitherto unknown mechanism. This new mechanism partly arises from the fact that Berry’s phase is not a simple monopole but rather a complex of the monopole and the level crossing point located at the origin of coordinates. If one instead understands Berry’s phase as a simple monopole, one will find a novel class of monopoles [10, 11]. A notable application of Berry’s phase in momentum space, which is defined by the effective Hamiltonian obtained by replacing $\vec{B}(t)\rightarrow\vec{p}(t)$ in the original model of Berry $\displaystyle\hat{H}=-\mu\vec{\sigma}\cdot\vec{p}(t),$ (43) is known in the analyses of the anomalous Hall effect [12] and the spin Hall effect [13]. This effective Hamiltonian of the two-level crossing for the generic $\vec{p}(t)$ (Bloch momentum) has been analyzed in detail in [6], and it has been shown that Berry’s phase for (43) is determined by the time derivative of the azimuthal angle $\dot{\varphi}(t)$ in both adiabatic (monopole) and nonadiabatic (dipole) limits, and thus our parameterization (2) describes an essential aspect of the topology of Berry’s phase. To be more precise, Berry’s phase becomes trivial, namely, either $0$ or $2\pi$, in the model (43) for the nonadiabatic limit [6] $\displaystyle(\mu|\vec{p}|)T/\hbar\ll 1$ (44) which corresponds to $\eta\ll 1$ in terms of the parameter in (12). This estimate is consistent with the analysis of the exactly solvable model for $\eta\rightarrow 0$ for which $\Theta\rightarrow 0$ in Fig. 1, and thus $\Omega_{\pm}/\hbar\rightarrow 0\ {\rm or}\ 2\pi$ in (15). Our present analysis implies that one may be able to observe experimentally the effective movement of the monopole in momentum space, as is represented by the magnetic field in (28) (by replacing $B\rightarrow|\vec{p}|$), away from the precise adiabatic limit in the model (43).
Also, it will be interesting to examine the implications of the present analysis on the very basic issue of whether Berry’s phase associated with (43) deforms the principle of quantum mechanics by giving rise to anomalous canonical commutators [16]. In conclusion, the analysis of an exactly solvable model has revealed that the topology change in Berry’s phase is mathematically visualized as the geometrical movement or the lensing of Dirac’s monopole in the parameter space. This will help better understand both Berry’s phase and Dirac’s monopole. The present work is supported in part by JSPS KAKENHI (Grant No.18K03633). ## References * [1] H. Longuet-Higgins, Proc. Roy. Soc. A344, 147 (1975). * [2] M. V. Berry, Proc. R. Soc. Ser. A392, 45 (1984). * [3] B. Simon, Phys. Rev. Lett. 51 (1983) 2167. * [4] P.A.M. Dirac, Proc. Roy. Soc. London 133, 60 (1931). * [5] K. Fujikawa, Int. J. Mod. Phys. A21 (2006) 5333; K. Fujikawa, Ann. of Phys. 322, 1500 (2007). Earlier works on the basic aspects of Berry’s phase are quoted in these references. * [6] S. Deguchi and K. Fujikawa, Phys. Rev. D100, 025002 (2019). * [7] T. T. Wu and C. N. Yang, Phys. Rev. D12, 3845 (1975). * [8] Y. Aharonov and D. Bohm, Phys. Rev. 115, 485 (1959). * [9] A. Tonomura, N. Osakabe, T. Matsuda, T. Kawasaki, J. Endo, S. Yano, and H. Yamada, Phys. Rev. Lett. 56, 792 (1986). * [10] S. Deguchi and K. Fujikawa, Phys. Lett. B802, 135210 (2020). * [11] A recent review of the magnetic monopole is found in N. E. Mavromatos and V. A. Mitsou, “Magnetic monopoles revisited: Models and searches at colliders and in the Cosmos”, Int. J. Mod. Phys. A 35 (2020) 2030012. * [12] T. Jungwirth, Q. Niu, A.H. MacDonald, Phys. Rev. Lett. 88 (2002) 207208. Z. Fang, et al., Science 302 (2003) 92, and references therein. * [13] J.E. Hirsch, Phys. Rev. Lett. 83 (1999) 1834; S.-F. Zhang, Phys. Rev. Lett. 85 (2000) 393; S. Murakami, N. Nagaosa, S.-C. Zhang, Science 301 (2003) 1348. * [14] M. Born and V. Fock, Zeitschrift f. Phys.
51 (1928) 165. * [15] T. Kato, J. Phys. Soc. Jpn. 5, 435 (1950). * [16] S. Deguchi and K. Fujikawa, Ann. of Phys. 416 (2020) 168160.
# On $-1$-differential uniformity of ternary APN power functions ††thanks: H. Yan is with School of Mathematics, Southwest Jiaotong University, Chengdu, 610031, China; and is also with Guangxi Key Laboratory of Cryptography and Information Security, Guilin, 541000, China (email: hdyan@swjtu.edu.cn). Haode Yan ###### Abstract Very recently, a new concept called multiplicative differential and the corresponding $c$-differential uniformity were introduced by Ellingsen et al. A function $F(x)$ from the finite field ${\mathrm{GF}}(p^{n})$ to itself is said to have $c$-differential uniformity $\delta$, or equivalently, to be differentially $(c,\delta)$-uniform, when the maximum number of solutions $x\in{\mathrm{GF}}(p^{n})$ of $F(x+a)-cF(x)=b$, where $a,b,c\in{\mathrm{GF}}(p^{n})$ and $c\neq 1$ if $a=0$, is equal to $\delta$. The objective of this paper is to study the $-1$-differential uniformity of ternary APN power functions $F(x)=x^{d}$ over ${\mathrm{GF}}(3^{n})$. We obtain ternary power functions with low $-1$-differential uniformity, and some of them are almost perfect $-1$-nonlinear. ###### Keywords: $c$-differential; differential uniformity; almost perfect $c$-nonlinearity ###### MSC: 11T06; 94A60 ## 1 Introduction Differential cryptanalysis (BS ; BS93 ) is one of the most fundamental cryptanalytic approaches targeting symmetric-key primitives. Such a cryptanalysis approach has attracted a lot of attention because it was the first statistical attack proposed for breaking iterated block ciphers BS . The security of cryptographic functions regarding differential attacks has been widely studied over the past 30 years. This type of security is quantified by the so-called _differential uniformity_ of the substitution box (S-box) used in the cipher NK . In BCJW , a new type of differential was proposed. The authors utilized modular multiplication as a primitive operation, which extends the scope of differential cryptanalysis.
It is thus necessary to start a theoretical analysis of the (output) multiplicative differential. Motivated by practical differential cryptanalysis, Ellingsen et al. recently coined a new concept called _multiplicative differential_ and the corresponding $c$-differential uniformity (EFRST ). ###### Definition 1 Let ${\mathrm{GF}}(p^{n})$ denote the finite field with $p^{n}$ elements, where $p$ is a prime number and $n$ is a positive integer. For a function $F$ from ${\mathrm{GF}}(p^{n})$ to itself, $a,c\in{\mathrm{GF}}(p^{n})$, the (multiplicative) $c$-derivative of $F$ with respect to $a$ is defined as ${}_{c}D_{a}F(x)=F(x+a)-cF(x),~{}\mathrm{for}~{}\mathrm{all}~{}x.$ For $b\in{\mathrm{GF}}(p^{n})$, let ${}_{c}\Delta_{F}(a,b)=\\#\\{x\in{\mathrm{GF}}(p^{n}):F(x+a)-cF(x)=b\\}$. We call ${}_{c}\Delta_{F}=\mathrm{max}\\{_{c}\Delta_{F}(a,b):a,b\in{\mathrm{GF}}(p^{n}),\mathrm{and}~{}a\neq 0~{}\mathrm{if}~{}c=1\\}$ the $c$-differential uniformity of $F$. If ${}_{c}\Delta_{F}=\delta$, then we say $F$ is differentially $(c,\delta)$-uniform. If the $c$-differential uniformity of $F$ equals $1$, then $F$ is called a perfect $c$-nonlinear (P$c$N) function. P$c$N functions over odd characteristic finite fields are also called $c$-planar functions. If the $c$-differential uniformity of $F$ is $2$, then $F$ is called an almost perfect $c$-nonlinear (AP$c$N) function. It is easy to see that, for $c=1$ and $a\neq 0$, the $c$-differential uniformity becomes the usual differential uniformity, and the P$c$N and AP$c$N functions become perfect nonlinear (PN) and almost perfect nonlinear (APN) functions, respectively. These functions are of great significance in both theory and practical applications. For even characteristic finite fields, APN functions have the lowest differential uniformity. Known APN functions over even characteristic finite fields were presented in BD ; D1 ; D2 ; D3 ; G ; JW ; K ; N .
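Definition 1 is straightforward to evaluate by brute force in small fields. The following Python sketch (illustrative only; it uses the prime field ${\mathrm{GF}}(7)$, i.e. $n=1$, as a toy setting) recovers the first two rows of Table 1, and also the classical fact that $x^{2}$ is planar for $c=1$ in odd characteristic:

```python
p = 7   # a small prime field GF(7); n = 1, chosen only for illustration

def c_delta(F, c):
    """c-differential uniformity of F over GF(p), straight from Definition 1."""
    worst = 0
    for a in range(p):
        if c == 1 and a == 0:
            continue                       # the only case excluded by the definition
        for b in range(p):
            sols = sum((F((x + a) % p) - c*F(x)) % p == b for x in range(p))
            worst = max(worst, sols)
    return worst

square = lambda x: (x*x) % p
inverse = lambda x: pow(x, p - 2, p)       # x^(p-2): the inverse function, with 0 -> 0

# first row of Table 1: x^2 is differentially (c,2)-uniform for every c != 1 ...
assert all(c_delta(square, c) == 2 for c in range(p) if c != 1)
# ... while for c = 1 it is perfect nonlinear (planar), the classical case
assert c_delta(square, 1) == 1
# second row of Table 1: the inverse function is PcN for c = 0
assert c_delta(inverse, 0) == 1
```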
For the known results on PN and APN functions over odd characteristic finite fields, the readers are referred to CM ; DMMPW ; DO ; DY ; HRS ; HS ; L ; ZW10 ; ZW11 . Because of the strong resistance to differential attacks and the low implementation cost in a hardware environment, power functions $F(x)=x^{d}$ (i.e., monomials) with low differential uniformity can serve as good candidates for the design of S-boxes. Moreover, power functions with low differential uniformity may also introduce some undesirable weaknesses within a cipher BCC ; JaKn ; CoPi ; CaVi . For instance, a differentially $4$-uniform power function, which is extended affine (EA) equivalent to the inverse function $x\mapsto x^{2^{n}-2}$ over ${\mathrm{GF}}(2^{n})$ with even $n$, is employed in the AES (Advanced Encryption Standard). A natural question to ask is whether power functions have good $c$-differential properties. In EFRST , the authors studied the $c$-differential uniformity of the well-known inverse function $F(x)=x^{p^{n}-2}$ over ${\mathrm{GF}}(p^{n})$ for both even and odd primes $p$. It was shown that $F(x)$ is P$c$N when $c=0$, $F(x)$ is AP$c$N under some conditions on $c$, and $F(x)$ is differentially $(c,3)$-uniform otherwise. This result illustrates that P$c$N functions can exist for $p=2$. For P$c$N functions $x^{\frac{3^{k}+1}{2}}$ over ${\mathrm{GF}}(3^{n})$ and $c=-1$, a necessary and sufficient condition was presented in EFRST . In BT , it was shown that for odd $p$, $n$ and $c=-1$, $x^{\frac{p^{2}+1}{2}}$ over ${\mathrm{GF}}(p^{n})$ and $x^{p^{2}-p+1}$ over ${\mathrm{GF}}(p^{3})$ are P$c$N functions. In YMZ , it was proved that the Gold function over even characteristic finite fields is differentially $(c,3)$-uniform for $c\neq 1$. Some P$c$N and AP$c$N functions were also obtained. Moreover, for the $c$-differential uniformity of power functions $F(x)=x^{d}$ over ${\mathrm{GF}}(p^{n})$ with $c\neq 1$, the following lemma was introduced.
###### Lemma 1 (YMZ ) Let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(p^{n})$. Then ${}_{c}\Delta_{F}=\mathrm{max}\big{\\{}~{}\\{{{}_{c}\Delta_{F}}(1,b):b\in{\mathrm{GF}}(p^{n})\\}\cup\\{\gcd(d,p^{n}-1)\\}~{}\big{\\}}.$

Table 1: Power functions $F(x)=x^{d}$ over ${\mathrm{GF}}(p^{n})$ with low $c$-differential uniformity

$p$ | $d$ | condition | ${}_{c}\Delta_{F}$ | References
---|---|---|---|---
any | $2$ | $c\neq 1$ | 2 | EFRST
any | $p^{n}-2$ | $c=0$ | $1$ | EFRST
2 | $2^{n}-2$ | $c\neq 0$, $\mathrm{Tr_{n}}(c)=\mathrm{Tr_{n}}(c^{-1})=1$ | $2$ | EFRST
2 | $2^{n}-2$ | $c\neq 0$, $\mathrm{Tr_{n}}(c)=0$ or $\mathrm{Tr_{n}}(c^{-1})=0$ | $3$ | EFRST
odd | $p^{n}-2$ | $c=4$, $c=4^{-1}$ or $\chi(c^{2}-4c)=\chi(1-4c)=-1$ | $2$ | EFRST
odd | $p^{n}-2$ | $c\neq 0,4,4^{-1}$, $\chi(c^{2}-4c)=1$ or $\chi(1-4c)=1$ | $3$ | EFRST
3 | $({3^{k}+1})/{2}$ | $c=-1$, $n/\gcd(k,n)=1$ | 1 | EFRST
odd | $({p^{2}+1})/{2}$ | $c=-1$, $n$ odd | 1 | BT
odd | $p^{2}-p+1$ | $c=-1$, $n=3$ | 1 | BT
2 | $2^{k}+1$ | $c\neq 1$, $\gcd(k,n)=1$ | 3 | YMZ
odd | $p^{k}+1$ | $1\neq c\in{\mathrm{GF}}(p)$, $\gcd(k,n)=1$ | 2 | YMZ
odd | $(p^{k}+1)/2$ | $c=-1$, $k/\gcd(k,n)$ is even | $1$ | YMZ
3 | $(3^{k}+1)/2$ | $c=-1$, $k$ odd, $\gcd(k,n)=1$ | $2$ | YMZ
any | $(2p^{n}-1)/3$ | $c\neq 1$, $p^{n}\equiv 2(\mathrm{mod}~{}3)$ | $\leq 3$ | YMZ
odd | $(p^{n}+1)/2$ | $c\neq\pm 1$ | $\leq 4$ | YMZ
odd | $(p^{n}+1)/2$ | $c\neq\pm 1$, $\chi(\frac{1-c}{1+c})=1$, $p^{n}\equiv 1(\mathrm{mod}~{}4)$ | $\leq 2$ | YMZ
$>3$ | $(p^{n}+3)/2$ | $c=-1$, $p^{n}\equiv 3(\mathrm{mod}~{}4)$ | $\leq 3$ | YMZ
$>3$ | $(p^{n}+3)/2$ | $c=-1$, $p^{n}\equiv 1(\mathrm{mod}~{}4)$ | $\leq 4$ | YMZ
odd | $(p^{n}-3)/2$ | $c=-1$ | $\leq 4$ | YMZ

* $\mathrm{Tr_{n}}(\cdot)$ denotes the absolute trace mapping from ${\mathrm{GF}}(2^{n})$ to ${\mathrm{GF}}(2)$.
* $\chi(\cdot)$ denotes the quadratic multiplicative character on ${\mathrm{GF}}(p^{n})^{*}$.
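Lemma 1 can be sanity-checked exhaustively in a small prime field. The sketch below (an illustration, not part of the paper; it uses ${\mathrm{GF}}(7)$, so $n=1$) compares the full definition of ${}_{c}\Delta_{F}$ with the reduction to $a=1$ plus $\gcd(d,p^{n}-1)$:

```python
import math

p = 7   # a toy prime field; the lemma is stated for GF(p^n), here n = 1

def full_uniformity(d, c):
    """cDelta_F for F(x) = x^d over GF(p), computed directly from Definition 1."""
    worst = 0
    for a in range(p):
        if c == 1 and a == 0:
            continue
        for b in range(p):
            worst = max(worst, sum((pow((x + a) % p, d, p) - c*pow(x, d, p)) % p == b
                                   for x in range(p)))
    return worst

def lemma_uniformity(d, c):
    """The right-hand side of Lemma 1: a = 1 suffices, together with gcd(d, p^n - 1)."""
    best = max(sum((pow((x + 1) % p, d, p) - c*pow(x, d, p)) % p == b for x in range(p))
               for b in range(p))
    return max(best, math.gcd(d, p - 1))

# the two sides agree for every power exponent d and every c != 1
for d in range(1, p - 1):
    for c in range(p):
        if c != 1:
            assert full_uniformity(d, c) == lemma_uniformity(d, c)
```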
Table 2: Results in this paper

$p$ | $d$ | condition | ${}_{c}\Delta_{F}$
---|---|---|---
3 | $(3^{\frac{n+1}{2}}-1)/2$ | $c=-1$, $n\equiv 1(\mathrm{mod}~{}4)$ | $\leq 2$
3 | $(3^{\frac{n+1}{2}}-1)/2+(3^{n}-1)/2$ | $c=-1$, $n\equiv 3(\mathrm{mod}~{}4)$ | $\leq 2$
3 | $(3^{n+1}-1)/8$ | $c=-1$, $n\equiv 1(\mathrm{mod}~{}4)$ | $\leq 2$
3 | $(3^{n+1}-1)/8+(3^{n}-1)/2$ | $c=-1$, $n\equiv 3(\mathrm{mod}~{}4)$ | $\leq 2$
3 | $(3^{\frac{n+1}{2}}-1)/2$ | $c=-1$, $n\equiv 3(\mathrm{mod}~{}4)$ | $\leq 4$
3 | $(3^{\frac{n+1}{2}}-1)/2+(3^{n}-1)/2$ | $c=-1$, $n\equiv 1(\mathrm{mod}~{}4)$ | $\leq 4$
3 | $(3^{n+1}-1)/8$ | $c=-1$, $n\equiv 3(\mathrm{mod}~{}4)$ | $\leq 4$
3 | $(3^{n+1}-1)/8+(3^{n}-1)/2$ | $c=-1$, $n\equiv 1(\mathrm{mod}~{}4)$ | $\leq 4$
3 | ${(3^{\frac{n+1}{4}}-1)(3^{\frac{n+1}{2}}+1)}$ | $c=-1$, $n\equiv 3(\mathrm{mod}~{}4)$ | $\leq 4$
3 | $(3^{n}+1)/4+(3^{n}-1)/2$ | $c=-1$, $n$ odd | $\leq 4$

As summarized in Table 1, $c=-1$ is a very special case, and sometimes the $-1$-differential uniformity is lower than the $c$-differential uniformity for other values of $c\in{\mathrm{GF}}(p^{n})$. Perfect $-1$-nonlinear functions are also called quasi-planar functions BT . In this paper, we study the $-1$-differential uniformity of $F(x)$ when $F(x)$ is a ternary APN power function. Lemma 1 indicates that to determine the $-1$-differential uniformity of power functions, the following $-1$-differential equation needs to be studied: $\Delta(x)=(x+1)^{d}+x^{d}=b.$ Let $\delta(b)=\\#\\{x\in{\mathrm{GF}}(3^{n})~{}|~{}\Delta(x)=b\\}$. The maximum value of $\\{\delta(b)~{}|~{}b\in{\mathrm{GF}}(3^{n})\\}$ plays an important role in studying the $-1$-differential uniformity of $F(x)$. In the rest of this paper, we consider several classes of ternary APN power functions. It turns out that they have low $-1$-differential uniformity, and some of them are almost perfect $-1$-nonlinear. The results in this paper are shown in Table 2.
## 2 $-1$-differential uniformity of $x^{\frac{3^{\frac{n+1}{2}}-1}{2}}$ over ${\mathrm{GF}}(3^{n})$ In this section, let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n\equiv 1(\mathrm{mod}~{}4)$ and $d=\frac{1}{2}(3^{\frac{n+1}{2}}-1)$. It was proved in DMMPW that $F(x)$ is an APN function. We consider the $-1$-differential uniformity of $F(x)$ as follows. ###### Theorem 2.1 Let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n\equiv 1(\mathrm{mod}~{}4)$ and $d=\frac{1}{2}(3^{\frac{n+1}{2}}-1)$. We have ${}_{-1}\Delta_{F}\leq 2$. ###### Proof Let $m=\frac{n+1}{2}$. Note that $2(3^{m}+1)d-3(3^{n}-1)=2$ and that $d$ is odd when $n\equiv 1(\mathrm{mod}~{}4)$, so that $\gcd(d,3^{n}-1)=1$, i.e., $F(x)$ is a permutation on ${\mathrm{GF}}(3^{n})$. For $b\in{\mathrm{GF}}(3^{n})$, we consider the $-1$-differential equation $\Delta(x)=(x+1)^{d}+x^{d}=b.$ (1) Let $u_{x+1}=(x+1)^{d}$ and $u_{x}=x^{d}$. For $x\neq 0$, note that $u^{3^{m}+1}_{x}=x^{\frac{3^{n+1}-1}{2}}=\chi(x)x.$ (2) Herein and hereafter, let $\chi$ denote the quadratic multiplicative character on ${\mathrm{GF}}(3^{n})^{*}$. Let $x\in{\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$ be a solution of (1) for fixed $b\in{\mathrm{GF}}(3^{n})$; then $u_{x},u_{x+1}\neq 0$. Taking the $(3^{m}+1)$th power on both sides of $u_{x+1}=-u_{x}+b$, we have $bu^{3^{m}}_{x}+b^{3^{m}}u_{x}=-\chi(x+1)(x+1)+\chi(x)x+b^{3^{m}+1}.$ (3) For equation (3), we distinguish the following four cases. Case I. $\chi(x+1)=\chi(x)=1$. In this case, we have $bu^{3^{m}}_{x}+b^{3^{m}}u_{x}=b^{3^{m}+1}-1$ from (3). Since the mapping $u_{x}\mapsto bu^{3^{m}}_{x}+b^{3^{m}}u_{x}$ is bijective on ${\mathrm{GF}}(3^{n})$, we can find a unique $u_{x}$. Because $F(x)$ is a permutation, a unique $x$ can be found from this $u_{x}$. This case has at most one solution. Case II. $\chi(x+1)=\chi(x)=-1$. This case has at most one solution. The discussion is similar to that of Case I and we omit it. Case III. $\chi(x+1)=1,\chi(x)=-1$.
From (3), we have $bu^{3^{m}}_{x}+b^{3^{m}}u_{x}=x+b^{3^{m}+1}-1$ in this case, and then we have $(u_{x}+b)^{3^{m}+1}=-b^{3^{m}+1}-1$ (4) by (2). If there are two distinct solutions in this case, namely $x_{3}$ and $x^{\prime}_{3}$, then $u_{x_{3}}$ and $u_{x^{\prime}_{3}}$ satisfy (4) with $\chi(x_{3}+1)=\chi(x^{\prime}_{3}+1)=1$ and $\chi(x_{3})=\chi(x^{\prime}_{3})=-1$. Consequently $(u_{x_{3}}+b)^{3^{m}+1}=(u_{x^{\prime}_{3}}+b)^{3^{m}+1}$ can be obtained from (4). Then we have $u_{x_{3}}+b=-(u_{x^{\prime}_{3}}+b)$ since $\gcd(3^{m}+1,3^{n}-1)=2$ and $x_{3}\neq x^{\prime}_{3}$, which leads to $u_{x_{3}}=b-u_{x^{\prime}_{3}}=u_{x^{\prime}_{3}+1}$. However, the above conclusion contradicts $\chi(u_{x_{3}})=\chi(x_{3})=-1$ and $\chi(u_{x^{\prime}_{3}+1})=\chi(x^{\prime}_{3}+1)=1$. We conclude that Case III has at most one solution. Case IV. $\chi(x+1)=-1,\chi(x)=1$. In this case we have $bu^{3^{m}}_{x}+b^{3^{m}}u_{x}=-x+b^{3^{m}+1}+1$ from (3), and then $(u_{x}+b)^{3^{m}+1}=-b^{3^{m}+1}+1$ (5) by (2). Similar to Case III, we can obtain that this case has at most one solution. Next we will prove that for fixed $b$, (1) cannot have solutions in Cases I and II simultaneously. Otherwise, suppose that $x_{1}$ and $x_{2}$ are solutions of (1) in Case I and Case II with $\chi(x_{1}+1)=\chi(x_{1})=1$ and $\chi(x_{2}+1)=\chi(x_{2})=-1$ respectively. Then we have $bu^{3^{m}}_{x_{1}}+b^{3^{m}}u_{x_{1}}=b^{3^{m}+1}-1$ and $bu^{3^{m}}_{x_{2}}+b^{3^{m}}u_{x_{2}}=b^{3^{m}+1}+1$, where $u_{x_{1}}$ and $u_{x_{2}}$ are as defined before. Now we have $b(u_{x_{1}}+u_{x_{2}})^{3^{m}}+b^{3^{m}}(u_{x_{1}}+u_{x_{2}})=-b^{3^{m}+1}$ and consequently $u_{x_{1}}+u_{x_{2}}=b$. From (1), we can obtain $u_{x_{2}}=u_{x_{1}+1}$, which contradicts $\chi(u_{x_{2}})=\chi(x_{2})=-1$ and $\chi(u_{x_{1}+1})=\chi(x_{1}+1)=1$. Therefore, we conclude that (1) has at most one solution in Cases I and II for fixed $b\in{\mathrm{GF}}(3^{n})$.
Then we prove that for fixed $b$, (1) cannot have solutions in Cases III and IV simultaneously. Otherwise, suppose that $x_{3}$ and $x_{4}$ are solutions of (1) in Case III and Case IV with $\chi(x_{3}+1)=1$, $\chi(x_{3})=-1$ and $\chi(x_{4}+1)=-1,\chi(x_{4})=1$ respectively. Then $x_{3}$ and $x_{4}$ satisfy (4) and (5) respectively. By the sum of (4) and (5), we have $(u_{x_{3}}+b)^{3^{m}+1}+(u_{x_{4}}+b)^{3^{m}+1}=b^{3^{m}+1}.$ (6) Taking the $3^{m}$th power on both sides of (6), we have $(u_{x_{3}}+b)^{3^{m}+3}+(u_{x_{4}}+b)^{3^{m}+3}=b^{3^{m}+3}$ (7) since $3^{m}(3^{m}+1)=3^{n+1}+3^{m}=3^{m}+3+3(3^{n}-1)$. From (6) and (7), we have $(u_{x_{3}}+b)^{3^{m}+3}+(u_{x_{4}}+b)^{3^{m}+3}=b^{2}(u_{x_{3}}+b)^{3^{m}+1}+b^{2}(u_{x_{4}}+b)^{3^{m}+1}$, that is $-(u_{x_{3}}+b)^{3^{m}+1}u_{x_{3}}(b-u_{x_{3}})=(u_{x_{4}}+b)^{3^{m}+1}u_{x_{4}}(b-u_{x_{4}}).$ (8) Note that $b-u_{x_{3}}=u_{x_{3}+1}$ and $b-u_{x_{4}}=u_{{x_{4}}+1}$, so the left-hand side of (8) is a square element and the right-hand side of (8) is a nonsquare element. Then $u_{x_{3}}+b=u_{x_{4}}+b=0$ can be obtained, i.e., $u_{x_{3}}=u_{x_{4}}$, which contradicts $\chi(u_{x_{3}})=\chi(x_{3})=-1$ and $\chi(u_{x_{4}})=\chi(x_{4})=1$. We conclude that (1) has at most one solution in Cases III and IV for fixed $b\in{\mathrm{GF}}(3^{n})$. From the above discussions, (1) has at most two solutions in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$. One can easily calculate that $\Delta(0)=1$ and $\Delta(-1)=-1$. For $b=1$ and $b=-1$, it can be verified that $\Delta(x)=1$ and $\Delta(x)=-1$ have no solutions in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$, i.e., $\delta(1)=\delta(-1)=1$. Then we obtain $\delta(b)\leq 2$ for any $b$, which leads to ${}_{-1}\Delta_{F}\leq 2$ by Lemma 1 and $\gcd(d,3^{n}-1)=1$. For $n\equiv 3(\mathrm{mod}~{}4)$, we can also get power functions with low $-1$-differential uniformity.
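The bound of Theorem 2.1 can be confirmed by exhaustive search in the smallest nontrivial case $n=5$ (so $d=13$); the same code also covers the bound $\leq 4$ proved below for $n\equiv 3(\mathrm{mod}~4)$, with $n=3$ and $d=4$. This is an illustrative brute-force check, not part of the paper; the field ${\mathrm{GF}}(3^{n})$ is built from the first irreducible polynomial found by trial division, which is a free choice since all such constructions are isomorphic:

```python
import itertools
from collections import Counter

P = 3

def poly_mod(f, g):
    """Remainder of f modulo the monic polynomial g over GF(3); lists, low degree first."""
    f = list(f)
    while f and f[-1] == 0:
        f.pop()
    while len(f) >= len(g):
        c, shift = f[-1], len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - c*gi) % P
        while f and f[-1] == 0:
            f.pop()
    return f

def irreducible(n):
    """First monic irreducible degree-n polynomial over GF(3), by trial division."""
    for coeffs in itertools.product(range(P), repeat=n):
        g = list(coeffs) + [1]
        if all(poly_mod(g, list(h) + [1])          # nonzero remainder for every divisor
               for k in range(1, n//2 + 1)
               for h in itertools.product(range(P), repeat=k)):
            return g

def neg_one_uniformity(n, d):
    """_{-1}Delta_F for F(x) = x^d over GF(3^n), straight from Definition 1 with c = -1."""
    mod = irreducible(n)
    elems = list(itertools.product(range(P), repeat=n))
    add = lambda a, b: tuple((x + y) % P for x, y in zip(a, b))
    def mul(a, b):
        prod = [0]*(2*n - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                prod[i + j] = (prod[i + j] + x*y) % P
        r = poly_mod(prod, mod)
        return tuple(r) + (0,)*(n - len(r))
    def power(a, e):
        r = (1,) + (0,)*(n - 1)
        while e:
            if e & 1:
                r = mul(r, a)
            a, e = mul(a, a), e >> 1
        return r
    F = {x: power(x, d) for x in elems}
    # c = -1 over GF(3): F(x+a) - c F(x) = F(x+a) + F(x); all a are allowed
    return max(max(Counter(add(F[add(x, a)], F[x]) for x in elems).values())
               for a in elems)

u5 = neg_one_uniformity(5, (3**3 - 1)//2)   # Theorem 2.1: n = 5, d = 13, expect <= 2
u3 = neg_one_uniformity(3, (3**2 - 1)//2)   # Theorem 2.2: n = 3, d = 4, expect <= 4
assert u5 <= 2 and u3 <= 4
```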
###### Theorem 2.2 Let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n\equiv 3(\mathrm{mod}~{}4)$ and $d=\frac{1}{2}(3^{\frac{n+1}{2}}-1)$. We have ${}_{-1}\Delta_{F}\leq 4$. ###### Proof The proof is similar to that of Theorem 2.1; we give a sketch here. In this case, $\gcd(d,3^{n}-1)=2$. With the notation used before, equations (1), (2) and (3) also hold. Let $x\in{\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$ be a solution of (3) for fixed $b\in{\mathrm{GF}}(3^{n})$; four cases are considered as follows. Case I. $\chi(x+1)=\chi(x)=1$. In this case, we have $bu^{3^{m}}_{x}+b^{3^{m}}u_{x}=b^{3^{m}+1}-1$ from (3). Since the mapping $u_{x}\mapsto bu^{3^{m}}_{x}+b^{3^{m}}u_{x}$ is bijective on ${\mathrm{GF}}(3^{n})$, we can find a unique $u_{x}$. Then a unique $x$ can be found for $\chi(x)=1$. This case has at most one solution. Case II. $\chi(x+1)=\chi(x)=-1$. Similar to Case I, this case has at most one solution. Case III. $\chi(x+1)=1,\chi(x)=-1$. We have $bu^{3^{m}}_{x}+b^{3^{m}}u_{x}=x+b^{3^{m}+1}-1$ from (3), and then we have $(u_{x}+b)^{3^{m}+1}=-b^{3^{m}+1}-1$ by (2). We can obtain two $u_{x}$’s since $\gcd(d,3^{n}-1)=2$, and consequently two $x$’s for the given $\chi(x)$. This case has at most two solutions. Case IV. $\chi(x+1)=-1,\chi(x)=1$. Similar to Case III, this case has at most two solutions. One can similarly prove that for fixed $b$, (1) cannot have solutions in Cases III and IV simultaneously. By the above discussions, we know that (1) has at most four solutions in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$. We have $\Delta(0)=\Delta(-1)=1$. For $b=1$, one can easily verify that $\Delta(x)=1$ has no solution in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$, i.e., $\delta(1)=2$. Then we obtain $\delta(b)\leq 4$ for any $b$, which leads to ${}_{-1}\Delta_{F}\leq 4$ by Lemma 1 and $\gcd(d,3^{n}-1)=2$. For $d^{\prime}=d+\frac{3^{n}-1}{2}$, we have the following corollary. 
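Theorem 2.2 admits the same kind of exhaustive check in its smallest case $n=3$, where $d=(3^{2}-1)/2=4$ and $\gcd(d,3^{3}-1)=2$. The sketch below is our own illustration (helper names assumed, not from the paper): it realises ${\mathrm{GF}}(27)$ as ${\mathrm{GF}}(3)[x]/(x^{3}+2x+1)$, whose modulus has no roots in ${\mathrm{GF}}(3)$ and, being cubic, is therefore irreducible.

```python
from itertools import product
from math import gcd

N = 3                                # n = 3 satisfies n ≡ 3 (mod 4)
D = (3 ** ((N + 1) // 2) - 1) // 2   # d = (3^{(n+1)/2} - 1)/2 = 4

# GF(27) = GF(3)[x]/(x^3 + 2x + 1); reduction rule: x^3 = x + 2.
ONE = (1,) + (0,) * (N - 1)

def gadd(a, b):
    return tuple((u + v) % 3 for u, v in zip(a, b))

def gmul(a, b):
    t = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            t[i + j] = (t[i + j] + ai * bj) % 3
    for k in range(2 * N - 2, N - 1, -1):   # fold x^k = x^{k-N} * (x + 2)
        c, t[k] = t[k], 0
        t[k - N + 1] = (t[k - N + 1] + c) % 3
        t[k - N] = (t[k - N] + 2 * c) % 3
    return tuple(t[:N])

def gpow(a, e):
    r = ONE
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

# Solution counts of (x+1)^d + x^d = b for every b in GF(27).
counts = {}
for x in product(range(3), repeat=N):
    b = gadd(gpow(gadd(x, ONE), D), gpow(x, D))
    counts[b] = counts.get(b, 0) + 1

delta_F = max(counts.values())
print("gcd(d, 3^n - 1) =", gcd(D, 3 ** N - 1))  # = 2: F is not a permutation
print("max delta(b) =", delta_F)                # Theorem 2.2 predicts <= 4
```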
###### Corollary 1 Let $F^{\prime}(x)=x^{d^{\prime}}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n$ is an odd integer and $d^{\prime}=\frac{3^{\frac{n+1}{2}}-1}{2}+\frac{3^{n}-1}{2}$. We have ${}_{-1}\Delta_{F^{\prime}}\leq 2$ when $n\equiv 3(\mathrm{mod}~{}4)$ and ${}_{-1}\Delta_{F^{\prime}}\leq 4$ when $n\equiv 1(\mathrm{mod}~{}4)$. ###### Proof It can be calculated that $\gcd(d^{\prime},3^{n}-1)\leq 2$. First we consider $n\equiv 3(\mathrm{mod}~{}4)$, i.e., $3n\equiv 1(\mathrm{mod}~{}4)$. By Theorem 2.1, $(x+1)^{\frac{3^{\frac{3n+1}{2}}-1}{2}}+x^{\frac{3^{\frac{3n+1}{2}}-1}{2}}=b$ (9) has at most two solutions in ${\mathrm{GF}}(3^{3n})$ for any $b\in{\mathrm{GF}}(3^{3n})$. Since $(3^{n}-1)|\frac{3^{\frac{3n+1}{2}}-1}{2}-d^{\prime}$, equation (9) becomes $(x+1)^{d^{\prime}}+x^{d^{\prime}}=b$ for any $x,b\in{\mathrm{GF}}(3^{n})$. Therefore, this equation has at most two solutions in ${\mathrm{GF}}(3^{n})$, i.e., ${}_{-1}\Delta_{F^{\prime}}\leq 2$. The other case can be proved similarly and we omit the details. ## 3 $-1$-differential uniformity of $x^{\frac{3^{n+1}-1}{8}}$ over ${\mathrm{GF}}(3^{n})$ In this section, let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n\equiv 1(\mathrm{mod}~{}4)$ and $d=\frac{3^{n+1}-1}{8}$. It was proved in DMMPW that $F(x)$ is an APN function. We consider the $-1$-differential uniformity of $F(x)$ as follows. ###### Theorem 3.1 Let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n\equiv 1(\mathrm{mod}~{}4)$ and $d=\frac{3^{n+1}-1}{8}$. We have ${}_{-1}\Delta_{F}\leq 2$. ###### Proof Since $\gcd(d,3^{n}-1)=1$, $F(x)$ is a permutation on ${\mathrm{GF}}(3^{n})$. For $b\in{\mathrm{GF}}(3^{n})$, we consider the $-1$-differential equation $\Delta(x)=(x+1)^{d}+x^{d}=b.$ (10) Let $u_{x+1}=(x+1)^{d}$ and $u_{x}=x^{d}$. 
For $x\neq 0$, note that $u^{4}_{x}=x^{4d}=\chi(x)x.$ (11) Let $x\in{\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$ be a solution of (10) for fixed $b\in{\mathrm{GF}}(3^{n})$; then $u_{x},u_{x+1}\neq 0$. Taking the $4$th power on both sides of $u_{x+1}=-u_{x}+b$, we have $bu^{3}_{x}+b^{3}u_{x}=-\chi(x+1)(x+1)+\chi(x)x+b^{4}.$ (12) For (12), we distinguish the following four cases. Case I. $\chi(x+1)=\chi(x)=1$. In this case, we have $bu^{3}_{x}+b^{3}u_{x}=b^{4}-1$ from (12). Since the mapping $u_{x}\mapsto bu^{3}_{x}+b^{3}u_{x}$ is bijective on ${\mathrm{GF}}(3^{n})$, we can find a unique $u_{x}$. Because $F(x)$ is a permutation, a unique $x$ can be found from this $u_{x}$. This case has at most one solution. Case II. $\chi(x+1)=\chi(x)=-1$. This case has at most one solution. The discussion is similar to that of Case I and we omit it. Case III. $\chi(x+1)=1,\chi(x)=-1$. From (12), we have $bu^{3}_{x}+b^{3}u_{x}=x+b^{4}-1$ in this case, and then we have $(u_{x}+b)^{4}=-b^{4}-1$ (13) by (11). If there are two distinct solutions in this case, namely $x_{3}$ and $x^{\prime}_{3}$, then $u_{x_{3}}$ and $u_{x^{\prime}_{3}}$ satisfy (13) with $\chi(x_{3}+1)=\chi(x^{\prime}_{3}+1)=1$ and $\chi(x_{3})=\chi(x^{\prime}_{3})=-1$. Consequently $(u_{x_{3}}+b)^{4}=(u_{x^{\prime}_{3}}+b)^{4}$ can be obtained from (13). Then we have $u_{x_{3}}+b=-(u_{x^{\prime}_{3}}+b)$ since $x_{3}\neq x^{\prime}_{3}$, which leads to $u_{x_{3}}=b-u_{x^{\prime}_{3}}=u_{x^{\prime}_{3}+1}$. However, this contradicts $\chi(u_{x_{3}})=\chi(x_{3})=-1$ and $\chi(u_{x^{\prime}_{3}+1})=\chi(x^{\prime}_{3}+1)=1$. We conclude that Case III has at most one solution. Case IV. $\chi(x+1)=-1,\chi(x)=1$. In this case we have $bu^{3}_{x}+b^{3}u_{x}=-x+b^{4}+1$ from (12), and then $(u_{x}+b)^{4}=-b^{4}+1$ (14) by (11). Similar to Case III, this case has at most one solution. Next we prove that for fixed $b$, (10) cannot have solutions in Cases I and II simultaneously. 
Otherwise, suppose that $x_{1}$ and $x_{2}$ are solutions of (10) in Case I and Case II with $\chi(x_{1}+1)=\chi(x_{1})=1$ and $\chi(x_{2}+1)=\chi(x_{2})=-1$ respectively. Then we have $bu^{3}_{x_{1}}+b^{3}u_{x_{1}}=b^{4}-1$ and $bu^{3}_{x_{2}}+b^{3}u_{x_{2}}=b^{4}+1$, where $u_{x_{1}}$ and $u_{x_{2}}$ are defined as before. Now we have $b(u_{x_{1}}+u_{x_{2}})^{3}+b^{3}(u_{x_{1}}+u_{x_{2}})=-b^{4}$, and consequently $u_{x_{1}}+u_{x_{2}}=b$. From (10), we can obtain $u_{x_{2}}=u_{x_{1}+1}$, which contradicts $\chi(u_{x_{2}})=\chi(x_{2})=-1$ and $\chi(u_{x_{1}+1})=\chi(x_{1}+1)=1$. Therefore, we conclude that (10) has at most one solution in Cases I and II for fixed $b\in{\mathrm{GF}}(3^{n})$. Then we prove that for fixed $b$, (10) cannot have solutions in Cases III and IV simultaneously. Otherwise, suppose that $x_{3}$ and $x_{4}$ are solutions of (10) in Case III and Case IV with $\chi(x_{3}+1)=1,\chi(x_{3})=-1$ and $\chi(x_{4}+1)=-1,\chi(x_{4})=1$ respectively. Then $x_{3}$ and $x_{4}$ satisfy (13) and (14) respectively. Adding (13) and (14), we have $(u_{x_{3}}+b)^{4}+(u_{x_{4}}+b)^{4}=b^{4}$, that is, $(u^{2}_{x_{3}}-bu_{x_{3}}+u^{2}_{x_{4}}-bu_{x_{4}}+b^{2})^{2}=-u_{x_{3}}(b-u_{x_{3}})u_{x_{4}}(b-u_{x_{4}}).$ (15) Since $b-u_{x_{3}}=u_{x_{3}+1}$ and $b-u_{x_{4}}=u_{{x_{4}}+1}$, the right-hand side of (15) is a nonzero nonsquare element, which is a contradiction. We conclude that (10) has at most one solution in Cases III and IV for fixed $b\in{\mathrm{GF}}(3^{n})$. From the above discussions, (10) has at most two solutions in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$. One can easily calculate that $\Delta(0)=1$ and $\Delta(-1)=-1$. For $b=1$ and $b=-1$, it can be verified that neither $\Delta(x)=1$ nor $\Delta(x)=-1$ has a solution in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$, i.e., $\delta(1)=\delta(-1)=1$. Then we obtain $\delta(b)\leq 2$ for any $b$, which leads to ${}_{-1}\Delta_{F}\leq 2$ by Lemma 1 and $\gcd(d,3^{n}-1)=1$. 
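As with Theorem 2.1, the bound of Theorem 3.1 can be verified exhaustively in a small case; the sketch below is our own illustration and not part of the proof. For $n=5$ we have $d=(3^{6}-1)/8=91$, and ${\mathrm{GF}}(3^{5})$ is again realised as ${\mathrm{GF}}(3)[x]/(x^{5}+2x+1)$ (irreducible: no roots and no irreducible quadratic factors over ${\mathrm{GF}}(3)$); all helper names are assumed.

```python
from itertools import product
from math import gcd

N = 5                          # n = 5 satisfies n ≡ 1 (mod 4)
D = (3 ** (N + 1) - 1) // 8    # d = (3^{n+1} - 1)/8 = 91

# GF(3^5) = GF(3)[x]/(x^5 + 2x + 1); reduction rule: x^5 = x + 2.
ONE = (1,) + (0,) * (N - 1)

def gadd(a, b):
    return tuple((u + v) % 3 for u, v in zip(a, b))

def gmul(a, b):
    t = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            t[i + j] = (t[i + j] + ai * bj) % 3
    for k in range(2 * N - 2, N - 1, -1):   # fold x^k = x^{k-N} * (x + 2)
        c, t[k] = t[k], 0
        t[k - N + 1] = (t[k - N + 1] + c) % 3
        t[k - N] = (t[k - N] + 2 * c) % 3
    return tuple(t[:N])

def gpow(a, e):
    r = ONE
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

# Solution counts of the -1-differential equation (x+1)^d + x^d = b.
counts = {}
for x in product(range(3), repeat=N):
    b = gadd(gpow(gadd(x, ONE), D), gpow(x, D))
    counts[b] = counts.get(b, 0) + 1

delta_F = max(counts.values())
print("gcd(d, 3^n - 1) =", gcd(D, 3 ** N - 1))  # = 1, so F is a permutation
print("max delta(b) =", delta_F)                # Theorem 3.1 predicts <= 2
```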
For $n\equiv 3(\mathrm{mod}~{}4)$ and $d^{\prime}=d+\frac{3^{n}-1}{2}$, we list the following results without proof. ###### Theorem 3.2 Let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n\equiv 3(\mathrm{mod}~{}4)$ and $d=\frac{3^{n+1}-1}{8}$. We have ${}_{-1}\Delta_{F}\leq 4$. ###### Corollary 2 Let $F^{\prime}(x)=x^{d^{\prime}}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n$ is an odd integer and $d^{\prime}=\frac{3^{n+1}-1}{8}+\frac{3^{n}-1}{2}$. We have ${}_{-1}\Delta_{F^{\prime}}\leq 2$ when $n\equiv 3(\mathrm{mod}~{}4)$ and ${}_{-1}\Delta_{F^{\prime}}\leq 4$ when $n\equiv 1(\mathrm{mod}~{}4)$. ## 4 $-1$-differential uniformity of $x^{\frac{3^{n}+1}{4}+\frac{3^{n}-1}{2}}$ over ${\mathrm{GF}}(3^{n})$ It was proved in HRS that the power function $x^{d}$ is an APN function over ${\mathrm{GF}}(3^{n})$, where $n$ is an odd integer and $d=\frac{3^{n}+1}{4}+\frac{3^{n}-1}{2}$. The $-1$-differential uniformity is considered as follows. ###### Theorem 4.1 Let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n$ is odd and $d=\frac{3^{n}+1}{4}+\frac{3^{n}-1}{2}$. Then ${}_{-1}\Delta_{F}\leq 4$. ###### Proof One can easily obtain that $d$ is even and $\gcd(d,3^{n}-1)=2$. Note that $\chi(-1)=-1$ since $n$ is odd. We consider the $-1$-differential equation $\Delta(x)=(x+1)^{d}+x^{d}=b.$ (16) When $b=0$, (16) has no solution. For fixed $b\in{\mathrm{GF}}(3^{n})^{*}$, let $x\in{\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$ be a solution of (16); we distinguish the following four cases. Case I. $\chi(x+1)=\chi(x)=1$. Let $x+1=\alpha^{2}$ and $x=\beta^{2}$ for $\alpha,\beta\in{\mathrm{GF}}(3^{n})^{*}$, then $\alpha^{2}-\beta^{2}=1$. We can obtain $\chi(\alpha)\alpha+\chi(\beta)\beta=b$ from (16). We have $\beta^{2}+1=\alpha^{2}=(\chi(\alpha)\alpha)^{2}=(b-\chi(\beta)\beta)^{2}=b^{2}+b\chi(\beta)\beta+\beta^{2}.$ One can obtain $\chi(\beta)\beta=b^{-1}-b$ and $x=\beta^{2}=(\chi(\beta)\beta)^{2}=(b^{-1}-b)^{2}$. 
This case has at most one solution. Case II. $\chi(x+1)=\chi(x)=-1$. Let $x+1=-\alpha^{2}$ and $x=-\beta^{2}$ for $\alpha,\beta\in{\mathrm{GF}}(3^{n})^{*}$, then $\alpha^{2}-\beta^{2}=-1$. Similar to Case I, we can obtain $x=-(b+b^{-1})^{2}$. This case has at most one solution. Case III. $\chi(x+1)=1,\chi(x)=-1$. Let $x+1=\alpha^{2}$ and $x=-\beta^{2}$ for $\alpha,\beta\in{\mathrm{GF}}(3^{n})^{*}$, then $\alpha^{2}+\beta^{2}=1$. We can obtain $\chi(\alpha)\alpha+\chi(\beta)\beta=b$ from (16). Let $\gamma=\chi(\beta)\beta$, which is a square element in ${\mathrm{GF}}(3^{n})$. Then $\gamma^{2}=\beta^{2}$ and $\gamma$ satisfies $(b-\gamma)^{2}+\gamma^{2}=1$, i.e. $\gamma^{2}-b\gamma+1-b^{2}=0,$ (17) which is a quadratic equation in $\gamma$. Equation (17) has at most two solutions, so we can obtain at most two $x$’s since $x=-\gamma^{2}$. This case has at most two solutions. Case IV. $\chi(x+1)=-1,\chi(x)=1$. Let $x+1=-\alpha^{2}$ and $x=\beta^{2}$ for $\alpha,\beta\in{\mathrm{GF}}(3^{n})^{*}$, then $\alpha^{2}+\beta^{2}=-1$. We can obtain $\chi(\alpha)\alpha+\chi(\beta)\beta=b$ from (16). Let $\gamma=\chi(\beta)\beta$, which is a square element in ${\mathrm{GF}}(3^{n})$. Then $\gamma^{2}=\beta^{2}$ and $\gamma$ satisfies $(b-\gamma)^{2}+\gamma^{2}=-1$, i.e. $\gamma^{2}-b\gamma-1-b^{2}=0,$ (18) which is a quadratic equation in $\gamma$. Equation (18) has at most two solutions, so we can obtain at most two $x$’s since $x=\gamma^{2}$. This case has at most two solutions. Note that $x$ is a solution of (16) if and only if $-x-1$ is a solution of (16). This implies that when $x\neq 1$ (which corresponds to $b\neq-1$), if Case III has a solution then it has exactly two solutions (the same holds for Case IV). Next we prove that for fixed $b\in{\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$, (16) cannot have solutions in Cases III and IV simultaneously. 
Suppose on the contrary that $x_{1}$, $x_{2}$ are distinct solutions of (16) for some given $b$ in Case III, and $x_{3}$, $x_{4}$ are distinct solutions of (16) for the same $b$ in Case IV. By the discussions above, each $x_{i}$, $1\leq i\leq 4$, corresponds to a square element $\gamma_{i}$. Moreover, $\gamma_{1}$, $\gamma_{2}$ are the two solutions of (17), and $\gamma_{3}$, $\gamma_{4}$ are the two solutions of (18). They satisfy $\gamma_{1}+\gamma_{2}=\gamma_{3}+\gamma_{4}=b$, $\gamma_{1}\gamma_{2}=1-b^{2}$ and $\gamma_{3}\gamma_{4}=-1-b^{2}$. We can obtain $\gamma^{2}_{1}+\gamma^{2}_{2}+\gamma^{2}_{3}+\gamma^{2}_{4}=(\gamma_{1}+\gamma_{2})^{2}+\gamma_{1}\gamma_{2}+(\gamma_{3}+\gamma_{4})^{2}+\gamma_{3}\gamma_{4}=0$. Since $\gamma_{4}\neq 0$, let $\delta_{i}=\gamma_{i}/\gamma_{4}$, $1\leq i\leq 3$; then $\delta_{1},\delta_{2}$ and $\delta_{3}$ are square elements, and they satisfy $\delta_{1}+\delta_{2}-\delta_{3}-1=0$ and $\delta^{2}_{1}+\delta^{2}_{2}+\delta^{2}_{3}+1=0$. Substituting $\delta_{3}=\delta_{1}+\delta_{2}-1$, we obtain the following quadratic equation in $\delta_{1}$: $\delta^{2}_{1}-(\delta_{2}-1)\delta_{1}+(\delta^{2}_{2}-\delta_{2}+1)=0.$ The discriminant of this quadratic equation is $\Delta=(\delta_{2}-1)^{2}-(\delta^{2}_{2}-\delta_{2}+1)=-\delta_{2}$, which is a nonzero nonsquare element in ${\mathrm{GF}}(3^{n})$. This contradicts $\delta_{1}\in{\mathrm{GF}}(3^{n})$. Thus we have proved that for $b\in{\mathrm{GF}}(3^{n})^{*}$, (16) has at most $4$ solutions in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$. One can easily check that $\Delta(0)=\Delta(-1)=1$. For $b=1$, it can be verified that $\Delta(x)=1$ has no solution in the four cases, i.e., $\Delta(x)=1$ has no solution in ${\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$, so $\delta(1)=2$. For $b=-1$, it can be verified that $x=1$ is the only solution of (16), i.e. $\delta(-1)=1$. Together with the discussions above, this leads to ${}_{-1}\Delta_{F}\leq 4$. 
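Theorem 4.1 can likewise be spot-checked exhaustively; the sketch below is our own illustration, not part of the proof. For $n=3$ we get $d=(3^{3}+1)/4+(3^{3}-1)/2=7+13=20$, which is even with $\gcd(d,26)=2$ as the proof requires, and ${\mathrm{GF}}(27)$ is realised as ${\mathrm{GF}}(3)[x]/(x^{3}+2x+1)$ (a cubic with no roots in ${\mathrm{GF}}(3)$, hence irreducible); helper names are assumed. The check also confirms the observation that $b=0$ admits no solution.

```python
from itertools import product
from math import gcd

N = 3                                        # any odd n; smallest case n = 3
D = (3 ** N + 1) // 4 + (3 ** N - 1) // 2    # d = (3^n + 1)/4 + (3^n - 1)/2 = 20

# GF(27) = GF(3)[x]/(x^3 + 2x + 1); reduction rule: x^3 = x + 2.
ZERO = (0,) * N
ONE = (1,) + (0,) * (N - 1)

def gadd(a, b):
    return tuple((u + v) % 3 for u, v in zip(a, b))

def gmul(a, b):
    t = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            t[i + j] = (t[i + j] + ai * bj) % 3
    for k in range(2 * N - 2, N - 1, -1):   # fold x^k = x^{k-N} * (x + 2)
        c, t[k] = t[k], 0
        t[k - N + 1] = (t[k - N + 1] + c) % 3
        t[k - N] = (t[k - N] + 2 * c) % 3
    return tuple(t[:N])

def gpow(a, e):
    r = ONE
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

# Solution counts of (x+1)^d + x^d = b over GF(27).
counts = {}
for x in product(range(3), repeat=N):
    b = gadd(gpow(gadd(x, ONE), D), gpow(x, D))
    counts[b] = counts.get(b, 0) + 1

delta_F = max(counts.values())
print("d even:", D % 2 == 0, " gcd(d, 3^n - 1) =", gcd(D, 3 ** N - 1))  # True, 2
print("solutions for b = 0:", counts.get(ZERO, 0))                      # 0
print("max delta(b) =", delta_F)             # Theorem 4.1 predicts <= 4
```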
## 5 $-1$-differential uniformity of $x^{(3^{\frac{n+1}{4}}-1)(3^{\frac{n+1}{2}}+1)}$ over ${\mathrm{GF}}(3^{n})$ In ZW11, the authors studied the power function $F(x)=x^{d}$ over ${\mathrm{GF}}(3^{n})$, where $n\equiv 3(\mathrm{mod}~{}4)$ and $d=(3^{\frac{n+1}{4}}-1)(3^{\frac{n+1}{2}}+1)$. It was shown that $x^{d}$ is an APN function. In what follows, we discuss the $-1$-differential uniformity of $F(x)$. ###### Theorem 5.1 Let $F(x)=x^{d}$ be a power function over ${\mathrm{GF}}(3^{n})$, where $n\equiv 3(\mathrm{mod}~{}4)$, $d=(3^{m}-1)(3^{2m}+1)$ and $m=\frac{n+1}{4}$. Then ${}_{-1}\Delta_{F}\leq 4$. ###### Proof Note that $d$ is an even number and $\gcd(d,3^{n}-1)=2$. For $b\in{\mathrm{GF}}(3^{n})$, we consider the equation $\Delta(x)=(x+1)^{d}+x^{d}=b.$ (19) It is easy to see that $\Delta(0)=\Delta(-1)=1$, and (19) has no solution when $b=0$. Let $x\in{\mathrm{GF}}(3^{n})\setminus\\{0,-1\\}$ be a solution of (19) for some given $b\in{\mathrm{GF}}(3^{n})^{*}$. Denote $u_{x+1}=(x+1)^{d}$ and $u_{x}=x^{d}$. Since $\frac{3^{m}+1}{2}\cdot d=\frac{3^{n+1}-1}{2}\equiv 1+\frac{3^{n}-1}{2}(\mathrm{mod}~{}3^{n}-1)$, we have ${u_{x}}^{\frac{3^{m}+1}{2}}=\chi(x)x$ and ${u_{x+1}}^{\frac{3^{m}+1}{2}}=\chi(x+1)(x+1)$. One can easily see that if $u_{x}$ and $\chi(x)$ are given, $x$ can be determined uniquely. Let $\xi\in{\mathrm{GF}}(3^{2n})\setminus\\{0,\pm 1\\}$ such that $\frac{u_{x}}{b}=\xi+\frac{1}{\xi}-1=\frac{(\xi+1)^{2}}{\xi}$; then we have $\frac{u_{x+1}}{b}=-\xi-\frac{1}{\xi}-1=-\frac{(\xi-1)^{2}}{\xi}$ by (19). Moreover, we can obtain $\chi(x)x={u_{x}}^{\frac{3^{m}+1}{2}}=(\frac{b(\xi+1)^{2}}{\xi})^{\frac{3^{m}+1}{2}}$ and $\frac{b(\xi+1)^{2}}{\xi}=x^{d}=(\frac{b(\xi+1)^{2}}{\xi})^{\frac{3^{m}+1}{2}\cdot d}$. Similarly, $-\frac{b(\xi-1)^{2}}{\xi}=(x+1)^{d}=(-\frac{b(\xi-1)^{2}}{\xi})^{\frac{3^{m}+1}{2}\cdot d}$. Then $\xi$ satisfies $-(\frac{\xi+1}{\xi-1})^{2}=(\frac{\xi+1}{\xi-1})^{(3^{m}+1)d}$, i.e., $(\frac{\xi+1}{\xi-1})^{3(3^{n}-1)}=-1$. 
Together with $\xi\in{\mathrm{GF}}(3^{2n})$, this leads to $\xi^{3^{n}+1}=1$. In the following, we discuss equation (19) in two cases. Case 1. $\chi(x+1)=\chi(x)$. In this case, ${u_{x+1}}^{\frac{3^{m}+1}{2}}-{u_{x}}^{\frac{3^{m}+1}{2}}=\chi(x+1)(x+1)-\chi(x)x=\chi(x)$. That is, $(-\frac{b(\xi-1)^{2}}{\xi})^{\frac{3^{m}+1}{2}}-(\frac{b(\xi+1)^{2}}{\xi})^{\frac{3^{m}+1}{2}}=\chi(x).$ We deduce the following equation $(-1)^{\frac{3^{m}+1}{2}}(\xi-1)^{3^{m}+1}-(\xi+1)^{3^{m}+1}=\chi(x)b^{-\frac{3^{m}+1}{2}}\xi^{\frac{3^{m}+1}{2}}.$ (20) Two subcases are considered as follows. Subcase 1.1. $\frac{3^{m}+1}{2}$ is even, i.e., $m$ is odd. Then (20) becomes $\xi^{3^{m}}+\xi=\chi(x)b^{-\frac{3^{m}+1}{2}}\xi^{\frac{3^{m}+1}{2}}$. Let $t=\xi^{\frac{3^{m}-1}{2}}$; then $t_{1,2}=-\chi(x)b^{-\frac{3^{m}+1}{2}}\pm\sqrt{b^{-(3^{m}+1)}-1}$. Since $m$ is odd, we have $\gcd(m,2n)=1$ and $\gcd(\frac{3^{m}-1}{2},3^{2n}-1)=1$. We can obtain a unique $\xi_{1}$ from $\xi^{\frac{3^{m}-1}{2}}=t_{1}$ since $\gcd(\frac{3^{m}-1}{2},3^{2n}-1)=1$. For $t_{2}=t^{-1}_{1}$, we can also obtain a unique $\xi_{2}$ such that $\xi_{2}^{\frac{3^{m}-1}{2}}=t_{2}$. Note that $\xi_{2}=\xi^{-1}_{1}$ and they give the same $u_{x}$. Subcase 1.2. $\frac{3^{m}+1}{2}$ is odd, i.e., $m$ is even. Then (20) becomes $\xi^{3^{m}+1}+1=\chi(x)b^{-\frac{3^{m}+1}{2}}\xi^{\frac{3^{m}+1}{2}}$. Let $t=\xi^{\frac{3^{m}+1}{2}}$; then $t_{1,2}=-\chi(x)b^{-\frac{3^{m}+1}{2}}\pm\sqrt{b^{-(3^{m}+1)}-1}$. Since $m$ is even, we have $\gcd(m,2n)=2$ and $\gcd(\frac{3^{m}+1}{2},3^{2n}-1)=1$. We can obtain a unique $\xi_{1}$ from $\xi^{\frac{3^{m}+1}{2}}=t_{1}$ since $\gcd(\frac{3^{m}+1}{2},3^{2n}-1)=1$. For $t_{2}=t^{-1}_{1}$, we can also obtain a unique $\xi_{2}$ such that $\xi_{2}^{\frac{3^{m}+1}{2}}=t_{2}$. Note that $\xi_{2}=\xi^{-1}_{1}$ and they give the same $u_{x}$. 
By the discussions in the above two subcases, we conclude that one can obtain a unique $u_{x}$ from given $b$ and $\chi(x)$, and then we find at most one solution of (20) for each $\chi(x)$. This case has at most $2$ solutions. Case 2. $\chi(x+1)=-\chi(x)$. In this case, ${u_{x+1}}^{\frac{3^{m}+1}{2}}+{u_{x}}^{\frac{3^{m}+1}{2}}=\chi(x+1)(x+1)+\chi(x)x=-\chi(x)$. That is, $(-\frac{b(\xi-1)^{2}}{\xi})^{\frac{3^{m}+1}{2}}+(\frac{b(\xi+1)^{2}}{\xi})^{\frac{3^{m}+1}{2}}=-\chi(x).$ We deduce the following equation $(-1)^{\frac{3^{m}+1}{2}}(\xi-1)^{3^{m}+1}+(\xi+1)^{3^{m}+1}=-\chi(x)b^{-\frac{3^{m}+1}{2}}\xi^{\frac{3^{m}+1}{2}}.$ (21) We have the following two subcases. Subcase 2.1. $\frac{3^{m}+1}{2}$ is even. Then (21) becomes $\xi^{3^{m}+1}+1=\chi(x)b^{-\frac{3^{m}+1}{2}}\xi^{\frac{3^{m}+1}{2}}.$ Let $t=\xi^{\frac{3^{m}+1}{2}}$; if $\chi(x)=1$, then $t_{1,2}=-b^{-\frac{3^{m}+1}{2}}\pm\sqrt{b^{-(3^{m}+1)}-1}$. Note that $t_{2}=t^{-1}_{1}$ and they give the same $u_{x}$’s, so we only consider $t_{1}$. Since $\frac{3^{m}+1}{2}$ is even, $m$ is odd, and then $\gcd(\frac{3^{m}+1}{2},3^{n}+1)=2$. We can obtain two solutions, namely $\pm\xi_{1}$, from $\xi^{\frac{3^{m}+1}{2}}=t_{1}$ since $\gcd(\frac{3^{m}+1}{2},3^{n}+1)=2$. If $\chi(x)=-1$, then $t_{3,4}=b^{-\frac{3^{m}+1}{2}}\pm\sqrt{b^{-(3^{m}+1)}-1}$. We only consider $t_{4}$, which satisfies $t_{4}=-t_{1}$. Similarly, we obtain another two $\xi$’s, namely $\delta\xi_{1}$, $-\delta\xi_{1}$, where $\delta\in{\mathrm{GF}}(3^{2n})$ with $\delta^{2}=-1$. In this subcase, we get four distinct $\xi$’s and each of them corresponds to a possible solution of (19). Subcase 2.2. $\frac{3^{m}+1}{2}$ is odd. Then (21) becomes $\xi^{3^{m}}+\xi=\chi(x)b^{-\frac{3^{m}+1}{2}}\xi^{\frac{3^{m}+1}{2}}.$ Let $t=\xi^{\frac{3^{m}-1}{2}}$; if $\chi(x)=1$, then $t_{1,2}=-b^{-\frac{3^{m}+1}{2}}\pm\sqrt{b^{-(3^{m}+1)}-1}$. We only consider the equation $\xi^{\frac{3^{m}-1}{2}}=t_{1}$ since the solutions of the other equation correspond to the same $u_{x}$’s. 
Since $\frac{3^{m}+1}{2}$ is odd, $m$ is even, and $\gcd(\frac{3^{m}-1}{2},3^{n}+1)=4$. We obtain four solutions from $\xi^{\frac{3^{m}-1}{2}}=t_{1}$, namely $\xi_{2},\delta\xi_{2},-\xi_{2},-\delta\xi_{2}$, where $\delta\in{\mathrm{GF}}(3^{2n})$ with $\delta^{2}=-1$. If $\chi(x)=-1$, then $t_{3,4}=b^{-\frac{3^{m}+1}{2}}\pm\sqrt{b^{-(3^{m}+1)}-1}$. We only consider $t_{4}$, which satisfies $t_{4}=-t_{1}$. If $\xi^{\prime}_{2}$ is a solution of $\xi^{\frac{3^{m}-1}{2}}=t_{4}=-t_{1}$, then $(\frac{\xi^{\prime}_{2}}{\xi_{2}})^{\frac{3^{m}-1}{2}}=-1$. We obtain that $(\frac{\xi^{\prime}_{2}}{\xi_{2}})^{4}=1$ from $\gcd(3^{m}-1,3^{n}+1)=4$, i.e., $\xi^{\prime}_{2}=\delta^{i}\xi_{2}$, $0\leq i\leq 3$. That means $\xi^{\frac{3^{m}-1}{2}}=t_{4}$ cannot contribute new $\xi$’s. We also obtain four distinct $\xi$’s in this subcase. Recall that $u_{x}=b(\xi+\frac{1}{\xi}-1)$. In the following we prove that $\xi$ and $\delta\xi$ cannot contribute solutions of (19) simultaneously, where $\delta$ is as defined before. More precisely, let $u_{x_{1}}=b(\xi+\frac{1}{\xi}-1)$ and $u_{x_{2}}=b(\delta\xi+\frac{1}{\delta\xi}-1)$; then we have $(u_{x_{1}}+b)^{2}+(u_{x_{2}}+b)^{2}=b^{2}((\xi+\frac{1}{\xi})^{2}+(\delta\xi+\frac{1}{\delta\xi})^{2})=b^{2}.$ The above identity can be rewritten as $(u_{x_{1}}+u_{x_{2}}+b)^{2}=-u_{x_{1}}u_{x_{2}},$ which is a contradiction. That means each of Subcases 2.1 and 2.2 has at most two solutions. By the above discussions, we conclude that ${}_{-1}\Delta_{F}\leq 4$. The proof is finished. ## 6 Concluding remarks In this paper, we studied the $-1$-differential uniformity of ternary APN power functions. We obtained several classes of power functions with low $-1$-differential uniformity, some of which are almost perfect $-1$-nonlinear. Note that in this paper we only give upper bounds on the $-1$-differential uniformity of some power functions; it would be worthwhile to study whether equality holds. 
In this paper, we only studied $c=-1$; it would also be interesting to study the $c$-differential properties for $c\in{\mathrm{GF}}(3^{n})$ with $c\neq\pm 1$. Our future work is to find more power functions with low $c$-differential uniformity. This topic is wide open. Power functions with low usual differential uniformity are useful in sequences, coding theory, and combinatorial designs. It is worth finding applications of power functions with low $c$-differential uniformity in such areas. ## 7 Acknowledgments H. Yan’s research was supported by the National Natural Science Foundation of China Grant (No. 11801468) and the Guangxi Key Laboratory of Cryptography and Information Security Grant (No. GCIS201814). ## References * (1) Beth, T., Ding, C.: On almost perfect nonlinear permutations, in Advances in Cryptology – EUROCRYPT 93 (Lecture Notes in Computer Science). New York: Springer-Verlag, vol. 765, pp. 65-76, 1994. * (2) Blondeau, C., Canteaut, A., Charpin, P.: Differential properties of power functions, Int. J. Inf. Coding Theory, 1(2), 149-170 (2010) * (3) Borisov, N., Chew, M., Johnson, R., Wagner, D.: Multiplicative differentials, in: Daemen, J., Rijmen, V. (eds) Fast Software Encryption. FSE 2002. Lecture Notes in Computer Science, vol. 2365. Springer, Berlin, Heidelberg, 2002. * (4) Biham, E., Shamir, A.: Differential cryptanalysis of DES-like cryptosystems, in: Menezes, A., Vanstone, S. A. (eds) Advances in Cryptology – CRYPTO '90, Lecture Notes in Computer Science, vol. 537, pp. 2-21. Springer, 1990. * (5) Biham, E., Shamir, A.: Differential Cryptanalysis of the Data Encryption Standard. Springer, 1993. * (6) Bartoli, D., Timpanella, M.: On a generalization of planar functions, J. Algebr. 
Comb., DOI: https://doi.org/10.1007/s10801-019-00899-2 (2019) * (7) Canteaut, A., Videau, M.: Degree of composition of highly nonlinear functions and applications to higher order differential cryptanalysis, in Advances in Cryptology – EUROCRYPT 2002, Springer, Berlin, 2002, vol. 2332, Lecture Notes in Comput. Sci., pp. 518-533. * (8) Coulter, R. S., Matthews, R. W.: Planar functions and planes of Lenz-Barlotti class II, Des. Codes Cryptogr. 10, 167-184 (1997) * (9) Courtois, N., Pieprzyk, J.: Cryptanalysis of block ciphers with overdefined systems of equations, in Advances in Cryptology – ASIACRYPT 2002, Springer, Berlin, 2002, vol. 2501, Lecture Notes in Comput. Sci., pp. 267-287. * (10) Dembowski, P., Ostrom, T. G.: Planes of order $n$ with collineation groups of order $n^{2}$, Math. Z. 193, 239-258 (1968) * (11) Ding, C., Yuan, J.: A new family of skew Paley-Hadamard difference sets, J. Comb. Theory Ser. A 113, 1526-1535 (2006) * (12) Dobbertin, H.: Almost perfect nonlinear power functions on ${\mathrm{GF}}(2^{n})$: a new case for $n$ divisible by 5, in Finite Fields and Applications, Augsburg, Germany, 1999, pp. 113-121. * (13) Dobbertin, H.: Almost perfect nonlinear power functions on ${\mathrm{GF}}(2^{n})$: the Welch case, IEEE Trans. Inf. Theory 45(4), 1271-1275 (1999) * (14) Dobbertin, H.: Almost perfect nonlinear power functions on ${\mathrm{GF}}(2^{n})$: the Niho case, Inform. Comput. 151(1-2), 57-72 (1999) * (15) Dobbertin, H., Mills, D., Muller, E. N., Pott, A., Willems, W.: APN functions in odd characteristic, Discr. Math. 267, 95-112 (2003) * (16) Ellingsen, P., Felke, P., Riera, C., St$\mathrm{\check{a}}$nic$\mathrm{\check{a}}$, P., Tkachenko, A.: C-differentials, multiplicative uniformity and (almost) perfect c-nonlinearity, IEEE Trans. Inform. 
Theory 66(9), 5781-5789 (2020) * (17) Gold, R.: Maximal recursive sequences with 3-valued recursive cross-correlation functions, IEEE Trans. Inf. Theory 14(1), 154-156 (1968) * (18) Helleseth, T., Rong, C., Sandberg, D.: New families of almost perfect nonlinear power mappings, IEEE Trans. Inform. Theory 45(2), 475-485 (1999) * (19) Helleseth, T., Sandberg, D.: Some power mappings with low differential uniformity, Appl. Algebra Engrg. Commun. Comput. 8, 363-37 (1997) * (20) Jakobsen, T., Knudsen, L. R.: The interpolation attack on block ciphers, in Fast Software Encryption – FSE 1997, Springer, Berlin, 1997, vol. 1267, Lecture Notes in Comput. Sci., pp. 28-40. * (21) Janwa, H., Wilson, R. M.: Hyperplane sections of Fermat varieties in P3 in char. 2 and some applications to cyclic codes, in Applied Algebra, Algebraic Algorithms and Error-Correcting Codes (Lecture Notes in Computer Science). Berlin, Germany: Springer-Verlag, vol. 673, pp. 180-194, 1993. * (22) Kasami, T.: The weight enumerators for several classes of subcodes of the 2nd order binary Reed-Muller codes, Inform. Contr. 18, 369-394 (1971) * (23) Leducq, E.: New families of APN functions in characteristic 3 or 5, in: Arithmetic, Geometry, Cryptography and Coding Theory, Contemporary Mathematics, vol. 574, pp. 115-123, AMS 2012. * (24) Nyberg, K.: Differentially uniform mappings for cryptography, in Advances in Cryptology – EUROCRYPT 93 (Lecture Notes in Computer Science). New York: Springer-Verlag, 1994, vol. 765, pp. 55-64. * (25) Nyberg, K., Knudsen, L.: Provable security against differential cryptanalysis, in Proc. Advances in Cryptology – CRYPTO 92, 1993, vol. 740, Lecture Notes in Computer Science, pp. 566-574. * (26) Yan, H., Mesnager, S., Zhou, Z.: Power functions over finite fields with low $c$-differential uniformity, arXiv:2003.13019v3. * (27) Zha, Z., Wang, X.: Power functions with low uniformity on odd characteristic finite fields, Sci. China Math. 
53(8), 1931-1940 (2010) * (28) Zha, Z., Wang, X.: Almost perfect nonlinear power functions in odd characteristic, IEEE Trans. Inf. Theory 57(7), 4826-4832 (2011)
# ResPer: Computationally Modelling Resisting Strategies in Persuasive Conversations Ritam Dutt∗,1, Sayan Sinha∗,2, Rishabh Joshi1, Surya Shekhar Chakraborty3, Meredith Riggs1, Xinru Yan1, Haogang Bao1, Carolyn Penstein Rosé1 1Carnegie Mellon University, 2Indian Institute of Technology Kharagpur, 3Zendrive Inc ###### Abstract Modelling persuasion strategies as predictors of task outcome has several real-world applications and has received considerable attention from the computational linguistics community. However, previous research has failed to account for the resisting strategies employed by an individual to foil such persuasion attempts. Grounded in prior literature in cognitive and social psychology, we propose a generalised framework for identifying resisting strategies in persuasive conversations. We instantiate our framework on two distinct datasets comprising persuasion and negotiation conversations. We also leverage a hierarchical sequence-labelling neural architecture to infer the aforementioned resisting strategies automatically. Our experiments reveal the asymmetry of power roles in non-collaborative goal-directed conversations and the benefits of incorporating resisting strategies when predicting the final conversation outcome. We also investigate the role of different resisting strategies on the conversation outcome and glean insights that corroborate past findings. We make the code and the dataset of this work publicly available at https://github.com/americast/resper. ## 1 Introduction ††footnotetext: ∗ denotes equal contribution Persuasion is pervasive in everyday human interactions. People are often exposed to scenarios that challenge their existing beliefs and opinions, such as medical advice, election campaigns, and advertisements Knobloch-Westerwick and Meng (2009); Bartels (2006); Speck and Elliott (1997). 
Of late, huge strides have been taken by the Computational Linguistics community to advance research in persuasion. Some seminal works include identifying persuasive strategies in text Yang et al. (2019) and conversations Wang et al. (2019), investigating the interplay of language and prior beliefs on successful persuasion attempts Durmus and Cardie (2018); Longpre et al. (2019), and generating persuasive dialogues Munigala et al. (2018). However, a relatively unexplored domain is the investigation of the resisting strategies employed to foil persuasion attempts. As succinctly observed by Miller (1965): “In our daily lives we are struck not by the ease of producing attitude change but by the rarity of it.” Several works in cognitive and social psychology Fransen et al. (2015a); Zuwerink Jacks and Cameron (2003) have put forward different resisting strategies and the motivations for the same. However, so far, there has not been any attempt to operationalise these strategies from a computational standpoint. We attempt to bridge this gap in our work. We propose a generalised framework, grounded in cognitive psychology literature, for automatically identifying resisting strategies in persuasion-oriented discussions. We instantiate our framework on two publicly available datasets comprising persuasion and negotiation conversations to create an annotated corpus of resisting strategies. Furthermore, we design a hierarchical sequence modelling framework that leverages the conversational context to identify resisting strategies automatically. Our model significantly outperforms several neural baselines, achieving competitive macro-F1 scores of 0.56 and 0.66 on the persuasion and negotiation datasets, respectively. We refer to our model as ResPer, which is not only an acronym for Resisting Persuasion, but also a play on the word ESPer: a person with extrasensory abilities. 
The name is apt since we observe that incorporating such resisting strategies can provide additional insight into the outcome of the conversation. In fact, our experiments reveal that the resisting strategies are better predictors of conversation success for the persuasion dataset than the strategies employed by the persuader. We also observe that the buyer’s strategies are more influential in negotiating the final price. Our findings highlight the asymmetric nature of power roles arising in non-collaborative dialogue scenarios and form the motivation for this work. ## 2 Related Works The use of persuasion strategies to change a person’s view or achieve a desired outcome finds several real-world applications, such as in election campaigns Knobloch-Westerwick and Meng (2009); Bartels (2006), advertisements Speck and Elliott (1997), and mediation Cooley (1993). Consequently, several seminal NLP studies have focused on operationalising and automatically identifying persuasion strategies Wang et al. (2019), propaganda techniques Da San Martino et al. (2019), and negotiation tactics Zhou et al. (2019), as well as the impact of such strategies on the outcome of a task Yang et al. (2019); He et al. (2018); Joshi et al. (2021). However, there is still a dearth of research from a computational linguistics perspective investigating resisting strategies to foil persuasion. Resisting strategies have been widely discussed in the literature from various aspects such as marketing Heath et al. (2017), cognitive psychology Zuwerink Jacks and Cameron (2003), and political communication Fransen et al. (2015b). Some notable works include the identification and motivation of commonly-used resisting strategies Fransen et al. (2015a); Zuwerink Jacks and Cameron (2003), the use of psychological metrics to predict resistance San José (2019); Ahluwalia (2000), and the design of a framework to measure the impact of resistance Tormala (2008). 
However, these works have mostly relied on qualitative methods, unlike ours, which adopts a data-driven approach. We propose a generalised framework to characterise resisting strategies and employ state-of-the-art neural models to infer them automatically; our work can thus be considered complementary to past research. Within the NLP literature, the work closest to ours concerns argumentation, be it in essays Carlile et al. (2018), debates Cano-Basave and He (2016), or discussions on social media platforms Al-Khatib et al. (2018); Zeng et al. (2020). Such works have revolved mostly around analysing argumentative strategies and their effect on others. Recently, Al Khatib et al. (2020) demonstrated that incorporating the personality traits of the resistor was influential in determining their resistance to persuasion. Such an observation acknowledges the power vested in an individual to resist change to their existing beliefs. Our work departs significantly from this line because we explicitly characterise the resisting strategies employed by the user. Moreover, our work focuses on the general domain of non-collaborative task-oriented dialogues, where several non-factual resisting strategies are observed, making it distinctly different from argumentation Galitsky et al. (2018). We assert that focusing on both parties is imperative to get a complete picture of persuasive conversations.
Resisting Strategy | Persuasion (P4G) | Negotiation (CB)
---|---|---
Source Derogation | Attacks/doubts the organisation’s credibility. | Attacks the other party or questions the item.
| _My money probably won’t go to the right place_ | _Was it new denim, or were they someone’s funky old worn out jeans?_
Counter Argument | Argues that the responsibility of donation is not on them or refutes a previous statement. | Provides a non-personal argument/factual response to refute a previous claim or to justify a new claim.
| _There are other people who are richer_ | _It may be old, but it runs great. Has lower mileage and a clean title._
Personal Choice | Attempts to save face by asserting their personal preference, such as their choice of charity and their choice of donation. | Provides a personal reason for disagreeing with the current situation or chooses to agree with the situation provided some specific condition is met.
| _I prefer to volunteer my time_ | _I will take it for $300 if you throw in that printer too._
Information Inquiry | Asks for factual information about the organisation for clarification or as an attempt to stall. | Requests clarification or asks additional information about the item or situation.
| _What percentage of the money goes to the children?_ | _Can you still fit it in your pocket with the case on?_
Self Pity | Provides a self-centred reason for not being able/willing to donate at the moment. | Provides a reason (meant to elicit sympathy) for disagreeing with the current terms.
| _I have my own children_ | _$130 please I only have $130 in my budget this month._
Hesitance | Attempts to stall the conversation by either stating they would donate later or being currently unsure about donating. | Stalls for time and is hesitant to commit; specifically, they seek to further the conversation and provide a chance for the other party to make a better offer.
| _Yes, I might have to wait until my check arrives._ | _Ok, would you be willing to take $50 for it?_
Self-assertion | Explicitly refuses to donate without even providing a factual/personal reason. | Asserts a new claim or refutes a previous claim with an air of finality/confidence.
| _Not today_ | _That is way too little._
Table 1: Framework describing the resisting strategies for the persuasion (P4G) and negotiation (CB) datasets. We emphasise that Information Inquiry is not a resisting strategy for CB. Examples of each strategy are italicised.
## 3 Framework
In this section, we describe the datasets, the resisting strategies employed, and the annotation framework used to instantiate the strategies.
### 3.1 Dataset Employed
We choose persuasion-oriented conversations, rather than essays or advertisements Yang et al. (2019), since we can observe how the participants respond to the persuasion attempts in real time. To that end, we leverage two publicly available corpora on persuasion Wang et al. (2019) and negotiation He et al. (2018), which we refer to hereafter as “Persuasion4Good” (P4G) and “Craigslist Bargain” (CB). P4G comprises conversational exchanges between two anonymous Amazon Mechanical Turk workers with the designated roles of persuader (ER) and persuadee (EE). ER had to convince EE to donate a part of their task earnings to the charity _Save the Children_. We investigate the resisting strategies employed only by EE in response to the donation efforts. We emphasise that the conversational exchanges are not scripted, and the task is set up so that a part of EE’s earnings is deducted if they agree to donate. Since there is a monetary loss at stake for EE, we expect them to resist. CB consists of simulated conversations between a buyer (BU) and a seller (SE) over an online exchange platform. Both are given their respective target prices and employ resisting strategies to negotiate the offer. We choose these datasets since they involve non-collaborative goal-oriented dialogues; as a result, we can definitively assess the impact of different resisting strategies on the goal.
Table 2: Description of the Persuasion (P4G) Wang et al. (2019) and Negotiation (CB) He et al. (2018) datasets.
Properties | P4G | CB
---|---|---
# of conversations | 530 | 800
Max # of utterances/conversation | 76 | 44
Avg # of utterances/conversation | 36.34 | 11.94
Max # of tokens/utterance | 90 | 93
Avg # of tokens/utterance | 11.03 | 14.62
Vocabulary size | 6137 | 5370
### 3.2 Framework Description
In this subsection, we briefly describe the resisting strategies commonly referenced in the social and cognitive psychology literature.
This enables us to design a unified framework for the two datasets, built upon common underlying semantic themes. Fransen et al. (2015a) identified four major clusters of resisting strategies, namely contesting Wright (1975); Zuwerink Jacks and Cameron (2003); Abelson and Miller (1967), empowerment Zuwerink Jacks and Cameron (2003); Sherman and Gorkin (1980), biased processing Ahluwalia (2000), and avoidance Speck and Elliott (1997). Each cluster can be subdivided into finer categories, shown in italics henceforth. Contesting refers to attacking either the source of the message (Source Derogation) or its content (Counter Argumentation). A milder form of contesting involves seeking clarification or information, termed Information Inquiry. Prior work has shown a positive association between working knowledge and one’s ability to resist persuasion Wood and Kallgren (1988); Luttrell and Sawicki (2020). Therefore, Information Inquiry can be interpreted as a form of resistance in which the resistor seeks to satisfy their doubts because they are sceptical of the persuader’s intents or messages. This is prominent in certain conversations in P4G where a sceptical EE questions the charity’s legitimacy. Empowerment strategies encompass reinforcing one’s personal preference to refute a claim (Attitude Bolstering) Sherman and Gorkin (1980), attempting to arouse guilt in the opposing party (Self Pity) Vangelisti et al. (1991); O’Keefe (2002), stating one’s wants outright (Self Assertion) Zuwerink Jacks and Cameron (2003), or seeking validation from like-minded people (Social Validation) Fransen et al. (2015a). Overall, empowerment strategies drive the discussion towards the resistor’s self as opposed to attacking the persuader. Biased processing mitigates external persuasion by selectively processing information that conforms with one’s opinions or beliefs Fransen et al. (2015a).
For simplicity, we subsume strategies that denote personal preference, namely Attitude Bolstering and Biased Processing, into a unified category, Personal Choice. We refrain from incorporating Self Assertion into the Personal Choice category since it deals with bolstering one’s confidence and not one’s opinions or attitudes; the subtle difference is highlighted in Table 1. Avoidance strategies distance the resistor from persuasion, either physically or mechanically, or involve refusing to engage in topics that induce cognitive dissonance Fransen et al. (2015a). However, in the context of task-oriented conversations, wherein participants are expected to further a goal, avoidance often manifests as Hesitance to commit to the current situation. We identify seven major resisting strategies across the datasets, namely Source Derogation, Counter Argumentation, Information Inquiry, Personal Choice, Self Pity, Hesitance, and Self Assertion. Since the datasets comprise two-party conversations between strangers, Social Validation, which requires garnering the support of others, was absent. We describe how these resisting strategies were instantiated in the following section.
### 3.3 Instantiating the Resistance Framework
We emphasise that although the description and meaning of a strategy remain the same across the two datasets, their semantic interpretation depends on the context. For example, scepticism towards the charity in P4G and criticism of the product in CB are both instances of Source Derogation. This is because ER represents the charity, whereas the seller is being accused of selling an inferior product. Likewise, we instantiate the predicates for the remaining six resisting strategies for the two datasets, with examples in Table 1. We label the utterances of the persuadee (EE) in P4G and the buyers (BU) and sellers (SE) in CB with at least one of the seven corresponding resisting strategies, or ‘Not-A-Strategy’ if none applies.
The ‘Not-A-Strategy’ label includes greetings, off-task discussions, agreement, compliments, or other tokens of approval. We acknowledge that an utterance can have more than one resisting strategy embedded in it. For example, the utterance “The price is slightly high for used couches, would you come down to $240 if I also picked them up?” is an instance of both Personal Choice and Counter Argumentation. We also note that Information Inquiry is not a resisting strategy for CB, since asking for additional information/clarification is expected behaviour before finalising a deal; we keep the label nevertheless for comparison with P4G. We present the flowchart detailing the annotation framework in Figure 3 of the Appendix.
### 3.4 Annotation Procedure and Validation
Here we describe the annotation procedure for both the CB and P4G datasets and its subsequent validation. For CB, three authors independently annotated five random conversations adhering to the flowchart. If the conversations chosen were simple or had few labels, a new set of five conversations was taken up. This constituted one round. After each round, the Fleiss kappa score was computed, and the authors discussed to resolve the disagreements and revise the flowchart, before beginning the next round on a new set of five random conversations. For CB, five rounds of revision were carried out over 24 conversations, until a high Fleiss kappa (0.790) Fleiss (1971) was obtained. Finally, the three authors each independently annotated approximately 250 distinct conversations, yielding a corpus of 800 annotated CB conversations; the conversations annotated by each author were thus mutually exclusive. Our annotation procedure involves a rigorous, reliable refinement phase followed by a comparatively faster annotation phase, achieved by dividing the annotation among the authors. Similarly, for the P4G dataset, four authors annotated three conversations per round, since P4G conversations are comparatively longer.
Four rounds of revision across 12 conversations were carried out to achieve a final kappa score of 0.787. The four authors then divided the task of annotating the 500 conversations amongst themselves. We show an annotated conversation snippet for each of the two datasets in Table 3.
Table 3: Examples of annotated snippets for Persuasion (P4G) and Negotiation (CB). The utterances of the EE and the SE are highlighted in cyan. Some strategies are shortened, e.g. Info Inquiry and Per Choice for Information Inquiry and Personal Choice.
Role | Text | Strategy
---|---|---
Negotiation (CB) | |
SE | I have a wonderful phone for you if you are interested. | No Strategy
BU | I am interested. Did you just buy it? | Info Inquiry
SE | I bought it two weeks ago but it just wasn’t what I needed anymore. | No Strategy
BU | Would you be willing to work with the price? | Hesitance
SE | Yes we can negotiate. | No Strategy
BU | If I come today would you accept $56 I can bring it now? | Per Choice
SE | How about 65 and I can deliver it to you now? | Per Choice
BU | Can you go $60 Kind of all I have right now? | Self Pity
SE | Yes I can. | No Strategy
Persuasion (P4G) | |
ER | Hello, Save the Children looks like an interesting organisation. | -
EE | i would like to know more about it | Info Inquiry
.. | .. | ..
EE | thanks i will definitely check it out | Hesitance
ER | They also promote children’s rights and provide relief when needed. | -
EE | and where does the money go if i do donate ? | Info Inquiry
EE | Straight to the organisation? | Info Inquiry
ER | Yes, it goes straight to the organisation, where it can be used to help many children. | -
EE | because some organisations do not divide the money properly | Source Derogation
ER | This organisation has been checked by some groups, and they divide the money properly. | -
.. | .. | ..
EE | I will certainly consider it | No Strategy
### 3.5 Dataset Statistics
The P4G and CB datasets comprise 530 and 800 labelled conversations, respectively, spanning an average of roughly 36 and 12 utterances per conversation. The datasets cover two distinct persuasion scenarios and also illustrate the rights and obligations of the participants. For example, in P4G, EE comes into the interaction blind and is unaware of the donation attempt. We encounter several conversations where EE is willing to donate since it resonates with their beliefs, and no resisting strategies are observed. For CB, however, the participants received prior instructions to negotiate a deal, and hence resisting strategies are more prominent. We present the frequency distribution of the seven strategies in Table 4. We observe that the distribution of strategies is skewed for both datasets; the skew is more pronounced for P4G, where ‘Not-A-Strategy’ accounts for the lion’s share. We also see that the buyer exhibits more resisting strategies than the seller, highlighting the asymmetric roles of the two participants. Nevertheless, we reiterate that the resisting strategies we propose are applicable to both domains. In the next section, we propose the framework to infer such strategies automatically.
Table 4: Proportion of resisting strategies (in %) for the Persuasion (P4G) and Negotiation (CB) datasets. The strategies are observed only for the persuadee (EE) in P4G and for both buyer (BU) and seller (SE) in CB.
Strategy | P4G (EE) | CB (BU) | CB (SE)
---|---|---|---
Source Derogation | 2.16 | 7.61 | 0.44
Counter Argument | 2.28 | 3.74 | 6.06
Personal Choice | 2.52 | 9.43 | 8.49
Information Inquiry | 7.19 | 18.27 | 0.38
Self Pity | 1.58 | 4.66 | 0.34
Hesitance | 1.76 | 15.78 | 9.14
Self-assertion | 0.94 | 2.20 | 5.05
Not a strategy | 81.56 | 38.30 | 70.09
Figure 1: A diagram illustrating how ResPer works.
The encoder shown on the left takes the BERT representations of the tokens of an utterance as input and passes them through a BiGRU layer followed by self-attention. The outputs from BERT, the BiGRU, and self-attention are then concatenated to form the output; max-pooling over this output yields the corresponding utterance embedding. This utterance representation is passed through a uni-directional GRU followed by masked self-attention and fusion to yield the contextualised utterance embedding.
## 4 Methodology
In this section, we describe the methodology adopted for inferring the resisting strategies in persuasion dialogues and how they can be leveraged to determine the dialogue’s outcome.
### 4.1 Resisting Strategy Prediction
We model the task of identifying resisting strategies as a sequence labelling task. We assign each utterance in the dialogues a label representing either one of the seven resisting strategies or _Not-A-Strategy_. ††We acknowledge that an utterance can have multiple labels. However, such utterances comprise only 1.2% and 3.85% of the P4G and the CB datasets, respectively. In such cases, the label is randomly selected. Since the resisting strategies, by definition, occur in response to persuasion attempts, our model architecture needs to be cognizant of the conversational history. To that end, we adopt a hierarchical neural network architecture, similar to Jiao et al. (2019), to infer the corresponding resisting strategy. The architecture leverages the previous conversational context in addition to the current contextualised utterance embedding. Our choice is motivated by the recent successes of hierarchical sequence labelling frameworks in achieving state-of-the-art performance on several dialogue-oriented tasks. Notable examples include emotion recognition Majumder et al. (2019); Jiao et al. (2019), dialogue act classification Chen et al. (2018); Raheja and Tetreault (2019), face act prediction Dutt et al. (2020), and open domain chit-chat Zhang et al.
(2018); Kumar et al. (2020). We hereby adopt this as the foundation architecture for our work and refer to our instantiation of the architecture as ResPer.
Architecture of ResPer: An utterance $u_{j}$ is composed of tokens $[w_{0},w_{1},...,w_{K}]$, represented by their corresponding embeddings $[e(w_{0}),e(w_{1}),...,e(w_{K})]$. In ResPer, we obtain these using a pre-trained BERT model Devlin et al. (2019). We pass these contextualised word representations through a bidirectional GRU to obtain the forward $\overrightarrow{h_{k}}$ and backward $\overleftarrow{h_{k}}$ hidden states of each word, before passing them into a self-attention layer. This gives us the corresponding attention outputs, $\overrightarrow{ah_{k}}$ and $\overleftarrow{ah_{k}}$, as described below.
$\begin{split}\overrightarrow{h_{k}}&=\operatorname{GRU}\left(e\left(w_{k}\right),\overrightarrow{h_{k-1}}\right)\\ \overleftarrow{h_{k}}&=\operatorname{GRU}\left(e\left(w_{k}\right),\overleftarrow{h_{k+1}}\right)\\ \overrightarrow{ah_{k}}&=\operatorname{SelfAttention}(\overrightarrow{h_{k}})\\ \overleftarrow{ah_{k}}&=\operatorname{SelfAttention}(\overleftarrow{h_{k}})\end{split}$
Finally, we concatenate the contextualised word embedding with the GRU hidden states and attention outputs in the fusion layer to obtain the final representation of the word, $e_{c}(w_{k})$, where $b_{w}$ denotes the bias. We then perform max-pooling over the fused word embeddings to obtain the $j^{th}$ utterance embedding, $e(u_{j})$.
$\begin{split}e_{c}(w_{k})&=\operatorname{tanh}(W_{w}[\overrightarrow{ah_{k}};\overrightarrow{h_{k}};e(w_{k});\overleftarrow{h_{k}};\overleftarrow{ah_{k}}]+b_{w})\\ e(u_{j})&=\operatorname{max}(e_{c}(w_{1}),e_{c}(w_{2}),...,e_{c}(w_{K}))\end{split}$
We use a unidirectional GRU and masked self-attention to encode the conversational context, ensuring that the prediction for the $j^{th}$ utterance is not influenced by future utterances.
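The word-level encoder described above can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' released code; in particular, the scaled dot-product form of the self-attention and the exact dimensions are our assumptions.

```python
import torch
import torch.nn as nn


class UtteranceEncoder(nn.Module):
    """Sketch of ResPer's word-level encoder: BiGRU + self-attention +
    fusion over BERT token embeddings, then max-pooling to e(u_j)."""

    def __init__(self, bert_dim=768, hidden_dim=1024):
        super().__init__()
        self.bigru = nn.GRU(bert_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        # Fusion: tanh(W_w [ah_fwd; h_fwd; e(w); h_bwd; ah_bwd] + b_w)
        self.fusion = nn.Linear(4 * hidden_dim + bert_dim, hidden_dim)

    @staticmethod
    def self_attention(h):
        # Scaled dot-product self-attention over the token dimension
        # (assumed formulation; the paper does not give the exact scoring).
        scores = torch.softmax(
            h @ h.transpose(1, 2) / h.size(-1) ** 0.5, dim=-1)
        return scores @ h

    def forward(self, token_embs):            # (batch, K, bert_dim)
        h, _ = self.bigru(token_embs)         # (batch, K, 2 * hidden_dim)
        h_fwd, h_bwd = h.chunk(2, dim=-1)     # split the two directions
        ah_fwd = self.self_attention(h_fwd)
        ah_bwd = self.self_attention(h_bwd)
        fused = torch.tanh(self.fusion(torch.cat(
            [ah_fwd, h_fwd, token_embs, h_bwd, ah_bwd], dim=-1)))
        # Max-pool over tokens to obtain the utterance embedding e(u_j).
        return fused.max(dim=1).values        # (batch, hidden_dim)


# Example: one utterance of 12 sub-word tokens, using the dimensions
# from Table 5 (BERT dim 768, d_h1 = 1024).
enc = UtteranceEncoder(bert_dim=768, hidden_dim=1024)
u = enc(torch.randn(1, 12, 768))
```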
Similarly, we calculate the contextualised representation of an utterance, $e_{c}(u_{j})$, using the conversational context. We pass $e(u_{j})$ through a uni-directional GRU that yields the forward hidden state $\overrightarrow{H_{j}}$; masked self-attention over the previous hidden states yields $\overrightarrow{AH_{j}}$. We fuse $e(u_{j})$, $\overrightarrow{H_{j}}$ and $\overrightarrow{AH_{j}}$ before passing the result through a linear layer with tanh activation to obtain $e_{c}(u_{j})$. We project the final contextualised utterance embedding $e_{c}(u_{j})$ onto the state space of resisting strategies and apply softmax to obtain a probability distribution over the strategies, with Negative Log-Likelihood (NLL) as the loss function for the strategy loss.
### 4.2 Conversation Outcome Prediction
We further investigate the impact of resisting strategies on the outcome of the conversation. We represent each strategy as a fixed-dimensional embedding initialised at random. We then encode a sequence of strategies by passing it through a uni-directional GRU to obtain a final representation for the sequence, which we project onto a binary vector encoding the conversation outcome. We apply softmax with NLL across all the conversations to obtain the outcome prediction loss.
## 5 Experiments
In this section, we describe the baselines and evaluation metrics. We present the experimental details of our model in Table 5.
### 5.1 Baselines
Resisting strategy prediction: We experiment with standard neural baselines for text classification, which have also been used for classifying persuasion strategies, namely CNN Kim (2014); Wang et al. (2019) and BiGRU Yang et al. (2019). To ensure a fair comparison, we provide pre-trained BERT embeddings Devlin et al. (2019) as input to the baselines, henceforth denoted as BERT-CNN and BERT-BiGRU.
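To make the no-future-leakage constraint of the conversation-level encoder in Section 4.1 concrete, here is a minimal numpy sketch of masked self-attention over utterance hidden states. The scaled dot-product scoring function is our assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def masked_self_attention(H):
    """H: (n, d) array of utterance hidden states H_1..H_n.
    Returns (n, d) contexts AH, where AH_j attends only to H_1..H_j."""
    n, d = H.shape
    scores = H @ H.T / np.sqrt(d)                     # pairwise similarities
    future = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores[future] = -np.inf                          # hide future utterances
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # row-wise softmax
    return w @ H

H = np.random.default_rng(0).normal(size=(4, 8))      # 4 utterances, dim 8
AH = masked_self_attention(H)
print(np.allclose(AH[0], H[0]))                       # the first utterance can
                                                      # only attend to itself
```

Because of the mask, changing a later utterance never alters the representation of an earlier one, which is exactly the property needed for online strategy prediction.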
Furthermore, to inspect the impact of conversational history, we remove the conversational GRU from ResPer, such that the utterance embedding $e(u_{j})$ is directly used for prediction. We refer to this architecture as BERT-BiGRU-sf, since it employs self-attention (s) and fusion (f) on top of BERT-BiGRU. Finally, we experiment with the best-performing HiGRU-sf model of Jiao et al. (2019) as another baseline.
Conversation success prediction: The notion of conversation success depends on the choice of dataset. For P4G, we consider the resisting strategies to be successful if the persuadee (EE) refused to donate to the charity. For CB, we adopt the same notion of success as Zhou et al. (2019), namely that the seller (SE) sells at a price greater than the median sale-to-list ratio $r$:
$r=\frac{\text{sale price}-\text{buyer target price}}{\text{listed price}-\text{buyer target price}}$ (1)
To observe the effect on conversation success, we experiment with the strategies of both parties. For P4G, we separately encode (i) the persuasion strategies of ER as identified by Wang et al. (2019), (ii) the resisting strategies employed by EE, and (iii) both the persuasion and resisting strategies. Likewise, for CB, we encode the resisting strategies of (i) only the buyer (BU), (ii) only the seller (SE), and (iii) both. These experiments enable us to investigate which party has a greater influence on conversation success.
Table 5: Search space and final values of the hyper-parameters used in our experiments. $d_{h1},d_{h2}$ represent the hidden dimensions of the Utterance GRU and the Conversation GRU.
Hyper-parameter | Search space | Final Value
---|---|---
learning-rate (lr) | 1e-3 to 1e-5 | 1e-4
Batch-size | - | 1 conversation
# Epochs | < 100 | 30.8, 22
lr-decay | - | 0.5 every 20 epochs
$d_{h1}$ | - | 1024
$d_{h2}$ | - | 300
### 5.2 Evaluation Metrics
We adopt the same evaluation procedure for both the resisting strategy and the conversation outcome prediction tasks across the datasets. In both cases, we perform five-fold cross-validation due to the paucity of annotated data. We report performance in terms of the weighted††Weighted F1 scores are calculated by taking the average of the F1 scores for each label, weighted by the number of true instances of each label. and macro F1-scores across the five folds. Our choice of metrics is motivated by the high label imbalance observed in Table 4.
## 6 Results
In this section, we answer the following questions:
* Q1. How well does ResPer identify resisting strategies for persuasion and negotiation?
* Q2. Are resisting strategies good predictors of conversation success? What insights can one glean from the results?
### 6.1 Predicting Resisting Strategies
We present the results for the automated identification of resisting strategies in Table 6. We observe that all the models achieve comparatively lower performance on P4G, mainly due to its higher proportion of ‘Not-A-Strategy’ labels. We gauge the benefit of incorporating conversational context via the significant††We estimate statistical significance using the paired bootstrap test of Berg-Kirkpatrick et al. (2012), due to the small amount of data Dror et al. (2018). improvement in Macro F1 of 0.036 and 0.011 for P4G and CB, respectively. In fact, ResPer significantly outperforms all the proposed baselines.
Table 6: Results of ResPer and other baselines on the resisting strategy prediction task on the Persuasion (P4G) and Negotiation (CB) datasets. The metrics used for evaluation are Macro F1 and Weighted F1, represented as M-F1 and W-F1 respectively.
The best results are in bold.
Model | M-F1 (P4G) | W-F1 (P4G) | M-F1 (CB) | W-F1 (CB)
---|---|---|---|---
CNN | 0.261 | 0.757 | 0.560 | 0.706
BERT + CNN | 0.508 | 0.819 | 0.651 | 0.751
HiGRU-sf | 0.446 | 0.788 | 0.605 | 0.734
BERT + BiGRU | 0.514 | 0.815 | 0.647 | 0.747
BERT + BiGRU-sf | 0.522 | 0.814 | 0.649 | 0.750
ResPer | 0.558 | 0.828 | 0.662 | 0.767
Figure 2: Confusion matrices for resisting strategies on the Persuasion (P4G) and Negotiation (CB) datasets, on the left and right respectively. Each resisting strategy is represented by its initials (e.g. SP for Self Pity). True and predicted labels are plotted on the X-axis and the Y-axis respectively.
Error Analysis: We present the confusion matrices for predicting resisting strategies using ResPer on the Persuasion (P4G) and Negotiation (CB) datasets in Figures 2(a) and 2(b) respectively. We observe that most classification errors occur when a resisting strategy is incorrectly inferred as ‘Not-A-Strategy’. The effect is more prevalent for P4G, since ‘Not-A-Strategy’ comprises over 80% of all annotated labels. Other notable instances of misclassification for P4G occur when Self Assertion is predicted as Self Pity, since both strategies refer to one’s self. These strategies occur so infrequently (see Table 4) that the models lack sufficient information to distinguish between the two categories. Likewise, for the CB corpus, Hesitance utterances, which often constitute a price request, are frequently posed as questions, causing the model to predict Information Inquiry instead. Self Assertion is often incorrectly marked as Source Derogation, possibly because it often takes a firm stance and is likely to disparage the other party in the process, thereby confusing the model.
### 6.2 Conversation Outcome Prediction
Table 7: The impact of incorporating the sequence of strategies on conversation outcome prediction, in terms of Macro-F1 and Weighted-F1 scores.
For P4G, we consider the strategies of the persuader (ER), the persuadee (EE), and both. For CB, we consider the strategies of the buyer (BU), the seller (SE), and both.
P4G User | Macro-F1 | W-F1 | CB User | Macro-F1 | W-F1
---|---|---|---|---|---
ER | 0.588 | 0.620 | BU | 0.618 | 0.640
EE | 0.618 | 0.640 | SE | 0.462 | 0.508
Both | 0.646 | 0.671 | Both | 0.605 | 0.626
Table 7 shows how the sequences of strategies adopted by the two participants have a disproportionate impact on the final conversation outcome. It is interesting to note that the resisting strategies of the persuadee have a greater effect on the conversation outcome (macro-F1 score of 0.62) than the persuasion strategies themselves (macro-F1 score of 0.59). Moreover, incorporating both the persuasion and resisting strategies boosts the prediction performance even further, to 0.65. We also observe an asymmetry in the roles of the buyer (BU) and the seller (SE) for the CB dataset: BU’s strategies are significantly more effective in deciding the conversation outcome, probably because buyers exhibit a higher number of resisting strategies. These experiments highlight the importance of incorporating resisting strategies to gain a complete picture.
### 6.3 Comparative Analysis of Strategies
Emboldened by the success of resisting strategies in inferring the conversation outcome, we probe deeper to investigate the impact of individual strategies. We apply logistic regression with the strategy frequencies of either participant as features, with the outcome variable denoting conversation success. We examine the coefficients of the strategies to infer their correlation with conversation success, and the corresponding p-values to determine whether the correlation is statistically significant. Our procedure follows previous work on identifying influential persuasion strategies Yang et al. (2019); Wang et al. (2019). We present the results of this analysis in Table 8.
Table 8: Coefficients of the different resisting strategies corresponding to the persuadee (EE) in Persuasion, and the buyer (BU) and seller (SE) in Negotiation. * and ** mean the strategy is significant with p-value $\leq$ 0.05 and 0.01 respectively.
Strategy | EE (P4G) | BU (CB) | SE (CB)
---|---|---|---
Not-A-Strategy | -0.008 | 0.287** | -0.138
Hesitance | 0.344 | 0.328* | 0.266
Counter Argument | -0.014 | -0.256 | 0.429*
Personal Choice | 0.153 | 0.126 | 0.164
Information Inquiry | 0.180* | 0.091 | -0.704
Source Derogation | 0.043 | 0.052 | -0.455
Self Pity | 0.103 | 0.081 | -0.314
Self Assertion | 0.843* | -0.576* | -0.040
For P4G, all the resisting strategies apart from Counter Argumentation are positively correlated with a refusal to donate. The highest impact stems from Self Assertion. Previous research Fransen et al. (2015a); Zuwerink Jacks and Cameron (2003) has noted that Self Assertion is prominent amongst individuals with high self-esteem; such individuals are confident in their beliefs and less likely to conform. Similarly, the high positive coefficient for Information Inquiry can be explained as follows: EE inquires about the charity not only as a means to verify its legitimacy, but also to gain knowledge they can exploit to their advantage. An innocuous question like ‘Where will my money go?’ would enable EE to assert that they are keener to help children in their own country instead, thereby resisting the donation attempt and saving face. The CB scenario setup ensures that the coefficients of the strategies for BU and SE should be anti-correlated, which largely holds in Table 8. As in P4G, a high negative coefficient for BU’s Self Assertion signifies that SE’s price is disagreeable to BU, who would rather not buy. Moreover, the high coefficient of Counter Argumentation suggests that it is an effective tactic for both parties.
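The coefficient analysis above can be illustrated with a small sketch. The synthetic data and the gradient-descent fitting routine are our own illustration, not the authors' code; obtaining p-values as reported in Table 8 would additionally require a tool such as statsmodels.

```python
import numpy as np

# Hypothetical subset of strategies used purely for illustration.
STRATEGIES = ["Self Assertion", "Counter Argument", "Hesitance"]

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by gradient descent; returns weights + intercept."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        z = np.clip(Xb @ w, -30.0, 30.0)               # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))                   # predicted success prob.
        w -= lr * Xb.T @ (p - y) / len(y)              # mean log-loss gradient
    return w

rng = np.random.default_rng(0)
# Toy data: conversation success is driven only by the first strategy's count.
X = rng.poisson(2.0, size=(200, len(STRATEGIES))).astype(float)
y = (X[:, 0] + rng.normal(0.0, 0.5, 200) > 2.0).astype(float)

coefs = fit_logistic(X, y)[:-1]
for name, c in zip(STRATEGIES, coefs):
    print(f"{name}: {c:+.2f}")   # sign indicates the direction of correlation
```

On this synthetic data, the coefficient for the success-driving strategy comes out clearly positive while the uninformative ones stay near zero, mirroring how the signs and magnitudes in Table 8 are read.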
## 7 Conclusion
We present a generalised computational framework, grounded in cognitive psychology, to operationalise the resisting strategies employed to counter persuasion. We identify seven distinct resisting strategies, which we instantiate on two publicly available corpora comprising persuasion and negotiation conversations. We adopt a hierarchical sequence labelling architecture to infer the resisting strategies automatically and observe that our model achieves competitive performance on both datasets. Furthermore, we examine the interplay of resisting strategies in determining the final conversation outcome, which corroborates previous findings. In the future, we would like to explore better models for encoding the strategy information and apply our framework to improve personalised persuasion and negotiation dialogue systems. We would also like to study the influence of other confounding factors, such as power dynamics, on the outcomes of conversations featuring resisting strategies.
## Acknowledgments
We thank the anonymous EACL reviewers for their insightful comments and constructive feedback. This research was funded in part by NSF Grants (IIS 1917668 and IIS 1822831) and Dow Chemical. The first author would also like to acknowledge his best friend, Ahana Sadhu, for her constant support and motivation, who unfortunately and untimely left us this year.
## References
* Abelson and Miller (1967) Robert P Abelson and James C Miller. 1967. Negative persuasion via personal insult. _Journal of Experimental Social Psychology_, 3(4):321–333.
* Ahluwalia (2000) Rohini Ahluwalia. 2000. Examination of psychological processes underlying resistance to persuasion. _Journal of Consumer Research_, 27(2):217–232.
* Al Khatib et al. (2020) Khalid Al Khatib, Michael Völske, Shahbaz Syed, Nikolay Kolyada, and Benno Stein. 2020. Exploiting personal characteristics of debaters for predicting persuasiveness.
In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7067–7072, Online. Association for Computational Linguistics. * Al-Khatib et al. (2018) Khalid Al-Khatib, Henning Wachsmuth, Kevin Lang, Jakob Herpel, Matthias Hagen, and Benno Stein. 2018. Modeling deliberative argumentation strategies on Wikipedia. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2545–2555, Melbourne, Australia. Association for Computational Linguistics. * Bartels (2006) Larry M Bartels. 2006. Priming and persuasion in presidential campaigns. _Capturing campaign effects_ , 1:78–114. * Berg-Kirkpatrick et al. (2012) Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In _Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning_ , pages 995–1005, Jeju Island, Korea. Association for Computational Linguistics. * Cano-Basave and He (2016) Amparo Elizabeth Cano-Basave and Yulan He. 2016. A study of the impact of persuasive argumentation in political debates. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1405–1413, San Diego, California. Association for Computational Linguistics. * Carlile et al. (2018) Winston Carlile, Nishant Gurrapadi, Zixuan Ke, and Vincent Ng. 2018. Give me more feedback: Annotating argument persuasiveness and related attributes in student essays. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 621–631, Melbourne, Australia. Association for Computational Linguistics. * Chen et al. (2018) Zheqian Chen, Rongqin Yang, Zhou Zhao, Deng Cai, and Xiaofei He. 2018. Dialogue act recognition via crf-attentive structured network. 
In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_, pages 225–234. * Cooley (1993) John W Cooley. 1993. A classical approach to mediation-part i: Classical rhetoric and the art of persuasion in mediation. _U. Dayton L. Rev._ , 19:83. * Da San Martino et al. (2019) Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov, and Preslav Nakov. 2019. Fine-grained analysis of propaganda in news article. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5636–5646, Hong Kong, China. Association for Computational Linguistics. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186. * Dror et al. (2018) Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1383–1392, Melbourne, Australia. Association for Computational Linguistics. * Durmus and Cardie (2018) Esin Durmus and Claire Cardie. 2018. Exploring the role of prior beliefs for argument persuasion. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1035–1045, New Orleans, Louisiana. Association for Computational Linguistics. * Dutt et al. (2020) Ritam Dutt, Rishabh Joshi, and Carolyn Penstein Rose. 2020. 
Keeping up appearances: Computational modeling of face acts in persuasion oriented discussions. _arXiv preprint arXiv:2009.10815_. * Fleiss (1971) Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. _Psychological bulletin_ , 76(5):378. * Fransen et al. (2015a) Marieke L Fransen, Edith G Smit, and Peeter WJ Verlegh. 2015a. Strategies and motives for resistance to persuasion: an integrative framework. _Frontiers in psychology_ , 6:1201. * Fransen et al. (2015b) Marieke L Fransen, Peeter WJ Verlegh, Amna Kirmani, and Edith G Smit. 2015b. A typology of consumer strategies for resisting advertising, and a review of mechanisms for countering them. _International Journal of Advertising_ , 34(1):6–16. * Galitsky et al. (2018) Boris Galitsky, Dmitry Ilvovsky, and Dina Pisarevskaya. 2018. Argumentation in text: Discourse structure matters. _CICLing 2018_. * He et al. (2018) He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2333–2343. * Heath et al. (2017) Teresa Heath, Robert Cluley, and Lisa O’Malley. 2017. Beating, ditching and hiding: consumers’ everyday resistance to marketing. _Journal of Marketing Management_ , 33(15-16):1281–1303. * Jiao et al. (2019) Wenxiang Jiao, Haiqin Yang, Irwin King, and Michael R. Lyu. 2019. HiGRU: Hierarchical gated recurrent units for utterance-level emotion recognition. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 397–406, Minneapolis, Minnesota. Association for Computational Linguistics. * Joshi et al. (2021) Rishabh Joshi, Vidhisha Balachandran, Shikhar Vashishth, Alan Black, and Yulia Tsvetkov. 2021. Dialograph: Incorporating interpretable strategy-graph networks into negotiation dialogues. 
In _International Conference on Learning Representations_. * Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1746–1751. * Knobloch-Westerwick and Meng (2009) Silvia Knobloch-Westerwick and Jingbo Meng. 2009. Looking the other way: Selective exposure to attitude-consistent and counterattitudinal political information. _Communication Research_ , 36(3):426–448. * Kumar et al. (2020) Gaurav Kumar, Rishabh Joshi, Jaspreet Singh, and Promod Yenigalla. 2020. AMUSED: A multi-stream vector representation method for use in natural dialogue. In _Proceedings of The 12th Language Resources and Evaluation Conference_ , pages 750–758, Marseille, France. European Language Resources Association. * Longpre et al. (2019) Liane Longpre, Esin Durmus, and Claire Cardie. 2019. Persuasion of the undecided: Language vs. the listener. In _Proceedings of the 6th Workshop on Argument Mining_ , pages 167–176, Florence, Italy. Association for Computational Linguistics. * Luttrell and Sawicki (2020) Andrew Luttrell and Vanessa Sawicki. 2020. Attitude strength: Distinguishing predictors versus defining features. _Social and Personality Psychology Compass_ , 14(8):e12555. * Majumder et al. (2019) Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 6818–6825. * Munigala et al. (2018) Vitobha Munigala, Abhijit Mishra, Srikanth G Tamilselvam, Shreya Khare, Riddhiman Dasgupta, and Anush Sankaran. 2018. Persuaide! an adaptive persuasive text generation system for fashion domain. In _Companion Proceedings of the The Web Conference 2018_ , pages 335–342. * O’Keefe (2002) Daniel J O’Keefe. 2002. Guilt as a mechanism of persuasion. 
_The persuasion handbook: Developments in theory and practice_ , pages 329–344. * Raheja and Tetreault (2019) Vipul Raheja and Joel Tetreault. 2019. Dialogue Act Classification with Context-Aware Self-Attention. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3727–3733, Minneapolis, Minnesota. Association for Computational Linguistics. * San José (2019) Victor Tejedor San José. 2019. The role of humor and threat on predicting resistance and persuasion. * Sherman and Gorkin (1980) Steven J Sherman and Larry Gorkin. 1980. Attitude bolstering when behavior is inconsistent with central attitudes. _Journal of Experimental Social Psychology_ , 16(4):388–403. * Speck and Elliott (1997) Paul Surgi Speck and Michael T Elliott. 1997. Predictors of advertising avoidance in print and broadcast media. _Journal of Advertising_ , 26(3):61–76. * Tormala (2008) Zakary L Tormala. 2008. A new framework for resistance to persuasion: The resistance appraisals hypothesis. _Attitudes and attitude change_ , pages 213–234. * Vangelisti et al. (1991) Anita L Vangelisti, John A Daly, and Janine Rae Rudnick. 1991. Making people feel guilty in conversations: Techniques and correlates. _Human Communication Research_ , 18(1):3–39. * Wang et al. (2019) Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5635–5649, Florence, Italy. Association for Computational Linguistics. * Wood and Kallgren (1988) Wendy Wood and Carl A Kallgren. 1988. Communicator attributes and persuasion: Recipients’ access to attitude-relevant information in memory. _Personality and Social Psychology Bulletin_ , 14(1):172–182. * Wright (1975) Peter Wright. 
1975. Factors affecting cognitive resistance to advertising. _Journal of Consumer Research_ , 2(1):1–9. * Yang et al. (2019) Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, and Eduard Hovy. 2019. Let’s make your request more persuasive: Modeling persuasive strategies via semi-supervised neural nets on crowdfunding platforms. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3620–3630, Minneapolis, Minnesota. Association for Computational Linguistics. * Zeng et al. (2020) Jichuan Zeng, Jing Li, Yulan He, Cuiyun Gao, Michael Lyu, and Irwin King. 2020. What changed your mind: The roles of dynamic topics and discourse in argumentation process. In _Proceedings of The Web Conference 2020_ , WWW ’20, page 1502–1513, New York, NY, USA. Association for Computing Machinery. * Zhang et al. (2018) Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics. * Zhou et al. (2019) Yiheng Zhou, He He, Alan W Black, and Yulia Tsvetkov. 2019. A dynamic strategy coach for effective negotiation. In _Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue_ , pages 367–378, Stockholm, Sweden. Association for Computational Linguistics. * Zuwerink Jacks and Cameron (2003) Julia Zuwerink Jacks and Kimberly A Cameron. 2003. Strategies for resisting persuasion. _Basic and applied social psychology_ , 25(2):145–161.

## Appendix

Figure 3: Flowchart for annotating CB
# ALMA observation of the protoplanetary disk around WW Cha: faint double- peaked ring and asymmetric structure Kazuhiro D. Kanagawa Research Center for the Early Universe, Graduate School of Science, The University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan College of Science, Ibaraki University, 2-1-1 Bunkyo, Mito, Ibaraki 310-8512, Japan Jun Hashimoto Astrobiology Center, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Takayuki Muto Division of Liberal Arts, Kogakuin University, 1-24-2 Nishi-Shinjuku, Shinjuku-ku, Tokyo 163-8677, Japan Takashi Tsukagoshi National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Sanemichi Z. Takahashi National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Yasuhiro Hasegawa Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Mihoko Konishi Faculty of Science and Technology, Oita University, 700 Dannoharu, Oita 870-1192, Japan Hideko Nomura National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Hauyu Baobab Liu Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C. 
Ruobing Dong Department of Physics & Astronomy, University of Victoria, Victoria, BC, V8P 1A1, Canada Akimasa Kataoka National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Munetake Momose College of Science, Ibaraki University, 2-1-1 Bunkyo, Mito, Ibaraki 310-8512, Japan Tomohiro Ono Department of Earth and Planetary Sciences, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551, Japan Michael Sitko Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA Space Science Institute, 475 Walnut Street, Suite 205, Boulder, CO 80301, USA Michihiro Takami Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C. Kengo Tomida Astronomical Institute, Tohoku University, Sendai 980-8578, Japan

###### Abstract

We present Atacama Large Millimeter/submillimeter Array (ALMA) band 6 observations of the dust continuum emission of the disk around WW Cha. The dust continuum image shows a smooth disk structure with a faint (low-contrast) dust ring, extending from $\sim 40$ au to $\sim 70$ au and not accompanied by any gap. We constructed a simple model to fit the visibility of the observed data using a Markov chain Monte Carlo (MCMC) method and found that the bump (we refer to a ring without an accompanying gap as a bump) has two peaks, at 40 au and 70 au. The residual map between the model and the observation indicates asymmetric structures at the center and in the outer region of the disk. These asymmetric structures are also confirmed by a model-independent analysis of the imaginary part of the visibility. The asymmetric structure in the outer region is consistent with a spiral observed by SPHERE. To constrain the physical quantities of the disk (dust density and temperature), we carried out radiative transfer simulations. We found that the midplane temperature around the outer peak is close to the freezeout temperature of CO on water ice ($\sim 30$ K).
The temperature around the inner peak is about $50$ K, which is close to the freezeout temperature of H2S and also close to the sintering temperature of several species. We also discuss the size distribution of the dust grains using the spectral index map obtained within the band 6 data.

Keywords: protoplanetary disks – stars: individual (WW Cha) – stars: pre-main sequence – techniques: interferometric

Journal: ApJ. Software: RADMC-3D (Dullemond & Dominik, 2005), CASA v5.6 (McMullin et al., 2007), vis_sample (https://github.com/AstroChem/vis_sample), emcee (Foreman-Mackey et al., 2013), Matplotlib (Hunter, 2007, http://matplotlib.org), NumPy (van der Walt et al., 2011, http://www.numpy.org)

## 1 Introduction

Planets are born in protoplanetary disks around young stars. Recent observations have revealed substructures such as gaps, rings, and crescents in protoplanetary disks (e.g., Fukagawa et al., 2013; Akiyama et al., 2015, 2016; ALMA Partnership et al., 2015; Momose et al., 2015; Dong et al., 2018b; Long et al., 2018; van der Marel et al., 2019; Soon et al., 2019; Kim et al., 2020). These structures could be formed at the edge of a gap induced by disk-planet interaction (e.g., Paardekooper & Mellema, 2004; Muto & Inutsuka, 2009; Zhu et al., 2012; Dong et al., 2015; Pinilla et al., 2015; Kanagawa et al., 2018). Alternatively, they could be associated with dust growth related to snowlines (e.g., Zhang et al., 2015; Cieza et al., 2017; Macías et al., 2017; van der Marel et al., 2018; Facchini et al., 2020) and the sintering effect (Okuzumi et al., 2016), or with secular gravitational instability (Takahashi & Inutsuka, 2014, 2016; Tominaga et al., 2018). Ring/gap structures in the dust could also be formed by axisymmetric gas perturbations due to the evolving luminosity of the central star (Vorobyov et al., 2020). In any case, these substructures may reflect planet formation and the growth of dust grains, which are the building blocks of planets.
Direct observations of disks help us understand how planet formation progresses within them. Our target, WW Cha, is a young star with a circumstellar disk (e.g., Pascucci et al., 2016; Garufi et al., 2020) in the Chamaeleon I star-forming region. The star is located at about 190 pc (Gaia Collaboration et al., 2018, 2020). The mass of the star is about $1M_{\odot}$, its surface temperature is 4350 K (spectral type K5) (Luhman, 2007), and its luminosity is $11L_{\odot}$ (Garufi et al., 2020). The star is very young ($\sim 0.2$ Myr) and may still be embedded in the molecular cloud core, with high extinction (Ribas et al., 2013; Garufi et al., 2020). A high accretion rate onto the star, $10^{-6.6}M_{\odot}/\mbox{yr}$, is inferred from the photometric and Balmer continuum observations (Manara et al., 2016). Moreover, a binary with a separation of $\sim 1$ au has been reported by VLTI (Anthonioz et al., 2015). The disk of WW Cha may be a pre-transition disk because strong infrared emission is detected (Espaillat et al., 2011; Ribas et al., 2013), while recent modeling using radiative transfer simulations by van der Marel et al. (2016) suggested an inner cavity with a radius of $\sim 50$ au (but with large uncertainty). The disk is very bright at millimeter wavelengths (Pascucci et al., 2016), and Lommen et al. (2009) reported emission at $\sim 1$ cm, which indicates the presence of large grains produced by dust growth. In this paper, we report dust continuum observations of the disk around WW Cha in ALMA Cycle 5. In Section 2, we describe the setup of the observation and show the observational results. We developed a model of the observed emission using the Markov Chain Monte Carlo (MCMC) method and found axisymmetric substructures and asymmetric structures, as described in Section 3.
Moreover, we carried out radiative transfer simulations to constrain the physical parameters of the disk, as described in Section 4. In Section 5, we discuss the origins of the substructures, an inner cavity and binary, and dust growth in the disk. Section 6 contains our conclusions.

## 2 Observations and results

The observation was carried out by ALMA in band 6, as summarized in Table 1. The data were calibrated with the Common Astronomy Software Applications (CASA) package (McMullin et al., 2007) version 5.6.1-8, following the calibration scripts provided by ALMA. We conducted self-calibration of the visibilities. The phases were self-calibrated once with a fairly long solution interval (solint=‘inf’), combining all spectral windows (SPWs). We combined the data taken in the sparse (C43-8) and compact (C43-4) array configurations to recover the missing flux at larger angular scales. Using the CASA tool `uvmodelfit`, we fitted the data with a Gaussian shape, and the phase center was shifted to the center of the Gaussian with `fixvis`. The inclination $i=37.2^{\circ}\pm 0.026^{\circ}$ and position angle $\phi=32.4^{\circ}\pm 0.04^{\circ}$ were obtained from the Gaussian fit with `uvmodelfit`.[1]

[1] The fitted inclination and position angle are slightly different for the C43-8 (sparse configuration) and C43-4 (compact configuration) data. We adopted the values from the C43-8 data. For the C43-4 data, $i=39.2^{\circ}$ and $\phi=30.9^{\circ}$ with a relatively large reduced $\chi^{2}$ of 5.78, while for the fit of the C43-8 data, the reduced $\chi^{2}$ is 1.89. The relatively large $\chi^{2}$ for the C43-4 data may be due to the asymmetric structure discussed in Section 3.4; hence, we adopted the values given by the fit of the C43-8 data.

There are two SPWs for continuum, each with a $1.875$ GHz frequency width, with central frequencies of $233.0$ GHz (Upper band) and $216.7$ GHz (Lower band) in both the C43-4 and C43-8 data.
As shown below, the total flux density of the data at $233.0$ GHz ($\sim 500$ mJy) is significantly larger than that at $216.7$ GHz ($\sim 430$ mJy). Hence, the dust continuum image of the combined data was synthesized by CASA with the `tclean` task using the `mtmfs` algorithm (Rau & Cornwell, 2011) with `nterms`=2. We obtained a synthesized image at $224.9$ GHz with a beam size of $89.6\times 60.0\mbox{ mas}$ ($17.0\times 11.4$ au) at PA=$168.8^{\circ}$ and a $1\sigma$ RMS noise level of $0.029\mbox{ mJy/Beam}$. The imaging parameters are summarized in Table 1.

Table 1: ALMA band 6 Observations and Imaging Parameters

| Observations | Sparse configuration | Compact configuration |
|---|---|---|
| Observing date (UT) | 2017.Nov.27 | 2018.Mar.11 |
| Configuration | C43-8 | C43-4 |
| Project code | 2017.1.00286.S | 2017.1.00286.S |
| Time on source (min) | 60.3 | 29.4 |
| Number of antennas | 47 | 42 |
| Baseline lengths | 92.1 m to 8.5 km | 15.1 m to 1.2 km |
| Baseband Freqs. (GHz) | 233.0 (Upper band), 216.7 (Lower band) | 233.0 (Upper band), 216.7 (Lower band) |
| Channel width (GHz) | 1.87 | 1.87 |
| Continuum band width (GHz) | 4.0 | 4.0 |
| Bandpass calibrator | J0635$-$7516 | J1427$-$4206 |
| Flux calibrator | J0635$-$7516 | J1427$-$4206 |
| Phase calibrator | J1058$-$8003 | J1058$-$8003 |
| Mean PWV (mm) | 1.5 | 0.6 |

| Imaging | |
|---|---|
| Robust clean parameter | 0.0 |
| Deconvolution algorithm | mtmfs |
| Weighting | Briggs |
| nterms | 2 |
| Beam shape | 89.6 $\times$ 60.0 mas ($17.0\times 11.4$ au) at PA of $168.8^{\circ}$ |
| r.m.s. noise (mJy/beam) | 0.029 |

Figure 1: Panels (a) and (c) show the image and the spectral index maps resulting from the combination of the 233.0 GHz and 216.7 GHz data. The contours in Panel (c) indicate the intensity levels of $0.087$ mJy/Beam (3$\sigma$), $0.29$ mJy/Beam (10$\sigma$), $2.9$ mJy/Beam ($100\sigma$), $8.7$ mJy/Beam ($300\sigma$) and $14.5$ mJy/Beam ($500\sigma$) in Panel (a). Panels (b) and (d) show the brightness temperature and the spectral index along the major and minor axes, respectively.
The gray thick lines in Panel (b) denote the midplane temperature given by Equation (1) with $L_{\ast}=11L_{\odot}$. The synthesized image is shown in Figure 1. Panel (a) shows the synthesized dust continuum image derived from all SPW data, and the brightness temperatures along the major and minor axes are shown in Panel (b). The dust disk is clearly resolved and its size is about $0.5$ arcsec, which corresponds to about $100$ au. We see a faint, low-contrast dust ring feature in the radial profile of the brightness temperature at $\sim 0.3$ arcsec ($\sim 60$ au) from the central star. This ring structure is not accompanied by a gap, unlike the rings found in the disk of HL Tau (ALMA Partnership et al., 2015) and in the DSHARP samples (Andrews et al., 2018). Hence, we refer to this ring structure (without a gap) as the bump in the following. The total flux density with $>3\sigma$ ($0.087$ mJy) emission is measured to be $449.58$ mJy. We do not see a clear cavity structure in the image. Therefore, a large cavity such as that predicted by the SED analysis of van der Marel et al. (2016) is ruled out, at least in the millimeter image, while a cavity smaller than the beam size is not ruled out. In Panel (c) of Figure 1, we show the spectral index map, and Panel (d) illustrates the spectral indices along the major and minor axes. Within the region of $<0.25$ arcsec ($<50$ au), the spectral index is $\sim 2$, which is indicative of optically thick dust emission. Around the center of the disk, in particular, the spectral index is slightly below $2$, which may indicate optically thick dust scattering (Liu, 2019; Zhu et al., 2019). In the region where the offset is larger than $0.25$ arcsec, the disk may be optically thin because the spectral index is larger than 2, and the spectral index seems to increase in the outer region, though there is large uncertainty at $>0.5$ arcsec.
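The spectral index $\alpha$ between the two sidebands follows from $I_{\nu}\propto\nu^{\alpha}$, so $\alpha=\ln(I_{233}/I_{216.7})/\ln(233.0/216.7)$. A small sketch (the intensity values below are illustrative, not measured ones):

```python
import numpy as np

def spectral_index(i_hi, i_lo, nu_hi=233.0e9, nu_lo=216.7e9):
    """Spectral index alpha between two frequencies, assuming
    I_nu proportional to nu**alpha."""
    return np.log(i_hi / i_lo) / np.log(nu_hi / nu_lo)

# Optically thick Rayleigh-Jeans emission (I_nu ∝ nu^2) recovers alpha = 2:
alpha_thick = spectral_index((233.0 / 216.7) ** 2, 1.0)

# An illustrative intensity ratio larger than the Rayleigh-Jeans one yields
# alpha > 2, the optically thin regime discussed in the text:
alpha_thin = spectral_index(1.30, 1.0)
```

Because the two sidebands are only $\sim 7\%$ apart in frequency, small intensity errors translate into large errors in $\alpha$, which is why the outer-disk index is quoted with caution.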
In Panel (b) of Figure 1, we also plot the midplane temperature along the major axis, estimated with the simple expression for a passively heated radiative disk (e.g., Chiang & Goldreich, 1997),

$$T_{\rm mid}=\left(\frac{\alpha_{g}L_{\ast}}{8\pi R^{2}\sigma_{\rm SB}}\right)^{1/4},\qquad(1)$$

where $\sigma_{\rm SB}$ is the Stefan-Boltzmann constant and $R$ is the distance from the star. The grazing angle $\alpha_{g}$ is set to $0.02$ and the stellar luminosity $L_{\ast}$ to $11L_{\odot}$ in the plot. The brightness temperature is close to the midplane temperature within $0.25$ arcsec of the center, which indicates optically thick emission. The outer region ($R>0.25$ arcsec, or $>50$ au) may be optically thin, which is consistent with the spectral index map mentioned above.

Figure 2: Intensity distributions along the major and minor axes of the face-on view of Figure 1(a). We also plot the 1$\sigma$ noise level at each data point. The black arrows indicate the locations of the peaks within the bump structure, and the gray one indicates the location of the bump structure. The dashed horizontal line denotes the $3\sigma$ noise level (=$0.087$ mJy/Beam). The bottom panel is the same as the top panel, but with the vertical axis in logarithmic scale. The gray double-sided arrows indicate the location of the faint dust bump and the black arrows denote the locations of the peaks within the bump.

Figure 3: Azimuthal distributions of the observed intensity in the face-on view, at offsets of $0.05$ arcsec ($\simeq 10$ au), $0.25$ arcsec ($\simeq 50$ au), $0.5$ arcsec ($\simeq 100$ au), and $0.75$ arcsec ($\simeq 150$ au). The error bars indicate the $1\sigma$ noise level (0.029 mJy). The horizontal lines are the averages: $16.98$ mJy/Beam ($\sim 600\sigma$), $7.37$ mJy/Beam ($\sim 250\sigma$), $0.88$ mJy/Beam ($\sim 30\sigma$), and $0.20$ mJy/Beam ($\sim 7\sigma$), from the top panel to the bottom panel.
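Equation (1) can be evaluated directly. A short sketch with the values used in the plot ($\alpha_{g}=0.02$, $L_{\ast}=11L_{\odot}$); the SI constants are standard values, slightly rounded:

```python
import numpy as np

# SI constants (rounded standard values)
SIGMA_SB = 5.670374e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN = 3.828e26         # solar luminosity [W]
AU = 1.495979e11         # astronomical unit [m]

def t_midplane(r_au, l_star=11.0 * L_SUN, alpha_g=0.02):
    """Midplane temperature of a passively heated disk, Equation (1):
    T_mid = (alpha_g * L_star / (8 pi R^2 sigma_SB))**(1/4)."""
    r = r_au * AU
    return (alpha_g * l_star / (8.0 * np.pi * r**2 * SIGMA_SB)) ** 0.25

# Near the outer peak (~70 au) this comes out close to the ~30 K CO
# freezeout temperature quoted in the abstract; T falls off as R^(-1/2).
t_outer_peak = t_midplane(70.0)
```

The $R^{-1/2}$ falloff is what makes the gray curves in Panel (b) shallow compared with the steeply declining brightness profile in the optically thin outer disk.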
Using the inclination and position angle, we deprojected the image to the face-on view to identify substructures in the disk. In Figure 2, we show the intensity profiles along the major and minor axes in the face-on view. In Figure 2, gray double-sided arrows indicate the location of the faint, low-contrast dust bump, which extends from $R\simeq 0.2$ arcsec ($\simeq 40$ au) to $R\simeq 0.4$ arcsec ($\simeq 80$ au). Moreover, one can see that this bump has a double-peak feature, indicated by the black arrows in the figure: the inner peak is located at $|R|\simeq 0.2$ arcsec ($40$ au) and the outer one at $|R|\simeq 0.35$ arcsec ($70$ au), where $R$ is the offset from the center. Although the double-peak feature is not clear in the image, it is confirmed by the visibility fitting described in Section 3. Outside of $0.5$ arcsec, the intensity decreases quickly, but around $|R|\sim 0.75$ arcsec ($R\sim 150$ au) the slope of the intensity becomes moderate. Figure 3 shows the azimuthal distributions of the intensity in the face-on view. There are asymmetric structures in the azimuthal distribution at $R=0.05$ arcsec ($\sim 10$ au), and the deviation from the averaged value at this radius is at most about 1 mJy/Beam (5% of the averaged value, or $\sim 30\sigma$). The asymmetry at the innermost radii of the disk is also indicated in Figure 2, as the distributions along the major and minor axes do not overlap at $R<20$ mas ($<4$ au). At $R=0.25$ arcsec ($\sim 50$ au), the structure shows a similar pattern of asymmetry to that seen at $R=0.05$ arcsec, and the deviation from the averaged value is about $0.3$ mJy/Beam (about 5% of the averaged value, or $\sim 10\sigma$). At $R=0.5$ arcsec ($\sim 100$ au), one can see a significant asymmetric structure.
The intensity at $<150^{\circ}$ is larger than the averaged value by $0.18$ mJy/Beam ($\sim 6\sigma$) and smaller around $250^{\circ}$ by $0.25$ mJy/Beam ($\sim 9\sigma$). The deviation from the averaged value is about 20% of the averaged value. At $R=0.75$ arcsec ($\sim 150$ au), we may also see an asymmetric structure, as the intensity at $\sim 100^{\circ}$ is larger than the average by $0.14$ mJy/Beam ($\sim 5\sigma$) and smaller than the average at $>200^{\circ}$ by $0.08$ mJy/Beam ($\sim 3\sigma$). We discuss the asymmetric structures in more detail in Section 3.4 by directly analyzing the data in the visibility domain.

Figure 4: Real part of the visibility for the upper and lower band data. In the plot, we combine the C43-4 and C43-8 data. The inset shows a zoom-in of the region with flux $<30$ mJy.

In the rest of this section, we show the difference between the upper ($233.0$ GHz) and lower ($216.7$ GHz) band data. In Figure 4, we compare the real parts of the visibilities of the upper and lower band data. The visibility data are deprojected using the inclination and position angle derived earlier. Then, the data with similar $uv$-distances are binned and averaged. The bin widths are $10k\lambda$ for $\rho<1200k\lambda$ and $20k\lambda$ for $\rho>1200k\lambda$, where $\rho$ is the $uv$-distance of the deprojected visibility data. The error bars in the figure indicate the standard deviation of the data divided by the square root of the number of data points. The amplitudes of the visibilities are clearly different at small $uv$-distances, namely $\rho<200k\lambda$, which corresponds to a spatial scale of $\sim 1$ arcsec. Moreover, one can see that the visibility of the upper band data is larger than that of the lower band data around the peak at $\rho=700k\lambda$. The details of the statistics of the data are described in Appendix A.

Figure 5: Intensity distributions along the major axis for the upper and lower band data.
The error bars denote the $1\sigma$ noise level (=$0.029$ mJy/Beam) and the dashed horizontal line denotes the $3\sigma$ noise level ($0.087$ mJy/Beam). Figure 5 compares the intensity distributions along the major axis for the upper and lower band data. The intensity of the upper band data is slightly larger than that of the lower band data. Indeed, for the upper band data, the total flux density with $>3\sigma$ ($0.087$ mJy) emission is $516.16$ mJy, and for the lower band data it is $430.20$ mJy. We note that at an offset of $\simeq 0.75$ arcsec, the profiles of the upper and lower band data differ, which may be related to the asymmetric structures discussed in Section 3.4.

## 3 MCMC modeling of dust continuum emission

### 3.1 Model description

To examine the structure of the disk in detail, we performed fitting of the dust continuum emission in the visibility domain with a simple disk model. As described in the previous section, the disk could have two peaks, at $\sim 0.3$ arcsec and $0.75$ arcsec. Moreover, we also include an unresolved small cavity. Motivated by the observed features, we adopted a simple power-law intensity profile with an exponential cutoff, two Gaussian bumps, two intensity enhanced/depleted regions, and an inner cavity, as described below:

$$I(R)\propto f_{I}\left(\frac{R}{R_{\rm c}}\right)^{-\gamma}\exp\left[-\left(\frac{R}{R_{\rm c}}\right)^{\zeta}\right]+\sum_{i=1}^{2}H_{i}\exp\left[-\left(\frac{R-R_{\rm rc,i}}{W_{i}}\right)^{2}\right],\qquad(2)$$

where

$$f_{I}=\begin{cases}\delta_{\rm cav}&0<R<R_{\rm cav}\\ 1&R_{\rm cav}<R<R_{\rm g,in,1}\\ \delta_{\rm gc,1}&R_{\rm g,in,1}<R<R_{\rm g,out,1}\\ \delta_{\rm gc,2}&R>R_{\rm g,out,1}\end{cases}\qquad(3)$$

The intensity given by Equation (2) is normalized such that its integral over the entire disk equals the total flux $F_{\rm tot}$, which is one of the parameters of the model.
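The radial profile of Equations (2) and (3) can be sketched in NumPy as follows; the parameter values below are illustrative placeholders, not the fitted values:

```python
import numpy as np

def model_intensity(r, p):
    """Unnormalized radial intensity of Equations (2) and (3);
    `p` is a dict holding the model parameters."""
    r = np.asarray(r, dtype=float)
    # Piecewise factor f_I: cavity depletion inside R_cav, then two
    # enhanced/depleted regions outside R_g,in,1 (Equation 3).
    f_i = np.select(
        [r < p["R_cav"], r < p["R_g_in_1"], r < p["R_g_out_1"]],
        [p["delta_cav"], 1.0, p["delta_gc_1"]],
        default=p["delta_gc_2"])
    power_law = (f_i * (r / p["R_c"]) ** (-p["gamma"])
                 * np.exp(-(r / p["R_c"]) ** p["zeta"]))
    bumps = sum(p[f"H_{i}"] * np.exp(-((r - p[f"R_rc_{i}"]) / p[f"W_{i}"]) ** 2)
                for i in (1, 2))
    return power_law + bumps

# Illustrative placeholder parameters (radii in arcsec); the two Gaussian
# bumps sit near the observed 40 au and 70 au peaks (~0.21" and ~0.37").
params = dict(R_cav=0.02, delta_cav=0.1, R_c=0.35, gamma=0.5, zeta=2.0,
              R_g_in_1=0.45, R_g_out_1=0.60, delta_gc_1=1.2, delta_gc_2=0.8,
              H_1=0.30, R_rc_1=0.21, W_1=0.04,
              H_2=0.25, R_rc_2=0.37, W_2=0.05)
```

In the actual fit this profile would be normalized to $F_{\rm tot}$ and Fourier-transformed into model visibilities (e.g., with `vis_sample`) before comparison with the data.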
The intensity distribution of Equation (2) has 16 parameters, namely: the two exponents $\gamma$ and $\zeta$; the depth and radius of the inner cavity, $\delta_{\rm cav}$ and $R_{\rm cav}$; the characteristic radius $R_{\rm c}$; the total flux density of the disk $F_{\rm tot}$; and the parameters of the substructures: $R_{\rm g,in,1},R_{\rm g,out,1},\delta_{\rm gc,1},\delta_{\rm gc,2}$ for the two enhanced/depleted regions and $R_{\rm rc,1},W_{1},H_{1},R_{\rm rc,2},W_{2},H_{2}$ for the two Gaussian bumps.

### 3.2 Fitting approach

We fit the observational data with the model of Equation (2) in the visibility domain. In this modeling, we focus on the symmetric structure, following Zhang et al. (2015). In the following, $\rho$ indicates the deprojected baseline in the visibility domain. The likelihood function is defined through

$\displaystyle\chi^{2}$ $\displaystyle=\sum_{k=1}^{N}\left(\frac{{\rm Re}\left(\overline{V}\right)_{\rm obs,k}-{\rm Re}\left(\overline{V}\right)_{\rm model,k}}{\sigma_{\rm obs,k}}\right)^{2},$ (4)

where $k$ indicates the index of the radial bin and $N$ is the total number of radial bins. We average the data within each bin in the radial and azimuthal directions in the visibility domain (the overline indicates this average). The bin size is $10k\lambda$ for $\rho<1200k\lambda$ and $20k\lambda$ for $\rho>1200k\lambda$. Since the amplitude of the visibility is comparable to the noise level at $\rho>2000k\lambda$, we used the visibilities in the range $\rho<2000k\lambda$ in this modeling. The real part of the visibility is denoted by ${\rm Re}\left(V\right)$, and the subscripts 'model' and 'obs' indicate the quantities of the model and observation, respectively. The standard deviation of the averaged real part of the visibility, $\sigma_{\rm obs,k}$, is calculated by dividing the standard deviation in the azimuthal direction in the visibility domain by the square root of the number of data points within the bin. For the fitting, we utilized the public Python code `vis_sample` (Loomis et al., 2017).
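The binned $\chi^2$ of Equation (4) can be sketched as follows. This is a minimal toy illustration with synthetic numbers; in the actual fit, the model visibilities come from `vis_sample` rather than the toy arrays used here:

```python
import numpy as np

def bin_average(rho, vis_re, edges):
    """Average Re(V) within uv-distance bins; the uncertainty is the
    standard deviation divided by sqrt(N), as described in the text."""
    means, sigmas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (rho >= lo) & (rho < hi)
        means.append(vis_re[sel].mean())
        sigmas.append(vis_re[sel].std() / np.sqrt(sel.sum()))
    return np.array(means), np.array(sigmas)

def chi2(re_obs, re_model, sigma):
    """Equation (4): chi-squared over binned, azimuthally averaged Re(V)."""
    return np.sum(((re_obs - re_model) / sigma) ** 2)

# Toy data: three visibility points per bin, two radial bins.
rho = np.array([5.0, 6.0, 7.0, 15.0, 16.0, 17.0])
re = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
obs, sig = bin_average(rho, re, edges=[0.0, 10.0, 20.0])
```

A model matching the binned averages exactly gives $\chi^2=0$; a model offset by one error bar in each bin gives $\chi^2=N$.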
We used the Markov Chain Monte Carlo (MCMC) method implemented in the `emcee` package (Foreman-Mackey et al., 2013). We carried out the fitting with the MCMC method using the $\chi^{2}$ given by Equation (4). In the MCMC fitting, we ran 1000 steps with 100 walkers after a burn-in phase of 1000 steps (2000 steps in total).

### 3.3 Fitting results

We found that the C43-4 (compact configuration) data are slightly more scattered than the C43-8 (sparse configuration) data, and that the two data sets are slightly statistically different around $\rho=200k\lambda$ (see Appendix A). Because of this difference between the C43-4 and C43-8 data, the reduced $\chi^{2}$ of the best-fit model deviates significantly from unity when the data are combined. If only the C43-4 data are used, the resolution is not sufficient to identify substructures. Hence, we used only the C43-8 data for the MCMC fitting 222When both the C43-4 and C43-8 data are used, the reduced $\chi^{2}$ is $\sim 6$, though the best-fit parameters are similar to those shown in Table 2. This large reduced $\chi^{2}$ is mainly due to points around $\rho=200k\lambda$.. We performed the MCMC fitting for the upper and lower band data separately. The total flux density of the images synthesized from only the C43-8 data is $10\%$ – $20\%$ smaller than that from both the C43-8 and C43-4 data due to missing flux. However, since the visibilities of the C43-8 data are quite similar to those of the combined C43-8 and C43-4 data, as shown in Figure 19, excluding the C43-4 data should not significantly affect the fitting results.
Table 2: Fitting results for C43-8 data

| Parameter | Upper band (233.0 GHz) best fit | Min | Max | Lower band (216.7 GHz) best fit | Min | Max |
|---|---|---|---|---|---|---|
| $\gamma$ | $0.349^{+0.030}_{-0.025}$ | 0.0 | 0.5 | $0.280^{+0.023}_{-0.021}$ | 0.0 | 0.5 |
| $\zeta$ | $1.946^{+0.079}_{-0.061}$ | 1.8 | 2.3 | $1.489^{+0.062}_{-0.041}$ | 1.0 | 2.0 |
| $R_{\rm c}$ (au) | $51.160^{+2.249}_{-1.777}$ | 38.0 | 57.0 | $45.881^{+1.710}_{-1.499}$ | 26.6 | 76.0 |
| $R_{\rm cav}$ (au) | $1.296^{+0.348}_{-0.490}$ | 0.0 | 1.9 | $0.986^{+0.353}_{-0.450}$ | 0.0 | 1.9 |
| $R_{\rm g,in,1}$ (au) | $35.908^{+0.216}_{-0.211}$ | 30.4 | 45.6 | $35.630^{+0.245}_{-0.314}$ | 30.4 | 45.6 |
| $R_{\rm g,out,1}$ (au) | $81.994^{+3.488}_{-2.973}$ | 57.0 | 114.0 | $71.894^{+1.804}_{-1.949}$ | 57.0 | 114.0 |
| $R_{\rm rc,1}$ (au) | $67.046^{+0.216}_{-0.240}$ | 57.0 | 76.0 | $68.310^{+0.775}_{-0.742}$ | 57.0 | 76.0 |
| $W_{1}$ (au) | $10.835^{+0.613}_{-0.553}$ | 0.4 | 38.0 | $10.990^{+1.308}_{-0.621}$ | 0.4 | 38.0 |
| $\ln(H_{1})$ | $-0.627^{+0.026}_{-0.019}$ | -1.0 | 0.5 | $-0.826^{+0.036}_{-0.024}$ | -1.0 | 0.5 |
| $R_{\rm rc,2}$ (au) | $122.223^{+5.861}_{-6.281}$ | 95.0 | 152.0 | $146.630^{+4.100}_{-6.825}$ | 114.0 | 190.0 |
| $W_{2}$ (au) | $56.530^{+3.597}_{-3.527}$ | 11.4 | 76.0 | $42.890^{+5.221}_{-3.547}$ | 11.4 | 76.0 |
| $\ln(H_{2})$ | $-1.708^{+0.051}_{-0.031}$ | -3.0 | -1.0 | $-2.036^{+0.060}_{-0.050}$ | -3.0 | -1.5 |
| $\ln(\delta_{\rm cav})$ | $-1.055^{+0.498}_{-0.763}$ | -3.0 | 0.0 | $-1.517^{+0.781}_{-0.845}$ | -3.0 | 0.0 |
| $\ln(\delta_{\rm g,1})$ | $0.194^{+0.009}_{-0.010}$ | -0.5 | 1.0 | $0.199^{+0.008}_{-0.010}$ | -0.5 | 1.0 |
| $\ln(\delta_{\rm g,2})$ | $0.305^{+0.037}_{-0.045}$ | -1.0 | 1.0 | $0.091^{+0.055}_{-0.063}$ | -1.0 | 0.5 |
| $F_{\rm tot}$ (mJy) | $493.995^{+0.527}_{-0.553}$ | 470.0 | 510.0 | $418.431^{+0.558}_{-0.593}$ | 390.0 | 430.0 |

Note. — Error ranges of the best-fit parameters correspond to $\pm 1\sigma$.

The fitting results are summarized in Table 2.
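The asymmetric $\pm 1\sigma$ ranges quoted in Table 2 are the kind of summary typically read off an MCMC chain from its 16th/50th/84th percentiles. A sketch with a toy posterior (the chain below is synthetic, roughly mimicking the $\gamma$ entry):

```python
import numpy as np

def summarize(samples):
    """Median and +/-1 sigma errors from the 16th/50th/84th percentiles,
    the usual way emcee posteriors are reported."""
    p16, p50, p84 = np.percentile(samples, [16, 50, 84])
    return p50, p84 - p50, p50 - p16  # value, upper error, lower error

# Toy posterior: Gaussian chain with mean 0.35 and width 0.03.
rng = np.random.default_rng(1)
chain = rng.normal(loc=0.35, scale=0.03, size=200_000)
best, err_up, err_lo = summarize(chain)
```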
The best-fit parameters for the upper and lower band data are slightly different, especially the total flux density and the parameters related to the outer structure, namely $\zeta$, $\delta_{\rm g,2}$, the parameters of the outer peak ($R_{\rm rc,2},W_{2},H_{2}$), and the depth of the inner cavity ($\delta_{\rm cav}$).

Figure 6: The shapes of the best-fit models given by the parameters listed in Table 2, for the upper and lower band data. The inset is a zoom-in of the region with offset $<0.2$ arcsec.

Figure 6 shows the best-fit models for the upper and lower band data. They look very similar: both have a small cavity with a radius of $\sim 1$ au (corresponding to $R_{\rm cav}$) and bump-like excess emission at $R\sim 0.2$ – $0.4$ arcsec with double peaks at $0.2$ arcsec (corresponding to $R_{\rm g,in,1}$, $\simeq 40$ au) and $0.35$ arcsec (corresponding to $R_{\rm rc,1}$, $\simeq 70$ au). The fitting results for both the upper and lower bands indicate that there is a cavity with a radius of $\sim 0.005$ arcsec ($\sim 1$ au). Since the size of the inner cavity is much smaller than the spatial resolution achievable with $\rho<2000k\lambda$, we consider that this structure should be confirmed by future observations. For the outer region, though parameters such as $\gamma$, $\zeta$, and $R_{\rm rc,2}$ are slightly different, the profiles agree with each other.

Figure 7: Real part of the visibilities for the observation (dots) and the best-fit model (solid) for the upper and lower band data, in the upper and middle panels, respectively. The inset shows a zoomed-in view of the region with $<30$ mJy. In the lower panel, we show the difference of the visibilities between the observation and the model. The error bars of the observational data and residuals are estimated by the $1\sigma$ deviation of the average.

Figure 7 compares the visibilities of the model and observation.
As can be seen in the figure, the models reproduce the observed visibilities well; the reduced $\chi^{2}$ is $1.58$ for the upper band model and $1.28$ for the lower band model.

Figure 8: Intensity distributions along the major axis in the models and observations for the upper band data (upper) and the lower band data (middle). The bottom panel shows the residuals between the model and the observation along the major axis.

Figure 8 compares the model and the observation in the image domain. The model image is first converted to an ALMA measurement set by `vis_sample` using the observed measurement set, and we made a mock observational image following the same imaging procedure as for the observed data, with the parameters listed in Table 1. In Figure 8, we show the intensity distributions along the major axis in the mock observational image (model) and the observed image. In the bottom panel of the figure, we show the residuals between the model and the observational data. We calculated the residuals as the difference between the model and the observation in the visibility domain, using `vis_sample`. The residual visibilities are imaged with the `tclean` task using the imaging parameters listed in Table 1. Around the center, one can see residuals larger than $3\sigma$, though the residuals are smaller than or comparable to the $3\sigma$ level in other regions. This discrepancy is related to the asymmetry indicated by the difference between the structures along the major and minor axes shown in Figure 2.

Figure 9: Residuals between the observational data and the model at 233.0 GHz (a) and at 216.7 GHz (b). The contours indicate the levels of $\pm 3\sigma$ and $\pm 5\sigma$. The thick dashed contour lines indicate the observed intensity distribution, at $0.087$ mJy/Beam ($3\sigma$), $0.29$ mJy/Beam ($10\sigma$), $2.9$ mJy/Beam ($100\sigma$), $8.7$ mJy/Beam ($300\sigma$), and $14.5$ mJy/Beam ($500\sigma$) from the outside.
Figure 9 shows the maps of the residuals between the model and the observation for the upper and lower band data. The residual patterns in the upper and lower band data are similar to each other. One can see significant residuals around the center, as also shown in Figure 8. Moreover, the residual maps show positive and negative structures at the upper left (R.A. offset $\simeq-0.4$ arcsec, Dec. offset $\simeq 0.2$ arcsec). The amplitudes of these structures are larger than $5\sigma$, which may indicate real asymmetric structures.

### 3.4 Asymmetric structure

As shown in the previous subsection, the residual map between the model and the observation indicates asymmetric structures at the center and in the outer disk. Here we further investigate this asymmetry of the disk using a model-independent analysis. In the visibility domain, the visibility is expressed by

$\displaystyle V(\vec{\rho})$ $\displaystyle=\int\int I(\vec{R})e^{-j\vec{R}\cdot\vec{\rho}}d\vec{R},$ (5)

where $j=\sqrt{-1}$ is the imaginary unit, $\vec{\rho}$ and $\vec{R}$ indicate the position vectors in the deprojected $uv$ plane and the image plane, respectively, and $I(\vec{R})$ is the intensity distribution. When $I(\vec{R})$ is axisymmetric, we can express the visibility as

$\displaystyle V(\rho)$ $\displaystyle=2\pi\int I(R)J_{0}(R\rho)RdR,$ (6)

where $J_{0}(k)$ is the zeroth-order Bessel function of the first kind. The visibility of an axisymmetric disk has only a real part, and the image does not change if the disk is rotated by $180^{\circ}$ about the disk center. Mathematically, a $180^{\circ}$-rotated image has a visibility which is the complex conjugate of that of the original image. Hence, the visibility of the difference between the original and $180^{\circ}$-rotated images is purely imaginary, namely, twice the imaginary part of the visibility of the original image.
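This conjugate-symmetry argument is easy to verify numerically with a discrete Fourier transform (a toy random image stands in for the ALMA data here):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))          # an arbitrary (asymmetric) real image
vis = np.fft.fft2(img)              # its toy "visibility"

# Rotate the image by 180 degrees about the origin pixel:
# img_rot[n1, n2] = img[(-n1) % N, (-n2) % N].
img_rot = np.roll(np.roll(img[::-1, ::-1], 1, axis=0), 1, axis=1)
vis_rot = np.fft.fft2(img_rot)
```

The rotated image indeed has the conjugate visibility, so the difference image transforms to $2j\,{\rm Im}(V)$, exactly as used in the analysis above.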
When the system does not have significant asymmetric structures, this difference is almost zero because the imaginary part of the visibility of the original image is very small. On the other hand, when the disk has asymmetric structures, we should see some residual between the original and $180^{\circ}$-rotated images, corresponding to the imaginary part of the visibility. This approach to investigating asymmetric structures is completely model-independent.

Figure 10: Imaginary part of the visibility combined from all data.

Figure 11: Image synthesized from the imaginary part of the visibility shown in Figure 10. The contours indicate the levels of $\pm 3\sigma$ ($\pm 0.087$ mJy/Beam) and $\pm 5\sigma$ ($\pm 0.145$ mJy/Beam).

We produced the synthesized image using all the data, namely, the C43-4 and C43-8 data in the upper and lower bands. Figure 10 shows the azimuthally averaged imaginary part of the visibility. The average is calculated in the same way as in Figure 4. The imaginary part of the visibility is much fainter than the real part. However, one can see significant signals as large as a few mJy at $\rho\lesssim 300k\lambda$, which implies an asymmetric structure on a scale of $\gtrsim 0.7$ arcsec ($\gtrsim 140$ au). Figure 11 shows the synthesized image produced only from the imaginary part of the visibility data. We find an asymmetric structure at the center and another at the upper left (R.A. offset $\simeq-0.4$ arcsec ($\simeq-80$ au) and Dec. offset $\simeq 0.2$ arcsec ($\simeq 40$ au)), which is consistent with the difference between the observed image and the axisymmetric model shown in Figure 9. In addition, we see structures at the lower right, namely at R.A. offset $\simeq 0.4$ arcsec and Dec. offset $\simeq-0.2$ arcsec. The Fourier transform of a purely imaginary function must be an odd function, which means that every structure has a counterpart at the point-symmetric location in the synthesized image.
If the asymmetric structure indicated by the imaginary part is real, we should find a signal at the same location in the model-subtracted image. Hence, we consider that the structure at the lower right could be the counterpart of the structure at the upper left. As described in Section 2 and in the results shown above, we determined the phase center by Gaussian fitting with the CASA tool `uvmodelfit`. However, the above analysis of the imaginary part of the visibility depends on the choice of the phase center.

Figure 12: Images synthesized from only the imaginary part of the visibility (deprojected to the disk plane), as in Figure 11, but with different phase centers. The central figure is the same as Figure 11, with the same phase center. The horizontal coordinate of the phase center is shifted by 2 mas ($-2$ mas) from that of the central figure in the left (right) column, and the vertical coordinate of the phase center is shifted by 2 mas ($-2$ mas) in the top (bottom) row. In the middle column (row), the horizontal (vertical) coordinate of the phase center is the same as that of the central figure.

Figure 12 shows how the images (deprojected to the disk plane) produced from the purely imaginary visibility change with the choice of the phase center. In the figure, we shift the center by $\pm 2$ mas ($\pm 0.38$ au) in the horizontal and vertical directions from the phase center of Figure 11. The image is shifted in the visibility domain by the phase factor $\exp\left[2\pi j\left(u\,dx+v\,dy\right)\right]$, where $u$ and $v$ are the spatial frequencies and $dx$ and $dy$ are the shift values in the R.A. and Dec. directions, respectively. It is reasonable to assume that the disk structure is almost axisymmetric, even though the disk has some asymmetric structures. Under this assumption, the image synthesized from the imaginary part with the 'correct' phase center has the minimum root mean square of the intensity.
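The phase-shift factor above can be sketched and checked on a toy 1-D signal, where a unit image-plane shift must reduce to a cyclic roll (units here are hypothetical: $u$ in cycles per sample, not $k\lambda$):

```python
import numpy as np

def shift_visibility(vis, u, v, dx, dy):
    """Shift the image plane by (dx, dy) via the visibility phase factor
    exp[2*pi*j*(u*dx + v*dy)]."""
    return vis * np.exp(2j * np.pi * (u * dx + v * dy))

sig = np.arange(8, dtype=float)     # toy 1-D "image"
vis = np.fft.fft(sig)
u = np.fft.fftfreq(sig.size)        # spatial frequencies in cycles per sample
shifted = np.fft.ifft(shift_visibility(vis, u, 0.0, dx=1.0, dy=0.0)).real
```

Applying the factor with $dx=1$ and transforming back reproduces the signal cyclically shifted by one sample.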
We calculated the root mean square (RMS) of the intensity over the pixels within a radius of 0.6 arcsec (114 au) from the center, which is labeled RMS at the upper left corner of each panel. The panel with our fiducial phase center has the minimum RMS among the listed panels. Hence, the phase center determined by the Gaussian fit is consistent with the center that minimizes the asymmetry. On the other hand, in the panel at the upper left corner ($dx=2$ mas and $dy=2$ mas), the asymmetric structure around the center almost vanishes. This indicates that the center of the inner structure is shifted from the center of the outer structure.

## 4 Modeling by radiative transfer simulations

### 4.1 Setup and model description

We now have a model for the intensity distribution of the disk around WW Cha. In this section, we discuss the physical conditions (e.g., dust surface density, size distribution of the dust grains, and temperature) of the disk, using radiative transfer simulations with RADMC-3D 333http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/index.html (Dullemond et al., 2012). We set up a model with an axisymmetric dust surface density distribution motivated by the intensity distribution derived in the previous section:

$\displaystyle\Sigma(R)=\Sigma_{0}\left[f_{I}\left(\frac{R}{R_{\rm c}}\right)^{-s}\exp\left[-\left(\frac{R}{R_{\rm c}}\right)^{\zeta}\right]\right.$ $\displaystyle\qquad\qquad+\left.\sum_{i=1}^{2}H_{i}\exp\left[-\left(\frac{R-R_{\rm rc,i}}{W_{i}}\right)^{2}\right]\right].$ (7)

Here $f_{I}$ is defined by Equation (3), and $\zeta$, $R_{\rm c}$, $H_{i}$, $W_{i}$, $R_{\rm rc,i}$, and the parameters in $f_{I}$ ($\delta_{\rm cav}$, $\delta_{\rm gc,1}$, $\delta_{\rm gc,2}$, $R_{\rm cav}$, $R_{\rm g,in,1}$, $R_{\rm g,out,1}$) are fixed to the values given in Table 2 (for the C43-8 data).
For the parameter $s$, which sets the slope of the dust surface density, we use $s=0$; this differs from the best-fit value of $\gamma$, which is $\sim 0.35$. We confirmed that the choice of $s$ hardly affects the estimates of the physical parameters: when we use $s=0.5$, the midplane temperature changes by only a few Kelvin. The parameter $\Sigma_{0}$ controls the total dust mass $M_{\rm dust}$; when $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$, $M_{\rm dust}=3\times 10^{-3}M_{\odot}$. We first vary the stellar luminosity and $\Sigma_{0}$ and check the agreement with the observations, in order to address the uncertainty in the estimates of the disk physical parameters. We then vary the dust size in order to address the spectral index distribution. Here we present a physical disk and star model that reasonably matches the observations; full modeling studies that derive the uncertainties of all the parameters are beyond the scope of this paper. The rest of the setup of the simulation is as follows. In the vertical direction, we adopt a Gaussian density distribution, namely,

$\displaystyle\rho(R,z)$ $\displaystyle=\frac{\Sigma(R)}{\sqrt{2\pi}h}\exp\left(-\frac{z^{2}}{2h^{2}}\right),$ (8)

where $h$ is the scale height of the dust layer. The dust scale height can be smaller than that of the gas, due to dust settling (e.g., Nakagawa et al., 1986). Hence, we consider two dust components: one is a small dust component with a size distribution $\propto s^{-3.5}$, where $s$ is the size of the grains, with minimum and maximum sizes of $0.1\ \mu\mbox{m}$ and $0.1\ \mbox{mm}$, respectively. The other is a large dust component whose size follows a Gaussian distribution in logarithmic space. The peak of the size distribution, $s_{\rm d,large}$, is $1$ mm, with a smallest size of $0.6$ mm and a largest size of $1.7$ mm. We assume that the scale height of the large grains is 0.1 times that of the gas.
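The vertical structure of Equation (8) can be checked numerically: integrating $\rho(R,z)$ over $z$ must recover $\Sigma(R)$. The scale-height value below is purely illustrative, not one of the model parameters:

```python
import numpy as np

def rho_dust(z, sigma, h):
    """Equation (8): Gaussian vertical profile with surface density sigma
    and dust scale height h (any consistent units)."""
    return sigma / (np.sqrt(2.0 * np.pi) * h) * np.exp(-z ** 2 / (2.0 * h ** 2))

sigma = 1.5                        # the fiducial Sigma_0 of the text, g/cm^2
h = 2.5                            # illustrative dust scale height, au
z = np.linspace(-20.0 * h, 20.0 * h, 20001)
dens = rho_dust(z, sigma, h)

# Trapezoidal column integral; should equal sigma by construction.
col = np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(z))
```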
The mass ratio between the small and large grains is set to 1:9. We adopt the same dust grain composition as that adopted in Birnstiel et al. (2018)444The optical constant file was provided by Dr. Ryo Tazaki.. The absorption and scattering coefficients for each component are averaged over the size distribution. The mass and surface temperature of the central star are set to $1.2M_{\odot}$ and 4350 K, respectively (Luhman, 2007). From SED modeling, the stellar luminosity is estimated to be $11L_{\odot}$ (Garufi et al., 2020). Since WW Cha is a young star, however, it could still be embedded in its natal core (Ribas et al., 2013; Garufi et al., 2020). In that case, the luminosity may be underestimated due to high extinction (Follette et al., 2015). Hence, we also consider the case with $L_{\ast}=22L_{\odot}$. The radial coordinate extends from $0.1$ au to $1000$ au, divided into 256 cells with logarithmic spacing. The azimuthal angle $\theta$ and polar angle $\phi$ are each divided into 256 cells over $0<\theta<2\pi$ and $0<\phi<\pi$ (the midplane is located at $\phi=\pi/2$), respectively. We adopted $3\times 10^{8}$ photons for the thermal Monte Carlo radiative transfer and imaging, and $10^{7}$ photons for the SED. We first carried out radiative transfer simulations with the disk scale height calculated from the empirical temperature formula given by Dong et al. (2018a), namely, $T=220(L_{\ast}/11L_{\odot})^{1/4}(R/{\rm 1\,au})^{-0.5}$ (quite similar to Equation 1). After the first run, we calculated the temperature on the midplane at each radius and performed the simulation again with the scale height given by the midplane temperature. We repeated this cycle until the temperature distribution converged; in our case, it converged after 2–3 iterations. For the imaging, we converted the output of RADMC-3D to an ALMA measurement set, based on the observed measurement set, using `vis_sample`.
Then, we synthesized the image from the model measurement set with the imaging parameters listed in Table 1.

### 4.2 Results

#### 4.2.1 Intensity and spectral energy distribution

Figure 13: Brightness temperature distributions along the major axis of the disk, for the observation (black dots) and the results of the simulations with $L_{\ast}=11L_{\odot},\Sigma_{0}=3\mbox{ g/cm}^{2}$ (red dashed) and with $L_{\ast}=22L_{\odot},\Sigma_{0}=1.5\mbox{ g/cm}^{2}$ (cyan solid).

Figure 13 compares the brightness temperatures given by the observations and the simulations. With $L_{\ast}=11L_{\odot}$, we need $\Sigma_{0}=3\mbox{ g/cm}^{2}$, which corresponds to a total dust mass of $7\times 10^{-3}M_{\odot}$. In this case, the disk is highly gravitationally unstable over most of its extent if the gas-to-dust mass ratio is 100, as shown below. If the stellar luminosity is larger by a factor of two ($L_{\ast}=22L_{\odot}$), we find that $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$ is enough to reproduce the observation.

Figure 14: Spectral energy distribution given by observations (dots) and the simulations with $L_{\ast}=11L_{\odot},\Sigma_{0}=3\mbox{ g/cm}^{2}$ (red dashed) and with $L_{\ast}=22L_{\odot},\Sigma_{0}=1.5\mbox{ g/cm}^{2}$ (cyan solid). The crosses indicate the total fluxes given by our observations at 233 GHz (494 mJy) and 216.7 GHz (418 mJy). The observational data with $\lambda<1$ mm are extracted from the VizieR database (https://vizier.u-strasbg.fr/) and the references are given in the main text.

Figure 14 illustrates the SED at $0.3\ \mu\mbox{m}$ – 2 cm, given by the previous observations (Lommen et al., 2007, 2009; Gutermuth et al., 2009; Ishihara et al., 2010; Cutri et al., 2014; Pascucci et al., 2016; Ribas et al., 2017), including our result, and the simulations. Both simulations, with $L_{\ast}=11L_{\odot}$ and $22L_{\odot}$, reproduce the ALMA band 6 flux ($\sim 230$ GHz).
At longer wavelengths, namely $3$ mm $<\lambda<1$ cm, the simulations agree with the observations. The simulated fluxes at $\lambda>1$ cm are smaller than the observed fluxes, though this could be due to the contribution of free-free emission from the star (Rodmann et al., 2006). In the case with $L_{\ast}=11L_{\odot}$, the flux at far-infrared wavelengths is about a factor of two smaller than the observed values; in this case, we would need some contribution from the envelope to account for the infrared flux. On the other hand, the case with $L_{\ast}=22L_{\odot}$ can reproduce the fluxes from the infrared to the radio with the stellar emission alone.

#### 4.2.2 Dust density and temperature

Figure 15: (Upper) Dust surface density and Toomre's Q-value (with the gas-to-dust ratio being 100) in the case of $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$ (in the case of $\Sigma_{0}=3\mbox{ g/cm}^{2}$, $\Sigma$ simply becomes two times larger and the Q-value decreases correspondingly). The horizontal thin line indicates $Q=1$. (Lower) Midplane temperatures given by the radiative transfer simulations. The two horizontal lines indicate $50$ K and $30$ K, from the top. The stellar luminosity affects the estimate of the mass of the gas disk, but it does not significantly affect the midplane temperature.

The upper panel of Figure 15 shows the distributions of the dust surface density and Toomre's Q-value (Toomre, 1964) for $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$, assuming a gas-to-dust ratio of 100. The lower panel of Figure 15 shows the midplane temperatures given by the simulations. The midplane temperature with $L_{\ast}=22L_{\odot}$ is about 1.16 times higher than that with $L_{\ast}=11L_{\odot}$, roughly corresponding to the $L_{\ast}^{1/4}$ dependence expected from Equation (1). We find that the disk is expected to have relatively low values of Toomre's Q.
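Toomre's Q can be estimated directly from the quantities quoted in the text. A rough sketch in cgs units (the representative radius and temperature below are illustrative picks within the ranges shown in Figure 15, not values from the paper's tables):

```python
import numpy as np

G, M_sun, au = 6.674e-8, 1.989e33, 1.496e13   # cgs constants
k_B, m_H = 1.381e-16, 1.673e-24

def toomre_Q(R_au, T, sigma_dust, M_star=1.2 * M_sun, g2d=100.0, mu=2.34):
    """Q = c_s * Omega / (pi * G * Sigma_gas), with Sigma_gas = g2d * Sigma_dust."""
    R = R_au * au
    c_s = np.sqrt(k_B * T / (mu * m_H))       # isothermal sound speed
    Omega = np.sqrt(G * M_star / R ** 3)      # Keplerian frequency
    return c_s * Omega / (np.pi * G * g2d * sigma_dust)

# Sigma_dust ~ 1.5 g/cm^2 and T ~ 40 K near the bump give Q of order unity,
# consistent with the marginal instability discussed in the text.
Q = toomre_Q(R_au=50.0, T=40.0, sigma_dust=1.5)
```

Doubling $\Sigma_0$ halves Q, which is the factor-of-two statement made for the $\Sigma_{0}=3\mbox{ g/cm}^{2}$ case.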
As shown in the upper panel of Figure 15, in the case of $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$, the Q-value is smaller than or close to unity at 20 au $<R<$ 100 au. In particular, the dust bump ($40$ au – $70$ au) is gravitationally unstable, because $Q\lesssim 1$ there, and the bump may fragment. In the case of $\Sigma_{0}=3\mbox{ g/cm}^{2}$, the Q-values are decreased by a factor of two. In this case, the disk is gravitationally unstable, and fragmentation is expected at $R>10$ au, along with turbulence due to gravitational instability. However, no fragment-like structure is seen in the dust-continuum image. This may indicate that the gas-to-dust ratio is smaller than 100 in the region beyond $20$ au; this is discussed in Section 5.1. We note that the temperatures at the peak locations are $\sim 30$ – $50$ K, which is close to the freeze-out temperature of CO on water ice. This may be related to the origin of the bump structure, as discussed in Section 5.1.

#### 4.2.3 Spectral index

Figure 16: Spectral index maps given by radiative transfer simulations, with $s_{\rm d,large}=1\mbox{ mm}$ (left panel) and $s_{\rm d,large}=0.5\mbox{ mm}$ (right panel). The contours indicate the same flux levels shown in panel (b) of Figure 1.

Using the images given by the simulations in the upper and lower bands, we produced maps of the spectral index using the `tclean` task with `nterms=2`. Although it is calculated from a very narrow frequency range, we may constrain the dust size distribution from it. The spectral index depends on the peak size of the large grains, $s_{\rm d,large}$, rather than on the stellar luminosity or the dust surface density. Hence, we carried out additional simulations with $s_{\rm d,large}=0.5\mbox{ mm}$. In Figure 16, we show the spectral index maps in the cases of $s_{\rm d,large}=1.0\mbox{ mm}$ and $0.5\mbox{ mm}$, with $L_{\ast}=22L_{\odot}$ and $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$.
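The two-point spectral index underlying these maps is, in essence, the logarithmic flux ratio between the two sub-bands (the `tclean` `nterms=2` fit is more elaborate, so this is only a sketch):

```python
import numpy as np

def spectral_index(I_hi, I_lo, nu_hi=233.0, nu_lo=216.7):
    """alpha = dln I / dln nu from the two band-6 sub-bands (GHz)."""
    return np.log(I_hi / I_lo) / np.log(nu_hi / nu_lo)

# Optically thick Rayleigh-Jeans emission (I ~ nu^2) gives alpha = 2 exactly,
# which is why index values below 2 are diagnostic of the grain properties.
alpha_rj = spectral_index(233.0 ** 2, 216.7 ** 2)

# The total flux densities of Table 2 (494.0 and 418.4 mJy) give the
# disk-averaged index.
alpha_tot = spectral_index(494.0, 418.4)
```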
In the case of $s_{\rm d,large}=1.0\mbox{ mm}$, the spectral index increases in the outer region, which agrees with the observations (panel (b) of Figure 1), but it is larger than 2 around the center. In the case of $s_{\rm d,large}=0.5\mbox{ mm}$, on the other hand, the spectral index is below 2 around the center, but it is much smaller than the observed index in the outer region.

Figure 17: Distributions of the spectral index along the major axis in the cases of $s_{\rm d,large}=1.0\mbox{ mm}$ (red) and $0.5\mbox{ mm}$ (cyan). In the upper panel, $L_{\ast}=22L_{\odot}$ and $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$, while in the lower panel, $L_{\ast}=11L_{\odot}$ and $\Sigma_{0}=3.0\mbox{ g/cm}^{2}$. The dots indicate the observed distribution shown in panel (d) of Figure 1.

Figure 17 compares the distributions of the spectral index along the major axis between the simulations and the observation. As can be seen in the figure, the distributions are similar for the case with $L_{\ast}=22L_{\odot}$, $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$ and the case with $L_{\ast}=11L_{\odot}$, $\Sigma_{0}=3.0\mbox{ g/cm}^{2}$. In both cases, for the outer region at offset $>0.25$ arcsec, the observed feature that the spectral index increases outward is consistent with the radiative transfer calculations for $s_{\rm d,large}=1.0\mbox{ mm}$. For the inner region at $<0.25$ arcsec, the spectral index below 2 is consistent with the case of $s_{\rm d,large}=0.5\mbox{ mm}$. The spectral index also depends on the shape of the size distribution of the large grains, though the intensity does not significantly depend on it. In Appendix C, we present radiative transfer simulations with large grains that have a power-law size distribution like that of the small grains. With the power-law distribution, the spectral index does not fall below 2, even if the maximum size of the dust grains is 0.5 mm.
Hence, the log-normal size distribution of large grains may be preferred for the inner region. On the other hand, the power-law distribution may be preferred for the outer region, as can be seen in Figure 23. We should note that the spectral index discussed in this paper is derived from a very narrow range of observed frequencies within band 6. The spatial variation of the spectral index should be investigated by future observations at multiple wavelengths. In Appendix D, we show a few examples of spectral indices between ALMA bands, calculated from the results of the radiative transfer simulations. The spectral index can differ depending on the choice of the bands used to calculate it. We may be able to constrain the dust size distribution from the spectral indices in multiple bands.

## 5 Discussion

### 5.1 Origin of substructures

#### 5.1.1 Ring

We found a bump with double peaks at 40 – 80 au from the central star by model fitting in the visibility domain. One of the peaks is at $\sim 40$ au and the other is at $\sim 70$ au. Such a bump structure can be formed by dust trapping due to a pressure bump (e.g., Pinilla et al., 2012; Dullemond et al., 2018). However, a bump formed by this mechanism likely has a single peak, while the bump in our model has double peaks. If there are two pressure bumps, that might explain the double-peaked shape. Another possible location where a dust bump is likely to form is the outer edge of a planet-induced gap (e.g., Paardekooper & Mellema, 2004; Muto & Inutsuka, 2009; Zhu et al., 2012; Pinilla et al., 2015; Dong et al., 2015; Kanagawa et al., 2018). However, we did not detect any gap structure interior to the bump. Hence, the bump might not be induced by dust trapping at a pressure bump or a planetary gap edge. One plausible scenario for producing a bump with double peaks involves snowlines.
A bump can form when volatile freeze-out alters the coagulation and fragmentation of dust grains (e.g., Zhang et al., 2015; Okuzumi et al., 2016). As shown in Figure 15, the temperature around the outer peak is about $30$ K, which is close to the freeze-out temperature of CO on water ice (Huang et al., 2018). Around the inner peak, the temperature is about $50$ K. Although this is not close to the condensation temperatures of the major volatiles (e.g., CO, CO2), it is close to the condensation temperature of H2S (Zhang et al., 2015). Moreover, at $T\sim 50$ K, sintering can occur for some species such as H2S and C2H6 (Okuzumi et al., 2016). Hence, this bump with double peaks may be formed by snowline and sintering effects.

#### 5.1.2 Asymmetric structure

As discussed in Section 3.4, asymmetric structures are suggested in the outer region of the disk: a positive (bright) structure at R.A. offset $\simeq-0.5$ arcsec and a negative (dark) structure at R.A. offset $\simeq-0.3$ arcsec.

Figure 18: Comparison between the SPHERE image (Garufi et al., 2020) (color) and the residual map for the upper band data shown in Figure 9 (contours). The contours show the $\pm 3\sigma$ and $\pm 5\sigma$ levels. The solid and dashed contours indicate positive and negative excesses, respectively.

Recently, Garufi et al. (2020) observed a bright spiral in the disk of WW Cha with SPHERE, which wraps around from east to north in the clockwise direction. In Figure 18, we compare the infrared image given by SPHERE with the image synthesized from the imaginary part of the visibility (the same as that shown in Figure 11). The spiral feature in the SPHERE image has bright and faint parts. The location of the positive asymmetric feature in the sub-mm observations coincides with the bright NIR spiral, while that of the negative asymmetric feature coincides with the faint part of the spiral in the NIR observation.
In tandem with the radiative transfer modeling, we suggest that the spiral features are formed by gravitational instability. When the value of Toomre’s Q is smaller than $\sim 1$, the effects of self-gravity become prominent and spiral features appear (e.g., Lodato & Rice, 2004). Hence, as shown in Figure 15, the disk can be gravitationally unstable at $R=40$ – $80$ au when $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$, if the gas-to-dust ratio is 100. If the disk were highly gravitationally unstable, we would observe clear spiral waves or fragmented structures. However, we do not see clear, significant spirals or fragments in the disk, except for the relatively weak structure at the upper left. Hence, the gas-to-dust ratio may be smaller than 100, especially within the bump region. The faint positive structure at the upper left might be explained by gravitational instability if the disk is marginally stable with a smaller gas-to-dust ratio. Alternatively, a significant spiral pattern may be hidden because the disk is optically thick. In this case, the spiral might be observed at longer wavelengths. Another possible mechanism for producing asymmetry is dust concentration in a vortex. Several protoplanetary disks have been observed to have vortex structures, for instance, HD 142527 (Fukagawa et al., 2013; Soon et al., 2019), Sz 91 (Tsukagoshi et al., 2014; Canovas et al., 2016), Oph IRS 48 (van der Marel et al., 2015), and MWC 758 (Dong et al., 2018b). One possible mechanism for forming such a structure is the trapping of dust grains in a vortex formed by the Rossby wave instability (e.g., Lovelace et al., 1999; Li et al., 2000; Lin, 2014; Fu et al., 2014; Ono et al., 2016). However, unlike in the above disks, the asymmetric structure found in the disk of WW Cha is so faint that it is not visible in the raw image (Figure 1). It is visible only in the image with subtraction (the right panel of Figure 9 and Figure 11).
The region of $R>0.3$ arcsec can be optically thin, and the intensity of the asymmetric structure ($\sim 5\sigma$) is just 5 % of that of the symmetric structure ($\sim 100\sigma$ at $R\simeq 0.4$ arcsec, as can be seen in Figure 1). Such a faint structure might be difficult to form by dust trapping in a vortex. The negative asymmetric structure cannot be explained by the above mechanisms; further theoretical models are required to discuss its origin. The asymmetric structure may originate from the collapsing cloud/envelope. The large-scale structure observed by, e.g., SPHERE (Garufi et al., 2020) indicates the presence of an envelope, but the system is not significantly embedded, as the extinction is not so large ($A_{V}\sim 4.8$ mag (Ribas et al., 2013), compared to, e.g., $24$ – $36$ mag for the HL Tau system (ALMA Partnership et al., 2015)). Since the star has a high accretion rate, the asymmetry may be associated with accretion variability or a jet. So far, we do not see significant jet-like structures in either the SPHERE or our dust continuum images, so it is difficult to address this further with the current observations. Further observations (e.g., H$\alpha$ observations to see accretion variability or jet structure) would be required.

### 5.2 Inner cavity and binary

Although a large cavity is ruled out by the observed image shown in Figure 1, our model allows the existence of a small cavity with a radius of about $1$ au. However, further observations are required to confirm the small cavity, because it is much smaller than the angular resolution with the baselines of $<2000k\lambda$. Moreover, we should note that there are relatively large uncertainties in the MCMC fitting of the cavity size ($R_{\rm cav}$) and depth ($\delta_{\rm cav}$), as seen in Table 2. This indicates that our fitting cannot rule out a solution with no cavity.
If the disk has a small cavity, it can be formed by binary interaction (e.g., Artymowicz & Lubow, 1994; Dunhill et al., 2015; Miranda et al., 2017; Thun et al., 2017; Price et al., 2018). The binary separation $a_{\rm bin}$ may be estimated from the cavity radius $R_{0}$ as $R_{0}=2.5a_{\rm bin}$ (Artymowicz & Lubow, 1994), which gives $a_{\rm bin}\simeq 0.4$ au. This separation is roughly consistent with the binary motion observed by Anthonioz et al. (2015). As shown in Section 3.4, the center of the inner structure can be offset by $\sim 2$ mas from the center of the outer structure. When the eccentricity of the binary motion is relatively large, the shape of the cavity induced by the binary interaction is also eccentric (e.g., Thun et al., 2017). On the other hand, the outer disk can retain a symmetric shape because of the small binary separation. Hence, the binary eccentricity may be relatively large, if the star is a binary.

### 5.3 Dust size distribution

Finally, we discuss the size distribution of the dust grains in the disk from the spectral index map, though it is calculated from a narrow range of the observed frequency. As can be seen in panels (b) and (d) of Figure 1, the spectral index below 2 around the center can be induced by optically thick scattering emission (Zhu et al., 2019; Liu, 2019), and the model with $s_{\rm d,large}=0.5\mbox{ mm}$ can reproduce such a spectral index. In the outer region of $>0.25$ arcsec from the center ($R>50$ au), the spectral index increases with radius, which is consistent with the model with $s_{\rm d,large}=1.0\mbox{ mm}$. This feature implies that the size of the large dust grains is larger in the outer disk. Here, we discuss how such a distribution can be realized.
When the Stokes number of the dust grains is much smaller than unity, the radial drift velocity can be written as (e.g., Nakagawa et al., 1986) $\displaystyle v_{\rm R,dust}\simeq-2St\eta V_{R},$ (9) where $V_{R}$ denotes the Keplerian rotation velocity, $St$ is the Stokes number of the dust grains given by $\pi\rho_{d}s_{d}/(2\Sigma_{\rm gas})$ ($\rho_{d}$ is the internal density of the dust), and $\eta=d\ln P/d\ln R$ is $\sim 10^{-3}$ in conventional disk models. Our model indicates $\Sigma_{\rm dust}\sim 1.0\mbox{ g/cm}^{2}$ (Figure 15), and hence the Stokes number of 1 mm-sized dust is about $0.005$ when the gas-to-dust ratio is 100 and $\rho_{d}=3\mbox{ g/cm}^{3}$. The radial drift timescale of the grains, $\tau_{\rm drift}=-R/v_{\rm R,dust}$, can be estimated as $\displaystyle\tau_{\rm drift}\simeq$ $\displaystyle 5.5\mbox{ Myr}\left(\frac{\Sigma_{\rm dust}}{1\mbox{ g/cm}^{2}}\right)\left(\frac{\epsilon}{100}\right)\left(\frac{\rho_{d}}{3\mbox{ g/cm}^{3}}\right)^{-1}$ $\displaystyle\times\left(\frac{s_{\rm dust}}{1\mbox{ mm}}\right)^{-1}\left(\frac{\eta}{10^{-3}}\right)^{-1}\left(\frac{R}{50\mbox{ au}}\right)^{3/2},$ (10) where $\epsilon$ is the gas-to-dust ratio and $s_{\rm dust}$ is the size of the dust grains. WW Cha is young, with an age of $\lesssim 1$ Myr, which is shorter than the drift timescale of $1$ mm-sized dust and longer than the growth timescale of the dust, $\tau_{\rm growth}\simeq\epsilon/\Omega_{\rm K}=0.005\mbox{ Myr}(50\mbox{ au}/R)^{3/2}$ with $\epsilon=100$ (Brauer et al., 2008). Hence, the 1 mm-sized dust grains observed in the outer region are consistent with dust drift. In the inner region, the drift timescale of the dust grains becomes shorter. Considering that the star is as young as $<1$ Myr, the drift timescale can be comparable to the stellar age at $R<16$ au. This may explain why the size of the dust grains is smaller there than in the outer region.
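The numbers quoted above are easy to check by evaluating the scaling of Eq. (10) directly. The snippet below is our own illustration, not the authors' code; the keyword defaults are the fiducial normalization values appearing in Eq. (10).

```python
# Sketch: numerical evaluation of the radial drift timescale scaling,
# Eq. (10), valid for Stokes numbers much smaller than unity.

def drift_timescale_myr(sigma_dust=1.0, eps=100.0, rho_d=3.0,
                        s_dust_mm=1.0, eta=1e-3, r_au=50.0):
    """Radial drift timescale in Myr, following the scaling of Eq. (10).

    sigma_dust in g/cm^2, rho_d in g/cm^3, grain size in mm, R in au.
    """
    return (5.5
            * (sigma_dust / 1.0)
            * (eps / 100.0)
            * (rho_d / 3.0) ** -1
            * (s_dust_mm / 1.0) ** -1
            * (eta / 1e-3) ** -1
            * (r_au / 50.0) ** 1.5)

# Fiducial case: 1 mm grains at R = 50 au drift on a ~5.5 Myr timescale,
# longer than the stellar age of <1 Myr.
print(drift_timescale_myr())           # → 5.5
# Closer in, the drift timescale becomes comparable to the stellar age:
print(drift_timescale_myr(r_au=16.0))  # ~1 Myr at R = 16 au
```

This reproduces the statement in the text that drift is too slow to matter at 50 au but becomes comparable to the stellar age inside $R\sim 16$ au.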
The size of the grains can be determined by turbulent fragmentation (Birnstiel et al., 2010). In this case, the $\alpha$-parameter for which the fragmentation threshold size equals $\sim 1$ mm is $\displaystyle\alpha$ $\displaystyle=1.57\times 10^{-2}\left(\frac{\Sigma_{\rm dust}}{1\mbox{ g/cm}^{2}}\right)\left(\frac{\epsilon}{100}\right)$ $\displaystyle\times\left(\frac{\rho_{d}}{1\mbox{ g/cm}^{3}}\right)^{-1}\left(\frac{s_{\rm dust}}{1\mbox{ mm}}\right)^{-1}\left(\frac{v_{f}}{10\mbox{ m/s}}\right)^{2}\left(\frac{T}{50\mbox{ K}}\right)^{-1},$ (11) where $v_{f}$ is the fragmentation threshold velocity. Hence, considering the mid-plane temperature shown in Figure 15, we can estimate $\alpha\simeq 10^{-2}$. This relatively large $\alpha$ is consistent with the high accretion rate onto the star (Manara et al., 2016), whereas it is larger than the values of $\alpha\lesssim 10^{-3}$ estimated by recent observations of disks around older stars, namely HD 163296 (Flaherty et al., 2015) and TW Hya (Teague et al., 2016; Flaherty et al., 2018). Such a high viscosity may be due to turbulence induced by gravitational instability (e.g., Boley et al., 2006). As mentioned in Section 4.2.3, we note that the above discussion is based on the spectral index provided only from the very narrow range within band 6. The spectral index should be investigated by future multi-wavelength observations.

## 6 Conclusion

We presented the dust continuum emission of the protoplanetary disk around WW Cha, observed with ALMA band 6. Our conclusions are summarized as follows:

1. The dust continuum image clearly shows no large cavity and reveals a faint dust bump (Figure 1). We also found an asymmetric structure at the center (Figure 2). Moreover, since the visibility is clearly different between the upper ($233.0$ GHz) and lower ($216.7$ GHz) bands (Figure 4), we can obtain the spectral index map.
The spectral index around the center is below 2, and it becomes larger in the outer region.

2. We constructed a model to fit the observation in the visibility domain by MCMC fitting (Section 3). Our model has a bump extending from $\sim 40$ au to $\sim 80$ au from the central star, with two local peaks located at $\sim 40$ au and $\sim 70$ au. According to the radiative transfer simulations (Figure 15), the midplane temperature around the outer peak is close to the freeze-out temperature of CO on water ice ($\sim 30$ K). The midplane temperature around the inner peak is about $50$ K, which is close to the condensation temperature of H2S and also to the temperature at which sintering can occur for several species. Hence, this bump may be induced by the snowline and the sintering effect.

3. The residual map between the observed data and the best-fit model indicates asymmetric structures at the center and the upper left of the disk. We also confirmed these asymmetric structures by a model-independent method, namely imaging of the imaginary part of the visibility of the observed data (Section 3.4). These structures are likely robust, though their amplitudes are faint ($\sim 5\sigma$ level) compared with the symmetric structure.

4. The spectral index map given by the observation may be consistent with the results of radiative transfer simulations with relatively large dust grains (1 mm) in the outer region, whereas the result of the simulation with smaller dust grains (0.5 mm) is suitable for the region close to the center (Figures 16 and 17). This implies that the size of the dust grains is larger in the outer region than in the inner region. As discussed in Section 5.3, such a distribution is consistent with the radial drift and collisional growth of the dust grains, because of the massive disk around a young star.

We thank Ryo Tazaki for providing the optical constant data used to calculate the opacity of the dust grains.
KDK was also supported by the JSPS Core-to-Core Program “International Network of Planetary Sciences” and JSPS KAKENHI. This work is in part supported by JSPS KAKENHI grant Nos. 18H05441 and 17H01103. Y.H. is supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. H.B.L. is supported by the Ministry of Science and Technology (MoST) of Taiwan (grant Nos. 108-2112-M-001-002-MY3). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2017.1.00286.S. ALMA is a partnership of ESO (representing its member states), NSF (U.S.), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. Data analysis was carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan. Radiative transfer simulations were carried out on analysis servers at the Center for Computational Astrophysics, National Astronomical Observatory of Japan.

## Appendix A Statistics of visibility data

Figure 19: (Upper) Real parts of the visibility of the C43-4 (compact array configuration) and C43-8 (sparse array configuration) data in the upper and lower bands. The visibilities are averaged within bins whose radial width is the same as that described in Section 2 and whose azimuthal width is $0.2\pi$. The thin solid lines indicate the visibilities shown in Figure 4. (Lower) Standard deviation of the data within each bin. In this appendix, we show the statistics of the visibility data. In the upper panel of Figure 19, we show the real parts of the visibility for the C43-4 (compact configuration) and C43-8 (sparse configuration) data in the upper and lower bands, separately.
The visibilities in the figure are averaged within bins of the same $uv$-distance width as in Figure 4, but with an azimuthal width of $0.2\pi$ instead of the full $2\pi$ used in Figure 4. Hence, the figure enables us to see the scatter of the data in the azimuthal direction. For reference, we also plot the visibility shown in Figure 4. In the lower panel of Figure 19, we show the standard deviation of the data within each bin. The visibilities of the C43-4 and C43-8 data are similar to each other. The standard deviations are similar in all the data, namely $\sigma\simeq 20\mbox{ mJy}$. However, around $\rho=200k\lambda$, there are some points with larger standard deviations in the C43-4 data (especially in the upper band data). Because of this data scatter, the value of $\chi^{2}$ increases around $\rho=200k\lambda$ when the short- and long-baseline data are combined for the MCMC fitting. Figure 20: The same as Figure 19, but for the imaginary parts. Figure 20 is the same as Figure 19 but for the imaginary part of the visibility. The standard deviation of the imaginary part is larger than that of the real part at shorter baselines, namely $\rho\lesssim 200k\lambda$, whereas it is comparable with that of the real part at long baselines. Hence, the imaginary part of the visibility shown in Figure 10 has a relatively large error at short baselines.

## Appendix B Posteriors of MCMC fitting

The posterior of the MCMC fitting for the upper band data is shown in Figure 21 and that for the lower band data is shown in Figure 22. Figure 21: Posterior of the MCMC fitting for the upper band data. Figure 22: Posterior of the MCMC fitting for the lower band data.

## Appendix C Spectral index map with a power-law size distribution for large dust grains

In Section 4, we carried out radiative transfer simulations with large grains which have a Gaussian size distribution around a specific radius of the dust grains.
Here, we show the results of radiative transfer simulations with large grains which have a power-law size distribution like that of the small grains, and investigate the dependence of the spectral index map on the dust size distribution. As in the cases described in Section 4, we consider two kinds of dust grains: small grains and large grains. The size distribution of the small dust is the same as that described in Section 4. For the large grains, the number density of the dust grains is proportional to $s^{p}$, and we adopted the cases of $p=-3.5$ and $-2.5$. The minimum size of the large grains is $0.1$ mm, and we consider two maximum sizes of the large grains, namely $s_{\rm max}=1$ mm and $0.5$ mm. The stellar luminosity is $22L_{\odot}$ and $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$, and the other parameters are the same as those described in Section 4. Figure 23: The same as Figure 17, but for large grains with a power-law size distribution, in the cases of $p=-3.5$ (solid lines) and $p=-2.5$ (dashed lines). Figure 23 shows the spectral index along the major axis for the cases with $p=-3.5$ and $p=-2.5$. In all the cases, the spectral index is about $2$ (but slightly larger than 2), and it increases in the outer region.

## Appendix D Spectral index map for ALMA bands

For future observations, in this appendix, we present a few examples of the spectral index map given by radiative transfer simulations. As in the previous section, the stellar luminosity is $22L_{\odot}$ and $\Sigma_{0}=1.5\mbox{ g/cm}^{2}$, and the other parameters are the same as those described in Section 4. Unlike those shown in Section 4, the spectral index presented in this section is not calculated through the `tclean` task. The spectral index is calculated simply from the differences of the fluxes convolved with a Gaussian filter with a $0.1$ arcsec standard deviation. We calculated the spectral index from the flux differences among ALMA band 3, band 6, band 7, and band 9.
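The smoothing-and-differencing procedure just described can be sketched as follows. This is our reading of the method, not the authors' code; `spectral_index_map` is a hypothetical helper name, and the pixel scale and band frequencies in the example are illustrative assumptions.

```python
# Sketch: spectral index map from two model flux maps, each smoothed with a
# Gaussian filter of 0.1 arcsec standard deviation before differencing.
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_index_map(flux1, flux2, nu1_ghz, nu2_ghz,
                       pix_scale_arcsec=0.02, sigma_arcsec=0.1):
    """Per-pixel alpha such that F_nu ∝ nu^alpha, from two smoothed maps."""
    sigma_pix = sigma_arcsec / pix_scale_arcsec
    f1 = gaussian_filter(flux1, sigma_pix)
    f2 = gaussian_filter(flux2, sigma_pix)
    return np.log(f1 / f2) / np.log(nu1_ghz / nu2_ghz)

# Example with synthetic optically thick emission (alpha = 2 everywhere):
ny = nx = 64
f_hi = np.full((ny, nx), 10.0)             # flux map at the higher frequency
f_lo = f_hi * (233.0 / 343.0) ** 2         # scaled to the lower frequency
alpha = spectral_index_map(f_hi, f_lo, 343.0, 233.0)
print(round(float(alpha.mean()), 6))  # → 2.0
```

In the optically thick Rayleigh-Jeans limit the recovered index is 2, matching the values near the disk center discussed in the text.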
Figure 24: The spectral index distributions along the major axis in the cases that the large dust grains have a Gaussian size distribution with $s_{\rm d,large}=1$ mm (top) and $0.5$ mm (bottom). The dashed, dotted, and solid lines indicate the spectral indices calculated from the fluxes at band 3 (3.1 mm) and band 6 (1.3 mm), at band 6 and band 7 (0.87 mm), and at band 7 and band 9 (0.45 mm), respectively. The thin solid line indicates the spectral index given by the upper and lower bands of band 6, as presented in the main text, for reference. Figure 25: The same as Figure 24, but the large grains have a power-law size distribution. Figure 24 shows the spectral index along the major axis when the large grains have a Gaussian (log-normal) size distribution (for details, see Section 4). The spectral index depends on the choice of the bands used. The spectral index calculated from bands with longer wavelengths becomes larger, as the opacity is smaller and the emission is optically thin. For instance, the spectral index calculated from the band 3 and band 6 fluxes is larger than that calculated from other pairs of bands. The spectral index calculated from band 7 and band 9 is below 2 around the center, regardless of $s_{\rm d,large}$. It is worth pointing out that the spectral index calculated from band 6 and band 7 is significantly different from that calculated from the upper and lower bands of band 6, in the cases with $s_{\rm d,large}=0.5$ mm. Figure 25 shows the same as Figure 24, but for the cases in which the large dust grains have a power-law size distribution of $\propto s^{-3.5}$. The distribution of the spectral index calculated from band 6 and band 7 is similar to that in the case with the Gaussian distribution shown in Figure 24, being below 2 around the center.
On the other hand, the spectral index calculated from band 3 and band 6 is larger in the cases with the power-law distribution than in the cases with the Gaussian distribution. We may be able to constrain the dust size distribution from the differences of the spectral indices calculated from different pairs of bands.

## References

* Akiyama et al. (2015) Akiyama, E., Muto, T., Kusakabe, N., et al. 2015, ApJ, 802, L17, doi: 10.1088/2041-8205/802/2/L17 * Akiyama et al. (2016) Akiyama, E., Hashimoto, J., Liu, H. B., et al. 2016, AJ, 152, 222, doi: 10.3847/1538-3881/152/6/222 * ALMA Partnership et al. (2015) ALMA Partnership, Brogan, C. L., Pérez, L. M., et al. 2015, ApJ, 808, L3, doi: 10.1088/2041-8205/808/1/L3 * Andrews et al. (2018) Andrews, S. M., Huang, J., Pérez, L. M., et al. 2018, ApJ, 869, L41, doi: 10.3847/2041-8213/aaf741 * Anthonioz et al. (2015) Anthonioz, F., Ménard, F., Pinte, C., et al. 2015, A&A, 574, A41, doi: 10.1051/0004-6361/201424520 * Artymowicz & Lubow (1994) Artymowicz, P., & Lubow, S. H. 1994, ApJ, 421, 651, doi: 10.1086/173679 * Birnstiel et al. (2010) Birnstiel, T., Dullemond, C. P., & Brauer, F. 2010, A&A, 513, A79, doi: 10.1051/0004-6361/200913731 * Birnstiel et al. (2018) Birnstiel, T., Dullemond, C. P., Zhu, Z., et al. 2018, ApJ, 869, L45, doi: 10.3847/2041-8213/aaf743 * Boley et al. (2006) Boley, A. C., Mejía, A. C., Durisen, R. H., et al. 2006, ApJ, 651, 517, doi: 10.1086/507478 * Brauer et al. (2008) Brauer, F., Dullemond, C. P., & Henning, T. 2008, A&A, 480, 859, doi: 10.1051/0004-6361:20077759 * Canovas et al. (2016) Canovas, H., Caceres, C., Schreiber, M. R., et al. 2016, MNRAS, 458, L29, doi: 10.1093/mnrasl/slw006 * Chiang & Goldreich (1997) Chiang, E. I., & Goldreich, P. 1997, ApJ, 490, 368, doi: 10.1086/304869 * Cieza et al. (2017) Cieza, L. A., Casassus, S., Pérez, S., et al. 2017, ApJ, 851, L23, doi: 10.3847/2041-8213/aa9b7b * Cutri & et al. (2014) Cutri, R. M., & et al.
2014, VizieR Online Data Catalog, II/328 * Dong et al. (2018a) Dong, R., Najita, J. R., & Brittain, S. 2018a, ApJ, 862, 103, doi: 10.3847/1538-4357/aaccfc * Dong et al. (2015) Dong, R., Zhu, Z., & Whitney, B. 2015, ApJ, 809, 93, doi: 10.1088/0004-637X/809/1/93 * Dong et al. (2018b) Dong, R., Liu, S.-y., Eisner, J., et al. 2018b, ApJ, 860, 124, doi: 10.3847/1538-4357/aac6cb * Dullemond & Dominik (2005) Dullemond, C. P., & Dominik, C. 2005, A&A, 434, 971, doi: 10.1051/0004-6361:20042080 * Dullemond et al. (2012) Dullemond, C. P., Juhasz, A., Pohl, A., et al. 2012, RADMC-3D: A multi-purpose radiative transfer tool. http://ascl.net/1202.015 * Dullemond et al. (2018) Dullemond, C. P., Birnstiel, T., Huang, J., et al. 2018, ApJ, 869, L46, doi: 10.3847/2041-8213/aaf742 * Dunhill et al. (2015) Dunhill, A. C., Cuadra, J., & Dougados, C. 2015, MNRAS, 448, 3545, doi: 10.1093/mnras/stv284 * Espaillat et al. (2011) Espaillat, C., Furlan, E., D’Alessio, P., et al. 2011, ApJ, 728, 49, doi: 10.1088/0004-637X/728/1/49 * Facchini et al. (2020) Facchini, S., Benisty, M., Bae, J., et al. 2020, A&A, 639, A121, doi: 10.1051/0004-6361/202038027 * Flaherty et al. (2015) Flaherty, K. M., Hughes, A. M., Rosenfeld, K. A., et al. 2015, ApJ, 813, 99, doi: 10.1088/0004-637X/813/2/99 * Flaherty et al. (2018) Flaherty, K. M., Hughes, A. M., Teague, R., et al. 2018, ApJ, 856, 117, doi: 10.3847/1538-4357/aab615 * Follette et al. (2015) Follette, K. B., Grady, C. A., Swearingen, J. R., et al. 2015, ApJ, 798, 132, doi: 10.1088/0004-637X/798/2/132 * Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067 * Fu et al. (2014) Fu, W., Li, H., Lubow, S., Li, S., & Liang, E. 2014, ApJ, 795, L39, doi: 10.1088/2041-8205/795/2/L39 * Fukagawa et al. (2013) Fukagawa, M., Tsukagoshi, T., Momose, M., et al. 2013, PASJ, 65, L14, doi: 10.1093/pasj/65.6.L14 * Gaia Collaboration et al. (2020) Gaia Collaboration, Brown, A. G. 
A., Vallenari, A., et al. 2020, arXiv e-prints, arXiv:2012.01533. https://arxiv.org/abs/2012.01533 * Gaia Collaboration et al. (2018) —. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051 * Garufi et al. (2020) Garufi, A., Avenhaus, H., Pérez, S., et al. 2020, A&A, 633, A82, doi: 10.1051/0004-6361/201936946 * Gutermuth et al. (2009) Gutermuth, R. A., Megeath, S. T., Myers, P. C., et al. 2009, ApJS, 184, 18, doi: 10.1088/0067-0049/184/1/18 * Huang et al. (2018) Huang, J., Andrews, S. M., Dullemond, C. P., et al. 2018, ApJ, 869, L42, doi: 10.3847/2041-8213/aaf740 * Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55 * Ishihara et al. (2010) Ishihara, D., Onaka, T., Kataza, H., et al. 2010, A&A, 514, A1, doi: 10.1051/0004-6361/200913811 * Kanagawa et al. (2018) Kanagawa, K. D., Muto, T., Okuzumi, S., et al. 2018, ApJ, 868, 48, doi: 10.3847/1538-4357/aae837 * Kim et al. (2020) Kim, S., Takahashi, S., Nomura, H., et al. 2020, ApJ, 888, 72, doi: 10.3847/1538-4357/ab5d2b * Li et al. (2000) Li, H., Finn, J. M., Lovelace, R. V. E., & Colgate, S. A. 2000, ApJ, 533, 1023, doi: 10.1086/308693 * Lin (2014) Lin, M.-K. 2014, MNRAS, 437, 575, doi: 10.1093/mnras/stt1909 * Liu (2019) Liu, H. B. 2019, ApJ, 877, L22, doi: 10.3847/2041-8213/ab1f8e * Lodato & Rice (2004) Lodato, G., & Rice, W. K. M. 2004, MNRAS, 351, 630, doi: 10.1111/j.1365-2966.2004.07811.x * Lommen et al. (2009) Lommen, D., Maddison, S. T., Wright, C. M., et al. 2009, A&A, 495, 869, doi: 10.1051/0004-6361:200810999 * Lommen et al. (2007) Lommen, D., Wright, C. M., Maddison, S. T., et al. 2007, A&A, 462, 211, doi: 10.1051/0004-6361:20066255 * Long et al. (2018) Long, F., Pinilla, P., Herczeg, G. J., et al. 2018, ApJ, 869, 17, doi: 10.3847/1538-4357/aae8e1 * Loomis et al. (2017) Loomis, R. A., Öberg, K. I., Andrews, S. M., & MacGregor, M. A. 2017, ApJ, 840, 23, doi: 10.3847/1538-4357/aa6c63 * Lovelace et al. (1999) Lovelace, R. V. E., Li, H., Colgate, S. 
A., & Nelson, A. F. 1999, ApJ, 513, 805, doi: 10.1086/306900 * Luhman (2007) Luhman, K. L. 2007, ApJS, 173, 104, doi: 10.1086/520114 * Macías et al. (2017) Macías, E., Anglada, G., Osorio, M., et al. 2017, ApJ, 838, 97, doi: 10.3847/1538-4357/aa6620 * Manara et al. (2016) Manara, C. F., Fedele, D., Herczeg, G. J., & Teixeira, P. S. 2016, A&A, 585, A136, doi: 10.1051/0004-6361/201527224 * McMullin et al. (2007) McMullin, J. P., Waters, B., Schiebel, D., Young, W., & Golap, K. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 376, Astronomical Data Analysis Software and Systems XVI, ed. R. A. Shaw, F. Hill, & D. J. Bell, 127 * Miranda et al. (2017) Miranda, R., Muñoz, D. J., & Lai, D. 2017, MNRAS, 466, 1170, doi: 10.1093/mnras/stw3189 * Momose et al. (2015) Momose, M., Morita, A., Fukagawa, M., et al. 2015, PASJ, 67, 83, doi: 10.1093/pasj/psv051 * Muto & Inutsuka (2009) Muto, T., & Inutsuka, S.-i. 2009, ApJ, 695, 1132, doi: 10.1088/0004-637X/695/2/1132 * Nakagawa et al. (1986) Nakagawa, Y., Sekiya, M., & Hayashi, C. 1986, Icarus, 67, 375, doi: 10.1016/0019-1035(86)90121-1 * Okuzumi et al. (2016) Okuzumi, S., Momose, M., Sirono, S.-i., Kobayashi, H., & Tanaka, H. 2016, ApJ, 821, 82, doi: 10.3847/0004-637X/821/2/82 * Ono et al. (2016) Ono, T., Muto, T., Takeuchi, T., & Nomura, H. 2016, ApJ, 823, 84, doi: 10.3847/0004-637X/823/2/84 * Paardekooper & Mellema (2004) Paardekooper, S.-J., & Mellema, G. 2004, A&A, 425, L9, doi: 10.1051/0004-6361:200400053 * Pascucci et al. (2016) Pascucci, I., Testi, L., Herczeg, G. J., et al. 2016, ApJ, 831, 125, doi: 10.3847/0004-637X/831/2/125 * Pinilla et al. (2012) Pinilla, P., Birnstiel, T., Ricci, L., et al. 2012, A&A, 538, A114, doi: 10.1051/0004-6361/201118204 * Pinilla et al. (2015) Pinilla, P., de Juan Ovelar, M., Ataiee, S., et al. 2015, A&A, 573, A9, doi: 10.1051/0004-6361/201424679 * Price et al. (2018) Price, D. J., Cuello, N., Pinte, C., et al. 
2018, MNRAS, 477, 1270, doi: 10.1093/mnras/sty647 * Rau & Cornwell (2011) Rau, U., & Cornwell, T. J. 2011, A&A, 532, A71, doi: 10.1051/0004-6361/201117104 * Ribas et al. (2013) Ribas, Á., Merín, B., Bouy, H., et al. 2013, A&A, 552, A115, doi: 10.1051/0004-6361/201220960 * Ribas et al. (2017) Ribas, Á., Espaillat, C. C., Macías, E., et al. 2017, ApJ, 849, 63, doi: 10.3847/1538-4357/aa8e99 * Rodmann et al. (2006) Rodmann, J., Henning, T., Chandler, C. J., Mundy, L. G., & Wilner, D. J. 2006, A&A, 446, 211, doi: 10.1051/0004-6361:20054038 * Soon et al. (2019) Soon, K.-L., Momose, M., Muto, T., et al. 2019, PASJ, 71, 124, doi: 10.1093/pasj/psz112 * Takahashi & Inutsuka (2014) Takahashi, S. Z., & Inutsuka, S.-i. 2014, ApJ, 794, 55, doi: 10.1088/0004-637X/794/1/55 * Takahashi & Inutsuka (2016) —. 2016, AJ, 152, 184, doi: 10.3847/0004-6256/152/6/184 * Teague et al. (2016) Teague, R., Guilloteau, S., Semenov, D., et al. 2016, A&A, 592, A49, doi: 10.1051/0004-6361/201628550 * Thun et al. (2017) Thun, D., Kley, W., & Picogna, G. 2017, A&A, 604, A102, doi: 10.1051/0004-6361/201730666 * Tominaga et al. (2018) Tominaga, R. T., Inutsuka, S.-i., & Takahashi, S. Z. 2018, PASJ, 70, 3, doi: 10.1093/pasj/psx143 * Toomre (1964) Toomre, A. 1964, ApJ, 139, 1217, doi: 10.1086/147861 * Tsukagoshi et al. (2014) Tsukagoshi, T., Momose, M., Hashimoto, J., et al. 2014, ApJ, 783, 90, doi: 10.1088/0004-637X/783/2/90 * van der Marel et al. (2019) van der Marel, N., Dong, R., di Francesco, J., Williams, J. P., & Tobin, J. 2019, ApJ, 872, 112, doi: 10.3847/1538-4357/aafd31 * van der Marel et al. (2015) van der Marel, N., Pinilla, P., Tobin, J., et al. 2015, ApJ, 810, L7, doi: 10.1088/2041-8205/810/1/L7 * van der Marel et al. (2016) van der Marel, N., Verhaar, B. W., van Terwisga, S., et al. 2016, A&A, 592, A126, doi: 10.1051/0004-6361/201628075 * van der Marel et al. (2018) van der Marel, N., Williams, J. P., & Bruderer, S. 2018, ApJ, 867, L14, doi: 10.3847/2041-8213/aae88e * van der Walt et al. 
(2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science and Engineering, 13, 22, doi: 10.1109/MCSE.2011.37 * Vorobyov et al. (2020) Vorobyov, E. I., Elbakyan, V. G., Takami, M., & Liu, H. B. 2020, A&A, 643, A13, doi: 10.1051/0004-6361/202038122 * Zhang et al. (2015) Zhang, K., Blake, G. A., & Bergin, E. A. 2015, ApJ, 806, L7, doi: 10.1088/2041-8205/806/1/L7 * Zhu et al. (2012) Zhu, Z., Nelson, R. P., Dong, R., Espaillat, C., & Hartmann, L. 2012, ApJ, 755, 6, doi: 10.1088/0004-637X/755/1/6 * Zhu et al. (2019) Zhu, Z., Zhang, S., Jiang, Y.-F., et al. 2019, ApJ, 877, L18, doi: 10.3847/2041-8213/ab1f8c
Generalized Damped Newton Algorithms in Nonsmooth Optimization via Second-Order Subdifferentials

Pham Duy Khanh***Department of Mathematics, HCMC University of Education, Ho Chi Minh City, Vietnam., Boris S. Mordukhovich†††Department of Mathematics, Wayne State University, Detroit, Michigan, USA. E-mail: <EMAIL_ADDRESS>Research of this author was partly supported by the US National Science Foundation under grants DMS-1512846 and DMS-1808978, by the US Air Force Office of Scientific Research under grant #15RT0462, and by the Australian Research Council under Discovery Project DP-190100555., Vo Thanh Phat‡‡‡Department of Mathematics, HCMC University of Education, Ho Chi Minh City, Vietnam and Department of Mathematics, Wayne State University, Detroit, Michigan, USA. E-mails: <EMAIL_ADDRESS><EMAIL_ADDRESS>Research of this author was partly supported by the US National Science Foundation under grants DMS-1512846 and DMS-1808978, and by the US Air Force Office of Scientific Research under grant #15RT0462., Dat Ba Tran§§§Department of Mathematics, Wayne State University, Detroit, Michigan, USA. E-mails: <EMAIL_ADDRESS>Research of this author was partly supported by the US National Science Foundation under grant DMS-1808978..

Abstract. The paper proposes and develops new globally convergent algorithms of the generalized damped Newton type for solving important classes of nonsmooth optimization problems. These algorithms are based on the theory and calculations of second-order subdifferentials of nonsmooth functions, employing the machinery of second-order variational analysis and generalized differentiation. First we develop a globally superlinearly convergent damped Newton-type algorithm for the class of continuously differentiable functions with Lipschitzian gradients, which are nonsmooth of second order.
Then we design such a globally convergent algorithm to solve a structured class of nonsmooth quadratic composite problems with extended-real-valued cost functions, which typically arise in machine learning and statistics. Finally, we present the results of numerical experiments and compare the performance of our main algorithm, applied to an important class of Lasso problems, with that achieved by other first-order and second-order optimization algorithms.

Keywords. Variational analysis and nonsmooth optimization, damped Newton methods, global convergence, tilt stability of minimizers, superlinear convergence, Lasso problems.

## 1 Introduction

This paper is mainly devoted to the design, justification, and applications of globally convergent Newton-type algorithms to solve nonsmooth (of the first or second order) optimization problems in finite-dimensional spaces. Considering the unconstrained optimization problem $\mbox{\rm minimize}\;\varphi(x)\;\text{ subject to }\;x\in\mathbb{R}^{n}$ (1.1) with a continuously differentiable (${\cal C}^{1}$-smooth) cost function $\varphi\colon\mathbb{R}^{n}\to\mathbb{R}$, recall that one of the most natural and efficient approaches to solving (1.1) globally is to use line search methods; see, e.g., [20, 33, 53] and the references therein. Given a starting point $x^{0}\in\mathbb{R}^{n}$, such methods construct an iterative procedure of the form $x^{k+1}:=x^{k}+\tau_{k}d^{k}\quad\text{for all }\;k=0,1,2,\ldots,$ (1.2) where $\tau_{k}\geq 0$ is a step size at iteration $k$, and where $d^{k}\neq 0$ is a search direction. The precise choice of $d^{k}$ and $\tau_{k}$ at each iteration in (1.2) distinguishes one algorithm from another. The main goal of line search methods is to construct a sequence of iterates $\{x^{k}\}$ such that the corresponding sequence $\{\varphi(x^{k})\}$ is decreasing.
Recall also that the condition $\langle\nabla\varphi(x^{k}),d^{k}\rangle<0$ on $d^{k}$ ensures that it is a descent direction at $x^{k}$, i.e., there exists $\bar{\tau}_{k}\in(0,1]$ such that $\varphi(x^{k}+\tau d^{k})<\varphi(x^{k})$ for all $\tau\in(0,\bar{\tau}_{k}]$. There are many choices of the direction $d^{k}$ that satisfy this condition. For instance, a classical choice for the search direction is $d^{k}:=-\nabla\varphi(x^{k})$, in which case the resulting algorithm is known as the gradient algorithm or steepest descent method; see [2, 8, 20, 33, 52, 57] for more details and impressive further developments of gradient and subgradient methods. If $\varphi$ is twice continuously differentiable (${\cal C}^{2}$-smooth) and the Hessian matrix $\nabla^{2}\varphi(x^{k})$ is positive-definite for each $k\in{\rm I\\!N}$, then another choice of search directions in (1.2) is provided by solving the linear equation $-\nabla\varphi(x^{k})=\nabla^{2}\varphi(x^{k})d^{k},$ (1.3) where $d^{k}$ is known as a Newton direction. In this case, algorithm (1.2) with the backtracking line search is called the damped/guarded Newton method [2, 8] to distinguish it from the pure Newton method, which uses a fixed step size $\tau=1$; see, e.g., the books [16, 20, 33, 35] with the comprehensive commentaries and references therein. It has been well recognized that the latter method exhibits local convergence at a quadratic rate. There exist various extensions of the pure Newton method to solve unconstrained optimization problems (1.1), where the cost functions $\varphi$ are not ${\cal C}^{2}$-smooth but belong merely to the class ${\cal C}^{1,1}$ of continuously differentiable functions with Lipschitz continuous gradients, i.e., being nonsmooth of second order.
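The generic scheme (1.2) with the steepest-descent direction and Armijo backtracking can be sketched in a few lines. The following is an illustrative implementation with our own function names, tolerances, and test problem, not an implementation from the paper:

```python
import numpy as np

def line_search_descent(f, grad, x0, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=5000):
    """Generic line-search scheme (1.2) with the steepest-descent choice
    d^k = -grad(x^k) and Armijo backtracking for the step size tau_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:         # approximate stationarity
            break
        d = -g                               # descent direction: <g, d> = -||g||^2 < 0
        tau = 1.0
        while f(x + tau * d) > f(x) + sigma * tau * (g @ d):
            tau *= beta                      # backtrack until sufficient decrease
        x = x + tau * d
    return x

# Minimize f(x) = 0.5 <Qx, x> with a positive-definite Q; the minimizer is the origin.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
print(line_search_descent(f, grad, [1.0, 1.0]))  # close to the origin
```

Replacing the direction `d = -g` by the solution of the linear system (1.3) gives the damped Newton variant discussed next.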
We refer the reader to [6, 16, 20, 33, 34, 35, 49, 59, 66] and the vast bibliographies therein for a variety of results in this direction, where mostly a local superlinear convergence rate was achieved, while in some publications certain globalization procedures were also suggested and investigated. The first goal of this paper is to develop a globally convergent damped Newton method of type (1.2), (1.3) to solve problems (1.1) with cost functions $\varphi$ of class ${\cal C}^{1,1}$. Our approach is based on replacing the classical Hessian matrix $\nabla^{2}\varphi$ in equation (1.3) by the inclusion $-\nabla\varphi(x^{k})\in\partial^{2}\varphi(x^{k})(d^{k}),\quad k=0,1,\ldots,$ (1.4) where $\partial^{2}\varphi$ stands for the second-order subdifferential/generalized Hessian of $\varphi$ in the sense of Mordukhovich [43]. This construction has been largely used in variational analysis and its applications, with comprehensive calculus rules and complete computations of $\partial^{2}\varphi$ derived for broad classes of composite functions that often appear in important problems of optimization, optimal control, stability, applied sciences, etc.; see, e.g., [12, 14, 15, 17, 18, 30, 44, 45, 46, 47, 48, 54, 56, 61, 67] with further references therein. The second-order subdifferentials have been recently employed in [49] and [34] for the design and justification of generalized algorithms of the pure Newton type to find stable local minimizers of (1.1) as well as solutions of gradient equations and subgradient inclusions associated with ${\cal C}^{1,1}$ and prox-regular functions, respectively. In this paper we obtain efficient conditions ensuring that the iterative sequence generated by the damped Newton-type algorithm in (1.2), (1.4) is well-defined (i.e., the solvability of the algorithm), and that this sequence converges globally to a tilt-stable local minimizer of (1.1) in the sense of Poliquin and Rockafellar [56].
It is shown that the rate of convergence of our algorithm is at least linear, while the superlinear convergence of the algorithm is achieved under the additional semismooth∗ assumption on $\nabla\varphi$ in the sense of Gfrerer and Outrata [26]. The next major goal of the paper is to design, for the first time in the literature, a globally convergent damped Newton algorithm for solving nonsmooth problems of convex composite optimization given in the form: $\mbox{\rm minimize}\;\varphi(x):=f(x)+g(x)\;\text{ subject to }\;x\in\mathbb{R}^{n},$ (1.5) where $f$ is a convex quadratic function defined by $f(x):=\frac{1}{2}\langle Ax,x\rangle+\langle b,x\rangle+\alpha$ with $b\in\mathbb{R}^{n}$, $\alpha\in\mathbb{R}$, and $A\in\mathbb{R}^{n\times n}$ being a positive-semidefinite matrix, and where $g\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}:=(-\infty,\infty]$ is a lower semicontinuous (l.s.c.) extended-real-valued convex function. Problems in this format frequently arise in many applied areas such as machine learning, compressed sensing, and image processing. Since $g$ is generally extended-real-valued, the unconstrained format (1.5) encompasses problems of constrained optimization. If, in particular, $g$ is the indicator function of a closed and convex set, then (1.5) becomes a constrained quadratic optimization problem studied, e.g., in the book [53] with numerous applications. Problems of this type are important in their own right, while they also appear as subproblems in various numerical algorithms including sequential quadratic programming (SQP) methods, augmented Lagrangian methods, proximal Newton methods, etc. One of the most well-known algorithms to solve (1.5) is the forward-backward splitting (FBS) or proximal splitting method [13, 39]. Since this method is of first order, its rate of convergence is at most linear.
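For the Lasso instance of (1.5), with $f(x)=\frac{1}{2}\|Ax-b\|^{2}$ and $g=\mu\|\cdot\|_{1}$, the proximal mapping of $g$ is the componentwise soft-thresholding operator, and one FBS iteration reads $x^{k+1}=\mathrm{prox}_{\lambda g}\big(x^{k}-\lambda\nabla f(x^{k})\big)$. A minimal sketch under our own naming and with the standard step-size choice $\lambda=1/\|A^{\top}A\|$ (not the paper's implementation):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t * ||.||_1, i.e., componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_lasso(A, b, mu, n_iter=500):
    """Forward-backward splitting for min 0.5||Ax - b||^2 + mu||x||_1,
    a particular case of the composite problem (1.5)."""
    lam = 1.0 / np.linalg.norm(A.T @ A, 2)   # step 1/L, L = Lipschitz modulus of grad f
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_f = A.T @ (A @ x - b)                       # forward (gradient) step on f
        x = soft_threshold(x - lam * grad_f, lam * mu)   # backward (proximal) step on g
    return x

# Toy instance with A = I, where the solution is soft_threshold(b, mu).
print(fbs_lasso(np.eye(2), np.array([3.0, 0.5]), 1.0))  # -> [2. 0.]
```

The slow (at most linear) convergence of this scheme is exactly what motivates the second-order GDNM developed later in the paper.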
Another approach to solve (1.5) is to use second-order methods such as proximal Newton methods, proximal quasi-Newton methods, etc.; see, e.g., [5, 36, 50]. Although the latter approach has several benefits over first-order methods (such as rapid convergence and high accuracy), a severe limitation of these methods is the cost of solving subproblems. In this paper we offer a different approach to solve problems (1.5) globally by developing a generalized damped Newton algorithm based on the second-order subdifferential scheme (1.4) and the advanced machinery of second-order variational analysis, together with the proximal mapping for $g$. As revealed below, the latter mapping can be constructively computed for many particular classes of problems arising in machine learning, statistics, etc. Proceeding in this way, we justify the well-posedness and global linear convergence of the proposed algorithm for (1.5) and present efficient conditions for its superlinear convergence. The last topic of this paper concerns applications of our generalized damped Newton method (GDNM) to solving an important class of Lasso problems, which appear in many areas of applied sciences and are discussed in detail in what follows. Problems of this class can be written in form (1.5) with a quadratic loss function $f$ and a nonsmooth regularizer function $g$ given in special norm-type forms. For such problems, all the parameters of GDNM and its justification (first- and second-order subdifferentials, proximal mappings, conditions for convergence and convergence rates) can be computed and expressed entirely in terms of the problem data, which thus leads us to a constructive, globally superlinearly convergent realization of GDNM. Finally, we conduct MATLAB numerical experiments on solving the basic version of the Lasso problem described by Tibshirani [64] and then compare the obtained numerical results with those achieved by using well-recognized first-order and second-order methods.
They include: Alternating Direction Methods of Multipliers (ADMM) [22, 23], Nesterov’s Accelerated Proximal Gradient with Backtracking (APG) [51, 52], Fast Iterative Shrinkage-Thresholding Algorithm with constant step size (FISTA) [4], and a highly efficient Semismooth Newton Augmented Lagrangian Method (SSNAL) from [37]. The rest of the paper is organized as follows. Section 2 presents and discusses some basic notions of variational analysis and generalized differentiation, which are broadly used in the formulations and proofs of the main results. Section 3 is devoted to the development and justification of the globally convergent GDNM to solve unconstrained optimization problems (1.1) with ${\cal C}^{1,1}$ cost functions. In Section 4 we present results on the linear and superlinear convergence of GDNM for problems of ${\cal C}^{1,1}$ optimization. Section 5 addresses developing GDNM for nonsmooth problems of convex composite optimization with cost functions given as sums of convex quadratic and convex extended-real-valued ones. In Section 6 we specify the obtained results for the basic class of Lasso problems under consideration and present the results of numerical experiments and their comparison with other first-order and second-order methods for solving Lasso problems. The concluding Section 7 summarizes the major contributions of the paper and discusses topics of future research. ## 2 Preliminaries from Variational Analysis In this section we review the needed background from variational analysis and generalized differentiation by following the books [44, 45, 62], where the reader can find more details and references. Our notation is standard in variational analysis and optimization and can be found in the aforementioned books. 
Given a set $\Omega\subset\mathbb{R}^{s}$ with $\bar{z}\in\Omega$, the (Fréchet) regular normal cone to $\Omega$ at $\bar{z}\in\Omega$ is $\widehat{N}_{\Omega}(\bar{z}):=\Big{\\{}v\in\mathbb{R}^{s}\;\Big{|}\;\limsup_{z\overset{\Omega}{\rightarrow}\bar{z}}\frac{\langle v,z-\bar{z}\rangle}{\|z-\bar{z}\|}\leq 0\Big{\\}},$ where the symbol $z\overset{\Omega}{\rightarrow}\bar{z}$ stands for $z\to\bar{z}$ with $z\in\Omega$. The (Mordukhovich) limiting normal cone to $\Omega$ at $\bar{z}\in\Omega$ is defined by $N_{\Omega}(\bar{z}):=\big{\\{}v\in\mathbb{R}^{s}\;\big{|}\;\exists\,z_{k}\stackrel{{\scriptstyle\Omega}}{{\to}}\bar{z},\;v_{k}\to v\;\text{ as }\;k\to\infty\;\text{ with }\;v_{k}\in\widehat{N}_{\Omega}(z_{k})\big{\\}}.$ (2.6) Given further a set-valued mapping $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$ with the graph $\mbox{\rm gph}\,F:=\big{\\{}(x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\;\big{|}\;y\in F(x)\big{\\}},$ the (basic/limiting) coderivative of $F$ at $(\bar{x},\bar{y})\in\mbox{\rm gph}\,F$ is defined via the limiting normal cone (2.6) to the graph of $F$ at the reference point $(\bar{x},\bar{y})$ as $D^{*}F(\bar{x},\bar{y})(v):=\big{\\{}u\in\mathbb{R}^{n}\;\big{|}\;(u,-v)\in N_{{\rm gph}\,F}(\bar{x},\bar{y})\big{\\}},\quad v\in\mathbb{R}^{m},$ (2.7) where $\bar{y}$ is omitted in the coderivative notation if $F(\bar{x})=\\{\bar{y}\\}$. Note that if $F\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ is a (single-valued) ${\cal C}^{1}$-smooth mapping around $\bar{x}$, then we have $D^{*}F(\bar{x})(v)=\big{\\{}\nabla F(\bar{x})^{*}v\big{\\}}\;\mbox{ for all }\;v\in\mathbb{R}^{m}$ in terms of the transpose matrix (adjoint operator) $\nabla F(\bar{x})^{*}$ of the Jacobian $\nabla F(\bar{x})$. 
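The smooth case of the coderivative (2.7) can be checked numerically: for a single-valued ${\cal C}^{1}$ mapping $F$ it acts as $v\mapsto\nabla F(\bar{x})^{*}v$. The following small illustration (finite-difference Jacobian, test mapping, and names are ours, not from the paper) computes this adjoint action:

```python
import numpy as np

def jacobian(F, x, h=1e-6):
    """Forward-difference Jacobian of a smooth mapping F: R^n -> R^m."""
    x = np.asarray(x, dtype=float)
    m = len(F(x))
    J = np.zeros((m, len(x)))
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        J[:, i] = (F(x + e) - F(x)) / h
    return J

# F(x) = (x1^2, x1*x2); at xbar = (1, 2) the coderivative acts as v -> grad F(xbar)^T v.
F = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
J = jacobian(F, [1.0, 2.0])
v = np.array([1.0, -1.0])
print(J.T @ v)   # the single element of D*F(xbar)(v), approximately [0, -1]
```

For genuinely set-valued mappings, such as the subgradient mapping in (2.10), no pointwise formula of this kind is available, which is where the calculus of (2.6)-(2.7) becomes essential.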
Recall further that a set-valued mapping $F:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$ is metrically regular around $(\bar{x},\bar{y})\in\text{\rm gph}F$ with modulus $\mu>0$ if there exist neighborhoods $U$ of $\bar{x}$ and $V$ of $\bar{y}$ such that ${\rm dist}\big{(}x;F^{-1}(y)\big{)}\leq\mu\,{\rm dist}\big{(}y;F(x)\big{)}\;\text{ for all }\;(x,y)\in U\times V,$ where $F^{-1}(y):=\\{x\in\mathbb{R}^{n}\;|\;y\in F(x)\\}$ is the inverse mapping of $F$. If in addition $F^{-1}$ has a single-valued localization around $(\bar{y},\bar{x})$, i.e., there exist some neighborhoods $U$ of $\bar{x}$ and $V$ of $\bar{y}$ together with a single-valued mapping $\vartheta:V\to U$ such that $\text{\rm gph}F^{-1}\cap(V\times U)=\text{\rm gph}\vartheta$, then $F$ is strongly metrically regular around $(\bar{x},\bar{y})$ with modulus $\mu>0$. A set-valued mapping $T:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ is locally strongly monotone with modulus $\tau>0$ around $(\bar{x},\bar{y})\in\text{\rm gph}T$ if there exist neighborhoods $U$ of $\bar{x}$ and $V$ of $\bar{y}$ such that $\langle x-u,v-w\rangle\geq\tau\|x-u\|^{2}\quad\text{for all }\;(x,v),(u,w)\in\text{\rm gph}T\cap(U\times V).$ If in addition $\text{\rm gph}T\cap(U\times V)=\text{\rm gph}S\cap(U\times V)$ for any monotone operator $S:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ satisfying the inclusion $\text{\rm gph}T\cap(U\times V)\subset\text{\rm gph}S$, $T$ is called locally strongly maximally monotone with modulus $\tau>0$ around $(\bar{x},\bar{y})$. 
Let $\varphi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ be an extended-real-valued function with the domain and epigraph $\mbox{\rm dom}\,\varphi:=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;\varphi(x)<\infty\big{\\}}\;\mbox{ and }\;\mbox{\rm epi}\,\varphi:=\big{\\{}(x,\alpha)\in\mathbb{R}^{n+1}\;\big{|}\;\alpha\geq\varphi(x)\big{\\}}.$ The (basic/limiting) subdifferential of $\varphi$ at $\bar{x}\in\mbox{\rm dom}\,\varphi$ is defined geometrically $\partial\varphi(\bar{x}):=\big{\\{}v\in\mathbb{R}^{n}\;\big{|}\;(v,-1)\in N_{{\rm\small epi}\,\varphi}\big{(}\bar{x},\varphi(\bar{x})\big{)}\big{\\}}$ (2.8) via the limiting normal cone (2.6), while admitting various analytic representations. This subdifferential is an extension of the classical gradient for smooth functions and of the classical subdifferential of convex ones. If $F\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ is locally Lipschitzian around $\bar{x}$, then we have the useful relationships between the coderivative (2.7) of $F$ and the subdifferential (2.8) of the scalarized function $\langle v,F\rangle(x):=\langle v,F(x)\rangle$ formulated as $D^{*}F(\bar{x})(v)=\partial\langle v,F\rangle(\bar{x})\;\mbox{ for all }\;v\in\mathbb{R}^{m}.$ (2.9) Following [43], we now define the second-order subdifferential $\partial^{2}\varphi(\bar{x},\bar{v})\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ of $\varphi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ at $\bar{x}\in\mbox{\rm dom}\,\varphi$ for $\bar{v}\in\partial\varphi(\bar{x})$ as the coderivative (2.7) of the subgradient mapping (2.8), i.e., by $\partial^{2}\varphi(\bar{x},\bar{v})(u):=\big{(}D^{*}\partial\varphi\big{)}(\bar{x},\bar{v})(u)\;\mbox{ for all }\;u\in\mathbb{R}^{n}.$ (2.10) If $\varphi$ is ${\cal C}^{2}$-smooth around $\bar{x}$, then we have the representation of the second-order subdifferential via the classical (symmetric) Hessian matrix $\partial^{2}\varphi(\bar{x})(u)=\big{\\{}\nabla^{2}\varphi(\bar{x})u\big{\\}}\;\mbox{ for all
}\;u\in\mathbb{R}^{n},$ (2.11) which allows us to also label (2.10) as the generalized Hessian. In the case of ${\cal C}^{1,1}$ functions $\varphi$, the second-order subdifferential (2.10) is computed by the scalarization formula (2.9) via the coderivative of the gradient mapping $\nabla\varphi$. In Section 1, the reader can find references to many publications where the second-order subdifferential is computed entirely in terms of the given data for major classes of systems appearing in variational analysis and optimization. It is important to mention that our basic constructions (2.6)–(2.8) and (2.10) enjoy comprehensive calculus rules in general settings, despite being intrinsically nonconvex. This is due to variational/extremal principles of variational analysis; see the books [44, 45, 62] for the first-order constructions and [44, 45] for the second-order subdifferential (2.10). In what follows we are going to broadly employ the fundamental notion of tilt stability of local minimizers for extended-real-valued functions, which was introduced by Poliquin and Rockafellar [56] and characterized therein in terms of the second-order subdifferential (2.10). ###### Definition 2.1 (tilt-stable local minimizers). Given $\varphi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$, a point $\bar{x}\in\mbox{\rm dom}\,\varphi$ is a tilt-stable local minimizer of $\varphi$ if there exists a number $\gamma>0$ such that the mapping $M_{\gamma}\colon v\mapsto{\rm argmin}\big{\\{}\varphi(x)-\langle v,x\rangle\;\big{|}\;x\in\mathbb{B}_{\gamma}(\bar{x})\big{\\}}$ is single-valued and Lipschitz continuous on a neighborhood of $0\in\mathbb{R}^{n}$ with $M_{\gamma}(0)=\\{\bar{x}\\}$. By a modulus of tilt stability of $\varphi$ at $\bar{x}$ we understand a Lipschitz constant of $M_{\gamma}$ around the origin.
Besides the seminal paper [56], the notion of tilt stability has been largely investigated, characterized, and widely applied in many publications to various classes of unconstrained and constrained optimization problems; see, e.g., [11, 17, 18, 25, 45, 46, 48] and the references therein. ## 3 Globally Convergent GDNM in $\mathcal{C}^{1,1}$ Optimization In this section we concentrate on the unconstrained optimization problem (1.1), where the cost function $\varphi\colon\mathbb{R}^{n}\to\mathbb{R}$ is of class $\mathcal{C}^{1,1}$ around the reference points. The corresponding gradient equation associated with (1.1), which gives us, in particular, a necessary condition for local minimizers, is written in the form $\nabla\varphi(x)=0.$ (3.12) The following generalization of the pure Newton algorithm to solve (1.1) locally was first suggested and investigated in [49] under the major assumption that a given point $\bar{x}$ is a tilt-stable local minimizer of (1.1). Then it was extended in [34] to solve directly the gradient equation (3.12) under certain assumptions on a given solution $\bar{x}$ to (3.12) ensuring the well-posedness and local superlinear convergence of the algorithm. ###### Algorithm 3.1 (generalized pure Newton-type algorithm for ${\cal C}^{1,1}$ functions). Step 0: Choose a starting point $x^{0}\in\mathbb{R}^{n}$ and set $k=0$. Step 1: If $\nabla\varphi(x^{k})=0$, stop the algorithm. Otherwise move to Step 2. Step 2: Choose $d^{k}\in\mathbb{R}^{n}$ satisfying $-\nabla\varphi(x^{k})\in\partial^{2}\varphi(x^{k})(d^{k}).$ Step 3: Set $x^{k+1}$ given by $x^{k+1}:=x^{k}+d^{k}\;\mbox{ for all }\;k=0,1,\ldots.$ Step 4: Increase $k$ by $1$ and go to Step 1. One of the serious disadvantages of the pure Newton method and its generalizations is that the corresponding sequence of iterates may not converge if the starting point is not sufficiently close to the solution.
This motivates us to design and justify the globally convergent damped Newton counterpart of Algorithm 3.1 with backtracking line search to solve the gradient equation (3.12) that is formalized as follows. ###### Algorithm 3.2 (generalized damped Newton algorithm for $\mathcal{C}^{1,1}$ functions). Let $\sigma\in\left(0,\frac{1}{2}\right)$ and $\beta\in\left(0,1\right)$ be given real numbers. Then do the following: Step 0: Choose an arbitrary starting point $x^{0}\in\mathbb{R}^{n}$ and set $k=0$. Step 1: If $\nabla\varphi(x^{k})=0$, stop the algorithm. Otherwise move to Step 2. Step 2: Choose $d^{k}\in\mathbb{R}^{n}$ such that $-\nabla\varphi(x^{k})\in\partial^{2}\varphi(x^{k})(d^{k}).$ (3.13) Step 3: Set $\tau_{k}=1$. Until Armijo’s inequality $\varphi(x^{k}+\tau_{k}d^{k})\leq\varphi(x^{k})+\sigma\tau_{k}\langle\nabla\varphi(x^{k}),d^{k}\rangle$ is satisfied, set $\tau_{k}:=\beta\tau_{k}$. Step 4: Set $x^{k+1}$ given by $x^{k+1}:=x^{k}+\tau_{k}d^{k}\;\mbox{ for all }\;k=0,1,\ldots.$ Step 5: Increase $k$ by $1$ and go to Step 1. Due to (2.11), Algorithm 3.2 reduces to the standard damped Newton method (as, e.g., in [2, 8]) if $\varphi$ is $\mathcal{C}^{2}$-smooth. Note also that by (2.7) the direction $d^{k}$ in (3.13) can be explicitly found from $\big{(}-\nabla\varphi(x^{k}),-d^{k}\big{)}\in N\big{(}(x^{k},\nabla\varphi(x^{k}));\mbox{\rm gph}\,\nabla\varphi\big{)}.$ To proceed with the study of Algorithm 3.2, first we clarify the existence of descent Newton directions. It is done in the next proposition under the positive-definiteness of the second-order subdifferential mapping $\partial^{2}\varphi(x)$. ###### Proposition 3.3 (existence of descent Newton directions). Let $\varphi\colon\mathbb{R}^{n}\to\mathbb{R}$ be of class $\mathcal{C}^{1,1}$ around $x\in\mathbb{R}^{n}$.
Suppose that $\nabla\varphi(x)\neq 0$ and that $\partial^{2}\varphi(x)$ is positive-definite, i.e., $\langle z,u\rangle>0\;\mbox{ for all }\;z\in\partial^{2}\varphi(x)(u)\;\mbox{ and }\;u\neq 0.$ (3.14) Then there exists a nonzero direction $d\in\mathbb{R}^{n}$ such that $-\nabla\varphi(x)\in\partial^{2}\varphi(x)(d).$ (3.15) Moreover, every such direction satisfies the inequality $\langle\nabla\varphi(x),d\rangle<0$. Consequently, for each $\sigma\in(0,1)$ and $d\in\mathbb{R}^{n}$ satisfying (3.15) we have $\delta>0$ such that $\varphi(x+\tau d)\leq\varphi(x)+\sigma\tau\langle\nabla\varphi(x),d\rangle\;\mbox{ whenever }\;\tau\in(0,\delta).$ (3.16) [Proof.] It follows from [45, Theorem 5.16] that $\nabla\varphi$ is locally strongly maximally monotone around $(x,\nabla\varphi(x))$. Thus $\nabla\varphi$ is strongly metrically regular around $(x,\nabla\varphi(x))$ by [45, Theorem 5.13]. Using [34, Theorem 4.2] yields the existence of $d\in\mathbb{R}^{n}$ with $-\nabla\varphi(x)\in\partial^{2}\varphi(x)(d)$. To verify that $d\neq 0$, suppose on the contrary that $d=0$. Since $\nabla\varphi$ is locally Lipschitz around $x$, it follows from [44, Theorem 1.44] that $\partial^{2}\varphi(x)(d)=\big{(}D^{*}\nabla\varphi\big{)}(x)(d)=\big{(}D^{*}\nabla\varphi\big{)}(x)(0)=\\{0\\}.$ Therefore, we have that $\nabla\varphi(x)=0$ due to the inclusion $-\nabla\varphi(x)\in\partial^{2}\varphi(x)(d)$, which contradicts the assumption that $\nabla\varphi(x)\neq 0$. Employing the imposed positive-definiteness of the mapping $\partial^{2}\varphi(x)$ tells us that $\langle\nabla\varphi(x),d\rangle<0$. Using finally [33, Lemmas 2.18 and 2.19], we arrive at (3.16) and thus complete the proof of the proposition. Now we formulate and discuss our major assumption to establish the desired global behavior of Algorithm 3.2 for $\mathcal{C}^{1,1}$ functions $\varphi$.
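Steps 0-5 of Algorithm 3.2 admit a compact implementation sketch once a single matrix is selected from the generalized Hessian at each iterate; for ${\cal C}^{2}$-smooth $\varphi$ this is exactly the reduction (2.11). The following minimal illustration (our own naming, parameter values, and ${\cal C}^{1,1}$ test function; the set-valued nature of $\partial^{2}\varphi$ is not treated) runs the scheme on a function that is not twice differentiable:

```python
import numpy as np

def gdnm(f, grad, hess_element, x0, sigma=0.25, beta=0.5, tol=1e-10, max_iter=100):
    """Sketch of Algorithm 3.2: direction d^k realizing (3.13) is obtained by
    solving a linear system with one matrix from the generalized Hessian,
    followed by Armijo backtracking (Step 3) with sigma in (0, 1/2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:               # Step 1: stop at stationarity
            break
        d = np.linalg.solve(hess_element(x), -g)   # Step 2: one realization of (3.13)
        tau = 1.0
        while f(x + tau * d) > f(x) + sigma * tau * (g @ d):
            tau *= beta                            # Step 3: backtracking
        x = x + tau * d                            # Step 4
    return x

# C^{1,1} but not C^2 test function: f(x) = 0.5||x||^2 + 0.5 max(x1, 0)^2.
f = lambda x: 0.5 * x @ x + 0.5 * max(x[0], 0.0) ** 2
grad = lambda x: x + np.array([max(x[0], 0.0), 0.0])

def hess_element(x):
    # grad f is piecewise linear in x1; select one generalized-Hessian matrix.
    H = np.eye(2)
    if x[0] > 0:
        H[0, 0] = 2.0
    return H

print(gdnm(f, grad, hess_element, [3.0, -4.0]))  # -> near the minimizer (0, 0)
```

Here $\partial^{2}f$ is positive-definite everywhere, so Assumption 1 below holds on every level set and the iterates behave as Theorem 3.4 predicts.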
Fix an arbitrary point $x^{0}\in\mathbb{R}^{n}$ and consider the level set $\Omega:=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;\varphi(x)\leq\varphi(x^{0})\big{\\}}.$ (3.17) ###### Assumption 1. The mapping $\partial^{2}\varphi(x)\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ is positive-definite (3.14) for all $x\in\Omega$. Observe that Assumption 1 cannot be removed or even replaced by the positive-semidefiniteness of $\partial^{2}\varphi(x)$ to ensure the existence of a descent Newton direction for Algorithm 3.2 as in Proposition 3.3. Indeed, consider the simplest linear function $\varphi(x):=x$ on $\mathbb{R}$. Then we obviously have that $\nabla^{2}\varphi(x)\geq 0$ for all $x\in\mathbb{R}$, while there is no Newtonian direction $d\in\mathbb{R}$ satisfying the backtracking line search condition (3.16). The next theorem shows that Assumption 1 not only ensures the well-posedness of Algorithm 3.2, but also allows us to conclude that all the limiting points of the iterative sequence $\\{x^{k}\\}$ are tilt-stable minimizers of the cost function $\varphi$. ###### Theorem 3.4 (well-posedness and limiting points of the generalized damped Newton algorithm). Let $\varphi\colon\mathbb{R}^{n}\to\mathbb{R}$ be of class $\mathcal{C}^{1,1}$, and let $x^{0}\in\mathbb{R}^{n}$ be an arbitrary point such that Assumption 1 is satisfied. Then we have the following assertions: * (i) Any sequence $\\{x^{k}\\}$ generated by Algorithm 3.2 is well-defined with $x^{k}\in\Omega$ for all $k\in{\rm I\\!N}$. * (ii) All the limiting points of $\\{x^{k}\\}$ are tilt-stable local minimizers of $\varphi$. [Proof.] First we check that a sequence $\\{x^{k}\\}$ generated by Algorithm 3.2 with any starting point $x^{0}$ is well-defined. Indeed, there is nothing to prove if $\nabla\varphi(x^{0})=0$.
Otherwise, it follows from Proposition 3.3 due to the positive-definiteness of $\partial^{2}\varphi(x^{0})$ that there exist $d^{0}$ and $\tau_{0}$ satisfying $-\nabla\varphi(x^{0})\in\partial^{2}\varphi(x^{0})(d^{0})$ and the inequalities $\varphi(x^{1})\leq\varphi(x^{0})+\sigma\tau_{0}\langle\nabla\varphi(x^{0}),d^{0}\rangle<\varphi(x^{0}),$ which clearly ensure that $x^{1}\in\Omega$. Then we get by induction that either $x^{k}\in\Omega$, or $\nabla\varphi(x^{k})=0$ whenever $k\in{\rm I\\!N}$. Thus assertion (i) is verified. Next we prove assertion (ii). To proceed, suppose that $\\{x^{k}\\}$ has a limiting point $\bar{x}\in\mathbb{R}^{n}$, i.e., there exists a subsequence $\\{x^{k_{j}}\\}_{j\in{\rm I\\!N}}$ of $\\{x^{k}\\}$ such that $x^{k_{j}}\to\bar{x}$ as $j\to\infty$. Since $\Omega$ is closed and since $x^{k_{j}}\in\Omega$ for all $j\in{\rm I\\!N}$, we get $\bar{x}\in\Omega$. By Assumption 1, the mapping $\partial^{2}\varphi(\bar{x})$ is positive-definite. Then [10, Proposition 4.6] gives us positive numbers $\kappa$ and $\delta$ such that $\langle z,w\rangle\geq\kappa\|w\|^{2}\quad\text{for all }\;z\in\partial^{2}\varphi(x)(w),\;x\in\mathbb{B}_{\delta}(\bar{x}),\;\mbox{ and }\;w\in\mathbb{R}^{n}.$ (3.18) Since $\varphi$ is of class ${\cal C}^{1,1}$ around $\bar{x}$, we get without loss of generality that $\nabla\varphi$ is Lipschitz continuous on $\mathbb{B}_{\delta}(\bar{x})$ with some constant $\ell>0$. The rest of the proof is split into the following two claims. Claim 1: The sequence $\\{\tau_{k_{j}}\\}_{j\in{\rm I\\!N}}$ in Algorithm 3.2 is bounded from below by a positive number $\gamma>0$. Indeed, suppose on the contrary that the statement does not hold. Combining this with $\tau_{k}\geq 0$ gives us a subsequence of $\\{\tau_{k_{j}}\\}$ that converges to $0$. Suppose without loss of generality that $\tau_{k_{j}}\to 0$ as $j\to\infty$. 
Since $-\nabla\varphi(x^{k_{j}})\in\partial^{2}\varphi(x^{k_{j}})(d^{k_{j}})$ for all $j\in{\rm I\\!N}$, we deduce from (3.18) and the Cauchy-Schwarz inequality that $\|\nabla\varphi(x^{k_{j}})\|\geq\kappa\|d^{k_{j}}\|\;\mbox{ whenever }\;j\in{\rm I\\!N},$ which verifies the boundedness of $\\{d^{k_{j}}\\}$. Thus $x^{k_{j}}+\beta^{-1}\tau_{k_{j}}d^{k_{j}}\to\bar{x}$ as $j\to\infty$, and hence $x^{k_{j}}+\beta^{-1}\tau_{k_{j}}d^{k_{j}}\in\mathbb{B}_{\delta}(\bar{x})$ for all $j\in{\rm I\\!N}$ sufficiently large. Since $\varphi$ is of class ${\cal C}^{1,1}$ around $\bar{x}$, we suppose without loss of generality that $\nabla\varphi$ is Lipschitz continuous on $\mathbb{B}_{\delta}(\bar{x})$. It follows then from [33, Lemma A.11] that $\varphi(x^{k_{j}}+\beta^{-1}\tau_{k_{j}}d^{k_{j}})\leq\varphi(x^{k_{j}})+\beta^{-1}\tau_{k_{j}}\langle\nabla\varphi(x^{k_{j}}),d^{k_{j}}\rangle+\frac{\ell\beta^{-2}\tau_{k_{j}}^{2}}{2}\|d^{k_{j}}\|^{2}$ (3.19) whenever indices $j\in{\rm I\\!N}$ are sufficiently large. Due to the exit condition of the backtracking line search in Step 3 of Algorithm 3.2, we have the strict inequality $\varphi(x^{k_{j}}+\beta^{-1}\tau_{k_{j}}d^{k_{j}})>\varphi(x^{k_{j}})+\sigma\beta^{-1}\tau_{k_{j}}\langle\nabla\varphi(x^{k_{j}}),d^{k_{j}}\rangle$ (3.20) for large $j\in{\rm I\\!N}$. Now combining (3.18), (3.19), and (3.20) for such $j$ yields the estimates $\displaystyle\sigma\beta^{-1}\tau_{k_{j}}\langle\nabla\varphi(x^{k_{j}}),d^{k_{j}}\rangle$ $\displaystyle<$ $\displaystyle\beta^{-1}\tau_{k_{j}}\langle\nabla\varphi(x^{k_{j}}),d^{k_{j}}\rangle+\frac{\ell\beta^{-2}\tau_{k_{j}}^{2}}{2}\|d^{k_{j}}\|^{2}$ $\displaystyle\leq$ $\displaystyle\beta^{-1}\tau_{k_{j}}\langle\nabla\varphi(x^{k_{j}}),d^{k_{j}}\rangle+\frac{\ell\beta^{-2}\tau_{k_{j}}^{2}}{2\kappa}\langle\nabla\varphi(x^{k_{j}}),-d^{k_{j}}\rangle,$ which imply in turn that $\sigma\beta>\beta-\frac{\ell}{2\kappa}\tau_{k_{j}}$ for all large $j\in{\rm I\\!N}$. 
Letting $j\to\infty$ gives us $\sigma\beta\geq\beta$, a contradiction due to $\sigma<1$ and $\beta>0$. This justifies the claimed boundedness from below of $\\{\tau_{k_{j}}\\}_{j\in{\rm I\\!N}}$. Claim 2: Any limiting point $\bar{x}$ of $\\{x^{k}\\}$ is a tilt-stable local minimizer of $\varphi$. Indeed, it follows from the continuity of the gradient $\nabla\varphi$ and the convergence $x^{k_{j}}\to\bar{x}$ that $\nabla\varphi(x^{k_{j}})\to\nabla\varphi(\bar{x})$ as $j\to\infty$. Since the sequence $\\{\varphi(x^{k})\\}$ is nonincreasing, we get that $\varphi(\bar{x})$ is a lower bound for $\\{\varphi(x^{k})\\}$. Thus the sequence $\\{\varphi(x^{k})\\}$ must converge to $\varphi(\bar{x})$ as $k\to\infty$. It follows from [44, Theorem 1.44] due to $-\nabla\varphi(x^{k_{j}})\in\partial^{2}\varphi(x^{k_{j}})(d^{k_{j}})$ and the Lipschitz continuity of $\nabla\varphi$ on $\mathbb{B}_{\delta}(\bar{x})$ with constant $\ell$ that $\|\nabla\varphi(x^{k_{j}})\|\leq\ell\|d^{k_{j}}\|\;\text{ for sufficiently large }\;j\in{\rm I\\!N}.$ (3.21) Combining Claim 1 with the estimates in (3.18) and (3.21), we find $j_{0}\in{\rm I\\!N}$ such that $\varphi(x^{k_{j}})-\varphi(x^{k_{j}+1})\geq\sigma\tau_{k_{j}}\langle-\nabla\varphi(x^{k_{j}}),d^{k_{j}}\rangle\geq\sigma\gamma\kappa\|d^{k_{j}}\|^{2}\geq\sigma\gamma\kappa\ell^{-2}\|\nabla\varphi(x^{k_{j}})\|^{2},\;j\geq j_{0}.$ (3.22) Since $\\{\varphi(x^{k})\\}$ is convergent, it follows that the sequence $\\{\varphi(x^{k_{j}})-\varphi(x^{k_{j}+1})\\}_{j\in{\rm I\\!N}}$ converges to $0$ as $j\to\infty$. Furthermore, we deduce from (3.22) that $\\{\|\nabla\varphi(x^{k_{j}})\|\\}$ also converges to $0$, and therefore $\nabla\varphi(\bar{x})=0$. Combining the latter with the positive-definiteness of $\partial^{2}\varphi(\bar{x})$ tells us by [56, Theorem 1.3] that $\bar{x}$ is a tilt-stable local minimizer of $\varphi$. This completes the proof. ###### Remark 3.5 (iterative sequences may diverge).
Note that Theorem 3.4 does not claim anything about the convergence of the iterative sequence $\\{x^{k}\\}$. In fact, the divergence of such a sequence can be observed in simple situations under the fulfillment of all the assumptions of Theorem 3.4. To illustrate it, consider the function $\varphi(x):=e^{x}$ on $\mathbb{R}$ with the positive second derivative $\varphi^{\prime\prime}(x)=e^{x}>0$ for all $x\in\mathbb{R}$. Running Algorithm 3.2 with the starting point $x^{0}=1$, it is not hard to check that $\tau_{k}=1$ and $d^{k}=-1$ for all $k\in{\rm I\\!N}$. Thus the sequence $x^{k}=1-k$, $k\in{\rm I\\!N}$, generated by Algorithm 3.2 is obviously divergent. We conclude this section by giving a simple additional condition to Assumption 1 that ensures the global convergence of any sequence of iterates in Algorithm 3.2. ###### Assumption 2. The level set $\Omega$ from (3.17) is bounded. To establish the global convergence of Algorithm 3.2, we first present the following useful lemma of independent interest. ###### Lemma 3.6 (uniform positive-definiteness of second-order subdifferentials). Let $\varphi\colon\mathbb{R}^{n}\to\mathbb{R}$ be a function of class $\mathcal{C}^{1,1}$, and let $x^{0}$ be an arbitrary vector for which Assumptions 1 and 2 are satisfied. Then there exists $\kappa>0$ such that for each $x\in\Omega$ we have $\langle z,w\rangle\geq\kappa\|w\|^{2}\;\mbox{ whenever }\;z\in\partial^{2}\varphi(x)(w)\;\mbox{ and }\;w\in\mathbb{R}^{n}.$ (3.23) [Proof.] Since the mapping $\partial^{2}\varphi(x)$ is positive-definite for each $x\in\Omega$ by Assumption 1, we deduce from [10, Proposition 4.6] that there exist $\kappa_{x}>0$ and a neighborhood $U_{x}$ of $x$ such that $\langle z,w\rangle\geq\kappa_{x}\|w\|^{2}\;\text{ for all }\;z\in\partial^{2}\varphi(y)(w),\;y\in U_{x},\;\mbox{ and }\;w\in\mathbb{R}^{n}.$ (3.24) Note that the neighborhood system $\\{U_{x}\;|\;x\in\Omega\\}$ provides an open cover of $\Omega$.
Using the compactness of the set $\Omega$ due to its closedness and Assumption 2, we find finitely many points $x_{1},\ldots,x_{p}\in\Omega$ such that $\Omega\subset\bigcup_{j=1}^{p}U_{x_{j}}$. Denoting $\kappa:={\rm min}\big{\\{}\kappa_{x_{1}},\ldots,\kappa_{x_{p}}\big{\\}}>0,$ we arrive at the fulfillment of the claimed condition (3.23) for each $x\in\Omega$. Now we are ready to justify the global convergence of Algorithm 3.2. ###### Theorem 3.7 (global convergence of the damped Newton algorithm for ${\cal C}^{1,1}$ functions). In the setting of Theorem 3.4, suppose in addition that Assumption 2 is satisfied. Then the sequence $\\{x^{k}\\}$ is convergent, and its limit is a tilt-stable local minimizer of $\varphi$. [Proof.] The well-definedness of the sequence $\\{x^{k}\\}$ and the inclusion $\\{x^{k}\\}\subset\Omega$ follow from Theorem 3.4. Furthermore, employing Assumptions 1 and 2, the inclusion $-\nabla\varphi(x^{k})\in\partial^{2}\varphi(x^{k})(d^{k})$ for all $k\in{\rm I\\!N}$, and Lemma 3.6 ensures the existence of $\kappa>0$ such that $\langle-\nabla\varphi(x^{k}),d^{k}\rangle\geq\kappa\|d^{k}\|^{2}\quad\text{for all }\;k\in{\rm I\\!N}.$ (3.25) Assumption 2 tells us that the sequence $\\{x^{k}\\}$ is bounded, and so it has a limiting point $\bar{x}\in\Omega$. Hence the value $\varphi(\bar{x})$ is a limiting point of the numerical sequence $\\{\varphi(x^{k})\\}$. Combining this with the nonincreasing property of $\\{\varphi(x^{k})\\}$ yields the convergence of $\\{\varphi(x^{k})\\}$ to $\varphi(\bar{x})$ as $k\to\infty$. It follows from (3.25) that $\varphi(x^{k})-\varphi(x^{k+1})\geq\sigma\tau_{k}\langle-\nabla\varphi(x^{k}),d^{k}\rangle\geq\sigma\tau_{k}\kappa\|d^{k}\|^{2}\;\text{ for all }\;k\in{\rm I\\!N}.$ (3.26) The above convergence of $\\{\varphi(x^{k})\\}$ implies that the sequence $\\{\varphi(x^{k})-\varphi(x^{k+1})\\}_{k\in{\rm I\\!N}}$ converges to $0$ as $k\to\infty$.
It follows from (3.26) that $\lim_{k\to\infty}\tau_{k}\|d^{k}\|^{2}=0.$ (3.27) Now let us show that the sequence $\\{x^{k}\\}$ converges to $\bar{x}$ as $k\to\infty$ by using Ostrowski’s condition from [20, Proposition 8.3.10]. To accomplish this, we verify that there is a neighborhood of $\bar{x}$ within which no other limiting point of $\\{x^{k}\\}$ exists, and the following condition holds: $\lim_{k\to\infty}\|x^{k+1}-x^{k}\|=0.$ (3.28) Indeed, tilt stability of the local minimizer $\bar{x}$ of $\varphi$ ensures the existence of $\delta>0$ for which $\varphi$ is strongly convex on $\mathbb{B}_{\delta}(\bar{x})$ due to [10, Theorem 4.7]. Arguing by contradiction, suppose that there is $\widetilde{x}\in\mathbb{B}_{\delta}(\bar{x})$ such that $\widetilde{x}\neq\bar{x}$ and $\widetilde{x}$ is a limiting point of $\\{x^{k}\\}$. Theorem 3.4 tells us that $\widetilde{x}$ is also a tilt-stable local minimizer of $\varphi$, a contradiction with the strong convexity of $\varphi$ on $\mathbb{B}_{\delta}(\bar{x})$. Moreover, the construction of $\\{x^{k}\\}$ and the condition $\tau_{k}\in(0,1]$ imply the estimate $\|x^{k+1}-x^{k}\|^{2}=\tau_{k}^{2}\|d^{k}\|^{2}\leq\tau_{k}\|d^{k}\|^{2}\;\mbox{ for all }\;k\in{\rm I\\!N}.$ Passing to the limit there as $k\to\infty$ and using (3.27), we verify (3.28). Finally, it follows from [20, Proposition 8.3.10] that $\\{x^{k}\\}$ converges to $\bar{x}$, which thus completes the proof. ## 4 Rates of Convergence of GDNM for ${\cal C}^{1,1}$ Optimization This section is devoted to the study of convergence rates of the globally convergent Algorithm 3.2 for minimizing ${\cal C}^{1,1}$ functions. First we recall the standard notions of convergence rates used in what follows; see [20, Definition 7.2.1]. ###### Definition 4.1 (rates of convergence). Let $\\{x^{k}\\}\subset\mathbb{R}^{n}$ be a sequence of vectors converging to $\bar{x}$ as $k\to\infty$ with $\bar{x}\neq x^{k}$ for all $k\in{\rm I\\!N}$.
The convergence rate is said to be (at least): * (i) R-linear if $0<\limsup_{k\to\infty}\left(\|x^{k}-\bar{x}\|\right)^{1/k}<1,$ i.e., there exist $\mu\in(0,1)$, $c>0$, and $k_{0}\in{\rm I\\!N}$ such that $\|x^{k}-\bar{x}\|\leq c\mu^{k}\quad\text{for all }\;k\geq k_{0}.$ * (ii) Q-linear if $\limsup_{k\to\infty}\frac{\|x^{k+1}-\bar{x}\|}{\|x^{k}-\bar{x}\|}<1,$ i.e., there exist $\mu\in(0,1)$ and $k_{0}\in{\rm I\\!N}$ such that $\|x^{k+1}-\bar{x}\|\leq\mu\|x^{k}-\bar{x}\|\quad\text{for all }\;k\geq k_{0}.$ * (iii) Q-superlinear if $\lim_{k\to\infty}\frac{\|x^{k+1}-\bar{x}\|}{\|x^{k}-\bar{x}\|}=0.$ Our first result here establishes the linear convergence of Algorithm 3.2 under the general assumptions formulated in the preceding section. ###### Theorem 4.2 (linear convergence of generalized damped Newton algorithm for ${\cal C}^{1,1}$ functions). In the setting of Theorem 3.4, suppose in addition that the sequence $\\{x^{k}\\}$ converges to some vector $\bar{x}$ with $x^{k}\neq\bar{x}$ for all $k\in{\rm I\\!N}$. Then we have the following assertions: * (i) The sequence $\\{\varphi(x^{k})\\}$ converges to $\varphi(\bar{x})$ at least Q-linearly. * (ii) The sequences $\\{x^{k}\\}$ and $\\{\|\nabla\varphi(x^{k})\|\\}$ converge to $\bar{x}$ and $0$, respectively, at least R-linearly. [Proof.] Suppose that $\\{x^{k}\\}$ converges to $\bar{x}$. It follows from Theorem 3.4 that $\bar{x}$ is a tilt-stable local minimizer of $\varphi$.
Using now the characterizations of tilt-stable local minimizers taken from [10, Theorem 4.7], we deduce that there exist $\kappa>0$ and $\delta>0$ such that $\varphi$ is strongly convex on $\mathbb{B}_{\delta}(\bar{x})$ with modulus $\kappa$ while satisfying $\langle z,w\rangle\geq\kappa\|w\|^{2}\quad\text{for all }\;z\in\partial^{2}\varphi(x)(w),\;x\in\mathbb{B}_{\delta}(\bar{x}),\;\mbox{ and }\;w\in\mathbb{R}^{n}.$ (4.29) Furthermore, due to the local Lipschitz continuity of $\nabla\varphi$ around $\bar{x}$, we suppose without loss of generality that $\nabla\varphi$ is Lipschitz continuous on $\mathbb{B}_{\delta}(\bar{x})$ with some constant $\ell>0$. The strong convexity of $\varphi$ on $\mathbb{B}_{\delta}(\bar{x})$ tells us that $\varphi(x)\geq\varphi(u)+\langle\nabla\varphi(u),x-u\rangle+\frac{\kappa}{2}\|x-u\|^{2}\;\mbox{ and}$ (4.30) $\langle\nabla\varphi(x)-\nabla\varphi(u),x-u\rangle\geq\kappa\|x-u\|^{2}\;\mbox{ for all }\;x,u\in\mathbb{B}_{\delta}(\bar{x}).$ (4.31) Since $x^{k}\to\bar{x}$ as $k\to\infty$, we have that $x^{k}\in\mathbb{B}_{\delta}(\bar{x})$ for all $k\in{\rm I\\!N}$ sufficiently large. Substituting $x:=x^{k}$ and $u:=\bar{x}$ into (4.30) and (4.31), and then using the Cauchy-Schwarz inequality together with $\nabla\varphi(\bar{x})=0$ yields the estimates $\varphi(x^{k})\geq\varphi(\bar{x})+\frac{\kappa}{2}\|x^{k}-\bar{x}\|^{2}\;\mbox{ and}$ (4.32) $\|\nabla\varphi(x^{k})\|\geq\kappa\|x^{k}-\bar{x}\|$ (4.33) for all large $k\in{\rm I\\!N}$.
The Lipschitz continuity of $\nabla\varphi$ on $\mathbb{B}_{\delta}(\bar{x})$ with the above constant $\ell>0$ and the result of [33, Lemma A.11] ensure that $\varphi(x^{k})-\varphi(\bar{x})=|\varphi(x^{k})-\varphi(\bar{x})-\langle\nabla\varphi(\bar{x}),x^{k}-\bar{x}\rangle|\leq\frac{\ell}{2}\|x^{k}-\bar{x}\|^{2}\quad\text{for large }\;k\in{\rm I\\!N}.$ (4.34) Furthermore, the Lipschitz continuity of $\nabla\varphi$ on $\mathbb{B}_{\delta}(\bar{x})$ together with the inclusion $-\nabla\varphi(x^{k})\in\partial^{2}\varphi(x^{k})(d^{k})$ implies by [44, Theorem 1.44] that $\|\nabla\varphi(x^{k})\|\leq\ell\|d^{k}\|\quad\text{for large }\;k\in{\rm I\\!N}.$ (4.35) Proceeding similarly to the proof of Theorem 3.4 shows that the sequence $\\{\tau_{k}\\}$ is bounded from below by some constant $\gamma>0$. Combining the latter with (4.29) and (4.35) yields $\varphi(x^{k})-\varphi(x^{k+1})\geq\sigma\tau_{k}\langle-\nabla\varphi(x^{k}),d^{k}\rangle\geq\sigma\gamma\kappa\|d^{k}\|^{2}\geq\sigma\gamma\kappa\ell^{-2}\|\nabla\varphi(x^{k})\|^{2}$ (4.36) for all large $k\in{\rm I\\!N}$. Therefore, for such $k$ we deduce from (4.33), (4.34), and (4.36) that $\varphi(x^{k+1})-\varphi(x^{k})\leq-\sigma\gamma\kappa\ell^{-2}\|\nabla\varphi(x^{k})\|^{2}\leq-\sigma\gamma\kappa^{3}\ell^{-2}\|x^{k}-\bar{x}\|^{2}\leq-2\sigma\gamma\kappa^{3}\ell^{-3}\big{(}\varphi(x^{k})-\varphi(\bar{x})\big{)}.$ This allows us to find $k_{0}\in{\rm I\\!N}$ such that $\varphi(x^{k+1})-\varphi(\bar{x})\leq\mu\big{(}\varphi(x^{k})-\varphi(\bar{x})\big{)}\quad\text{whenever }\;k\geq k_{0},$ which verifies (i) with $\mu:=1-2\sigma\gamma\kappa^{3}\ell^{-3}\in(0,1)$. It readily follows from (4.32) and assertion (i) that $\|x^{k}-\bar{x}\|\leq\sqrt{\frac{2}{\kappa}\big{(}\varphi(x^{k})-\varphi(\bar{x})\big{)}}\leq\sqrt{\frac{2\mu}{\kappa}\big{(}\varphi(x^{k-1})-\varphi(\bar{x})\big{)}}\leq\ldots\leq\sqrt{\frac{2\mu^{k-k_{0}}}{\kappa}\big{(}\varphi(x^{k_{0}})-\varphi(\bar{x})\big{)}}$ whenever $k\geq k_{0}$.
Thus we get $\|x^{k}-\bar{x}\|\leq M\lambda^{k}$ for such $k$, where $M:=\sqrt{\frac{2}{\kappa}\mu^{-k_{0}}\big{(}\varphi(x^{k_{0}})-\varphi(\bar{x})\big{)}}\quad\text{and}\quad\lambda:=\sqrt{\mu}.$ Since $\lambda\in(0,1)$, it follows that $\displaystyle\lim_{k\to\infty}\lambda^{k}=0$, which implies that the sequence $\\{x^{k}\\}$ converges at least R-linearly to $\bar{x}$ as $k\to\infty$. Employing again the Lipschitz continuity of $\nabla\varphi$ around $\bar{x}$ with constant $\ell>0$, we arrive at $\|\nabla\varphi(x^{k})\|=\|\nabla\varphi(x^{k})-\nabla\varphi(\bar{x})\|\leq\ell\|x^{k}-\bar{x}\|\leq\ell M\lambda^{k}\quad\text{for all }\;k\geq k_{0},$ which verifies assertion (ii) and thus completes the proof of the theorem. To proceed further with deriving verifiable conditions ensuring the Q-superlinear convergence of Algorithm 3.2, we need to recall some important notions from variational analysis. A crucial role in the theory and applications of nonsmooth Newton-type methods is played by a remarkable subclass of single-valued locally Lipschitzian mappings defined as follows; see the books [20, 33, 35] with the commentaries and references therein. A mapping $f\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ is semismooth at $\bar{x}$ if it is locally Lipschitzian around this point and the limit $\lim_{A\in\text{\rm co}\overline{\nabla}f(\bar{x}+tu^{\prime})\atop u^{\prime}\to u,t\downarrow 0}Au^{\prime}$ (4.37) exists for all $u\in\mathbb{R}^{n}$, where ‘co’ stands for the convex hull of a set, and where $\overline{\nabla}f$ is given by $\overline{\nabla}f(x):=\big{\\{}A\in\mathbb{R}^{m\times n}\big{|}\;\exists\ x_{k}\overset{\Omega_{f}}{\to}x\;\text{ such that }\;\nabla f(x_{k})\to A\big{\\}},\quad x\in\mathbb{R}^{n},$ with $\Omega_{f}:=\\{x\in\mathbb{R}^{n}\;|\;f\;\text{ is differentiable at }\;x\\}$. Quite recently [26], the concept of semismoothness has been improved and extended to set-valued mappings. 
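As a standard one-dimensional illustration of the semismoothness limit (4.37) (an example added for the reader, not part of the original development), take $f(x):=|x|$ at $\bar{x}=0$:

```latex
% f(x) = |x| is differentiable for x \neq 0 with \nabla f(x) = sign(x), so
\overline{\nabla}f(0)=\{-1,1\},\qquad
\mathrm{co}\,\overline{\nabla}f(0)=[-1,1].
% For u \neq 0 and u' \to u, t \downarrow 0 we eventually have tu' \neq 0 with
% sign(u') = sign(u), so every admissible A equals sign(u), and hence
\lim_{A\in\mathrm{co}\,\overline{\nabla}f(\bar{x}+tu^{\prime})\atop u^{\prime}\to u,\,t\downarrow 0}
Au^{\prime}=\operatorname{sign}(u)\,u=|u|,
% while for u = 0 the limit is 0 since |Au'| \le |u'| \to 0.
```

Thus the limit (4.37) exists for every $u\in\mathbb{R}$, and so the absolute value function is semismooth at the origin.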
To formulate the latter notion, recall first the construction of the directional limiting normal cone to a set $\Omega\subset\mathbb{R}^{s}$ at $\bar{z}\in\Omega$ in the direction $d\in\mathbb{R}^{s}$ introduced in [27] by $N_{\Omega}(\bar{z};d):=\big{\\{}v\in\mathbb{R}^{s}\;\big{|}\;\exists\,t_{k}\downarrow 0,\;d_{k}\to d,\;v_{k}\to v\;\mbox{ with }\;v_{k}\in\widehat{N}_{\Omega}(\bar{z}+t_{k}d_{k})\big{\\}}.$ (4.38) It is obvious that (4.38) agrees with the limiting normal cone (2.6) for $d=0$. The directional limiting coderivative of $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$ at $(\bar{x},\bar{y})\in\mbox{\rm gph}\,F$ in the direction $(u,v)\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ is defined in [24] via (4.38) by the scheme $D^{*}F\big{(}(\bar{x},\bar{y});(u,v)\big{)}(v^{*}):=\big{\\{}u^{*}\in\mathbb{R}^{n}\;\big{|}\;(u^{*},-v^{*})\in N_{\text{gph}\,F}\big{(}(\bar{x},\bar{y});(u,v)\big{)}\big{\\}},\;v^{*}\in\mathbb{R}^{m}.$ (4.39) Using (4.39), we come to the aforementioned property of set-valued mappings introduced in [26]. ###### Definition 4.3 (semismooth∗ property of set-valued mappings). A mapping $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$ is semismooth∗ at $(\bar{x},\bar{y})\in\mbox{\rm gph}\,F$ if whenever $(u,v)\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ we have $\langle u^{*},u\rangle=\langle v^{*},v\rangle\;\mbox{ for all }\;(v^{*},u^{*})\in\mbox{\rm gph}\,D^{*}F\big{(}(\bar{x},\bar{y});(u,v)\big{)}.$ Recall from [26] that the semismooth∗ property holds whenever the graph of $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$ can be represented as a union of finitely many closed and convex sets, as well as for normal cone mappings generated by convex polyhedral sets. Note also that the semismooth∗ property of single-valued locally Lipschitzian mappings $f\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ around $\bar{x}$ agrees with the semismooth property (4.37) at this point provided that $f$ is directionally differentiable at $\bar{x}$.
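The polyhedrality criterion for the semismooth∗ property quoted from [26] can be illustrated by the subdifferential of the absolute value function (a standard example added here, not taken from the original text):

```latex
\mathrm{gph}\,\partial|\cdot|
=\big((-\infty,0]\times\{-1\}\big)
\,\cup\,\big(\{0\}\times[-1,1]\big)
\,\cup\,\big([0,\infty)\times\{1\}\big),
```

which is a union of three closed and convex (in fact polyhedral) sets, and so the mapping $\partial|\cdot|$ is semismooth∗ at every point of its graph.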
Prior to deriving a major theorem on the $Q$-superlinear convergence of Algorithm 3.2, we present an important property of $\mathcal{C}^{1,1}$ functions with semismooth∗ derivatives. ###### Proposition 4.4 (directional estimate for functions with semismooth∗ derivatives). Let $\varphi\colon\mathbb{R}^{n}\to\mathbb{R}$ be continuously differentiable around $\bar{x}\in\mathbb{R}^{n}$ with $\nabla\varphi(\bar{x})=0$. Suppose that $\nabla\varphi$ is locally Lipschitzian around this point with modulus $\ell>0$, and that $\nabla\varphi$ is semismooth∗ at this point. Assume further that a sequence $\\{x^{k}\\}$ converges to $\bar{x}$ with $x^{k}\neq\bar{x}$ for all $k\in{\rm I\\!N}$, and that a sequence $\\{d^{k}\\}$ is superlinearly convergent with respect to $\\{x^{k}\\}$, i.e., $\|x^{k}+d^{k}-\bar{x}\|=o(\|x^{k}-\bar{x}\|)$. Consider the following statements: * (i) $\nabla\varphi$ is directionally differentiable at $\bar{x}$, and there exists $\kappa>0$ such that $\langle\nabla\varphi(x^{k}),d^{k}\rangle\leq-\frac{1}{\kappa}\|d^{k}\|^{2}$ for all sufficiently large $k\in{\rm I\\!N}$. * (ii) There exists $\kappa>0$ such that $\varphi(x^{k}+d^{k})-\varphi(x^{k})\leq\langle\nabla\varphi(x^{k}+d^{k}),d^{k}\rangle-\frac{1}{2\kappa}\|d^{k}\|^{2}$ (4.40) for all sufficiently large $k\in{\rm I\\!N}$. Then we have the estimate $\varphi(x^{k}+d^{k})\leq\varphi(x^{k})+\sigma\langle\nabla\varphi(x^{k}),d^{k}\rangle\;\mbox{ for all large }\;k\in{\rm I\\!N}$ (4.41) provided that either (i) holds and $\sigma\in(0,1/2)$, or (ii) holds and $\sigma\in\left(0,1/(2\ell\kappa)\right)$. [Proof.] Suppose that (i) holds. The directional differentiability and semismoothness∗ of $\nabla\varphi$ at $\bar{x}$ ensure by [26, Corollary 3.8] that $\nabla\varphi$ is semismooth at $\bar{x}$. Using now [20, Proposition 8.3.18], we immediately get (4.41). Suppose next that (ii) is satisfied and $\sigma\in\left(0,1/(2\ell\kappa)\right)$.
Since $\\{d^{k}\\}$ is superlinearly convergent with respect to $\\{x^{k}\\}$, by employing [20, Lemma 7.5.7] we have $\lim_{k\to\infty}\|x^{k}-\bar{x}\|/\|d^{k}\|=1,$ (4.42) which ensures in turn that $\|x^{k}+d^{k}-\bar{x}\|=o(\|d^{k}\|)\;\text{ as }\;k\to\infty.$ (4.43) Then the statement in (ii) leads us to the inequalities $\displaystyle\varphi(x^{k}+d^{k})-\varphi(x^{k})-\sigma\langle\nabla\varphi(x^{k}),d^{k}\rangle$ $\displaystyle\leq$ $\displaystyle\langle\nabla\varphi(x^{k}+d^{k}),d^{k}\rangle-\frac{1}{2\kappa}\|d^{k}\|^{2}-\sigma\langle\nabla\varphi(x^{k}),d^{k}\rangle$ $\displaystyle\leq$ $\displaystyle\|\nabla\varphi(x^{k}+d^{k})\|\cdot\|d^{k}\|-\frac{1}{2\kappa}\|d^{k}\|^{2}+\sigma\|\nabla\varphi(x^{k})\|\cdot\|d^{k}\|$ $\displaystyle\leq$ $\displaystyle\ell\|x^{k}+d^{k}-\bar{x}\|\cdot\|d^{k}\|-\frac{1}{2\kappa}\|d^{k}\|^{2}+\sigma\ell\|x^{k}-\bar{x}\|\cdot\|d^{k}\|$ $\displaystyle\leq$ $\displaystyle\|d^{k}\|^{2}\left(\ell\frac{\|x^{k}+d^{k}-\bar{x}\|}{\|d^{k}\|}-\frac{1}{2\kappa}+\sigma\ell\frac{\|x^{k}-\bar{x}\|}{\|d^{k}\|}\right)$ for all large $k\in{\rm I\\!N}$. Finally, it follows from the inequality $\sigma<1/(2\ell\kappa)$, (4.42), and (4.43) that $\varphi(x^{k}+d^{k})-\varphi(x^{k})-\sigma\langle\nabla\varphi(x^{k}),d^{k}\rangle\leq 0\quad\text{whenever }\;k\in{\rm I\\!N}\;\text{ is sufficiently large},$ which readily justifies (4.41) and completes the proof of the proposition. Now we are ready to derive the main result of this section that establishes the Q-superlinear convergence of Algorithm 3.2 under the imposed assumptions. ###### Theorem 4.5 (superlinear convergence of the generalized damped Newton algorithm for ${\cal C}^{1,1}$ functions). In the setting of Theorem 3.4, suppose that $\\{x^{k}\\}$ converges to $\bar{x}$ as $k\to\infty$ with $x^{k}\neq\bar{x}$ for all $k\in{\rm I\\!N}$, and that $\nabla\varphi$ is locally Lipschitzian around $\bar{x}$ with some constant $\ell>0$ being also semismooth∗ at this point.
Then the convergence rate of $\\{x^{k}\\}$ is at least Q-superlinear if either one of the following two conditions holds: * (i) $\nabla\varphi$ is directionally differentiable at $\bar{x}$. * (ii) $\sigma\in\left(0,1/(2\ell\kappa)\right)$, where $\kappa>0$ is a modulus of tilt stability of $\varphi$ at $\bar{x}$. Furthermore, in both cases (i) and (ii) the sequence $\\{\varphi(x^{k})\\}$ converges Q-superlinearly to $\varphi(\bar{x})$, and the sequence $\\{\nabla\varphi(x^{k})\\}$ converges Q-superlinearly to $0$ as $k\to\infty$. [Proof.] Suppose that the sequence $\\{x^{k}\\}$ converges as $k\to\infty$ to a point $\bar{x}\in\mathbb{R}^{n}$. We split the proof of the theorem into the following three claims. Claim 1: The sequence $\\{d^{k}\\}$ is superlinearly convergent with respect to $\\{x^{k}\\}$. Indeed, it follows from Theorem 3.4 that $\bar{x}$ is a tilt-stable local minimizer of $\varphi$ with some modulus $\kappa>0$. By using the characterization of tilt-stable minimizers via the combined second-order subdifferential taken from [46, Theorem 3.5] and [10, Proposition 4.6], we find a positive number $\delta$ such that $\langle z,w\rangle\geq\frac{1}{\kappa}\|w\|^{2}\quad\text{for all }\;z\in\partial^{2}\varphi(x)(w),\;x\in\mathbb{B}_{\delta}(\bar{x}),\;\mbox{ and }\;w\in\mathbb{R}^{n}.$ (4.44) Using the subadditivity property of coderivatives obtained in [34, Lemma 5.6], we have $\partial^{2}\varphi(x^{k})(d^{k})\subset\partial^{2}\varphi(x^{k})(x^{k}+d^{k}-\bar{x})+\partial^{2}\varphi(x^{k})(-x^{k}+\bar{x}).$ Since $-\nabla\varphi(x^{k})\in\partial^{2}\varphi(x^{k})(d^{k})$, for all $k\in{\rm I\\!N}$ there exists $v^{k}\in\partial^{2}\varphi(x^{k})(-x^{k}+\bar{x})$ such that $-\nabla\varphi(x^{k})-v^{k}\in\partial^{2}\varphi(x^{k})(x^{k}+d^{k}-\bar{x}).$ Employing (4.44) and the Cauchy-Schwarz inequality, we have $\|x^{k}+d^{k}-\bar{x}\|\leq\kappa\|\nabla\varphi(x^{k})+v^{k}\|\quad\text{for sufficiently large }\;k\in{\rm I\\!N}.$ (4.45) By the semismoothness∗
of $\nabla\varphi$ at $\bar{x}$ together with $\nabla\varphi(\bar{x})=0$ and [34, Lemma 5.5], we have $\|\nabla\varphi(x^{k})+v^{k}\|=\|\nabla\varphi(x^{k})-\nabla\varphi(\bar{x})+v^{k}\|=o(\|x^{k}-\bar{x}\|).$ (4.46) Combining (4.45) and (4.46) tells us that $\|x^{k}+d^{k}-\bar{x}\|=o(\|x^{k}-\bar{x}\|)$ as $k\to\infty$, which thus completes the proof of this claim. Claim 2: We have $\tau_{k}=1$ for all $k\in{\rm I\\!N}$ sufficiently large provided that either condition (i) or (ii) of this theorem holds. To proceed, we need to show that for all $k\in{\rm I\\!N}$ sufficiently large, we get the inequality (4.41). Assume first that (i) holds. Due to the inclusion $-\nabla\varphi(x^{k})\in\partial^{2}\varphi(x^{k})(d^{k})$ for all $k\in{\rm I\\!N}$ and the estimate (4.44), we have $\langle\nabla\varphi(x^{k}),d^{k}\rangle\leq-\frac{1}{\kappa}\|d^{k}\|^{2}$ whenever $k\in{\rm I\\!N}$ is sufficiently large. Then by Proposition 4.4 we get (4.41), which readily verifies the claimed assertion in case (i). Assuming now the conditions in (ii) and using Claim 1, we easily get that the sequence $\\{d^{k}\\}$ converges to $0$ as $k\to\infty$, and that $x^{k}+d^{k}\to\bar{x}$ as $k\to\infty$. Employing the uniform second-order growth condition for tilt-stable minimizers from [46, Theorem 3.2] gives us a neighborhood $U$ of $\bar{x}$ such that $\varphi(x)\geq\varphi(u)+\langle\nabla\varphi(u),x-u\rangle+\frac{1}{2\kappa}\|x-u\|^{2}\quad\text{for all }\;x,u\in U,$ which implies the inequality (4.40). Then by Proposition 4.4 we get (4.41), and hence complete the verification of the claimed assertion in this case. Claim 3: The conclusions of the theorem about the Q-superlinear convergence of the sequences $\\{x^{k}\\}$, $\\{\varphi(x^{k})\\}$, and $\\{\nabla\varphi(x^{k})\\}$ hold provided that either condition (i) or (ii) of this theorem holds.
To justify this assertion, we get from Claim 2 that $\tau_{k}=1$ for all sufficiently large $k\in{\rm I\\!N}$, and thus Algorithm 3.2 eventually becomes Algorithm 3.1. Hence the claimed convergence results follow from [34, Theorems 5.7 and 5.12]. This verifies the assertions of Claim 3 and completes the proof of the theorem. ## 5 GDNM for Problems of Quadratic Composite Optimization In this section we study a special class of constrained optimization problems given in the form $\displaystyle\text{minimize }\;\varphi(x):=f(x)+g(x),\quad x\in\mathbb{R}^{n},$ (5.47) where $f\colon\mathbb{R}^{n}\to\mathbb{R}$ is a convex and smooth function, while $g\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is a convex and extended-real-valued one. This class is known as problems of convex composite optimization. Observe that, although (5.47) is written in the unconstrained format, it includes constrained optimization problems with the effective constraints $x\in\mbox{\rm dom}\,g$. In what follows we pay the main attention to a subclass of (5.47) described by $\displaystyle\text{minimize }\;\varphi(x):=\frac{1}{2}\langle Ax,x\rangle+\langle b,x\rangle+\alpha+g(x),\quad x\in\mathbb{R}^{n},$ (5.48) where $A\in\mathbb{R}^{n\times n}$ is a positive-semidefinite matrix, $b\in\mathbb{R}^{n}$, and $\alpha\in\mathbb{R}$. That is, (5.48) is a subclass of (5.47) with $f$ being a quadratic function. This allows us to label (5.48) as problems of quadratic composite optimization. Note that problems of type (5.48) frequently appear, e.g., in practical models of machine learning and statistics. In particular, various Lasso problems considered in Section 6 can be written in form (5.48). Here are some other important classes of problems arising in practical modeling, which are reduced to (5.48). ###### Example 5.1 (support vector machine problems).
Given the training data $(x_{i},y_{i})$, $i=1,\ldots,m$, where $x_{i}\in\mathbb{R}^{n}$ are the observations and $y_{i}\in\\{-1,1\\}$ are the labels, the support vector classification is a problem of finding a hyperplane $y=\langle w,x\rangle+b$ such that the data with different labels can be separated by the hyperplane. One of the most popular support vector machine models [32] is the regularized penalty model $\displaystyle\text{minimize }\;\varphi(w,b):=\frac{1}{2}\|w\|^{2}+C\sum_{i=1}^{m}\xi(w,x_{i},y_{i},b)\;\mbox{ with }\;w\in\mathbb{R}^{n}\;\mbox{ and }\;b\in\mathbb{R},$ (5.49) where $C>0$ is a penalty parameter, and where $\xi\colon\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is called a loss function. Typical loss functions are given as follows: * (i) L1-loss, or $\ell_{1}$ hinge loss: $\xi(w,x_{i},y_{i},b)=\max\\{1-y_{i}(\langle w,x_{i}\rangle+b),0\\}$. * (ii) L2-loss, or squared hinge loss: $\xi(w,x_{i},y_{i},b)=\max\\{1-y_{i}(\langle w,x_{i}\rangle+b),0\\}^{2}$. * (iii) Logistic loss: $\xi(w,x_{i},y_{i},b)=\log(1+e^{-y_{i}(\langle w,x_{i}\rangle+b)})$. ###### Example 5.2 (convex clustering problems). Let $A:=[a_{1},\ldots,a_{n}]\in\mathbb{R}^{d\times n}$ be a given matrix with $n$ observations and $d$ features. The convex clustering model [55] is described in the form of quadratic composite optimization (5.48) by: $\displaystyle\text{minimize }\;\frac{1}{2}\sum_{i=1}^{n}\|x_{i}-a_{i}\|^{2}+\gamma\sum_{i<j}\|x_{i}-x_{j}\|_{p},\quad X=[x_{1},\ldots,x_{n}]\in\mathbb{R}^{d\times n},$ (5.50) where $\gamma>0$ is a tuning parameter, and where $\|\cdot\|_{p}$ denotes the $p$-norm. Typically the norm parameter $p$ is chosen as $1$, $2$, or $\infty$. ###### Example 5.3 (constrained quadratic optimization). Consider problem (5.48), where $g$ is the indicator function of a nonempty, closed, and convex set $\Omega$. Then (5.48) is known as a constrained quadratic optimization problem.
Some of the typical constraints are given by: * (i) Box constrained set: $\Omega=\text{Box}[l,u]:=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;l_{i}\leq x_{i}\leq u_{i},\;i=1,\ldots,n\big{\\}}$. * (ii) Half-space: $\Omega=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;\langle a,x\rangle\leq\alpha\big{\\}}$, where $a\in\mathbb{R}^{n}\setminus\\{0\\}$ and $\alpha\in\mathbb{R}$. * (iii) Affine set: $\Omega=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;Ax=b\big{\\}}$, where $A$ is an $m\times n$ matrix and $b\in\mathbb{R}^{m}$. ###### Remark 5.4 (subproblems for other methods). Quadratic composite optimization problems of type (5.48) not only cover optimization models in machine learning, statistics, and other applied areas, but also arise as subproblems for some efficient algorithms including sequential quadratic programming methods (SQP) [6, 33], augmented Lagrangian methods [28, 31, 37, 58, 60, 61], proximal Newton methods [36, 50], etc. To develop now a globally convergent damped Newton method for solving quadratic composite optimization problems of type (5.48), we employ the basic machinery of variational analysis, which allows us to reduce (5.48) to unconstrained problems with ${\cal C}^{1,1}$ objectives. Following [62], recall the corresponding notions used in our subsequent developments. ###### Definition 5.5 (Moreau envelopes and proximal mappings). Given an extended-real-valued, proper, l.s.c. function $\varphi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ and given a parameter value $\gamma>0$, the Moreau envelope $e_{\gamma}\varphi$ and the proximal mapping $\textit{\rm Prox}_{\gamma\varphi}$ are defined by $e_{\gamma}\varphi(x):=\inf\left\\{\varphi(y)+\frac{1}{2\gamma}\|y-x\|^{2}\;\Big{|}\;y\in\mathbb{R}^{n}\right\\},$ (5.51) $\textit{\rm Prox}_{\gamma\varphi}(x):={\rm argmin}\left\\{\varphi(y)+\frac{1}{2\gamma}\|y-x\|^{2}\;\Big{|}\;y\in\mathbb{R}^{n}\right\\}.$ (5.52) If $\gamma=1$, we use the notation $e\varphi(x)$ and $\text{\rm Prox}_{\varphi}(x)$ in (5.51) and (5.52), respectively.
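To make Definition 5.5 concrete, here is a small numerical sketch (an illustration added for the reader, not part of the original text) for the one-dimensional function $\varphi(x)=|x|$: its proximal mapping is the classical soft-thresholding operator and its Moreau envelope is the Huber function, and the script checks the gradient identity $\nabla e_{\gamma}\varphi(x)=\gamma^{-1}\big(x-\text{\rm Prox}_{\gamma\varphi}(x)\big)$ stated in (5.53) below against central finite differences.

```python
import numpy as np

# 1-D example phi(x) = |x|; the closed forms below are standard facts.
def prox_abs(x, gamma):
    # Prox_{gamma phi}(x) from (5.52): soft thresholding at level gamma
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def moreau_abs(x, gamma):
    # e_gamma phi(x) from (5.51), evaluated at its unique minimizer
    # y = Prox_{gamma phi}(x); this equals the Huber function
    y = prox_abs(x, gamma)
    return np.abs(y) + (y - x) ** 2 / (2.0 * gamma)

gamma = 0.5
x = np.linspace(-2.0, 2.0, 9)

# gradient formula (5.53): grad e_gamma phi(x) = (x - Prox_{gamma phi}(x)) / gamma
g_formula = (x - prox_abs(x, gamma)) / gamma

# central finite differences of the Moreau envelope
h = 1e-6
g_numeric = (moreau_abs(x + h, gamma) - moreau_abs(x - h, gamma)) / (2.0 * h)
print(np.max(np.abs(g_formula - g_numeric)))  # agreement up to O(h)
```

The identity verified here numerically is exactly what Lemma 5.6(iii) asserts in the general convex case.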
Both Moreau envelopes and proximal mappings have been well recognized in variational analysis and optimization as efficient tools of regularization and approximation of nonsmooth functions. Prior to establishing the main convergence results of this section, we present several lemmas of their own interest. The first one, taken from [1, Proposition 12.30], lists those properties of Moreau envelopes and proximal mappings for convex extended-real-valued functions that are needed to derive the main results obtained below. ###### Lemma 5.6 (Moreau envelopes and proximal mappings for convex functions). Let $\varphi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ be a proper, l.s.c., convex function. Then the following assertions hold for all $\gamma>0$: * (i) The Moreau envelope $e_{\gamma}\varphi$ is continuously differentiable, and its gradient is Lipschitz continuous with modulus $1/\gamma$ on $\mathbb{R}^{n}$. * (ii) The proximal mapping $\text{\rm Prox}_{\gamma\varphi}$ is single-valued, monotone, and nonexpansive, i.e., it is Lipschitz continuous with modulus $1$ on $\mathbb{R}^{n}$. * (iii) The gradient of $e_{\gamma}\varphi$ is calculated by $\nabla e_{\gamma}\varphi(x)=\frac{1}{\gamma}\Big{(}x-\text{\rm Prox}_{\gamma\varphi}(x)\Big{)}=\big{(}\gamma I+(\partial\varphi)^{-1}\big{)}^{-1}(x)\;\mbox{ for all }\;x\in\mathbb{R}^{n}.$ (5.53) The results of Lemma 5.6 allow us to pass from nonsmooth convex optimization problems of type (5.48) with extended-valued objectives (i.e., including constraints) to an unconstrained ${\cal C}^{1,1}$ problem given in form (1.1). Note that such an approach has been used in [34, 49] to design locally convergent pure Newton algorithms for optimization problems and subgradient inclusions associated with prox-regular functions [62]. However, now we go further from the numerical viewpoint.
Exploiting the quadratic composite structure of problems (5.48) and their specifications leads us to the design and justification of a new globally convergent algorithm with constructive calculations of its parameters via the problem data. To proceed, let $\gamma>0$ be such that the matrix $I-\gamma A$ is positive-definite. Denoting $Q:=(I-\gamma A)^{-1}$, $c:=\gamma Qb$, and $P:=Q-I$, define the unconstrained optimization problem by minimize $\displaystyle\psi(u):=\frac{1}{2}\langle Pu,u\rangle+\langle c,u\rangle+\gamma e_{\gamma}g(u)\;\mbox{ subject to }\;u\in\mathbb{R}^{n}.$ (5.54) The following lemma reveals some important properties of the optimization problem (5.54). ###### Lemma 5.7 (quadratic composite problems via proximal mappings). Let $\psi$ be given in (5.54). Then $\psi$ is a continuously differentiable function represented by $\psi(u):=\frac{1}{2}\langle Pu,u\rangle+\langle c,u\rangle+\gamma g\big{(}\text{\rm Prox}_{\gamma g}(u)\big{)}+\frac{1}{2}\|u-\text{\rm Prox}_{\gamma g}(u)\|^{2}.$ (5.55) Moreover, the mapping $\nabla\psi$ is Lipschitz continuous on $\mathbb{R}^{n}$ with modulus $\ell:=1+\|Q\|$, and $\nabla\psi(u)=Qu-\text{\rm Prox}_{\gamma g}(u)+c.$ (5.56) If in addition $A$ is positive-definite, then $\psi$ is strongly convex with modulus $\lambda_{\text{\rm min}}(P)>0$. [Proof.] Due to the convexity of $g$ and Lemma 5.6, the Moreau envelope $e_{\gamma}g$ is continuously differentiable and the proximal mapping $\text{\rm Prox}_{\gamma g}$ is nonexpansive on $\mathbb{R}^{n}$. Thus the function $\psi$ is continuously differentiable as well. The representations in (5.55) and (5.56) follow from the definition of $\psi$ and formula (5.53).
Furthermore, for any $u_{1},u_{2}\in\mathbb{R}^{n}$ we have $\|\nabla\psi(u_{1})-\nabla\psi(u_{2})\|=\|Qu_{1}-Qu_{2}-\text{\rm Prox}_{\gamma g}(u_{1})+\text{\rm Prox}_{\gamma g}(u_{2})\|\leq(1+\|Q\|)\|u_{1}-u_{2}\|=\ell\|u_{1}-u_{2}\|,$ which justifies the global Lipschitz continuity of $\nabla\psi$ on $\mathbb{R}^{n}$ with the uniform constant $\ell$ defined above. Suppose further that $A$ is positive-definite. Combining this with the positive-definiteness of $I-\gamma A$ readily yields the positive-definiteness of $P$. Thus $\psi$ in (5.54) is strongly convex on $\mathbb{R}^{n}$ with modulus $\lambda_{\text{\rm min}}(P)>0$. Next we establish the relationship between the optimization problems (5.48) and (5.54). ###### Lemma 5.8 (reduction of quadratic composite problems to ${\cal C}^{1,1}$ optimization). Consider the optimization problems (5.48) and (5.54). The following are equivalent: * (i) The vector $\bar{x}$ is an optimal solution to (5.48). * (ii) The vector $\bar{x}=Q\bar{u}+c$, where $\bar{u}$ is an optimal solution to (5.54). [Proof.] Using [1, Theorem 26.2] and the expression $\nabla f(x):=Ax+b$ for all $x\in\mathbb{R}^{n}$ tells us that the optimal solution to (5.48) is fully characterized by the equation $x-\text{\rm Prox}_{\gamma g}\big{(}x-\gamma(Ax+b)\big{)}=0.$ (5.57) For each $x\in\mathbb{R}^{n}$ denote $u:=x-\gamma(Ax+b)=(I-\gamma A)x-\gamma b$ and observe by the positive-definiteness of the matrix $I-\gamma A$ that (5.57) is equivalent to $\begin{cases}Qu-\text{\rm Prox}_{\gamma g}(u)+c=0,\\\ x=Qu+c,\end{cases}$ (5.58) where $Q:=(I-\gamma A)^{-1}$ and $c:=\gamma Qb$. The positive-definiteness of $I-\gamma A$ and the positive-semidefiniteness of $A$ imply that $P=Q-I$ is positive-semidefinite.
Furthermore, the convexity of $g$ and Lemmas 5.6 and 5.7 ensure that $e_{\gamma}g$ is continuously differentiable on $\mathbb{R}^{n}$, and that $\bar{u}$ is a solution to (5.54) if and only if we have $0=\nabla\psi(\bar{u})=P\bar{u}+c+\gamma\nabla e_{\gamma}g(\bar{u})=Q\bar{u}-\text{\rm Prox}_{\gamma g}(\bar{u})+c.$ This verifies the equivalence between (i) and (ii) as stated in the lemma. The last lemma of this section provides the representation of the second-order subdifferential of the cost function $\psi$ in the reduced problem (5.54) via the second-order subdifferential of the given regularizer $g$ in the original one (5.48). ###### Lemma 5.9 (second-order subdifferentials of reduced cost functions). Let $\psi\colon\mathbb{R}^{n}\to\mathbb{R}$ be taken from (5.54). Then for each $u\in\mathbb{R}^{n}$ and $w\in\mathbb{R}^{n}$ we have the relationship $z\in\partial^{2}\psi(u)(w)\iff\frac{1}{\gamma}(z-Pw)\in\partial^{2}g\Big{(}\text{\rm Prox}_{\gamma g}(u),\frac{1}{\gamma}\big{(}u-\text{\rm Prox}_{\gamma g}(u)\big{)}\Big{)}\big{(}Qw-z\big{)}.$ [Proof.] Using the second-order subdifferential sum rule from [44, Proposition 1.121] gives us $\partial^{2}\psi(u)(w)=Pw+\gamma\partial^{2}e_{\gamma}g\Big{(}u,\frac{1}{\gamma}\big{(}\nabla\psi(u)-Pu-c\big{)}\Big{)}(w).$ Thus we have that $z\in\partial^{2}\psi(u)(w)$ if and only if $\frac{1}{\gamma}(z-Pw)\in\partial^{2}e_{\gamma}g\Big{(}u,\frac{1}{\gamma}\big{(}\nabla\psi(u)-Pu-c\big{)}\Big{)}(w).$ Due to [34, Lemma 6.4], the latter is equivalent to $\frac{1}{\gamma}(z-Pw)\in\partial^{2}g\Big{(}u-\nabla\psi(u)+Pu+c,\frac{1}{\gamma}\big{(}\nabla\psi(u)-Pu-c\big{)}\Big{)}(w-z+Pw).$ (5.59) Furthermore, we have the equalities ${u-\nabla\psi(u)+Pu+c=u-\gamma\nabla e_{\gamma}g(u)=\text{\rm Prox}_{\gamma g}(u),}$ (5.60) $\frac{1}{\gamma}(\nabla\psi(u)-Pu-c)=\frac{1}{\gamma}\big{(}u-\text{\rm Prox}_{\gamma g}(u)\big{)}.$ (5.61) Combining (5.59) with (5.60) and (5.61) completes the proof of the lemma.
Now we are in a position to design the aforementioned generalized damped Newton-type algorithm to solve problems (5.48) of quadratic composite optimization. ###### Algorithm 5.10 (generalized damped Newton algorithm for problems of quadratic composite optimization). Input: $A\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^{n}$, $g$, $\sigma\in\left(0,\frac{1}{2}\right)$, and $\beta\in(0,1)$. Do the following: Step 0: Choose $\gamma>0$ such that $I-\gamma A$ is positive-definite, calculate $Q:=(I-\gamma A)^{-1}$, $c:=\gamma Qb$, $P:=Q-I$, define the function $\psi$ as in (5.55), choose a starting point $u^{0}\in\mathbb{R}^{n}$, and set $k:=0$. Step 1: If $\nabla\psi(u^{k})=0$, then stop. Otherwise, set $v^{k}:=\text{\rm Prox}_{\gamma g}(u^{k})$. Step 2: Find $d^{k}\in\mathbb{R}^{n}$ such that $\frac{1}{\gamma}\big{(}-\nabla\psi(u^{k})-Pd^{k}\big{)}\in\partial^{2}g\Big{(}v^{k},\frac{1}{\gamma}(u^{k}-v^{k})\Big{)}\big{(}Qd^{k}+\nabla\psi(u^{k})\big{)}.$ (5.62) Step 3: Set $\tau_{k}=1$. Until Armijo’s inequality $\psi(u^{k}+\tau_{k}d^{k})\leq\psi(u^{k})+\sigma\tau_{k}\langle\nabla\psi(u^{k}),d^{k}\rangle$ is satisfied, set $\tau_{k}:=\beta\tau_{k}$. Step 4: Compute $u^{k+1}$ by $u^{k+1}:=u^{k}+\tau_{k}d^{k},\quad k=0,1,\ldots.$ Step 5: Increase $k$ by $1$ and go to Step 1. Output: $x^{k}:=Qu^{k}+c$. Note that the definitions of the second-order subdifferential (2.10) and the limiting coderivative (2.7) allow us to rewrite the implicit inclusion (5.62) for $d^{k}$ in the explicit form $\Big{(}\frac{1}{\gamma}\big{(}-\nabla\psi(u^{k})-Pd^{k}\big{)},-Qd^{k}-\nabla\psi(u^{k})\Big{)}\in N_{\text{gph}\,\partial g}\Big{(}v^{k},\frac{1}{\gamma}(u^{k}-v^{k})\Big{)}.$ (5.63) Explicit expressions for the sequences $\\{v^{k}\\}$ and $\\{d^{k}\\}$ in Algorithm 5.10 depend on given structures of the regularizers $g$, which are efficiently specified in applied models of machine learning and statistics; see, e.g., the above discussions and those presented in Section 6.
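For readers who want to experiment, the following Python sketch runs Algorithm 5.10 on problem (5.48) with the $\ell_{1}$-regularizer $g(x)=\lambda\|x\|_{1}$, so that $\text{\rm Prox}_{\gamma g}$ is componentwise soft thresholding. It is a simplified illustration under an explicit assumption: Step 2 is realized by solving the linear system $(Q-D)d=-\nabla\psi(u)$, where $D$ is a diagonal $0/1$ Clarke Jacobian of the proximal mapping; this replaces the general coderivative inclusion (5.62) and is adequate only in this particular structured setting, not in general.

```python
import numpy as np

def gdnm_quadratic_l1(A, b, lam, sigma=0.25, beta=0.5, tol=1e-10, max_iter=100):
    """Sketch of Algorithm 5.10 for g(x) = lam*||x||_1 with A positive-definite."""
    n = len(b)
    gamma = 0.5 / np.linalg.norm(A, 2)      # Step 0: makes I - gamma*A positive-definite
    Q = np.linalg.inv(np.eye(n) - gamma * A)
    c = gamma * (Q @ b)
    P = Q - np.eye(n)

    def prox(u):                            # Prox_{gamma g}: soft thresholding
        return np.sign(u) * np.maximum(np.abs(u) - gamma * lam, 0.0)

    def psi(u):                             # formula (5.55)
        v = prox(u)
        return (0.5 * u @ (P @ u) + c @ u
                + gamma * lam * np.abs(v).sum() + 0.5 * np.sum((u - v) ** 2))

    u = np.zeros(n)
    for _ in range(max_iter):
        grad = Q @ u - prox(u) + c          # formula (5.56)
        if np.linalg.norm(grad) <= tol:     # Step 1 with tolerance, cf. Remark 5.11
            break
        # Step 2 (simplified): Clarke-Jacobian system instead of inclusion (5.62)
        D = np.diag((np.abs(u) > gamma * lam).astype(float))
        d = np.linalg.solve(Q - D, -grad)
        tau = 1.0                           # Step 3: Armijo backtracking
        while psi(u + tau * d) > psi(u) + sigma * tau * (grad @ d):
            tau *= beta
        u = u + tau * d                     # Step 4
    return Q @ u + c                        # Output: x = Qu + c

A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 1.0])
x = gdnm_quadratic_l1(A, b, lam=0.1)
print(x)
```

The returned point can be checked against the stationarity criterion (5.64), which holds at the exact solution for every choice of $\gamma>0$.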
###### Remark 5.11 (stopping criterion). Note that $\bar{x}$ is a solution to (5.48) if and only if this point satisfies the stationary equation (5.57). In order to approximate the solution $\bar{x}$, we choose the termination/stopping criterion $\big{\|}x-\text{\rm Prox}_{\gamma g}\big{(}x-\gamma(Ax+b)\big{)}\big{\|}\leq\varepsilon$ (5.64) with a given tolerance parameter $\varepsilon>0$. The stopping criterion (5.64) is clearly equivalent to the condition $\|\nabla\psi(u)\|\leq\varepsilon$, where $u:=x-\gamma(Ax+b)=Q^{-1}(x-c)$, and where $\psi$ is defined in (5.55). Therefore, in practice the stopping criterion in Step 1 of Algorithm 5.10 can be replaced by the simpler one $\|\nabla\psi(u^{k})\|\leq\varepsilon$. To proceed with establishing conditions for global convergence of Algorithm 5.10, we need to employ yet another notion of generalized second-order differentiability taken from [62, Chapter 13]. First recall that a mapping $h\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ is semidifferentiable at $\bar{x}$ if there exists a continuous and positively homogeneous operator $H\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ such that $h(x)=h(\bar{x})+H\big{(}x-\bar{x}\big{)}+o(\|x-\bar{x}\|)\quad\text{for all }\;x\;\text{ near }\;\bar{x}.$ Given $\varphi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ with $\bar{x}\in\mbox{\rm dom}\,\varphi$, consider the family of second-order finite differences $\Delta^{2}_{\tau}\varphi(\bar{x},v)(u):=\frac{\varphi(\bar{x}+\tau u)-\varphi(\bar{x})-\tau\langle v,u\rangle}{\frac{1}{2}\tau^{2}}$ and define the second subderivative of $\varphi$ at $\bar{x}$ for $v\in\mathbb{R}^{n}$ and $w\in\mathbb{R}^{n}$ by $d^{2}\varphi(\bar{x},v)(w):=\liminf_{\tau\downarrow 0\atop u\to w}\Delta^{2}_{\tau}\varphi(\bar{x},v)(u).$ Then $\varphi$ is said to be twice epi-differentiable at $\bar{x}$ for $v$ if for every $w\in\mathbb{R}^{n}$ and every choice of $\tau_{k}\downarrow 0$ there exists a sequence $w^{k}\to w$ such that 
$\frac{\varphi(\bar{x}+\tau_{k}w^{k})-\varphi(\bar{x})-\tau_{k}\langle v,w^{k}\rangle}{\frac{1}{2}\tau_{k}^{2}}\to d^{2}\varphi(\bar{x},v)(w)\;\mbox{ as }\;k\to\infty.$ Twice epi-differentiability has been recognized as an important property in second-order variational analysis with numerous applications to optimization; see the aforementioned monograph by Rockafellar and Wets and the recent papers [40, 41, 42] developing a systematic approach to verify epi-differentiability via parabolic regularity, which is a major second-order property of sets and extended-real-valued functions. The next theorem provides verifiable conditions on the matrix $A$ and the function $g$ to run the GDNM Algorithm 5.10 for solving quadratic composite optimization problems (5.48). ###### Theorem 5.12 (convergence rate of GDNM for problems of quadratic composite optimization). Considering the optimization problem (5.48), suppose that $A$ is positive- definite. Then the following assertions hold: * (i) Algorithm 5.10 is well-defined, and the sequence of its iterates $\\{u^{k}\\}$ globally R-linearly converges to some $\bar{u}$ as $k\to\infty$. * (ii) The vector $\bar{x}:=Q\bar{u}+c$ is a tilt-stable local minimizer of the cost function $\varphi$ in (5.48), and $\bar{x}$ is the unique solution to (5.48). Furthermore, the rate of convergence of $\\{u^{k}\\}$ is at least Q-superlinear if the mapping $\partial g$ is semismooth∗ at $(\bar{x},\bar{v})$ with $\bar{v}:=-A\bar{x}-b$, and if one of the following two conditions is satisfied: * (a) $\sigma\in\left(0,1/(2\ell\kappa)\right)$, where $\ell:=1+\|Q\|$ and $\kappa:=1/\lambda_{\text{\rm min}}(P)$. * (b) $g$ is twice epi-differentiable at $\bar{x}$ for $\bar{v}$. [Proof.] Lemma 5.7 and Lemma 5.9 tell us that applying Algorithm 5.10 to solve the quadratic composite optimization problem (5.48) is equivalent to applying Algorithm 3.2 to solve the ${\cal C}^{1,1}$ optimization problem (5.54). 
We split the proof of the theorem into the following claims: Claim 1: The function $\psi$ from (5.54) satisfies Assumptions 1 and 2. Indeed, Lemma 5.7 ensures that $\psi$ is strongly convex with modulus $\lambda_{\text{\rm min}}(P)>0$, and thus Assumption 1 holds due to [9, Theorem 5.1]. Moreover, the strong convexity of $\psi$ clearly implies that for an arbitrary vector $u^{0}\in\mathbb{R}^{n}$ the level set $\Omega:=\big{\\{}u\in\mathbb{R}^{n}\;\big{|}\;\psi(u)\leq\psi(u^{0})\big{\\}}$ is bounded, and so Assumption 2 holds for the function $\psi$. Claim 2: Both statements (i) and (ii) of the theorem are satisfied. To proceed, we employ Claim 1 together with Theorems 3.7 and 4.2 to conclude that Algorithm 5.10 is well-defined, and that the sequence of its iterates $\\{u^{k}\\}$ globally R-linearly converges to some $\bar{u}$ as $k\to\infty$. Then Lemma 5.8 tells us that the vector $\bar{x}=Q\bar{u}+c$ is a solution to (5.48). The uniqueness and tilt stability of $\bar{x}$ follow immediately from the strong convexity of $\varphi$. Claim 3: The convergence rate of the sequence $\\{u^{k}\\}$ is at least Q-superlinear provided that $\partial g$ is semismooth∗ at $(\bar{x},\bar{v})$ and that one of the two conditions (a) and (b) is satisfied. Indeed, suppose that $\partial g$ is semismooth∗ at $(\bar{x},\bar{v})$. Then we deduce from [21, Proposition 6] that $\text{\rm Prox}_{\gamma g}$ is semismooth∗ at $\bar{x}-\gamma(A\bar{x}+b)$. Thus we obtain that the mapping $\nabla\psi(u)=Qu-\text{\rm Prox}_{\gamma g}(u)+c$ is semismooth∗ at $\bar{u}$ by employing [26, Proposition 3.6]. Assume now that condition (a) is satisfied. Then Lemma 5.7 tells us that $\ell$ is a Lipschitz constant of $\nabla\psi$ around $\bar{u}$, and that $\bar{u}$ is a tilt-stable local minimizer of $\psi$ with modulus $\kappa$. Thus the claimed assertion follows in case (a) directly from Theorem 4.5. 
Assuming by (b) that $g$ is twice epi-differentiable at $\bar{x}$ for $\bar{v}$, we deduce from [62, Theorem 13.40] that the subgradient mapping $\partial g$ is proto-differentiable at $(\bar{x},\bar{v})$. Using [21, Corollary 8], we conclude that $\text{\rm Prox}_{\gamma g}$ is directionally differentiable at $\bar{x}-\gamma(A\bar{x}+b)$, which yields in turn the directional differentiability of $\nabla\psi$ at $\bar{u}$. Finally, Theorem 4.5 allows us to conclude that the sequence $\\{u^{k}\\}$ Q-superlinearly converges to $\bar{u}$ as $k\to\infty$. It is highly desirable to obtain a counterpart of Theorem 5.12 on global convergence of Algorithm 5.10 under merely positive-semidefiniteness of the matrix $A$. However, we cannot do it at this stage of development since the function $\psi$ from (5.54) may not satisfy Assumption 1. A natural idea to overcome such a challenge is to regularize the original problem via approximating it by a sequence of well-behaved problems. Perhaps the simplest way to realize this idea is to employ the classical Tikhonov regularization. To this end, consider in the setting of Lemma 5.8 the following family of optimization problems depending on the parameter $\varepsilon>0$: minimize $\displaystyle\psi_{\varepsilon}(u):=\frac{1}{2}\langle P_{\varepsilon}u,u\rangle+\langle c,u\rangle+\gamma e_{\gamma}g(u)\;\mbox{ subject to }\;u\in\mathbb{R}^{n},$ (5.65) where $P_{\varepsilon}:=P+\varepsilon I$. The next proposition discusses the relationship between (5.65) and (5.48). ###### Proposition 5.13 (Tikhonov regularization). Assume that (5.48) has a solution, and for each $\varepsilon>0$ consider the optimization problem (5.65). If $\bar{u}(\varepsilon)$ is a solution to (5.65), then we have the following assertions: * (i) The limit $\displaystyle\bar{u}:=\lim_{\varepsilon\to 0}\bar{u}(\varepsilon)$ exists being a solution to (5.54). * (ii) The vector $\bar{x}:=Q\bar{u}+c$ is a solution to (5.48). [Proof.] 
Observe that the optimization problem (5.54) is equivalent to the variational inequality problem VI($\mathbb{R}^{n},F$) written as: find a vector $u\in\mathbb{R}^{n}$ such that ${\langle F(u),z-u\rangle\geq 0\;\mbox{ for all }\;z\in\mathbb{R}^{n},}$ where $F:=\nabla\psi$. Since $\bar{u}(\varepsilon)$ is a solution to (5.65), we get that the family of approximate solutions $\\{\bar{u}(\varepsilon)\;|\;\varepsilon>0\\}$ is the Tikhonov trajectory of VI($\mathbb{R}^{n},F$); see, e.g., [20, Equation (12.2.2)]. It follows from the convexity of $\psi$ that $\nabla\psi\colon\mathbb{R}^{n}\to\mathbb{R}^{n}$ is a monotone operator. Since the optimization problem (5.48) has a solution, the set of solutions for VI($\mathbb{R}^{n},F$) is nonempty by Lemma 5.8. Using [20, Theorem 12.2.3], we have that the limit $\displaystyle\bar{u}=\lim_{\varepsilon\to 0}\bar{u}(\varepsilon)$ exists and solves (5.54). Finally, assertion (ii) follows immediately from Lemma 5.8. ###### Remark 5.14 (generalized Newton algorithm based on Tikhonov regularization). Proposition 5.13 provides a precise relationship between solutions to (5.48) and solutions to (5.65). This plays a crucial role in solving (5.48) without assuming the positive-definiteness of $A$. Moreover, Proposition 5.13 motivates us to develop a generalized Newton-type algorithm based on the Tikhonov regularization to solve the class of optimization problems (5.48), where $A$ is merely positive-semidefinite. We will pursue this issue in our future research. ## 6 Solving Lasso Problems and Numerical Experiments This section is devoted to specifying our generalized damped Newton method (GDNM) developed in Section 5 for the basic class of Lasso problems, where Lasso stands for the Least Absolute Shrinkage and Selection Operator. 
Based on the obtained specification of GDNM for solving Lasso problems, we conduct numerical experiments using our algorithm and compare the computations with the performance of some major first-order and second-order algorithms applied to solving this class of problems of composite quadratic optimization. The basic Lasso problem, known also as the $\ell^{1}$-regularized least squares optimization problem, was introduced by Tibshirani [64] and has since been extensively investigated and applied to various issues in statistics, machine learning, image processing, etc. This problem is formulated as: $\displaystyle\text{minimize }\;\varphi(x):=\frac{1}{2}\|Ax-b\|_{2}^{2}+\mu\|x\|_{1}\quad\text{ subject to }\;x\in\mathbb{R}^{n},$ (6.66) where $A$ is an $m\times n$ matrix, $\mu>0$, $b\in\mathbb{R}^{m}$, and where $\|x\|_{1}:=\sum_{i=1}^{n}|x_{i}|,\quad\|x\|_{2}:=\left(\sum_{i=1}^{n}|x_{i}|^{2}\right)^{1/2}\quad\text{for all }\;x=(x_{1},x_{2},\ldots,x_{n}).$ There are other classes of Lasso problems modeled in the quadratic composite form $\displaystyle\text{minimize }\;\varphi(x):=\frac{1}{2}\|Ax-b\|_{2}^{2}+g(x),\quad x\in\mathbb{R}^{n},$ (6.67) where $A$ is an $m\times n$ matrix, $b\in\mathbb{R}^{m}$ and $g\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is a given regularizer. More specifically, let us list several well-recognized versions of (6.67) in addition to (6.66): * (i) The elastic net regularized problem, or the Lasso elastic net problem [29] with $g(x):=\mu_{1}\|x\|_{1}+\mu_{2}\|x\|_{2}^{2},$ where $\mu_{1}$ and $\mu_{2}$ are given positive parameters. * (ii) The clustered Lasso problem [63] with $g(x):=\mu_{1}\|x\|_{1}+\mu_{2}\sum_{1\leq i\leq j\leq n}|x_{i}-x_{j}|,$ where $\mu_{1}$ and $\mu_{2}$ are given positive parameters. 
* (iii) The fused regularized problem, or the fused Lasso problem [65] with $g(x):=\mu_{1}\|x\|_{1}+\mu_{2}\|Bx\|_{1},$ where $\mu_{1}$ and $\mu_{2}$ are given positive numbers, and where $B$ is the $(n-1)\times n$ matrix $Bx:=[x_{1}-x_{2},x_{2}-x_{3},\ldots,x_{n-1}-x_{n}]\quad\text{for all }\;x\in\mathbb{R}^{n}.$ Although the developed Algorithm 5.10 allows us to efficiently solve all these Lasso problems, we concentrate here on numerical results for the basic one (6.66). It is easy to see that the Lasso problem (6.66) belongs to the quadratic composite class (5.48). Indeed, we represent (6.66) as minimizing the nonsmooth convex function $\varphi(x):=f(x)+g(x)$, where $f(x):=\frac{1}{2}\langle\bar{A}x,x\rangle+\langle\bar{b},x\rangle+\bar{\alpha}\quad\text{and }\;g(x):=\mu\|x\|_{1}$ (6.68) with $\bar{A}:=A^{*}A$, $\bar{b}:=-A^{*}b$, and $\bar{\alpha}:=\frac{1}{2}\|b\|^{2}$, and where the matrix $\bar{A}=A^{*}A$ is positive-semidefinite. Observe further that the Lasso problem (6.66) always admits an optimal solution; see [64]. In order to apply Algorithm 5.10 to solving problem (6.66), we begin with providing explicit calculations of the first-order and second-order subdifferentials of the regularizer $g(x)=\mu\|x\|_{1}$ together with the proximal mapping associated with this function. Using definition (5.52), it is not hard to compute the proximal mapping of $g(x)=\mu\|x\|_{1}$ by $\big{(}\text{\rm Prox}_{\gamma g}(x)\big{)}_{i}=\begin{cases}x_{i}-\mu\gamma&\text{if}\quad x_{i}>\mu\gamma,\\\ 0&\text{if}\quad-\mu\gamma\leq x_{i}\leq\mu\gamma,\\\ x_{i}+\mu\gamma&\text{if}\quad x_{i}<-\mu\gamma.\end{cases}$ (6.69) Now we compute the first-order and second-order subdifferentials of this function. ###### Proposition 6.1 (subdifferential calculations). 
For the regularizer $g(\cdot)=\mu\|\cdot\|_{1}$ in (6.66) we have the subgradient mapping $\partial g(x)=\left\\{v\in\mathbb{R}^{n}\;\bigg{|}\;\begin{array}[]{@{}cc@{}}v_{j}=\mu\,\text{\rm sgn}(x_{j}),\;x_{j}\neq 0,\\\ v_{j}\in[-\mu,\mu],\;x_{j}=0\end{array}\right\\}\quad\text{whenever }\;x\in\mathbb{R}^{n}.$ (6.70) Further, for each $(x,y)\in\text{\rm gph}\,\partial g$ and $v=(v_{1},\ldots,v_{n})\in\mathbb{R}^{n}$, the second-order subdifferential of $g$ is computed by the formula $\partial^{2}g(x,y)(v)=\Big{\\{}w\in\mathbb{R}^{n}\;\Big{|}\;\Big{(}\frac{1}{\mu}w_{i},-v_{i}\Big{)}\in G\Big{(}x_{i},\frac{1}{\mu}y_{i}\Big{)},\;i=1,\ldots,n\Big{\\}},$ (6.71) where the mapping $G\colon\mathbb{R}^{2}\rightrightarrows\mathbb{R}^{2}$ is defined by $G(t,p):=\begin{cases}\\{0\\}\times\mathbb{R}&\text{\rm if}\quad t\neq 0,\;p\in\\{-1,1\\},\\\ \mathbb{R}\times\\{0\\}&\text{\rm if}\quad t=0,\;p\in(-1,1),\\\ (\mathbb{R}_{+}\times\mathbb{R}_{-})\cup(\\{0\\}\times\mathbb{R})\cup(\mathbb{R}\times\\{0\\})&\text{\rm if}\quad t=0,\;p=-1,\\\ (\mathbb{R}_{-}\times\mathbb{R}_{+})\cup(\\{0\\}\times\mathbb{R})\cup(\mathbb{R}\times\\{0\\})&\text{\rm if}\quad t=0,\;p=1,\\\ \emptyset&\text{\rm otherwise}.\end{cases}$ (6.72) [Proof.] These computations follow from [34, Proposition 8.1]. The next theorem provides an efficient condition on (6.66) expressed entirely in terms of its given data to ensure the global superlinear convergence of Algorithm 5.10 for solving (6.66). ###### Theorem 6.2 (solving Lasso). Considering the Lasso problem (6.66), suppose that the matrix $A^{*}A$ is positive-definite. Then we have: * (i) Algorithm 5.10 is well-defined, and the sequence of its iterates $\\{y^{k}\\}$ globally converges at least Q-superlinearly to $\bar{y}$ as $k\to\infty$. * (ii) The vector $\bar{x}:=Q\bar{y}+c$ is the unique solution to (6.66) and a tilt-stable local minimizer of the cost function $\varphi$. [Proof.] 
It follows from (6.70) that the graph of $\partial g$ is the union of finitely many closed convex sets, and hence $\partial g$ is semismooth∗ on its graph. Furthermore, $g$ is proper, convex, and piecewise linear-quadratic on $\mathbb{R}^{n}$. Then [62, Proposition 13.9] ensures that $g$ is twice epi-differentiable on $\mathbb{R}^{n}$. Applying Theorem 5.12, we arrive at all the conclusions of Theorem 6.2. To run Algorithm 5.10, we need to explicitly determine the sequences $\\{v^{k}\\}$ and $\\{d^{k}\\}$ generated by this algorithm. Using (6.69), (6.70), and (6.71) gives us the following expressions for all the components $i=1,\ldots,n$ of these vectors: $\left(v^{k}\right)_{i}=\begin{cases}y_{i}-\mu\gamma&\text{if}\quad y_{i}>\mu\gamma,\\\ 0&\text{if}\quad-\mu\gamma\leq y_{i}\leq\mu\gamma,\\\ y_{i}+\mu\gamma&\text{if}\quad y_{i}<-\mu\gamma,\end{cases}$ $\begin{cases}\big{(}Pd^{k}+\nabla\psi(y^{k})\big{)}_{i}=0&\text{if}\quad\left(v^{k}\right)_{i}\neq 0,\\\ \big{(}Qd^{k}+\nabla\psi(y^{k})\big{)}_{i}=0&\text{if}\quad\left(v^{k}\right)_{i}=0.\end{cases}$ ###### Remark 6.3 (Newton descent directions for Lasso). Let us emphasize that the algorithm directions $d^{k}$ can be computed through solving a system of linear equations for each $k\in\mathbb{N}$. Indeed, for the sequence $\\{d^{k}\\}$ generated by Algorithm 5.10, denote by $P_{i}$ and $Q_{i}$ the $i$-th rows of the matrices $P$ and $Q$, respectively. Define $(X^{k})_{i}:=\begin{cases}P_{i}&\text{if}\quad\left(v^{k}\right)_{i}\neq 0,\\\ Q_{i}&\text{if}\quad\left(v^{k}\right)_{i}=0.\end{cases}$ Then $d^{k}$ is a solution to the system of linear equations $X^{k}d=-\nabla\psi(y^{k})$. Now we are in a position to conduct numerical experiments for solving the Lasso problem (6.66) by using our generalized damped Newton method (GDNM) via Algorithm 5.10. 
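Putting the pieces together, the following Python sketch specializes Algorithm 5.10 to the Lasso (6.66) via the soft-thresholding prox (6.69) and the row-selection Newton system of Remark 6.3. This is a dense-algebra illustration with arbitrarily chosen random data, not the authors' MATLAB code; the step-size rule $\gamma=0.9/\lambda_{\max}(A^{*}A)$ is our own simple way of making $I-\gamma A^{*}A$ positive-definite.

```python
import numpy as np

def soft_threshold(u, t):
    # Prox of t*||.||_1, cf. (6.69): componentwise shrinkage.
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def gdnm_lasso(A, b, mu, sigma=0.25, beta=0.5, eps=1e-12, max_iter=200):
    # Sketch of Algorithm 5.10 for the Lasso (6.66) via the reduction (6.68).
    n = A.shape[1]
    Abar, bbar = A.T @ A, -A.T @ b                 # quadratic data of (6.68)
    gamma = 0.9 / np.linalg.eigvalsh(Abar).max()   # I - gamma*Abar pos.-def.
    I = np.eye(n)
    Q = np.linalg.inv(I - gamma * Abar)
    c = gamma * Q @ bbar
    P = Q - I

    def psi(u):                                    # reduced cost (5.55)
        v = soft_threshold(u, mu * gamma)
        return 0.5 * u @ P @ u + c @ u + gamma * mu * np.abs(v).sum() \
               + 0.5 * np.dot(v - u, v - u)

    u = np.zeros(n)
    for _ in range(max_iter):
        v = soft_threshold(u, mu * gamma)          # Step 1
        g = Q @ u - v + c                          # grad psi, Lemma 5.8
        if np.linalg.norm(g) <= eps:
            break
        # Step 2 via Remark 6.3: row i of X is P_i if (v^k)_i != 0, else Q_i.
        X = np.where((v != 0.0)[:, None], P, Q)
        d = np.linalg.solve(X, -g)
        tau, phi0 = 1.0, psi(u)                    # Step 3: Armijo backtracking
        while psi(u + tau * d) > phi0 + sigma * tau * (g @ d) and tau > 1e-12:
            tau *= beta
        u = u + tau * d                            # Step 4
    return Q @ u + c                               # output x = Qu + c

# Small random instance with m > n, so that A^*A is positive-definite.
rng = np.random.default_rng(1)
m, n = 60, 15
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
mu = 1e-3 * np.abs(A.T @ b).max()
x = gdnm_lasso(A, b, mu)
res = A.T @ (A @ x - b)
eta = np.linalg.norm(x - soft_threshold(x - res, mu)) \
      / (1 + np.linalg.norm(x) + np.linalg.norm(A @ x - b))
```

Note that on each region where the sign pattern of $v$ is fixed, $X$ coincides with the Hessian of $\psi$ and satisfies $X\succeq P\succ 0$, so $d$ is always a descent direction; the quantity `eta` is the relative KKT residual used to assess accuracy in the experiments.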
The obtained calculations are compared with those obtained by implementing the following highly recognized first-order and second-order algorithms: * (i) The Alternating Direction Method of Multipliers (ADMM), https://web.stanford.edu/~boyd/papers/admm/lasso/lasso.html; see [7, 22, 23]. * (ii) The Accelerated Proximal Gradient method (APG), https://github.com/bodono/apg; see [51, 52]. * (iii) The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), https://github.com/he9180/FISTA-lasso; see [4]. * (iv) The Semismooth Newton Augmented Lagrangian Method (SSNAL), https://www.polyu.edu.hk/ama/profile/dfsun/, developed in [37]. All the numerical experiments are conducted on a desktop with 10th Gen Intel(R) Core(TM) i5-10400 processor (6-Core, 12M Cache, 2.9GHz to 4.3GHz) and 16GB memory. All the codes are written in MATLAB 2016a. Our numerical experiments are conducted with the test instances $(A,b)$ in (6.66) generated randomly following the MATLAB commands $\displaystyle A={\rm randn}(m,n);\quad b={\rm randn}(m,1).$ In order to run Algorithm 5.10 for solving Lasso problems, the matrix $A^{*}A$ needs to be positive-definite due to Theorem 6.2. Since $\text{\rm rank}(A^{*}A)=\text{\rm rank}A$, the matrix $A^{*}A$ is singular if $m<n$. Therefore, a necessary condition for the positive-definiteness of $A^{*}A$ is that $m\geq n$, and thus we only test datasets in which the matrix $A$ has at least as many rows as columns. Note that the models with $m\geq n$ appear in practical applications; see, e.g., [19] with applications to diabetes studies and [4] with applications to image processing. The GDNM code is publicly available at https://github.com/he9180/GDNM/. The initial points in all the experiments are set to be the zero vector. 
The following relative KKT residual $\eta_{k}$ in (6.73) suggested in [37] is used to measure the accuracy of an approximate optimal solution $x^{k}$ for (6.66): $\eta_{k}:=\frac{\|x^{k}-\text{\rm Prox}_{\mu\|\cdot\|_{1}}(x^{k}-A^{*}(Ax^{k}-b))\|}{1+\|x^{k}\|+\|Ax^{k}-b\|}.$ (6.73) We stop the algorithms in our experiments when either the condition $\eta_{k}<10^{-6}$ is satisfied, or they reach the maximum computation time of $6000$ seconds. For testing purposes, the regularization parameter $\mu$ in the Lasso problem (6.66) is chosen as $10^{-3}$ or as in [37], i.e., $\mu=10^{-3}\|A^{*}b\|_{\infty},\quad\text{where }\;\|x\|_{\infty}:=\max\\{|x_{1}|,|x_{2}|,\ldots,|x_{n}|\\},x=(x_{1},x_{2},\ldots,x_{n}).$ (6.74) The achieved numerical results are presented in Table 1 and Table 2. In these tables, “CPU time” stands for the time needed to achieve the prescribed accuracy of approximate solutions (the smaller the better). As we can see from the results presented in Tables 1 and 2, GDNM is more efficient in the cases where $m\gg n$ and $n$ is not large, which confirms the need for the positive-definiteness of the matrix $A^{*}A$ for the superlinear convergence. On the other hand, ADMM performs well in all tests, while SSNAL is more efficient for large-scale datasets. 
Table 1: $\mu=10^{-3}$. Each entry shows the number of iterations / CPU time in seconds.

| m | n | SSNAL | GDNM | ADMM | APG | FISTA |
|---|---|---|---|---|---|---|
| 1024 | 256 | 4 / 0.13 | 4 / 0.09 | 9 / 0.02 | 42 / 0.04 | 133 / 0.25 |
| 1024 | 1024 | 30 / 10.68 | 22 / 1.23 | 12192 / 35.71 | 172326 / 485.51 | 190461 / 2046.34 |
| 4096 | 256 | 4 / 0.14 | 4 / 0.11 | 9 / 0.03 | 24 / 0.05 | 44 / 0.34 |
| 4096 | 4096 | 1306 / 4163.54 | 281 / 6000.00 | 12571 / 775.59 | 114879 / 6000.00 | 38912 / 6000.00 |

Table 2: $\mu=10^{-3}\left\|A^{*}b\right\|_{\infty}$. Each entry shows the number of iterations / CPU time in seconds.

| m | n | SSNAL | GDNM | ADMM | APG | FISTA |
|---|---|---|---|---|---|---|
| 1024 | 256 | 4 / 0.62 | 5 / 0.11 | 123 / 0.03 | 45 / 0.03 | 133 / 0.24 |
| 1024 | 1024 | 17 / 2.38 | 172 / 10.72 | 174 / 0.56 | 2638 / 7.41 | 26431 / 273.84 |
| 4096 | 256 | 4 / 0.22 | 4 / 0.12 | 248 / 0.15 | 26 / 0.05 | 44 / 0.32 |
| 4096 | 4096 | 18 / 149.57 | 355 / 668.32 | 343 / 23.68 | 1797 / 95.60 | 32412 / 5121.51 |

## 7 Concluding Remarks and Further Research

This paper proposes and develops new globally convergent algorithms of the damped Newton type to solve some classes of nonsmooth optimization problems addressing minimization of ${\cal C}^{1,1}$ objectives and problems of quadratic composite optimization with extended-real-valued regularizers, which include nonsmooth problems of constrained optimization. We verify well-posedness of the proposed algorithms and their linear and superlinear convergence under rather nonrestrictive assumptions. Our approach is based on advanced machinery of second-order variational analysis and generalized differentiation. The obtained results are applied to some classes of optimization problems that arise in machine learning, statistics, and related areas, with an efficient implementation for solving the well-recognized Lasso problems. 
The numerical experiments conducted to solve this important class of nonsmooth Lasso problems by the suggested algorithm are compared in detail with the corresponding calculations by some other first-order and second-order algorithms. Our future research includes efficient calculations of second-order subdifferentials and proximal mappings used in this paper for broader classes of convex and nonconvex problems with further applications to practically important models of machine learning, statistics, etc. We also intend to establish a global superlinear convergence of our damped generalized Newton algorithms for problems of quadratic composite optimization with extended-real-valued regularizers without the positive-definiteness requirement on the quadratic term. ## Acknowledgments The authors are grateful to two anonymous referees for their helpful remarks and comments that allowed us to significantly improve the original manuscript. Our thanks also go to Alexey Izmailov for his useful remarks on the algorithm developed in Section 3 and to Michal Kočvara and Defeng Sun for helpful discussions on numerical experiments to solve Lasso problems. ## References * [1] Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edition. Springer, New York (2017) * [2] Beck, A.: Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB. SIAM, Philadelphia, PA (2014) * [3] Beck, A.: First-Order Methods in Optimization. SIAM, Philadelphia, PA (2017) * [4] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009) * [5] Becker, S., Fadili, M.J.: A quasi-Newton proximal splitting method. Adv. Neural Inform. Process. Syst. 25, 2618–2626 (2012) * [6] Bonnans, J.F.: Local analysis of Newton-type methods for variational inequalities and nonlinear programming. Appl. Math. Optim. 
29, 161–186 (1994) * [7] Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learning 3, 1–122 (2010) * [8] Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge, UK (2004) * [9] Chieu, N.H., Chuong, T.D., Yao, J.-C., Yen, N.D.: Characterizing convexity of a function by its Fréchet and limiting second-order subdifferentials. Set-Valued Var. Anal. 19, 75–96 (2011) * [10] Chieu, N.H., Lee, G.M., Yen, N.D.: Second-order subdifferentials and optimality conditions for ${\cal C}^{1}$-smooth optimization problems. Appl. Anal. Optim. 1, 461–476 (2017) * [11] Chieu, N.M., Hien, L.V., Nghia, T.T.A.: Characterization of tilt stability via subgradient graphical derivative with applications to nonlinear programming. SIAM J. Optim. 28, 2246–2273 (2018) * [12] Colombo, G., Henrion, R., Hoang, N.D., Mordukhovich, B.S.: Optimal control of the sweeping process over polyhedral controlled sets. J. Diff. Eqs. 260, 3397–3447 (2016) * [13] Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Bauschke, H.H. et al. (eds) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011) * [14] Ding, C., Sun, D., Ye, J.J.: First-order optimality conditions for mathematical programs with semidefinite cone complementarity constraints. Math. Program. 147, 539–579 (2014) * [15] Dontchev, A.L., Rockafellar, R.T.: Characterizations of strong regularity for variational inequalities over polyhedral convex sets. SIAM J. Optim. 6, 1087–1105 (1996) * [16] Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings: A View from Variational Analysis, 2nd edition. Springer, New York (2014) * [17] Drusvyatskiy, D., Lewis, A.S.: Tilt stability, uniform quadratic growth, and strong metric regularity of the subdifferential. SIAM J. Optim. 
23, 256–267 (2013) * [18] Drusvyatskiy, D., Mordukhovich, B.S., Nghia, T.T.A.: Second-order growth, tilt stability, and metric regularity of the subdifferential. J. Convex Anal. 21, 1165–1192 (2014) * [19] Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. Ann. Statist. 32, 407–499 (2004) * [20] Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, New York (2003) * [21] Friedlander, M.P., Goodwin, A., Hoheisel, T.: From perspective maps to epigraphical projections. arXiv:2102.06809 (2021) * [22] Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Math. Appl. 2, 17–40 (1976) * [23] Glowinski, R., Marroco, A.: Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue Française d'Automatique, Informatique et Recherche Opérationnelle 9, 41–76 (1975) * [24] Gfrerer, H.: On directional metric regularity, subregularity and optimality conditions for nonsmooth mathematical programs. Set-Valued Var. Anal. 21, 151–176 (2013) * [25] Gfrerer, H., Mordukhovich, B.S.: Complete characterization of tilt stability in nonlinear programming under weakest qualification conditions. SIAM J. Optim. 25, 2081–2119 (2015) * [26] Gfrerer, H., Outrata, J.V.: On a semismooth∗ Newton method for solving generalized equations. SIAM J. Optim. 31, 489–517 (2021) * [27] Ginchev, I., Mordukhovich, B.S.: On directionally dependent subdifferentials. C. R. Acad. Bulg. Sci. 64, 497–508 (2011) * [28] Hang, N.T.V., Mordukhovich, B.S., Sarabi, M.E.: Augmented Lagrangian method for second-order conic programs under second-order sufficiency. J. Global Optim. 82, 51–81 (2022) * [29] Hastie, T., Zou, H.: Regularization and variable selection via the elastic net. J. Roy. Statist. Soc. Ser. 
B 67, 301–320 (2005) * [30] Henrion, R., Mordukhovich, B.S., Nam, N.M.: Second-order analysis of polyhedral systems in finite and infinite dimensions with applications to robust stability of variational inequalities. SIAM J. Optim. 20, 2199–2227 (2010) * [31] Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969) * [32] Hsieh, C.J., Chang, K.W., Lin, C.J.: A dual coordinate descent method for large-scale linear SVM. Proceedings 25th International Conference on Machine Learning, pp. 408–415. Helsinki, Finland (2008) * [33] Izmailov, A.F., Solodov, M.V.: Newton-Type Methods for Optimization and Variational Problems. Springer, New York (2014) * [34] Khanh, P.D., Mordukhovich, B.S., Phat, V.T.: A generalized Newton method for subgradient systems. arXiv:2009.10551 (2020) * [35] Klatte, D., Kummer, B.: Nonsmooth Equations in Optimization. Regularity, Calculus, Methods and Applications. Kluwer Academic Publishers, Dordrecht, The Netherlands (2002) * [36] Lee, J.D., Sun, Y., Saunders, M.A.: Proximal Newton-type methods for minimizing composite functions. SIAM J. Optim. 24, 1420–1443 (2014) * [37] Li, X., Sun, D., Toh, K.-C.: A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems. SIAM J. Optim. 28, 433–458 (2018) * [38] Lichman, M.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, CA (2013) * [39] Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979) * [40] Mohammadi, A., Mordukhovich, B.S., Sarabi, M.E.: Variational analysis of composite models with applications to continuous optimization. Math. Oper. Res. DOI: 10.1287/moor.2020.1074 (2020) * [41] Mohammadi, A., Mordukhovich, B.S., Sarabi, M.E.: Parabolic regularity in geometric variational analysis. Trans. Amer. Math. Soc. 
374, 1711–1763 (2021) * [42] Mohammadi, A., Sarabi, M.E.: Twice epi-differentiability of extended-real-valued functions with applications in composite optimization. SIAM J. Optim. 30, 2379–2409 (2020) * [43] Mordukhovich, B.S.: Sensitivity analysis in nonsmooth optimization. In: Field, D.A., Komkov, V. (eds) Theoretical Aspects of Industrial Design, pp. 32–46. SIAM Proc. Appl. Math. 58. Philadelphia, PA (1992) * [44] Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory, II: Applications. Springer, Berlin (2006) * [45] Mordukhovich, B.S.: Variational Analysis and Applications. Springer, Cham, Switzerland (2018) * [46] Mordukhovich, B.S., Nghia, T.T.A.: Second-order characterizations of tilt stability with applications to nonlinear programming. Math. Program. 149, 83–104 (2015) * [47] Mordukhovich, B.S., Outrata, J.V.: On second-order subdifferentials and their applications. SIAM J. Optim. 12, 139–169 (2001) * [48] Mordukhovich, B.S., Rockafellar, R.T.: Second-order subdifferential calculus with applications to tilt stability in optimization. SIAM J. Optim. 22, 953–986 (2012) * [49] Mordukhovich, B.S., Sarabi, M.E.: Generalized Newton algorithms for tilt-stable minimizers in nonsmooth optimization. SIAM J. Optim. 31, 1184–1214 (2021) * [50] Mordukhovich, B.S., Yuan, X., Zheng, S., Zhang, J.: A globally convergent proximal Newton-type method in nonsmooth convex optimization. arXiv:2011.08166 (2020) * [51] Nesterov, Yu.: A method of solving a convex programming problem with convergence rate $\mathcal{O}(1/k^{2})$. Soviet Math. Dokl. 27, 372–376 (1983) * [52] Nesterov, Yu.: Lectures on Convex Optimization, 2nd edition. Springer, Cham, Switzerland (2018) * [53] Nocedal, J., Wright, S.: Numerical Optimization. Springer, New York (2006) * [54] Outrata, J.V., Sun, D.: On the coderivative of the projection operator onto the second-order cone. Set-Valued Anal. 
16, 999–1014 (2008) * [55] Pelckmans, K., De Brabanter, J., De Moor, B., Suykens, J.A.K.: Convex clustering shrinkage. In: PASCAL Workshop on Statistics and Optimization of Clustering, pp. 1–6. London, UK (2005) * [56] Poliquin, R.A., Rockafellar, R.T.: Tilt stability of a local minimum. SIAM J. Optim. 8, 287–299 (1998) * [57] Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987) * [58] Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed) Optimization, pp. 283–298. Academic Press, New York (1969) * [59] Qi, L., Sun, J.: A nonsmooth version of Newton’s method. Math. Program. 58, 353–367 (1993) * [60] Rockafellar, R.T.: Augmented Lagrangian multiplier functions and duality in nonconvex programming. SIAM J. Control 12, 268–285 (1974) * [61] Rockafellar, R.T.: Augmented Lagrangians and hidden convexity in sufficient conditions for local optimality. http://sites.math.washington.edu/~rtr/papers/rtr256-HiddenConvexity.pdf (2020) * [62] Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998) * [63] She, Y.: Sparse regression with exact clustering. Electron. J. Stat. 4, 1055–1096 (2010) * [64] Tibshirani, R.: Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. 58, 267–288 (1996) * [65] Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., Knight, K.: Sparsity and smoothness via the fused Lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 67, 91–108 (2005) * [66] Ulbrich, M.: Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces. SIAM, Philadelphia, PA (2011) * [67] Yao, J.-C., Yen, N.D.: Coderivative calculation related to a parametric affine variational inequality. Part 1: Basic calculation. Acta Math. Vietnam. 34, 157–172 (2009)
Stripe skyrmions and skyrmion crystals

X. R. Wang1,2,∗, X. C. Hu1,2, and H. T. Wu1,2

1. Physics Department, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. 2. HKUST Shenzhen Research Institute, Shenzhen 518057, China. Correspondence and requests for materials should be addressed to X.R.W. (email<EMAIL_ADDRESS>

Skyrmions are important in topological quantum field theory, where they arise as soliton solutions of a nonlinear sigma model, and in information technology for their attractive applications. Skyrmions are believed to be circular, and the stripy spin textures that appear in the vicinity of skyrmion crystals are termed spiral, helical, and cycloid spin orders, but not skyrmions. Here we present convincing evidence that those stripy spin textures are skyrmions, “siblings” of the circular skyrmions in skyrmion crystals and “cousins” of isolated circular skyrmions. Specifically, isolated skyrmions are excitations when the skyrmion formation energy is positive. When skyrmions are instead the ground states of chiral magnetic films, their morphologies are various stripy structures. The density of the skyrmion number determines the morphology of condensed skyrmion states. At the extreme of one skyrmion in the whole sample, the skyrmion is a ramified stripe. As the skyrmion number density increases, individual skyrmion shapes gradually change from ramified stripes to rectangular stripes, and eventually to disk-like objects. At a low skyrmion number density, the natural width of the stripes is proportional to the ratio between the exchange stiffness constant and the Dzyaloshinskii-Moriya interaction coefficient. At a high skyrmion number density, skyrmion crystals are the preferred states. Our findings reveal the nature and properties of stripy spin textures, and open a new avenue for manipulating skyrmions, especially condensed skyrmions such as skyrmion crystals.
Skyrmions, originally used to describe resonance states of baryons [1], were first unambiguously observed in the form of skyrmion crystals in various chiral magnets and by various experimental techniques [2, 3, 4, 5, 6, 7]. Magnetic skyrmions are topologically non-trivial spin textures of magnetic films characterized by a skyrmion number $Q=\frac{1}{4\pi}\int\vec{m}\cdot(\partial_{x}\vec{m}\times\partial_{y}\vec{m}){\rm d}x{\rm d}y$, where $\vec{m}$ is the unit vector of magnetization. $Q$ must be an integer for an infinite magnetic film, and a spin texture with non-zero $Q$ is called a skyrmion of skyrmion number $Q$. It was isolated circular skyrmions, not skyrmion crystals, that were predicted in early theories [8, 9]. Isolated skyrmions were indeed observed later in confined structures and films [10, 11, 12, 13, 14, 15]. It is an experimental fact that skyrmion crystals form only in very narrow magnetic-field-temperature windows. Outside of these windows, stripy phases appear. The stripy phases, which can even coexist with skyrmion crystals, are in fact easier to form than a skyrmion crystal. In contrast, those stripy phases do not appear together with isolated skyrmions. Interestingly, the stripy phase was observed many years before the observation of skyrmion crystals [16]. These stripy phases are called helical, spiral, and cycloid spin orders. A one-dimensional model [9, 17, 18, 19, 20, 21] was used to describe the rotation of spins perpendicular to the stripes. To date, a holistic description of various stripy structures, especially ramified stripes and stripy mazes [22, 23, 24, 25, 26, 27, 28, 29], is lacking. The general belief is that those stripes are not skyrmions and have zero skyrmion number [30], although some race-track stripes are called merons with skyrmion number 1/2, or bimerons [31, 32, 27]. In this letter we show that the stripy magnetic textures that appear in a chiral magnetic film with Dzyaloshinskii-Moriya interaction (DMI) are actually irregular skyrmions.
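As a small, self-contained illustration (our sketch, not the authors' code), the skyrmion number defined above can be evaluated for a discretized texture using central finite differences and periodic boundaries. The test texture below is a hypothetical compact circular skyrmion, constructed only so that $|Q|\approx 1$ is known in advance:

```python
import numpy as np

def skyrmion_number(m):
    """Discrete Q = (1/4*pi) * sum m . (d_x m x d_y m) for a unit-vector
    field m of shape (Nx, Ny, 3), using central differences and periodic
    boundaries (the grid spacing cancels out of the integral)."""
    dmx = (np.roll(m, -1, axis=0) - np.roll(m, 1, axis=0)) / 2.0
    dmy = (np.roll(m, -1, axis=1) - np.roll(m, 1, axis=1)) / 2.0
    rho = np.einsum('ijk,ijk->ij', m, np.cross(dmx, dmy)) / (4.0 * np.pi)
    return rho.sum()

# Hypothetical test texture: the polar angle ramps linearly from pi at the
# centre to 0 at radius R (in cells), with azimuthal winding 1, so |Q| = 1.
N, R = 128, 40.0
i = np.arange(N) - N // 2
X, Y = np.meshgrid(i, i, indexing='ij')
r = np.hypot(X, Y)
theta = np.where(r < R, np.pi * (1.0 - r / R), 0.0)
phi = np.arctan2(Y, X)
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print(abs(skyrmion_number(m)))  # close to 1
```

The sign of $Q$ depends on the polarity and winding conventions, so only its magnitude is checked here.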
Each stripy texture has skyrmion number exactly 1. The irregular shape is due to the negative skyrmion formation energy (relative to the ferromagnetic state) when the ferromagnetic state is not the ground state. For a given system, the morphology of these skyrmions is random when the skyrmion number density is low. At extremely low density, magnetic textures are ramified stripes. The exact appearance of each pattern is very sensitive to the initial spin configuration and its dynamical path. The basic building blocks of irregular random skyrmions are stripes of well-defined width. The optimal width comes from the competition between the Heisenberg exchange energy and the DMI energy, which prefer a larger and a smaller width, respectively, in order to minimize the exchange energy cost and maximize the negative skyrmion formation energy gain. Unexpectedly, this dependence of the width on the exchange and DMI energies is opposite to that of the size of an isolated skyrmion, which increases with the DMI strength and decreases with the exchange energy [34]. We consider an ultra-thin ferromagnetic film of thickness $d$ in the $xy$ plane. The film has an exchange energy $E_{\mathrm{ex}}$ with exchange stiffness constant $A$, an interfacial DMI energy $E_{\mathrm{DM}}$ with DMI coefficient $D$, an anisotropy energy $E_{\mathrm{an}}$ with a perpendicular easy-axis anisotropy $K$, and the Zeeman energy $E_{\mathrm{Ze}}$ in a perpendicular magnetic field $H$. The total energy $E_{total}$ reads $E_{total}=E_{\mathrm{ex}}+E_{\mathrm{DM}}+E_{\mathrm{an}}+E_{\mathrm{Ze}},$ (1) where $E_{\mathrm{ex}}=Ad\iint|\nabla\vec{m}|^{2}\mathrm{d}S$, $E_{\mathrm{DM}}=Dd\iint[m_{z}\nabla\cdot\vec{m}-(\vec{m}\cdot\nabla)m_{z}]\mathrm{d}S$, $E_{\mathrm{an}}=Kd\iint(1-m_{z}^{2})\mathrm{d}S$, and $E_{\mathrm{Ze}}=\mu_{0}HM_{\text{s}}d\iint(1-m_{z})\mathrm{d}S$. $M_{\text{s}}$ is the saturation magnetization and $\mu_{0}$ is the vacuum permeability. The integration is over the whole film.
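A minimal discretized evaluation of the four energy terms above (our sketch on a square grid with periodic boundaries, not MuMax3; the function name and grid conventions are our own assumptions):

```python
import numpy as np

def total_energy(m, A, D, K, H, Ms, dx, d):
    """Energies of Eq. (1) for a texture m of shape (Nx, Ny, 3), |m| = 1:
    exchange, interfacial DMI, easy-axis anisotropy and Zeeman terms.
    Each term as written vanishes for the ferromagnetic state m_z = 1."""
    mu0 = 4e-7 * np.pi
    def ddx(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    def ddy(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
    mx, my, mz = m[..., 0], m[..., 1], m[..., 2]
    e_ex = A * sum(ddx(c) ** 2 + ddy(c) ** 2 for c in (mx, my, mz))
    e_dm = D * (mz * (ddx(mx) + ddy(my)) - mx * ddx(mz) - my * ddy(mz))
    e_an = K * (1.0 - mz ** 2)
    e_ze = mu0 * H * Ms * (1.0 - mz)
    return d * dx * dx * np.sum(e_ex + e_dm + e_an + e_ze)

# Sanity checks on uniform states (32 x 32 cells of 1 nm, d = 0.4 nm):
m_up = np.zeros((32, 32, 3)); m_up[..., 2] = 1.0   # ferromagnetic, E = 0
m_in = np.zeros((32, 32, 3)); m_in[..., 0] = 1.0   # in-plane, pays K per cell
args = (10e-12, 6e-3, 0.49e6, 0.0, 0.58e6, 1e-9, 0.4e-9)
print(total_energy(m_up, *args), total_energy(m_in, *args))
```

For a uniform in-plane state with $H=0$, only the anisotropy term survives, giving $E = d\,\mathrm{Area}\,K$.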
The energy is set to zero, $E_{total}=0$, for the ferromagnetic state of $m_{z}=1$. We have assumed that $\vec{m}$ is uniform in the thickness direction. The demagnetization effect is included in $E_{\mathrm{an}}$ through the effective anisotropy $K=K_{u}-\mu_{0}M_{\text{s}}^{2}/2$, corrected by the shape anisotropy, where $K_{u}$ is the perpendicular magnetocrystalline anisotropy. This is a good approximation when the film thickness $d$ is much smaller than the exchange length [34]. It is known that isolated circular skyrmions are metastable states with energy $8\pi Ad\sqrt{1-\kappa}$ when $\kappa=\pi^{2}D^{2}/(16AK)<1$ [34]. Here we use the MuMax3 simulator [35] to numerically solve the Landau-Lifshitz-Gilbert (LLG) equation for the stable states in the opposite regime $\kappa>1$, where circular skyrmions are not stable [34]; see Methods. Figure 1: Numerical solutions of the LLG equation under periodic boundary conditions and with $A=10{\rm pJ/m},D=6{\rm mJ/m^{2}},K=0.49{\rm MJ/m^{3}},M_{s}=0.58{\rm MA/m}$ for a sample of $300\rm{nm}\times 300\rm{nm}\times 0.4\rm{nm}$. (a1,b1,c1) are different initial configurations: a disk of diameter $20\rm{nm}$ (a1), a hexagon of side length $10\rm{nm}$ (b1) and a square of side length $20\rm{nm}$ (c1). (a2,b2,c2) are intermediate states at 0.3 ns with irregular shapes due to the negative formation energy. (a3,b3,c3) are the final stable patterns with irregular ramified stripes. Skyrmion charge density $\rho$ is encoded by colours (the blue for positive and the red for negative) while the gray scale encodes $m_{z}$. The dark black lines denote $m_{z}=0$. The positive and negative charges exist only around convex and concave areas, respectively. The spin profile across the stripes at the green ⓝ is considered. (a4, b4, c4) show the evolution of the total energy $E_{total}$ and the topological skyrmion number $Q$ ($t$ is on a logarithmic scale). $Q$ reaches the skyrmion number 1 within 1 ps.
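With the Fig. 1 parameters ($A=10$ pJ/m, $D=6$ mJ/m$^{2}$, $K=0.49$ MJ/m$^{3}$), the stability parameter is indeed well inside the stripe regime; a one-line check of the arithmetic (ours):

```python
import numpy as np

def kappa(A, D, K):
    # kappa = pi^2 D^2 / (16 A K); isolated circular skyrmions are
    # metastable (energy 8*pi*A*d*sqrt(1 - kappa)) only for kappa < 1.
    return np.pi ** 2 * D ** 2 / (16 * A * K)

print(f"kappa = {kappa(10e-12, 6e-3, 0.49e6):.2f}")  # about 4.5, i.e. kappa > 1
```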
Clearly, the skyrmion number is a constant and the total energy is negative and approaches a constant, almost independent of the initial configurations. For a sample of $300\rm{nm}\times 300\rm{nm}\times 0.4\rm{nm}$ under periodic boundary conditions and with $A=10{\rm pJ/m},D=6\rm{mJ/m^{2}},K=0.49\rm{MJ/m^{3}}$, and $M_{s}=0.58\rm{MA/m}$, which are typical values for chiral magnets supporting skyrmion crystals [36, 37, 30], $\kappa>1$, so that the single-domain state and isolated circular skyrmions are no longer stable [34]. We start from a small nucleation magnetic domain with a sharp domain wall to speed up the skyrmion formation dynamics, although in reality stripe skyrmions will appear spontaneously due to thermal or other fluctuations. Figure 1 shows how various initial states in the top panel (a1, b1, c1) evolve according to the LLG equation with $\alpha=0.25$. $\alpha$ does not change the physics described below because we are interested in the spin textures of the LLG equation that do not vary with time. However, $\alpha$ can change the evolution path and the energy dissipation rate, so it can change the intermediate states shown in (a2-c2) and influence which fixed point is picked when a system has many fixed points (textures), as in the current situation. This is similar to the sensitivity of attractor basins to the damping in a macrospin system [38]. The initial configurations are obtained by reversing spins in the white regions in (a1-c1) from $m_{z}=-1$ to $m_{z}=1$, such that the configurations have very sharp domain walls and zero skyrmion number $Q=0$. After a short time of the order of a picosecond, the initial states transform into irregular structures of skyrmion number $Q=1$, no matter whether the initial shape is circular or non-circular, as shown in (a4-c4), where the time is on a logarithmic scale.
As time goes on, $Q$ (the blue lines in a4-c4) stays at 1, and the system energy $E_{total}$ (the red curves) is negative and keeps decreasing until the system reaches a stable ramified stripy spin texture. Clearly, this irregular ramified spin texture is a non-circular skyrmion whose formation energy is negative. The negative formation energy explains why the skyrmion prefers to stretch out to occupy the whole space to lower its energy. This process is clearly demonstrated in the movie in the Supplementary Material, which shows how the system evolves from (a1) to (a3) and how $Q$ grows from 0 to 1. The simulations also show that the exact pattern is very sensitive to the initial configuration and the dynamical/kinetic paths, as well as to how the energy dissipates. In a real system, the process will proceed spontaneously from any randomly generated nucleation centre, whether created by a thermal fluctuation or an intentional agitation. We use color to encode the skyrmion charge density $\rho$ defined as $\rho=\vec{m}\cdot(\partial_{x}\vec{m}\times\partial_{y}\vec{m})/(4\pi)$, as indicated by the color bar in Fig. 1. Interestingly, $\rho$ is non-zero only around convex (positive) and concave (negative) areas. $\rho$ is almost zero along the long straight stripes. This may explain why the skyrmion number of stripy textures was thought to be zero in the literature [30]. As shown in Fig. 1(a4-c4), the sum of all positive and negative skyrmion charges is always quantized to 1, protected by the topology. The continuous decrease of the energy $E_{total}$ also indicates that the final morphology of the ramified stripe skyrmions is not unique and depends on the dynamical path of the system evolution, which in turn relates to the initial configuration. One striking feature of the stripes shown in Fig. 1 is a well-defined stripe width, which is usually referred to as a given wave vector in experiments. The competition between the exchange energy and the DMI energy can easily lead to an $A/D$ dependence [9, 17, 18, 19, 20].
However, it is highly non-trivial to understand the effect of the magnetic anisotropy on the stripe width. In order to understand the underlying physics of this well-defined width, one needs a good spin texture profile along the direction ($x$) perpendicular to the stripes. We find that black ($m_{z}\leq 0$) and white ($m_{z}\geq 0$) stripes can be approximated by $\Theta(x)=2\arctan\left[\frac{\sinh(L/2w)}{\sinh(|x|/w)}\right]$ and $\Theta(x)=2\arctan\left[\frac{\sinh(|x|/w)}{\sinh(L/2w)}\right]$ ($|x|\leq L/2$), respectively. $\Theta$ is the polar angle of the magnetization at position $x$, and $x=0$ is the centre of a stripe. $L$ and $w$ measure the stripe width and the skyrmion wall thickness, respectively, as schematically illustrated in Fig. 2(a). Figure 2(b) demonstrates the accuracy of this approximate profile for a set of model parameters $A=10\rm{pJ/m},D=6\rm{mJ/m^{2}},K=0.49\rm{MJ/m^{3}}$, and $M_{s}=0.58\rm{MA/m}$. The $y$-axis is $m_{z}$ and $x=0$ is the stripe centre where $m_{z}=1$. Different symbols are numerical data across different stripes, labelled by the green ⓝ in Fig. 1(a3-c3). The solid curve is the fit of the profile $\cos\Theta(x)$ with $L=10.78\rm{nm}$ and $w=3.05\rm{nm}$. The fact that all data from different stripes fall onto the same curve demonstrates that the stripes, the building blocks of the pattern, are identical. Using this excellent spin profile, one can obtain the magnetic energy of a film filled with stripe skyrmions as a function of $L$ and $w$. Minimizing the energy with respect to $L$ and $w$ allows us to obtain the $A$, $D$, and $K$ dependence of the stripe width $L$ and the skyrmion wall thickness $w$.
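This minimization can be carried out numerically. The short script below is our illustration, not the authors' code: it evaluates the energy density of a stripe-filled film (Eq. (2) below, with the $g_{1},g_{2}$ denominators written as $\sinh^{2}(\epsilon)+\sinh^{2}(\epsilon x)$, the form obtained by differentiating the profile $\Theta(x)$; see Methods), eliminates $L$ analytically through $\partial E/\partial L=0$, which gives $L(\epsilon)=8Ag_{1}(\epsilon)/(\pi D)$, and then minimizes over $\epsilon=L/(2w)$:

```python
import numpy as np

A, D, K = 10e-12, 6e-3, 0.49e6                 # parameters of Figs. 1 and 2(d)
kappa = np.pi ** 2 * D ** 2 / (16 * A * K)

x = np.linspace(0.0, 1.0, 4001)
def trap(y):                                    # trapezoid rule on [0, 1]
    return (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * (x[1] - x[0])

def g1(eps):
    den = (np.sinh(eps) ** 2 + np.sinh(eps * x) ** 2) ** 2
    return trap((2 * eps * np.sinh(eps) * np.cosh(eps * x)) ** 2 / den)

def g2(eps):
    den = (np.sinh(eps) ** 2 + np.sinh(eps * x) ** 2) ** 2
    return trap((2 * np.sinh(eps) * np.sinh(eps * x)) ** 2 / den)

# With L eliminated via dE/dL = 0, the residual energy density is
# E(eps) = K * (g2(eps) - kappa / g1(eps)); minimize it on a grid.
eps = np.linspace(0.3, 5.0, 400)
E = np.array([K * (g2(e) - kappa / g1(e)) for e in eps])
e_star = eps[E.argmin()]
L_star = 8 * A * g1(e_star) / (np.pi * D)       # optimal stripe width (m)
print(f"L = {L_star * 1e9:.2f} nm")
```

Under these assumptions the optimal width comes out close to the $10.52\rm{nm}$ dashed line of Fig. 2(d), and the minimum energy density is negative, as required for stripe skyrmions to be favoured over the ferromagnetic state.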
In terms of $L$, $\epsilon=L/(2w)$, $\xi=A/D$, and $\kappa$, the total system energy density of a film filled by such stripes is $E=\frac{4D}{\xi}\left[\frac{\xi^{2}}{L^{2}}g_{1}(\epsilon)-\frac{\pi\xi}{4L}+\frac{\pi^{2}}{64\kappa}g_{2}(\epsilon)\right],$ (2) where $g_{1}(\epsilon)=\int_{0}^{1}\frac{[2\epsilon\sinh(\epsilon)\cosh(\epsilon x)]^{2}}{[\sinh^{2}(\epsilon)+\sinh^{2}(\epsilon x)]^{2}}{\mathrm{d}}x$, and $g_{2}(\epsilon)=\int_{0}^{1}\frac{[2\sinh(\epsilon)\sinh(\epsilon x)]^{2}}{[\sinh^{2}(\epsilon)+\sinh^{2}(\epsilon x)]^{2}}{\mathrm{d}}x$. The optimal stripe width obtained from minimizing the energy $E$ is $L=a\frac{4A}{\pi D}$, where $a$ depends weakly on $\kappa=\pi^{2}D^{2}/(16AK)>1$. The physics of this result is clear: the DMI energy is negative, and one can add more stripes by reducing $L$ such that the total energy is lowered. On the other hand, the exchange energy increases as $L$ decreases. As a result, $L$ is proportional to $A/D$, which is opposite to the behaviour of an isolated skyrmion, whose size increases with $D$ and decreases with $A$ [34, 30]. These theoretical results agree very well with micromagnetic simulations, as shown in Fig. 2(c), where the fits of $L=aA/D$ give $a=7.61$ for $\kappa=1.3$; $a=6.82$ for $\kappa=1.8$; $a=6.47$ for $\kappa=2.9$. The dependence of $L$ on $\kappa$ for a fixed $A/D=1\rm{nm}$ is shown in the inset. One can see that $L$ depends only weakly on $\kappa$ for $\kappa\gg 1$. Unexpectedly, $L$ is the same no matter whether we have one, two, or more ramified and non-ramified stripe skyrmions. This is reflected in the independence of $L$ from the skyrmion number density, as shown in Fig. 2(d). Figure 2: (a) Schematic diagram of the spin texture of parallel stripes. (b) Symbols, from different stripes labelled by the green ⓝ in Fig. 1(a3-c3), are $m_{z}=\cos\Theta$ from MuMax3. The green solid line is the fit of $\Theta(x)=2\arctan\left[\frac{\sinh(|x|/w)}{\sinh(L/2w)}\right]$.
($\Theta\leq\pi/2$) and $\Theta(x)=2\arctan\left[\frac{\sinh(L/2w)}{\sinh(|x-L|/w)}\right]$ ($\Theta\geq\pi/2$) with $L=10.78\rm{nm}$ and $w=3.05\rm{nm}$. $x=0$ is the centre of a white stripe. The fact that all data from different stripes fall onto the same curve means that stripes of the same width are the basic building blocks of stripe skyrmions. (c) $A/D$-dependence of the stripe width $L$ for various $\kappa=\pi^{2}D^{2}/(16AK)=1.3$ (the green stars); 1.8 (the blue squares); and 2.9 (the red circles). The solid lines are the fits of $L=aA/D$ with $a=7.61$ for $\kappa=1.3$; $a=6.82$ for $\kappa=1.8$; $a=6.47$ for $\kappa=2.9$. Inset: the dependence of $L$ on $\kappa$ for $A/D=1\rm{nm}$. Symbols are the numerical data and the dashed line is the theoretical prediction without any fitting parameter. (d) The dependence of the stripe width $L$ on the skyrmion number density for $A=10\rm{pJ/m},D=6\rm{mJ/m^{2}},K=0.49\rm{MJ/m^{3}}$, and $M_{s}=0.58\rm{MA/m}$. The number of skyrmions varies from 1 to 113 in our sample of $300\rm{nm}\times 300\rm{nm}\times 0.4\rm{nm}$ under periodic boundary conditions. The dashed line is $L=10.52\rm{nm}$. To understand why condensed stripe skyrmions and skyrmion crystals can appear together in a given chiral magnetic film when the material parameters are fixed, we increase the number of skyrmions in our film of $300\rm{nm}\times 300\rm{nm}\times 0.4\rm{nm}$. Encouraged by the results in Fig. 1, where each nucleation domain creates one irregular stripe skyrmion, we place 2, 100 and 169 small disk domains of diameter $10\rm{nm}$, as shown in Fig. 3(a1-c1), and let them evolve according to the LLG equation. Fig. 3(a2-c2) are the final steady states. As expected, we indeed obtain two irregular ramified stripe skyrmions (a2). For the case of 100 skyrmions, some of them have rectangular shapes and are arranged in a nematic phase, while the rest look like disks and form a lattice structure (b2).
In the case of 169 skyrmions, the skyrmions are disk-like and form a triangular lattice (c2). The skyrmion nature of the spin textures is clearly confirmed by the change of $Q$ (blue) with time, as shown in Fig. 3(d) (the solid, dashed, and dotted lines for $Q=2,\ 100,$ and 169, respectively). Fig. 3(d) also shows how the system energy (the red lines) changes with time. One interesting feature is that the total energy is not sensitive to the skyrmion number density as long as the skyrmion-skyrmion distance is larger than the optimal stripe width and skyrmions take stripe shapes. Beyond that point, the system energy starts to increase with the skyrmion number density, and condensed skyrmions transform from rectangular stripe skyrmions in a nematic phase into a circular-skyrmion crystal. This feature does not favour skyrmion crystal formation, and may explain why skyrmion crystals were observed in the presence of an external magnetic field: with only one skyrmion in the whole system, as in Fig. 1, the net magnetic moment is zero because the spin-up (white) and spin-down (black) areas are almost equal, so that the magnetic moments cancel each other. As the skyrmion number increases, the net magnetic moment increases and becomes non-zero. Figure 3: Evolution of systems with different numbers of single domains $N=2$ (a), 100 (b) and 169 (c) spread over the sample of $300\rm{nm}\times 300\rm{nm}\times 0.4\rm{nm}$ with periodic boundary conditions. The material parameters are the same as those in Fig. 1. To speed up the approach of the system to its final stable states, we use a large Gilbert damping constant of $\alpha=1$. (a1,b1,c1) are the initial configurations and (a2,b2,c2) are the final stable patterns. Skyrmion charge density $\rho$ is encoded in colours (the red and the blue for positive and negative $\rho$, respectively) while $m_{z}$ is encoded in the grayscale. The dark black lines denote $m_{z}=0$. The positive and negative charges exist only around convex and concave areas, respectively.
(d) is the evolution of the total energy $E_{total}$ (the left y-axis) and the topological skyrmion number $Q$ (the right y-axis with three ranges). Clearly, $Q$ reaches a constant integer in a very short time, with $Q=N=2,100$ and $169$, respectively. The total energy is negative and approaches a constant, indicating a stable state. We have shown that stripe and ramified-stripe skyrmions are essentially the same as the circular skyrmions in skyrmion crystals. The difference in skyrmion shapes at different skyrmion number densities comes from the skyrmion-skyrmion interaction. When the average distance between two nearby skyrmions is of the order of the stripe width, the skyrmion-skyrmion repulsion compresses the skyrmions into circular objects. This understanding permits a skyrmion crystal in the absence of an external magnetic field, as long as one can use other means to add more skyrmions into a film, such as a scanning tunnelling tip [6]. So far we have presented results for the interfacial DMI; similar results also hold for the bulk DMI. As shown in Fig. 4, one ramified stripe skyrmion has very similar structures for interfacial (a) and bulk (b) DMIs when we start from the same initial configuration with the same interaction strength. The only difference is the change from the Néel-type stripe wall to the Bloch-type stripe wall, as shown by the color coding in the figures. A perpendicular magnetic field can modify the morphology and width of a stripe without changing its skyrmion number. The stripe width increases (decreases) with the field when the out-of-plane spin component is anti-parallel (parallel) to the field. Figure 4: Structures of one stripe skyrmion for interfacial (a) and bulk (b) DMIs starting from the same initial configuration. The sample size is $300\rm{nm}\times 300\rm{nm}\times 0.4\rm{nm}$ with the same model parameters as those for Fig. 1 except $D=4{\rm mJ/m^{2}}$. Periodic boundary conditions are used in the simulations.
The in-plane spin component is encoded by the colors. By definition, it requires a certain amount of energy to destroy a stable or metastable state. Thus, all skyrmions discussed in this paper are stable against thermal noise as long as the thermal energy is smaller than their potential barriers. In the Supplemental Material, we provide a video showing that the state in Fig. 3(b2) is stable at 50 K. Skyrmions provide a fertile ground for studying fundamental physics. For example, the topological Hall effect is a phenomenon concerning how the non-collinear magnetization in skyrmion crystals affects electron transport. Knowing that stripy phases are also condensed irregular skyrmions surely expands the arena of the topological Hall effect. We can study how electron transport is affected not only by skyrmion crystals, but also by skyrmions in other condensed phases such as nematic phases, and how the elongation and orientation of stripe skyrmions affect Hall transport. The discovery of stripe skyrmions will also allow us to investigate the interplay of topology, shape, spin and charge. One can investigate separately how the topology and the local and global geometries affect spin excitations. The assertion that ramified stripes and other stripes appearing together with skyrmion crystals are irregular skyrmions is firmly confirmed by the micromagnetic simulations. These stripes have a well-defined width arising from the competition between the exchange interaction energy and the DMI energy. The inverse of the width is the well-known wave vector used to describe the spiral spin order in the literature [16]. In contrast to isolated skyrmions, whose size increases with the DMI constant and decreases with the exchange stiffness constant, the stripe width increases with the exchange stiffness constant and decreases with the DMI constant. Counter-intuitively, skyrmion crystals are highly compressible like a gas, not like an atomic crystal. This peculiar property needs careful further study.
We believe that our findings will have profound implications for skyrmion-based applications.

Methods

Numerical simulations. Spin dynamics is governed by the Landau-Lifshitz-Gilbert (LLG) equation, $\frac{\partial\vec{m}}{\partial t}=-\gamma\vec{m}\times\vec{H}_{\rm eff}+\alpha\vec{m}\times\frac{\partial\vec{m}}{\partial t},$ (3) where $\vec{m}$, $\gamma$, $\alpha$ are respectively the unit vector of the magnetization, the gyromagnetic ratio, and the Gilbert damping constant. $\vec{H}_{\rm eff}=2A\nabla^{2}\vec{m}+2Km_{z}\hat{z}+\vec{H}_{d}+\vec{H}_{\rm DM}$ is the effective field, including the exchange field characterized by the exchange stiffness $A$, the crystalline anisotropy field, the demagnetizing field $\vec{H}_{d}$, and the DMI field $\vec{H}_{\rm DM}$. In the absence of an energy source such as an electric current, the LLG equation describes a dissipative system whose energy can only decrease [39, 40]. Thus, solving the LLG equation is an efficient way to find the stable spin textures. In the present case, we apply periodic boundary conditions to eliminate edge effects. We use the MuMax3 package [35] to numerically solve the LLG equation with a mesh size of $1\rm{nm}\times 1\rm{nm}\times 0.4\rm{nm}$ for skyrmions of $L>5\rm{nm}$. For skyrmions of $L<5\rm{nm}$, the mesh size is $0.1\rm{nm}\times 0.1\rm{nm}\times 0.4\rm{nm}$. The number of stable states and their structures should not depend on the Gilbert damping constant, but the spin dynamics is very sensitive to $\alpha$. To speed up our simulations, we use large values of $\alpha$, namely 0.25 and 1. We consider only material parameters that support condensed skyrmion states. The skyrmion size $L$ is obtained directly from the numerical data or by fitting the simulated spin profile to $\Theta(x)=2\arctan\left[\frac{\sinh(L/2w)}{\sinh(|x|/w)}\right]$ (black stripes, $m_{z}\leq 0$) and $\Theta(x)=2\arctan\left[\frac{\sinh(|x|/w)}{\sinh(L/2w)}\right]$ (white stripes, $m_{z}\geq 0$) with stripe width $L$ and wall width $w$.
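As a toy illustration of this relaxation method (a single macrospin in a fixed field rather than a full texture; the field value, time step and damping are our own choices), Eq. (3) can be recast in the explicit Landau-Lifshitz form $\partial\vec{m}/\partial t=-\frac{\gamma}{1+\alpha^{2}}\left[\vec{m}\times\vec{H}_{\rm eff}+\alpha\,\vec{m}\times(\vec{m}\times\vec{H}_{\rm eff})\right]$ and integrated with a simple Euler scheme:

```python
import numpy as np

gamma, alpha = 1.76e11, 0.25          # gyromagnetic ratio (rad/(s*T)), damping
H = np.array([0.0, 0.0, 0.1])         # fixed effective field: 0.1 T along +z

def llg_rhs(m):
    # Explicit Landau-Lifshitz form, equivalent to the Gilbert form of Eq. (3)
    mxH = np.cross(m, H)
    return -gamma / (1 + alpha ** 2) * (mxH + alpha * np.cross(m, mxH))

m = np.array([1.0, 0.0, 0.0])         # start perpendicular to the field
dt = 1e-13                             # time step (s)
for _ in range(200_000):               # 20 ns, i.e. many damping times
    m = m + dt * llg_rhs(m)
    m /= np.linalg.norm(m)             # LLG conserves |m|; remove Euler drift
print(m[2])                            # the spin has relaxed to the field axis
```

The Zeeman energy $-\mu_{0}M_{s}\vec{m}\cdot\vec{H}$ decreases monotonically along such a trajectory, which is the dissipative property exploited in the simulations to find stable textures.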
Here $-L/2\leq x\leq L/2$ and $x=0$ is the centre of a stripe. It should be pointed out that the physics discussed here does not depend on the boundary conditions. However, different boundary conditions have different confinement potentials (or potential well depths) for stripe skyrmions, and can affect the maximal skyrmion number density below which a condensed skyrmion state is metastable.

Energy density of a film filled by stripes. Following assumptions similar to those in Ref. [19], we can compute the energy density of a system filled with stripes of width $L$ that are parallel to the $y$-axis and periodically arranged along the $x$-axis. If $\Theta$ is the polar angle of the spins and the spin profile is $\Theta(x)=2\arctan\left[\frac{\sinh((|x-nL|)/w)}{\sinh(L/2w)}\right]+n\pi$ with integer $n$ and $x\in((n-0.5)L,(n+0.5)L)$, as shown in Fig. 2(a), the total energy of the film in the range $y_{1}<y<y_{2}$ and $x_{1}<x<x_{2}$ is $\displaystyle E_{total}=d\int_{y_{1}}^{y_{2}}\int_{x_{1}}^{x_{2}}[A(\partial_{x}\Theta)^{2}-D\partial_{x}\Theta+K\sin^{2}\Theta]{\rm d}x{\rm d}y.$ Since $x_{2}-x_{1}$ and $y_{2}-y_{1}$ are much larger than $L$, we only need to minimize the energy density $E=E_{total}/[d(x_{2}-x_{1})(y_{2}-y_{1})]$.
Then one has $E=\frac{1}{L}\int_{-L/2}^{L/2}\left[A\left(\frac{\partial\Theta}{\partial x}\right)^{2}-D\frac{\partial\Theta}{\partial x}+K\sin^{2}\Theta\right]{\rm d}x.$ (4) In terms of $\epsilon=L/(2w)$, the terms in $E$ are $\displaystyle E_{ex}$ $\displaystyle=\int_{-L/2}^{L/2}A\left[\frac{\partial\Theta(x)}{\partial x}\right]^{2}{\mathrm{d}}x$ $\displaystyle=\frac{A}{w^{2}}\int_{-L/2}^{L/2}\left[\frac{2\sinh(L/2w)\cosh(x/w)}{\sinh^{2}(L/2w)+\sinh^{2}(x/w)}\right]^{2}{\mathrm{d}}x$ $\displaystyle=\frac{2A}{L}\int_{-1}^{1}\left[\frac{2\epsilon\sinh(\epsilon)\cosh(\epsilon x)}{\sinh^{2}(\epsilon)+\sinh^{2}(\epsilon x)}\right]^{2}{\mathrm{d}}x$ $\displaystyle=\frac{4A}{L}g_{1}(\epsilon),$ $\displaystyle E_{DM}$ $\displaystyle=-D\int_{-L/2}^{L/2}\frac{\partial\Theta(x)}{\partial x}{\mathrm{d}}x$ $\displaystyle=-D\left[\Theta(L/2)-\Theta(-L/2)\right]=-\pi D,$ $\displaystyle E_{an}$ $\displaystyle=\int_{-L/2}^{L/2}K\sin^{2}\Theta(x){\mathrm{d}}x$ $\displaystyle=K\int_{-L/2}^{L/2}\left[\frac{2\sinh(L/2w)\sinh(x/w)}{\sinh^{2}(L/2w)+\sinh^{2}(x/w)}\right]^{2}{\mathrm{d}}x$ $\displaystyle=\frac{K}{2}\int_{-1}^{1}\left[\frac{2\sinh(\epsilon)\sinh(\epsilon x)}{\sinh^{2}(\epsilon)+\sinh^{2}(\epsilon x)}\right]^{2}L{\mathrm{d}}x$ $\displaystyle=KLg_{2}(\epsilon).$ Adding the three terms, the energy density $E$ is $E=\frac{4A}{L^{2}}g_{1}(\epsilon)-\frac{\pi D}{L}+Kg_{2}(\epsilon).$ In terms of $L$, $\epsilon$, $\xi=A/D$ and $\kappa=\pi^{2}D^{2}/(16AK)$, we obtain Eq. (2), $E=\frac{4D}{\xi}\left[\frac{\xi^{2}}{L^{2}}g_{1}(\epsilon)-\frac{\pi\xi}{4L}+\frac{\pi^{2}}{64\kappa}g_{2}(\epsilon)\right].$ The first and the third terms on the right-hand side are positive, so by the inequality of arithmetic and geometric means $E\geq\frac{4D}{\xi}\left[\sqrt{\frac{4\pi^{2}\xi^{2}}{64L^{2}\kappa}g_{1}(\epsilon)g_{2}(\epsilon)}-\frac{\pi\xi}{4L}\right]=\frac{\pi D}{L}\left[\sqrt{\frac{g_{1}g_{2}}{\kappa}}-1\right].$ To have negative $E$, $\kappa$ must be larger than $g_{1}g_{2}\geq 1$.

Data availability.
The data that support the plots within this paper and other findings of this study are available from the corresponding author on reasonable request. References ## References * [1] Skyrme, T. H. R. A unified field theory of mesons and baryons. Nucl. Phys. 31, 556-569 (1962). * [2] Mühlbauer, S. et al. Skyrmion lattice in a chiral magnet. Science 323, 915-919 (2009). * [3] Yu, X. Z. et al. Real-space observation of a two-dimensional skyrmion crystal. Nature 465, 901-904 (2010). * [4] Yu, X. Z. et al. Near room temperature formation of a skyrmion crystal in thin-films of the helimagnet FeGe. Nat. Mater. 10, 106-109 (2011). * [5] Heinze, S. et al. Spontaneous atomic-scale Magnetic skyrmion lattice in two dimensions. Nat. Phys. 7, 713-718 (2011). * [6] Romming, N. et al. Writing and deleting single magnetic skyrmions. Science 341, 636-639 (2013). * [7] Onose, Y., Okamura, Y., Seki, S., Ishiwata, S. & Tokura, Y. Observation of magnetic excitations of skyrmion crystal in a helimagnetic insulator $\mathrm{Cu}_{2}\mathrm{OSeO}_{3}$. Phys. Rev. Lett. 109, 037603 (2012). * [8] Bogdanov, A. N. & Rößler, U. K. Chiral symmetry breaking in magnetic thin films and multilayers. Phys. Rev. Lett. 87, 037203 (2001). * [9] Rößler, U. K., Bogdanov, A. N. & Pfleiderer, C. Spontaneous skyrmion ground states in magnetic metals. Nature 442, 797-801 (2006). * [10] Sampaio, J., Cros, V., Rohart, S., Thiaville, A. & Fert, A. Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures. Nat. Nanotechnol. 8, 839–844 (2013). * [11] Li J. et al. Tailoring the topology of an artificial magnetic skyrmion. Nat. Commun. 5, 4704 (2014). * [12] Back, C. et al. The 2020 skyrmionics roadmap. J. Phys. D: Appl. Phys. 53, 363001 (2020). * [13] Jiang, W. et al. Blowing magnetic skyrmion bubbles. Science 349, 283-286 (2015). * [14] Du, H. et al. Edge-mediated skyrmion chain and its collective dynamics in a confined geometry. Nat. Commun. 6, 8504 (2015). * [15] Yuan, H. Y. 
& Wang, X. R. Skyrmion creation and manipulation by nano-second current pulses. Sci. Rep. 6, 22638 (2016). * [16] Uchida, M., Onose, Y., Matsui, Y. & Tokura, Y. Real-space observation of helical spin order. Science 311, 359-361 (2006). * [17] Han, J. H., Zang, J. D., Yang, Z. H., Park, J. H. & Nagaosa, N. Skyrmion lattice in a two-dimensional chiral magnet. Phys. Rev. B 82, 094429 (2010). * [18] Butenko, A. B., Leonov, A. A. & Rößler, U. K. Stabilization of skyrmion textures by uniaxial distortions in noncentrosymmetric cubic helimagnets. Phys. Rev. B 82, 052403 (2010). * [19] Rohart, S. & Thiaville, A. Skyrmion confinement in ultrathin film nanostructures in the presence of Dzyaloshinskii-Moriya interaction. Phys. Rev. B 88, 184422 (2013). * [20] Leonov, A. O. et al. The properties of isolated chiral skyrmions in thin magnetic films. New J. Phys. 18, 065003 (2016). * [21] Bogdanov, A. & Hubert, A. Thermodynamically stable magnetic vortex states in magnetic crystals. J. Magn. Magn. Mater. 138, 255-269 (1994). * [22] Jiang, W. et al. Nanoscale magnetic skyrmions and target states in confined geometries. Science 349, 283-286 (2015). * [23] Yu, G. et al. Room-temperature skyrmions in an antiferromagnet-based heterostructure. Nano Lett. 18, 980-986 (2018). * [24] Birch, M. T. et al. Real-space imaging of confined magnetic skyrmion tubes. Nat. Commun. 11, 1-8 (2020). * [25] He, M. et al. Realization of zero-field skyrmions with high-density via electromagnetic manipulation in Pt/Co/Ta multilayers. Appl. Phys. Lett. 111, 202403 (2017). * [26] Jena, J. et al. Evolution and competition between chiral spin textures in nanostripes with D2d symmetry. Sci. Adv. 6, eabc0723 (2020). * [27] Müller, J. et al. Magnetic skyrmions and skyrmion clusters in the helical phase of $\mathrm{Cu}_{2}\mathrm{OSeO}_{3}$. Phys. Rev. Lett. 119, 137201 (2017). * [28] Raju, M. et al. The evolution of skyrmions in Ir/Fe/Co/Pt multilayers and their topological Hall signature. Nat. Commun. 10, 1-7 (2019).
* [29] Schoenherr, P. et al. Topological domain walls in helimagnets. Nat. Phys. 14, 465-468 (2018). * [30] Cortés-Ortun̈o, D. et al. Nanoscale magnetic skyrmions and target states in confined geometries. Phys. Rev. B 99, 214408 (2019). * [31] Ezawa, M. Compact merons and skyrmions in thin chiral magnetic films. Phys. Rev. B 83, 100408 (2011). * [32] Silva, R. L., Secchin, L. D., Moura-Melo, W. A., Pereira, A. R., & Stamps, R. L. Emergence of skyrmion lattices and bimerons in chiral magnetic thin films with nonmagnetic impurities. Phys. Rev. B 89, 054434 (2014). * [33] N. Nagaosa, and Y. Tokura, Topological properties and dynamics of magnetic skyrmions, Nat. Nanotech. 8, 899 (2013). * [34] Wang, X. S., Yuan, H. Y. & Wang, X. R. A theory on skyrmion size. Commun. Phys. 1, 31 (2018). * [35] Vansteenkiste, A. et al. The design and verification of MuMax3. AIP Adv. 4, 107133 (2014). * [36] Fert, A., Reyren, N. & Cros, V. Magnetic skyrmions: advances in physics and potential applications. Nat. Rev. Mater. 2, 17031 (2017). * [37] Zhang, X. et al. Skyrmion-electronics: writing, deleting, reading and processing magnetic skyrmions toward spintronic applications. J. Phys.: Condens. Matter 32, 143001 (2020). * [38] Sun, Z. Z. & Wang, X. R. Fast magnetization switching of Stoner particles: A nonlinear dynamics picture. Phys. Rev. B 71, 174430 (2005). * [39] Wang, X. R., Yan, P., Lu, J. & He, C. Magnetic field driven domain-wall propagation in magnetic nanowires. Ann. Phys. (NY) 324, 1815-1820 (2009). * [40] Wang, X. R., Yan, P. & Lu, J. High-field domain wall propagation velocity in magnetic nanowires. Europhys. Lett. 86, 67001 (2009). Acknowledgement This work is supported by Ministry of Science and Technology through grant MOST20SC04-A, the NSFC Grant (No. 11974296 and 11774296) and Hong Kong RGC Grants (No. 16301518 and 16301619). Partial support by the National Key Research and Development Program of China (Grant No. 2018YFB0407600) is also acknowledged. 
Author contributions X. R. Wang planned the project and wrote the manuscript. X.C.H. and H.T.W. performed the theoretical analysis and numerical simulations, and prepared the figures. All authors discussed the results and commented on the manuscript. Additional information Supplementary Information accompanies this paper at http:// Competing interests: The authors declare no competing interests.
# The ergodic Mean Field Game system for a type of state constraint condition Mariya Sardarli ## 1 Introduction The object of this paper is to establish well-posedness (existence and uniqueness) results for a type of state constraint ergodic Mean Field Game (MFG) system $\displaystyle\begin{cases}-\Delta u+H(Du)+\rho=F(x;m)&\text{ in }\Omega,\\\ \Delta m+div(mD_{p}H(Du))=0&\text{ in }\Omega,\\\ m\geq 0,\quad\int_{\Omega}m=1,\end{cases}$ (1.1) supplemented with the state constraint-type infinite Dirichlet boundary condition $\lim\limits_{d(x)\to 0}u(x)=\infty,$ (1.2) where $d(x):=d(x,\partial\Omega)$ is the distance to the boundary. Here $\Omega$ is an open bounded subset of $\mathbb{R}^{n}$, $H:\mathbb{R}^{n}\times\overline{\Omega}\to\mathbb{R}$ is the Hamiltonian associated with the cost function of an individual agent, $\rho$ is the ergodic constant, $m$ is the distribution of the agents, and $F:\Omega\times L^{1}(\Omega)\to\mathbb{R}$ is the interaction term. The Hamilton-Jacobi-Bellman (HJB) equation is understood in the viscosity sense and the Kolmogorov-Fokker-Planck (KFP) equation in the distributional sense. The MFG $(\ref{eq:proto})$ with the boundary condition $(\ref{intro:bds})$ can be interpreted as a state constraint-type problem. Indeed, the infinite boundary condition prevents the underlying stochastic trajectories from reaching the boundary, forcing them to stay in the domain. The theory of MFGs was introduced by Lasry and Lions [8] and, in a particular setting, by Caines, Huang, and Malhamé [4], to describe the interactions of a large number of small and indistinguishable rational agents. In the absence of common noise, the behavior of the agents leads to a forward-backward coupled system of PDEs, consisting of a backward HJB equation describing the individual agent’s value function, and a forward KFP equation for the law (density) of the population.
The forward-backward system with periodic boundary conditions and either local or non-local coupling, as well as its ergodic stationary counterpart, was first studied in [8]. Summaries of results may also be found in Cardaliaguet [2], Lions [9], and Gomes, Pimentel, and Voskanyan [3]. Fewer results exist, however, in the case of Dirichlet, state constraint or Neumann boundary conditions. We next state the main result of this paper for superlinear power-like Hamiltonians, that is, $H(p)=|p|^{q}\quad\text{ with }\quad q>1,$ (1.3) and suitable assumptions on the coupling $F$ (see $(F1)$, $(F2)$ and $(F3)$ in Section 2.2). We emphasize that the particular form of the Hamiltonian is by no means essential to the analysis that follows. It simply provides a sufficiently general model problem that results in the necessary asymptotics of $u$ and $m$ near $\partial\Omega$. In particular, for $1<q\leq 2$, Hamiltonians $H(p)$ for which $\delta^{q/(q-1)}H(\delta^{-1/(q-1)}p)-|p|^{q}\to 0,$ (1.4) locally uniformly in $p$ as $\delta\to 0$ are permissible. We state the result below, deferring the precise meaning of a solution to the system until Section 3. ###### Theorem. Let $1<q\leq 2$ and assume either $(F1)$ or $(F2)$. Then, for all $r>1$, there exists a solution $(u,\rho,m)\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}\times W^{1,r}_{loc}(\Omega)$ to the system $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=F(x;m)&\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u(x)=\infty,\\\ \Delta m+div(mq|Du|^{q-2}Du)=0&\text{ in }\Omega,\\\ m\geq 0,\quad\int_{\Omega}m=1.\end{cases}$ (1.5) Moreover, if $F$ satisfies $(F3)$, then the solution is unique. As will be explained below, the conditions $(F1)$ and $(F2)$ describe non-local and local couplings respectively, while $(F3)$ is the usual monotonicity condition.
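As a quick illustrative check (not part of the original statement), the rescaling condition (1.4) is satisfied, for instance, by lower-order perturbations of the model Hamiltonian, such as $H(p)=|p|^{q}+c\cdot p$ with $c\in\mathbb{R}^{n}$ fixed:

```latex
\delta^{q/(q-1)}\,H\!\left(\delta^{-1/(q-1)}p\right)
  = \delta^{q/(q-1)}\left(\delta^{-q/(q-1)}|p|^{q}
      + \delta^{-1/(q-1)}\,c\cdot p\right)
  = |p|^{q} + \delta\,c\cdot p
  \;\longrightarrow\; |p|^{q}
  \quad\text{as }\delta\to 0,
```

locally uniformly in $p$, since $q/(q-1)-1/(q-1)=1$; the perturbation simply disappears in the limit.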
### 1.1 Background Heuristically, HJB equations with convex (in the momentum) Hamiltonian $H$ are associated with the stochastic control problem governed by the dynamics $dX_{t}=\alpha_{t}dt+\sqrt{2}dB_{t},$ where $(\alpha_{t})_{t\geq 0}$ is a non-anticipating process. In feedback form, the optimal policy is $-D_{p}H(Du)$. In settings in which the trajectories of the agents reach the boundary but do not leave the domain or, as is the case we consider here, do not reach the boundary at all, the solutions of the HJB equation are referred to as state constraint solutions. If the drift $-D_{p}H(Du)$ is bounded, then for all $t>0$, $P(X_{t}\in\Omega^{c})>0$. It follows that for trajectories to remain inside $\Omega$, the drift must become unbounded near the boundary, pushing the agents back into the domain. For Hamiltonians of the form (1.3), or more generally $(\ref{eq:gen})$, state constraint solutions exist when the equation is paired with an infinite Neumann condition (reflection cost) or infinite Dirichlet condition (exit cost), the “correct” choice of boundary condition depending on whether the Hamiltonian is sub- or super-quadratic. Indeed, when $q>2$, solutions are bounded but the normal component of the drift blows up at the boundary, whereas in the sub-quadratic case the solution itself also blows up. In the sub-quadratic case, which is the one considered here, the trajectories never reach the boundary. Hence, the population density is “almost zero” near $\partial\Omega$. As the behavior of $m$ at the boundary is coupled with the blowup of $Du$ on $\partial\Omega$, no boundary condition is required on $m$ in (1.5). A type of MFG state-constraint problem has been studied by Porretta and Ricciardi in [10]. The authors impose structural assumptions on their Hamiltonian so that, for all choices of the control, an agent governed by these dynamics does not exit the domain $\Omega$. In this instance, no explicit boundary conditions on the value function are necessary to achieve well-posedness.
However, the assumptions of [10] do not permit coercive Hamiltonians, and, in particular, do not apply to the ones considered in this paper. ### 1.2 Infinite Dirichlet boundary conditions In the context of HJB equations, infinite boundary conditions with power-like Hamiltonians were studied by Lasry and Lions in [7], who considered the HJB equation $-\Delta u+|Du|^{q}+\lambda u=g\text{ in }\Omega,$ (1.6) with $1<q\leq 2$ and $g\in L^{\infty}(\Omega)$, as well as the ergodic problem $-\Delta u+|Du|^{q}+\rho=g\text{ in }\Omega,$ both coupled with the infinite Dirichlet boundary condition $(\ref{intro:bds})$. It was shown in [7] that the boundary value problem $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=g\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u(x)=\infty,\end{cases}$ (1.7) has a unique viscosity solution $(u,\rho)$, where uniqueness for $u$ is understood to be modulo an additive constant. Such a solution has unbounded drift in the direction of the boundary and the agents remain in the domain. Due to the coupled nature of the MFG system, the existence and well-posedness of the corresponding KFP equation depend in an essential way on the asymptotics of $u$ and $Du$ near the boundary. In general, if $b\in L^{1}_{loc}(\Omega)$, the distributional solution $m$ of $\displaystyle\Delta m+div(mb)=0\text{ in }\Omega,\quad\int_{\Omega}m=1,\quad m\geq 0,$ (1.8) need not be unique unless a boundary condition is specified. On the other hand, if $\displaystyle\lim\limits_{d(x)\to 0}b(x)d(x)=C\nu,$ (1.9) where $C>0$ and $\nu$ is the outward unit normal to the boundary at the point closest to $x$, Huang, Ji, Liu and Yi showed in [5] that solutions are unique, despite the absence of boundary conditions. To show $(\ref{eq:13})$ for $b=D_{p}H(Du)$ and apply this result, we need to make precise the asymptotics of $u$ and $Du$ near the boundary. In [7], the authors proved that if $(u,\rho)$ is a solution of $(\ref{eq:hjb})$, the asymptotics are precisely of this type.
Namely, there exists a constant $C>0$ such that $\displaystyle\lim\limits_{d(x)\to 0}\dfrac{Du(x)\cdot\nu(x)}{d^{-1/(q-1)}}=C.$ In this paper, we refine the result and show that the rate of convergence of the above limit is controlled by the $L^{\infty}-$norm of $g$. This becomes important when proving stability results related to $(\ref{eq:mainintro})$. ### 1.3 The methodology As usual, the existence of a solution to the MFG system $(\ref{eq:mainintro})$ is proven via a fixed point argument. To apply such an argument, it is crucial to obtain bounds on $m$ up to $\partial\Omega$. Toward that end, for $\delta>0$, we define the subdomain $\Omega_{\delta}\subset\Omega$ by $\Omega_{\delta}:=\\{x\in\Omega:d(x,\partial\Omega)>\delta\\}.$ As a first step, we show that if $b$ satisfies $(\ref{eq:13})$ with constant $C$, then there exists $\gamma=\gamma(C)>0$, $\delta=\delta(C)>0$, and $L=L(C,||b||_{L^{\infty}(\Omega_{\delta})})>0$ such that any solution $m$ of $(\ref{eq:mbla})$ satisfies $\int_{\Omega}d^{-\gamma}m\leq L.$ (1.10) This estimate is needed in both the local and non-local cases. The space in which the fixed point argument is carried out differs in the non-local and local coupling settings. In the former, the fixed point is obtained in the space $\mathcal{W}_{\gamma}(\Omega):=\\{m\in L^{1}(\Omega):\int_{\Omega}d^{-\gamma}m<\infty\\},$ (1.11) where $\gamma$ is as in $(\ref{eq:gam})$ and, here, depends only on $q$. The map $T:\mathcal{W}_{\gamma}(\Omega)\to\mathcal{W}_{\gamma}(\Omega)$ formed by composing the solution maps of $(\ref{eq:hjb})$ and $(\ref{eq:mbla})$ satisfies the conditions of Schaefer’s fixed point theorem. In particular, the continuity of $T$ is straightforward due to the regularizing effect of the coupling. By contrast, the continuity of the solution map is far from obvious in the local case, where $F(x;m)=f(m(x))$ for a continuous and bounded function $f$.
Convergence of a sequence $m_{n}$ in $\mathcal{W}_{\gamma}(\Omega)$ does not yield local uniform convergence of $F(x;m_{n})$ as it does in the non-local case. Thus the fixed point argument must be performed in a different space. To deal with this issue, we introduce, for $\delta>0$, the approximating coupled ergodic system $\displaystyle\begin{cases}-\Delta u_{\delta}+|Du_{\delta}|^{q}+\rho_{\delta}=f(\tilde{m}_{\delta}(x)),&\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u_{\delta}(x)=\infty,\\\ \Delta m_{\delta}+div\left(m_{\delta}q|Du_{\delta}|^{q-2}Du_{\delta}\right)=0&\text{ in }\Omega_{\delta},\\\ \left(Dm_{\delta}+m_{\delta}q|Du_{\delta}|^{q-2}Du_{\delta}\right)\cdot\nu=0&\text{ on }\partial\Omega_{\delta},\\\ \int_{\Omega}m_{\delta}=1,\end{cases}$ (1.12) where $(u_{\delta},\rho_{\delta},m_{\delta})\in W^{1,r}\times\mathbb{R}\times C^{0,\alpha}(\Omega_{\delta})$ and $\tilde{m}_{\delta}$ is a suitable extension of ${m_{\delta}}$ to all of $\Omega$ satisfying $||\tilde{m}_{\delta}||_{C^{0,\alpha}(\Omega)}=||m_{\delta}||_{C^{0,\alpha}(\Omega_{\delta})}.$ We note that (1.12) is not the standard MFG system with Neumann boundary conditions, in which the HJB and KFP equations are set in the same domain and both have Neumann boundary conditions. Instead, the HJB equation is set in the entire domain $\Omega$, allowing us to take advantage of the known asymptotics of $u$ and $Du$ near the boundary. On the other hand, the Neumann conditions on the KFP equation allow us to exploit the regularity of $m_{\delta}$ up to $\partial\Omega_{\delta}$.
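The reason the KFP equation in (1.12) has a bounded drift on $\Omega_{\delta}$ is the gradient bound of Theorem 3.6(ii) below; a sketch of the estimate, with $C$ denoting the constant appearing there:

```latex
\left|\,q\,|Du_{\delta}|^{q-2}Du_{\delta}\,\right|
  = q\,|Du_{\delta}|^{q-1}
  \leq q\,C^{q-1}\,d(x)^{-1}
  \leq q\,C^{q-1}\,\delta^{-1}
  \qquad\text{for } x\in\Omega_{\delta},
```

since $d(x)>\delta$ on $\Omega_{\delta}$ and $\big(d^{-1/(q-1)}\big)^{q-1}=d^{-1}$.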
As we discuss below, for $b\in L^{\infty}(\Omega)$, the KFP equation $\displaystyle\begin{cases}\Delta m_{\delta}+div(m_{\delta}b)=0&\text{ in }\Omega_{\delta},\\\ (Dm_{\delta}+m_{\delta}b)\cdot\nu=0&\text{ on }\partial\Omega_{\delta},\end{cases}$ has a unique (modulo a multiplicative constant) distributional solution, which is positive and Hölder continuous up to the boundary with Hölder constant depending on the $L^{\infty}-$norm of the drift. If $u$ is the solution of an HJB equation like $(\ref{eq:hjb})$, $Du$ is bounded on $\Omega_{\delta}$ and this result applies to the KFP equation in $(\ref{eq:deltap})$. Moreover, as shown in [7], the local bounds on the solutions $u_{\delta}$ of the HJB equation are uniform in the $L^{\infty}-$norm of $f$, which is independent of $m_{\delta}$. Therefore, as we show below, it is possible to carry out a fixed point argument in $C^{0,\alpha}(\Omega_{\delta})$. We are then able to pass from the solution $(u_{\delta},\rho_{\delta},m_{\delta})$ of the approximating system to a solution $(u,\rho,m)$ of the original system on $\Omega$ by the stability of the HJB equation and the local uniform bounds on $m_{\delta}$, which in turn follow from local uniform bounds on $Du_{\delta}$. Lastly, under the usual monotonicity assumption we establish uniqueness in both the local and non-local cases. The key observation in the argument is that, for solutions $(\rho,u,m)$ of (1.5), $m$ decays near the boundary as in (1.10).
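For orientation, the "usual monotonicity assumption" enters through the standard Lasry–Lions computation: formally, testing the difference of the HJB equations for two solutions $(u_{1},\rho_{1},m_{1})$, $(u_{2},\rho_{2},m_{2})$ against $m_{1}-m_{2}$, testing the difference of the KFP equations against $u_{1}-u_{2}$, and subtracting gives (the boundary terms are precisely what the decay (1.10) is needed to control in the present setting):

```latex
\int_{\Omega}\big(F(x;m_{1})-F(x;m_{2})\big)(m_{1}-m_{2})\,dx
+\int_{\Omega} m_{1}\big[H(Du_{2})-H(Du_{1})-D_{p}H(Du_{1})\cdot(Du_{2}-Du_{1})\big]\,dx
+\int_{\Omega} m_{2}\big[H(Du_{1})-H(Du_{2})-D_{p}H(Du_{2})\cdot(Du_{1}-Du_{2})\big]\,dx
=0,
```

where the terms involving $\rho_{1}-\rho_{2}$ cancel because $\int_{\Omega}m_{1}=\int_{\Omega}m_{2}=1$. Convexity of $H$ makes the last two integrands non-negative, and $(F3)$ then forces $m_{1}=m_{2}$ a.e.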
Section 6 establishes the uniqueness of solutions to the system and quantifies the sense in which $m$ vanishes at the boundary. The Appendix contains two technical lemmata used in Sections 3 and 5. ## 2 Assumptions and definitions ### 2.1 The domain Throughout this paper it is assumed that $\Omega\text{ is a bounded open subset of }\mathbb{R}^{n}\text{ with }C^{2}\text{-boundary}.$ In particular, the domain satisfies the uniform interior ball condition. Namely, there exists a $\delta_{0}>0$ such that for every $x\in\partial\Omega$, the ball $B_{\delta_{0}}(x-\delta_{0}\nu(x))$ is contained in $\Omega$. Recall the definition of the subdomains $\Omega_{\delta}:=\\{x\in\Omega:d(x,\partial\Omega)>\delta\\}.$ Due to the $C^{2}$-regularity of $\Omega$, there exists $\epsilon_{0}\in(0,1)$ such that $d(x,\partial\Omega)\in C^{2}(\Omega\backslash\Omega_{\epsilon_{0}})$ and for all $x\in\Omega\backslash\Omega_{\epsilon_{0}}$ there exists a unique point $\overline{x}\in\partial\Omega$ such that $d(x,\partial\Omega)=|x-\overline{x}|$. Moreover, on $\Omega\backslash\Omega_{\epsilon_{0}}$, $Dd(x,\partial\Omega)=Dd(\overline{x},\partial\Omega)=-\nu(\overline{x}).$ Lastly, we define $d$ to be a $C^{2}(\overline{\Omega})$ extension of $d(x,\partial\Omega)$ which is equal to $d(x,\partial\Omega)$ on $\Omega\backslash\Omega_{\epsilon_{0}}$. We additionally require that $d(x)\leq 1$ in $\Omega$ and $d(x)\geq M_{0}$ in $\Omega_{\epsilon_{0}}$ for some positive constant $M_{0}$. ### 2.2 The coupling Our analysis permits a coupling $F:\Omega\times L^{1}(\Omega)\to\mathbb{R}$ which is either non-local, on the one hand, or local, continuous, and bounded on the other. Namely, we consider couplings $F$ satisfying one of the following: 1. (F1) The map $m\mapsto F(\cdot\ ;m)$ sends bounded sets in $L^{1}(\Omega)$ to bounded sets in $L^{\infty}(\Omega)$, and is continuous from $L^{1}(\Omega)$ into $L^{\infty}(\Omega)$. 2. (F2) $F(x;m)=f(m(x))$ for $f$ a continuous, bounded function.
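For concreteness, two model couplings (illustrative examples, not prescribed by the analysis): a non-local coupling satisfying $(F1)$ is mollification by a smooth kernel $\eta$, and a local coupling satisfying $(F2)$ is any bounded continuous nonlinearity, e.g.

```latex
F(x;m)=(\eta\ast m)(x)=\int_{\mathbb{R}^{n}}\eta(x-y)\,\bar{m}(y)\,dy,
  \qquad \eta\in C_{c}^{\infty}(\mathbb{R}^{n}),
\qquad\text{and}\qquad
F(x;m)=f(m(x)),\quad f(s)=\arctan(s),
```

where $\bar{m}$ denotes the extension of $m$ by zero outside $\Omega$. The first satisfies $(F1)$ since $||\eta\ast m||_{L^{\infty}}\leq||\eta||_{L^{\infty}}||m||_{L^{1}}$, and continuity from $L^{1}(\Omega)$ into $L^{\infty}(\Omega)$ follows from the same bound applied to differences.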
A coupling satisfying assumption $(F1)$ has an $L^{\infty}(\Omega)$-bound depending only on $\int_{\Omega}m$. A coupling satisfying $(F2)$ inherits regularity from the regularity of $m$ and is bounded by $||f||_{L^{\infty}(\mathbb{R})}$. When proving uniqueness we will require the additional monotonicity assumption 1. (F3) $\int_{\Omega}(F(x;m_{1})-F(x;m_{2}))(m_{1}(x)-m_{2}(x))dx\geq 0$, with equality only if $m_{1}=m_{2}$ a.e. ### 2.3 The space of measures for the non-local setting In the introduction, we defined a space $\mathcal{W}_{\gamma}(\Omega)$ which quantifies the boundary behavior of solutions to KFP equations with an unbounded drift term. We repeat the definition here for completeness. ###### Definition 2.1. For $\gamma>0$, define a norm $||\cdot||_{\gamma,\Omega}$ by $\displaystyle||m||_{\gamma,\Omega}=\int_{\Omega}d^{-\gamma}|m|,$ and recall that $\mathcal{W}_{\gamma}(\Omega)$ is the Banach space induced by this norm, as in $(\ref{wgamma})$. ### 2.4 The extension of measures defined on subdomains In the local coupling setting discussed in Section 5 we will consider measures defined on the subdomain $\Omega_{\delta}$. Since $F$ is defined on $\Omega\times L^{1}(\Omega)$, it will be necessary to define a suitable extension of measures on $\Omega_{\delta}$ to $\Omega$ that preserves their Hölder regularity. For $\mu\in C^{0,\alpha}(\Omega_{\delta})$ and $x\in\Omega$ we define $\tilde{\mu}$ by $\displaystyle\tilde{\mu}(x):=\inf_{y\in\Omega_{\delta}}\\{\mu(y)+||\mu||_{C^{0,\alpha}(\Omega_{\delta})}|x-y|^{\alpha}\\}.$ (2.1) It is straightforward to check that $\tilde{\mu}(x)=\mu(x)$ on $\Omega_{\delta}$ and that $\tilde{\mu}\in C^{0,\alpha}(\Omega)$ with the same Hölder modulus as $\mu$. 
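That the extension (2.1) works is the classical McShane–Whitney argument; a sketch: writing $L=||\mu||_{C^{0,\alpha}(\Omega_{\delta})}$, for $x,x^{\prime}\in\Omega$ and any $y\in\Omega_{\delta}$, the subadditivity $|x-y|^{\alpha}\leq|x^{\prime}-y|^{\alpha}+|x-x^{\prime}|^{\alpha}$ (valid for $0<\alpha\leq 1$) gives

```latex
\mu(y)+L|x-y|^{\alpha}
  \leq \big(\mu(y)+L|x^{\prime}-y|^{\alpha}\big)+L|x-x^{\prime}|^{\alpha}.
```

Taking the infimum over $y\in\Omega_{\delta}$ yields $\tilde{\mu}(x)\leq\tilde{\mu}(x^{\prime})+L|x-x^{\prime}|^{\alpha}$, and exchanging $x$ and $x^{\prime}$ gives $|\tilde{\mu}(x)-\tilde{\mu}(x^{\prime})|\leq L|x-x^{\prime}|^{\alpha}$.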
In the sections that follow, we abuse notation slightly and define the coupling $F:\Omega\times C^{0,\alpha}(\Omega_{\delta})\to\mathbb{R}$ by $\displaystyle F(x;\mu):=F(x;\tilde{\mu}).$ ## 3 Preliminaries In this section, we recall some existence and regularity results for the HJB equation and the KFP equation. At the end we prove stability results that will be used in Section 5. ### 3.1 The Kolmogorov-Fokker-Planck equation Throughout this paper we will need to consider the KFP equation in subdomains of $\Omega$. To that end, we study the following KFP boundary value problem for a general open bounded domain $V$ and $B\in L^{\infty}(V)$. $\displaystyle\begin{cases}\Delta\mu+div(\mu B)=0\ &\text{ in }V,\\\ (D\mu+\mu B)\cdot\nu=0&\text{ on }\partial V,\\\ \int_{V}\mu=1,\,\mu>0\,\text{ in }V.\end{cases}$ (3.1) Later we will apply these results when $V=\Omega_{\delta}$ and $B=D_{p}H(Du)$. ###### Definition 3.1. Given $B\in L^{\infty}(V)$, we say that $\mu\in W^{1,2}(V)$ is a Neumann weak solution of $(\ref{mdelta})$ if $\mu$ is a positive probability measure such that, for all $\phi\in W^{1,2}(V)$, $\displaystyle\int_{V}[D\phi(x)\cdot D\mu(x)+B(x)\cdot D\phi(x)\mu(x)]\ dx=0.$ The next lemma gathers existence and uniqueness results for Neumann weak solutions of $(\ref{mdelta})$ as well as $W_{loc}^{1,r}(V)$ estimates. ###### Lemma 3.1. For $B\in L^{\infty}(V)$, $(\ref{mdelta})$ admits a unique Neumann weak solution $\mu\in W^{1,2}(V)$. In addition, $\mu\in W^{1,r}(V)$ for all $r>1$ and, for compact sets $K\subset K^{\prime}\subset V$, $||\mu||_{W^{1,r}(K)}$ is bounded uniformly by a constant that depends only on $K,K^{\prime}$ and $||B||_{L^{\infty}(K^{\prime})}$. ###### Proof. The existence, positivity, and uniqueness are proven in [6].
To obtain the local uniform $W^{1,r}$ estimate, we recall that Theorem 1.1 of [1] yields a constant $C_{1}=C_{1}(K,||B||_{L^{\infty}(K^{\prime})})>0$ such that $||\mu||_{W^{1,r}(K)}\leq C_{1}||\mu||_{L^{\infty}(K)}.$ (3.2) Then the Harnack inequality and the normalization condition $\int_{V}\mu=1$ give some $C_{2}=C_{2}(K,K^{\prime})>0$, such that $\sup_{K}\mu\leq C_{2}\inf_{K}\mu\leq C_{2}\frac{1}{|K|}.$ (3.3) Combining $(\ref{eq:r})$ and $(\ref{eq:r2})$ completes the proof. ∎ If the drift in the KFP equation blows up at the boundary, it is necessary to modify the notion of weak solution as follows. ###### Definition 3.2. Given $b\in L^{1}_{loc}(\Omega)$ and $r>1$, we say $\mu\in W^{1,r}_{loc}(\Omega)$ is a _weak solution_ of $\displaystyle\Delta\mu+div(\mu b)=0\text{ in }\Omega,$ (3.4) if $\mu$ is a non-negative function such that, for all $\phi\in C^{2}_{c}(\Omega)$, $\displaystyle\int_{\Omega}[\Delta\phi(x)-b(x)\cdot D\phi(x)]\mu(x)\ dx=0.$ Moreover, $\mu$ is a _proper weak solution_ if $\int_{\Omega}\mu=1$. ###### Remark. The class of test functions in Definition 3.2 can be extended to $C^{2}$ functions that are constant in a neighborhood of $\partial\Omega$. The following lemma shows that weak solutions of $(\ref{eq:m})$ may arise as the limit of Neumann weak solutions. The rest of the section provides conditions under which the limit is a proper weak solution. ###### Lemma 3.2. Let $m_{\delta}\in W^{1,r}(\Omega_{\delta})$ be a Neumann weak solution of $(\ref{mdelta})$ for velocity fields $b_{\delta}\in L^{\infty}(\Omega_{\delta})$. Assume that, as $\delta\to 0$, $b_{\delta}\to b$ locally uniformly. Then, along subsequences, $m_{\delta}\to m\in W^{1,r}_{loc}(\Omega)$ and $m$ is a weak solution of $\displaystyle\Delta m+div(mb)=0\text{ in }\Omega.$ ###### Proof.
Recalling that $||m_{\delta}||_{W^{1,r}(K)}$ is uniformly bounded for each compact $K\subset\Omega$, it is possible to extract a subsequence such that $m_{\delta}$ converges locally uniformly to a non-negative $W^{1,r}_{loc}(\Omega)$ function $m$. To see that $m$ satisfies the desired equation, consider a test function $\phi\in C^{2}_{c}(\Omega)$. Since $b_{\delta}$ converges uniformly to $b$ in the support of $\phi$, $\Delta\phi-b_{\delta}\cdot D\phi$ converges uniformly to $\Delta\phi-b\cdot D\phi$. As $m_{\delta}\to m$ uniformly in the support of $\phi$, letting $\delta\to 0$ in the weak formulation $\int_{\Omega_{\delta}}(\Delta\phi-b_{\delta}\cdot D\phi)m_{\delta}$ leads to the desired result. ∎ It follows from Fatou’s lemma that the limit $m$ in the above lemma is integrable on $\Omega$ and $\int_{\Omega}m\leq 1$. To ensure that $m$ is a proper weak solution it is necessary to impose additional structural conditions on the vector fields $b_{\delta}$. In [1], Bogachev and Röckner give conditions on the drifts under which the sequence of weak solutions $m_{\delta}$ is uniformly tight, which in turn yields $L^{1}(\Omega)$ convergence of $m_{\delta}$ to $m$. The version of the theorem relevant to our setting appears below. ###### Lemma 3.3 (Lemma 1.1 in [1]). Let $U_{k}$ be a sequence of nested domains and $\mu_{k}$ the corresponding sequence of Neumann weak solutions to $\Delta\mu_{k}+div(\mu_{k}b_{k})=0$ on $U_{k}$. Suppose that $b_{k}\in L^{1}_{loc}(U_{k})$ and that, for every $R>0$, there exists $\alpha_{R}>n$ such that $\displaystyle\kappa_{R}:=\sup_{k}\int_{U_{R}}|b_{k}(x)|^{\alpha_{R}}dx<\infty.$ If, in addition, there exists $V\in C^{2}(U)$ such that, for some $c_{k}\to\infty$, $U_{k}=\\{V<c_{k}\\}$, $\\{V=c_{k}\\}$ has Lebesgue measure zero, $\displaystyle\lim_{d(x,\partial U)\to 0}V(x)=\infty,$ (3.5) and $\displaystyle\lim_{d(x,\partial U)\to 0}\sup_{k}[\Delta V-DV\cdot b_{k}]=-\infty,$ (3.6) then the sequence $\mu_{k}$ is uniformly tight.
A function $V$ satisfying the above conditions is referred to as a Lyapunov function. The connection between Lyapunov functions and existence and uniqueness of weak solutions of $(\ref{eq:m})$ has been extensively studied in [1], as well as in [5]. It follows from Lemma 3.3 that, if there exists a $V$ satisfying $(\ref{v1})$, and $\displaystyle\lim_{d(x,\partial U)\to 0}\Delta V-DV\cdot b=-\infty,$ (3.7) then, setting $b_{k}=b\chi_{U_{k}}$, we obtain a sequence of measures $m_{k}$ that converge to a proper weak solution $m$ of $\displaystyle\Delta m+div(mb)=0\text{ in }\Omega.$ (3.8) It is proven in [5] that such a solution is unique. In the present setting, these results will be applied by considering $b(x)$ satisfying, for some $C>1$ and $\epsilon\in(0,C-1)$, $\displaystyle\lim_{d(x)\to 0}(b(x)\cdot Dd(x))d(x)=-C,$ (3.9) and $V(x)=d(x)^{-C+1+\epsilon}$, which satisfies $(\ref{v1})$ and $(\ref{vb2})$. This is stated precisely in the next result, the proof of which appears in the Appendix. ###### Lemma 3.4. Given $b:\Omega\to\mathbb{R}^{d}$ satisfying $(\ref{eq:asym})$ with $C>1$, there exists a unique proper weak solution $m\in W^{1,r}_{loc}(\Omega)$ of $\displaystyle\Delta m+div(mb)=0\text{ in }\Omega,$ and, for every $\epsilon\in(0,C-1)$, there exists a $\delta=\delta(C,\epsilon)$, such that $\displaystyle\int_{\Omega}d^{-C-1+\epsilon}m\leq\hat{C}=\hat{C}(C,||b||_{L^{\infty}(\Omega_{\delta})}).$ ###### Remark. The bound on $\int_{\Omega}d^{-C-1+\epsilon}m$ makes precise the intuition that $m$ “approaches 0” near the boundary $\partial\Omega$. It is proven in [7] that, for solutions $u$ of the HJB equation in $(\ref{eq:mainintro})$, $b=q|Du|^{q-2}Du$ satisfies precisely $(\ref{eq:asym})$ for $C=q/(q-1)$. This is discussed further in the next section.
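To see why $V(x)=d(x)^{-C+1+\epsilon}$ works, set $a:=C-1-\epsilon\in(0,C-1)$, so that $V=d^{-a}$ with $a>0$; near $\partial\Omega$, $|Dd|=1$, $\Delta d$ is bounded, and $(\ref{eq:asym})$ gives $b\cdot Dd\approx -C/d$. A formal computation then shows

```latex
DV=-a\,d^{-a-1}Dd,\qquad
\Delta V=a(a+1)\,d^{-a-2}|Dd|^{2}-a\,d^{-a-1}\Delta d,
\Delta V-DV\cdot b
  \approx\big(a(a+1)-aC\big)\,d^{-a-2}
  =-a\,\epsilon\,d^{-a-2}
  \;\longrightarrow\;-\infty,
```

since $a+1-C=-\epsilon$, while $V=d^{-a}\to\infty$ as $d\to 0$, so both $(\ref{v1})$ and $(\ref{vb2})$ hold.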
### 3.2 The Hamilton-Jacobi-Bellman equation In this section, we fix $q\in(1,2]$ and $g\in L^{\infty}(\Omega)$ and consider the stationary HJB equation with infinite Dirichlet conditions $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=g\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u(x)=\infty.\end{cases}$ (3.10) The estimates detailed here will be used to prove the HJB stability results in Theorem 3.8. We begin by stating the definition of solutions for concreteness. ###### Definition 3.3. For $1<q\leq 2$ and $g\in L^{\infty}(\Omega)$, $(u,\rho)\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}$ is an _explosive solution_ of $(\ref{udef})$ if $u$ is a viscosity solution in $\Omega$ and $\displaystyle\lim_{d(x)\to 0^{+}}u(x)=\infty.$ The existence and uniqueness of solutions to such HJB equations were proven in [7], along with several local estimates. These results are collected in the next two theorems. The second theorem is a straightforward consequence of the bounds established in Theorem IV.I of [7], with a slight modification to fit our ergodic framework. ###### Theorem 3.5. (Theorem VI.I in [7]) Let $1<q\leq 2$ and $g\in L^{\infty}(\Omega)$. The equation $(\ref{udef})$ admits an explosive solution $(u,\rho)\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}$ for all $1<r<\infty$. ###### Theorem 3.6. Let $1<q<2$, $g\in L^{\infty}(\Omega)$ and $(u,\rho)\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}$ be an explosive solution of $(\ref{eq:hj})$ with $u(x_{0})=0$, for some fixed $x_{0}\in\Omega$. There exist positive constants $C_{1},C_{2},C_{3}$, depending only on $\Omega$ and $||g||_{L^{\infty}(\Omega)}$, such that, 1. (i) $|\rho|\leq C_{1}$, 2. (ii) $|Du(x)|\leq C_{2}d(x)^{-1/(q-1)}$, and 3. (iii) $|u(x)|\leq C_{3}d(x)^{(q-2)/(q-1)}$. In addition, for any $K\subset K^{\prime}$ compact subsets of $\Omega$, there exists a constant $C_{4}=C_{4}(d(K,K^{\prime}),n,r)>0$ such that 1. (iv) $||u||_{W^{2,r}(K)}\leq C_{4}(||u||_{L^{\infty}(K^{\prime})}+||g||_{{L^{q}}(K^{\prime})})$.
If $q=2$, the same estimates hold but $(iii)$ must be replaced by 1. (iii)’ $|u(x)|\leq C_{3}|\log(d(x))|$. ###### Sketch of Proof of Theorem 3.6. The first result follows from the construction of $\rho$ in Theorem VI.I of [7], which shows that $\rho$ arises as the local uniform limit of $\lambda u_{\lambda}$ as $\lambda\to 0$, where $u_{\lambda}$ is the unique viscosity solution of $\displaystyle\begin{cases}-\Delta u_{\lambda}+|Du_{\lambda}|^{q}+\lambda u_{\lambda}&=g\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u_{\lambda}(x)=\infty.\end{cases}$ If $1<q<2$, for arbitrary $\epsilon>0$, $u_{\lambda}$ has as sub- and super-solutions $(C_{q}\mp\epsilon)d^{(q-2)/(q-1)}\mp C_{\epsilon}/\lambda$, where $C_{q}=(q-1)^{(q-2)/(q-1)}(2-q)^{-1}$ and $C_{\epsilon}$ is a function of $||g||_{L^{\infty}(\Omega)}$. Similarly, for $q=2$, $u_{\lambda}$ has as sub- and super-solutions $-(1\mp\epsilon)\log(d)\mp C_{\epsilon}/\lambda$, where $C_{\epsilon}$ is a function of $||g||_{L^{\infty}(\Omega)}$. Sending $\lambda\to 0$ yields the desired bound on $\rho$. The second result $(ii)$ is proven in Theorem IV.I of [7] and $(iii)$ is an immediate consequence of $(ii)$ as $u(x_{0})$ is fixed. The last interior estimate follows from a bootstrap argument. ∎ In order to study the solution of the KFP equation near the boundary, it will be necessary to know the precise asymptotics of $Du$. In [7], Lasry and Lions, and later Porretta and Véron in [11], established the following results. ###### Theorem 3.7. (Theorem I.1 and II.3 in [7]; Theorem 2.3 in [11]) Let $1<q<2$, $g\in L^{\infty}(\Omega)$ and $(u,\rho)\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}$ be a solution of $(\ref{eq:hj})$. Then, 1. (i) $\lim\limits_{d(x)\to 0}\dfrac{u(x)}{d^{(q-2)/(q-1)}}=(q-1)^{(q-2)/(q-1)}(2-q)^{-1}$, 2. and 3. (ii) $\lim\limits_{d(x)\to 0}\dfrac{Du(x)\cdot\nu(x)}{d^{-1/(q-1)}}=(q-1)^{-1/(q-1)}.$ The second result also holds for $q=2$, while the former must be replaced by 1.
(i)’ $\lim\limits_{d(x)\to 0}\dfrac{u(x)}{-\log(d(x))}=1$. ###### Remark. Lemma 7.1 in the Appendix is a slightly stronger version of this theorem, where the convergence is shown to be uniform in $||g||_{L^{\infty}(\Omega)}$. This stronger version will be necessary in Section 5. Lastly, we provide a stability result particular to the types of couplings we are studying. ###### Theorem 3.8. Assume $(F1)$ and fix $x_{0}\in\Omega$. Let $\mu_{n}$ be a sequence of probability measures on $\Omega$ which converge in $L^{1}(\Omega)$ to $\mu$, and $(u_{n},\rho_{n})\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}$ be the unique explosive solution of $\displaystyle\begin{cases}-\Delta u_{n}+|Du_{n}|^{q}+\rho_{n}=F(x;\mu_{n})\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u_{n}(x)=\infty,\end{cases}$ subject to the normalization condition $u_{n}(x_{0})=0$. Then there exists $(u,\rho)\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}$, such that $\rho_{n}\to\rho$ and $u_{n}\to u$ locally uniformly as $n\to\infty$. Moreover, $(u,\rho)$ is the unique explosive solution of $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=F(x;\mu)\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u(x)=\infty,u(x_{0})=0.\end{cases}$ (3.11) ###### Proof. It follows from $(F1)$ that $F(x;\mu_{n})$ and $F(x;\mu)$ are bounded in $L^{\infty}(\Omega)$. Theorem 3.6 yields that $\rho_{n}$ is bounded and hence converges along a subsequence to a constant $\rho$. The sequence $u_{n}$ is bounded in $W_{loc}^{2,r}(\Omega)$ so, by Arzelà-Ascoli, we can extract a subsequence converging locally uniformly to $u\in W^{2,r}_{loc}(\Omega)$ such that $Du_{n}\to Du$ locally uniformly. By pointwise convergence, $u(x_{0})=0$. That $u(x)\to\infty$ as $d(x)\to 0$ follows from the fact that the sequence of equations has a common explosive subsolution.
When $1<q<2$, one such subsolution is of the form $(C_{q}-\epsilon)d(x)^{(q-2)/(q-1)}-C_{\epsilon}$, where $C_{q}=(q-1)^{(q-2)/(q-1)}(2-q)^{-1}$, $\epsilon>0$, and $C_{\epsilon}=C_{\epsilon}(||F(x;\mu)||_{L^{\infty}(\Omega;L^{1}(\Omega))},\epsilon,C_{q})$. When $q=2$ the corresponding subsolution is $-(1-\epsilon)\log(d(x))$. Lastly, in view of $(F1)$, $F(x;\mu_{n})$ converges uniformly to $F(x;\mu)$. It follows from classical viscosity theory that $(u,\rho)$ is a viscosity solution of $(\ref{ustab})$. As the explosive solution satisfying $u(x_{0})=0$ is unique, the limit is independent of the subsequence. ∎ ### 3.3 Definition and Main Result We finish this section with the definition of a solution to the system $(\ref{eq:mainintro})$ and the main existence and uniqueness result stated in the introduction. ###### Definition 3.4. A triplet $(u,\rho,m)\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}\times W^{1,r}_{loc}(\Omega)$ is a _solution_ of $(\ref{eq:mainintro})$ if $(u,\rho)$ is an explosive solution of $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=F(x;m)\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u(x)=\infty,\end{cases}$ and $m$ is a proper weak solution of $\displaystyle\Delta m+div\left(mq|Du|^{q-2}Du\right)=0\text{ in }\Omega.$ We are now able to state the main result of the paper. ###### Theorem 3.9. Assume that $1<q\leq 2$ and either $(F1)$ or $(F2)$ holds. Then $(\ref{eq:mainintro})$ has a solution for all $r>1$. Moreover, if $(F3)$ also holds, then the solution is unique. ## 4 Fixed point for the non-local case In this section we will use Schaefer’s fixed point theorem in $\mathcal{W}_{\gamma}(\Omega)$ to prove the existence of a solution to $(\ref{eq:mainintro})$ when $(F1)$ holds. Let $2<\gamma<(2q-1)/(q-1)$.
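For the reader’s convenience, we recall the statement of Schaefer’s fixed point theorem in the form used below (a standard formulation, stated on a Banach space; in this section it is applied with the norm $||\cdot||_{\gamma,\Omega}$):

```latex
\textbf{Theorem (Schaefer).}
Let $X$ be a Banach space and let $T:X\to X$ be continuous and compact,
i.e.\ $T$ maps bounded subsets of $X$ to precompact ones. If the set
\[
  \{x\in X \;:\; x=\lambda T(x)\ \text{for some}\ 0\leq\lambda\leq 1\}
\]
is bounded, then $T$ has a fixed point.
```

These are exactly the three properties verified for the map $T=T_{2}\circ T_{1}$ constructed next.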
Define $T_{1}:\mathcal{W}_{\gamma}(\Omega)\to W^{1,r}_{loc}(\Omega)$ to be the map that sends measures $\mu$ to $q|Du|^{q-2}Du$, where $(u,\rho)$ is any explosive solution of $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=F(x;{\mu})\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u(x)=\infty.\end{cases}$ (4.1) It is immediate from Theorem 3.5 that $T_{1}$ is well defined as explosive solutions are unique up to a constant. Define $T_{2}:T_{1}(\mathcal{W}_{\gamma}(\Omega))\to\mathcal{W}_{\gamma}(\Omega)$ to be the map that sends $b$ in the image of $T_{1}$ to the proper weak solution $m$ of the associated continuity equation $\displaystyle\Delta m+div(mb)=0\text{ in }\Omega.$ (4.2) In view of Lemma 3.4 and Theorem 3.7, $(\ref{eq:fpb})$ has a unique proper weak solution $m$ for $b$ in the image of $T_{1}$. Finally define $T=T_{2}\circ T_{1}$. In the next result, we apply Schaefer’s fixed point theorem to $T$, thereby obtaining a solution of the MFG system $(\ref{eq:mainintro})$. ###### Theorem 4.1. For $F$ satisfying $(F1)$ and $2<\gamma<(2q-1)/(q-1)$, $T:\mathcal{W}_{\gamma}(\Omega)\to\mathcal{W}_{\gamma}(\Omega)$ has a fixed point. ###### Proof. Recall that to apply Schaefer’s fixed point theorem, we need to show that $T$ is continuous, $T$ maps bounded sets to precompact sets, and if $F\subset\mathcal{W}_{\gamma}(\Omega)$ is the set defined by $\displaystyle F=\\{m\in\mathcal{W}_{\gamma}(\Omega)|m=\lambda T(m)\text{ for some }0\leq\lambda\leq 1\\},$ (4.3) then $F$ is bounded. We start by showing that $T$ is continuous. Consider a sequence $\mu_{n}$ that converges to $\mu$ in $\mathcal{W}_{\gamma}(\Omega)$. To simplify the notation, define $b_{n}:=T_{1}(\mu_{n})$, $b:=T_{1}(\mu)$, and $m_{n}:=T_{2}(b_{n})$. We first show that $b_{n}\to b$ locally uniformly. We then show that the sequence $m_{n}$ converges in $\mathcal{W}_{\gamma}(\Omega)$ to a measure $m$ and prove that $m=T_{2}(b)=T(\mu)$.
Fix $x_{0}\in\Omega$ and let $(u_{n},\rho_{n})$ be the unique explosive solution of $(\ref{eq:hjmu})$ that corresponds to $F(x;{\mu}_{n})$ and satisfies $u_{n}(x_{0})=0$. Note that $T_{1}(\mu_{n})=q|Du_{n}|^{q-2}Du_{n}$ by the remark following $(\ref{eq:hjmu})$. Since $\mu_{n}$ converges to $\mu$ in $L^{1}(\Omega)$, the local uniform convergence of $T_{1}(\mu_{n})$ to $T_{1}(\mu)$ follows from Theorem 3.8. In view of the local uniform bounds on $b_{n}$, Theorem 3.1 implies that the sequence of measures $m_{n}$ is uniformly bounded in $W^{1,r}_{loc}(\Omega)$ for all $r>1$ and, hence, locally uniformly bounded in $C^{0,\alpha}(\Omega)$ for $\alpha<1-n/r$. Along subsequences, the measures $m_{n}$ converge locally uniformly to a measure $m$. Consequently, we can pass to the limit in the weak formulation of $m_{n}$ and obtain that $m$ is a non-negative measure that satisfies the KFP equation with drift $b=T_{1}(\mu)$. It remains to show that $m_{n}$ converges to $m$ in $\mathcal{W}_{\gamma}(\Omega)$. Lemma 3.4 yields that if $(2q-1)/(q-1)>\gamma^{\prime}>\gamma$, then the sequence $m_{n}$ is bounded in $\mathcal{W}_{\gamma^{\prime}}(\Omega)$. This implies that the sequence $d^{-\gamma}m_{n}$ is uniformly tight. Indeed, for some constant $C$, independent of $\delta$, $\displaystyle C$ $\displaystyle\geq\int_{\Omega}d^{-\gamma^{\prime}}m_{n}\geq\delta^{-\gamma^{\prime}+\gamma}\left(\int_{\Omega\backslash\Omega_{\delta}}d^{-\gamma}m_{n}\right)$ It then follows from the bound $\displaystyle\int_{\Omega}d^{-\gamma}|m_{n}-m|\leq\int_{\Omega_{\delta}}d^{-\gamma}|m_{n}-m|+\int_{\Omega\backslash\Omega_{\delta}}d^{-\gamma}m_{n}+\int_{\Omega\backslash\Omega_{\delta}}d^{-\gamma}m,$ that $||m_{n}-m||_{\gamma,\Omega}\to 0$ as $n\to\infty$. Since convergence in $\mathcal{W}_{\gamma}(\Omega)$ implies convergence in $L^{1}(\Omega)$, this also proves that $\int_{\Omega}m=1$.
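To spell out this last step (a sketch, with $C$ the $\mathcal{W}_{\gamma^{\prime}}(\Omega)$ bound above):

```latex
||m_{n}-m||_{\gamma,\Omega}
  \;\leq\; \delta^{-\gamma}\int_{\Omega_{\delta}}|m_{n}-m|
  \;+\; \int_{\Omega\backslash\Omega_{\delta}} d^{-\gamma}(m_{n}+m)
  \;\leq\; \delta^{-\gamma}\int_{\Omega_{\delta}}|m_{n}-m|
  \;+\; 2C\,\delta^{\gamma^{\prime}-\gamma},
```

where the bound on $\int_{\Omega\backslash\Omega_{\delta}}d^{-\gamma}m$ follows from the same tightness estimate by Fatou. Given $\varepsilon>0$, one first chooses $\delta$ with $2C\delta^{\gamma^{\prime}-\gamma}<\varepsilon/2$, which is possible since $\gamma^{\prime}>\gamma$, and then takes $n$ large, using the local uniform convergence on $\Omega_{\delta}$.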
The uniqueness of $m$ then yields the convergence of the full sequence $m_{n}=T(\mu_{n})$ to $T(\mu)$. Next we show that $T$ maps bounded sets to precompact ones. Indeed let $\mu_{n}$ be a bounded sequence in $\mathcal{W}_{\gamma}(\Omega)$, and consider $u_{n}$, $b_{n}$, and $m_{n}$ defined as before. It suffices to show that there is a convergent subsequence of $m_{n}$. The sequence $\mu_{n}$ is bounded in $L^{1}(\Omega)$ and $F(x;\mu_{n})$ is bounded in $L^{\infty}(\Omega;L^{1}(\Omega))$. Moreover, Theorem 3.6 yields that $b_{n}$ is locally uniformly bounded. Finally, Lemma 3.1 implies that $m_{n}$ is uniformly bounded in $C^{0,\alpha}_{loc}(\Omega)$. Passing to subsequences, the measures $m_{n}$ converge locally uniformly to a measure $m$. Proceeding as in the continuity argument, we find that the sequence $d^{-\gamma}m_{n}$ is uniformly tight and $||m_{n}-m||_{\gamma,\Omega}\to 0$. It remains to check that the set $F$ defined above is bounded. To see this, first observe that if $m=\lambda T(m)$, then $\int_{\Omega}m=\lambda\leq 1$. Hence $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$ is bounded by assumption $(F1)$. Next we apply Theorem 3.6 to find that $q|Du|^{q-2}Du$ is bounded on $\Omega_{\delta}$ by a constant $C(\delta,||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))})$. Lemma 3.4 then implies that $||m||_{\gamma,\Omega}$ is bounded. ∎ ## 5 Fixed point argument for the local setting In this section we will show existence of solutions to $(\ref{eq:mainintro})$ under assumption $(F2)$ by considering the system $(\ref{eq:deltap})$, introduced in Section 1, which approximates $(\ref{eq:mainintro})$. First, we will use Schaefer’s fixed point theorem to show the system $(\ref{eq:deltap})$ has a solution. Then, using the uniform tightness results of Lemma 3.3 and the local uniform bounds of Theorem 3.6, we will show that solutions of $(\ref{eq:deltap})$ converge locally uniformly to solutions of $(\ref{eq:mainintro})$. 
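Purely as a numerical illustration, outside the formal argument: the explosive boundary profile used throughout (Theorems 3.6 and 3.7) can be sanity-checked in the flat-boundary model, where $|Dd|=1$ and $\Delta d=0$, so that $u(d)=C_{q}d^{(q-2)/(q-1)}$ satisfies $-u^{\prime\prime}+|u^{\prime}|^{q}=0$ exactly, and $u(d)=-\log d$ does so for $q=2$. The following short script (hypothetical, not part of the paper) verifies this with closed-form derivatives:

```python
def residual(q, d):
    """-u''(d) + |u'(d)|^q for the explosive profile u(d) = C_q d^{(q-2)/(q-1)},
    the flat-boundary model of the HJB blow-up (1 < q < 2)."""
    beta = (q - 2.0) / (q - 1.0)                         # blow-up exponent, beta < 0
    C_q = (q - 1.0) ** ((q - 2.0) / (q - 1.0)) / (2.0 - q)
    u1 = C_q * beta * d ** (beta - 1.0)                  # u'(d)
    u2 = C_q * beta * (beta - 1.0) * d ** (beta - 2.0)   # u''(d)
    return -u2 + abs(u1) ** q

def residual_q2(d):
    """-u'' + |u'|^2 for u(d) = -log(d), the q = 2 profile (vanishes identically)."""
    u1, u2 = -1.0 / d, 1.0 / d ** 2
    return -u2 + u1 ** 2

# Both terms of -u'' + |u'|^q scale like d^{-q/(q-1)}, so check the relative residual.
for q in (1.25, 1.5, 1.75):
    for d in (1e-3, 1e-2, 1e-1, 1.0):
        assert abs(residual(q, d)) * d ** (q / (q - 1.0)) < 1e-9

assert all(abs(residual_q2(d)) * d ** 2 < 1e-12 for d in (1e-3, 1e-2, 1.0))
```

The cancellation encodes the identity $(C_{q}|\beta|)^{q}=C_{q}\beta(\beta-1)$ with $\beta=(q-2)/(q-1)$, which is exactly how the constant $C_{q}=(q-1)^{(q-2)/(q-1)}(2-q)^{-1}$ arises.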
### 5.1 The approximate problem We begin by making precise the notion of solution to the approximating system $(\ref{eq:deltap})$. We then show the existence of a solution using a fixed point theorem argument similar to that used in the non-local setting. To utilize the asymptotic behavior of the explosive solutions to HJB equations at the boundary of $\Omega$ as well as the $W_{loc}^{1,r}$ bounds of Neumann weak solutions to KFP equations on $\Omega_{\delta}$, the KFP equation of $(\ref{eq:deltap})$ is solved in the subdomain $\Omega_{\delta}$ and the HJB equation is solved in $\Omega$. ###### Definition 5.1. A triplet $(u_{\delta},\rho_{\delta},m_{\delta})\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}\times W^{1,r}(\Omega_{\delta})$ is a _solution_ of $(\ref{eq:deltap})$ if $(u_{\delta},\rho_{\delta})$ is an explosive solution of $\displaystyle\begin{cases}-\Delta u_{\delta}+|Du_{\delta}|^{q}+\rho_{\delta}=F(x;{m}_{\delta})\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u_{\delta}(x)=\infty,\end{cases}$ and $m_{\delta}$ is a Neumann weak solution of $\displaystyle\Delta m_{\delta}+div\left(m_{\delta}q|Du_{\delta}|^{q-2}Du_{\delta}\right)=0\text{ in }\Omega_{\delta}.$ When $F$ satisfies $(F2)$, existence of a solution to this system follows from Schaefer’s fixed point theorem. Indeed, let $T_{1}:C^{0,\alpha}(\Omega_{\delta})\to W^{1,r}_{loc}(\Omega)$ be the map that sends measures $\mu$ to $q|Du|^{q-2}Du$, where $(u,\rho)$ is the explosive solution of $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=F(x;{\mu})\text{ in }\Omega\\\ \lim\limits_{d(x)\to 0}u(x)=\infty,\end{cases}$ which is well defined for all $\alpha>0$ and $1<r<\infty$. It follows from Theorem 3.5 that the system has a solution $u\in W^{2,r}_{loc}(\Omega)$ for every $r>1$, and $Du$ is unique. 
Let $T_{2}:W^{1,r}_{loc}(\Omega)\to C^{0,\alpha}(\Omega_{\delta})$ be the map that sends $b$ to the Neumann weak solution $m_{\delta}$ of $\displaystyle\begin{cases}\Delta m_{\delta}+div(m_{\delta}b_{\delta})=0&\text{ in }\Omega_{\delta}\\\ (Dm_{\delta}+m_{\delta}b_{\delta})\cdot\nu=0&\text{ on }\partial\Omega_{\delta},\end{cases}$ where $b_{\delta}$ is the restriction of $b$ to the domain $\Omega_{\delta}$, which is also well defined. In view of Theorem 3.5, there is a unique Neumann weak solution $m_{\delta}$. Moreover, $m_{\delta}$ is in $W^{1,s}(\Omega_{\delta})$ for all $1<s<\infty$ and, by Sobolev embedding, is in $C^{0,\alpha}(\Omega_{\delta})$ for all $0<\alpha<1$. Composing the two, we define $T=T_{2}\circ T_{1}$. ###### Theorem 5.1. For $F$ satisfying $(F2)$ and $\alpha>0$, $T:C^{0,\alpha}(\Omega_{\delta})\to C^{0,\alpha}(\Omega_{\delta})$ has a fixed point. ###### Proof. We apply Schaefer’s fixed point theorem as in Theorem 4.1. As we will be working with an HJB equation set in $\Omega$ and a KFP equation set in $\Omega_{\delta}$, we will be exploiting that the method for extending measures from $\Omega_{\delta}$ to $\Omega$ defined in $(\ref{tilde})$ is $C^{0,\alpha}$-norm preserving. To show the continuity of $T$, we consider a sequence $\mu_{n}$ which converges to $\mu$ in $C^{0,\alpha}(\Omega_{\delta})$, and we start by showing that $T_{1}(\mu_{n})$ converges to $T_{1}(\mu)$ locally uniformly. To simplify notation, we let $b_{n}:=T_{1}(\mu_{n})$, $b:=T_{1}(\mu)$, and $m_{n}:=T_{2}(b_{n})$. Let $(u_{n},\rho_{n})$ be the unique explosive solution of $(\ref{eq:hj})$ that corresponds to $F(x;{\mu}_{n})$ and satisfies $u_{n}(x_{0})=0$, where $x_{0}\in\Omega$ is fixed.
As the sequence $\mu_{n}$ converges to $\mu$ in $C^{0,\alpha}(\Omega_{\delta})$, $\mu_{n}$ converges uniformly to $\mu$ in $\Omega_{\delta}$, and as a result $\tilde{\mu}_{n}$ converges uniformly to $\tilde{\mu}$ in $\Omega$, where $\tilde{\mu}_{n}$ and $\tilde{\mu}$ are defined as in $(\ref{tilde})$. Due to the stability properties of viscosity solutions, to prove continuity of $T_{1}$ under assumption $(F2)$, it is sufficient to show that if $x_{n}\to x$ in $\Omega$, then $f(\tilde{\mu}_{n}(x_{n}))\to f(\tilde{\mu}(x))$. However, this follows immediately from the continuity of $f$ and the equicontinuity of $\tilde{\mu}_{n}$. Next we show that $T_{2}$ is continuous. Since $||b_{n}||_{L_{loc}^{\infty}(\Omega)}$ is bounded in $n$, $m_{n}$ is uniformly bounded in $W^{1,r}(\Omega_{\delta})$ and, hence, bounded in $C^{0,\beta}(\Omega_{\delta})$ for $\alpha<\beta<1-n/r$. In view of the compact embedding of $C^{0,\beta}(\Omega_{\delta})$ in $C^{0,\alpha}(\Omega_{\delta})$, there is a subsequence of $m_{n}$ that converges in $C^{0,\alpha}(\Omega_{\delta})$ to some $m\in C^{0,\alpha}(\Omega_{\delta})$. Moreover, $Dm_{n}$ converges weakly to $Dm$. Consequently, we can pass to the limit in the weak formulation of $m_{n}$ and obtain that $m\in W^{1,r}(\Omega_{\delta})$ is a positive probability measure that satisfies the KFP equation with drift $b=T_{1}(\mu)$. The uniqueness of $m$ yields convergence of $T(\mu_{n})$ to $T(\mu)$ along the full sequence. Next, to show that $T$ is precompact, we will use the bounds in Theorem 3.6, Lemma 3.1, and the Sobolev embedding theorem. For $\mu\in C^{0,\alpha}(\Omega_{\delta})$ and $b=T_{1}(\mu)$, Theorem 3.6 yields $|b(x)|\leq Cd(x)^{-1}$, where $C$ depends on $\Omega$ and $||F(x;{\mu})||_{L^{\infty}(\Omega;L^{1}(\Omega))}$. Note that by assumption $(F2)$, $||b||_{L^{\infty}(\Omega_{\delta})}$ is bounded by a constant depending on $||f||_{{L^{\infty}}(\mathbb{R})}$.
Lemma 3.1 yields that $||T(\mu)||_{W^{1,r}(\Omega_{\delta})}$ is bounded by a constant depending on $||f||_{L^{\infty}(\mathbb{R})}$. Since $W^{1,r}(\Omega_{\delta})$ embeds compactly in $C^{0,\alpha}(\Omega_{\delta})$ for $\alpha<1-n/r$, choosing appropriately large $r$, $T$ sends bounded sets of $C^{0,\alpha}(\Omega_{\delta})$ to precompact ones. It remains to check that if $F$ is the set defined by $\displaystyle F=\\{m\in C^{0,\alpha}(\Omega_{\delta})|m=\lambda T(m)\text{ for some }0\leq\lambda\leq 1\\},$ then $F$ is bounded. This follows from Theorem 3.1 as $q|Du|^{q-2}Du$ is bounded in $L^{\infty}(\Omega_{\delta})$. ∎ ### 5.2 Limit of the approximate problem In this section we prove that in the local setting, the solutions $(u_{\delta},\rho_{\delta},m_{\delta})$ of $(\ref{eq:deltap})$ converge locally uniformly to solutions of $(\ref{eq:mainintro})$. We begin by showing that, along subsequences, $u_{\delta}$ has a local uniform limit $u$ using the Arzelà-Ascoli Theorem. We then construct a Lyapunov function and use Lemma 3.2 and Lemma 3.3 to show that the measures $m_{\delta}$ converge locally uniformly to a proper weak solution $m$ of a continuity equation. Lastly we show that $u$ is the explosive solution corresponding to $f(m(x))$. ###### Theorem 5.2. Given $\delta>0$ and $F$ satisfying $(F2)$, let $x_{0}\in\Omega$, and $(u_{\delta},\rho_{\delta},m_{\delta})\in W_{loc}^{2,r}(\Omega)\times\mathbb{R}\times W^{1,r}(\Omega_{\delta})$ be a solution of $(\ref{eq:deltap})$ satisfying $u_{\delta}(x_{0})=0$. Along subsequences, $\rho_{\delta}$ converges to a constant $\rho$ and $u_{\delta}$ converges in $C^{1}_{loc}(\Omega)$ to $u\in W_{loc}^{2,r}(\Omega)$. ###### Proof. It follows from Theorem 3.6, and the uniform in $\delta$ bounds on $||F(x;m_{\delta})||_{L^{\infty}(\Omega;L^{1}(\Omega))}$, that $\rho_{\delta}$ is bounded and the $W^{2,r}_{loc}(\Omega)$ norms of $u_{\delta}$ are bounded, both uniformly in $\delta$.
The desired convergence along subsequences follows by the Arzelà-Ascoli Theorem. ∎ For now we make no claims about the equation $u$ solves. We must first make sense of the limit of the measures $m_{\delta}$. ###### Theorem 5.3. Given $\delta>0$ and $F$ satisfying $(F2)$, let $(u_{\delta},\rho_{\delta},m_{\delta})\in W_{loc}^{2,r}(\Omega)\times\mathbb{R}\times W^{1,r}(\Omega_{\delta})$ be a solution of $(\ref{eq:deltap})$. Along subsequences, $m_{\delta}$ converges locally uniformly to $m\in W^{1,r}_{loc}(\Omega)$ which is a proper weak solution of $\displaystyle\Delta m+div(mb)=0\text{ in }\Omega,$ where $b=q|Du|^{q-2}Du$ and $u$ is the limit function from Theorem 5.2. ###### Proof. In view of Lemma 3.2 and Lemma 3.3, it suffices to construct a function $V$ satisfying $(\ref{v1})$ and $(\ref{v2})$ for $b_{\delta}=q|Du_{\delta}|^{q-2}Du_{\delta}$. Consider $V(x)=d(x)^{-C+1+\epsilon}$, where $C=q/(q-1)$ and $0<\epsilon<C-1$, which satisfies the desired growth condition $(\ref{v1})$ at the boundary. The $L^{\infty}(\Omega;L^{1}(\Omega))$-norm of the coupling term $F(x;{m}_{\delta})$ is uniformly bounded in $\delta$ by $||f||_{L^{\infty}(\mathbb{R})}$. By Lemma 7.1, $b_{\delta}(x)\cdot\nu(x)d(x)\to C$ uniformly in $\delta$ as $d(x)\to 0$. It follows that $\Delta V-b_{\delta}\cdot DV\to-\infty$ at the boundary, uniformly in $\delta$, and $(\ref{v2})$ is satisfied. The uniform tightness of the measures $m_{\delta}$ ensures that $\int_{\Omega}m=1$ and $m_{\delta}$ converges to $m$ in $L^{1}(\Omega)$. ∎ We are now ready to state the equation that $u$ solves. ###### Theorem 5.4. Let $F$ satisfy $(F2)$ and $(u_{\delta},\rho_{\delta},m_{\delta})\in W_{loc}^{2,r}(\Omega)\times\mathbb{R}\times W^{1,r}(\Omega_{\delta})$ be a solution of $(\ref{eq:deltap})$. Let $u$, $m$ and $\rho$ be as constructed in Theorems 5.2 and 5.3.
Then $(u,\rho)$ is the explosive solution of $\displaystyle\begin{cases}-\Delta u+|Du|^{q}+\rho=F(x;m)\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u(x)=\infty.\end{cases}$ ###### Proof. The proof is very similar to the stability argument in Theorem 5.1, and follows from the local uniform Hölder bounds on $m_{\delta}$. Indeed, consider a sequence $x_{\delta}\to x$ contained in a compact set $K$, and let $M=\sup_{\delta}||m_{\delta}||_{C^{0,\alpha}(K)}$. Then, $\displaystyle|m_{\delta}(x_{\delta})-m(x)|$ $\displaystyle\leq|m_{\delta}(x)-m(x)|+|m_{\delta}(x_{\delta})-m_{\delta}(x)|$ $\displaystyle\leq|m_{\delta}(x)-m(x)|+M|x_{\delta}-x|^{\alpha}.$ (5.1) By the uniform convergence of $m_{\delta}$ to $m$ on $K$, $(\ref{eq:expression})$ goes to $0$ as $\delta\to 0$. The continuity of $f$ allows us to pass to the limit in the HJB equation, obtaining that $u$ is a viscosity solution of $\displaystyle-\Delta u+|Du|^{q}+\rho=F(x;m)\text{ in }\Omega.$ The argument that $u$ blows up at the boundary is the same as in Theorem 3.8. ∎ ## 6 Uniqueness We are now ready to address the uniqueness claim in Theorem 3.9. Throughout the discussion we take $1<q<2$ and deal with the case $q=2$ at the end of the section. ###### Proof of Theorem 3.9. It only remains to show that uniqueness holds when $F$ satisfies $(F3)$. Here we follow the classical Lasry-Lions uniqueness argument. Consider two solutions $(u_{1},\rho_{1},m_{1}),(u_{2},\rho_{2},m_{2})\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}\times W^{1,r}_{loc}(\Omega)$ of $(\ref{eq:mainintro})$. Without loss of generality, we may assume that $u_{1}(x_{0})=u_{2}(x_{0})=0$ for a fixed $x_{0}\in\Omega$. 
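The regrouping below is driven by the convexity of the Hamiltonian $H(p)=|p|^{q}$ for $q>1$; for reference, the elementary inequality being used is:

```latex
|p_{2}|^{q} \;\geq\; |p_{1}|^{q} + q|p_{1}|^{q-2}p_{1}\cdot(p_{2}-p_{1})
\qquad\text{for all } p_{1},p_{2}\in\mathbb{R}^{n},
```

applied with $p_{1}=Du_{i}$, $p_{2}=Du_{j}$ and $b_{i}=q|Du_{i}|^{q-2}Du_{i}$, which makes the two convexity terms in the regrouped identity non-positive.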
Letting $\overline{u}:=u_{1}-u_{2}$, $\overline{\rho}:=\rho_{1}-\rho_{2}$, and $\overline{m}:=m_{1}-m_{2}$, it follows that $(\overline{u},\overline{\rho},\overline{m})$ satisfies the system $\displaystyle\begin{cases}-\Delta\overline{u}+|Du_{1}|^{q}-|Du_{2}|^{q}+\overline{\rho}=F(x;m_{1})-F(x;m_{2})&\text{ in }\Omega,\\\ \Delta\overline{m}+div(m_{1}b_{1}-m_{2}b_{2})=0&\text{ in }\Omega,\end{cases}$ where $b_{1}:=q|Du_{1}|^{q-2}Du_{1}$ and $b_{2}:=q|Du_{2}|^{q-2}Du_{2}$. Consider a non-negative and compactly supported function $\phi\in C^{\infty}(\Omega)$; the particular choice of $\phi$ will be made later. Multiplying the first equation by $\overline{m}\phi$ and integrating by parts, we obtain $\displaystyle\int_{\Omega}\left[(D\overline{u})\cdot D(\overline{m}\phi)+(|Du_{1}|^{q}-|Du_{2}|^{q})\overline{m}\phi+(\overline{\rho})\overline{m}\phi\right]=\int_{\Omega}(F(x;m_{1})-F(x;m_{2}))\overline{m}\phi.$ Similarly, using $\overline{u}\phi$ as a test function in the KFP equation, we obtain $\displaystyle\int_{\Omega}\left[-D(\overline{u}\phi)\cdot(D\overline{m})-(m_{1}b_{1}-m_{2}b_{2})\cdot D(\overline{u}\phi)\right]=0.$ Adding the equations and regrouping gives: $\displaystyle\int_{\Omega}(F(x;m_{1})-F(x;m_{2}))\overline{m}\phi$ $\displaystyle=\int_{\Omega}\phi\left[(|Du_{1}|^{q}-|Du_{2}|^{q})\overline{m}-(m_{1}b_{1}-m_{2}b_{2})\cdot D(\overline{u})\right]$ $\displaystyle+\int_{\Omega}(\overline{\rho})\overline{m}\phi$ $\displaystyle-\int_{\Omega}(D\phi)\cdot[(m_{1}b_{1}-m_{2}b_{2})\overline{u}]$ $\displaystyle+\int_{\Omega}(D\phi)\cdot(\overline{m}D\overline{u}-\overline{u}D\overline{m})$ $\displaystyle=\int_{\Omega}\phi m_{1}(|Du_{1}|^{q}-|Du_{2}|^{q}-b_{1}\cdot D(u_{1}-u_{2}))$ (6.1) $\displaystyle+\int_{\Omega}\phi m_{2}(|Du_{2}|^{q}-|Du_{1}|^{q}-b_{2}\cdot D(u_{2}-u_{1}))$ (6.2) $\displaystyle+\int_{\Omega}(\overline{\rho})\overline{m}\phi$ $\displaystyle-\int_{\Omega}(D\phi)\cdot[(m_{1}b_{1}-m_{2}b_{2})\overline{u}-2\overline{m}D\overline{u}].$
$\displaystyle+\int_{\Omega}\overline{m}\Delta\phi\overline{u}$ The left-hand side is positive by the monotonicity assumption $(F3)$, and $(\ref{con1})$ and $(\ref{con2})$ are negative by the convexity of the Hamiltonian. We will show that the remaining terms on the right hand side can be made arbitrarily small for the right choice of $\phi$. We are now ready to define $\phi$. Let $\psi(s):[0,\infty)\to[0,1]$ be a smooth function that is $0$ in a neighborhood of $[0,1]$, $1$ in a neighborhood of $2$, and constant for $s\geq 2$. For $\delta<\min(1,\epsilon_{0})/2$, let $\phi(x)=\psi(d(x)/\delta)$. There exists a constant $C_{1}=C_{1}(\Omega,\psi)>0$ such that, in $\Omega_{\delta}\backslash\Omega_{2\delta}$, $\displaystyle|D\phi|\leq\dfrac{1}{\delta}||\psi^{\prime}||_{L^{\infty}([0,\infty))}||Dd||_{L^{\infty}(\Omega_{\delta}\backslash\Omega_{2\delta})}\leq C_{1}\dfrac{1}{\delta}.$ (6.3) and $\displaystyle|\Delta\phi|\leq\dfrac{1}{\delta^{2}}||\psi^{\prime\prime}||_{L^{\infty}([0,\infty))}||Dd||_{L^{\infty}(\Omega_{\delta}\backslash\Omega_{2\delta})}^{2}+\dfrac{1}{\delta}||\psi^{\prime}||_{L^{\infty}([0,\infty))}||\Delta d||_{L^{\infty}(\Omega_{\delta}\backslash\Omega_{2\delta})}\leq C_{1}\dfrac{1}{\delta^{2}}.$ Together with the local bounds on $|u_{i}|$ and $|Du_{i}|$ from Theorem 3.6, it follows from $(\ref{eq:dpsi})$ that there exists a constant $C_{2}>0$, independent of $\delta$, for which $\displaystyle\left|\int_{\Omega}(D\phi)\cdot[(m_{1}b_{1}-m_{2}b_{2})\overline{u}+2\overline{m}D\overline{u}]\right|$ $\displaystyle\leq C_{2}\int_{\Omega_{\delta}\backslash\Omega_{2\delta}}\dfrac{1}{\delta}(d^{-1/(q-1)})^{q-1}(m_{1}+m_{2})d^{(q-2)/(q-1)}$ $\displaystyle+C_{2}\int_{\Omega_{\delta}\backslash\Omega_{2\delta}}\dfrac{1}{\delta}d^{-1/(q-1)}|\overline{m}|$ $\displaystyle\leq 2C_{2}\int_{\Omega_{\delta}\backslash\Omega_{2\delta}}d^{-q/(q-1)}(m_{1}+m_{2}+|\overline{m}|).$ (6.4) Similarly there exists a constant $C_{3}>0$, independent of $\delta$, for which 
$\displaystyle\left|\int_{\Omega}(\Delta\phi)(\overline{u})\overline{m}\right|$ $\displaystyle\leq C_{3}\int_{\Omega_{\delta}\backslash\Omega_{2\delta}}\dfrac{1}{\delta^{2}}(m_{1}+m_{2})d^{(q-2)/(q-1)}$ (6.5) $\displaystyle\leq 4C_{3}\int_{\Omega_{\delta}\backslash\Omega_{2\delta}}d^{-q/(q-1)}(m_{1}+m_{2}).$ (6.6) We recall from Lemma 3.4 that, for small $\epsilon$, $\int_{\Omega}d(x)^{-q/(q-1)-1+\epsilon}m(x)$ is finite. It follows that $(\ref{b2})$ and $(\ref{b3})$ can be made arbitrarily small for $\delta$ small enough. To bound the remaining term, we use that $m_{1}+m_{2}\in L^{1}(\Omega)$ and $\int_{\Omega}(m_{1}-m_{2})=0$, obtaining $\displaystyle\left|\int_{\Omega}(\overline{\rho})\overline{m}\phi\right|$ $\displaystyle\leq|\overline{\rho}|\int_{\Omega_{\delta}\backslash\Omega_{2\delta}}|m_{1}-m_{2}|+|\overline{\rho}|\left|\int_{\Omega_{2\delta}}m_{1}-m_{2}\right|$ $\displaystyle\leq|\overline{\rho}|\left(\int_{\Omega\backslash\Omega_{2\delta}}(m_{1}+m_{2})+\left|0-\int_{\Omega\backslash\Omega_{2\delta}}(m_{1}-m_{2})\right|\right)$ $\displaystyle\leq 2|\overline{\rho}|\int_{\Omega\backslash\Omega_{2\delta}}(m_{1}+m_{2}),$ which can also be made arbitrarily small for $\delta$ small enough. It follows from the monotonicity of $F(x;m)$, that $m_{1}=m_{2}$. That $u_{1}=u_{2}$ and $\rho_{1}=\rho_{2}$ follows from the uniqueness of solutions to $(\ref{eq:hj})$. ∎ ###### Remark. To prove uniqueness of solutions to $(\ref{eq:mainintro})$ when $q=2$, $(\ref{b2})$ should be replaced by an estimate of the form $\displaystyle C\int_{\Omega_{\delta}\backslash\Omega_{2\delta}}d^{-1}(-\log(d))(m_{1}+m_{2}),$ (6.7) where $C>0$ is a constant independent of $\delta$. As $d^{-1}(-\log(d))$ has growth slower than $d^{-2+\epsilon}$, the rest of the proof then proceeds as in the $1<q<2$ case. ## 7 Appendix In this section we prove Lemma 7.1, a stronger version of $(ii)$ in Theorem 3.7, and Lemma 3.4. ###### Lemma 7.1. 
Let $m\in W^{1,r}_{loc}(\Omega)$ and consider the unique solution $(u_{m},\rho_{m})\in W^{2,r}_{loc}(\Omega)\times\mathbb{R}$ of $\displaystyle\begin{cases}-\Delta u_{m}+|Du_{m}|^{q}+\rho_{m}=F(x;m)\text{ in }\Omega,\\\ \lim\limits_{d(x)\to 0}u_{m}(x)=\infty.\end{cases}$ (7.1) Let $b_{m}:=q|Du_{m}|^{q-2}Du_{m}$. Then, uniformly in $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$, $\lim_{d(x)\to 0}(b_{m}(x)\cdot\nu(x))d(x)=\dfrac{q}{q-1}.$ It will be enough to show uniform convergence of $(Du_{m}(x)\cdot\nu(x))d(x)^{1/(q-1)}$ as $d(x)\to 0$. We prove this by following the proof of Theorem 2.3 in [11] and tracking how the convergence depends on $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$. In the proof that follows, we argue first when $1<q<2$, and discuss the $q=2$ case at the end of the proof. The same arguments also hold for Hamiltonians satisfying $(\ref{eq:gen})$ locally uniformly in $p$ as $\delta\to 0$. ###### Proof of Lemma 7.1. Since $\Omega$ has a uniform $C^{2}$ boundary, by Theorem 3.7 there exists a $\delta_{0}>0$, depending on $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$, such that for every $x\in\partial\Omega$, the ball $B_{\delta_{0}}(x-\delta_{0}\nu(x))$ is contained in $\Omega$, and, for $d(x)<\delta_{0}$ , $\displaystyle\left|\dfrac{u_{m}(x)}{d(x)^{(q-2)/(q-1)}}-C_{q}\right|\leq\dfrac{C_{q}}{2},$ (7.2) where $C_{q}=(q-1)^{(q-2)/(q-1)}(2-q)^{-1}$. We begin by fixing $x_{0}\in\partial\Omega$ and a system of coordinates $(\eta_{1},\ldots,\eta_{n})$ centered at $x_{0}$ whose $\eta_{1}$-axis is the inner normal direction. Although we have fixed an $x_{0}$, any bounds and rates that follow will not depend on $x_{0}\in\partial\Omega$. 
Let $O_{\delta}=(\delta,0,\ldots,0)$ and consider the domain $\displaystyle D_{\delta}=B(O_{\delta},\delta^{1-\sigma})\cap B((\delta_{0},0,\ldots,0),\delta_{0}-\delta)\text{ , }\sigma\in(0,1/2).$ Applying the change of variable $\zeta=(\eta-O_{\delta})/\delta$, the domain $\tilde{D}_{\delta}:=\\{(\eta-O_{\delta})/\delta:\eta\in D_{\delta}\\}$ converges to the half space $\\{\zeta:\zeta_{1}>0\\}$, where $\zeta_{1}$ is the component of the vector in the $-\nu(x)$ direction. Under this change of variable we have the bounds $\displaystyle(\zeta_{1}+1)\delta\leq d(\eta)\leq(\zeta_{1}+1)\delta+O(\delta^{2-2\sigma}).$ (7.3) It is now possible to introduce, for $\delta<\delta_{0}$, the blow-up function $\displaystyle v_{\delta,m}(\zeta)=\dfrac{u_{m}(\delta\zeta+O_{\delta})}{\delta^{(q-2)/(q-1)}},$ with domain $\tilde{D}_{\delta}$, which, in view of $(\ref{d_0})$, is uniformly bounded in $\delta,||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))},$ and $\zeta$. Straightforward computations show that $v_{\delta,m}$ describes the blow-up of $u_{m}$ near the boundary. Indeed, for $\eta=\delta\zeta+O_{\delta}$, $\displaystyle Dv_{\delta,m}(\zeta)=\dfrac{Du_{m}(\delta\zeta+O_{\delta})}{\delta^{-1/(q-1)}}=\dfrac{Du_{m}(\eta)}{d(\eta)^{-1/(q-1)}}(1+\zeta_{1})^{-1/(q-1)}.$ In $\tilde{D}_{\delta}$, $v_{\delta,m}$ solves, in the viscosity sense, $\displaystyle-\Delta v_{\delta,m}+\rho_{m}\delta^{q/(q-1)}+|Dv_{\delta,m}|^{q}=F(\delta\zeta+O_{\delta};m)\delta^{q/(q-1)}.$ (7.4) Moreover, $Dv_{\delta,m}(\zeta)$ satisfies the estimate $\displaystyle|Dv_{\delta,m}(\zeta)|=\dfrac{|Du_{m}(\delta\zeta+O_{\delta})|}{\delta^{-1/(q-1)}}\leq C_{m}d(\eta)^{-1/(q-1)}\delta^{1/(q-1)}\leq C_{m},$ where $C_{m}=C_{m}(||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))})$ comes from the locally uniform bound on $Du$ from Theorem 3.6.
Consider a family $\\{v_{\delta,m}\\}$ with $\delta\to 0$ and measures $m$ such that $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$ is uniformly bounded. This is the case, for example, if $(F1)$ or $(F2)$ is satisfied and the measures $m$ are uniformly bounded in $L^{1}(\Omega)$. It follows from elliptic regularity that $\\{v_{\delta,m}\\}$ has a subsequence converging in $C^{1}_{loc}$ to $v$ defined in the half space $\\{\zeta:\zeta_{1}>0\\}$. Since $\rho_{m}\delta^{q/(q-1)}\to 0$ and $F(\delta\zeta+O_{\delta};m)\delta^{q/(q-1)}\to 0$ uniformly, $v$ is a viscosity solution of $\displaystyle-\Delta v+|Dv|^{q}=0\text{ in }\\{\zeta:\zeta_{1}>0\\}.$ (7.5) In addition, $v$ satisfies the boundary conditions $\displaystyle\lim_{\zeta_{1}\to 0^{+}}v(\zeta)=C_{q}\text{ and }\lim_{\zeta_{1}\to\infty}v(\zeta)=0.$ The boundary conditions follow from $\displaystyle v_{\delta,m}(\zeta)=\dfrac{u_{m}(\eta)}{d(\eta)^{(q-2)/(q-1)}}\dfrac{d(\eta)^{(q-2)/(q-1)}}{\delta^{(q-2)/(q-1)}},$ (7.6) and the estimates in $(\ref{eta})$. As shown in Theorem 4.1 of [11], the equation $(\ref{v})$ paired with boundary conditions $(\ref{bounds})$ has a unique positive solution in the half space, $v(\zeta)=C_{q}(1+\zeta_{1})^{(q-2)/(q-1)}$. From $(\ref{d_0})$, $v$ is indeed positive. Thus $Dv_{\delta,m}$ converges locally uniformly to $C_{q}(1+\zeta_{1})^{-1/(q-1)}(2-q)/(q-1)\nu(x_{0})$, and $(Du_{m}(x)\cdot\nu(x))d(x)^{1/(q-1)}$ converges uniformly in $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$, as $d(x)\to 0$. When $q=2$, rescaling $u_{m}$ as before does not yield an equation of the type $(\ref{vd})$. To circumvent this, we consider a function of the form $u_{m}(\eta)+\log(d(\eta))$, which is uniformly bounded in $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$ and has a limiting equation of the desired form.
For the case $q=2$, consider a coordinate system centered at $x_{0}\in\partial\Omega$, denoted by $O$, with the $\eta_{1}$-axis as the inner normal direction. By the inner sphere property, there is $\delta_{0}>0$ such that for every $x\in\partial\Omega$, the ball $B_{\delta_{0}}(x-\delta_{0}\nu(x))$ is contained in $\Omega$. Under this new coordinate system define $\displaystyle D_{\delta}=B(O,\delta^{1-\sigma})\cap B((\delta_{0},0,\ldots,0),\delta_{0})\text{ , }\sigma\in(0,1/2).$ Note that we changed the definition of $D_{\delta}$ here. Restricting $u_{m}$ to $D_{\delta}$, the function $\displaystyle v_{\delta,m}(\zeta)=u_{m}(\delta\zeta)+\log(\delta)$ is defined on the domain $\tilde{D}_{\delta}:=\\{\eta/\delta:\eta\in D_{\delta}\\}$. As before, $\tilde{D}_{\delta}$ converges to the half space $\\{\zeta:\zeta_{1}>0\\}$, where $\zeta_{1}$ is the component of the vector in the $-\nu(x)$ direction. Under this change of variable we have the bounds $\displaystyle\zeta_{1}\delta\leq d(\eta)\leq\zeta_{1}\delta+O(\delta^{2-2\sigma}).$ In $\tilde{D}_{\delta}$, $v_{\delta,m}$ satisfies, in the viscosity sense, $\displaystyle-\Delta v_{\delta,m}+\rho_{m}\delta^{2}+|Dv_{\delta,m}|^{2}=F(\delta\zeta;m)\delta^{2}.$ By Theorem II.3 of [7], $u_{m}(\delta\zeta)+\log(d(\eta))$ is bounded by a constant $C$ which is uniform in $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$. From the bounds $(\ref{eta})$ on $d(\eta)$ it follows that $v_{\delta,m}$ is bounded locally uniformly. From standard elliptic theory, $v_{\delta,m}$ is relatively compact in $C^{1}_{loc}$.
Thus, along subsequences $v_{\delta,m}$ converges in $C^{1}_{loc}$ to a function $v$, defined in the upper half space, which is a viscosity solution of $\displaystyle-\Delta v+|Dv|^{2}=0.$ Moreover, for constant $C$ as defined above, $\displaystyle v_{\delta,m}$ $\displaystyle=u_{m}(\delta\zeta)+\log(d(\eta))+\log\left(\dfrac{\delta}{d(\eta)}\right)$ $\displaystyle\geq-C+\log\left(\dfrac{1}{\zeta_{1}+O(\delta^{1-2\sigma})}\right).$ Thus we have the boundary condition $\displaystyle\lim\limits_{\zeta_{1}\to 0^{+}}v(\zeta)=\infty.$ Solutions of the system $\displaystyle\begin{cases}-\Delta v+|Dv|^{2}=0\text{ in }\\{\zeta\in\mathbb{R}^{n}\,|\,\zeta_{1}>0\\},\\\ \lim\limits_{\zeta_{1}\to 0^{+}}v(\zeta)=\infty,\end{cases}$ are of the form $-\log\zeta_{1}+\kappa$ for a constant $\kappa$, which follows from the change of variable $w=e^{-v}$. Thus $Dv_{\delta,m}$ converges locally uniformly to $(1/\zeta_{1})\nu(x_{0})$, and $(Du_{m}(x)\cdot\nu(x))d(x)$ converges uniformly in $||F(x;m)||_{L^{\infty}(\Omega;L^{1}(\Omega))}$, as $d(x)\to 0$. ∎ We now prove Lemma 3.4 from Section 3.1. ###### Lemma 3.4. Let $b:\Omega\to\mathbb{R}^{n}$ satisfy $(\ref{eq:asym})$ with $C>1$. Then there exists a unique proper weak solution $m\in W^{1,r}_{loc}(\Omega)$ of $\displaystyle\Delta m+div(mb)=0\text{ in }\Omega,$ (7.7) and, for every $\epsilon\in(0,C-1)$, there exists a $\delta=\delta(C,\epsilon)$, such that $\displaystyle\int_{\Omega}d^{-C-1+\epsilon}m\leq\hat{C}=\hat{C}(C,||b||_{L^{\infty}(\Omega_{\delta})}).$ (7.8) ###### Proof. The existence of a proper weak solution $m$ was proven in Section 3.1. To prove (7.8), we mimic the proof of Lemma 1.1 in [1]. Let $V(x)=d(x)^{-C+1+\epsilon}$ for $0<\epsilon<C-1$.
In the neighborhood $\Omega\backslash\Omega_{\epsilon_{0}}$ of $\partial\Omega$, $\displaystyle\Delta V-b\cdot DV$ $\displaystyle=(-C+1+\epsilon)d^{-C-1+\epsilon}[(-C+\epsilon)|Dd|^{2}+d\Delta d-(b\cdot Dd)d]$ $\displaystyle=(-C+1+\epsilon)d^{-C-1+\epsilon}[-C+\epsilon+d\Delta d-(b\cdot Dd)d].$ Since the last factor approaches $\epsilon$ as $d\to 0$, in a neighborhood of $\partial\Omega$, we have $\displaystyle\Delta V-b\cdot DV\lesssim-d^{-C-1+\epsilon}.$ (7.9) Let $\delta_{0}>0$ be such that, for $x\in\Omega\backslash\Omega_{\delta_{0}}$, $\displaystyle\Delta V-b\cdot DV\leq-1,$ and define $S:=\int_{\Omega_{\delta_{0}}}|\Delta V-b\cdot DV|m$. Note that since $m$ is a probability measure on $\Omega$, $S$ is bounded by the $L^{\infty}$ norm of $|\Delta V-b\cdot DV|$ on $\Omega_{\delta_{0}}$. We claim that, for all $\delta<\delta_{0}$, $\int_{\Omega_{\delta}}|\Delta V-b\cdot DV|m\leq 2S.$ (7.10) Since $S$ is independent of $\delta$, it would then follow that $\displaystyle\int_{\Omega}|\Delta V-b\cdot DV|m\leq 2S,$ and the growth of $\Delta V-b\cdot DV$ in $(\ref{eq:Vass})$ completes the proof of the lemma. To prove $(\ref{eq:V})$, fix $\delta$, $s$, and $r$, such that $0<\delta_{0}^{-C+1+\epsilon}<s<\delta^{-C+1+\epsilon}<r$. Consider a nondecreasing cut-off function $\phi\in C^{2}(\mathbb{R})$ such that $\phi^{\prime\prime}\leq 0$, $\phi(t)=t$ for $0\leq t\leq s$, and $\phi(t)=r$ for $t\geq r$. Since $\phi(V(x))$ is constant in a neighborhood of the boundary $\partial\Omega$, it follows that it is a permissible test function in $(\ref{eq:mapp})$. 
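Both explicit computations above lend themselves to a symbolic sanity check: that $-\log\zeta_{1}+\kappa$ solves $-\Delta v+|Dv|^{2}=0$ in the half space, and the pointwise identity for $\Delta V-b\cdot DV$. A minimal sketch with sympy, writing $|Dd|^{2}$, $\Delta d$ and $b\cdot Dd$ as scalar placeholders `g`, `Ld`, `Bd` (the chain rule gives $\Delta V=V''(d)|Dd|^{2}+V'(d)\Delta d$):

```python
import sympy as sp

# Half-space solution: v = -log(zeta_1) + kappa depends on zeta_1 only,
# so -Delta v + |Dv|^2 reduces to -v'' + (v')^2.
z1, kap = sp.symbols('zeta_1 kappa', positive=True)
v = -sp.log(z1) + kap
assert sp.simplify(-sp.diff(v, z1, 2) + sp.diff(v, z1)**2) == 0

# Identity for Delta V - b.DV with V = d^{-C+1+eps}.
d = sp.symbols('d', positive=True)
C, eps, g, Ld, Bd = sp.symbols('C epsilon g L_d B_d', real=True)
V = d**(-C + 1 + eps)
lhs = sp.diff(V, d, 2)*g + sp.diff(V, d)*Ld - sp.diff(V, d)*Bd
rhs = (-C + 1 + eps)*d**(-C - 1 + eps)*((-C + eps)*g + d*Ld - Bd*d)
assert sp.simplify(lhs - rhs) == 0
```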
The sign of $\phi^{\prime\prime}$ yields $\displaystyle 0$ $\displaystyle=\int_{\Omega}[\Delta(\phi(V))-b\cdot D(\phi(V))]m$ $\displaystyle=\int_{\Omega}[\phi^{\prime\prime}(V)|DV|^{2}+\phi^{\prime}(V)\Delta V-\phi^{\prime}(V)b\cdot DV]m$ $\displaystyle\leq\int_{\Omega}\phi^{\prime}(V)[\Delta V-b\cdot DV]m.$ The last integrand is zero outside $\\{V\leq r\\}$, equals $(\Delta V-b\cdot DV)m$ on $\\{V\leq s\\}$, and is non-positive on $\\{s\leq V\leq r\\}$. Consequently, $\displaystyle\int_{\\{V\leq s\\}}(\Delta V-b\cdot DV)m\geq 0.$ Letting $s\to\delta^{-C+1+\epsilon}$ yields $\displaystyle\int_{\Omega_{\delta}}(\Delta V-b\cdot DV)m\geq 0.$ Since $\Delta V-b\cdot DV\leq 0$ on $\Omega_{\delta}\backslash\Omega_{\delta_{0}}$, and $\int_{\Omega_{\delta_{0}}}|\Delta V-b\cdot DV|m=S$, it follows that $\displaystyle\int_{\Omega_{\delta}}|\Delta V-b\cdot DV|m\leq 2S.$ ∎ ## References * [1] V. Bogachev and M. Röckner. A generalization of Khasminskii’s theorem on the existence of invariant measures for locally integrable drifts (Russian). Teor. Veroyatnost. i Primenen., 45(3):417–436, 2000. * [2] P. Cardaliaguet. Notes on mean field games (from P.-L. Lions’ lectures at Collège de France), 2013. * [3] D. Gomes, E. Pimentel, and V. Voskanyan. Regularity Theory for Mean-Field Game Systems. SpringerBriefs in Mathematics. Springer International Publishing, 2016. * [4] M. Huang, R. P. Malhamé, and P. E. Caines. Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information and Systems, 6(3):221–252, 2006. * [5] W. Huang, M. Ji, Z. Liu, and Y. Yi. Steady states of Fokker–Planck equations: I. Existence. Journal of Dynamics and Differential Equations, 27:721–742, 2015. * [6] O. A. Ladyzhenskaya and N. N. Uraltseva. Linear and Quasilinear Elliptic Equations. Academic Press, New York, 1968. * [7] J.-M. Lasry and P.-L. Lions. Nonlinear elliptic equations with singular boundary conditions and stochastic control with state constraints. Math. 
Ann., 283:583–630, 1989. * [8] J.-M. Lasry and P.-L. Lions. Mean field games. Japanese Journal of Mathematics, 2:229–260, 2007. * [9] P.-L. Lions. Cours au Collège de France. http://www.college-de-france.fr. * [10] A. Porretta and M. Ricciardi. Mean field games under invariance conditions for the state space. Communications in Partial Differential Equations, 45:146–190, 2019. * [11] A. Porretta and L. Véron. Asymptotic behaviour for the gradient of large solutions to some nonlinear elliptic equations. Adv. Nonlinear Stud., 2006.
# Isolated Skyrmions in the $CP^{2}$ nonlinear $\sigma$-model with a Dzyaloshinskii-Moriya type interaction Yutaka Akagi Department of Physics, Graduate School of Science, The University of Tokyo, Bunkyo, Tokyo 113-0033, Japan Yuki Amari BLTP, JINR, Dubna 141980, Moscow Region, Russia Department of Mathematical Physics, Toyama Prefectural University, Kurokawa 5180, Imizu, Toyama, 939-0398, Japan Nobuyuki Sawado Department of Physics, Tokyo University of Science, Noda, Chiba 278-8510, Japan Yakov Shnir BLTP, JINR, Dubna 141980, Moscow Region, Russia ###### Abstract We study two-dimensional soliton solutions in the $CP^{2}$ nonlinear $\sigma$-model with a Dzyaloshinskii-Moriya type interaction. First, we derive such a model as a continuum limit of the $SU(3)$ tilted ferromagnetic Heisenberg model on a square lattice. Then, introducing an additional potential term to the derived Hamiltonian, we obtain exact soliton solutions for particular sets of parameters of the model. The vacuum of the exact solution can be interpreted as a spin nematic state. For a wider range of coupling constants, we construct numerical solutions, which possess the same type of asymptotic decay as the exact analytical solution, both decaying into a spin nematic state. ## 1 Introduction In the 1960s, Skyrme introduced a (3+1)-dimensional $O(4)$ nonlinear (NL) $\sigma$-model [1, 2], which is now well known as a prototype of a classical field theory that supports topological solitons (see Ref. [3], for example). Historically, the Skyrme model was proposed as a low-energy effective theory of atomic nuclei. In this framework, the topological charge of the field configuration is identified with the baryon number. The Skyrme model, apart from being considered a good candidate for the low-energy QCD effective theory, has attracted much attention in various applications, ranging from string theory and cosmology to condensed matter physics. 
One of the most interesting developments here is related to a planar reduction of the NL$\sigma$-model, the so-called baby Skyrme model [4, 5, 6]. This (2+1)-dimensional simplified theory reproduces many of the basic properties of the original Skyrme model. The baby Skyrme model finds a number of physical realizations in different branches of modern physics. Originally, it was proposed as a modification of the Heisenberg model [7, 4, 5]. Then, it was pointed out that Skyrmion configurations naturally arise in condensed matter systems with intrinsic and induced chirality [8, 9, 10, 11, 12]. These baby Skyrmions, often referred to as magnetic Skyrmions, were experimentally observed in non-centrosymmetric or chiral magnets [13, 14, 15]. This discovery triggered extensive research on Skyrmions in magnetic materials. This direction is a rapidly growing area both theoretically and experimentally [16]. A typical stabilizing mechanism of magnetic Skyrmions is the existence of the Dzyaloshinskii-Moriya (DM) interaction [17, 18], which stems from the spin-orbit coupling. In fact, the magnetic Skyrmions in chiral magnets can be well described by the continuum effective Hamiltonian $H=\int{\mathrm{d}}^{2}x\left[\frac{J}{2}\left(\nabla\bm{m}\right)^{2}+\kappa\leavevmode\nobreak\ \bm{m}\cdot\left(\nabla\times\bm{m}\right)-Bm^{3}+A\left\\{|\bm{m}|^{2}+\left(m^{3}\right)^{2}\right\\}\right],$ (1.1) where $\bm{m}(\bm{r})=\left(m^{1},m^{2},m^{3}\right)$ is a three-component unit magnetization vector which corresponds to the spin expectation value at position $\bm{r}$. The first term in Eq. (1.1) is the continuum limit of the Heisenberg exchange interaction, i.e., the kinetic term of the $O(3)$ NL$\sigma$-model, which is often referred to as the Dirichlet term. 
The second term there is the DM interaction term, the third one is the Zeeman coupling with an external magnetic field $B$, and the last, symmetry-breaking term $A\left\\{|\bm{m}|^{2}+\left(m^{3}\right)^{2}\right\\}$ represents the uniaxial anisotropy. It is remarkable that in the limiting case $A=\kappa^{2}/2J$, $B=0$, the Hamiltonian (1.1) can be written as the static version of the $SU(2)$ gauged $O(3)$ NL$\sigma$-model [19, 20] $\displaystyle H=\frac{J}{2}\int{\mathrm{d}}^{2}x\left({\partial}_{k}\bm{m}+\bm{A}_{k}\times\bm{m}\right)^{2},\qquad k=1,2$ (1.2) with a background gauge field $\bm{A}_{1}=(-\kappa/J,\leavevmode\nobreak\ 0,\leavevmode\nobreak\ 0),\leavevmode\nobreak\ \bm{A}_{2}=(0,-\kappa/J,\leavevmode\nobreak\ 0)$. Though the DM term is usually introduced phenomenologically, a mathematical derivation of the Hamiltonian (1.2) with arbitrary $\bm{A}_{k}$ has been developed recently [19], i.e., it has been shown that the Hamiltonian can be derived mathematically in a continuum limit of the tilted (quantum) Heisenberg model ${\cal H}=-J\sum_{\langle ij\rangle}\left({\cal W}_{i}S^{a}_{i}{\cal W}^{-1}_{i}\right)\left({\cal W}_{j}S^{a}_{j}{\cal W}^{-1}_{j}\right)\leavevmode\nobreak\ ,$ (1.3) where the sum $\langle ij\rangle$ is taken over the nearest-neighbor sites, $S^{a}_{i}$ denotes the $a$-th component of spin operators at site $i$, and ${\cal W}_{i}\in SU(2)$. It was reported that the tilted Heisenberg model can be derived from a Hubbard model at half-filling in the presence of spin-orbit coupling [21]. Therefore, the background field $\bm{A}_{k}$ can still be interpreted as an effect of the spin-orbit coupling. There are two advantages of utilizing the expression (1.2) for the theoretical study of baby Skyrmions in the presence of the so-called Lifshitz invariant, an interaction term which is linear in a derivative of an order parameter [22, 23], like the DM term. The first advantage of the form Eq. 
(1.2) is that one can study an NL$\sigma$-model with various forms of Lifshitz invariants, which are mathematically derived by the choice of the background field $\bm{A}_{k}$, although Lifshitz invariants have, in general, a phenomenological origin corresponding to the crystallographic handedness of a given sample. The second advantage of the model (1.2) is that it allows us to employ several analytical techniques developed for the gauged NL$\sigma$-model. It has recently been reported in Ref. [20] that the Hamiltonian (1.2) with a specific choice of the potential term exactly satisfies the Bogomol’nyi bound, and the corresponding Bogomol’nyi-Prasad-Sommerfield (BPS) equations have exact closed-form solutions [20, 24, 25]. Geometrically, the planar Skyrmions are very nicely described in terms of the $CP^{1}$ complex field on the compactified domain space $S^{2}$ [6]. Further, there are various generalizations of this model; for example, two-dimensional $CP^{2}$ Skyrmions have been studied in the pure $CP^{2}$ NL$\sigma$-model [26, 27, 28] and in the Faddeev-Skyrme type model [29, 30]. Remarkably, the two-dimensional $CP^{2}$ NL$\sigma$-model can be obtained as a continuum limit of the $SU(3)$ ferromagnetic (FM) Heisenberg model [31, 32] on a square lattice defined by the Hamiltonian ${\cal H}=-\frac{J}{2}\sum_{\langle ij\rangle}T^{m}_{i}T^{m}_{j},$ (1.4) where $J$ is a positive constant, and $T^{m}_{i}$ ($m=1,...,8$) stand for the $SU(3)$ spin operators of the fundamental representation at site $i$ satisfying the commutation relation $\left[T^{l}_{i},T^{m}_{i}\right]=if_{lmn}T^{n}_{i}.$ (1.5) Here, the structure constants are given by $f_{lmn}=-\frac{i}{2}\mathrm{Tr}\left(\lambda_{l}\left[\lambda_{m},\lambda_{n}\right]\right)$, where $\lambda_{m}$ are the usual Gell-Mann matrices. The $SU(3)$ FM Heisenberg model may play an important role in diverse physical systems ranging from string theory [33] to condensed matter, or quantum optical three-level systems [34]. 
It can be derived from a spin-1 bilinear-biquadratic model with a specific choice of coupling constants, the so-called FM $SU(3)$ point; see, e.g., Ref. [35]. The $SU(3)$ spin operators can be defined in terms of the $SU(2)$ spin operators $S^{a}$ ($a=1,2,3$) as $\left(\begin{array}[]{c}T^{7}\\\ T^{5}\\\ T^{2}\end{array}\right)=\left(\begin{array}[]{c}S^{1}\\\ -S^{2}\\\ S^{3}\end{array}\right),\qquad\left(\begin{array}[]{c}T^{3}\\\ T^{8}\\\ T^{1}\\\ T^{4}\\\ T^{6}\end{array}\right)=-\left(\begin{array}[]{c}\left(S^{1}\right)^{2}-\left(S^{2}\right)^{2}\\\ \frac{1}{\sqrt{3}}\left[\bm{S}\cdot\bm{S}-3\left(S^{3}\right)^{2}\right]\\\ S^{1}S^{2}+S^{2}S^{1}\\\ S^{3}S^{1}+S^{1}S^{3}\\\ S^{2}S^{3}+S^{3}S^{2}\end{array}\right).$ (1.6) Using the $SU(2)$ commutation relation $\left[S^{a}_{i},S^{b}_{i}\right]=i\varepsilon_{abc}S^{c}_{i}$, where $\varepsilon_{abc}$ denotes the anti-symmetric tensor, one can check that the operators (1.6) satisfy the $SU(3)$ commutation relation (1.5). In the present paper, we study baby Skyrmion solutions of an extended $CP^{2}$ NL$\sigma$-model composed of the $CP^{2}$ Dirichlet term, a DM type interaction term, i.e., the Lifshitz invariant, and a potential term. The Lifshitz invariant, instead of being introduced ad hoc in the continuum Hamiltonian, can be derived in a mathematically well-defined way via consideration of a continuum limit of the $SU(3)$ tilted Heisenberg model. Below we will implement this approach in our derivation of the Lifshitz invariant. In the extended $CP^{2}$ NL$\sigma$-model, we derive exact soliton solutions for specific combinations of coupling constants called the BPS point and solvable line. For a broader range of coupling constants, we construct solitons by solving the Euler-Lagrange equation numerically. The organization of this paper is as follows: In the next section, we derive an $SU(3)$ gauged $CP^{2}$ NL$\sigma$-model from the $SU(3)$ tilted Heisenberg model. Similarly to the $SU(2)$ case described by Eq. 
(1.2), the term linear in a background field can be viewed as a Lifshitz invariant term. In Sec. 3, we study exact Skyrmionic solutions of the $SU(3)$ gauged $CP^{2}$ NL$\sigma$-model in the presence of a potential term for the BPS point and solvable line using the BPS arguments. The numerical construction of baby Skyrmion solutions off the solvable line is given in Sec. 4. Our conclusions are given in Sec. 5. ## 2 Gauged $\bm{CP^{2}}$ NL$\bm{\sigma}$-model from a spin system To find Lifshitz invariant terms relevant for the $CP^{2}$ NL$\sigma$-model, we begin by deriving an $SU(3)$ gauged $CP^{2}$ NL$\sigma$-model, a generalization of Eq. (1.2), from a spin system on a square lattice. By analogy with Eq. (1.2), the Lifshitz invariant, in that case, can be introduced as a term linear in a non-dynamical background gauge potential of the gauged $CP^{2}$ model. Following the procedure to obtain a gauged NL$\sigma$-model from a spin system, as discussed in Ref. [19], we consider a generalization of the $SU(3)$ Heisenberg model defined by the Hamiltonian ${\cal H}=-\frac{J}{2}\sum_{\langle ij\rangle}T^{m}_{i}(\hat{U}_{ij})_{mn}T^{n}_{j},$ (2.1) where $\hat{U}_{ij}$ is a background field which can be recognized as a Wilson line operator along the link from the point $i$ to the point $j$, and which is an element of the $SU(3)$ group in the adjoint representation. As in the $SU(2)$ case [19], the field $\hat{U}_{ij}$ may describe effects originating from spin (nematic)-orbital coupling, complicated crystalline structure, and so on. This Hamiltonian can be viewed as the exchange interaction term for the tilted operator $\tilde{T}^{m}_{i}={\cal W}_{i}T^{m}_{i}{\cal W}^{-1}_{i}$, where ${\cal W}_{i}\in SU(3)$, because one can write ${\cal W}_{j}T^{m}_{j}{\cal W}^{-1}_{j}=(R_{j})_{mn}T_{j}^{n}$ where $R_{j}$ is an element of $SU(3)$ in the adjoint representation. Clearly, $\hat{U}_{ij}=R_{i}^{\rm T}R_{j}$, where $\rm T$ stands for the transposition. 
Let us now find the classical counterpart of the quantum Hamiltonian (2.1). It can be defined as an expectation value of Eq. (2.1) in a state possessing over-completeness, through a path integral representation of the partition function. In order to construct such a state for the spin-1 system, it is convenient to introduce the Cartesian basis $\left|{x^{1}}\right>=\frac{i}{\sqrt{2}}\left(\left|{+1}\right>-\left|{-1}\right>\right),\qquad\left|{x^{2}}\right>=\frac{1}{\sqrt{2}}\left(\left|{+1}\right>+\left|{-1}\right>\right),\qquad\left|{x^{3}}\right>=-i\left|{0}\right>,$ (2.2) where $\left|{m}\right>=\left|{S=1,m}\right>$ ($m=0$, $\pm 1$). In terms of the Cartesian basis, an arbitrary spin-1 state at a site $j$ can be expressed as a linear combination $\left|{Z}\right>_{j}=Z^{a}(\bm{r}_{j})\left|{x^{a}}\right>_{j}$, where ${\bm{r}}_{j}$ stands for the position of the site $j$, and $\bm{Z}=\left(Z^{1},Z^{2},Z^{3}\right)^{\rm T}$ is a complex vector of unit length [36, 31]. Since the state $\left|{Z}\right>_{j}$ satisfies an over-completeness relation, one can obtain the classical Hamiltonian using the state $\left|{Z}\right>=\otimes_{j}\left|{Z}\right>_{j}=\otimes_{j}Z^{a}(\bm{r}_{j})\left|{x^{a}}\right>_{j}\leavevmode\nobreak\ .$ (2.3) Since $\bm{Z}$ is normalized and has the gauge degrees of freedom corresponding to the overall phase factor multiplication, it takes values in $S^{5}/S^{1}\approx CP^{2}$. In terms of the basis (2.2), the $SU(3)$ spin operators can be defined as $T^{m}=\left(\lambda_{m}\right)_{ab}\left|{x^{a}}\right>\left<{x^{b}}\right|\qquad m=1,2,\cdots,8,$ (2.4) where $\lambda_{m}$ is the $m$-th Gell-Mann matrix. One can check that they satisfy the $SU(3)$ commutation relation (1.5). 
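The two claims around Eqs. (1.5), (1.6) and (2.4) can be verified numerically. With the phase conventions of Eq. (2.2), the spin-1 operators in the Cartesian basis have matrix elements $(S^{a})_{bc}=-i\varepsilon_{abc}$, so the operators (2.4) are the Gell-Mann matrices themselves. A sketch (numpy) that builds the $T^{m}$ from Eq. (1.6), compares them with $\lambda_{m}$, and checks the commutation relation (1.5) using $f_{lmn}=-\frac{i}{2}\mathrm{Tr}(\lambda_{l}[\lambda_{m},\lambda_{n}])$:

```python
import numpy as np

s3 = 1 / np.sqrt(3)
lam = np.array([  # Gell-Mann matrices lambda_1 ... lambda_8
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]]], dtype=complex)

comm = lambda a, b: a @ b - b @ a
acomm = lambda a, b: a @ b + b @ a

# Structure constants in the paper's normalization.
f = np.array([[[(-0.5j * np.trace(lam[l] @ comm(lam[m], lam[n]))).real
                for n in range(8)] for m in range(8)] for l in range(8)])

# Spin-1 operators in the Cartesian basis: (S^a)_{bc} = -i eps_{abc}.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1, -1
S = -1j * eps

# T^m built via Eq. (1.6) (indices shifted to 0-based).
SS = sum(S[a] @ S[a] for a in range(3))
T = np.empty_like(lam)
T[6], T[4], T[1] = S[0], -S[1], S[2]             # T^7, T^5, T^2
T[2] = -(S[0] @ S[0] - S[1] @ S[1])              # T^3
T[7] = -(SS - 3 * S[2] @ S[2]) / np.sqrt(3)      # T^8
T[0] = -acomm(S[0], S[1])                        # T^1
T[3] = -acomm(S[2], S[0])                        # T^4
T[5] = -acomm(S[1], S[2])                        # T^6

assert np.allclose(T, lam)                       # Eq. (2.4) in this basis
for l in range(8):                               # commutation relation (1.5)
    for m in range(8):
        assert np.allclose(comm(T[l], T[m]),
                           1j * np.einsum('n,nab->ab', f[l, m], T))
```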
The expectation values of the $SU(3)$ operators in the state (2.3) are given by $\langle{T^{m}_{j}}\rangle\equiv n^{m}(\bm{r}_{j})=\left(\lambda_{m}\right)_{ab}\bar{Z}^{a}(\bm{r}_{j})Z^{b}(\bm{r}_{j}),$ (2.5) where $\bar{Z}^{a}$ denotes the complex conjugation of $Z^{a}$. In the context of QCD, the field $n^{m}$ is usually termed a color (direction) field [37]. The color field satisfies the constraints $n^{m}n^{m}=\frac{4}{3},\qquad n^{m}=\frac{3}{2}d_{mpq}n^{p}n^{q}\leavevmode\nobreak\ ,$ (2.6) where $d_{mpq}=\frac{1}{4}\mathrm{Tr}\left(\lambda_{m}\left\\{\lambda_{p},\lambda_{q}\right\\}\right)$. Consequently, the number of degrees of freedom of the color field reduces to four. Note that, combining the constraints (2.6), one can get the Casimir identity $d_{mpq}n^{m}n^{p}n^{q}=8/9$. In terms of the color field, the classical Hamiltonian is given by $\displaystyle H$ $\displaystyle\equiv\left<{Z}\right|{\cal H}\left|{Z}\right>=-\frac{J}{2}\sum_{\langle ij\rangle}n^{l}(\bm{r}_{i})(\hat{U}_{ij})_{lm}n^{m}(\bm{r}_{j}).$ (2.7) Let us write the position of a site $j$ next to a site $i$ as $\bm{r}_{j}=\bm{r}_{i}+a\epsilon\bm{e}_{k}$ where $\bm{e}_{k}$ is the unit vector in the $k$-th direction, $\epsilon=\pm 1$, and $a$ stands for the lattice constant. For $a\ll 1$, the field $\hat{U}_{ij}$ can be approximated by the exponential expansion $\hat{U}_{ij}\approx e^{ia\epsilon A^{m}_{k}(\bm{r}_{i})\hat{l}_{m}}={\mathbb{1}}+ia\epsilon A_{k}^{m}(\bm{r}_{i})\hat{l}_{m}-\frac{a^{2}}{2}A_{k}^{m}(\bm{r}_{i})A_{k}^{n}(\bm{r}_{i})\hat{l}_{m}\hat{l}_{n}+{\cal O}(a^{3}),$ (2.8) where ${\mathbb{1}}$ is the unit matrix and $\hat{l}_{m}$ are the generators of $SU(3)$ in the adjoint representation, i.e., $(\hat{l}_{m})_{pq}=if_{mpq}$. 
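The constraints (2.6) and the Casimir identity hold for any normalized $\bm{Z}$, and can be spot-checked numerically; a small sketch, building the symmetric constants directly from their definition $d_{mpq}=\frac{1}{4}\mathrm{Tr}(\lambda_{m}\\{\lambda_{p},\lambda_{q}\\})$:

```python
import numpy as np

s3 = 1 / np.sqrt(3)
lam = np.array([  # Gell-Mann matrices
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]]], dtype=complex)

# d_{mpq} = (1/4) Tr(lambda_m {lambda_p, lambda_q})
d = 0.25 * (np.einsum('mab,pbc,qca->mpq', lam, lam, lam)
            + np.einsum('mab,qbc,pca->mpq', lam, lam, lam)).real

rng = np.random.default_rng(0)
Z = rng.normal(size=3) + 1j * rng.normal(size=3)
Z /= np.linalg.norm(Z)
n = np.einsum('a,mab,b->m', Z.conj(), lam, Z).real   # color field, Eq. (2.5)

assert np.isclose(n @ n, 4 / 3)                                  # constraint 1
assert np.allclose(n, 1.5 * np.einsum('mpq,p,q->m', d, n, n))    # constraint 2
assert np.isclose(np.einsum('mpq,m,p,q->', d, n, n, n), 8 / 9)   # Casimir
```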
In addition, since the model (2.1) is ferromagnetic, it is natural to assume that nearest-neighbor spins are oriented in almost the same direction, which allows us to use the Taylor expansion $n^{m}(\bm{r}_{j})=n^{m}(\bm{r}_{i})+a\epsilon{\partial}_{k}n^{m}(\bm{r}_{i})+{\cal O}(a^{2}).$ (2.9) Replacing the sum over the lattice sites in Eq. (2.7) by the integral $\displaystyle{a^{-2}\int{\mathrm{d}}^{2}x}$, we obtain, up to a constant term, a continuum Hamiltonian of the form $\displaystyle H=\frac{J}{8}\int{\mathrm{d}}^{2}x\left[\mathrm{Tr}\left({\partial}_{k}\mathfrak{n}{\partial}_{k}\mathfrak{n}\right)-2i\mathrm{Tr}\left(A_{k}\left[\mathfrak{n},{\partial}_{k}\mathfrak{n}\right]\right)-\mathrm{Tr}\left(\left[A_{k},\mathfrak{n}\right]^{2}\right)\right],$ (2.10) where $A_{k}=A^{m}_{k}\lambda_{m}$ and $\mathfrak{n}=n^{m}\lambda_{m}$. Similarly to its $SU(2)$ counterpart in Eq. (1.2), this Hamiltonian can also be written as the static energy of an $SU(3)$ gauged $CP^{2}$ NL$\sigma$-model $H=\frac{J}{8}\int{\mathrm{d}}^{2}x\mathrm{Tr}\left(D_{k}\mathfrak{n}\leavevmode\nobreak\ D_{k}\mathfrak{n}\right),$ (2.11) where $D_{k}\mathfrak{n}={\partial}_{k}\mathfrak{n}-i\left[A_{k},\mathfrak{n}\right]$ is the $SU(3)$ covariant derivative. Since the Hamiltonian is given by the $SU(3)$ covariant derivative, Eq. (2.11) is invariant under the $SU(3)$ gauge transformation $\mathfrak{n}\to g\mathfrak{n}g^{-1},\qquad A_{k}\to gA_{k}g^{-1}+ig{\partial}_{k}g^{-1},$ (2.12) where $g\in SU(3)$. Note that, however, since the Hamiltonian (2.11) does not include kinetic terms for the gauge field, such as a Yang-Mills or a Chern-Simons term, the gauge potential is just a background field, not a dynamical one. We suppose that the gauge field is fixed beforehand by the structure of a sample, and its value is given by hand, as in the $SU(2)$ case. The gauge fixing allows us to recognize the second term in Eq. (2.10) as a Lifshitz invariant term. 
We would like to emphasize that we do not deal with Eq. (2.11) as a gauge theory. Rather, we deem it the $CP^{2}$ NL$\sigma$-model with a Lifshitz invariant, and show the existence of the exact and the numerical solutions. For the baby Skyrmion solutions we shall obtain, the color field $\mathfrak{n}$ approaches a constant value $\mathfrak{n}_{\infty}$ at spatial infinity so that the physical space $\mathbb{R}^{2}$ can be topologically compactified to $S^{2}$. Therefore, they are characterized by the topological degree of the map $\mathfrak{n}:\mathbb{R}^{2}\sim S^{2}\mapsto CP^{2}$ given by $Q=-\frac{i}{32\pi}\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \varepsilon_{jk}\mathrm{Tr}\left(\mathfrak{n}\left[{\partial}_{j}\mathfrak{n},{\partial}_{k}\mathfrak{n}\right]\right).$ (2.13) Combined with the assumption that the gauge is fixed, it is reasonable to identify this quantity (2.13) with the topological charge in our model. (If one extends the model (2.11) with a dynamical gauge field, the topological charge is defined by the $SU(3)$ gauge-invariant quantity which is directly obtained by replacing the partial derivative in Eq. (2.13) with the covariant derivative.) ## 3 Exact solutions of the $\bm{SU(3)}$ gauged $\bm{CP^{2}}$ NL$\bm{\sigma}$-model In this section, we derive exact solutions of the model with the Hamiltonian (2.11) supplemented by a potential term. We first remark on the validity of the variational problem. As discussed in Refs. [20, 25] for the $SU(2)$ case, a surface term, which appears in the process of variation, cannot be ignored if the physical space is non-compact and the gauge potential $A_{k}$ does not vanish at spatial infinity, as for the DM term. This problem can be cured by introducing an appropriate boundary term, like [20] $H_{\rm Boundary}=\mp 4\rho\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \varepsilon_{jk}{\partial}_{j}\mathrm{Tr}(\mathfrak{n}A_{k})\,,$ (3.1) where $\rho=J/8$. 
Here the gauge potential $A_{k}$ satisfies $\left[\mathfrak{n}_{\infty},A_{j}\right]\pm\frac{i}{2}\varepsilon_{jk}\left[\mathfrak{n}_{\infty},\left[\mathfrak{n}_{\infty},A_{k}\right]\right]=0,$ (3.2) where $\mathfrak{n}_{\infty}$ is the asymptotic value of $\mathfrak{n}$ at spatial infinity. Note that Eq. (3.2) corresponds to the asymptotic form of the BPS equation, which we shall discuss in the next subsection. Hence, all field configurations we consider in this paper satisfy this equation automatically. Since (3.1) is a surface term, it does not contribute to the Euler-Lagrange equation, i.e., the classical Heisenberg equation. Note that the solutions derived in the following sections satisfy Derrick’s scaling relation with the boundary term, which is obtained by keeping the background field $A_{k}$ intact under the scaling, i.e., $E_{1}+2E_{0}=0$, where $E_{1}$ denotes the energy contribution from the first-derivative terms, including the boundary term (3.1), and $E_{0}$ that from the terms with no derivatives. ### 3.1 BPS solutions Recently, it has been proved that the $SU(2)$ gauged $CP^{1}$ NL$\sigma$-model (1.2) possesses BPS solutions in the presence of a particular potential term [20, 24]. Here, we show that BPS solutions also exist in the $SU(3)$ gauged $CP^{2}$ model with a special choice of the potential term, which is given by $H_{\rm pot}=\pm 4\rho\int{\mathrm{d}}^{2}x\mathrm{Tr}\left(\mathfrak{n}F_{12}\right),$ (3.3) where $F_{jk}={\partial}_{j}A_{k}-{\partial}_{k}A_{j}-i\left[A_{j},A_{k}\right]$. As we shall see in the next subsection, the potential term can possess a natural physical interpretation for some background gauge field. 
It follows that the Hamiltonian we study here reads $H=\rho\int{\mathrm{d}}^{2}x\mathrm{Tr}\left(D_{k}\mathfrak{n}\leavevmode\nobreak\ D_{k}\mathfrak{n}\right)\pm 4\rho\int{\mathrm{d}}^{2}x\mathrm{Tr}\left(\mathfrak{n}F_{12}\right)\mp 4\rho\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \varepsilon_{jk}{\partial}_{j}\mathrm{Tr}(\mathfrak{n}A_{k}),$ (3.4) where the double-sign corresponds to that of Eq. (3.1). First, let us show that the lower energy bound of Eq. (3.4) is given by the topological charge (2.13). The first term in Eq. (3.4) can be written as $\displaystyle\rho\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \mathrm{Tr}\left(D_{k}\mathfrak{n}\leavevmode\nobreak\ D_{k}\mathfrak{n}\right)$ $\displaystyle=\frac{\rho}{2}\int{\mathrm{d}}^{2}x\left[\mathrm{Tr}\left(D_{k}\mathfrak{n}\leavevmode\nobreak\ D_{k}\mathfrak{n}\right)+\left(\frac{i}{2}\right)^{2}\mathrm{Tr}\left(\left[\mathfrak{n},D_{k}\mathfrak{n}\right]^{2}\right)\right]$ $\displaystyle=\frac{\rho}{2}\int{\mathrm{d}}^{2}x\mathrm{Tr}\left(D_{j}\mathfrak{n}\pm\frac{i}{2}\varepsilon_{jk}\left[\mathfrak{n},D_{k}\mathfrak{n}\right]\right)^{2}\pm\frac{i\rho}{2}\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \varepsilon_{jk}\mathrm{Tr}\left(\mathfrak{n}\left[D_{j}\mathfrak{n},D_{k}\mathfrak{n}\right]\right)$ $\displaystyle\geq\pm\frac{i\rho}{2}\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \varepsilon_{jk}\mathrm{Tr}\left(\mathfrak{n}\left[D_{j}\mathfrak{n},D_{k}\mathfrak{n}\right]\right).$ (3.5) It follows that the equality is satisfied if $D_{j}\mathfrak{n}\pm\frac{i}{2}\varepsilon_{jk}\left[\mathfrak{n},D_{k}\mathfrak{n}\right]=0,$ (3.6) which reduces to Eq. (3.2) at the spatial infinity. 
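The completion in the first line of (3.5) rests on the pointwise identity $\mathrm{Tr}(D_{k}\mathfrak{n}\,D_{k}\mathfrak{n})=-\frac{1}{4}\mathrm{Tr}([\mathfrak{n},D_{k}\mathfrak{n}]^{2})$, which holds because $D_{k}\mathfrak{n}$ is tangent to the orbit $\mathfrak{n}=-\frac{2}{\sqrt{3}}U\lambda_{8}U^{\dagger}$. A numerical spot-check (numpy), with a random $U$ and a generic tangent direction $X=i[H,\mathfrak{n}]$ ($H$ Hermitian) standing in for $D_{k}\mathfrak{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)                        # random unitary matrix
lam8 = np.diag([1, 1, -2]) / np.sqrt(3)
n = -2 / np.sqrt(3) * U @ lam8 @ U.conj().T   # a point on the CP^2 orbit

H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = H + H.conj().T                            # Hermitian generator
X = 1j * (H @ n - n @ H)                      # generic tangent vector at n

C = n @ X - X @ n                             # [n, X]
assert np.isclose(np.trace(X @ X).real, -0.25 * np.trace(C @ C).real)
```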
Therefore, one obtains the lower bound of the form $\displaystyle H$ $\displaystyle\geq\pm\frac{\rho}{2}\int{\mathrm{d}}^{2}x\left[i\varepsilon_{jk}\mathrm{Tr}\left(\mathfrak{n}\left[D_{j}\mathfrak{n},D_{k}\mathfrak{n}\right]\right)+8\mathrm{Tr}\left(\mathfrak{n}F_{12}\right)-8\varepsilon_{jk}{\partial}_{j}\mathrm{Tr}\left(\mathfrak{n}A_{k}\right)\right]$ $\displaystyle=\pm\frac{i\rho}{2}\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \varepsilon_{jk}\mathrm{Tr}\left(\mathfrak{n}\left[{\partial}_{j}\mathfrak{n},{\partial}_{k}\mathfrak{n}\right]\right)$ $\displaystyle=\mp 16\pi\rho\leavevmode\nobreak\ Q,$ (3.7) where the corresponding BPS equation is given by Eq. (3.6). Note that, unlike the energy bound of the $CP^{N}$ self-dual solutions [7, 27], the energy bound (3.7) can be negative, and it is not proportional to the absolute value of the topological charge. As is often the case in two-dimensional BPS equations [7, 20], solutions can be best described in terms of the complex coordinates $z_{\pm}=x^{1}\pm ix^{2}$. Further, we make use of the associated differential operator and background field defined as ${\partial}_{\pm}=\frac{1}{2}\left({\partial}_{1}\mp i{\partial}_{2}\right)$ and $A_{\pm}=\frac{1}{2}(A_{1}\mp iA_{2})$. Then, the BPS equation (3.6) can be written as $D_{\pm}\mathfrak{n}-\frac{1}{2}\left[\mathfrak{n},D_{\pm}\mathfrak{n}\right]=0.$ (3.8) Similar to the $SU(2)$ case [20], Eq. (3.8) with a plus sign can be solved if the background field has the form $A_{+}=ig^{-1}{\partial}_{+}g,$ (3.9) where $g\in SL(3,\mathbb{C})$. Note that Eq. (3.9) is not necessarily a pure gauge. Similarly, Eq. (3.8) with the minus sign on the right-hand side can be solved if $A_{-}=ig^{-1}{\partial}_{-}g$. 
For the background field (3.9), one finds that the BPS equation (3.8) is equivalent to ${\partial}_{+}\tilde{\mathfrak{n}}-\frac{1}{2}\left[\tilde{\mathfrak{n}},{\partial}_{+}\tilde{\mathfrak{n}}\right]=0,\leavevmode\nobreak\ \leavevmode\nobreak\ \tilde{\mathfrak{n}}=g\mathfrak{n}g^{-1},$ (3.10) because, under the $SL(3,\mathbb{C})$ gauge transformation, the fields are changed as $\mathfrak{n}\to\tilde{\mathfrak{n}}=g\mathfrak{n}g^{-1}$ and $A_{+}\to\tilde{A}_{+}=gA_{+}g^{-1}+ig{\partial}_{+}g^{-1}=0$. In the following, we only consider Eq. (3.9) to simplify our discussion. In order to solve the equation (3.10), we introduce a tractable parameterization of the color field $\mathfrak{n}=-\frac{2}{\sqrt{3}}U\lambda_{8}U^{\dagger},$ (3.11) with $U=\left(\bm{Y}_{1},\bm{Y}_{2},\bm{Z}\right)\in SU(3)$, where $\bm{Z}$ is the continuum counterpart of the vector $\bm{Z}$ in Eq. (2.3) and $\bm{Y}_{1},\bm{Y}_{2}$ are vectors forming, together with $\bm{Z}$, an orthonormal basis for $\mathbb{C}^{3}$. Up to the gauge degrees of freedom, the components $\bm{Y}_{i}$ can be written as $\bm{Y}_{1}=\frac{\left(-\bar{Z}^{3},0,\bar{Z}^{1}\right)^{\rm T}}{\sqrt{1-|Z^{2}|^{2}}},\qquad\bm{Y}_{2}=\frac{\left(-\bar{Z}^{2}Z^{1},1-|Z^{2}|^{2},-\bar{Z}^{2}Z^{3}\right)^{\rm T}}{\sqrt{1-|Z^{2}|^{2}}}.$ (3.12) Therefore, the vector $\bm{Z}$ fully defines the color field $\mathfrak{n}$. Accordingly, we can write $\tilde{\mathfrak{n}}=-\frac{2}{\sqrt{3}}W\lambda_{8}W^{-1},$ (3.13) with $W=gU=\left(\bm{W}_{1},\bm{W}_{2},\bm{W}_{3}\right)\in SL(3,\mathbb{C})$. It follows that the field $\bm{Z}$, which is the fundamental field of the model, is given by $\bm{Z}=g^{-1}\bm{W}_{3}$. Substituting the field (3.13) into the equation (3.10), one finds that Eq. (3.10) reduces to the coupled equations $\left\\{\begin{aligned} \bm{W}_{1}^{-1}{\partial}_{+}\bm{W}_{3}=0\\\ \bm{W}_{2}^{-1}{\partial}_{+}\bm{W}_{3}=0\end{aligned}\right.,$ (3.14) where $\bm{W}_{l}^{-1}=\bm{Y}_{l}^{\dagger}g^{-1}$ $(l=1,2)$. 
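The parameterization (3.11)-(3.12) can be checked directly: for a random unit $\bm{Z}$, the matrix $U=(\bm{Y}_{1},\bm{Y}_{2},\bm{Z})$ is unitary, and $-\frac{2}{\sqrt{3}}U\lambda_{8}U^{\dagger}$ reproduces the color field built from Eq. (2.5), which (by the Fierz completeness of the $\lambda_{m}$) equals $2\bm{Z}\bm{Z}^{\dagger}-\frac{2}{3}\mathbb{1}$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=3) + 1j * rng.normal(size=3)
Z /= np.linalg.norm(Z)

r = np.sqrt(1 - abs(Z[1])**2)
Y1 = np.array([-Z[2].conj(), 0, Z[0].conj()]) / r              # Eq. (3.12)
Y2 = np.array([-Z[1].conj() * Z[0], 1 - abs(Z[1])**2,
               -Z[1].conj() * Z[2]]) / r
U = np.column_stack([Y1, Y2, Z])
assert np.allclose(U.conj().T @ U, np.eye(3))                  # U is unitary

lam8 = np.diag([1, 1, -2]) / np.sqrt(3)
n_param = -2 / np.sqrt(3) * U @ lam8 @ U.conj().T              # Eq. (3.11)
n_color = 2 * np.outer(Z, Z.conj()) - (2 / 3) * np.eye(3)      # n^m lambda_m
assert np.allclose(n_param, n_color)
```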
Since the three vectors $\\{\bm{Y}_{1},\bm{Y}_{2},\bm{Z}\\}$ form an orthonormal basis, Eq. (3.14) implies ${\partial}_{+}\bm{W}_{3}=\beta\bm{W}_{3}$, where the function $\beta$ is given by $\beta=\beta\bm{W}^{-1}_{3}\bm{W}_{3}=\bm{W}^{-1}_{3}{\partial}_{+}\bm{W}_{3}.$ Therefore, the equation (3.10) is solved by any configuration satisfying ${\cal D}_{+}\bm{W}_{3}=0,$ (3.15) where ${\cal D}_{+}{\bm{\Phi}}={\partial}_{+}{\bm{\Phi}}-({\bm{\Phi}}^{-1}{\partial}_{+}{\bm{\Phi}}){\bm{\Phi}}$ for arbitrary non-zero vector $\bm{\Phi}$. Moreover, we write $\bm{W}_{3}=\sqrt{|\bm{W}_{3}|^{2}}\leavevmode\nobreak\ \bm{w},$ (3.16) where $\bm{w}$ is a three-component unit vector, i.e., $|\bm{w}|^{2}=\bm{w}^{\dagger}\bm{w}=1$. Then, Eq. (3.15) can be reduced to ${\cal D}_{+}\bm{w}\equiv{\partial}_{+}\bm{w}-\left(\bm{w}^{\dagger}{\partial}_{+}\bm{w}\right)\bm{w}=0,$ (3.17) which is precisely the BPS equation of the standard $CP^{2}$ NL$\sigma$-model. Thus, a general solution of Eq. (3.15), up to the gauge degrees of freedom, is given by $\bm{w}=\frac{{\bm{P}}}{|{\bm{P}}|},\qquad{\bm{P}}=\left(P_{1}(z_{-}),P_{2}(z_{-}),P_{3}(z_{-})\right)^{\rm T},$ (3.18) where the components of ${\bm{P}}$ have no common overall factor, and each $P_{a}$ is a polynomial in $z_{-}$. Therefore, we finally obtain the solution for the $\bm{Z}$ field $\bm{Z}=g^{-1}\bm{W}_{3}=\chi g^{-1}\bm{w}=\chi g^{-1}{\bm{P}},$ (3.19) where $\chi$ is a normalization factor. ### 3.2 Properties of the BPS solutions As the BPS bound (3.7) indicates, the lowest energy solution among Eq. (3.19) with a given background function $g$ possesses the highest topological charge. By explicitly calculating the topological charge, we discuss the conditions for the lowest energy solutions. 
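That any $\bm{w}={\bm{P}}(z_{-})/|{\bm{P}}|$ solves Eq. (3.17) can be confirmed with finite differences; a sketch using an arbitrary (hypothetical) polynomial triple ${\bm{P}}=(z_{-},1,2)^{\rm T}$, chosen to have no zeros on the patch:

```python
import numpy as np

xs = np.linspace(-1, 1, 201)
h = xs[1] - xs[0]
x, y = np.meshgrid(xs, xs, indexing='ij')
zm = x - 1j * y                                    # z_- = x^1 - i x^2

P = np.stack([zm, np.ones_like(zm), 2 * np.ones_like(zm)])
w = P / np.linalg.norm(P, axis=0)                  # unit vector field, Eq. (3.18)

dw1 = np.gradient(w, h, axis=1)                    # partial_1
dw2 = np.gradient(w, h, axis=2)                    # partial_2
dpw = 0.5 * (dw1 - 1j * dw2)                       # partial_+
proj = np.einsum('cij,cij->ij', w.conj(), dpw)     # w^dagger partial_+ w
Dpw = dpw - proj[None] * w                         # left side of Eq. (3.17)

assert np.abs(Dpw[:, 2:-2, 2:-2]).max() < 1e-3     # vanishes up to O(h^2)
```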
The topological charge (2.13) can be written in terms of $\bm{Z}$ as $Q=-\frac{i}{2\pi}\int{\mathrm{d}}^{2}x\leavevmode\nobreak\ \varepsilon^{ij}\left({\cal D}_{i}\bm{Z}\right)^{\dagger}{\cal D}_{j}\bm{Z}.$ (3.20) We employ the constant background gauge field $A_{+}$ for simplicity. Then, the matrix $g$ in Eq. (3.9) becomes $g=\exp\left(-iA_{+}z_{+}\right)\,,$ (3.21) so that the components of $g^{-1}$ are given by power series in $z_{+}$. It allows us to write Eq. (3.20) as a line integral along the circle at spatial infinity $Q=\frac{1}{2\pi}\int_{S^{1}_{\infty}}C,$ (3.22) with $C=-i\bm{Z}^{\dagger}{\mathrm{d}}\bm{Z}$ [27, 38], since the one-form $C$ becomes globally well-defined. To evaluate the integral in Eq. (3.22), we write explicitly $\bm{Z}=\frac{\chi}{\sqrt{|P_{1}|^{2}+|P_{2}|^{2}+|P_{3}|^{2}}}\sum_{a}\left(\begin{array}[]{c}g^{-1}_{1a}(z_{+})P_{a}(z_{-})\\\ g^{-1}_{2a}(z_{+})P_{a}(z_{-})\\\ g^{-1}_{3a}(z_{+})P_{a}(z_{-})\end{array}\right),$ (3.23) where $g^{-1}_{ab}$ is the $(a,b)$ component of the inverse matrix $g^{-1}$. Let $N_{a}$ ($K_{ab}$) be the highest power in $P_{a}$ ($g^{-1}_{ab}$). Note that though $g^{-1}_{ab}$ are formally represented as power series in $z_{+}$, the integers $K_{ba}$ are not always infinite; especially, if a positive integer power of $A_{+}$ is zero, all of $K_{ba}$ become finite because $g^{-1}$ reduces to a polynomial of finite degree in $z_{+}$. Using the plane polar coordinates $\\{r,\theta\\}$, one can write $g^{-1}_{ba}(z_{+})P_{a}(z_{-})\sim r^{N_{a}+K_{ba}}\exp[-i(N_{a}-K_{ba})\theta]$ at the spatial boundary and find that only the components of the highest power in $r$ contribute to the integral (3.22). Since we are interested in constructing topological solitons, we consider the case when the physical space $\mathbb{R}^{2}$ can be compactified to the sphere $S^{2}$, i.e., the field $\bm{Z}$ takes some fixed value on the spatial boundary. 
Such a compactification is possible if there is a unique pair $\\{N_{a},K_{ba}\\}$ giving the largest sum $N_{a}+K_{ba}$, or if all pairs $\\{N_{a},K_{ba}\\}$ sharing the largest sum have the same value of the difference $N_{a}-K_{ba}$. For such configurations, the topological charge is given by $Q=-N_{a}+K_{ba},$ (3.24) where the combination $\\{N_{a},K_{ba}\\}$ yields the largest sum among any pairs $\\{N_{c},K_{dc}\\}$. Equation (3.24) indicates that the highest topological charge configuration is obtained by choosing $N_{a}=0$ for the particular value of $a$ that gives the largest $K_{ba}$. Figure 1: Topological charge density of the axially symmetric solution (3.28) with $\kappa=1$. We now look for the lowest energy solutions for an explicit background field. As a particular example, let us consider $A_{1}=\kappa\left(\lambda_{1}+\lambda_{4}+\lambda_{5}\right),\qquad A_{2}=\kappa\left(\lambda_{2}+\lambda_{4}-\lambda_{5}\right),$ (3.25) where $\kappa$ is a constant. Clearly, this choice yields the potential term $V=4\mathrm{Tr}\left(\mathfrak{n}F_{12}\right)=-16\sqrt{3}\kappa^{2}n^{8}=16\kappa^{2}\left(2-3\langle{(S^{3})^{2}}\rangle\right),$ (3.26) which can be interpreted as an easy-axis anisotropy, or quadratic Zeeman term, as naturally appears in condensed matter physics. In this case, the solution (3.19) can be written as $\bm{Z}=\frac{\chi}{\sqrt{\Delta}}\left(\begin{array}[]{c}P_{1}(z_{-})+\sqrt{2}\kappa z_{+}e^{\frac{\pi i}{4}}P_{3}(z_{-})\\\ P_{2}(z_{-})+i\kappa z_{+}P_{1}(z_{-})+\frac{\kappa^{2}z_{+}^{2}}{\sqrt{2}}e^{\frac{3\pi i}{4}}P_{3}(z_{-})\\\ P_{3}(z_{-})\end{array}\right).$ (3.27) Therefore, the solution with the highest topological charge is given by $P_{1}=\alpha_{1}$, $P_{2}=\alpha_{2}z_{-}+\alpha_{3}$ with $\alpha_{i}\in\mathbb{C}$, and $P_{3}$ being a nonzero constant. 
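As a concrete illustration of the counting in Eq. (3.24) applied to the solution (3.27): with $P_{3}$ a nonzero constant, the powers of $z_{+}$ multiplying $P_{3}$ in the three components give

```latex
K_{13}=1,\qquad K_{23}=2,\qquad K_{33}=0,\qquad N_{3}=0
\;\Longrightarrow\;
Q=-N_{3}+K_{23}=2,
```

since the pair $\{N_{3},K_{23}\}$ realizes the largest sum $N_{a}+K_{ba}=2$. This is consistent with the charge $Q=2$ of the axially symmetric solution (3.28).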
Choosing $P_{1}=P_{2}=0$, one can obtain the axially-symmetric solution $\bm{Z}=\frac{1}{\sqrt{\Delta}}\left(\begin{array}[]{c}\sqrt{2}\kappa z_{+}e^{\frac{\pi i}{4}}\\\ \frac{\kappa^{2}z_{+}^{2}}{\sqrt{2}}e^{\frac{3\pi i}{4}}\\\ 1\end{array}\right),\qquad\Delta=1+2\kappa^{2}z_{+}z_{-}+\frac{\kappa^{4}}{2}z_{+}^{2}z_{-}^{2},$ (3.28) which possesses the topological charge $Q=2$. Note that this configuration also satisfies the BPS equation of the pure $CP^{2}$ NL$\sigma$-model [26, 27, 31]. Figure 1 shows the distribution of the topological charge (3.20) of this solution (3.28) with $\kappa=1$. We find that the topological charge density has a single peak, although higher charge topological solitons with axial symmetry are likely to possess a volcano structure, see e.g., Ref. [39]. These highest charge solutions give the asymptotic values at spatial infinity of the color field $\left(n_{\infty}^{1},n_{\infty}^{2},n_{\infty}^{3},n_{\infty}^{4},n_{\infty}^{5},n_{\infty}^{6},n_{\infty}^{7},n_{\infty}^{8}\right)=(0,0,-1,0,0,0,0,1/\sqrt{3})\leavevmode\nobreak\ .$ (3.29) It indicates that $\mathfrak{n}$ takes the vacuum value in the Cartan subalgebra of $SU(3)$. Hence, the vacuum of the model corresponds to a spin nematic, i.e., $\langle{S^{1}}\rangle=\langle{S^{2}}\rangle=\langle{S^{3}}\rangle=0$ and $\langle{\left(S^{2}\right)^{2}}\rangle=0,\langle{\left(S^{1}\right)^{2}}\rangle=\langle{\left(S^{3}\right)^{2}}\rangle=1$. Unlike the pure $CP^{2}$ model, there is no degeneracy between the spin nematic state and ferromagnetic state in our model because the $SU(3)$ global symmetry is broken. As shown in Fig. 2, the spin nematic state is partially broken around the soliton because the expectation values $\langle{S^{a}}\rangle$ become finite. Fig. 3 shows that $\langle{\left(S^{a}\right)^{2}}\rangle$ of the solution (3.28) are axially symmetric, although the expectation values $\langle{S^{a}}\rangle$ have angular dependence. 
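The value $Q=2$ can also be cross-checked numerically by integrating the charge density (3.20) for the solution (3.28). The following is a rough sketch under assumed conventions (Cartesian finite differences on a grid; since the overall sign of $Q$ depends on the orientation convention, only $|Q|$ is compared):

```python
import numpy as np

# Numerical integration of Q = -(i/2 pi) \int e^{ij} (D_i Z)^dag D_j Z
# for the axially symmetric solution (3.28) with kappa = 1.

L, N = 16.0, 641
x = np.linspace(-L, L, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
zp = X + 1j * Y                                   # z_+
Delta = 1 + 2 * np.abs(zp)**2 + 0.5 * np.abs(zp)**4
Z = np.stack([np.sqrt(2) * zp * np.exp(1j * np.pi / 4),
              zp**2 / np.sqrt(2) * np.exp(3j * np.pi / 4),
              np.ones_like(zp)]) / np.sqrt(Delta)

dZ1 = np.gradient(Z, h, axis=1)                   # d/dx
dZ2 = np.gradient(Z, h, axis=2)                   # d/dy
D1 = dZ1 - np.sum(np.conj(Z) * dZ1, axis=0) * Z   # covariant derivative D_1 Z
D2 = dZ2 - np.sum(np.conj(Z) * dZ2, axis=0) * Z   # covariant derivative D_2 Z
density = np.imag(np.sum(np.conj(D1) * D2, axis=0)) / np.pi
Q = np.sum(density) * h**2
print(Q)  # |Q| close to 2; finite grid and box truncation limit the accuracy
```

The single-peaked structure of `density` can also be inspected directly, reproducing the qualitative behavior shown in Figure 1.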
Figure 2: The expectation values $\langle{S^{a}}\rangle$ for the solution (3.28) with $\kappa=1$. Figure 3: The expectation values $\langle{\left(S^{a}\right)^{2}}\rangle$ for the solution (3.28) with $\kappa=1$. ### 3.3 Exact solutions off the BPS point Note that the Hamiltonian (1.1) with $B=2A$ admits closed-form analytical solutions [40]. Further, the $CP^{1}$ BPS truncation corresponds to the restricted choice of the parameters, $B=2A=\kappa^{2}$. The relation $B=2A$ is referred to as the solvable line, whereas the restriction $B=2A=\kappa^{2}$ is called the BPS point [25]. Here we show that similar restrictions occur in our model. For this purpose, we consider the generalized Hamiltonian $H=H_{\rm D}+H_{\rm L}+H_{\rm Boundary}+\nu^{2}H_{\rm ani}+\mu^{2}H_{\rm pot},$ (3.30) where $\nu$ and $\mu$ are real coupling constants. Here, $H_{\rm D}$ denotes the $CP^{2}$ Dirichlet term, i.e., the first term on the r.h.s. of Eq. (2.10), and $H_{\rm L}$ the Lifshitz invariant term, i.e., the second term. Explicitly, these and other terms read $\displaystyle H_{\rm D}=\rho\int{\mathrm{d}}^{2}x\mathrm{Tr}\left({\partial}_{k}\mathfrak{n}\leavevmode\nobreak\ {\partial}_{k}\mathfrak{n}\right),$ (3.31) $\displaystyle H_{\rm L}=-2i\rho\int{\mathrm{d}}^{2}x\mathrm{Tr}\left(A_{k}\left[\mathfrak{n},{\partial}_{k}\mathfrak{n}\right]\right),$ (3.32) $\displaystyle H_{\rm ani}=-\rho\int{\mathrm{d}}^{2}x\left[\mathrm{Tr}\left(\left[A_{k},\mathfrak{n}\right]^{2}\right)-\mathrm{Tr}\left(\left[A_{k},\mathfrak{n}_{\infty}\right]^{2}\right)\right],$ (3.33) $\displaystyle H_{\rm pot}=4\rho\int{\mathrm{d}}^{2}x\left[\mathrm{Tr}\left(\mathfrak{n}F_{12}\right)-\mathrm{Tr}\left(\mathfrak{n}_{\infty}F_{12}\right)\right],\,$ (3.34) where $A_{k}$ is a constant background field, as before. Finally, the boundary term $H_{\rm Boundary}$ is defined by Eq. (3.1) with the negative sign on the r.h.s., the same as before. Note that we also introduced constant terms in Eqs. 
(3.33) and (3.34) in order to guarantee the finiteness of the total energy. Clearly, the Hamiltonian (3.30) reduces to Eq. (3.4) when we set $\nu^{2}=\mu^{2}=1$. The existence of exact solutions of the Hamiltonian (3.30) with $\nu^{2}=\mu^{2}$ can be easily shown if we rescale the space coordinates as $\vec{x}\to r_{0}\vec{x}$, where $r_{0}$ is a positive constant, while the background gauge field $A_{k}$ remains intact. Under this rescaling, the Hamiltonian (3.30) becomes $H=H_{\rm D}+r_{0}\left(H_{\rm L}+H_{\rm Boundary}\right)+r_{0}^{2}\left(\nu^{2}H_{\rm ani}+\mu^{2}H_{\rm pot}\right).$ (3.35) Setting $\nu^{2}=\mu^{2}$ and choosing the scale parameter $r_{0}=\nu^{-2}$, one gets $H^{r_{0}=\nu^{-2}}_{\nu^{2}=\mu^{2}}=H_{\rm D}+\nu^{-2}\left(H_{\rm L}+H_{\rm Boundary}+H_{\rm ani}+H_{\rm pot}\right).$ (3.36) Notice that since the solutions (3.19) with $P_{i}$ being arbitrary constants are holomorphic maps from $S^{2}$ to $CP^{2}$, they satisfy not only the variational equations $\delta H_{\nu^{2}=\mu^{2}=1}=0$ but also the equations $\delta H_{\rm D}=0$, where $\delta$ denotes the variation with respect to $\mathfrak{n}$ while preserving the constraint (2.6). Therefore, the solutions also satisfy the equations $\delta H^{r_{0}=\nu^{-2}}_{\nu^{2}=\mu^{2}}=0$. This implies that, for $\mu^{2}=\nu^{2}$, the Hamiltonian (3.30) supports a family of exact solutions of the form $\bm{Z}(\nu^{2})=\exp\left[i\nu^{2}A_{+}z_{+}\right]\bm{c}\,,$ (3.37) where $\bm{c}$ is a three-component complex unit vector. Since the solution (3.37) is a BPS solution of the pure $CP^{2}$ model with the positive topological charge $Q$, one gets $H_{\rm D}[\bm{Z}(\nu^{2})]=16\pi\rho Q$. In addition, the lower bound at the BPS point (3.7) indicates that $H_{\nu^{2}=\mu^{2}=1}[\bm{Z}(\nu^{2}=1)]=-16\pi\rho Q$. 
Combining these bounds, we find that the total energy of the solution (3.37) is given by $\displaystyle H_{\nu^{2}=\mu^{2}}[\bm{Z}(\nu^{2})]=16\pi\rho\left(1-\frac{2}{\nu^{2}}\right)Q.$ (3.38) Since the energy becomes negative if $\nu^{2}<2$, we can expect that for small values of the coupling $\nu^{2}$, the homogeneous vacuum state becomes unstable, and separated 2D Skyrmions (or a Skyrmion lattice) emerge as the ground state. ## 4 Numerical solutions ### 4.1 Axially symmetric solutions In this section, we study baby Skyrmion solutions of the Hamiltonian (3.30) with various combinations of the coupling constants. Apart from the solvable line, no exact solutions could be found analytically, so we solve the equations numerically. Here, we restrict ourselves to the case of the background field given by Eq. (3.25). For the background field (3.25), by analogy with the case of the single $CP^{1}$ magnetic Skyrmion solution, we can look for a configuration described by the axially symmetric ansatz $\bm{Z}=\left(\sin F(r)\cos G(r)e^{i\Phi_{1}(\theta)},\sin F(r)\sin G(r)e^{i\Phi_{2}(\theta)},\cos F(r)\right),$ (4.1) where $F$ and $G$ ($\Phi_{1}$ and $\Phi_{2}$) are real functions of the plane polar coordinates $r$ ($\theta$). The exact solution on the solvable line $\nu^{2}=\mu^{2}$ with axial symmetry can be written in terms of the ansatz with the functions $F=\tan^{-1}\sqrt{2\nu^{4}\kappa^{2}r^{2}+\frac{\nu^{8}\kappa^{4}r^{4}}{2}},\qquad G=\tan^{-1}\left(\frac{\nu^{2}\kappa r}{2}\right),\qquad\Phi_{1}=\theta+\frac{\pi}{4},\qquad\Phi_{2}=2\theta+\frac{3\pi}{4}.$ (4.2) Further, the solution (3.28) is given by Eq. (4.2) with $\nu^{2}=1$. This configuration is a useful reference point in the configuration space as we discuss some properties of numerical solutions in the extended model (3.30) below. For our numerical study, it is convenient to introduce the energy unit $8\rho$ and the length unit $\kappa^{-1}$, in order to scale the coupling constants. 
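As a consistency check (a small numerical sketch, not from the paper), the ansatz (4.1) with the solvable-line profiles (4.2) at $\nu^{2}=\kappa=1$ should reproduce the exact solution (3.28) componentwise:

```python
import numpy as np

# Check that the ansatz (4.1) with profiles (4.2) at nu^2 = kappa = 1
# equals the exact solution (3.28) at an arbitrary point (r, theta).

def Z_ansatz(r, theta):
    F = np.arctan(np.sqrt(2 * r**2 + r**4 / 2))
    G = np.arctan(r / 2)
    return np.array([np.sin(F) * np.cos(G) * np.exp(1j * (theta + np.pi / 4)),
                     np.sin(F) * np.sin(G) * np.exp(1j * (2 * theta + 3 * np.pi / 4)),
                     np.cos(F)])

def Z_exact(r, theta):
    zp = r * np.exp(1j * theta)                  # z_+
    Delta = 1 + 2 * r**2 + r**4 / 2
    return np.array([np.sqrt(2) * zp * np.exp(1j * np.pi / 4),
                     zp**2 / np.sqrt(2) * np.exp(3j * np.pi / 4),
                     1.0]) / np.sqrt(Delta)

diff = np.linalg.norm(Z_ansatz(1.3, 0.8) - Z_exact(1.3, 0.8))
print(diff)  # zero up to machine precision
```

The agreement follows from the identity $(2r^{2}+r^{4}/2)/(1+r^{2}/4)=2r^{2}$, which converts the trigonometric profile functions into the rational form of (3.28).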
Then, the rescaled components of the Hamiltonian with the ansatz (4.1) become $\displaystyle H_{\rm D}=\int{\mathrm{d}}^{2}x\left[F^{\prime 2}+\sin^{2}FG^{\prime 2}+\right.$ $\displaystyle\left.\qquad\qquad\quad+\frac{\sin^{2}F}{r^{2}}\left\\{\dot{\Phi}_{1}^{2}\cos^{2}G+\dot{\Phi}_{2}^{2}\sin^{2}G\right\\}-\frac{\sin^{4}F}{r^{2}}\left(\dot{\Phi}_{1}\cos^{2}G+\dot{\Phi}_{2}\sin^{2}G\right)^{2}\right],$ (4.3) $\displaystyle H_{\rm L}=-2\int\frac{{\mathrm{d}}^{2}x}{r}\Bigg{[}\sqrt{2}\cos\left(\theta+\frac{\pi}{4}-\Phi_{1}\right)\left\\{r\left(\cos GF^{\prime}-\sin 2F\sin G\frac{G^{\prime}}{2}\right)+\sin 2F\cos G\frac{\dot{\Phi}_{1}}{2}\right.$ $\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\sin 2F\sin^{2}F\cos G\left(\cos^{2}G\dot{\Phi}_{1}+\sin^{2}G\dot{\Phi}_{2}\right)\Big{\\}}$ $\displaystyle\qquad\qquad\qquad\qquad\quad-\sin\left(\theta+\Phi_{1}-\Phi_{2}\right)\left\\{r\sin^{2}FG^{\prime}+\frac{1}{2}\sin^{2}F\sin 2G\left(\dot{\Phi}_{1}+\dot{\Phi}_{2}\right)\right.$ $\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\qquad-\sin^{4}F\sin 2G\left(\cos^{2}G\dot{\Phi}_{1}+\sin^{2}G\dot{\Phi}_{2}\right)\Big{\\}}\Bigg{]},$ (4.4) $\displaystyle H_{\rm ani}=\frac{1}{2}\int{\mathrm{d}}^{2}x\Bigg{[}16\sin^{2}F\cos^{2}G\left\\{\cos^{2}F-\frac{1}{\sqrt{2}}\cos\left(2\Phi_{1}-\Phi_{2}+\frac{\pi}{4}\right)\sin 2F\sin G+\sin^{2}F\sin G^{2}\right\\}$ $\displaystyle\qquad\qquad\qquad\qquad+\sin^{2}2F(1+2\sin^{2}G)+8(\cos^{2}F-\cos^{2}G\sin^{2}F)^{2}+4\cos^{2}2G\sin^{4}F-4\Bigg{]},$ (4.5) $\displaystyle H_{\rm pot}=2\int{\mathrm{d}}^{2}x\left(1-\sqrt{3}n^{8}\right)=6\int{\mathrm{d}}^{2}x\cos^{2}F,$ (4.6) where the prime ′ and the dot $\dot{}$ stands for the derivatives with respect to the radial coordinate $r$ and angular coordinate $\theta$, respectively. 
The system of corresponding Euler-Lagrange equations for $\Phi_{i}$ can be solved algebraically for an arbitrary set of the coupling constants, and the solutions are $\Phi_{1}=\theta+\frac{\pi}{4},\qquad\Phi_{2}=2\theta+\frac{3\pi}{4}+m\pi\,,$ (4.7) where $m$ is an integer. Without loss of generality, we choose $m=0$ by transferring the corresponding multiple windings of the phase $\Phi_{2}$ to the sign of the profile function $G$. Then, the system of the Euler-Lagrange equations for the profile functions with the phase factor (4.7) reads $\begin{split}&\frac{\delta H_{\rm D}}{\delta F}+\frac{\delta H_{\rm L}}{\delta F}+\nu^{2}\frac{\delta H_{\rm ani}}{\delta F}+\mu^{2}\frac{\delta H_{\rm pot}}{\delta F}=0,\\\ &\frac{\delta H_{\rm D}}{\delta G}+\frac{\delta H_{\rm L}}{\delta G}+\nu^{2}\frac{\delta H_{\rm ani}}{\delta G}+\mu^{2}\frac{\delta H_{\rm pot}}{\delta G}=0,\end{split}$ (4.8) with $\displaystyle\frac{\delta H_{\rm D}}{\delta F}=\left[2rF^{\prime\prime}+2F^{\prime}-\sin 2F\left\\{rG^{\prime 2}+\frac{1+3\sin^{2}G}{r}-\frac{2\sin^{2}F}{r}\left(1+\sin^{2}G\right)^{2}\right\\}\right],$ (4.9) $\displaystyle\frac{\delta H_{\rm L}}{\delta F}=-2\left[2\sqrt{2}\sin^{2}F\\{-r\sin GG^{\prime}+\cos G+\cos G\left(1+\sin^{2}G\right)\left(4\cos^{2}F-1\right)\\}\right.$ $\displaystyle\left.\qquad\qquad\qquad\qquad-r\sin 2FG^{\prime}-\frac{3}{2}\sin 2F\sin 2G+4\cos F\sin^{3}F\sin 2G\left(1+\sin^{2}G\right)\right],$ (4.10) $\displaystyle\frac{\delta H_{\rm ani}}{\delta F}=2r\left[4\sqrt{2}\sin G\cos^{2}G\sin^{2}F\left(3-4\sin^{2}F\right)-4\cos F\sin^{3}F\cos^{2}2G\right.$ $\displaystyle\left.\qquad\qquad\qquad+4\sin 2F\left\\{\cos^{2}F-\sin^{2}F\cos^{2}G\left(1+\sin^{2}G\right)\right\\}-\sin 2F\cos 2F(1+2\sin^{2}G)\right],$ (4.11) $\displaystyle\frac{\delta H_{\rm pot}}{\delta F}=6r\sin 2F,$ (4.12) $\displaystyle\frac{\delta H_{\rm D}}{\delta G}=\left[2r\sin F^{2}G^{\prime\prime}+2r\sin 2FF^{\prime}G^{\prime}+2\sin^{2}FG^{\prime}-\frac{\sin^{2}F\sin 
2G}{r}\left\\{3-2\sin^{2}F\left(1+\sin^{2}G\right)\right\\}\right],$ (4.13) $\displaystyle\frac{\delta H_{\rm L}}{\delta G}=-2\left[\sqrt{2}\sin^{2}F\sin G\left\\{2rF^{\prime}+\sin 2F\left(1-3\sin^{2}G\right)\right\\}\right.$ $\displaystyle\left.\qquad\qquad\qquad\qquad+r\sin 2FF^{\prime}+\sin^{2}F(1-3\cos 2G)+\sin^{4}F\left(1+3\cos 2G-2\cos^{2}2G\right)\right],$ (4.14) $\displaystyle\frac{\delta H_{\rm ani}}{\delta G}=r\left[8\sqrt{2}\cos F\sin^{3}F\cos G\left(1-3\sin^{2}G\right)+16\sin^{4}F\cos^{3}G\sin G-\sin^{2}2F\sin 2G\right],$ (4.15) $\displaystyle\frac{\delta H_{\rm pot}}{\delta G}=0.$ (4.16) We solve the equations for $\nu^{2}\neq\mu^{2}$ numerically with the boundary condition $F(0)=G(0)=0,\qquad\lim_{r\to\infty}F(r)=\lim_{r\to\infty}G(r)=\pi/2,$ (4.17) which the exact solution (4.2) satisfies. This vacuum corresponds to the spin nematic state (3.29). Figure 4: Plot of the profile functions $\\{F,G\\}$ (left) and the topological charge density (right) of numerical solutions for varying values of the coupling constant $\nu^{2}$ at $\mu^{2}=1.5$. The gray line indicates the quantities of the exact solution (4.2) on the solvable line.

$\nu^{2}$ | $H$ | $H_{\rm D}$ | $H_{\rm L}$ | $\nu^{2}H_{\rm ani}$ | $\mu^{2}H_{\rm pot}$ | $H_{\rm Boundary}$ | Derrick | $Q$
---|---|---|---|---|---|---|---|---
0.1 | -117.47 | 13.51 | -136.48 | 125.49 | 5.67 | -125.67 | -2.00 | 2.00
0.3 | -34.02 | 13.41 | -53.60 | 41.37 | 6.69 | -41.89 | -1.99 | 2.00
0.8 | -8.46 | 13.06 | -29.37 | 14.73 | 8.82 | -15.71 | -1.91 | 2.00
1.5 | -4.19 | 12.57 | -16.76 | 1.09 | 15.66 | -16.76 | -2 | 2

Table 1: The Hamiltonian and topological charge for the numerical solutions with $\mu^{2}=1.5$, where “Derrick” denotes the value $(H_{\rm L}+H_{\rm Boundary})/(\nu^{2}H_{\rm ani}+\mu^{2}H_{\rm pot})$, which is expected to be $-2$ by the scaling argument. For $\nu^{2}=1.5$, we used the exact solution (4.2), so the “Derrick” value and topological charge for $\nu^{2}=1.5$ are exact. 
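The “Derrick” column of Table 1 can be reproduced directly from the tabulated energy components (the $\nu^{2}=1.5$ row is exact and omitted here):

```python
# "Derrick" = (H_L + H_Boundary) / (nu^2 H_ani + mu^2 H_pot), expected ~ -2.
# Values taken from Table 1 (mu^2 = 1.5): (H_L, H_Boundary, H_ani, H_pot).
rows = {0.1: (-136.48, -125.67, 125.49, 5.67),
        0.3: (-53.60, -41.89, 41.37, 6.69),
        0.8: (-29.37, -15.71, 14.73, 8.82)}
derrick = {nu2: round((HL + HB) / (Hani + Hpot), 2)
           for nu2, (HL, HB, Hani, Hpot) in rows.items()}
print(derrick)  # {0.1: -2.0, 0.3: -1.99, 0.8: -1.91}, matching the table
```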
Let us consider the asymptotic behavior of the solutions of the equations (4.8). Near the origin, the leading terms in the power series expansion are $F\approx c_{F}\leavevmode\nobreak\ r,\qquad G\approx c_{G}\leavevmode\nobreak\ r,$ (4.18) where $c_{F}$ and $c_{G}$ are constants implicitly depending on the coupling constants of the model. To see the behavior of the solutions at large $r$, we shift the profile functions as $F=\frac{\pi}{2}-{\cal F},\qquad G=\frac{\pi}{2}-{\cal G}.$ (4.19) Then, one obtains linearized asymptotic equations for the functions ${\cal F}$ and ${\cal G}$ of the form $\begin{split}&\left({\cal F}^{\prime\prime}+\frac{{\cal F}^{\prime}}{r}-\frac{4{\cal F}}{r^{2}}\right)+2\sqrt{2}\left({\cal G}^{\prime}-\frac{{\cal G}}{r}\right)-2\left(\nu^{2}+3\mu^{2}\right){\cal F}=0\leavevmode\nobreak\ ,\\\ &\left({\cal G}^{\prime\prime}+\frac{{\cal G}^{\prime}}{r}-\frac{{\cal G}}{r^{2}}\right)-2\sqrt{2}\left({\cal F}^{\prime}+\frac{2{\cal F}}{r}\right)=0\leavevmode\nobreak\ .\end{split}$ (4.20) Unfortunately, the equations (4.20) do not appear to admit an analytical solution. However, they imply that the asymptotic behavior of the profile functions is similar to that of the functions (4.2), with $\nu^{2}\kappa$ replaced by $(\nu^{2}+3\mu^{2})/4$. Indeed, the asymptotic equations (4.20) depend on just such a combination of the coupling constants, and there may exist an exact solution on the solvable line with the same character of asymptotic decay as the localized soliton solutions of the equations (4.8). To implement a numerical integration of the coupled system of ordinary differential equations (4.8), we introduce the normalized compact coordinate $X\in(0,1]$ via $r=\frac{1-X}{X}.$ (4.21) The integration was performed by the Newton-Raphson method with $N_{\rm MESH}=2000$ mesh points. In Fig. 
4, we display a set of numerical solutions for different values of the coupling $\nu^{2}$ at $\mu^{2}=1.5$ and their topological charge density ${\cal Q}$ defined through $Q=2\pi\int r{\cal Q}{\mathrm{d}}r$. The solutions obey Derrick’s scaling relation and possess well-approximated values of the topological charge, as shown in Table 1. One observes that as the coupling $\nu^{2}$ becomes relatively small, the function $G$ delocalizes while the profile function $F$ approaches its vacuum value everywhere in space except at the origin. This indicates that no regular non-trivial solution exists at $\nu^{2}=0$. ### 4.2 Asymptotic behavior The asymptotic interaction of solitons is related to the overlapping of the tails of the profile functions of well-separated single solitons [3]. Bound multi-soliton configurations may exist if there is an attractive force between two isolated solitons. Considering the above-mentioned soliton solutions of the gauged $CP^{2}$ NL$\sigma$-model, we have seen that the exact solution (4.2) has the same type of asymptotic decay as any solution of the general system (4.8). Therefore, it is enough to examine the asymptotic force between the solutions on the solvable line (4.2) to understand whether or not the Hamiltonian (3.30) supports multi-soliton solutions of higher topological degrees. Thus, without loss of generality, we can set $\mu^{2}=\nu^{2}$. Following the approach discussed in Ref. [3], let us consider a superposition of two of the exact solutions above. This superposition is no longer a solution of the Euler-Lagrange equation, except in the limit of infinite separation, because there is a force acting on the solitons. 
The interaction energy of two solitons can be written as $E_{\rm int}(R)=H_{\rm sp}(R)-2H_{\rm exact},$ (4.22) where $H_{\rm sp}(R)$ is the energy of two BPS solitons separated by some large but finite distance $R$ from each other, and $H_{\rm exact}$ stands for the static energy of a single exact solution. Notice that the lower bound of the Hamiltonian (3.30) with $\mu^{2}=\nu^{2}$ is given by $H=\nu^{-2}H_{\nu^{2}=\mu^{2}=1}+(1-\nu^{-2})H_{\rm D}\geq 2\pi(1-2\nu^{-2})Q,$ (4.23) where the equality holds only for holomorphic solutions. Therefore, we immediately conclude $H_{\rm sp}(R)\geq 2H_{\rm exact},$ (4.24) where the equality is satisfied only in the limit $R\to\infty$. It follows that the interaction energy is always positive for finite separation, and the interaction is repulsive. Since the exact solution has the topological charge $Q=2$, this implies that there are no isolated soliton solutions with the topological charge $Q\geq 4$ in this model. Note, however, that, as the BPS solution (3.19) suggests, there can exist soliton solutions with an arbitrary negative charge, which are topologically excited states on top of the homogeneous vacuum state. ## 5 Conclusion In this paper, we have studied two-dimensional Skyrmions in the $CP^{2}$ NL$\sigma$-model with a Lifshitz invariant term, which is an $SU(3)$ generalization of the DM term. We have shown that the $SU(3)$ tilted FM Heisenberg model turns out to be an $SU(3)$ gauged $CP^{2}$ NL$\sigma$-model in which the term linear in a background gauge field can be viewed as a Lifshitz invariant. We have found exact BPS-type solutions of the gauged $CP^{2}$ model in the presence of a potential term with a specific value of the coupling constant. The lowest energy configuration among the BPS solutions has been discussed. We have reduced the gauged $CP^{2}$ model to the (ungauged) $CP^{2}$ model with a Lifshitz invariant by choosing a background gauge field. 
In the reduced model, we have constructed an exact solution for a special combination of coupling constants called the solvable line, as well as numerical solutions for a wider range of couplings. For the numerical study, we chose a background field generating a potential term that can be interpreted as a quadratic Zeeman term or uniaxial anisotropy term. One can also choose a background field generating the Zeeman term; if the background field is chosen as $A_{1}=-\kappa\lambda_{7}$ and $A_{2}=\kappa\lambda_{5}$, the associated potential term is proportional to $\langle{S^{3}}\rangle$. The Euler-Lagrange equation for the extended $CP^{2}$ model with this background field is not compatible with the axially symmetric ansatz (4.1). Therefore, a two-dimensional full simulation is required to obtain a solution with this background field. This problem, numerical simulation of non-axially symmetric solutions in the $CP^{2}$ model with a Lifshitz invariant, is left to future study. In addition, the construction of a $CP^{2}$ Skyrmion lattice is a challenging problem. The physical interpretation of the Lifshitz invariants is also an important future task. The microscopic derivation of the $SU(3)$ tilted Heisenberg model [21] may enable us to understand the physical interpretation and the physical situations in which the Lifshitz invariant appears. Other future work would be the extension of the present study to the $SU(3)$ antiferromagnetic Heisenberg model, where soliton/sphaleron solutions can be constructed [41, 42, 43]. We restricted our analysis to the case in which the additional potential term $\mu^{2}H_{\rm pot}$ balances or dominates the anisotropic potential term $\nu^{2}H_{\rm ani}$, i.e., $\nu^{2}\leq\mu^{2}$. We expect that a classical phase transition occurs outside this regime, causing an instability of the solution. At the moment, the phase structure of the model (3.30) is not clear, and we will discuss it in our subsequent work. 
Moreover, it has been reported that in certain limits of a three-component Ginzburg-Landau model [44, 45] and of a three-component Gross-Pitaevskii model [46, 47], the vortex solutions can be well described by planar $CP^{2}$ Skyrmions. We believe that our result provides a hint for introducing a Lifshitz invariant into these models, and that our solutions may find applications not only in $SU(3)$ spin systems but also in superconductors and Bose-Einstein condensates described by the extended models including the Lifshitz invariant. Acknowledgments This work was supported by JSPS KAKENHI Grant Nos. JP17K14352, JP20K14411, and JSPS Grant-in-Aid for Scientific Research on Innovative Areas “Quantum Liquid Crystals” (KAKENHI Grant No. JP20H05154). Ya.S. gratefully acknowledges support by the Ministry of Education of Russian Federation, project FEWF-2020-0003. Y. Amari would like to thank Tokyo University of Science for its kind hospitality. ## References * [1] T. Skyrme, “A Nonlinear field theory”, Proc. Roy. Soc. Lond. A 260, 127–138 (1961) . * [2] T. Skyrme, “A Unified Field Theory of Mesons and Baryons”, Nucl. Phys. 31, 556–569 (1962) . * [3] N. Manton, and P. Sutcliffe, “Topological solitons”, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 2004. * [4] A. Bogolubskaya, and I. Bogolubsky, “Stationary Topological Solitons in the Two-dimensional Anisotropic Heisenberg Model With a Skyrme Term”, Phys. Lett. A 136, 485–488 (1989) . * [5] A. Bogolyubskaya, and I. Bogolyubsky, “On Stationary Topological Solitons in Two-dimensional Anisotropic Heisenberg Model”, Lett. Math. Phys. 19, 171–177 (1990) . * [6] R. A. Leese, M. Peyrard, and W. J. Zakrzewski, “Soliton Scatterings in Some Relativistic Models in (2+1)-dimensions”, Nonlinearity 3, 773–808 (1990) . * [7] A. M. Polyakov, and A. Belavin, “Metastable States of Two-Dimensional Isotropic Ferromagnets”, JETP Lett. 22, 245–248 (1975) . * [8] A. N. Bogdanov, and D. 
Yablonskii, “Thermodynamically stable “vortices” in magnetically ordered crystals. The mixed state of magnets”, Zh. Eksp. Teor. Fiz 95 (1), 178 (1989) . * [9] A. Neubauer, C. Pfleiderer, B. Binz, A. Rosch, R. Ritz, P. Niklowitz, and P. Böni, “Topological Hall effect in the A phase of MnSi”, Phys. Rev. Lett. 102 (18), 186602 (2009) . * [10] N. Nagaosa, and Y. Tokura, “Topological properties and dynamics of magnetic skyrmions”, Nat. Nanotechnol. 8 (12), 899–911 (2013) . * [11] A. Leonov, I. Dragunov, U. Rößler, and A. Bogdanov, “Theory of skyrmion states in liquid crystals”, Phys. Rev. E 90 (4), 042502 (2014) . * [12] I. I. Smalyukh, Y. Lansac, N. A. Clark, and R. P. Trivedi, “Three-dimensional structure and multistable optical switching of triple-twisted particle-like excitations in anisotropic fluids”, Nat. Mater. 9 (2), 139–145 (2010) . * [13] S. Mühlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Böni, “Skyrmion lattice in a chiral magnet”, Science 323 (5916), 915–919 (2009) . * [14] X. Yu, Y. Onose, N. Kanazawa, J. Park, J. Han, Y. Matsui, N. Nagaosa, and Y. Tokura, “Real-space observation of a two-dimensional skyrmion crystal”, Nature 465 (7300), 901–904 (2010) . * [15] S. Heinze, K. Von Bergmann, M. Menzel, J. Brede, A. Kubetzka, R. Wiesendanger, G. Bihlmayer, and S. Blügel, “Spontaneous atomic-scale magnetic skyrmion lattice in two dimensions”, Nat. Phys. 7 (9), 713–718 (2011) . * [16] C. Back, V. Cros, H. Ebert, K. Everschor-Sitte, A. Fert, M. Garst, T. Ma, S. Mankovsky, T. L. Monchesky, M. Mostovoy, N. Nagaosa, S. S. P. Parkin, C. Pfleiderer, N. Reyren, A. Rosch, Y. Taguchi, Y. Tokura, K. von Bergmann, and J. Zang, “The 2020 skyrmionics roadmap”, J. Phys. D: Appl. Phys. 53 (36), 363001 (2020) . * [17] I. Dzyaloshinsky, “A thermodynamic theory of “weak” ferromagnetism of antiferromagnetics”, J. Phys. Chem. Solids 4 (4), 241–255 (1958) . * [18] T. 
Moriya, “Anisotropic superexchange interaction and weak ferromagnetism”, Physical review 120 (1), 91 (1960) . * [19] Y.-Q. Li, Y.-H. Liu, and Y. Zhou, “General spin-order theory via gauge Landau-Lifshitz equation”, Phys. Rev. B 84 (20), 205123 (2011) . * [20] B. J. Schroers, “Gauged Sigma Models and Magnetic Skyrmions”, SciPost Phys. 7 (3), 030 (2019) . * [21] S. Zhu, Y.-Q. Li, and C. D. Batista, “Spin-orbit coupling and electronic charge effects in Mott insulators”, Phys. Rev. B 90 (19), 195107 (2014) . * [22] A. Sparavigna, “Role of Lifshitz invariants in liquid crystals”, Materials 2 (2), 674–698 (2009) . * [23] P. Yudin, and A. Tagantsev, “Fundamentals of flexoelectricity in solids”, Nanotechnology 24 (43), 432001 (2013) . * [24] B. Barton-Singer, C. Ross, and B. J. Schroers, “Magnetic Skyrmions at Critical Coupling”, Commun. Math. Phys. 375 (3), 2259–2280 (2020) . * [25] C. Ross, N. Sakai, and M. Nitta, “Skyrmion Interactions and Lattices in Solvable Chiral Magnets”, arXiv:2003.07147. * [26] V. Golo, and A. Perelomov, “Solution of the Duality Equations for the Two-Dimensional SU(N) Invariant Chiral Model”, Phys. Lett. B 79, 112–113 (1978) . * [27] A. D’Adda, M. Luscher, and P. Di Vecchia, “A 1/n Expandable Series of Nonlinear Sigma Models with Instantons”, Nucl. Phys. B 146, 63–76 (1978) . * [28] A. Din, and W. Zakrzewski, “General Classical Solutions in the CP(n-1) Model”, Nucl. Phys. B 174, 397–406 (1980) . * [29] L. Ferreira, and P. Klimas, “Exact vortex solutions in a $CP^{N}$ Skyrme-Faddeev type model”, JHEP 10, 008 (2010) . * [30] Y. Amari, P. Klimas, N. Sawado, and Y. Tamaki, “Potentials and the vortex solutions in the $CP^{N}$ Skyrme-Faddeev model”, Phys. Rev. D 92 (4), 045007 (2015) . * [31] B. Ivanov, R. Khymyn, and A. Kolezhuk, “Pairing of Solitons in Two-Dimensional S=1 Magnets”, Phys. Rev. Lett. 100 (4), 047203 (2008) . * [32] A. Smerald, and N. Shannon, “Theory of spin excitations in a quantum spin-nematic state”, Phys. Rev. B 88, 184430 (2013) . 
* [33] R. Hernandez, and E. Lopez, “The SU(3) spin chain sigma model and string theory”, JHEP 2004 (04), 052 (2004) . * [34] M. Greiter, S. Rachel, and D. Schuricht, “Exact results for SU(3) spin chains: Trimer states, valence bond solids, and their parent Hamiltonians”, Phys. Rev. B 75 (6), 060401 (2007) . * [35] K. Penc, and A. M. Läuchli, “Spin nematic phases in quantum spin systems”, in: Introduction to Frustrated Magnetism, Springer, 2011, pp. 331–362. * [36] B. A. Ivanov, and A. K. Kolezhuk, “Effective field theory for the S=1 quantum nematic”, Phys. Rev. B 68 (5). * [37] K.-I. Kondo, S. Kato, A. Shibata, and T. Shinohara, “Quark confinement: Dual superconductor picture based on a non-Abelian Stokes theorem and reformulations of Yang–Mills theory”, Phys. Rept. 579, 1–226 (2015) . * [38] W. J. Zakrzewski, “Low dimensional sigma models”, Hilger, 1989. * [39] B. Piette, B. Schroers, and W. Zakrzewski, “Multi-solitons in a two-dimensional Skyrme model”, Z. Phys. C 65, 165–174 (1995) . * [40] L. Döring, and C. Melcher, “Compactness results for static and dynamic chiral skyrmions near the conformal limit”, Calc. Var. Partial Differ. Equ. 56 (3), 60 (2017) . * [41] D. Bykov, “Classical solutions of a flag manifold $\sigma$-model”, Nucl. Phys. B 902, 292–301 (2016) . * [42] H. T. Ueda, Y. Akagi, and N. Shannon, “Quantum solitons with emergent interactions in a model of cold atoms on the triangular lattice”, Phys. Rev. A 93, 021606(R) (2016) . * [43] Y. Amari, and N. Sawado, “BPS sphalerons in the ${F}_{2}$ nonlinear sigma model”, Phys. Rev. D 97, 065012 (2018) . * [44] J. Garaud, J. Carlstrom, and E. Babaev, “Topological solitons in three-band superconductors with broken time reversal symmetry”, Phys. Rev. Lett. 107, 197001 (2011) . * [45] J. Garaud, J. Carlström, E. Babaev, and M. Speight, “Chiral $CP^{2}$ skyrmions in three-band superconductors”, Phys. Rev. B 87 (1), 014507 (2013) . * [46] M. Eto, and M. 
Nitta, “Vortex trimer in three-component Bose-Einstein condensates”, Phys. Rev. A 85, 053645 (2012) . * [47] M. Eto, and M. Nitta, “Vortex graphs as N-omers and CP(N-1) Skyrmions in N-component Bose-Einstein condensates”, EPL 103 (6), 60006 (2013) .
# Light and Airy: a simple solution for relativistic quantum acceleration radiation Michael R.R. Good1,2 Eric V. Linder2,3 1Physics Department, Nazarbayev University, Nur-Sultan, Kazakhstan 2Energetic Cosmos Laboratory, Nazarbayev University, Nur-Sultan, Kazakhstan 3Berkeley Center for Cosmological Physics & Berkeley Lab, University of California, Berkeley, CA, USA ###### Abstract We study the quantum radiation of particle production from the vacuum by an ultra-relativistic moving mirror (dynamical Casimir effect) solution that allows (possibly for the first time) analytically calculable time evolution of particle creation and an Airy particle spectral distribution. The reality of the beta Bogoliubov coefficients is responsible for the simplicity, and the mirror is asymptotically inertial at the speed of light, with finite energy production. We also discuss general relations regarding negative energy flux, the transformation to the 1-D Schrödinger equation, and the incompleteness of entanglement entropy. ## I Introduction Acceleration radiation with finite energy production is physically well-motivated. In the case of black hole evaporation, for example, this is a conspicuous sign that the evolution has finished, energetic radiation has stopped, and conservation of energy is upheld. The canonical moving mirror model of DeWitt-Davies-Fulling DeWitt (1975); Fulling and Davies (1976); Davies and Fulling (1977), for a single perfectly reflecting boundary point in flat (1+1)-D spacetime, has solutions demonstrating in a simple way total finite energy production (e.g. the four-decade-old solution of Walker-Davies, which first derived a finite amount of energy creation Walker and Davies (1982)). Recently, several finite energy mirror solutions have been found that demonstrate close connections to strong gravitational systems. These gravity analog models are called accelerated boundary correspondences (ABCs). 
The finite energy ABC solutions (the infinite energy ABC solutions correspond to the most well-known spacetimes, e.g. Schwarzschild Good _et al._ (2016), Reissner-Nordström (RN) Good and Ong (2020), Kerr Good _et al._ (2020a), and de Sitter Good _et al._ (2020b)) closely characterize interesting well-known curved spacetime end-states, including extremal black holes (asymptotic uniformly accelerated mirrors Liberati _et al._ (2000); Good (2020); Good _et al._ (2020a); Rothman (2000); Foo and Good (2021)), black hole remnants (asymptotic constant-velocity mirrors Good _et al._ (2017, 2019); Good (2018); Myrzakul and Good (2018); Good and Ong (2015); Good (2017)) and complete black hole evaporation (asymptotic zero-velocity mirrors Walker and Davies (1982); Good _et al._ (2020c, d); Good and Linder (2017); Good _et al._ (2013); Good and Linder (2018, 2019)). Despite this progress, it has been very hard to find a mirror solution whose particle spectrum is simple. Only two known solutions have analytic forms, one whose spectrum is an infinite sum of terms Good (2017) and another which is so lengthy as to be prohibitively cumbersome Good _et al._ (2020c, d). Consequently, analytic time evolution is impossible to find for the above spectra. Further investigation of the particle production at any given moment is hobbled because one must instead resort to numerical analysis and finite-sized frequency-time bins utilizing the discrete nature of orthonormal wave packets Good _et al._ (2013). Motivated by simplicity, we take a step back and consider that any Bogoliubov transformation can be broken down into two types: (1) the trivial unitary transformation with $\beta$ Bogoliubov coefficient zero, $\beta=0$, indicating no particle production and (2) squeezing transformations, where $\beta\neq 0$ and the transformation matrix is diagonal Sørensen (2012) (see the Bloch-Messiah decomposition or the theory of singular values). 
The simplest examples of the non-trivial transformations are those where the Bogoliubov coefficients are real-valued. We therefore look for some mirror motion (i.e. ABC) that should lead to a real non-zero beta Bogoliubov coefficient for particle creation, and anticipate corresponding simplicity in the resulting spectrum. We take the simplest possible choice for global mirror motion with characteristics leading to the desired reality of the Bogoliubov coefficient, and indeed find a simple solution for the particle production spectrum. Remarkably, a transformation to the time domain on this spectrum analytically gives the particle production at any given moment. Our paper is organized as follows: in Sec. II we give a very brief motivation of the connection between the reality of the beta Bogoliubov coefficient and the mirror trajectory properties. We analyze this accelerated trajectory in Sec. III, computing the key relativistic dynamical properties such as rapidity, speed, and acceleration. In Sec. IV we derive the energy radiated, by analysis of the quantum stress tensor, and in Sec. V we derive the particle spectrum, finding a unique Airy-Ai form for the radiation and confirming consistency with the stress tensor results. Finally, in Sec. VI we compute the time evolution of particle creation analytically. Appendices A and B discuss some general properties leading to necessary negative energy flux, and connecting to the 1-D Schrödinger equation, respectively. Appendix C is a note on the connection between rapidity and entanglement entropy. Throughout we use natural units, $\hbar=c=1$. ## II Reality, Acceleration, and Inertia The beta Bogoliubov coefficient controls quantum particle production. 
In light-cone coordinates $(u,v)$, with retarded time $u=t-x$ and advanced time $v=t+x$, the moving mirror trajectory $f(v)$ gives retarded time position, and the beta Bogoliubov coefficient is Birrell and Davies (1984) $\beta_{\omega\omega^{\prime}}=\frac{-1}{4\pi\sqrt{\omega\omega^{\prime}}}\int_{-\infty}^{+\infty}\operatorname{d}\\!{v}~{}e^{-i\omega^{\prime}v-i\omega f(v)}\left(\omega f^{\prime}(v)-\omega^{\prime}\right)\,,$ (1) where $\omega$ and $\omega^{\prime}$ are the frequencies of the outgoing and incoming modes respectively Carlitz and Willey (1987). To maintain finite energy and the simplicity of no information loss, there must not be a horizon at finite time, and the acceleration must vanish at infinity (i.e. the mirror motion must be asymptotically inertial). Under these conditions we can carry out an integration by parts to give $\beta_{\omega\omega^{\prime}}=\frac{1}{2\pi}\sqrt{\frac{\omega^{\prime}}{\omega}}\int_{-\infty}^{+\infty}\operatorname{d}\\!{v}\>e^{-i\omega^{\prime}v-i\omega f(v)}\,.$ (2) To guarantee a real-valued beta Bogoliubov coefficient, the mirror trajectory $f(v)$ must be an odd function so that the exponential over the symmetric interval turns into a cosine of the argument, i.e. a real valued function. The simplest odd function that accelerates in the required manner is $f(v)\sim v+v^{3}$. We will find this results in not only interesting dynamics, but analytic calculation of particle production spectrum and time evolution. ## III Trajectory Motion As motivated in the previous section, we expect the accelerated mirror trajectory $f(v)=v+\kappa^{2}\frac{v^{3}}{3}\,,$ (3) to have interesting physical properties. Here $\kappa$ is a quantity related to the acceleration (and the surface gravity in the black hole case). 
We can also write the trajectory in spacetime coordinates, $t=-x+\frac{1}{\kappa}(-6\kappa x)^{1/3}\,,$ (4) taking the real cube root, or $x=-t-\frac{1}{2\kappa}\left[A_{+}^{2/3}A_{-}^{1/3}+A_{+}^{1/3}A_{-}^{2/3}\right]\,,$ (5) where $A_{\pm}=3\kappa t\pm\sqrt{9\kappa^{2}t^{2}+8}\,.$ (6) Note at late times $x\to-t+{\mathcal{O}}(t^{1/3})$. These forms make it obvious that asymptotically the mirror travels at the speed of light. A spacetime plot of the trajectory, with time on the vertical axis, is given in Figure 1. A conformal diagram is plotted in Figure 2. We next investigate the dynamics of the trajectory Eq. (3). We compute the rapidity $\eta(v)$ by $2\eta(v)\equiv-\ln f^{\prime}(v)$ where the prime is a derivative with respect to the argument, $\eta(v)=-\frac{1}{2}\ln\left(\kappa^{2}v^{2}+1\right).$ (7) From the rapidity we may easily compute the velocity $V\equiv\tanh\eta$, plugging in Eq. (7), $V(v)=-\tanh\left[\frac{1}{2}\ln\left(\kappa^{2}v^{2}+1\right)\right]=\frac{-\kappa^{2}v^{2}}{2+\kappa^{2}v^{2}}\,,$ (8) and the proper acceleration, which follows from $\alpha(v)\equiv e^{\eta(v)}\eta^{\prime}(v)$, $\alpha(v)=-\frac{\kappa^{2}v}{\left(\kappa^{2}v^{2}+1\right)^{3/2}}\,.$ (9) At $x=t=0=v$, the velocity and acceleration are zero. At asymptotic infinity, the velocity is the speed of light and the acceleration goes to zero. The magnitude of the velocity, Eq. (8), along with the proper acceleration, Eq. (9), are plotted in Figure 3. Figure 1: A spacetime diagram of the mirror trajectory, Eq. (3) with $\kappa=1$. It starts off asymptotically inertial with zero acceleration and light-speed velocity and decelerates, eventually reaching zero speed (at $t=0$), and then accelerates again approaching the speed of light in an asymptotically inertial way. Note that field modes moving to the left will always hit the mirror, demonstrating no horizon, despite the mirror accelerating to light-speed. Figure 2: A Penrose diagram of the mirror trajectory, Eq. (3). 
The mirror is moving at light-speed at $v\to\pm\infty$. Since the acceleration is asymptotically zero as $v\to\pm\infty$, this mirror is asymptotically inertial. The various colors correspond to different maximum accelerations; here $\kappa=1,4,16,64$ from red, blue, black, and green. Figure 3: The velocity and proper acceleration as a function of light-cone coordinate advanced time $v=t+x$ for the mirror trajectory, Eq. (3). At $v=0$, the velocity $V=0$, but asymptotically $|V|\to 1$ and the proper acceleration vanishes, $\alpha\to 0$. The maximum acceleration occurs at $|\alpha_{\textrm{max}}|=2\kappa/(3\sqrt{3})=0.385\kappa$. Here $v$ is in units of $1/\kappa$ and the maximum accelerations happen at advanced time $\kappa v=\pm 1/\sqrt{2}=0.707$. ## IV Energy Flux and Total Energy ### IV.1 Energy Flux The quantum stress tensor reveals the energy flux emitted by the moving mirror. Typically, one will see Fulling and Davies (1976) $F(u)=-\frac{1}{24\pi}\\{p(u),u\\},$ (10) where the energy flux, $F(u)$, is a function of light-cone coordinate retarded time $u=t-x$ Davies and Fulling (1977); Birrell and Davies (1984) and the brackets define the Schwarzian derivative. The trajectory in light-cone coordinates of the mirror is $p(u)$ which is the advanced time position “$v$” as a function of retarded time $u$. However, since we want advanced time $v$ as the independent variable, we write the radiated energy flux using $f(v)$ Good _et al._ (2017, 2020b), $F(v)=\frac{1}{24\pi}\\{f(v),v\\}f^{\prime}(v)^{-2}\,,$ (11) where the Schwarzian brackets are defined as usual, $\\{f(v),v\\}\equiv\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3}{2}\left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{2}\,.$ (12) For $f(v)$ given by Eq. (3), this yields $F(v)=\frac{\kappa^{2}}{12\pi}\,\frac{1-2\kappa^{2}v^{2}}{\left(\kappa^{2}v^{2}+1\right)^{4}}\ .$ (13) It is clear that asymptotically $F(v)\to 0$ for both $v\to\pm\infty$. 
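The closed forms above are straightforward to verify numerically. The following sketch (our own check, using NumPy; not part of the original analysis) confirms that Eq. (9) is the combination $\alpha=e^{\eta}\eta^{\prime}$, that the maximum acceleration matches the value $2\kappa/(3\sqrt{3})$ quoted in the caption of Figure 3, and that Eq. (13) agrees with the Schwarzian formula of Eqs. (11) and (12):

```python
import numpy as np

kappa = 1.0
f     = lambda v: v + kappa**2 * v**3 / 3.0                        # Eq. (3)
eta   = lambda v: -0.5 * np.log(kappa**2 * v**2 + 1.0)             # Eq. (7)
alpha = lambda v: -kappa**2 * v / (kappa**2 * v**2 + 1.0)**1.5     # Eq. (9)
F     = lambda v: kappa**2 / (12*np.pi) * (1 - 2*kappa**2*v**2) \
                  / (kappa**2*v**2 + 1)**4                          # Eq. (13)

v, h = np.linspace(-4, 4, 2001), 1e-5

# alpha = e^eta * eta', checked by central differences
eta_p = (eta(v + h) - eta(v - h)) / (2*h)
assert np.allclose(alpha(v), np.exp(eta(v)) * eta_p, atol=1e-8)

# maximum proper acceleration |alpha| = 2 kappa/(3 sqrt(3)) at kappa v = 1/sqrt(2)
assert np.isclose(abs(alpha(1/(np.sqrt(2)*kappa))), 2*kappa/(3*np.sqrt(3)))

# Schwarzian flux, Eqs. (11)-(12), using the exact derivatives of the cubic f(v)
fp, fpp, fppp = 1 + kappa**2*v**2, 2*kappa**2*v, 2*kappa**2
schwarzian = fppp/fp - 1.5*(fpp/fp)**2
assert np.allclose(schwarzian / (24*np.pi * fp**2), F(v))
```

Because $f(v)$ is a cubic, its derivatives are exact here, so the Schwarzian comparison involves no finite-difference noise.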
Figure 4 shows the energy flux as a function of advanced time $v$. Figure 4: The energy flux, Eq. (13), is asymptotically zero at $v=\pm\infty$. The total energy, as we shall see in Eq. (16), is therefore finite, $E=\kappa/96$. Notice the emission of negative energy flux near early and late advanced times. The maximum flux $F_{\textrm{max}}=\kappa^{2}/(12\pi)$ occurs at $v=0$ and the minimum flux $F_{\textrm{min}}=-\kappa^{2}/(192\pi)$ occurs at $v=\pm 1/\kappa$. The energy flux crosses zero, $F=0$, at $v=\pm 1/(\sqrt{2}\kappa)$. Here $\kappa=1$. ### IV.2 Total Energy The total energy measured by a far away observer at $\mathscr{I}^{+}_{R}$ is Walker (1985) $E=\int_{-\infty}^{\infty}F(u)\operatorname{d}\\!{u}\,,$ (14) where integration occurs over retarded time (it takes the energy time to reach $\mathscr{I}^{+}_{R}$). Since we are using advanced time $v$, we write this with $du=\frac{\operatorname{d}\\!{f}}{\operatorname{d}\\!{v}}dv$ to get the Jacobian correct, $E=\int_{-\infty}^{+\infty}F(v)f^{\prime}(v)dv\,.$ (15) Plugging in Eq. (3) and Eq. (13) into Eq. (15), with Jacobian $du/dv=\kappa^{2}v^{2}+1$, the simple result is $E=\frac{\kappa}{96}\,,$ (16) which is finite and positive. Physically, the finite value tells us the evaporation process stops, similar to the ABC’s of extremal black holes (asymptotic uniformly accelerated mirrors), black hole remnants (non-horizon sub-light-speed asymptotic coasting mirrors), and complete black hole evaporation (asymptotic static moving mirrors). The fact that the total energy is positive is consistent with the quantum interest conjecture Ford and Roman (1999) as derived from quantum inequalities Ford and Roman (1995). ### IV.3 Negative Energy Flux As seen from Figure 4, there are regions of negative energy flux (NEF). This is required by the unitarity sum rule (see Appendices A and B and, e.g., Good _et al._ (2020c)). These regions extend for $|v|>1/(\kappa\sqrt{2})$. 
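Both the total energy and the negative-energy portion can be checked by direct quadrature. The sketch below (our own numerical check with SciPy, not from the paper) reproduces $E=\kappa/96$ from Eq. (15) and evaluates the negative-energy integral over $|v|>1/(\kappa\sqrt{2})$:

```python
import numpy as np
from scipy.integrate import quad

kappa = 1.0
F  = lambda v: kappa**2/(12*np.pi) * (1 - 2*kappa**2*v**2) \
               / (kappa**2*v**2 + 1)**4          # energy flux, Eq. (13)
fp = lambda v: kappa**2*v**2 + 1.0               # Jacobian du/dv

# total energy, Eq. (15): the integrand F(v) f'(v) decays like v^{-6}
E = quad(lambda v: F(v)*fp(v), -np.inf, np.inf)[0]
assert np.isclose(E, kappa/96)

# negative-energy portion, emitted for |v| > 1/(kappa sqrt(2))
E_nef = 2 * quad(lambda v: F(v)*fp(v), 1/(kappa*np.sqrt(2)), np.inf)[0]
E_exact = kappa*(-10*np.sqrt(2) + 3*np.pi - 6*np.arctan(1/np.sqrt(2))) / (288*np.pi)
assert np.isclose(E_nef, E_exact)                # about -0.00930 kappa
assert np.isclose(abs(E_nef)/(E - E_nef), 0.4716, atol=1e-3)
```

The last assertion is the roughly 47% ratio of negative to positive emission discussed next.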
The total negative energy is, by symmetry, $E_{NEF}=2\int_{v=+\frac{1}{\kappa\sqrt{2}}}^{v=+\infty}F(v)f^{\prime}(v)\operatorname{d}\\!{v}\,,$ (17) which gives an analytic result $E_{NEF}=\frac{\kappa\left(-10\sqrt{2}+3\pi-6\cot^{-1}\sqrt{2}\right)}{288\pi}=-0.00930\kappa\,.$ (18) As a ratio, the emission of NEF to positive energy flux (PEF) is $\frac{|E_{NEF}|}{E_{PEF}}\approx 47.1\%\,.$ (19) Note that one cannot judge this ratio by eye in Figure 4 due to the redshift Jacobian $f^{\prime}(v)$ in Eq. (17). Another interesting aspect is that because the rapidity diverges, so does the entropy flux $S=-\eta/6$. However, since there is no horizon there is no information loss. This indicates that entanglement entropy is not a comprehensive measure of the unitary, finite energy, information preserving dynamics, due to the inertial light speed asymptote (see Appendix C). ## V Particle Spectrum The particle spectrum can be obtained from the beta Bogoliubov coefficient, given by Eq. (2) in Sec. II. For the particular trajectory Eq. (3), as promised the Bogoliubov coefficient is real, $\beta_{\omega\omega^{\prime}}=\frac{-1}{(\omega\kappa^{2})^{1/3}}\sqrt{\frac{\omega^{\prime}}{\omega}}\,\text{Ai}\left(\frac{\omega+\omega^{\prime}}{(\omega\kappa^{2})^{1/3}}\right)\,,$ (20) which is highly unusual. This corresponds to the Bogoliubov transformation being a pure boost without rotation, i.e. there is no phase on the beta coefficient, giving us a natural choice for both field modes and coefficients (and potentially an action integral whose real part defines the vacuum–vacuum amplitude Nikishov (2003)). 
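The Airy form of Eq. (20) can be traced in one line (our rearrangement; the text states the result directly). Since $f(v)$ is odd, the exponential in Eq. (2) reduces to a cosine over the half-line, and the substitution $t=(\omega\kappa^{2})^{1/3}v$ in the standard representation $\text{Ai}(x)=\frac{1}{\pi}\int_{0}^{\infty}\cos(t^{3}/3+xt)\,dt$ gives

```latex
\beta_{\omega\omega'} = \frac{1}{\pi}\sqrt{\frac{\omega'}{\omega}}
  \int_{0}^{\infty} dv\,
  \cos\!\left[(\omega+\omega')\,v + \tfrac{1}{3}\,\omega\kappa^{2}v^{3}\right]
= \frac{1}{(\omega\kappa^{2})^{1/3}}\sqrt{\frac{\omega'}{\omega}}\,
  \text{Ai}\!\left(\frac{\omega+\omega'}{(\omega\kappa^{2})^{1/3}}\right),
```

which is manifestly real and equal to Eq. (20) up to the overall sign convention of the mode normalization.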
To obtain the particle spectrum, we take the modulus squared, $N_{\omega\omega^{\prime}}\equiv|\beta_{\omega\omega^{\prime}}|^{2}$, which gives $N_{\omega\omega^{\prime}}=\frac{\omega^{\prime}}{\kappa^{4/3}\omega^{5/3}}\,\text{Ai}^{2}\left(\frac{\omega+\omega^{\prime}}{\kappa^{2/3}\omega^{1/3}}\right).$ (21) The Airy-Ai function is perhaps most well-known as the solution to the time-independent Schrödinger equation for a particle confined within a triangular potential well and for a particle in a one-dimensional constant force field (the triangular potential well solution is directly relevant for the understanding of electrons trapped in semiconductor heterojunctions). The spectrum Eq. (21), $|\beta_{\omega\omega^{\prime}}|^{2}$, is explicitly non-thermal and plotted as a contour plot in Figure 5. Figure 5: The Airy-Ai spectrum, $|\beta_{\omega\omega^{\prime}}|^{2}$ from Eq. (21), as a contour plot, here with $\kappa=1$. The brighter the contours the more particle production. Notice the asymmetry between $\omega$ and $\omega^{\prime}$ which are uniformly scaled. This asymmetry ultimately shows up in the infinite total particle count due to the infrared divergence of $\omega$ in $N_{\omega}$ but makes it possible to analytically integrate $N_{\omega\omega^{\prime}}$ over $\omega^{\prime}$. This demonstrates a new spectrum of radiation emanating from a moving mirror trajectory. Eq. (21) can be compared to the late time (equilibrium after formation) spectra of non-extremal black holes (e.g. Schwarzschild, RN, Kerr), $N_{\omega\omega^{\prime}}=\frac{1}{2\pi\kappa\omega^{\prime}}\frac{1}{e^{2\pi\omega/\kappa}-1}\,,$ (22) and extremal black holes (e.g. ERN, EK, EKN), $N_{\omega\omega^{\prime}}=\frac{e^{-\pi\omega c/\mathcal{A}}}{{\pi^{2}\mathcal{A}^{2}}}\ \left|K_{1+i\omega c/\mathcal{A}}\left(\frac{2}{\mathcal{A}}\sqrt{\omega\omega^{\prime}}\right)\right|^{2}\,.$ (23) (For EK, $c=\sqrt{2}$; for ERN, $c=2$; for EKN, $c=\mathcal{A}/\kappa$.) 
Here $\kappa$ is the surface gravity, i.e. $\kappa=1/(4M)$ in the case of a Schwarzschild black hole, or outer horizon surface gravity for the RN and Kerr non-extremal black holes. In addition, $\mathcal{A}$ is the extremal parameter, or the asymptotic uniform acceleration Foo and Good (2021) in the case of the mirror system, while $K_{\nu}$ is the modified Bessel function of the second kind with order $\nu$. Furthermore, it is remarkable that the spectrum $N_{\omega}=\int_{0}^{\infty}N_{\omega\omega^{\prime}}d\omega^{\prime}\,,$ (24) is analytic, $N_{\omega}=\frac{2\sqrt{\bar{\omega}}}{3\kappa}\text{Ai}^{2}(\bar{\omega})-\frac{\text{Ai}(\bar{\omega})\text{Ai}^{\prime}(\bar{\omega})}{3\kappa\bar{\omega}^{3/2}}-\frac{2\text{Ai}^{\prime 2}(\bar{\omega})}{3\kappa\sqrt{\bar{\omega}}}\,,$ (25) where $\bar{\omega}\equiv(\omega/\kappa)^{2/3}$. This analytic $N(\omega)$ spectrum is plotted in Figure 6 for all $\kappa$. Figure 6: The Airy particle spectrum, $N(\omega)$, Eq. (25). Note the infrared divergence at $\omega\to 0$; the soft particle divergence results in infinite total particle count characteristic of asymptotic coasting mirrors. Larger maximum acceleration as measured by $\kappa$ results in more particles for a wider range of frequencies, i.e. $N$ scales as $1/\kappa$ as seen by the $\kappa N$ curve plotted vs $\omega/\kappa$. The Airy functions can be reformulated into Bessel functions using the identities $\displaystyle{\rm Ai}(x)$ $\displaystyle=$ $\displaystyle\sqrt{\frac{x}{3\pi^{2}}}\,K_{1/3}\left(\frac{2}{3}x^{3/2}\right)$ (26) $\displaystyle{\rm Ai}^{\prime}(x)$ $\displaystyle=$ $\displaystyle\frac{-x}{\sqrt{3\pi^{2}}}\,K_{2/3}\left(\frac{2}{3}x^{3/2}\right)\,.$ (27) This turns Eq. (21) into $N_{\omega\omega^{\prime}}=\frac{1}{3\pi^{2}\kappa^{2}}\frac{q^{\prime}(q+q^{\prime})}{q^{2}}\,K^{2}_{1/3}\left(\frac{2(q+q^{\prime})^{3/2}}{3q^{1/2}}\right)\,,$ (28) which has similarities to the extremal black hole expression. 
Here $q=\omega/\kappa$, $q^{\prime}=\omega^{\prime}/\kappa$. For the particle spectrum we get $\displaystyle 9\pi^{2}\kappa\,N_{\omega}$ $\displaystyle=$ $\displaystyle 2qK^{2}_{1/3}(2q/3)+K_{1/3}(2q/3)\,K_{2/3}(2q/3)$ (29) $\displaystyle-2qK^{2}_{2/3}(2q/3)\,.$ In the small and large $\omega$ limits the leading order terms are, respectively, $\displaystyle N_{\omega}$ $\displaystyle\to$ $\displaystyle\frac{1}{6\sqrt{3}\pi\omega}\,,\qquad\omega\to 0\,,$ (30) $\displaystyle\to$ $\displaystyle\frac{\kappa}{16\pi\omega^{2}}\,e^{-4\omega/(3\kappa)}\,,\qquad\omega\to\infty\,.$ (31) The $1/\omega$ in the small frequency limit (note this is independent of $\kappa$) demonstrates the infrared divergence leading to an infinite total particle count commonly associated with constant-velocity moving mirror solutions Good _et al._ (2017, 2019); Good (2018); Myrzakul and Good (2018); Good and Ong (2015); Good (2017), that are not asymptotically static (asymptotic zero-velocity Walker and Davies (1982); Good _et al._ (2020c, d); Good and Linder (2017); Good _et al._ (2013); Good and Linder (2018, 2019)). To check that the energy is indeed carried away by the particles, we look for consistency between Eq. (21) and the total energy, Eq. (16), found from the stress tensor. This is done by quantum summing, $E=\int_{0}^{\infty}\int_{0}^{\infty}\omega N_{\omega\omega^{\prime}}\operatorname{d}\\!{\omega}\operatorname{d}\\!{\omega}^{\prime},$ (32) that is, associating a quantum of energy $\omega$ with the particle distribution and integrating over all the frequencies. The result is pleasingly analytic: $E=\frac{\kappa}{96}\,.$ (33) Since this is also the result of Eq. (16), the beta spectrum Eq. (21), or Eq. (29), is consistent with the quantum stress tensor, Eq. (13). The time dependence of particle creation can be computed via wavepacket analysis treated in Hawking Hawking (1975), and explicitly numerically computed in Good _et al._ (2020d, c). 
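The analytic spectrum Eq. (25), the infrared limit Eq. (30), and the quantum-summing consistency of Eqs. (32)-(33) can all be reproduced numerically; the sketch below is our own check using SciPy's Airy function, not the paper's code:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

kappa = 1.0

def N_joint(w, wp):
    # joint spectrum |beta|^2, Eq. (21)
    Ai = airy((w + wp) / (kappa**(2/3) * w**(1/3)))[0]
    return wp / (kappa**(4/3) * w**(5/3)) * Ai**2

def N_w(w):
    # analytic spectrum, Eq. (25), with wb = (w/kappa)^(2/3)
    wb = (w / kappa)**(2/3)
    Ai, Aip, _, _ = airy(wb)
    return (2*np.sqrt(wb)/(3*kappa))*Ai**2 - Ai*Aip/(3*kappa*wb**1.5) \
           - 2*Aip**2/(3*kappa*np.sqrt(wb))

# Eq. (25) against direct integration of Eq. (21) over the incoming frequency
for w in (0.1, 0.5, 1.0, 3.0):
    assert np.isclose(quad(lambda wp: N_joint(w, wp), 0, np.inf)[0],
                      N_w(w), rtol=1e-4)

# infrared limit, Eq. (30): omega N_omega -> 1/(6 sqrt(3) pi), kappa-independent
assert np.isclose(1e-6 * N_w(1e-6), 1/(6*np.sqrt(3)*np.pi), rtol=1e-3)

# quantum summing, Eqs. (32)-(33): E = int omega N_omega d omega = kappa/96
assert np.isclose(quad(lambda w: w * N_w(w), 0, np.inf)[0], kappa/96, rtol=1e-5)
```

The soft-particle divergence shows up here as the constant value of $\omega N_{\omega}$ as $\omega\to 0$, while the energy integral remains finite.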
Wave packet localization, particularly via orthonormal and complete sets in the moving mirror model, was first carried out in detail in Good (2013). For completeness, we utilize the same code to illustrate particle creation in time and present the results in Figure 7. The rate of emission of particles is finite only in a given time and frequency interval, which can be seen using this complete orthonormal family of wave packets constructed from the beta Bogoliubov coefficients, following Hawking’s notation, $\beta_{jn\omega^{\prime}}=\frac{1}{\sqrt{\epsilon}}\int_{j\epsilon}^{(j+1)\epsilon}d\omega\,e^{2\pi i\omega n/\epsilon}\beta_{\omega\omega^{\prime}}\,,$ (34) where $j\geq 0$ and $n$ are integers. These packets are built at future right null infinity, $\mathscr{I}^{+}_{R}$, and peak at delayed exterior time, $u=2\pi n/\epsilon$, with width $2\pi/\epsilon$. Therefore the vertical axis in Figure 7 has a discrete and intuitive physical interpretation, giving the counts of a particle detector sensitive to only frequencies within $\epsilon$ of $\omega_{j}=j\epsilon$, for a time $2\pi/\epsilon$ at $u=2\pi n/\epsilon$. Late times correspond to large quantum number $n$ (for the mirror Eq. (5), late times have $u\approx 2t[1+{\mathcal{O}}(\kappa t)^{-2/3}]$). For excellent time resolution, only one frequency bin is needed, where the particles pile up, $j=0$, and a relatively large value of $\epsilon$ resolves the count in time. The text of Fabbri-Navarro-Salas Fabbri and Navarro-Salas (2005) also describes the details needed to reconstruct Figure 7 by first packetizing the beta coefficient as in Eq. (34), then numerically integrating over $\omega^{\prime}$ from $0$ to $\infty$, and finally computing the results, $N_{jn}$, $N_{jn}=\int_{0}^{+\infty}d\omega^{\prime}|\beta_{jn\omega^{\prime}}|^{2},$ (35) for each individual time bin, $n$, for a set frequency bin, $j$ (in our fine-grained time resolution case, $j=0$). 
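The packetization in Eqs. (34)-(35) is easy to sketch numerically. The fragment below (our own discretization with arbitrary grid sizes, not the code used for Figure 7) builds $\beta_{jn\omega^{\prime}}$ from the real Airy form Eq. (20) and verifies the symmetry $N_{j,n}=N_{j,-n}$ about time bin $n=0$ that the reality of $\beta_{\omega\omega^{\prime}}$ implies:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import airy

kappa, eps, jbin = 1.0, 10.0, 0    # scale, detector bin width, frequency bin j = 0

def beta(w, wp):
    # real Airy form of the beta coefficient, Eq. (20)
    s = (w * kappa**2)**(1/3)
    return -np.sqrt(wp/w) / s * airy((w + wp)/s)[0]

def N_jn(n, n_w=400, n_wp=400, wp_max=40.0):
    w  = np.linspace(jbin*eps + 1e-4, (jbin + 1)*eps, n_w)   # frequency bin
    wp = np.linspace(1e-4, wp_max, n_wp)                     # incoming frequencies
    W, WP = np.meshgrid(w, wp, indexing="ij")
    packet = np.exp(2j*np.pi*W*n/eps) * beta(W, WP)          # integrand of Eq. (34)
    b_jn = trapezoid(packet, w, axis=0) / np.sqrt(eps)
    return trapezoid(np.abs(b_jn)**2, wp)                    # Eq. (35)

# reality of beta => |beta_{j,n}| = |beta_{j,-n}|: emission symmetric about n = 0
assert np.isclose(N_jn(2), N_jn(-2), rtol=1e-10)
assert N_jn(0) > N_jn(2) > 0    # particles pile up around the central time bin
```

Because $\beta_{\omega\omega^{\prime}}$ is single-signed over the bin, the unmodulated $n=0$ packet necessarily carries the largest count, consistent with the peak of Figure 7.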
While this numerical approach evolves the particle count in time, it is not particularly streamlined, fast, or arbitrarily accurate. In Sec. VI we will find an analytic approach to the evolution process, resolving these issues. Figure 7: The particle count in time, via wave packet localization. The detector is set with $\epsilon=10$, a relatively large value ($\epsilon>1$) in order to get clear time resolution. The scale of the system is $\kappa=1$ and the frequency bin is in the lowest possible $j=0$ value, where most of the particle production occurs, and finer resolution in time is possible. Notice there is no plateau, indicative of non-thermal radiation. This emission includes the ‘phantom radiation’ of soft particles as described in Liberati _et al._ (2000). It is symmetric in delayed time, $u$, centered around time bin $n=0$. ## VI Analytic Time Evolution The spectrum, Eq. (25), is simple enough that analytical time evolution without discrete wave packetization is possible – possibly uniquely in the literature. Typically we would like to employ a Fourier transform converting from frequency to time. Since this does not work out in a straightforward manner, we consider that the Fourier transform of a radially symmetric function in the plane can be expressed as a Hankel transform. The Hankel transform, $N_{u}=H(N_{\omega})/2$ – where by time symmetry we have divided the spectrum by 2 so that retarded time $u$ ranges from $-\infty$ to $+\infty$ – is analytically tractable for the spectrum Eq. 
(25): $\displaystyle\frac{384}{\kappa}N_{u}$ $\displaystyle=$ $\displaystyle 5\,_{3}F_{2}\left(\frac{7}{6},\frac{3}{2},\frac{11}{6};1,2;-\frac{9}{16}u^{2}\kappa^{2}\right)$ (36) $\displaystyle+$ $\displaystyle 4\,_{3}F_{2}\left(\frac{1}{2},\frac{5}{6},\frac{7}{6};1,1;-\frac{9}{16}u^{2}\kappa^{2}\right)$ $\displaystyle-$ $\displaystyle 7\,_{3}F_{2}\left(\frac{5}{6},\frac{3}{2},\frac{13}{6};1,2;-\frac{9}{16}u^{2}\kappa^{2}\right).$ The particle spectrum dies off at large times as $u^{-1}$, so the total number indeed diverges. Turning to the energy, a consistency check can be done by Hankel transforming the quantum of energy $\omega N_{\omega}$, and integrating over all time. The result for the transform, $E_{u}=H(\omega N_{\omega})/2$, is $E_{u}=\frac{\sinh\theta}{3\sqrt{3}\pi\kappa u^{3}}-\frac{\cosh\theta}{3\sqrt{3}\pi u^{2}\sqrt{9\kappa^{2}u^{2}+16}}\,,$ (37) where $\theta\equiv\frac{1}{3}\sinh^{-1}\left(\frac{3\kappa u}{4}\right)$. Eq. (37) dies off as $u^{-8/3}$ for large times, so the total energy is finite. The result for the total energy by integrating over all time is also analytic, $E=\int_{-\infty}^{+\infty}E_{u}\;du=\frac{\kappa}{96}\,,$ (38) which agrees with the total energy as derived by the stress tensor, Eq. (16), and the total energy as derived by integration of the particle spectrum with respect to frequency, Eq. (33). As far as we know, this is the first solution for analytic time evolution of particle production from the quantum vacuum. Notice there is no need to resort to wavepacket discreteness as the creation is continuous. Nor have we made any analytic approximations. A plot of the evolution is given in Figure 8. Figure 8: The continuous time evolution of particle creation, Eq. (36), and time evolution of energy quanta, Eq. (37). Here $\kappa=1$ (though $N_{u}/\kappa$ and $E_{u}/\kappa^{2}$ have invariant forms as a function of $\kappa u$). 
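As a final cross-check, the time-domain energy density Eq. (37) can be integrated numerically; the sketch below (our own check, not from the paper) recovers $E=\kappa/96$, in agreement with Eqs. (16), (33) and (38):

```python
import numpy as np
from scipy.integrate import quad

kappa = 1.0

def E_u(u):
    # energy quanta in time, Eq. (37); even in u, with a removable 0/0 at u = 0
    th = np.arcsinh(3*kappa*u/4) / 3
    return np.sinh(th)/(3*np.sqrt(3)*np.pi*kappa*u**3) \
         - np.cosh(th)/(3*np.sqrt(3)*np.pi*u**2*np.sqrt(9*kappa**2*u**2 + 16))

# integrate the even profile over u > 0 (skipping the removable point at u = 0)
E = 2 * quad(E_u, 1e-6, np.inf)[0]
assert np.isclose(E, kappa/96, rtol=1e-4)
```

The $u^{-8/3}$ tail of Eq. (37) makes the quadrature converge without any cutoff, mirroring the finite total energy of the stress tensor calculation.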
## VII Conclusions An interesting connection between the reality of the beta Bogoliubov coefficient, asymptotic inertia and finite energy, and mirror motion near the speed of light leads to particle radiation by the quantum vacuum that is analytic in the energy flux, simple in the particle spectrum (an Airy function), and, remarkably, admits an analytic expression for the time evolution of particle creation. We evaluate the simplest allowed accelerated mirror with the needed conditions and derive all these physical quantities. The Airy mirror is asymptotically inertial, coasting at the speed of light; the total energy radiated is finite and simply $\kappa/96$ despite a soft particle divergence; the beta Bogoliubov coefficient is given by a real Airy-Ai function; the particle creation time evolution is analytic and exact. The mirror has no horizon, and so there is no information loss. The finite energy corresponds to the black hole analog case where evaporation ceases, related to extremal black holes, remnants, or complete evaporation. The asymptotic inertia is responsible for finite energy, but inertial motions that asymptotically approach the speed of light do not preserve the interpretation of entanglement entropy derived from the rapidity as an adequate measure of unitarity (see Appendix C). The radiated flux exhibits regions of negative energy flux (NEF); these are required by unitarity for the conditions present, and we expand on this “necessity of negativity” in the Appendices, showing it follows directly from the asymptotically inertial nature (the lack of a horizon ensures information conservation; by contrast, information loss occurs from an inertial horizon Good and Abdikamalov (2020)). We further connect it to the 1-D Schrödinger equation and interpretation of the rapidity as a Lorentz transformation and wavefunction in a potential well defined by the acceleration properties. 
While obtaining a real, and simple, Bogoliubov coefficient is a significant advance, we further derive an analytic particle spectrum (integrating over the beta coefficient squared), time evolution (through a Hankel transform), and energy (further integrating over the spectrum times frequency). An exact analytic time evolution solution for particle production from the quantum vacuum may be unique in the literature. No discrete wave packetization is required (although we also show those results, consistent with the analytic one). The techniques of accelerated boundary correspondences (ABCs) and moving mirrors continue to deliver intriguing insights into connections between acceleration (or surface gravity), particle creation, and information. Furthermore, these lead to interesting directions for research in the properties of black holes (for which they serve as analogs) and quantum information, entanglement, and gravity. ###### Acknowledgements. MG acknowledges funding from state-targeted program “Center of Excellence for Fundamental and Applied Physics” (BR05236454) by the Ministry of Education and Science of the Republic of Kazakhstan, and the FY2018-SGP-1-STMM Faculty Development Competitive Research Grant No. 090118FD5350 at Nazarbayev University. This work is supported in part by the Energetic Cosmos Laboratory. EL is supported in part by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under contract no. DE-AC02-05CH11231. ## Appendix A Necessity of Negativity We emphasize that negative energy flux is a common, and indeed required, component of certain acceleration dynamics. That this follows from unitarity is discussed in Good _et al._ (2020c) and references therein. Here we give two quick derivations. From Eq. 
(11) and the relations $f^{\prime}(v)=e^{-2\eta}$ and $\alpha(v)=\eta^{\prime}(v)\,e^{\eta}$, we can write $24\pi F(v)=-2e^{4\eta}\left[\eta^{\prime\prime}+(\eta^{\prime})^{2}\right]=-2e^{3\eta}\alpha^{\prime}(v)\,.$ (39) This immediately implies $-12\pi\int_{-\infty}^{\infty}dv\,e^{-3\eta}F(v)=\int_{-\infty}^{\infty}dv\,\frac{d\alpha}{dv}=\alpha\big{|}^{\infty}_{-\infty}\,.$ (40) Whenever the acceleration $\alpha$ vanishes asymptotically, as it does for any asymptotically inertial dynamics (or if it is time symmetric), the left hand side must be zero. Since $e^{-3\eta}$ is positive, $F(v)$ must have negative regions. This depends only on the conditions mentioned in the previous paragraph and not on the specific mirror trajectory used in this paper. One can also see this even more directly in terms of proper time $\tau$: $12\pi F(\tau)=-\alpha^{\prime}(\tau)\,e^{2\eta(\tau)}\,,$ (41) so $12\pi\int d\tau\,e^{-2\eta}F(\tau)=-\int d\tau\,\frac{d\alpha}{d\tau}\,.$ (42) ## Appendix B Zero-Energy Resonance The simple harmonic oscillator is the basis of many diverse physics areas. Here we consider a relation between particle radiation from an accelerated system and the oscillator equation. Let us adapt the usual form, $\ddot{\phi}(t)+\omega(t)\phi(t)=0$ (in the time domain) or $\phi^{\prime\prime}(x)+k(x)\phi(x)=0$ (in the space domain) and write it in terms of the light-cone coordinate retarded time $u=t-x$, $\psi^{\prime\prime}(u)+V(u)\,\psi(u)=0\,.$ (43) We allow the resonance frequency or spring constant to be spacetime dependent, and write it as $V(u)$ for reasons discussed below. We immediately have the consequence that $\int_{-\infty}^{+\infty}du\,V(u)\,\psi(u)=-\int du\,\frac{d\psi^{\prime}}{du}\,.$ (44) This looks quite similar to Eq. (42). If $\psi^{\prime}$ vanishes at asymptotically early and late times, $|u|\to\infty$, then we find that for positive $\psi$ the “potential” $V$ must have negative regions. Let us make the analogy more concrete. 
If $\psi(u)=e^{-\eta}$ then $-\psi^{\prime}(u)=\alpha(u)$, the acceleration (while $\psi^{\prime}(u)=-\alpha$, it is worth pointing out that $\psi(u)$ itself is the Lorentz transformation (LT) in retarded time from the un-tilded to the tilded boosted frame, $\tilde{u}=e^{-\eta}\,u$; here the LT acts like a wave function); so our constraint on $\psi^{\prime}$ vanishing at infinity is exactly our condition in Appendix A, and the asymptotically inertial case we treat in the main text. Note that indeed $\psi$ is always positive. Now the derivatives of $\eta$, and hence $\psi$, are also related through the Schwarzian in Eq. (10) to the energy flux $F(u)$ – which arises from the acceleration – through $V(u)\equiv 12\pi F(u)=-\frac{1}{2}\\{p(u),u\\}=\eta^{\prime}(u)^{2}-\eta^{\prime\prime}(u)\,.$ (45) Under these definitions, Eq. (44) is identical to Eq. (42). Thus again we see the “necessity of negativity”. The derivation in Appendix A relied on accelerating system dynamics while the one here arose from the simple harmonic oscillator equation. The harmonic oscillator can also be related to the 1-D Schrödinger equation $-\frac{\hbar^{2}}{2m}\psi^{\prime\prime}+V\psi=E\psi\,,$ (46) for a spacetime-dependent potential where the “spring constant” $k\leftrightarrow\frac{2m(E-V)}{\hbar^{2}}\,.$ (47) Absorbing the $\hbar$ and $m$ factors, and taking the zero energy case, we see we can rewrite the Schrödinger equation as Eq. (43). Hence our $V(u)=12\pi F(u)$ does act like a potential and $\psi(u)$ acts like a wave function. The moving mirror differential equation for energy flux, Eq. (45), and the zero-energy case with absorption of a negative sign into the definition of the potential, Eq. (46), corresponds to the physics of resonance transmission for a potential, $V(u)=V(-u)$, of a 1-D scattering threshold anomaly Senn (1988). 
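The zero-energy identification can be verified directly: with $\psi=e^{-\eta}$ one has $\psi^{\prime\prime}=(\eta^{\prime 2}-\eta^{\prime\prime})\psi=V\psi$, the sign-absorbed form noted above. A quick finite-difference check (our own; $\eta(u)=\frac{1}{2}\ln(\kappa^{2}u^{2}+1)$ is taken as one concrete rapidity profile):

```python
import numpy as np

kappa = 1.0
eta = lambda u: 0.5 * np.log(kappa**2 * u**2 + 1.0)   # a concrete rapidity profile
psi = lambda u: np.exp(-eta(u))                        # candidate wave function

u, h = np.linspace(-5, 5, 1001), 1e-4
eta_p  = (eta(u + h) - eta(u - h)) / (2*h)
eta_pp = (eta(u + h) - 2*eta(u) + eta(u - h)) / h**2
V = eta_p**2 - eta_pp                                  # Eq. (45): V = 12 pi F(u)

# zero-energy Schroedinger form with the sign absorbed into V: psi'' = V psi
psi_pp = (psi(u + h) - 2*psi(u) + psi(u - h)) / h**2
assert np.max(np.abs(psi_pp - V * psi(u))) < 1e-5
```

The identity holds for any rapidity profile, since $\psi^{\prime\prime}=(\eta^{\prime 2}-\eta^{\prime\prime})e^{-\eta}$ is an exact consequence of $\psi=e^{-\eta}$.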
For the particular trajectory of the main text, we have the asymptotic condition $\psi^{\prime}\to 0$ but to keep the wave function zero at infinity we perform a parity flip, $x\to-x$, on the mirror trajectory $f(v)$, Eq. (3), resulting in $p(u)=u+\kappa^{2}\frac{u^{3}}{3}\,.$ (48) With $2\eta(u)=\ln p^{\prime}(u)$, the rapidity $\eta(u)=\frac{1}{2}\ln(\kappa^{2}u^{2}+1)$, hence asymptotically $+\infty$ rather than $-\infty$ without the parity flip, i.e. the mirror approaches an observer located at $\mathscr{I}^{+}_{R}$ at the speed of light, instead of receding at the speed of light as is the case with Eq. (3). The wave function form is then $\psi(u)=\frac{1}{\sqrt{\kappa^{2}u^{2}+1}}\,,\qquad\psi(\pm\infty)=0\,,$ (49) plotted in Figure 9. The wave function is normalized by setting $\kappa=\pi$ so $\int_{-\infty}^{\infty}|\psi(u)|^{2}\operatorname{d}\\!{u}=1\,.$ (50) Plugging Eq. (48) into the Schwarzian relation, Eq. (10), gives $F(u)=\frac{\kappa^{2}(2\kappa^{2}u^{2}-1)}{12\pi\left(\kappa^{2}u^{2}+1\right)^{2}}\,,$ (51) which is PT symmetric under $u\to-u$. Phrasing this as the potential $V(u)=12\pi F(u)$ of the Schrödinger equation we see in Figure 9 how the wave function is localized within the potential well. Figure 9: The potential Eq. (45) with Eq. (51), and the wave function Eq. (49). $|\psi|^{2}$ is normalized according to Eq. (50) where $\kappa=\pi$. The potential maxima occur at $u_{m}=\pm\sqrt{2}/\kappa$ with maximum value $V_{m}(u_{m})=\kappa^{2}/3$; the zero crossings are at $u_{0}=\pm 1/(\kappa\sqrt{2})$. ## Appendix C Entanglement Entropy and the Speed of Light Entropy diverges because rapidity does, $S=-\eta/6$. Interestingly, a divergent information measure like entanglement entropy is, at first glance, seemingly at odds with the obvious unitarity of the dynamics as seen in the Penrose diagram. 
However, the entanglement-rapidity formula has a subtle caveat in that it was carefully derived Myrzakul _et al._ (2021); Fitkevich _et al._ (2020); Good and Ong (2015); Bianchi and Smerlak (2014); Chen and Yeom (2017) assuming unitarity a priori, and only in the cases where the entropy (rapidity) achieves a constant, non-infinite value in the far future. Since this is not the case for an asymptotic light speed moving mirror, the entropy-as-rapidity interpretation is not a good measure of unitarity Bianchi and Smerlak (2014) for such cases. This example highlights the need for caution: the entanglement-as-rapidity approach may not hold much utility for general motions that approach the speed of light, for which $\eta\to\infty$. ## References * DeWitt (1975) B. S. DeWitt, Phys. Rept. 19, 295 (1975). * Fulling and Davies (1976) S. A. Fulling and P. C. W. Davies, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 348, 393 (1976). * Davies and Fulling (1977) P. Davies and S. Fulling, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences A356, 237 (1977). * Walker and Davies (1982) W. R. Walker and P. C. W. Davies, Journal of Physics A: Mathematical and General 15, L477 (1982). * Good _et al._ (2016) M. R. R. Good, P. R. Anderson, and C. R. Evans, Phys. Rev. D 94, 065010 (2016), arXiv:1605.06635 [gr-qc] . * Good and Ong (2020) M. R. R. Good and Y. C. Ong, Eur. Phys. J. C 80, 1169 (2020), arXiv:2004.03916 [gr-qc] . * Good _et al._ (2020a) M. R. Good, J. Foo, and E. V. Linder, (2020a), arXiv:2006.01349 [gr-qc] . * Good _et al._ (2020b) M. R. R. Good, A. Zhakenuly, and E. V. Linder, Phys. Rev. D 102, 045020 (2020b), arXiv:2005.03850 [gr-qc] . * Liberati _et al._ (2000) S. Liberati, T. Rothman, and S. Sonego, Phys. Rev. D 62, 024005 (2000), arXiv:gr-qc/0002019 . * Good (2020) M. R. R. Good, Phys. Rev. D 101, 104050 (2020), arXiv:2003.07016 [gr-qc] . * Rothman (2000) T. Rothman, Phys. Lett. 
A 273, 303 (2000), arXiv:gr-qc/0006036 . * Foo and Good (2021) J. Foo and M. R. R. Good, JCAP 01, 019 (2021), arXiv:2006.09681 [gr-qc] . * Good _et al._ (2017) M. R. R. Good, K. Yelshibekov, and Y. C. Ong, JHEP 03, 013 (2017), arXiv:1611.00809 [gr-qc] . * Good _et al._ (2019) M. R. Good, Y. C. Ong, A. Myrzakul, and K. Yelshibekov, Gen. Rel. Grav. 51, 92 (2019), arXiv:1801.08020 [gr-qc] . * Good (2018) M. R. Good, Universe 4, 122 (2018). * Myrzakul and Good (2018) A. Myrzakul and M. R. Good, in _15th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories_ (2018) arXiv:1807.10627 [gr-qc] . * Good and Ong (2015) M. R. R. Good and Y. C. Ong, JHEP 07, 145 (2015), arXiv:1506.08072 [gr-qc] . * Good (2017) M. R. R. Good, _Reflecting at the Speed of Light_ (World Scientific, Singapore, 2017) arXiv:1612.02459 [gr-qc] . * Good _et al._ (2020c) M. R. Good, E. V. Linder, and F. Wilczek, Phys. Rev. D 101, 025012 (2020c), arXiv:1909.01129 [gr-qc] . * Good _et al._ (2020d) M. R. R. Good, E. V. Linder, and F. Wilczek, Modern Physics Letters A 35, 2040006 (2020d). * Good and Linder (2017) M. R. R. Good and E. V. Linder, Phys. Rev. D 96, 125010 (2017), arXiv:1707.03670 [gr-qc] . * Good _et al._ (2013) M. R. R. Good, P. R. Anderson, and C. R. Evans, Phys. Rev. D 88, 025023 (2013), arXiv:1303.6756 [gr-qc] . * Good and Linder (2018) M. R. Good and E. V. Linder, Phys. Rev. D 97, 065006 (2018), arXiv:1711.09922 [gr-qc] . * Good and Linder (2019) M. R. Good and E. V. Linder, Phys. Rev. D 99, 025009 (2019), arXiv:1807.08632 [gr-qc] . * Sørensen (2012) O. Sørensen, _Exact treatment of interacting bosons in rotating systems and lattices_ , Ph.D. thesis (2012). * Birrell and Davies (1984) N. Birrell and P. Davies, _Quantum Fields in Curved Space_, Cambridge Monographs on Mathematical Physics (Cambridge Univ. Press, Cambridge, UK, 1984). * Carlitz and Willey (1987) R. D. Carlitz and R. S. Willey, Phys. 
Rev. D 36, 2327 (1987). * Walker (1985) W. R. Walker, Phys. Rev. D 31, 767 (1985). * Ford and Roman (1999) L. Ford and T. A. Roman, Phys. Rev. D 60, 104018 (1999), arXiv:gr-qc/9901074 . * Ford and Roman (1995) L. Ford and T. A. Roman, Phys. Rev. D 51, 4277 (1995), arXiv:gr-qc/9410043 . * Nikishov (2003) A. I. Nikishov, J. Exp. Theor. Phys. 96, 180 (2003), arXiv:hep-th/0207085 . * Hawking (1975) S. Hawking, Commun. Math. Phys. 43, 199 (1975). * Good (2013) M. R. Good, Int. J. Mod. Phys. A 28, 1350008 (2013), arXiv:1205.0881 [gr-qc] . * Fabbri and Navarro-Salas (2005) A. Fabbri and J. Navarro-Salas, _Modeling Black Hole Evaporation_ (Imperial College Press, 2005). * Good and Abdikamalov (2020) M. Good and E. Abdikamalov, Universe 6, 131 (2020), arXiv:2008.08776 [gr-qc] . * Senn (1988) P. Senn, American Journal of Physics 56, 916 (1988). * Myrzakul _et al._ (2021) A. Myrzakul, C. Xiong, and M. R. R. Good, (2021), arXiv:2101.08139 [gr-qc] . * Fitkevich _et al._ (2020) M. Fitkevich, D. Levkov, and Y. Zenkevich, JHEP 20, 184 (2020), arXiv:2004.13745 [hep-th] . * Bianchi and Smerlak (2014) E. Bianchi and M. Smerlak, Phys. Rev. D 90, 041904 (2014), arXiv:1404.0602 [gr-qc] . * Chen and Yeom (2017) P. Chen and D.-h. Yeom, Phys. Rev. D 96, 025016 (2017), arXiv:1704.08613 [hep-th] .
# Equilibrium shapes and floatability of static and vertically vibrated heavy liquid drops on the surface of a lighter fluid Andrey Pototsky Department of Mathematics, Swinburne University of Technology, Hawthorn, Victoria, 3122, Australia <EMAIL_ADDRESS> Alexander Oron Department of Mechanical Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel Michael Bestehorn Institute of Physics, Brandenburg University of Technology, 03013 Cottbus-Senftenberg, Germany ###### Abstract A small drop of a heavier fluid may float on the surface of a lighter fluid, supported by surface tension forces. In equilibrium, the drop assumes a radially symmetric shape with a circular triple-phase contact line. We show theoretically and experimentally that such a floating liquid drop with a sufficiently small volume has two distinct stable equilibrium shapes: one with a larger and one with a smaller radius of the triple-phase contact line. Next, we experimentally study the floatability of a less viscous water drop on the surface of a more viscous and less dense oil, subjected to a low-frequency (Hz-order) vertical vibration. We find that in a certain range of amplitudes, vibration helps heavy liquid drops to stay afloat. The physical mechanism of the increased floatability is explained by the horizontal elongation of the drop driven by subharmonic Faraday waves. The average length of the triple-phase contact line increases as the drop elongates, which leads to a larger average lifting force produced by the surface tension. ## I Introduction Floatation of a small liquid droplet on the surface of a less dense carrier fluid is a striking manifestation of the strength of the tensile forces that exist between immiscible liquid and gaseous phases. When in equilibrium, the drop assumes a radially symmetric shape with a circular triple-phase contact line that separates the drop into a sessile upper cap and a pendant lower part. 
The quintessential question of determining the largest possible drop volume capable of staying afloat was first formulated and studied over 40 years ago (Hartland & Burri, 1976). From the theoretical point of view, a stationary shape of a liquid drop can be found by solving the minimal surface problem for liquid menisci, subject to a zero net force condition acting on the triple-phase contact line (Princen, 1963; Princen & Mason, 1965). Due to its nonlinear nature and the complexity of the boundary conditions, this problem can only be approached numerically for a particular combination of fluids. Prior to the modern era of personal computers, the computation of stationary drop shapes was a highly tedious task that involved manual manipulation of the tabulated solutions of the minimal surface equations. This method was used by Hartland & Burri (1976) to determine the maximum possible drop volume as a function of the fluid densities and interfacial tensions. Extending the original work (Hartland & Burri, 1976), subsequent studies focused on experimental verification of the floatability of water drops on oil surfaces (Phan et al., 2012; Phan, 2014), addressed the role of the line tension (George et al., 2016; Bratukhin et al., 2001) and developed a simplified model by assuming that the upper sessile cap of the drop is approximately flat (Bratukhin & Makarov, 1994). Here we use a numerical continuation method and conduct a series of experiments to reveal a previously unreported multistability of sufficiently small liquid drops floating on the surface of a less dense bulk fluid. Depending on the deposition method, a small quantity of a heavy fluid may form a floating drop with two different equilibrium radially symmetric shapes: one with a smaller and one with a larger radius of the triple-phase contact line. 
It should be emphasized that the existence of two different equilibrium shapes with a fixed drop volume was first mentioned in (Hartland & Burri, 1976), but the shape with the smaller value of the contact radius was discarded as being unstable. We demonstrate here theoretically and experimentally that both shapes are, in fact, stable for drop volumes below a certain critical value. Our next goal is to experimentally study the dynamic floatability of a vertically vibrated water drop on the surface of a more viscous and less dense oil bath. Contrary to its anticipated destructive effect, external vibration is known to suppress the Rayleigh-Taylor instability in stratified liquid films and in liquid films on the underside of a solid plate (Wolf, 1969, 1970; Lapuerta et al., 2001; Sterman-Cohen et al., 2017; Bestehorn & Pototsky, 2016; Pototsky & Bestehorn, 2016). In a series of recent experiments, vertical shaking was shown to create levitating layers of a heavy fluid up to 20 cm in width floating on top of a lighter fluid (Apffel et al., 2020). A rather remarkable phenomenon of upside-down floatability was revealed, when a solid body positioned at the lower interface of a levitating layer acquired a stable buoyancy position under the action of vertical vibration (Apffel et al., 2020). Earlier, we developed a long-wave hydrodynamic model to investigate the saturation of the Rayleigh-Taylor instability in isolated vertically vibrated two-dimensional liquid drops on the surface of a finite-thickness carrier liquid film (Pototsky et al., 2019). In the absence of vibration, a small quantity of a heavy fluid, deposited on the surface of a less dense carrier film, stretches into a liquid column. The tip of the extending column eventually reaches the solid substrate and the carrier film ruptures. We have shown that an external vertical vibration prevents film rupture at non-zero Reynolds numbers, leading to the formation of a stable floating drop. 
Motivated by these findings, we conduct here a series of experiments with water drops on a vibrated oil surface and demonstrate that in a certain range of the vibration amplitudes and frequencies, the floatability of the drop is enhanced, allowing a larger quantity of water to stay afloat. In this regime, the drop elongates horizontally, driven by the subharmonic surface Faraday waves that develop in its upper sessile cap (Pucci et al., 2011, 2013). The time-averaged total length of the triple-phase contact line increases for the elongated drop, leading to a larger lifting force generated by tensile forces. This new dynamical regime exists in a narrow window of the vibration amplitudes: above the onset of the Faraday waves in the drop and below the onset of the Faraday waves on the more viscous oil surface. An even stronger vibration destroys the balance of vertical forces and the drop sinks. The paper is organized as follows. Section II presents a summary of the analysis of equilibrium drop shapes in the static system in the case when the line tension is neglected. The set of minimal surface equations for each of the three interfaces is written in the form of a boundary-value problem with integral constraints that take into account the vertical force balance and satisfy the Neumann triangle condition at the triple-phase contact line. We use the method of numerical continuation implemented with AUTO (Doedel et al., 2007) to continue the analytically known solution that corresponds to a spherical drop in the case of zero gravity towards non-zero values of $g$. For any fixed drop volume at sufficiently small gravity $g$, there exist two different equilibrium shapes: one with a larger and one with a smaller radius of the circular triple-phase contact line. In Section III, we use the Helmholtz free energy to study the static stability of the drop and find that both equilibrium shapes can be stable for sufficiently small drops. 
This conclusion is experimentally verified using a small $<10$ $\mu$L water drop deposited onto the surface of a commercial vegetable oil. The effect of the vertical vibration is experimentally studied in Section IV. ## II Stationary floating drop We consider a sufficiently small drop of a heavy fluid $(1)$ capable of floating at the interface between a lighter fluid $(2)$ and the ambient gas $(g)$, as shown in Fig.1(a). In equilibrium, the drop is radially symmetric. The shapes of the three interfaces that connect at the triple-phase contact line are found from the balance between the Laplace and hydrostatic pressures. A recent overview of basic phenomena related to floating droplets, referred to as liquid lenses, at liquid-gas interfaces, including wetting, dewetting and hydrodynamic instabilities, can be found in (Nepomnyashchy, 2021). The upper part of the drop, above the horizontal dashed line, represents a sessile drop. Using a standard set of coordinates (Lohnstein, 1906; Boucher et al., 1975), the upper tip of the drop is taken as the origin of the local coordinate system, with the $z_{1}$-axis pointing downward. Introducing an angle $\phi_{1}$ between the tangent to the drop profile at point $(r_{1},z_{1})$ and the horizontal, the pressure balance yields (Lohnstein, 1906; Boucher et al., 1975) $\displaystyle\sigma_{1g}\left(\frac{d\phi_{1}}{ds_{1}}+\frac{\sin(\phi_{1})}{r_{1}}\right)=\rho_{1}gz_{1}+\frac{2\sigma_{1g}}{R_{1}},$ (1) where $s_{1}$ is the arc length of the drop profile from the origin and $R_{1}$ is the radius of curvature at the upper tip. Note that the gas density is neglected in Eqs. (1) and (2) below. Similarly, the lower part of the drop, below the horizontal dashed line, represents a pendant drop. The lower tip of the drop is taken as the origin of the local coordinate system, with the $z_{2}$-axis pointing upward. 
At point $\left(r_{2},z_{2}\right)$, $\phi_{2}$ is the angle between the tangent to the drop profile and the horizontal, and $s_{2}$ is the arc length measured from the lower tip. Similarly to Eq. (1), the pressure balance can be written as (Boucher et al., 1975) $\displaystyle\sigma_{12}\left(\frac{d\phi_{2}}{ds_{2}}+\frac{\sin(\phi_{2})}{r_{2}}\right)=(\rho_{2}-\rho_{1})gz_{2}+\frac{2\sigma_{12}}{R_{2}},$ (2) where $R_{2}$ is the radius of curvature at the lower tip. In addition to Eqs. (1) and (2), the coordinates $(r_{i},z_{i})$, $i=1,2$, satisfy $\displaystyle\frac{dr_{i}}{ds_{i}}=\cos(\phi_{i}),~{}~{}\frac{dz_{i}}{ds_{i}}=\sin(\phi_{i}).$ (3) Finally, the meniscus around the drop is best described in terms of the height $h(r)$, as shown in Fig.1(a). The requirement that the pressure is constant at any given vertical level leads to (Landau & Lifshitz, 1987) $\displaystyle\sigma_{2g}\left(\frac{h^{\prime\prime}}{(1+h^{\prime 2})^{3/2}}+\frac{h^{\prime}}{r\sqrt{1+h^{\prime 2}}}\right)=\rho_{2}gh-\rho_{2}gh_{0},$ (4) where a prime stands for the derivative with respect to $r$, $h_{0}$ denotes the height of the meniscus far away from the drop and $r$ is the radial coordinate measured from the vertical symmetry axis of the system. The system of Eqs. (1) - (4) must be solved as a boundary value problem, supplemented with the following boundary and integral constraints. Equations (1) and (3) with $i=1$ are solved on the interval $0\leq s_{1}\leq S_{1}$, where $S_{1}$ is the arc length of the upper part of the drop measured from the upper tip to the triple-phase contact line. Similarly, Eqs. (2) and (3) with $i=2$ are solved on the interval $0\leq s_{2}\leq S_{2}$, where $S_{2}$ is the arc length of the lower part of the drop measured from the lower tip to the triple-phase contact line. 
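Once the tip radius of curvature $R_{1}$ is prescribed, the sessile-cap equations (1) and (3) can be integrated directly as an initial-value problem. Below is a minimal Python sketch (a shooting-style integration, not the authors' AUTO-based continuation); the $0/0$ of $\sin(\phi_{1})/r_{1}$ at the tip is avoided by starting from the small-$s$ series $\phi_{1}\approx s_{1}/R_{1}$. The surface tension and density follow Fig. 1(b), while $R_{1}=1$ mm is an illustrative choice. At $g=0$ the integration reproduces the zero-gravity spherical cap, $\phi_{1}(s_{1})=s_{1}/R_{1}$.

```python
import math

def sessile_profile(R1, sigma=0.045, rho=1000.0, g=9.8, s_max=1.0e-3, n=2000):
    """Integrate Eqs. (1) and (3) for the upper (sessile) cap with RK4.

    State y = (phi, r, z); the tip singularity at s = 0 is handled by
    starting from the small-s series phi ~ s/R1, r ~ s, z ~ s^2/(2 R1).
    """
    def rhs(y):
        phi, r, z = y
        dphi = rho * g * z / sigma + 2.0 / R1 - math.sin(phi) / r
        return (dphi, math.cos(phi), math.sin(phi))

    s0 = s_max / n
    y = (s0 / R1, R1 * math.sin(s0 / R1), R1 * (1.0 - math.cos(s0 / R1)))
    h = (s_max - s0) / n
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
        k3 = rhs(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
        k4 = rhs(tuple(yi + h * ki for yi, ki in zip(y, k3)))
        y = tuple(yi + h / 6.0 * (a + 2 * b + 2 * c + d)
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4))
    return y  # (phi, r, z) at s = s_max

# Sanity check: at g = 0 the cap is a spherical arc with phi(s) = s/R1
R1 = 1.0e-3  # 1 mm tip radius of curvature (illustrative)
phi, r, z = sessile_profile(R1, g=0.0, s_max=R1)
print(round(phi, 4), round(r / R1, 4))  # ~1.0 and ~sin(1) = 0.8415
```

The full boundary-value problem additionally requires matching the pendant part and the meniscus through Eqs. (5)-(7), which is what the continuation solver handles.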
At the tips of the drop, we set the initial conditions $\displaystyle r_{i}(0)=z_{i}(0)=\phi_{i}(0)=0,\,\,i=1,2.$ (5) The requirement of continuity of the interface yields $\displaystyle r_{1}(S_{1})=r_{2}(S_{2})=R,~{}~{}{\rm{and~{}~{}}}z_{1}(S_{1})=z_{2}(S_{2}),$ (6) where $R$ denotes the contact radius at the triple-phase contact line. Equation (4) is solved on the interval $R\leq r\leq\infty$ with $h(R)=0$, $h^{\prime}(\infty)=0$ and $h(\infty)=h_{0}$. The Neumann triangle, formed by the three tensile forces ${\bm{\sigma}}_{12}+{\bm{\sigma}}_{1g}+{\bm{\sigma}}_{2g}=0$, as shown in Fig.1(a), reflects the requirement that the net force acting on a small element of the contact line vanishes in equilibrium. Introducing the contact angles $\Phi_{i}=\phi_{i}(S_{i}),~{}i=1,2$ between phase $(i)$ and the horizontal at the triple-phase contact line, Neumann’s triangle is replaced by two scalar equations $\displaystyle 0$ $\displaystyle=$ $\displaystyle\sigma_{1g}\cos{(\Phi_{1})}+\sigma_{12}\cos{(\Phi_{2})}-\sigma_{2g}\frac{1}{\sqrt{1+\left[h^{\prime}(R)\right]^{2}}},$ $\displaystyle 0$ $\displaystyle=$ $\displaystyle\sigma_{1g}\sin{(\Phi_{1})}-\sigma_{12}\sin{(\Phi_{2})}+\sigma_{2g}\frac{h^{\prime}(R)}{\sqrt{1+\left[h^{\prime}(R)\right]^{2}}}.$ (7) Figure 1: (a) A liquid drop of a heavy fluid (1) floating at the interface between a lighter fluid (2) and gas (g). The equilibrium Neumann triangle at the triple-phase contact line expresses the requirement that the net force acting on a small element of the contact line is zero, i.e. ${\bm{\sigma}}_{12}+{\bm{\sigma}}_{1g}+{\bm{\sigma}}_{2g}=0$. (b) Variation of the total drop height $H$ with the gravity constant $g$ for a $50$ $\mu$L water drop floating at the oil-air interface. The fluid parameters are: $\rho_{1}=1000$ kg/m³, $\rho_{2}=916$ kg/m³, $\sigma_{1g}=0.045$ N/m, $\sigma_{2g}=0.022$ N/m, $\sigma_{12}=0.032$ N/m (Phan et al., 2012). The drop profiles at points $(1,2,3)$ are shown in the right panel. 
Continuation starts at point $P_{1}$, which corresponds to the analytical solution given by Eqs. (12). The solution at the point $P_{2}$ corresponds to a circular drop fully submerged in fluid $(2)$ and touching the interface $2-g$ at a point. The volumes $V_{i},~{}i=1,2$ of the sessile and pendant parts are given by $\displaystyle V_{i}=\int_{0}^{S_{i}}\pi r_{i}^{2}\sin(\phi_{i})ds_{i}.$ (8) The total buoyancy force $F_{b}$ experienced by the drop is given by the weight of fluid $2$ displaced by the volume of the pendant part, $\rho_{2}gV_{2}$, and the weight of fluid $2$ in the cylindrical volume above the contact line level (the dashed line in Fig. 1(a)) with the radius $R$ and the meniscus height $h_{0}$, i.e. $\displaystyle F_{b}=\rho_{2}gV_{2}+\rho_{2}g\pi R^{2}h_{0}.$ (9) Finally, the vertical force balance implies that the weight of the drop $\rho_{1}g(V_{1}+V_{2})$ is balanced by the buoyancy force $F_{b}$ and the vertical component of the surface tension force exerted by fluid $(2)$ onto the drop. The latter is given by $\displaystyle 2\pi\sigma_{2g}Rh^{\prime}(R)\left[1+\left[h^{\prime}(R)\right]^{2}\right]^{-1/2}$ so that the balance of vertical forces can be written in the form $\displaystyle\rho_{1}g(V_{1}+V_{2})=F_{b}+2\pi\sigma_{2g}R\frac{h^{\prime}(R)}{\sqrt{1+\left[h^{\prime}(R)\right]^{2}}}.$ (10) We use the numerical continuation method AUTO (Doedel et al., 2007) to solve the above boundary value problem with an integral condition given by the requirement that the total volume $V_{1}+V_{2}$ is fixed. Details of the numerical continuation method are summarized in Appendix A. As a starting point of the numerical continuation, we use the analytical solution in the case of zero gravity, when the upper and the lower parts of the drop are both spherical and the interface between fluid $2$ and gas is undeformed. By setting $h^{\prime}(R)=0$, we obtain from Eqs. 
(7) the contact angles $\Phi_{i}$ $\displaystyle\Phi_{1}$ $\displaystyle=$ $\displaystyle\cos^{-1}\left(\frac{\sigma_{1g}^{2}+\sigma_{2g}^{2}-\sigma_{12}^{2}}{2\sigma_{1g}\sigma_{2g}}\right),$ $\displaystyle\Phi_{2}$ $\displaystyle=$ $\displaystyle\cos^{-1}\left(\frac{\sigma_{12}^{2}+\sigma_{2g}^{2}-\sigma_{1g}^{2}}{2\sigma_{12}\sigma_{2g}}\right).$ (11) It is easy to see that the solution of Eqs. (1) - (3) at zero gravity is $\displaystyle\phi_{i}=\frac{s_{i}}{R_{i}},~{}~{}r_{i}=R_{i}\sin\left(\frac{s_{i}}{R_{i}}\right),~{}~{}z_{i}=R_{i}\left[1-\cos\left(\frac{s_{i}}{R_{i}}\right)\right],~{}~{}0\leq s_{i}\leq R_{i}\Phi_{i},$ (12) where the radii $R_{i}$ of the sessile and pendant parts are determined by the contact angles $\Phi_{i}$ and the horizontal radius $R$ at the level of the contact line, i.e. $R_{i}\sin(\Phi_{i})=R$. The volumes of the sessile and pendant parts are found from Eq. (8): $V_{i}=\pi R_{i}^{3}\left(\frac{2}{3}+\frac{1}{3}\cos^{3}(\Phi_{i})-\cos(\Phi_{i})\right)$. First, we continue the solution Eqs. (12) in the parameter $g$ with the fluid parameters taken from (Phan et al., 2012). The total height of the drop $H=z_{1}(S_{1})+z_{2}(S_{2})$ is displayed as a function of $g$ in Fig.1(b). The continuation starts at the point $P_{1}$ that corresponds to the analytical solution Eqs. (12). As $g$ increases, the drop profile changes, as shown by the three selected solutions $1,2,3$ in the right panel of Fig. 1. No stationary solutions exist past the saddle-node bifurcation point (solution $2$) in $g$. For any $g$ below the saddle-node point, there exist two stationary drop profiles: one with a lower value of the radius $R$ (solution $1$) and one with a larger value of $R$ (solution $3$). At vanishingly small $g$, the second solution (point $P_{2}$) collapses to a spherical drop fully submerged in fluid $2$ that only touches the interface $2-g$ at one point. Figure 2: (a) Two water drops with identical volume $V=90$ $\mu$L at the oil-air interface. 
(b) Contact line radius $R$ as a function of the volume $V$ at $g=9.8$ m/s². Circles indicate the location of the solutions in (a). After the value of $g=9.8$ m/s² is reached, the solution is followed using AUTO (Doedel et al., 2007) with the drop volume $V$ as a continuation parameter. The radius $R$ of the contact line is shown as a function of $V$ in Fig.2(b). Two different drops with the identical volume $V=90$ $\mu$L are shown in Fig.2(a). It is remarkable that multiple stationary profiles of floating drops have never been studied in detail. In the early study (Hartland & Burri, 1976), the existence of multiple profiles was mentioned, but the solutions with the smaller value of $R$ were declared unstable. The reason given for the instability (Hartland & Burri, 1976) was a qualitative comparison of a floating drop with a heavy rigid sphere capable of floating on the surface of a fluid that does not wet the surface of the sphere (Hartland & Robinson, 1971). Indeed, in the case of a rigid sphere, the stability condition is derived by looking at the position of the center of mass $z_{c}(g)$ of the sphere as a function of the gravity constant $g$. Thus, if $g$ is slightly increased (decreased), the center of mass of a stable configuration must descend (ascend), so that after $g$ is set back to its original value, the sphere will bounce back up (down) to recover the original stationary position. It was shown (Hartland & Robinson, 1971) that solutions with a larger radius of the contact line are stable, while those with a smaller radius are unstable. However, to the best of our knowledge, no such calculations are available for a floating liquid drop. A comprehensive stability analysis that takes into account static and dynamic perturbations is still missing. 
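For reference, the zero-gravity contact angles of Eq. (11) and the spherical-cap volumes following Eq. (8) are straightforward to evaluate; a quick numerical sketch with the fluid parameters of Fig. 1(b) (the 1 mm contact-line radius below is an illustrative choice, not a value from the paper):

```python
import math

# Fluid parameters from Fig. 1(b) (Phan et al., 2012), in N/m
s1g, s2g, s12 = 0.045, 0.022, 0.032

# Zero-gravity contact angles, Eq. (11)
Phi1 = math.acos((s1g**2 + s2g**2 - s12**2) / (2 * s1g * s2g))
Phi2 = math.acos((s12**2 + s2g**2 - s1g**2) / (2 * s12 * s2g))
print(math.degrees(Phi1), math.degrees(Phi2))  # ~41.4 and ~111.5 degrees

# Spherical-cap volumes of the zero-g solution for contact radius R
R = 1.0e-3  # hypothetical 1 mm contact-line radius
V = 0.0
for Phi in (Phi1, Phi2):
    Ri = R / math.sin(Phi)  # R_i sin(Phi_i) = R
    V += math.pi * Ri**3 * (2 / 3 + math.cos(Phi)**3 / 3 - math.cos(Phi))
print(V * 1e9)  # total zero-g drop volume in microliters, ~4.6
```

This places a 1 mm contact radius in the few-microliter range, consistent with the small drops discussed in Section III.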
## III Static stability of a floating drop We approach the stability of a floating drop from the point of view of the Helmholtz free energy, by considering so-called static axisymmetric perturbations of the drop profile that satisfy the Laplace pressure balance. Such a static stability analysis neglects the motion of the fluids and, therefore, can only be considered as a precursor to the true dynamic stability with respect to non-axisymmetric perturbations. Note that a similar method has been applied to study the static stability of pendant drops hanging from a solid plate (Padday & Pitt, 1973). We take the level of the carrier fluid $2$ far away from the drop as the zero level for the vertical axis directed upwards, as shown in Fig.3. The excess total energy $E_{t}$ of the system is constructed by not counting the infinite energy of the semi-infinite fluid $2$ with a flat fluid-gas interface in the absence of the drop. Figure 3: The excess potential energy $U_{e}$ of the fluid bath is given by the potential energy of the shaded part filled with fluid $2$ with negative density $-\rho_{2}$. The excess surface energy is $\displaystyle E_{s}=\sigma_{12}S_{12}+\sigma_{1g}S_{1g}+\sigma_{2g}S_{2g},$ (13) where $S_{12}=\int_{0}^{S_{2}}2\pi r_{2}ds_{2}$ and $S_{1g}=\int_{0}^{S_{1}}2\pi r_{1}ds_{1}$ are the surface areas of the fluid $1$-fluid $2$ and fluid $1$-gas interfaces and $S_{2g}=\int_{R}^{\infty}2\pi r(\sqrt{1+h^{\prime 2}}-1)dr$ is the excess surface area of the fluid $2$-gas interface. The excess potential energy of the system is $\displaystyle E_{p}=Mgz_{c}+U_{e},$ (14) where $M=\rho_{1}(V_{1}+V_{2})$ is the total mass of the drop, $z_{c}$ is the vertical coordinate of the centre of mass of the drop and $U_{e}$ is the excess potential energy of the fluid bath. 
$U_{e}$ can be further represented in the form $\displaystyle U_{e}=-g\rho_{2}V_{e}Z_{e},$ (15) where $V_{e}$ is the volume of the shaded part in Fig.3 and $Z_{e}$ is the coordinate of the center of mass of the shaded part in Fig.3, filled with a homogeneous density. The excess total energy $E_{t}$ is thus given by $E_{t}=E_{p}+E_{s}$. Note that the values $z_{c},Z_{e},V_{e}$ and $E_{s}$ depend on $g$, thus $z_{c}=z_{c}(g),~{}Z_{e}=Z_{e}(g),~{}V_{e}=V_{e}(g),~{}E_{s}=E_{s}(g).$ (16) Now let us assume that the gravity constant has been slightly changed to $g+\delta g$. This can be achieved in practice by placing the system in a lift that accelerates either upward or downward, or alternatively, as a result of a displacing disturbance of the system that is localized in time. The drop will assume a new stationary shape that corresponds to the new value of $g+\delta g$. Now assume that the lift (or the disturbance) suddenly stops and the gravity constant is instantaneously set back to the original value $g$. At this moment of time, the drop and meniscus still have their new shapes, so that the total energy of the system is $\displaystyle\tilde{E}_{t}=Mgz_{c}(g+\delta g)-g\rho_{2}V_{e}(g+\delta g)Z_{e}(g+\delta g)+E_{s}(g+\delta g).$ (17) The system is stable if $\tilde{E}_{t}>E_{t}$, i.e. $\displaystyle\tilde{E}_{t}-E_{t}$ $\displaystyle=$ $\displaystyle Mgz_{c}(g+\delta g)-Mgz_{c}(g)$ (18) $\displaystyle-$ $\displaystyle g\rho_{2}V_{e}(g+\delta g)Z_{e}(g+\delta g)+g\rho_{2}V_{e}(g)Z_{e}(g)$ $\displaystyle+$ $\displaystyle E_{s}(g+\delta g)-E_{s}(g)>0.$ Expanding Eq. (18) into powers of $\delta g$, we obtain the leading term $\displaystyle\delta E_{t}=\left(Mg\frac{dz_{c}(g)}{dg}-g\rho_{2}\frac{d(V_{e}Z_{e})}{dg}+\frac{dE_{s}(g)}{dg}\right)\delta g=0,$ (19) which must vanish for arbitrarily small $\delta g$, because the drop shape at the original value of $g$ is in equilibrium. 
Differentiating $\delta E_{t}/\delta g=0$ with respect to $g$, we obtain $\displaystyle\frac{d}{dg}\left(\frac{\delta E_{t}}{\delta g}\right)=M\frac{dz_{c}(g)}{dg}+Mg\frac{d^{2}z_{c}(g)}{dg^{2}}-\rho_{2}\frac{d(V_{e}Z_{e})}{dg}-g\rho_{2}\frac{d^{2}(V_{e}Z_{e})}{dg^{2}}+\frac{d^{2}E_{s}(g)}{dg^{2}}=0.~{}~{}$ (20) The second term in the expansion of Eq. (18) in terms of $\delta g$ is $\displaystyle\delta^{2}E_{t}=\left(Mg\frac{d^{2}z_{c}(g)}{dg^{2}}-g\rho_{2}\frac{d^{2}(V_{e}Z_{e})}{dg^{2}}+\frac{d^{2}E_{s}(g)}{dg^{2}}\right)\delta g^{2}>0.$ (21) Equation (20) can now be used to remove the second derivative terms in Eq. (21) and to finally obtain the static stability condition $\displaystyle\rho_{2}\frac{d(V_{e}Z_{e})}{dg}-M\frac{dz_{c}(g)}{dg}>0.$ (22) Equation (22) is now recast in the form $df(g)/dg<0$, where $\displaystyle f(g)=z_{c}(g)-\frac{\rho_{2}V_{e}(g)Z_{e}(g)}{\rho_{1}V},$ (23) and $V$ is the fixed drop volume. The function $f(g)$ can be expressed in terms of the variables $z_{i}$, $\phi_{i}$, $r_{i}$ and $h(r)$ from Eqs. (1) - (3), as shown in Appendix B. Figure 4: The function $f(g)$ given by Eq. (23), determined for the floatation of $V=50$, $20$ and $10$ $\mu$L water drops on oil with the parameters as in Fig.1(b). Stable (unstable) solutions are marked by the solid (dashed) lines. The zero gravity solution given by Eqs. (12) corresponds to the larger value of $f$ at $g=0$. For each volume, the solutions on the upper branch have a larger contact radius $R$. Drops smaller than $20$ $\mu$L have two stable floatation shapes at $g=9.8$ m/s²: one with a larger and one with a smaller value of the contact radius. The function $f(g)$ from Eq. (23) is shown in Fig. 4 for three different volumes $V=50$, $20$ and $10$ $\mu$L of a water drop on an oil surface with fluid parameters as in Fig.1(b). Statically stable shapes correspond to $df/dg<0$, as indicated by the solid curves, whereas statically unstable states are marked by the dashed parts of the curves. 
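The reduction from Eqs. (19)-(21) to the stability condition, Eq. (22), can be verified symbolically; a sketch with sympy, writing VZ as a shorthand for the product $V_{e}(g)Z_{e}(g)$:

```python
import sympy as sp

g, M, rho2 = sp.symbols('g M rho_2', positive=True)
zc = sp.Function('z_c')(g)   # centre of mass of the drop
VZ = sp.Function('VZ')(g)    # shorthand for the product V_e(g) * Z_e(g)
Es = sp.Function('E_s')(g)   # excess surface energy

# Coefficient of delta g in Eq. (19); it vanishes in equilibrium
first = M * g * zc.diff(g) - g * rho2 * VZ.diff(g) + Es.diff(g)

# Eq. (20) is d(first)/dg = 0
eq20 = first.diff(g)

# Coefficient of delta g^2 in Eq. (21)
second = M * g * zc.diff(g, 2) - g * rho2 * VZ.diff(g, 2) + Es.diff(g, 2)

# With eq20 = 0 imposed, 'second' reduces to rho2 (V_e Z_e)' - M z_c',
# so positivity of 'second' is exactly the condition of Eq. (22)
residual22 = sp.simplify(
    second - eq20 - (rho2 * VZ.diff(g) - M * zc.diff(g)))
print(residual22)  # 0
```

The vanishing residual confirms that, once Eq. (20) holds, the sign of the second-order coefficient in Eq. (21) is controlled entirely by $\rho_{2}\,d(V_{e}Z_{e})/dg-M\,dz_{c}/dg$.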
For each volume, the zero-gravity solution given by Eqs. (12) has a larger value of $f(g=0)$. At any fixed value of $g$ below the saddle-node bifurcation point, the solution with larger (smaller) values of the contact radius $R$ belongs to the upper (lower) branch of $f(g)$. At terrestrial gravity of $g=9.8$ m/s², as indicated by the vertical line in Fig. 4, larger drops with the volume $V>20$ $\mu$L have only one stable shape, with a larger value of $R$ (upper branch). However, smaller drops with the volume $V<20$ $\mu$L have two stable shapes: one with a larger (upper branch) and one with a smaller (lower branch) contact radius $R$. Figure 5: Two stable shapes of a water drop with approximately identical volume $5\pm 0.5$ $\mu$L floating on the surface of a commercial vegetable oil. The thick solid line is obtained by fitting the experimental drop profile with a pendant drop solution of Eqs. (2), (3) with the fluid parameters as in Fig.1 and the fitted value of the radius of curvature at the lower tip $R_{2}\approx 1.13$ mm in both cases. The shape with a smaller contact radius (a) is obtained by depositing a drop from a pipette directly onto the oil surface. The shape with the larger contact radius (b) is obtained by depositing the drop from the pipette onto the vertical wall of the container and allowing the drop to slowly slide down the wall to touch the oil surface. In order to experimentally verify the existence of two stable floatation shapes, we use two different methods to deposit a small quantity of water $<10$ $\mu$L onto the surface of a commercial vegetable oil. To obtain a shape with a smaller contact radius, we use a technique similar to that of (Phan et al., 2012): namely, a pendant water drop with the maximal possible volume is produced, freely hanging in air from a pipette with a nozzle diameter of $2.5$ mm. 
After the drop is carefully brought into contact with the oil surface, it separates from the pipette and starts to float on the oil surface, as shown in Fig.5(a). In the second deposition method, we produce a drop with a similar volume hanging freely from a pipette. The drop is carefully brought into contact with a vertical wall of the polystyrene container filled with oil. The drop slowly slides down the wall under the action of gravity, and eventually comes into contact with the meniscus formed by the oil. The spreading coefficient of water on oil is generally larger than that of water on polystyrene. Therefore, the drop is pulled by capillary forces away from the container wall. This process creates the second stable shape, with a larger contact radius, as shown in Fig.5(b). In both cases, the estimated volume of the water drop is approximately the same, $V\approx 5\pm 0.5$ $\mu$L. The drops are photographed and the contours of their pendant parts submerged in oil are extracted using Matlab. Note that the contour of the upper cap of the drop is difficult to extract, as it is obstructed by the meniscus formed by the oil with the container wall. The experimentally obtained profiles are fitted by the pendant drop solution of Eqs. (2) and (3) with fluid parameters as in Fig.1. We use the radius of curvature at the lower tip $R_{2}$ as the fitting parameter and obtain the value of $R_{2}\approx 1.13$ mm for both shapes. The fitted solution is shown by the thick solid line in Fig.5. The volumes of the submerged parts, determined using the fitted solution, are found to be $6$ $\mu$L in Fig.5(a) and $5$ $\mu$L in Fig.5(b). The difference in volumes is due to the unaccounted volume of the obstructed upper caps. ## IV Vertical vibration helps liquid drops to stay afloat Figure 6: Phase diagram separating floating circular, floating elongated and sinking water drops vibrated at $60$ Hz in olive oil. 
Symbols indicate the experimentally determined critical acceleration $a/g$ that corresponds to the onset of the Faraday instability in a hanging water drop of varying volume. In the absence of vibration, smaller drops with $V\lesssim 0.1$ mL hang at the oil-air interface, while larger drops with $V\gtrsim 0.1$ mL sink to the bottom of the container. For $0.1$ mL $\lesssim V\lesssim 0.15$ mL the drop floats at the oil-air interface when vibrated with the amplitude $a/g>5$. Insets: representative snapshots of a floating circular, a floating elongated, and a sunk water drop. The vertical scale bar next to the floating circular drop indicates a length of 5 mm. An ellipse (solid line) in the top view of the floating elongated drop highlights the approximate location of the contact line. The ratio between the major and the minor axes of the ellipse is $2:3$. To investigate how external vertical vibration affects the floatability of the drop, we conduct a series of experiments with drops of distilled water with the volume $V\lesssim 0.15$ mL deposited on top of a vertically vibrated $1$ cm thick layer of commercial olive oil. The oil was placed in a circular container with a diameter of 10 cm mounted on a $6.5$” $45$ W RMS audio speaker (Sony, Japan) powered by a $30$ W stereo amplifier (Yamaha TSS-15, China) driven by a sinusoidal signal from a digital tone generator. The container was vertically vibrated at frequencies in the range 40–80 Hz. The vibration amplitudes were sufficiently large to excite subharmonic Faraday waves on the surface of the water drop but not strong enough to excite Faraday waves on the free surface of the oil layer. The vertical coordinate of the container $z_{\rm{p}}(t)$ oscillates with the amplitude $A$ and the period $T=1/f$ according to $\displaystyle z_{\rm p}(t)=A\cos{(\phi+2\pi t/T)},$ (24) where $f$ is the frequency in Hertz and $\phi$ is a fixed phase.
In the co-moving frame of reference, the time-dependent downward acceleration is $\displaystyle g+\ddot{z}_{\rm{p}}(t)=g-A\left(\frac{2\pi}{T}\right)^{2}\cos{(\phi+2\pi t/T)},$ (25) where $\ddot{z}$ denotes the second time derivative of $z$. We use the ADXL 326 accelerometer (Analog Devices, USA) attached to the container and a digital oscilloscope (Tektronix TDS 210, USA) to measure the acceleration amplitude $\displaystyle\frac{a}{g}=\frac{A(2\pi f)^{2}}{g}$ in units of $g$. The motion of the drop was recorded at up to 240 fps using a high-speed camera, and the obtained images were post-processed in Matlab for further analysis. As a first step, we determine the maximal possible volume of a water drop $V_{m}$ that hangs at the olive oil-air interface without vibration and find that $V_{m}\approx 0.1$ mL. Note that with a commercial vegetable oil, a stable hanging water drop of up to $0.17$ mL can be achieved (Phan et al., 2012). Such a drop stretches down to reach a vertical size of $1$ cm or more owing to the larger value of the vegetable oil-air surface tension. In order to avoid any direct contact between the drop and the bottom of the $1$ cm deep container used in our experiments, we have chosen olive oil over vegetable oil. Previous studies have demonstrated that liquid lenses of a lighter fluid floating on the surface of a heavier and more viscous fluid undergo a spontaneous horizontal elongation under the action of vertical vibration (Pucci et al., 2011, 2013, 2015; Pototsky & Bestehorn, 2018). The underlying physical mechanism was traced to the Faraday instability at the upper drop surface. The onset of the Faraday instability corresponds to a period-doubling bifurcation (Pototsky & Bestehorn, 2018): the lens starts to oscillate at half of the driving frequency and elongates horizontally in a randomly chosen direction.
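The conversion between the displacement amplitude $A$, the driving frequency $f$, and the dimensionless forcing amplitude $a/g$ used throughout this section can be sketched as a small helper (the example numbers are illustrative, not measured values):

```python
import math

def acceleration_amplitude(A, f, g=9.8):
    """Dimensionless forcing a/g = A (2*pi*f)^2 / g, cf. Eq. (25)."""
    return A * (2.0 * math.pi * f) ** 2 / g

# A ~0.35 mm displacement amplitude at 60 Hz corresponds to a forcing of
# roughly 5g, i.e. the regime explored in these experiments.
ratio = acceleration_amplitude(A=0.35e-3, f=60.0)
```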
The resultant equilibrium shape of the elongated lens, as seen from the top, is dictated by the balance between the radiation pressure of the unidirectional Faraday waves and the Laplace pressure, which always tries to return the lens to a circular shape (Pucci et al., 2011, 2013). Here we observe that, similar to floating lenses, a heavy water drop that floats on the surface of a lighter and more viscous oil also undergoes a horizontal elongation when Faraday waves are excited on the upper drop surface. Remarkably, the floatability of the elongated drop increases, allowing drops with volumes greater than the maximal static volume $V_{m}\approx 0.1$ mL to stay afloat. Our experimental protocol consists of carefully depositing water drops of various volumes with a pipette onto the surface of the vibrated oil bath at different acceleration amplitudes $a$. Below the Faraday instability threshold, the drop assumes a circular shape when viewed from the top, as seen in the upper right inset of Fig. 6. A side view of such a floating circular drop is also shown in the inset of Fig. 6 (floating circular). Above the Faraday threshold the drop elongates, as displayed in the inset in Fig. 6 (floating elongated). The shape of the contact line in the elongated state is approximately elliptic, with a relatively small axis ratio (minor to major) of $2:3$. Contrary to an anticipated destructive effect of shaking, the drop continues to float when vibrated at up to $a=6.5g$. For $a>6.5g$, the Faraday instability in the oil layer sets in. Following this protocol, we record in Fig. 6 the Faraday stability threshold for different drop volumes vibrated at $f=60$ Hz (a representative value). The vertical and horizontal error bars indicate the uncertainty in measurements of the acceleration amplitude and the drop volume, respectively. We found that drops with the volume $0.1$ mL $<V<0.15$ mL can float at the oil-air interface when vibrated at $a>5g$.
These drops otherwise sink to the bottom of the container when the vibration amplitude is reduced below $5g$. Notably, the action of vibration delays drop sinking, similar to the vibration-induced delay of film rupture (Sterman-Cohen et al., 2017) or of liquid bridge breakup (Benilov, 2016). ### IV.1 The origin of the excess lifting force In order to understand the physical mechanism responsible for the excess lifting force required to keep larger drops afloat, we consider the balance of the vertical forces acting on the drop averaged over at least two forcing periods $2T$: $\displaystyle Mg=\langle F_{b}\rangle+\langle F_{t}\rangle,$ (26) where $M$ is the total mass of the drop, $F_{b}$ is the buoyancy force, $F_{t}$ is the lifting force due to surface tension acting at the triple-phase oil-water-air contact line, and $\langle\dots\rangle=(2T)^{-1}\int_{0}^{2T}(\dots)\,dt$ denotes averaging over time. Note that averaging over $2T$ is necessary due to the subharmonic nature of the Faraday waves. Similar to Eq. (10) in the static case, the instantaneous value of $F_{t}$ for the elongated drop can be represented in the form $\displaystyle F_{t}=\sigma_{2g}\int_{C(t)}\sin(\alpha(l,t))\,dl,$ (27) where $\alpha(l,t)$ is the local instantaneous contact angle between the oil-gas interface and the horizontal, and the integration contour $C(t)$ represents the instantaneous shape of the contact line of the elongated drop, parametrised by the arc length $l$. In the static case, $\sin{\left(\alpha(l,t)\right)}=h^{\prime}(R)/\sqrt{1+\left[h^{\prime}(R)\right]^{2}}$ is constant and $C$ is a circle with the radius $R$, so that Eq. (10) is recovered. Figure 7: (a) Series of $50$ contours of the upper part of a $0.12$ mL floating drop vibrated at $60$ Hz, extracted from a 200 fps slow-motion video. The vertical axes in (a,b) represent time in units of the vibration period $T=1/60$ s.
The horizontal axis in (b) is the vertical coordinate of the container $z_{\rm{p}}(t)$ vibrating at $60$ Hz. (c) Circles correspond to the scaled volume factor $s(t)$ associated with the upper part of the drop above the oil-air interface, estimated from (a). The dashed curve in (c) is the fitting line $\sim S_{0}\cos(\psi+2\pi t/T)$ to the experimental data. (d) The function $s(t)\ddot{z}_{\rm{p}}(t)$ together with its time-average value represented by the thick horizontal line. Panels 1–6 display snapshots of the drop, also showing the blobs emerging above the oil-gas interface. The six snapshots correspond to the first six contours in (a) from the time interval between $0$ and $1.5T$. The excess lifting force can be explained if $\langle F_{t}\rangle$ is larger than its static value, given by $2\pi R\sigma_{2g}h^{\prime}(R)/\sqrt{1+\left[h^{\prime}(R)\right]^{2}}$. However, it is important to emphasize that the average value of the integral in Eq. (27) cannot be simply estimated, due to the time-dependent contact angle $\alpha(l,t)$, as can be appreciated from the side view of the floating elongated drop in the inset in Fig. 6. As we explain below, the average value of the buoyancy force $\langle F_{b}\rangle$ is more easily accessible experimentally than that of $\langle F_{t}\rangle$. We therefore proceed to show that $\langle F_{b}\rangle$ measured for the vibrated drop is, in fact, smaller than the static value of a fully submerged drop, i.e. $\langle F_{b}\rangle<(\rho_{1}-\rho_{2})gV_{0}$, where $V_{0}$ represents the total volume of the drop. This confirms that the excess lifting force is due to the increased value of $\langle F_{t}\rangle$. We record a side-view slow-motion video of a $0.12$ mL drop. To increase contrast with the yellow oil, we add a small amount of red food coloring to the water.
The drop is stretched horizontally, and the stretching direction is perpendicular to the camera's line of sight, as shown in the inset in Fig. 6. Individual frames are extracted from the slow-motion video and post-processed in Matlab to detect the contour of the drop. Figure 7(a) displays $50$ contours of the upper part of the drop exposed above the oil-air interface. The time interval between any two neighboring contours is $1/200$ s. For comparison, we show in Fig. 7(b) the vertical coordinate of the vibrating container $z_{\rm p}(t)$ in the laboratory frame. It is clear that the drop oscillates subharmonically at half of the driving frequency, $f/2=30$ Hz. The left and the right edges of the drop periodically bulge out to develop a small blob every $1/30$ s, in anti-phase to each other. Before proceeding, we estimate the average dynamic pressure $P_{d}$ acting on the water-oil interface at the submerged part of the drop. Due to continuity of the velocity field, the difference of the dynamic pressures in oil and water is given by $P_{d}=|\rho_{1}-\rho_{2}|u^{2}$, where $\rho_{1}$ and $\rho_{2}$ are the densities of water and oil, respectively. The typical velocity inside the water drop can be extracted from Fig. 7(a). The maximal height of the blobs that periodically develop above the oil-air interface is $\sim 1$–$3$ mm. Since the blobs develop over the time interval $1/f=1/60$ s, the average speed associated with this motion is $u\sim 10^{-1}$ m/s. This yields an average dynamic pressure at the oil-water interface of $P_{d}\sim|\rho_{1}-\rho_{2}|\times 10^{-2}$ N/m$^2$. The dynamic pressure must be compared with the hydrostatic pressure $P_{s}=|\rho_{1}-\rho_{2}|gh$, where $h\approx 10^{-2}$ m is the vertical size of the drop.
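These order-of-magnitude estimates can be reproduced with a quick numerical check (the density values are typical figures for water and olive oil, assumed here for illustration):

```python
g = 9.8
rho_water, rho_oil = 1000.0, 910.0   # assumed densities, kg/m^3
drho = abs(rho_water - rho_oil)

blob_height = 2e-3                   # blobs reach ~1-3 mm, in m
f = 60.0                             # driving frequency, Hz
u = blob_height * f                  # blob develops over 1/f, so u ~ 0.1 m/s

P_d = drho * u ** 2                  # dynamic pressure, ~1 N/m^2
h = 1e-2                             # vertical size of the drop, m
P_s = drho * g * h                   # hydrostatic pressure, ~9 N/m^2
ratio = P_d / P_s                    # ~0.15: P_d is about a tenth of P_s
```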
Thus, we estimate that the dynamic pressure $P_{d}\approx 10^{-1}P_{s}$ can be neglected and the total buoyancy force is given by $\displaystyle\langle F_{b}\rangle=(\rho_{1}-\rho_{2})\langle(g+\ddot{z}_{\rm{p}}(t))V_{s}(t)\rangle,$ (28) where $V_{s}(t)$ is the time-dependent volume of the oil displaced by the drop. We assume $V_{s}(t)\approx V_{0}-\Delta V(t)$, where $V_{0}$ is the total volume of the drop and $\Delta V(t)\geq 0$ is the instantaneous volume of the part of the drop above the oil-air interface. It is important to note that this assumption neglects the existence of the air pockets (dark regions in the insets in Fig. 6). Finally, we obtain $\displaystyle\langle F_{b}\rangle=(\rho_{1}-\rho_{2})(gV_{0}-\langle\ddot{z}_{\rm{p}}(t)\Delta V(t)\rangle-g\langle\Delta V(t)\rangle),$ (29) where we used $\langle\ddot{z}_{\rm{p}}(t)\rangle=0$. Equation (29) shows that the average buoyancy force $\langle F_{b}\rangle$ is larger than the static buoyancy $(\rho_{1}-\rho_{2})gV_{0}$ only if $\langle\ddot{z}_{\rm{p}}(t)\Delta V(t)\rangle+g\langle\Delta V(t)\rangle<0$. Note that the average of a product of two sinusoidal functions $\langle f(t)g(t+\phi)\rangle$ with zero average values $\langle f(t)\rangle=\langle g(t)\rangle=0$ can be positive or negative depending on the phase shift between them. For example, $\langle a\cos(\omega t)\cos(\phi+\omega t)\rangle=a\cos(\phi)/2$. We continue our analysis by showing that $\langle\ddot{z}_{\rm{p}}(t)\Delta V(t)\rangle>0$, which together with $\langle\Delta V(t)\rangle>0$ implies that the average buoyancy force is smaller than the static buoyancy, $\langle F_{b}\rangle<(\rho_{1}-\rho_{2})gV_{0}$. To show that $\langle\ddot{z}_{\rm{p}}(t)\Delta V(t)\rangle>0$, we assume that the true volume $\Delta V(t)$ of the upper part of the drop is proportional to the area $S(t)$ under the two-dimensional drop contour, shown in Fig. 7(a).
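The phase dependence of averages like $\langle a\cos(\omega t)\cos(\phi+\omega t)\rangle$ quoted above is easily verified numerically; a minimal check using a Riemann sum over one period:

```python
import math

def averaged_product(a, phase, n=10000):
    """Average of a*cos(w t)*cos(phase + w t) over one period (w = 1)."""
    T = 2.0 * math.pi
    dt = T / n
    return sum(a * math.cos(i * dt) * math.cos(phase + i * dt)
               for i in range(n)) * dt / T

# Expected: a * cos(phase) / 2, i.e. 0.5 * cos(pi/3) = 0.25 here.
avg = averaged_product(a=1.0, phase=math.pi / 3)
```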
The scaled normalized area $s(t)=S(t)/\langle S(t)\rangle-1$ is calculated for each of the 50 profiles in Fig. 7(a) and presented versus time in Fig. 7(c). The data is fitted by $S_{0}\cos(\psi+2\pi t/T)$, shown by the dashed line in Fig. 7(c). The function $\ddot{z}_{\rm{p}}(t)s(t)$ is shown in Fig. 7(d), along with its time-average value represented by the thick horizontal line. Since $\langle\ddot{z}_{\rm{p}}(t)s(t)\rangle>0$, we finally conclude that $\langle\ddot{z}_{\rm{p}}(t)\Delta V(t)\rangle>0$. Consequently, the floatation of the drop is necessarily enabled by the increased surface tension force $\langle F_{t}\rangle$, which supports the drop at the triple-phase contact line. The average $\langle F_{t}\rangle$ becomes larger than the corresponding force in the static case due to the increased total length of the contact line in a horizontally elongated drop, as observed in our experiments. ## V Conclusion To conclude, we have studied theoretically and experimentally the stability and dynamics of a liquid drop of a heavier fluid floating on the surface of a lighter and more viscous fluid. In equilibrium, small drops may stay afloat assuming two different radially symmetric shapes. One stable shape has a smaller and one has a larger value of the triple-phase contact-line radius. We have experimentally demonstrated the possibility of creating both stable shapes using two different methods of depositing the water drop onto the oil surface. As the volume of the drop is gradually increased beyond a certain critical volume, the shape with a smaller contact radius loses its stability and the drop sinks. The second stable shape with a larger contact radius remains stable until another critical volume is reached, beyond which no static floating drops exist. Remarkably, the floatability of the drop can be slightly increased if the drop is vibrated vertically at frequencies of tens of Hz.
We have performed a series of experiments with water drops on an olive oil surface and found that drops with the volume between $0.1$ mL and $0.15$ mL remain afloat when vibrated at $60$ Hz with the acceleration amplitude between $5g$ and $6g$. In the absence of vibration, water drops larger than $0.1$ mL detach from the oil surface and sink. The origin of the excess lifting force is rooted in the horizontal elongation of the drop, driven by the unidirectional Faraday waves that develop on the upper drop surface. The average length of the contact line in the elongated state appears to be larger than in the static radially symmetric shape. As a result, the average lifting force exerted by tensile forces increases, allowing heavier drops to stay afloat when vibrated. Finally, the results presented here, apart from their academic interest, may serve as a basis for research into the emulsification of dispersed systems containing several immiscible liquids of different densities, used in pharmaceutical applications to stabilize emulsions in their suspended state by shaking. ## Acknowledgments ## Appendix A In order to use the numerical continuation method (Doedel et al., 2007), we write the minimal surface equations Eqs.
(1)–(3) as a system of nine first-order autonomous equations $\displaystyle\frac{d\phi_{1}}{ds_{1}}$ $\displaystyle=$ $\displaystyle-\frac{\sin(\phi_{1})}{r_{1}}+\frac{\rho_{1}g}{\sigma_{1g}}z_{1}+\frac{2}{R_{1}},$ $\displaystyle\frac{dr_{1}}{ds_{1}}$ $\displaystyle=$ $\displaystyle\cos(\phi_{1}),$ $\displaystyle\frac{dz_{1}}{ds_{1}}$ $\displaystyle=$ $\displaystyle\sin(\phi_{1}),$ $\displaystyle\frac{d\phi_{2}}{ds_{2}}$ $\displaystyle=$ $\displaystyle-\frac{\sin(\phi_{2})}{r_{2}}+\frac{(\rho_{2}-\rho_{1})g}{\sigma_{12}}z_{2}+\frac{2}{R_{2}},$ $\displaystyle\frac{dr_{2}}{ds_{2}}$ $\displaystyle=$ $\displaystyle\cos(\phi_{2}),$ $\displaystyle\frac{dz_{2}}{ds_{2}}$ $\displaystyle=$ $\displaystyle\sin(\phi_{2}),$ $\displaystyle\frac{dh}{dr}$ $\displaystyle=$ $\displaystyle h^{\prime},$ $\displaystyle\frac{dh^{\prime}}{dr}$ $\displaystyle=$ $\displaystyle(1+h^{\prime 2})^{3/2}\left(\frac{\rho_{2}g}{\sigma_{2g}}h-\frac{\rho_{2}g}{\sigma_{2g}}h_{0}-\frac{h^{\prime}}{\tilde{r}\sqrt{1+h^{\prime 2}}}\right),$ $\displaystyle\frac{d\tilde{r}}{dr}$ $\displaystyle=$ $\displaystyle 1.$ (30) The new variable $\tilde{r}=r$ is introduced to allow writing the meniscus equation as an autonomous system of first-order differential equations. Special care is taken with Eq. (4) by truncating the semi-infinite interval $R\leq r<\infty$ to $R\leq r<R_{\infty}$, with a fixed upper bound of $R_{\infty}=10$ cm. Equations (1)-(3) are solved on the interval $0\leq s_{1}\leq S_{1}$ and Eqs. (4)-(6) are solved on the interval $0\leq s_{2}\leq S_{2}$. Note that the first and the fourth equations in Eqs. (30) are singular at $s_{i}=0$ due to the presence of the term $\displaystyle\frac{\sin(\phi_{i})}{r_{i}}$.
In order to avoid singularity we introduce the following set of fourteen regularized boundary conditions $\displaystyle\phi_{1}(0)$ $\displaystyle=$ $\displaystyle 10^{-5},$ $\displaystyle r_{1}(0)$ $\displaystyle=$ $\displaystyle 10^{-5}R_{1},$ $\displaystyle z_{1}(0)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle r_{1}(S_{1})$ $\displaystyle=$ $\displaystyle R,$ $\displaystyle\phi_{2}(0)$ $\displaystyle=$ $\displaystyle 10^{-5},$ $\displaystyle r_{2}(0)$ $\displaystyle=$ $\displaystyle 10^{-5}R_{2},$ $\displaystyle z_{2}(0)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle r_{2}(S_{2})$ $\displaystyle=$ $\displaystyle R,$ $\displaystyle h^{\prime}(R_{\infty})$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle h(R)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle\tilde{r}(R)$ $\displaystyle=$ $\displaystyle R,$ $\displaystyle 0$ $\displaystyle=$ $\displaystyle\sigma_{1g}\cos(\phi_{1}(S_{1}))+\sigma_{12}\cos(\phi_{2}(S_{2}))-\sigma_{2g}\frac{1}{\sqrt{1+\left[h^{\prime}(R)\right]^{2}}},$ $\displaystyle 0$ $\displaystyle=$ $\displaystyle\sigma_{1g}\sin(\phi_{1}(S_{1}))-\sigma_{12}\sin(\phi_{2}(S_{2}))+\sigma_{2g}\frac{h^{\prime}(R)}{\sqrt{1+\left[h^{\prime}(R)\right]^{2}}},$ $\displaystyle\rho_{1}g(V_{1}+V_{2})$ $\displaystyle=$ $\displaystyle\rho_{2}gV_{2}+\rho_{2}g\pi R^{2}h_{0}+\sigma_{2g}2\pi R\frac{h^{\prime}(R)}{\sqrt{1+\left[h^{\prime}(R)\right]^{2}}},$ (31) where the last condition represents the balance of the vertical forces acting at the triple-phase contact line. The condition $r_{i}(0)=10^{-5}R_{i}$ together with $\phi_{1}(0)=10^{-5}$ ensure that $\sin(\phi_{i})/r_{i}=R_{i}^{-1}$ at $s_{i}=0$. 
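To illustrate how these regularized conditions remove the tip singularity, the first three equations of Eqs. (30) can be marched from the tip with a simple explicit integrator (the parameter values are illustrative placeholders, not the fitted experimental ones):

```python
import math

rho_g_over_sigma = 1000.0 * 9.8 / 0.07   # rho_1 g / sigma_1g, SI units (assumed)
R1 = 1e-3                                # tip radius of curvature, m (assumed)

def rhs(phi, r, z):
    # First three equations of Eqs. (30) in arc-length form.
    dphi = -math.sin(phi) / r + rho_g_over_sigma * z + 2.0 / R1
    return dphi, math.cos(phi), math.sin(phi)

# Regularized tip conditions as in Eqs. (31): phi(0) = 1e-5, r(0) = 1e-5 * R1,
# so that sin(phi)/r -> 1/R1 and the right-hand side stays finite at s = 0.
phi, r, z = 1e-5, 1e-5 * R1, 0.0
ds = 1e-6
for _ in range(2000):                    # forward-Euler march along arc length
    dphi, dr, dz = rhs(phi, r, z)
    phi += ds * dphi
    r += ds * dr
    z += ds * dz
```

Near the tip the profile is close to a sphere of radius $R_1$, so the integrated $(r,z)$ should track $(R_1\sin\phi,\,R_1(1-\cos\phi))$ until the gravity term becomes appreciable.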
Finally, we add two integral conditions associated with the volumes of the pendant part $V_{2}$ and the total drop volume $V_{1}+V_{2}$ $\displaystyle V_{2}$ $\displaystyle=$ $\displaystyle\int_{0}^{S_{2}}\pi r_{2}^{2}\sin(\phi_{2})\,ds_{2},$ $\displaystyle V_{1}+V_{2}$ $\displaystyle=$ $\displaystyle\int_{0}^{S_{1}}\pi r_{1}^{2}\sin(\phi_{1})\,ds_{1}+\int_{0}^{S_{2}}\pi r_{2}^{2}\sin(\phi_{2})\,ds_{2}.$ To be able to continue a solution of the set of nine equations with fourteen boundary and two integral conditions one requires $14+2-9+1=8$ active continuation parameters. These are chosen to be $(S_{1},S_{2},R_{1},R_{2},R,h_{0},V_{2})$ with one additional principal continuation parameter, such as, for example, the gravity constant $g$ or the total drop volume $V_{1}+V_{2}$. The AUTO files are available on demand. ## Appendix B This section summarizes the computation of the stability function $f(g)$, from Eq. (23) in terms of the variables $z_{i}$, $\phi_{i}$, $r_{i}$ and $h(r)$ based on Eqs. (1) - (3). Taking the level of the fluid bath far away from the drop as a zero level, the vertical coordinate of the centre of mass of the drop $z_{c}$ can be expressed as $\displaystyle z_{c}=\frac{\pi}{V}\left(\int_{0}^{S_{1}}r_{1}^{2}(z_{1}(S_{1})-h_{0}-z_{1})\sin(\phi_{1})ds_{1}+\int_{0}^{S_{2}}r_{2}^{2}(z_{2}-h_{0}-z_{2}(S_{2}))\sin(\phi_{2})ds_{2}\right),$ where $V=V_{1}+V_{2}$ is the total volume of the drop and all other variables have been introduced in Section II. After some simplification we obtain $\displaystyle z_{c}=\frac{V_{1}z_{1}(S_{1})-\int_{0}^{S_{1}}\pi r_{1}^{2}z_{1}\sin(\phi_{1})ds_{1}-V_{2}z_{2}(S_{2})+\int_{0}^{S_{2}}\pi r_{2}^{2}z_{2}\sin(\phi_{2})ds_{2}}{V_{1}+V_{2}}-h_{0}.$ (34) The excess potential energy $U_{e}$ from Eq. (15) of the fluid bath is given by the potential energy of the shaded part in Fig.3, filled with fluid $2$ with the negative density $-\rho_{2}$. 
$U_{e}$ can be split into the energy of the pendant part of the drop and the energy of the remaining upper part of the shaded region in Fig.3 $\displaystyle U_{e}=-g\rho_{2}\left[\int_{0}^{S_{2}}\pi r_{2}^{2}(z_{2}-h_{0}-z_{2}(S_{2}))\sin(\phi_{2})\,ds_{2}+\int_{R}^{\infty}\pi r^{2}(h(r)-h_{0})h^{\prime}(r)\,dr\right],$ where $R$ is the contact radius. Finally, the stability function $f(g)$ is $\displaystyle f(g)=z_{c}+\frac{U_{e}}{gV\rho_{1}}.$ (36) ## References * Apffel et al. (2020) Apffel, B., Novkoski, F., Eddi, A. & Fort, E. 2020 Floating under a levitating liquid. Nature 585, 48–52. * Benilov (2016) Benilov, E. S. 2016 Stability of a liquid bridge under vibration. Phys. Rev. E 93, 063118. * Bestehorn & Pototsky (2016) Bestehorn, Michael & Pototsky, Andrey 2016 Faraday instability and nonlinear pattern formation of a two-layer system: A reduced model. Phys. Rev. Fluids 1, 063905. * Boucher et al. (1975) Boucher, E. A., Evans, M. J. B. & Frank, Frederick Charles 1975 Pendent drop profiles and related capillary phenomena. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 346 (1646), 349–374. * Bratukhin et al. (2001) Bratukhin, Yu, K., Makarov, S. O. & Teplova, O. V. 2001 Equilibrium shapes and stability of floating drops. Fluid Dynamics 36, 529–537. * Bratukhin & Makarov (1994) Bratukhin, Yu. K. & Makarov, S. O. 1994 Interphase convection. Perm University Press, Perm . * Doedel et al. (2007) Doedel, Eusebius J., Fairgrieve, Thomas F., Sandstede, Björn, Champneys, Alan R., Kuznetsov, Yuri A. & Wang, Xianjun 2007 Auto-07p: Continuation and bifurcation software for ordinary differential equations. Tech. Rep.. * George et al. (2016) George, D., Damodara, S., Iqbal, R. & Sen, A. K. 2016 Flotation of denser liquid drops on lighter liquids in non-neumann condition: Role of line tension. Langmuir 40, 10276–10283. * Hartland & Burri (1976) Hartland, S. & Burri, J. 1976 Das maximale volumen einer linse an einer fluid-flüssig grenzfläche. 
The Chemical Engineering Journal 11 (1), 7–17. * Hartland & Robinson (1971) Hartland, S. & Robinson, J. D. 1971 The dynamic equilibrium of a rigid sphere at a deformable liquid-liquid interface. Journal of Colloid and Interface Science 35 (3), 372–378. * Landau & Lifshitz (1987) Landau, L. D. & Lifshitz, E. M. 1987 Fluid Mechanics, Second Edition: Volume 6 (Course of Theoretical Physics), 2nd edn. Butterworth-Heinemann. * Lapuerta et al. (2001) Lapuerta, Victoria, Mancebo, Francisco J. & Vega, José M. 2001 Control of Rayleigh-Taylor instability by vertical vibration in large aspect ratio containers. Phys. Rev. E 64, 016318. * Lohnstein (1906) Lohnstein, Theodor 1906 Zur Theorie des Abtropfens mit besonderer Rücksicht auf die Bestimmung der Kapillaritätskonstanten durch Tropfversuche. Annalen der Physik 325 (7), 237–268. * Nepomnyashchy (2021) Nepomnyashchy, Alexander 2021 Droplet on a liquid substrate: Wetting, dewetting, dynamics, instabilities. Current Opinion in Colloid and Interface Science 51, 101398. * Padday & Pitt (1973) Padday, J. F. & Pitt, A. 1973 The stability of axisymmetric menisci. Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 275, 489–528. * Phan (2014) Phan, Chi M. 2014 Stability of a floating water droplet on an oil surface. Langmuir 30 (3), 768–773. * Phan et al. (2012) Phan, Chi M., Allen, Benjamin, Peters, Luke B., Le, Thu N. & Tade, Moses O. 2012 Can water float on oil? Langmuir 28 (10), 4609–4613. * Pototsky & Bestehorn (2016) Pototsky, Andrey & Bestehorn, Michael 2016 Faraday instability of a two-layer liquid film with a free upper surface. Phys. Rev. Fluids 1, 023901. * Pototsky & Bestehorn (2018) Pototsky, Andrey & Bestehorn, Michael 2018 Shaping liquid drops by vibration. EPL (Europhysics Letters) 121 (4), 46001. * Pototsky et al. (2019) Pototsky, Andrey, Oron, Alexander & Bestehorn, Michael 2019 Vibration-induced floatation of a heavy liquid drop on a lighter liquid film.
Physics of Fluids 31 (8), 087101. * Princen & Mason (1965) Princen, H. M. & Mason, S. G. 1965 Shape of a fluid drop at a fluid-liquid interface. I. Extension and test of two-phase theory. Journal of Colloid Science 20 (2), 156–172. * Princen (1963) Princen, H. M. 1963 Shape of a fluid drop at a liquid-liquid interface. Journal of Colloid Science 18, 178–195. * Pucci et al. (2013) Pucci, G., Ben Amar, M. & Couder, Y. 2013 Faraday instability in floating liquid lenses: the spontaneous mutual adaptation due to radiation pressure. Journal of Fluid Mechanics 725, 402–427. * Pucci et al. (2015) Pucci, G., Ben Amar, M. & Couder, Y. 2015 Faraday instability in floating drops. Physics of Fluids 27 (9), 091107. * Pucci et al. (2011) Pucci, G., Fort, E., Ben Amar, M. & Couder, Y. 2011 Mutual adaptation of a Faraday instability pattern with its flexible boundaries in floating fluid drops. Phys. Rev. Lett. 106, 024503. * Sterman-Cohen et al. (2017) Sterman-Cohen, Elad, Bestehorn, Michael & Oron, Alexander 2017 Rayleigh-Taylor instability in thin liquid films subjected to harmonic vibration. Physics of Fluids 29 (5), 052105. * Wolf (1969) Wolf, Gerhard Hans 1969 The dynamic stabilization of the Rayleigh-Taylor instability and the corresponding dynamic equilibrium. Zeitschrift für Physik A Hadrons and nuclei 227 (3), 291–300. * Wolf (1970) Wolf, G. H. 1970 Dynamic stabilization of the interchange instability of a liquid-gas interface. Phys. Rev. Lett. 24, 444–446.
# Glioblastoma Multiforme Patient Survival Prediction Snehal Rajput1, Rupal Agravat2, Mohendra Roy1, Mehul S. Raval2 1Department of Information and Communication Technology, Pandit Deendayal Petroleum University, Gandhinagar, India. 2School of Engineering and Applied Science, Ahmedabad University, Ahmedabad, India. ###### Abstract Glioblastoma Multiforme is a very aggressive type of brain tumor. Due to spatial and temporal intra-tissue inhomogeneity, and the location and extent of the cancerous tissue, it is difficult to detect and dissect the tumor regions. In this paper, we propose survival prognosis models using four regressors operating on handcrafted image-based and radiomics features. We hypothesize that the radiomics shape features have the highest correlation with survival prediction. The proposed approaches were assessed on the Brain Tumor Segmentation (BraTS-2020) challenge dataset. The highest accuracy achieved with image features and the random forest regressor was 51.5% for the training and 51.7% for the validation dataset. The gradient boosting regressor with shape features gave an accuracy of 91.5% and 62.1% on the training and validation datasets respectively, which is better than the BraTS 2020 survival prediction challenge winners on both datasets. Our work shows that handcrafted features exhibit a strong correlation with survival prediction. The consensus-based regressor with gradient boosting and radiomics shape features is the best combination for survival prediction. ###### keywords: Brain tumor segmentation (BraTS 2020), glioblastoma, survival prediction All authors have contributed equally to this work. ## 1 Introduction Glioblastoma multiforme (GBM) is the most common type of primary malignant brain tumor.
In the case of adults, glioblastoma makes up 60% of all brain tumors [1]. The World Health Organization (WHO) classified GBM as a grade IV type of cancer due to its invasive and diffusive nature. Patients suffering from GBM have a poor prognosis, with a median survival of about ten months [1]. This is due to its aggressive nature, highly heterogeneous appearance, location, shape, and unpredictable response to therapy [2]. Magnetic Resonance Imaging (MRI) has been widely utilized to examine tumors because it is non-hazardous and offers high contrast and superior resolution. Generally, manual segmentation of a tumor in MRI is time-consuming and prone to subjective error. In this regard, an automated segmentation method would be of enormous help to oncologists and clinicians. It can help in early diagnosis as well as in therapeutic strategy planning. In recent years, deep learning-based segmentation approaches have outperformed traditional state-of-the-art methods [3, 4]. Segmentation delineates the brain tumor into Whole Tumor (WT), Enhancing Tumor (ET), and Tumor Core (TC). Handcrafted features extracted from these segments are used to classify the survival days of the patients. There are many segmentation models available. Recently, Jiang et al. [5], in the BraTS 2019 challenge, proposed a two-stage asymmetric cascaded U-Net [2] structure. Each model consists of a larger encoder, to extract more complex semantic features, and a smaller decoder that generates a segmentation map of the same size as the input. Zhao et al. [3] proposed multiple methods to generate robust segmentation results, grouped into data processing, model devising, and optimization modules. Multiple methods are assimilated into each of these modules to enhance segmentation results. McKinley et al. [4] proposed a DenseNet-based U-Net architecture.
Dilated convolutions were used to increase the receptive field while retaining spatial information. The model was trained by combining label uncertainty loss, binary cross-entropy, and focal loss. Dice scores on the BraTS-2019 validation dataset were 0.91 (WT), 0.83 (TC), 0.77 (ET), and on the BraTS-2019 test dataset were 0.89 (WT), 0.83 (TC), 0.81 (ET). Therefore, researchers seem to favour U-Net based architectures for segmentation. Once the tumor is segmented, features are extracted for overall survival prediction. Agravat et al. [6] used a dense U-Net trained on the focal loss for segmentation. Next, age, statistical features, and radiomic features were used to train a Random Forest Regressor (RFR) for survival prediction, and the obtained accuracy on the test dataset was 0.58. Wang et al. [7] used U-Net and U-Net ensembles with attention gates, trained with soft Dice and cross-entropy losses, for segmentation. For survival prediction, they proposed the following prognosis models: i) a baseline model where only the age feature was used to train a linear regressor; ii) a radiomic model where morphological and texture features were extracted from the segmentation results; iii) a tumor invasiveness model, where the relative invasiveness coefficient (RIC) and the age feature train a support vector regressor. The tumor invasiveness model was found best for survival prediction. The accuracy for survival prediction was 0.59 and 0.56 for the BraTS-2019 validation and test datasets respectively. Feng et al. [8] used an ensemble of U-Net models. The models were trained on patches containing brain pixels. The main advantage of using an ensemble method is that the network parameters need not be fine-tuned. Further, for OS prediction, volume and surface area features were extracted for each Region of Interest (ROI), together with age, to train a linear regression model. The training and testing set accuracy was reported as 0.31 and 0.55 respectively on the BraTS-2019 datasets. Wang et al.
[9] utilized a 3D U-Net-based model trained in two phases using patch-based sampling: the first phase included both brain and background pixels, whereas the second included only brain pixels. The Dice coefficient loss was used to train the 3D U-Net model. For survival prediction, volume, surface area, and age were used to train an ANN model; the training, validation, and testing accuracies were 0.515, 0.448, and 0.551 respectively. Islam et al. [10] proposed a 3D U-Net architecture for segmentation in which attention blocks are integrated into the decoder modules. For survival prediction, various geometric, fractal, and histogram-based features were extracted to train multiple regressor models, i.e., a support vector machine (SVM), a multi-layer perceptron (MLP), a random forest regressor (RFR), and eXtreme Gradient Boosting (XGBoost). The validation accuracies were 0.329 for SVM, 0.414 for MLP, 0.356 for RFR, and 0.429 for XGBoost. This paper aims to establish the correlation between handcrafted features and overall survival prediction. Unlike the existing state-of-the-art methods for survival prediction [6, 7, 8, 9], it uses four predictors and two feature sets to establish their correlation with the overall survival prediction of High Grade Glioma (HGG) patients. Shape features with a gradient boosting regressor achieve better survival prediction accuracy than state-of-the-art methods, which establishes that shape features have a strong correlation with survival prediction. The remainder of the paper is organized as follows: the Brain Tumor Segmentation (BraTS) dataset is described in Section 2, the survival prediction methods with four predictors and two feature sets in Section 3, Section 4 contains results and discussions, and Section 5 concludes the paper.
## 2 BraTS dataset Differences in standards and datasets make it challenging to objectively evaluate brain tumor segmentation methods and predict overall survival. Nevertheless, the Brain Tumor Segmentation (BraTS) challenge [11, 12, 13] has become a popular platform for comparing different tumor segmentation and survival prediction techniques. Since 2018, the challenge has included three tasks: segmenting the brain tumor, predicting overall survival (OS), and estimating the uncertainty of the predicted tumor sub-regions. Tumor segmentation involves delineating the tumor into three sub-regions, namely the whole tumor (WT), the tumor core (TC), and the enhancing tumor (ET). Specificity and sensitivity as well as Dice score and Hausdorff distance are used for evaluating segmentation performance. The overall survival prediction task classifies survival days into the following categories: long-term survivors ($>$15 months), intermediate survivors (between 10 and 15 months), and short-term survivors ($<$10 months). Samples with resection status GTR (gross total resection) are used to rate the performance of the OS prediction. An accuracy metric is used for performance evaluation, whereas mean and median square error are used for post-analysis [14]. The BraTS 2020 training dataset includes 369 volumetric samples of high-grade glioma (HGG) and low-grade glioma (LGG) cases. It includes metadata of 236 samples, such as age, survival days, and resection status, for survival days prediction (Gross Total Resection (GTR) = 119, Sub-total Resection (STR) = 10, and NA = 107). The validation dataset includes 125 sample images and metadata (age, survival days, and resection status), with 29 images having a GTR resection status.
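As a minimal sketch of the OS classification described above (assuming 30.44 days per month and inclusive 10–15 month boundaries for the intermediate class, which the challenge description leaves implicit):

```python
def survival_class(days, days_per_month=30.44):
    """Map survival days to the BraTS OS classes.

    <10 months -> short, 10-15 months -> intermediate, >15 months -> long.
    The exact boundary handling is an assumption.
    """
    months = days / days_per_month
    if months < 10:
        return "short"
    if months <= 15:
        return "intermediate"
    return "long"
```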
Each subject includes four preoperative MRI scans (T1-weighted, T1-CE, T2-weighted, and FLAIR) and manually annotated ground truth. The ground truth annotations comprise Necrotic and Non-Enhancing Tumor core NCR/NET (label 1), Edema (label 2), Active Tumor (label 4), and 0 for everything else. The dataset has been pre-processed, i.e., all scans are co-registered to the same anatomical template, skull-stripped, and resampled to an isotropic resolution of $1\times 1\times 1\hskip 5.69046ptmm^{3}$. The width, height, and depth of each sample are 240, 240, and 155 respectively. ## 3 Survival Prediction Methodology We use the 3D U-Net model for brain tumor segmentation proposed by Isensee et al. [15], a simple model that ranked among the top entries in BraTS 2017. Like the U-Net [2], this model [15] comprises a contracting path to extract more feature information with increasing network depth, an expansion path to generate a segmentation mask with precise localization information, and skip connections for better feature reconstruction at every stage of the expansion path. For preprocessing, we applied bias field correction, normalization, clipping of maximum/minimum intensities to remove outliers, rescaling to $[0,\,1]$, and setting non-brain pixels to 0. The model was trained on patches of size 128×128×128, randomly sampled from all the input MRI modalities. The obtained Dice scores on the BraTS 2020 validation dataset are 0.880 (WT), 0.858 (TC), and 0.759 (ET). The segmentation of tumor tissue of a validation sample is shown in Fig. 1, which gives a visual comparison of an input FLAIR image and the predicted segmentation. The segmented regions are then used for survival prediction with two prognosis methods, based on 1) image-based features and 2) radiomics-based features, and the following four predictors.
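The intensity preprocessing described above (clipping outliers, rescaling to $[0,\,1]$, zeroing non-brain voxels) can be sketched as follows; the percentile-based clipping thresholds are an assumption, and bias field correction (e.g., N4) is omitted for brevity:

```python
import numpy as np

def preprocess(volume, brain_mask, low_pct=1.0, high_pct=99.0):
    """Clip intensity outliers, rescale to [0, 1], zero non-brain voxels."""
    brain_intensities = volume[brain_mask]
    lo, hi = np.percentile(brain_intensities, [low_pct, high_pct])
    clipped = np.clip(volume, lo, hi)             # remove outliers
    rescaled = (clipped - lo) / (hi - lo + 1e-8)  # rescale to [0, 1]
    rescaled[~brain_mask] = 0.0                   # zero out non-brain voxels
    return rescaled
```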
(a) (b) (c) (d) (e) (f) (g) (h) (i) Figure 1: Segmentation results on the training set: (a) Axial FLAIR slice (b) Axial ground truth (c) Axial segmentation (d) Coronal FLAIR slice (e) Coronal ground truth (f) Coronal segmentation (g) Sagittal FLAIR slice (h) Sagittal ground truth (i) Sagittal segmentation. The four color codes are: brown for label 1 (NCR/NET), white for label 4 (Active Tumor), orange for label 2 (Edema), and black for label 0 (background). ### 3.1 Predictors and Parameter Tuning We use four predictors with parameter tuning: (1) Artificial Neural Network (ANN) [9, 10], (2) Linear Regressor (LR) [7, 8], (3) Gradient Boosting Regressor (GBR) [10], and (4) Random Forest Regressor (RFR) [6, 15, 10]. All of these predictors were used by top-performing models in recent BraTS challenges, and they cope well with small datasets and overfitting. The image-based prognosis method uses only seven features, making it less vulnerable to overfitting. We retain default parameters for the ANN and LR, while the parameters of the GBR and RFR are tuned using a grid search: for the GBR we tuned the number of estimators, tree depth, minimum sample split, and learning rate; for the RFR, the number of estimators and tree depth. The predictors with radiomics features were also tuned. For radiomics features, an ANN with five hidden layers turned out to perform better than one with two or three hidden layers; we further tuned the epochs, learning rate, number of neurons, and optimizer of the ANN. For the LR model, a search was performed over the penalty term, the number of iterations, and the regularization of feature parameters using LASSO and ridge penalties. For the GBR we tuned the number of estimators, maximum depth, and learning rate, and for the RFR the number of estimators, maximum tree depth, minimum sample split, minimum samples per leaf node, and maximum features.
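A minimal scikit-learn sketch of the grid search described in Section 3.1; the grid values are illustrative assumptions, not the tuned values from this work:

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

def tune_gbr(X, y):
    """Grid-search the GBR hyperparameters named in Section 3.1."""
    param_grid = {
        "n_estimators": [50, 100],
        "max_depth": [2, 3],
        "min_samples_split": [2, 4],
        "learning_rate": [0.05, 0.1],
    }
    search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                          param_grid, cv=3,
                          scoring="neg_mean_squared_error")
    search.fit(X, y)
    return search.best_estimator_
```

The same pattern applies to the RFR by swapping in `RandomForestRegressor` with its own grid (number of estimators, maximum depth).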
Since the random forest and gradient boosting regressors use ensemble-based learning, they are robust, efficient, and less prone to overfitting. ### 3.2 Prognosis using Features #### 3.2.1 Image-based features [8, 9] Shape features extracted from the segmentation were used for OS prediction. These seven features were the volumes of the WT, TC, and ET, their surface areas, and age. Since tumor size is a decisive predictive factor for various cancer types, we extracted the volume and surface area of the WT, TC, and ET. The features were extracted from the segmentation maps and input images without any library dependency. Training with fewer features limits the dimensionality of the feature space, so the model did not overfit; however, performance saturated due to high bias in the model. #### 3.2.2 Radiomics based features [16] Radiomics-based feature extraction is widely used for disease diagnosis, classification, and survival prediction in diseases such as lung cancer [17], breast cancer [18], and Alzheimer’s disease [19]. Along with the size of the tumor, exploring the correlation of other features with survival prediction is crucial to increase the performance of the predictor models. Radiomics features address this problem, allowing various statistical, shape, intensity, and texture features to be extracted from radiographic scans, and from many imaging techniques. Using the PyRadiomics package [16], the following 107 features were extracted: 1. Shape features: Elongation, flatness, axis lengths, maximum diameter, mesh volume, sphericity, surface area, and surface volume ratio. 2. Gray level features: Gray-level size zone (GLSZ), Gray-level co-occurrence matrix (GLCM), Gray-level run-length matrix (GLRLM), Gray-level dependence matrix (GLDM), and neighbouring gray-tone difference matrix (NGTDM). 3.
First-order statistical features: Energy, entropy, minimum intensity value, maximum intensity value, mean, median, interquartile range, percentiles, absolute deviation, skewness, variance, kurtosis, and uniformity. Radiomics features are typically multi-collinear and redundant [20]; hence the correlation between these features needs to be validated for specific real-world problems. We performed feature selection through recursive feature elimination (RFE) [21] to remove weaker features and avoid the curse of dimensionality. RFE is a backward feature elimination method: given an estimator, it recursively selects principal features from the feature set, refitting the model until the desired number of selected features is reached. Out of the 107 features, we selected the 20 best-ranking features. In summary, the four predictors (ANN, RFR, LR, and GBR) are applied to: i) the seven image-based features, ii) the 107 radiomics features, iii) the 20 principal radiomics features, and iv) only the shape radiomics features. The literature [6, 15] also suggests the dominance of shape features, so we additionally used all predictors with only shape features for survival prediction. We trained the models with all resection statuses (i.e., GTR, STR, and NA) given with the dataset to increase the dataset size and reduce overfitting. ## 4 Results and Discussions Image-based feature prediction is derived from the BraTS 2019 dataset, while the BraTS 2020 dataset was used for radiomics-based feature extraction. The results are shown in Tables 1 to 4. We did not participate in the BraTS 2020 challenge and do not have access to the test dataset; therefore, results are reported on the training and validation datasets. ### 4.1 Image-based feature prediction We observe that the ensemble-based models, i.e., GBR and RFR, show better performance on the training and validation datasets. The consistency between their training and validation accuracies suggests that the models do not overfit.
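The RFE-based feature selection of Section 3.2.2 can be sketched with scikit-learn [21]; the choice of a random forest as the internal estimator is an assumption:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

def select_radiomics_features(X, y, n_keep=20):
    """Recursively eliminate the weakest features, keeping n_keep of them."""
    selector = RFE(RandomForestRegressor(n_estimators=50, random_state=0),
                   n_features_to_select=n_keep)
    selector.fit(X, y)
    return selector.support_  # boolean mask over the input feature columns
```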
Table 1: OS performance comparison using image-based features on the training and validation BraTS-2019 datasets. MSE, medianSE, stdSE, and SpearmanR denote the mean square error, median square error, standard deviation of the squared error, and Spearman’s ranking coefficient. Dataset | Regressor | Accuracy | MSE | medianSE | stdSE | SpearmanR ---|---|---|---|---|---|--- Training | ANN | 0.51 | 86148.10 | 21316 | 181346 | 0.48 LR | 0.49 | 87724.00 | 20736 | 183685 | 0.47 GBR | 0.52 | 63234.40 | 16900 | 126534 | 0.61 RFR | 0.52 | 63234.40 | 16900 | 126534 | 0.61 Validation | ANN | 0.45 | 098312.70 | 39204 | 141392 | 0.24 LR | 0.52 | 100509.00 | 38809 | 141263 | 0.29 GBR | 0.52 | 102999.00 | 36481 | 152694 | 0.27 RFR | 0.52 | 102999.00 | 36481 | 152694 | 0.27 ### 4.2 Radiomics feature-based prediction As mentioned, we extracted 107 radiomic features from the segmentation results of the BraTS 2020 images and fed them as input to four regressor models: ANN, LR, GBR, and RFR. RFR gave the best results, shown in Table 2. The other regressors performed poorly compared to RFR, and even fine-tuning of the parameters did not improve their performance. The possible reasons are the redundant nature of radiomics features [20], over-complexity due to too many features, and too few training samples. Radiomics features are shallow, low-order image features that cannot fully describe distinct image characteristics [22]. Also, when the number of observations is small relative to the number of extracted features, survival prediction is an ill-posed problem [20]. Table 2: OS performance evaluation using 107 radiomics features and the Random Forest Regressor. Dataset | Accuracy | MSE | medianSE | stdSE | SpearmanR ---|---|---|---|---|--- Training | 0.479 | 079176.96 | 20702.21 | 169474.53 | 0.684 Validation | 0.379 | 115424.30 | 28779.30 | 214028.11 | 0.138 It can be observed from Table 2 that the large feature set is unable to yield state-of-the-art accuracy.
Therefore, we reduced the feature set by applying recursive feature elimination to find the 20 most dominant features. The dominant features obtained using RFE are: age, amount of edema, elongation, maximum 2D diameter slice, sphericity, surface-volume ratio, minimum and maximum intensity, interquartile range, skewness, kurtosis, root mean absolute deviation, cluster prominence, cluster shade, inverse variance, coarseness, and dependence variance. We then applied the four regressors to this dominant feature set; the performance is reported in Table 3. Table 3: OS performance comparison on 20 principal radiomics features. Dataset | Regressor Models | Accuracy | MSE | medianSE | stdSE | SpearmanR ---|---|---|---|---|---|--- Training | ANN | 0.393 | 8.90E+12 | 2.46E+12 | 3.36E+13 | 0.125 LR | 0.462 | 96853.55 | 33279.52 | 190733.00 | 0.417 GBR | 0.923 | 17213.25 | 00000.00 | 074717.13 | 0.938 RFR | 0.744 | 31829.75 | 06077.32 | 075572.44 | 0.810 Validation | ANN | 0.448 | 2.20E+20 | 3.46E+12 | 8.03E+20 | 0.290 LR | 0.483 | 2.73E+08 | 056167.55 | 9.86E+08 | 0.456 GBR | 0.414 | 255096.40 | 101995.06 | 420861.25 | 0.025 RFR | 0.448 | 098369.46 | 035521.48 | 126218.18 | 0.126 We observe that the linear regressor with regularization outperforms all other regression models, with the highest accuracy on the validation dataset. LR also provides similar accuracy on the training and validation datasets, and its Spearman-R is the highest. In contrast, RFR achieves the lowest mean square error (MSE) on the validation dataset. #### 4.2.1 Radiomic shape features based prediction Reviewing the correlation between radiomics features and survival prediction, we found that radiomic shape features play a crucial role [6, 15]. Shape features show significant statistical differences across ROIs [23]. Hence, shape features can capture tumor characteristics related to genetic anomalies and profoundly impact survival prediction.
We formulate the hypothesis that _shape features profoundly impact survival prediction_. To validate the hypothesis, we trained the predictor models with the following shape features: the amounts of necrosis, edema, and enhancing tumor, the extent and coordinates of the tumor, elongation, flatness, axis lengths, 2D diameter row, 2D diameter column, 2D diameter slice, maximum 3D diameter, mesh volume, sphericity, surface area, surface volume ratio, the centroid of necrosis, and age. The performance of each predictor model is reported in Table 4. Table 4: OS performance comparison on the BraTS-2020 dataset using the radiomics shape feature set. Dataset | Predictor Models | Accuracy | MSE | medianSE | stdSE | SpearmanR ---|---|---|---|---|---|--- Training | ANN | 0.400 | 4.41E+11 | 7.15E+10 | 7.97E+11 | 0.149 LR | 0.470 | 89890.41 | 35160.09 | 162137.20 | 0.461 GBR | 0.915 | 31068.75 | 00000.00 | 150724.63 | 0.849 RFR | 0.615 | 62930.78 | 18562.88 | 130788.18 | 0.759 Validation | ANN | 0.448 | 4.73E+11 | 2.14E+11 | 5.97E+11 | 0.149 LR | 0.414 | 087228.24 | 47820.00 | 111960.30 | 0.215 GBR | 0.621 | 141065.30 | 23528.48 | 236728.70 | 0.338 RFR | 0.448 | 109746.60 | 34689.29 | 200725.98 | 0.116 We observe that GBR and RFR perform better; specifically, the gradient boosting regressor outperforms all other regression models. In contrast, LR with regularization achieves the lowest mean square error (MSE) on the validation dataset. ### 4.3 Discussions We observed that classical machine learning techniques performed better than deep learning neural network-based models for survival prediction, and that radiomics-based approaches are well suited to the task. Traditional regression algorithms have better interpretability than deep learning-based algorithms, have fewer learnable parameters than CNNs, and perform better with small sample sizes.
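As a minimal sketch, a hypothetical helper computing the tumor sub-region volumes used among the shape features above directly from a BraTS label map (labels 1, 2, 4); surface area and the other descriptors would come from PyRadiomics [16]:

```python
import numpy as np

def subregion_volumes(seg, voxel_volume_mm3=1.0):
    """Volumes of WT, TC, and ET from a BraTS label map (labels 1, 2, 4)."""
    wt = np.isin(seg, [1, 2, 4])   # whole tumor: all tumor labels
    tc = np.isin(seg, [1, 4])      # tumor core: NCR/NET + active tumor
    et = (seg == 4)                # enhancing (active) tumor
    return {"vol_wt": wt.sum() * voxel_volume_mm3,
            "vol_tc": tc.sum() * voxel_volume_mm3,
            "vol_et": et.sum() * voxel_volume_mm3}
```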
A large training dataset is crucial for direct regression from image modalities using a CNN. The predictors trained on the 107 radiomics features underperformed, while those modelled on the 20 principal features improved the performance. Further, to improve performance, we trained predictors on shape features and found a strong correlation with survival prediction; predictors trained on shape features obtained state-of-the-art survival prediction accuracy. The gradient boosting regressor performed better than the other classical algorithms because it is an additive ensemble model: with each tree built, the model becomes more expressive. The proposed GBR model is compared with the survival prediction challenge winners of BraTS 2020; the prediction accuracies of the state-of-the-art methods were obtained from the unranked leaderboard***https://www.cbica.upenn.edu/BraTS20/lboardValidation.html. A performance comparison of the GBR model with the top-ranking models is given in Table 5. It can be observed that shape-based features with the gradient boosting regressor outperform the best-ranking methods on the validation dataset. Table 5: OS performance comparison with top-ranking models on the BraTS-2020 validation dataset. Team name | Accuracy | MSE | medianSE | stdSE | SpearmanR ---|---|---|---|---|--- SCAN | 0.414 | 098704.65 | 36100.00 | 152175.57 | 0.253 Redneucon | 0.517 | 122515.76 | 70305.26 | 157673.99 | 0.134 VLB | 0.379 | 093859.54 | 67348.26 | 102092.41 | 0.280 COMSATS-MIDL | 0.483 | 105079.42 | 37004.93 | 146375.99 | 0.134 Proposed | 0.621 | 141065.30 | 23528.40 | 236728.70 | 0.338 ## 5 Conclusion Predicting oncological outcomes is always challenging due to multiple difficulties from clinical and engineering perspectives. In this work, we have evaluated two feature sets over four predictors.
We proposed image-based and radiomics-based prognosis approaches for survival prediction. The image-based prognosis models performed well, but their performance saturates beyond a certain point because, with so few features, the models could not learn complex patterns. Similar observations hold for the combinations of the 107 radiomics features or the 20 principal features with the regressors. All of the above combinations exhibited correlation with survival prediction; however, we recommend shape-based features with the gradient boosting regressor as the best combination for survival prediction. Comparing models, ensemble-based learning proved more useful for survival prediction because of its robustness, whereas the ANN converges quickly compared to classical models but overfits easily due to the lack of ample training samples. With the availability of a larger dataset and more non-imaging clinical information such as gender and treatment, survival prediction can become robust enough for application in clinical practice. ## References * [1] Taylor, O.G., Brzozowski, J.S., Skelding, K.A.: Glioblastoma multiforme: An overview of emerging therapeutic targets. Frontiers in oncology 9, 963 (2019) * [2] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234–241. Springer (2015) * [3] Zhao, Y.X., Zhang, Y.M., Liu, C.L.: Bag of tricks for 3d mri brain tumor segmentation. In: International MICCAI Brainlesion Workshop. pp. 210–220. Springer (2019) * [4] McKinley, R., Rebsamen, M., Meier, R., Wiest, R.: Triplanar ensemble of 3d-to-2d cnns with label-uncertainty for brain tumor segmentation. In: International MICCAI Brainlesion Workshop. pp. 379–387. Springer (2019) * [5] Jiang, Z., Ding, C., Liu, M., Tao, D.: Two-stage cascaded u-net: 1st place solution to brats challenge 2019 segmentation task.
In: International MICCAI Brainlesion Workshop. pp. 231–241. Springer (2019) * [6] Agravat, R.R., Raval, M.S.: Brain tumor segmentation and survival prediction. In: International MICCAI Brainlesion Workshop. pp. 338–348. Springer (2019) * [7] Wang, S., Dai, C., Mo, Y., Angelini, E., Guo, Y., Bai, W.: Automatic brain tumour segmentation and biophysics-guided survival prediction. In: International MICCAI Brainlesion Workshop. pp. 61–72. Springer (2019) * [8] Feng, X., Dou, Q., Tustison, N., Meyer, C.: Brain tumor segmentation with uncertainty estimation and overall survival prediction. In: International MICCAI Brainlesion Workshop. pp. 304–314. Springer (2019) * [9] Wang, F., Jiang, R., Zheng, L., Meng, C., Biswal, B.: 3d u-net based brain tumor segmentation and survival days prediction. In: International MICCAI Brainlesion Workshop. pp. 131–141. Springer (2019) * [10] Islam, M., Vibashan, V., Jose, V.J.M., Wijethilake, N., Utkarsh, U., Ren, H.: Brain tumor segmentation and survival prediction using 3d attention unet. In: International MICCAI Brainlesion Workshop. pp. 262–272. Springer (2019) * [11] Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, J.S., Freymann, J.B., Farahani, K., Davatzikos, C.: Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. Scientific data 4, 170117 (2017) * [12] Bakas, S., Reyes, M., Jakab, A., Bauer, S., Rempfler, M., Crimi, A., Shinohara, R.T., Berger, C., Ha, S.M., Rozycki, M., et al.: Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge. arXiv preprint arXiv:1811.02629 (2018) * [13] Menze, B.H., Jakab, A., Bauer, S., Kalpathy-Cramer, J., Farahani, K., Kirby, J., Burren, Y., Porz, N., Slotboom, J., Wiest, R., et al.: The multimodal brain tumor image segmentation benchmark (brats). 
IEEE transactions on medical imaging 34(10), 1993–2024 (2014) * [14] Rajput, S., Raval, M.S.: A review on end-to-end methods for brain tumor segmentation and overall survival prediction. arXiv preprint arXiv:2006.01632 (2020) * [15] Isensee, F., Kickingereder, P., Wick, W., Bendszus, M., Maier-Hein, K.H.: Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge. In: International MICCAI Brainlesion Workshop. pp. 287–297. Springer (2017) * [16] Van Griethuysen, J.J., Fedorov, A., Parmar, C., Hosny, A., Aucoin, N., Narayan, V., Beets-Tan, R.G., Fillion-Robin, J.C., Pieper, S., Aerts, H.J.: Computational radiomics system to decode the radiographic phenotype. Cancer research 77(21), e104–e107 (2017) * [17] He, B., Zhao, W., Pi, J.Y., Han, D., Jiang, Y.M., Zhang, Z.G.: A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients. Respiratory research 19(1), 1–8 (2018) * [18] Liu, C., Ding, J., Spuhler, K., Gao, Y., Serrano Sosa, M., Moriarty, M., Hussain, S., He, X., Liang, C., Huang, C.: Preoperative prediction of sentinel lymph node metastasis in breast cancer by radiomic signatures from dynamic contrast-enhanced mri. Journal of Magnetic Resonance Imaging 49(1), 131–140 (2019) * [19] Li, Y., Jiang, J., Lu, J., Jiang, J., Zhang, H., Zuo, C.: Radiomics: a novel feature extraction method for brain neuron degeneration disease using 18f-fdg pet imaging and its implementation for alzheimer’s disease and mild cognitive impairment. Therapeutic Advances in Neurological Disorders 12, 1756286419838682 (2019) * [20] Weninger, L., Haarburger, C., Merhof, D.: Robustness of radiomics for survival prediction of brain tumor patients depending on resection status. 
Frontiers in computational neuroscience 13, 73 (2019) * [21] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al.: Scikit-learn: Machine learning in python. Journal of Machine Learning Research 12, 2825–2830 (2011) * [22] Lao, J., Chen, Y., Li, Z.C., Li, Q., Zhang, J., Liu, J., Zhai, G.: A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Scientific reports 7(1), 1–8 (2017) * [23] Chaddad, A., Desrosiers, C., Hassan, L., Tanougast, C.: A quantitative study of shape descriptors from glioblastoma multiforme phenotypes for predicting survival outcome. The British journal of radiology 89(1068), 20160575 (2016)
# Design, analysis and control of the series-parallel hybrid RH5 humanoid robot Julian Esser1, Shivesh Kumar1, Heiner Peters1, Vinzenz Bargsten1, Jose de Gea Fernandez1, Carlos Mastalli2, Olivier Stasse3 and Frank Kirchner1 1The authors are with the Robotics Innovation Center, DFKI GmbH, 28359 Bremen, Germany. Corresponding Author’s Email<EMAIL_ADDRESS>2Carlos Mastalli is with the Alan Turing Institute at the University of Edinburgh, Edinburgh, United Kingdom. 3Olivier Stasse is with the GEPETTO group at LAAS-CNRS, Toulouse, France. This research was supported by the German Aerospace Center (DLR) with federal funds (Grant Numbers: FKZ 50RA1701 and FKZ 01IW20004 respectively) from the Federal Ministry of Education and Research (BMBF). O. Stasse and C. Mastalli acknowledge the support of the European Commission under the Horizon 2020 project Memory of Motion (MEMMO, project ID: 780684), and the Engineering and Physical Sciences Research Council (EPSRC) UK RAI Hub for Offshore Robotics for Certification of Assets (ORCA, grant reference EP/R026173/1). ###### Abstract The last decades of humanoid research have shown that humanoids developed for high dynamic performance require a stiff structure and an optimal distribution of mass–inertial properties. Humanoid robots built with a purely tree-type architecture tend to be bulky and usually suffer from velocity and force/torque limitations. This paper presents a novel series-parallel hybrid humanoid called RH5, which is 2 m tall, weighs only 62.5 kg, and is capable of performing heavy-duty dynamic tasks with 5 kg payloads in each hand. The analysis and control of this humanoid are performed with a whole-body trajectory optimization technique based on differential dynamic programming (DDP). Additionally, we present an improved contact stability soft-constrained DDP algorithm which is able to generate physically consistent walking trajectories for the humanoid that can be tracked via simple PD position control in a physics simulator.
Finally, we showcase preliminary experimental results on the RH5 humanoid robot. ## I Introduction Humanoid robots are designed to resemble the human body and/or human behavior. Recent research indicates that humanoid robots require a stiff structure and good mass distribution for high dynamic tasks [1]. These properties can be easily achieved by utilizing Parallel Kinematic Mechanisms (PKM) in the design, as they provide higher stiffness, accuracy, and payload capacity compared to serial robots. However, most existing bipedal robot designs are based on serial kinematic chains. Series–parallel hybrid designs combining the advantages of serial and parallel topologies are commonly used in the field of heavy machinery, e.g., cranes, excavator arms, etc. However, such designs also have recently caught the attention of robotics researchers from industry and academia (see [2] for an extensive survey). For instance, the Lola humanoid robot [3] has a spatial slider crank mechanism in the knee joint and a two DOF rotational parallel mechanism in the ankle joint. Similarly, the Aila humanoid robot [4] employs parallel mechanisms for its wrist, neck, and torso joints. Furthermore, the design of the NASA Valkyrie humanoid robot [5], built by the NASA Johnson Space Center, follows a similar design concept by utilizing PKM modules for its wrist, torso and ankle joints. Both torque controlled humanoid robots TORO from DLR [6] and TALOS [7] from PAL Robotics mostly contain serial kinematic chains but utilize simple parallelogram linkages in their ankles for creating the pitch movement. The motivation of such hybrid designs is to achieve a lightweight and compact robot while enhancing the stiffness and dynamic characteristics. However, the evaluation of the humanoid design is still non–trivial since it necessitates whole-body trajectory optimization techniques which exploit the full dynamics of the system. 
Figure 1: The RH5 humanoid robot performing a dynamic walking motion while carrying two 5 kg bars. Trajectory Optimization (TO) is a numerical optimization technique that aims to find a state-control sequence, which locally minimizes a cost function and satisfies a set of constraints. TO based on reduced centroidal dynamics [8, 9] has become a popular approach in the legged robotics community. However, tracking of centroidal motions requires an instantaneous feedback linearization, where typically quadratic programs with task-space dynamics are solved (e.g., [6]). While TO based on reduced dynamics models has shown great experimental results (e.g., [10]), whole-body TO instead is proven to produce more efficient motions, with lower forces and impacts [11]. To this end, we focus on a DDP [12] variant, called Box-FDDP [13], to efficiently compute dynamic whole-body motions, as depicted in Fig. 1. However, the trajectories generated with those solvers often require an additional stabilizing controller to reproduce the behavior in another simulator or the real robot [14]. Figure 2: Actuation and morphology of the RH5 humanoid robot (S: Spherical, R: Revolute, P: Prismatic, U: Universal) ##### Contributions First, we introduce RH5: a novel series–parallel hybrid humanoid robot that has a lightweight modular design, high stiffness and outstanding dynamic properties. Our robot can perform heavy-duty tasks and dynamic motions. Second, we present an analysis of the RH5 design by generating highly dynamic motions using the Box-FDDP algorithm. Third, we present a contact stability soft-constrained DDP trajectory optimization approach which generates physically consistent walking trajectories. Fourth, we present both simulation and preliminary experimental results on the RH5 robot. ##### Organization Section II describes the mechatronic system design of the novel RH5 humanoid robot with details about its mechanical design, electronics design and processing architecture. 
Section III presents the analysis and control of the system based on the Box-FDDP algorithm. Section IV presents the simulation and first experimental results on the system and Section V concludes the paper. ## II System design of RH5 humanoid This section provides details on the mechanical design, electronics design and processing architecture of the RH5 humanoid robot. ### II-A Mechanical Design The robot has been designed with proportions close to human. The robot has 34 DOF as depicted in Fig. 2. The robot is symmetric around the XZ plane, and its overall weight and height are 62.5 kg and 2 m, respectively. The RH5 robot has a series-parallel hybrid actuation that reduces its weight and improves its structural stiffness and dynamic characteristics. Below, we describe the actuation principle and design of legs, torso, head and arms. #### II-A1 Actuation Principle We use serially arranged rotary actuators to increase the range of motion. However, for joints with small range of motion, we exploit the advantages of parallel kinematics. These include non–linear transmission ratio, superposition of forces of parallel actuators, higher joint stiffness and optimal mass distribution in order to reduce the inertia of the robot’s extremities. We use high torque BLDC motors and harmonic drive gears for joints with direct rotary actuation in serial chains. We utilize this type of drive unit in the three DOF shoulder joints, torso (yaw), hip joints (yaw, roll), elbow and wrist (roll). The head joints are actuated with commercially available servo drives. Parallel drive concepts are implemented using linear drive units consisting of a high torque BLDC motor in combination with a ball screw. We actuate the hip joints (pitch), the body joint (pitch, roll) as well as the knee and ankle joints of the RH5 robot according to this design (see Table I for an overview). Commercial linear drive units are used to actuate the wrists. 
Non-linear transmission of the parallel mechanisms was optimized and exploited especially in the joints for the forward movement of the locomotive extremities (hip pitch, knee, ankle pitch). The joint angle at which the highest torque occurs was chosen such that it lies within the range of the highest torque requirements to be expected according to the gait pattern described in [15]. Near the limits of the joint’s Range Of Motion (ROM), the available torque decreases in favor of a higher speed. Using a highly integrated 2-SPRR+1U parallel mechanism [16] in the lower extremities enables an ankle design that outperforms the ankles of similar humanoid robots at almost half of their weight (see Table II). Table III shows the ROM, speed and torque limits in the generalized coordinates (see [17] for a detailed analysis).

Actuator | ROM ($\mathrm{mm}$) | Max. force ($\mathrm{N}$) | Max. vel. ($\mathrm{mm/s}$)
---|---|---|---
Wrist | $235$–$290$ | $495$ | $38$
Torso | $195$–$284$ | $2716$ | $291$
Hip3 | $272$–$431$ | $4740$ | $175$
Knee | $273$–$391$ | $5845$ | $140$
Ankle | $221$–$331$ | $2000$ | $265$

TABLE I: ROM of linear actuators of the RH5 robot.

Robot | Mass ($\mathrm{kg}$) | Ankle DOF | ROM (∘) | Torque ($\mathrm{N}\mathrm{m}$) | Velocity $(^{\circ}/s)$
---|---|---|---|---|---
TORO | 7.65 | Roll | $-19.5$–$19.5$ | $40$ | $120$
 | | Pitch | $-45$–$45$ | $130$ | $176$
TALOS | 6.65 | Roll | $-30$–$30$ | $100$ | $275$
 | | Pitch | $-75$–$45$ | $160$ | $332$
RH5 | 3.6 | Roll | $-57$–$57$ | $84$–$158$ | $386$–$726$
 | | Pitch | $-51.5$–$45$ | $121$–$304$ | $200$–$502$

TABLE II: Comparison of lower limb design characteristics between the TORO, TALOS and RH5 humanoid robots.

#### II-A2 Leg The two legs of the robot are identical in construction and follow a Spherical–Revolute–Universal (SRU) kinematic design.
Each leg has a 3 DOF hip joint (realized with a 2 DOF serial mechanism and a 1-RRPR mechanism), a 1 DOF knee joint (1-RRPR mechanism) and a 2 DOF ankle joint (2-SPRR+1U mechanism). The rotation axes of the hip joint intersect at a single point that is located at approximately half of the total height of the robot, at 930 mm. The distance between both hip joints is 220 mm. To adjust the available range of motion, the first joint axis was tilted by 15 degrees with respect to the XY-plane of the robot. The lengths of the upper and lower leg are almost identical, at 410 and 420 mm, respectively. Upper and lower leg are connected by the knee joint. The ankle joint has two rotation axes that intersect at the same point. The axis intersection point is 100 mm above the ground contact surface. Contact with the ground is made via 4 contact points, which span a support polygon of 80 mm × 200 mm. The total mass of a leg is 9.8 kg, of which 6.2 kg are assigned to the thigh and hip joint, 2.3 kg to the lower leg, and 1.3 kg to the foot. #### II-A3 Torso and Head We use a spherical body joint with 3 DOF (a 2-SPU+1U unit) to expand the body ROM, which translates to i) the realization of more complex walking patterns, ii) the improvement of the robot’s balance, and iii) a larger manipulation space. The intersection point of the joint axes is at a height of 1140 mm above the foot contact area, and the joint unit weighs 4.8 kg. The body joint carries the torso, which contains most of the electronics and the battery of the robot and acts as a connecting structure between the robot’s extremities. The torso weighs 21 kg in total. The robot also has a head that serves as a sensor carrier for imaging and acoustic perception; it is mounted via a joint with 3 DOF. The intersection point of the joint axes is at a height of 1800 mm above the foot contact area. The head weighs 3.3 kg and includes a laser scanner, a stereo camera, microphones, an infrared camera and some processing units.
Joint | ROM (∘) | Max. torque ($\mathrm{N\,m}$) | Max. vel. ($^{\circ}/\mathrm{s}$)
---|---|---|---
Shoulder1 | $-180^{\circ}$–$180^{\circ}$ | $135$ | $210$
Shoulder2 | $-110^{\circ}$–$110^{\circ}$ | $167$ | $131$
Shoulder3 | $-180^{\circ}$–$180^{\circ}$ | $135$ | $210$
Elbow | $-125^{\circ}$–$125^{\circ}$ | $23$ | $413$
Wrist Roll | $-180^{\circ}$–$180^{\circ}$ | $18$ | $660$
Wrist Pitch | $-46.8^{\circ}$–$46.8^{\circ}$ | $24$–$35$ | $60$–$106$
Wrist Yaw | $-39.6^{\circ}$–$57.6^{\circ}$ | $22$–$35$ | $62$–$100$
Torso yaw | $-40^{\circ}$–$40^{\circ}$ | $23$ | $413$
Torso pitch | $-25^{\circ}$–$29^{\circ}$ | $380$–$493$ | $184$–$238$
Torso roll | $-36^{\circ}$–$36^{\circ}$ | $285$–$386$ | $208$–$400$
Hip1 | $-180^{\circ}$–$180^{\circ}$ | $135$ | $210$
Hip2 | $-46^{\circ}$–$67^{\circ}$ | $135$ | $210$
Hip3 | $-17^{\circ}$–$72^{\circ}$ | $357$–$540$ | $88$–$133$
Knee | $0^{\circ}$–$88^{\circ}$ | $337$–$497$ | $94$–$139$
Ankle pitch | $-51.5^{\circ}$–$45^{\circ}$ | $121$–$304$ | $200$–$502$
Ankle roll | $-57^{\circ}$–$57^{\circ}$ | $84$–$158$ | $386$–$726$

TABLE III: ROM of the RH5 humanoid robot in its independent joint space (generalized coord. when robot is fixed).

#### II-A4 Arm The robot is equipped with two manipulators. Each manipulator includes a 3 DOF shoulder joint, a 1 DOF elbow, a 3 DOF wrist (realized with a rotary actuator in series with a 2-SPU+1U mechanism) and a 1 DOF underactuated gripper. The intersection points of the shoulder joint axes are 640 mm apart between the right and left shoulder. The first axis is tilted forward by 14 degrees with respect to the XZ-plane of the robot to increase the manipulation area in front of the torso. The lengths of the upper and lower arms are 355 mm and 386 mm, respectively. Upper and lower arm are coupled by the elbow joint. The three joint axes of the wrist also form a common point of intersection.
The end effector is a self-adaptive three-finger gripper, whose individual fingers are simultaneously actuated. The upper and lower arm including the gripper weigh 3.6 and 3.3 kg, respectively. ### II-B Electronic Design and Processing Architecture The RH5 humanoid robot uses a hybrid control approach that combines local control loops for low-level motor control with central controllers for high-level control, as depicted in Fig. 3. #### II-B1 Decentralized Actuator-Level Controllers Each of the individual actuators is controlled by dedicated electronics placed near the actuator. On the hardware side, this modular approach reduces the cabling effort, as it is sufficient to have shared power and communication lines to the central controllers. The individual electronics are composed of one or two motor driver boards, a processing board based on a _Xilinx Spartan 6_ Field Programmable Gate Array (FPGA), and a board connecting sensors and communication lines. In addition, the hardware structure at the control level allows decentralized low-level control, which enables local control loops with low latency. These local controllers are implemented as a cascade of feedback controllers for motor current, velocity and position, which run at frequencies of 32 kHz, 4 kHz and 1 kHz, respectively. Additionally, the local controllers provide feed-forward connections to the high-level controllers. This allows velocity and motor current to be fed forward, so that the amount of feedback can be locally limited to achieve a desired compliant behavior. Note that joint position and velocity can be mapped between the independent joint space and actuator space locally, which is also needed for the initialization of the motor’s incremental encoder position offset if the absolute position sensor measures the independent joint position. Figure 3: Electronic and control units of the RH5 robot.
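The cascaded current/velocity/position structure described above can be illustrated with nested proportional loops running at divided rates. This is a minimal sketch under invented assumptions: the gains, the pure-software tick scheduler and the proportional-only loops are placeholders for the actual FPGA controllers, whose gains, integral action and feed-forward paths are not reproduced here.

```python
# Sketch of a cascaded position -> velocity -> current controller with
# rate division: the inner current loop runs every tick (32 kHz), the
# velocity loop every 8th tick (4 kHz), the position loop every 32nd
# tick (1 kHz). All gains and the plant interface are invented.

def make_cascade(kp_pos=50.0, kp_vel=5.0, kp_cur=1.0):
    state = {"vel_ref": 0.0, "cur_ref": 0.0}

    def step(tick, pos_ref, pos, vel, cur):
        if tick % 32 == 0:          # outer position loop (1 kHz)
            state["vel_ref"] = kp_pos * (pos_ref - pos)
        if tick % 8 == 0:           # middle velocity loop (4 kHz)
            state["cur_ref"] = kp_vel * (state["vel_ref"] - vel)
        # inner current loop (32 kHz) returns the voltage command
        return kp_cur * (state["cur_ref"] - cur)

    return step
```

Between updates of an outer loop, its last setpoint is held, which is what makes the locally limited feedback of the real controllers possible.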
#### II-B2 Central Electronics for Mid- & High-Level Control A hybrid FPGA/ARM-based system translates and routes status and command messages between the actuators, sensors and a central control PC connected via an Ethernet connection. In order to maximize the number of packets transmitted to the central control PC while guaranteeing an upper bound on the transmission delay, we implement a routine to synchronize the translation layer to the command messages. On the control PC, the robot middleware ROCK is used. Software components within this framework act as drivers, which handle the actuator setup and data exchange. The framework also provides a robot-agnostic interface to the software components implemented in the high-level control. The driver components run periodically at a frequency of 1 kHz, resulting in a round trip time of 1 ms. ## III Analysis and control using DDP This section describes the trajectory optimization approach and outlines the simulation and control architecture. ### III-A Contact Stability Soft-Constrained DDP #### III-A1 Formulation of the Trajectory Optimization Problem Consider a system with discrete-time dynamics $\bm{\mathit{x}}_{i+1}=\bm{\mathit{f}}(\bm{\mathit{x}}_{i},\bm{\mathit{u}}_{i}),$ (1) where the generic function $\bm{\mathit{f}}$ describes the evolution of the state $\bm{\mathit{x}}\in\mathbb{R}^{n}$ from time $i$ to $i+1$, given the control $\bm{\mathit{u}}\in\mathbb{R}^{m}$.
The total cost $J$ of a trajectory can be written as the sum of running costs $\ell$ and a final cost $\ell_{f}$ starting from the initial state $\bm{\mathit{x}}_{0}$ and applying the control sequence $\bm{\mathit{u}}$ along the finite time-horizon: $J(\bm{\mathit{x}}_{0},\bm{\mathit{u}})=\ell_{f}(\bm{\mathit{x}}_{N})+\sum_{i=0}^{N-1}\ell(\bm{\mathit{x}}_{i},\bm{\mathit{u}}_{i}).$ (2) The cost $\ell$ at one discrete time-point (i.e., node) of the optimization depends on the assigned weights $\alpha_{c}$ and the corresponding cost terms $\Phi_{c}$ as $\ell=\sum_{c=1}^{C}\alpha_{c}\Phi_{c}(\bm{\mathit{x}},\bm{\mathit{u}}).$ (3) Hence, we write the generic optimal control problem as $\displaystyle\bm{\mathit{X}}^{*},\bm{\mathit{U}}^{*}=$ $\displaystyle\arg\min_{\mathbf{X},\mathbf{U}}\ell_{N}(\bm{\mathit{x}}_{N})+\sum_{k=0}^{N-1}\int_{t_{k}}^{t_{k}+\Delta t}\ell_{k}(\bm{\mathit{x}},\bm{\mathit{u}})dt,$ (4) $\displaystyle\text{s.t.}\quad\quad\quad\underline{\bm{\mathit{u}}}\leq\bm{\mathit{u}}\leq\bar{\bm{\mathit{u}}},$ (5) $\displaystyle\quad\quad\quad\quad\dot{\bm{\mathit{x}}}=\bm{\mathit{g}}(\bm{\mathit{x}},\bm{\mathit{u}})$ where a complete trajectory $\bm{\mathit{X}},\bm{\mathit{U}}$ is a sequence of states $\bm{\mathit{X}}=\\{\bm{\mathit{x}}_{0},\bm{\mathit{x}}_{1},...,\bm{\mathit{x}}_{N}\\}$ and control inputs $\bm{\mathit{U}}=\\{\bm{\mathit{u}}_{0},\bm{\mathit{u}}_{1},...,\bm{\mathit{u}}_{N-1}\\}$ satisfying the system dynamics of Eq. 1, and $\underline{\bm{\mathit{u}}}$ and $\bar{\bm{\mathit{u}}}$ are the lower and upper torque limits of the system, respectively. To solve the trajectory optimization problem of Eq. 4, we use the Box-FDDP algorithm [13], which is publicly available in the open-source library Crocoddyl [18]. The Box-FDDP algorithm can compute highly-dynamic motions thanks to its direct-indirect hybridization approach.
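The cost structure of Eqs. (2)–(3) can be sketched in a few lines: the running cost at a node is a weighted sum of cost terms, and the total cost accumulates it over the horizon. The quadratic terms used below are invented placeholders for the paper's $\Phi_{c}$ residual costs.

```python
import numpy as np

def running_cost(x, u, terms):
    """Eq. (3): l = sum_c alpha_c * Phi_c(x, u).
    terms: list of (weight alpha_c, callable Phi_c(x, u))."""
    return sum(alpha * phi(x, u) for alpha, phi in terms)

def total_cost(xs, us, terms, terminal):
    """Eq. (2): J = l_f(x_N) + sum_i l(x_i, u_i), len(xs) == len(us) + 1."""
    J = terminal(xs[-1])
    for x, u in zip(xs[:-1], us):
        J += running_cost(x, u, terms)
    return J
```

A DDP solver additionally needs the Jacobians and Hessians of each term; this sketch only shows the cost evaluation that those derivatives linearize.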
#### III-A2 System Dynamics The dynamics of a floating-base system is given as: $\bm{\mathit{M}}(\bm{\mathit{q}})\dot{\bm{\mathit{v}}}+\bm{\mathit{h}}(\bm{\mathit{q}},\bm{\mathit{v}})=\bm{\mathit{S}}\bm{\mathit{\tau}}+\sum_{i=1}^{k}\bm{\mathit{J}}_{c_{i}}^{T}\bm{\mathit{\lambda}}_{i},$ (6) where $\bm{\mathit{M}}$ is the generalized inertia matrix, $\bm{\mathit{q}}\in SE(3)\times\mathbb{R}^{n}$ are _generalized coordinates_ , $\bm{\mathit{v}}$ is the tangent vector, $\bm{\mathit{S}}$ is the actuator selection matrix, $\bm{\mathit{J}}_{c_{i}}$ is the Jacobian at the location of a contact frame $c_{i}$ and $\bm{\mathit{\lambda}}_{i}$ is the contact wrench acting on the contact link $i$. #### III-A3 Rigid Contact Constraints Contacts can be expressed as a kinematic constraint on the equation of motion Eq. 6 as $\bm{\mathit{J}}_{c}\dot{\bm{\mathit{v}}}+\dot{\bm{\mathit{J}}}_{c}\bm{\mathit{v}}=\bm{\mathit{0}}.$ (7) The holonomic contact constraint $\phi(\bm{\mathit{q}})=0$ is differentiated twice to express it in the acceleration space. Consequently, the contact condition can be seen as a first-order differential-algebraic equation with $\bm{\mathit{J}}_{c}=\begin{bmatrix}\bm{\mathit{J}}_{c_{1}}&\cdots&\bm{\mathit{J}}_{c_{k}}\end{bmatrix}$ as a stack of $k$ contact Jacobians. Finally, the multi-contact dynamics can be expressed as $\left[\begin{matrix}\bm{\mathit{M}}&\bm{\mathit{J}}^{\top}_{c}\\\ {\bm{\mathit{J}}_{c}}&\bm{\mathit{0}}\end{matrix}\right]\left[\begin{matrix}\dot{\bm{\mathit{v}}}\\\ -\bm{\lambda}\end{matrix}\right]=\left[\begin{matrix}\bm{\mathit{S\tau}}-\bm{\mathit{h}}\\\ -\dot{\bm{\mathit{J}}}_{c}\bm{\mathit{v}}\end{matrix}\right].$ (8) For more details about hybrid optimal control (OC) using this contact dynamics, see [11]. #### III-A4 Optimization Constraints We consider constraints of the trajectory optimization problem via a cost penalization in Eq. 4.
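Before detailing the individual cost terms, note that the multi-contact dynamics of Eq. (8) amounts to one linear solve per node. A minimal numerical sketch, with dense matrices, invented dimensions and the assumption that the KKT matrix is invertible:

```python
import numpy as np

def contact_dynamics(M, Jc, tau_bar, drift):
    """Solve Eq. (8) for v_dot and lambda.
    M: (nv, nv) inertia matrix; Jc: (nc, nv) stacked contact Jacobian;
    tau_bar = S*tau - h; drift = -Jc_dot @ v."""
    nv, nc = M.shape[0], Jc.shape[0]
    K = np.block([[M, Jc.T],
                  [Jc, np.zeros((nc, nc))]])
    sol = np.linalg.solve(K, np.concatenate([tau_bar, drift]))
    # The unknown vector in Eq. (8) is [v_dot; -lambda].
    return sol[:nv], -sol[nv:]
```

Dedicated rigid-body libraries exploit the sparsity and structure of this system; the dense solve here is only for illustration.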
Cost terms can incorporate either equality or inequality constraints, as described in the following. In the case of equality constraints, an arbitrary task can be formulated as a quadratic regulator term $\Phi_{\text{c}}=\mid\mid\bm{\mathit{f}}(t)-\bm{\mathit{f}}^{\text{ref}}(t)\mid\mid^{2}_{2},$ where $\bm{\mathit{f}}(t)$ and $\bm{\mathit{f}}^{\text{ref}}(t)$ are the actual and reference features, respectively. The DDP algorithm utilizes the derivatives of these regulator functions, namely the Jacobians and Hessians of the cost functions. We use equality constraints for the CoM tracking ($\Phi_{\text{CoM}}$) and the tracking of the left- and right-foot poses ($\Phi_{\text{foot}}$). Equally important for physically consistent trajectory optimization is the consideration of boundaries, such as robot limits and stability constraints. These inequality constraints can be included as penalization terms as well. To do so, we use a bounded quadratic term $\Phi_{\text{c}}=\begin{cases}\quad\dfrac{1}{2}\bm{\mathit{r}}^{T}\bm{\mathit{r}}&\text{if }\bm{\mathit{r}}<\underline{\mathbf{r}}\text{ or }\bm{\mathit{r}}>\bar{\mathbf{r}}\\\\[10.0pt] \quad 0&\text{if }\underline{\mathbf{r}}\leq\bm{\mathit{r}}\leq\bar{\mathbf{r}},\end{cases}$ (9) where $\bm{\mathit{r}}$ is the computed residual vector and $\underline{\mathbf{r}}$ and $\bar{\mathbf{r}}$ are the lower and upper bounds, respectively. In the scope of our work, we define inequality constraints for joint position and velocity limits ($\Phi_{\text{joints}}$), friction cone constraints ($\Phi_{\text{friction}}$) and the center of pressure ($\Phi_{\text{CoP}}$). In addition to the described constraints for tasks and physical consistency, we minimize the torques ($\Phi_{\text{torques}}$) and regularize the robot posture ($\Phi_{\text{posture}}$). #### III-A5 Contact Stability A key objective in trajectory optimization for legged systems is to ensure a balanced motion that prevents the robot from sliding and falling down.
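The bounded quadratic penalty of Eq. (9) can be sketched as follows. Note one assumption: this sketch penalizes the *violation* of the bounds, the common smooth quadratic-barrier variant, rather than the raw residual; the paper's exact residual choice may differ.

```python
import numpy as np

def bounded_quadratic(r, r_lo, r_hi):
    """Zero inside [r_lo, r_hi]; quadratic in the bound violation outside
    (a smooth variant of Eq. (9))."""
    violation = np.maximum(r_lo - r, 0.0) + np.maximum(r - r_hi, 0.0)
    return 0.5 * float(violation @ violation)
```

Penalizing the violation keeps the term zero, with zero gradient, throughout the feasible interval, so it only shapes the solution near the limits.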
We ensure the robot’s stability by applying the concept of the contact wrench cone [19], instead of the widely accepted zero-moment point criterion [20]. Note that the latter method is limited due to the assumptions of sufficiently high friction and the existence of a single planar contact surface; the former, in contrast, is also suitable for multi-contact OC. To this end, we model 6D surface contacts in the OC formulation of Eq. 4 with dedicated inequality constraints for unilaterality of the contact forces, Coulomb friction on the resultant force, and a center of pressure (CoP) inside the support area: $\displaystyle\begin{split}\lambda^{z}&>0,\\\ \mid\lambda^{x}\mid&\leq\mu\lambda^{z},\\\ \mid\lambda^{y}\mid&\leq\mu\lambda^{z},\\\ \mid c_{x}\mid&\leq X,\\\ \mid c_{y}\mid&\leq Y.\end{split}$ (10) In Eq. 10, $\mu$ denotes the static coefficient of friction and models a spatial friction cone, and $c_{x}$ and $c_{y}$ denote the position of the CoP with respect to the dimensions $X$ and $Y$ of the rectangular robot feet. This motion planning approach is what we call the contact stability soft-constrained DDP [21]. ### III-B Simulation and Control Architecture Figure 4: Simulation and experimental pipeline. We track the motions planned with the proposed trajectory optimization approach in real-time with a PD controller in the PyBullet simulator and with a joint-space online stabilization on the real system, as depicted in Fig. 4. In the following, details on the involved components are provided. The contact stability soft-constrained DDP approach computes inherently balanced motions that are stored in a trajectory file. This file contains the optimal state trajectories $\bm{\mathit{X}}^{*}$, OC inputs $\bm{\mathit{U}}^{*}$ and the resulting contact wrenches $\bm{\mathit{F}}_{\text{ext}}^{*}$ acting on the feet. The trajectories are interpolated to 1 kHz using cubic splines in order to ensure smoothness.
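The resampling step just described can be sketched with an off-the-shelf cubic spline. The knot spacing below (0.03 s) matches the step size reported in Table IV; the trajectory values are invented for illustration, and the real pipeline interpolates the optimized $\bm{\mathit{X}}^{*},\bm{\mathit{U}}^{*}$ arrays instead.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_to_1khz(t, X, freq=1000.0):
    """Interpolate a trajectory sampled at knot times t (shape (N,))
    with values X (shape (N, d)) to the control rate `freq`."""
    spline = CubicSpline(t, X, axis=0)
    n = int(round((t[-1] - t[0]) * freq)) + 1
    t_fine = np.linspace(t[0], t[-1], n)
    return t_fine, spline(t_fine)
```

Cubic interpolation keeps the resampled trajectory twice continuously differentiable, which avoids velocity and acceleration jumps at the original knots.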
The planned motions are computed based on a tree-type robot model. For dynamic real-time control, this simplified model turns out to be sufficient, although the accuracy is reduced [22]. Nevertheless, the problem remains of transforming the results from the independent joint space to the actuation space. We use the modular software framework HyRoDyn (Hybrid Robot Dynamics) [23] to map the trajectories generated for the serialized robot model and compute the forces of the respective linear actuators. Low-level actuator controllers compensate deviations from the reference trajectories. Analogously to the simulation pipeline, this real-time control approach uses a cascaded feedback of position, velocity and an additional current control loop. ## IV Results and discussions This section presents the evaluation of the robot design, simulation results and first experimental trials. ### IV-A Evaluation of Robot Design We evaluated the RH5 humanoid design by performing a wide range of complex motions. The motivation is to form a basis for decision-making in future design iterations that allow us to perform such tasks. Table IV provides details on the performed motions and Table V summarizes the results. #### IV-A1 Dynamic Walking Variants We study efficient motions for dynamic walking gaits with high velocities. To this end, we apply the proposed approach of contact stability soft-constrained DDP, where the CoP of each foot is constrained. By this, the solver is enabled to find an optimal, dynamic CoM shift along with the requested contact stability constraints. Fig. 5 shows that this approach yields dynamically balanced walking motions where the CoP of each foot (crosses) in contact stays within a predefined range. Following our motion planning approach, we observed that, for speeds greater than 0.35 m/s, the solver often needs to be initialized with a predefined CoM trajectory in order to find a feasible solution, as done in [11].
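The per-foot stability just described amounts to evaluating the inequalities of Eq. (10) at every node of the trajectory. A minimal per-node feasibility check, with wrench components, CoP coordinates and foot half-dimensions invented for the example (in the solver these conditions enter as soft penalty terms rather than a post-hoc test):

```python
def contact_stable(lam, cop, mu, half_x, half_y):
    """Check Eq. (10) for one foot: unilateral normal force, Coulomb
    friction on the tangential components, CoP inside the support area.
    lam: (fx, fy, fz) contact force; cop: (cx, cy) CoP in the foot frame;
    half_x, half_y: half-dimensions of the rectangular foot."""
    lx, ly, lz = lam
    cx, cy = cop
    return (lz > 0.0
            and abs(lx) <= mu * lz
            and abs(ly) <= mu * lz
            and abs(cx) <= half_x
            and abs(cy) <= half_y)
```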
Figure 5: Dynamic walking gait computed with the contact stability soft-constrained DDP. The motion is inherently balanced, since the CoPs (crosses) for both feet (LF, RF) remain inside the desired CoP region of 50% foot coverage (dashed lines). ##### Walking with weights (5 kg bars) We evaluated the capability of the RH5 robot to perform a dynamic walking gait at 0.35 m/s while carrying a 5 kg aluminum weight in each hand (see Fig. 1). A natural CoM shift emerges from the inequality constraints on the CoP of each foot. Fig. 6 shows that the optimal solution found is within the joint position and velocity limits as well as the torque limits. Figure 6: Optimal solution of the dynamic walking gait with 5 kg weights in each hand within the robot’s velocity and torque limits (dashed lines). ##### Walking with high speed (1 m/s) In order to analyze the limits of the RH5 humanoid, we successfully performed a fast dynamic walking gait at 1 m/s with a predefined CoM trajectory (see Table IV). Also for this dynamic walking gait, the OC solver found a feasible solution within the robot limits, demonstrating the versatility of the RH5 robot design. #### IV-A2 Squatting with Weights As for fast dynamic walking, we analyzed a sequence of dynamic squatting movements with a predefined CoM range of 20 cm (see Fig. 7). We also found that the joint position, velocity and torque limits were satisfied. (a) (b) (c) Figure 7: Squatting with 5 kg aluminum bars in each hand. #### IV-A3 Jumping Variants We analyzed the limits of the system design by performing highly-dynamic jumps. ##### Vertical jumping Although the RH5 robot has been designed for walking motions and not highly-dynamic ones, vertical jumps with a height of 1 cm can be performed within valid ranges for joint positions, velocities and torques. For a 10 cm jump, the joint positions and torques also remain within the limits.
However, velocity peaks at take-off exceed the limits of the body pitch and knee joints by factors of two and four, respectively. This effect is plausible, since both the knee and the torso swing are essential for a jump. We deployed a heuristic approach to identify the minimal design improvement by scaling the critical joint limits step by step until a feasible solution is found. For the 10 cm vertical jump, an optimal solution is obtained by scaling only the knee joint velocity limits of the robot by a factor of 3. ##### Jumping over multiple obstacles Finally, we investigated a more challenging jumping sequence over obstacles (see Fig. 8). Since the humanoid was not designed for such tasks, neither joint velocity nor torque limits can be satisfied. Further details on the formulation of the OC problems, the used optimization constraints, the extracted design guidelines and videos are provided in [21]. (a) (b) Figure 8: Sequence of challenging jumps over obstacles. TABLE IV: Characteristics and applied optimization constraints for a wide range of dynamic motions.
 | Motion Characteristics | Optimization Constraints
---|---|---
 | Length | Height | Total time | Step size | Tasks | Stability | Limits | Regularization
 | | | | | $\Phi_{\text{foot}}$ | $\Phi_{\text{CoM}}$ | $\Phi_{\text{friction}}$ | $\Phi_{\text{CoP}}$ | $\Phi_{\text{joint}}$ | $\Phi_{\text{posture}}$ | $\Phi_{\text{torque}}$
Dynamic walking with 5 kg weights | 0.5 m | 0.05 m | 1.5 s | 0.03 s | ✕ | | ✕ | ✕ | ✕ | ✕ | ✕
Fast dynamic walking (1 m/s) | 0.7 m | 0.1 m | 0.7 s | 0.03 s | ✕ | ✕ | | | ✕ | ✕ | ✕
Squatting with 5 kg weights | – | 0.2 m | 2 s | 0.03 s | ✕ | ✕ | ✕ | ✕ | ✕ | ✕ | ✕
Vertical jump ($h=$ 0.01 m) | – | 0.01 m | 0.9 s | 0.01 s | ✕ | | ✕ | ✕ | ✕ | ✕ | ✕
Vertical jump ($h=$ 0.1 m) | – | 0.1 m | 0.9 s | 0.01 s | ✕ | | ✕ | ✕ | ✕ | ✕ | ✕
Jumps over obstacles | 0.6 m | 0.25 m | 2.7 s | 0.01 s | ✕ | | ✕ | ✕ | ✕ | | ✕

TABLE V: Capabilities of the RH5 humanoid to perform a wide range of motions respecting the hardware limits.

Experiment | Pos. Lim. | Torque Lim. | Vel. Lim.
---|---|---|---
Walk with 5 kg weights | ✓ | ✓ | ✓
Dynamic walk (1 m/s) | ✓ | ✓ | ✓
Squats with 5 kg weights | ✓ | ✓ | ✓
Vertical jump ($h=$ 0.01 m) | ✓ | ✓ | ✓
Vertical jump ($h=$ 0.1 m) | ✓ | ✓ | ✕
Forward obstacle jumps | ✓ | ✕ | ✕

### IV-B Simulation Results We verified the stability of the optimized dynamic walking motion in the PyBullet simulator using a joint-space PD controller. Fig. 9 shows the motion of the uncontrolled floating base. As can be seen, the floating base deviates by about $\pm$ 10 mm in the x- and y-directions and by $+$ 5 mm in the z-direction. Figure 9: Motion of the floating base resulting from joint-level control for the dynamic walking gait. The motions turn out to be inherently balanced due to the proposed contact stability soft-constrained DDP approach. Hence, in contrast to the work of [14], our trajectories did not require a dedicated online stabilizer to generate a physically consistent motion.
### IV-C Experimental Trials We conducted three experiments with increasing levels of difficulty. The goal of the first experiment is to test the ability of the controller to track a slow balancing task. The quasi-static motion consists of five phases, as visualized in Fig. 10. (a) (b) (c) (d) (e) Figure 10: Experiment I: one-leg balancing from (a) an initial pose, (b) CoM shift above the LF, (c) lifting the RF up and (d) down and (e) recovering to the initial pose. The second experiment deals with the stabilization of a static stepping motion (see Fig. 11). The objective of this test is to analyze the effect of more difficult swing-leg motions, a sequence of two steps and the effect of impacts. (a) (b) (c) (d) (e) Figure 11: Experiment II: static stepping motion from (a) an initial pose, (b) CoM shift above the LF, (c,d) performing a right step and (e) shifting the CoM to the center of the SP. The objective of the third experiment is to evaluate the tracking performance in the context of a dynamic motion. In contrast to the first two motions, the fast squatting experiment (see Fig. 13) involves dynamic forces acting on the robot, resulting from a fast vertical base movement in the range of 15 cm within two seconds. Overall, the three planned motions could be stabilized with good accuracy by the controller on the real system. Fig. 12 shows the tracking performance for the one-leg balancing experiment. The control architecture allows following the computed reference trajectory closely, both in actuator space (a,b) and independent joint space (c,d). This precise tracking is achieved with high-gain joint-space control, which allows a quick compensation of position errors but comes at the cost of reduced compliance in the joints. (a) LLAnkleAct1 (b) LLAnkleAct2 (c) LLAnkleRoll (d) LLAnklePitch Figure 12: Tracking performance for the one-leg balancing experiment in actuators (a,b) and independent joints (c,d).
The impact phase turned out to be the main problem for the walking experiments. This is reasonable since the utilized control approach only compensates for errors in joint space, while errors in task space can arise quickly and are not compensated. (a) (b) (c) (d) (e) Figure 13: Experiment III: sequence of fast squats from (a) an initial pose, (b,d,f) descending the CoM by 15 cm and (c,e,g) recovering to the initial pose. ## V Conclusion This paper presented the design and analysis of a novel series–parallel hybrid humanoid robot named RH5, which has a lightweight design and good dynamic characteristics. We see large potential in using DDP-based whole-body TO to evaluate the capabilities of humanoid robots. The preliminary experiments indicate that the proposed planning approach efficiently generates physically consistent motions for the RH5 humanoid robot. Future work includes experiments with online stabilization to realize heavy-duty tasks with the real system. We also plan to address the resolution of internal closed loops along with the holonomic constraints imposed by the contacts within the DDP formulation. ## References * [1] O. Stasse and T. Flayols, _An Overview of Humanoid Robots Technologies_. Cham: Springer International Publishing, 2019, pp. 281–310. * [2] S. Kumar, H. Wöhrle, J. de Gea Fernández, A. Müller, and F. Kirchner, “A survey on modularity and distributivity in series-parallel hybrid robots,” _Mechatronics_ , vol. 68, p. 102367, 2020. * [3] S. Lohmeier, T. Buschmann, H. Ulbrich, and F. Pfeiffer, “Modular joint design for performance enhanced humanoid robot lola,” in _ICRA_ , 2006. * [4] J. Lemburg, J. de Gea Fernández, M. Eich, D. Mronga, P. Kampmann, A. Vogt, A. Aggarwal, Y. Shi, and F. Kirchner, “Aila - design of an autonomous mobile dual-arm robot,” in _ICRA_ , 2011. * [5] N. A. Radford, P. Strawser, K. Hambuchen, J. Mehling, et al., “Valkyrie: Nasa’s first bipedal humanoid robot,” _Journal of Field Robotics_ , vol. 32, no. 3, pp.
397–419, 2015. * [6] J. Englsberger, A. Werner, C. Ott, B. Henze, M. A. Roa, G. Garofalo, R. Burger, A. Beyer, O. Eiberger, K. Schmid, and A. Albu-Schäffer, “Overview of the torque-controlled humanoid robot toro,” in _Humanoids_ , 2014. * [7] O. Stasse, T. Flayols, et al., “Talos: A new humanoid research platform targeted for industrial applications,” in _Humanoids_ , 2017. * [8] J. Carpentier, S. Tonneau, M. Naveau, O. Stasse, and N. Mansard, “A versatile and efficient pattern generator for generalized legged locomotion,” in _ICRA_. IEEE, 2016. * [9] B. Aceituno-Cabezas, C. Mastalli, H. Dai, M. Focchi, A. Radulescu, D. G. Caldwell, J. Cappelletto, J. C. Grieco, G. Fernández-López, and C. Semini, “Simultaneous contact, gait, and motion planning for robust multilegged locomotion via mixed-integer convex optimization,” _RA-L_ , vol. 3, no. 3, pp. 2531–2538, 2017. * [10] S. Fahmi, C. Mastalli, M. Focchi, and C. Semini, “Passive whole-body control for quadruped robots: Experimental validation over challenging terrain,” _RA-L_ , vol. 4, no. 3, pp. 2553–2560, 2019. * [11] R. Budhiraja, J. Carpentier, C. Mastalli, and N. Mansard, “Differential dynamic programming for multi-phase rigid contact dynamics,” in _Humanoids_. IEEE, 2018. * [12] D. Mayne, “A second-order gradient method for determining optimal trajectories of non-linear discrete-time systems,” _International Journal of Control_ , vol. 3, no. 1, pp. 85–95, jan 1966. * [13] C. Mastalli, W. Merkt, J. Marti-Saumell, H. Ferrolho, J. Sola, N. Mansard, and S. Vijayakumar, “A direct-indirect hybridization approach to control-limited ddp,” _arXiv:2010.00411_ , 2021. * [14] K. Giraud-Esclasse, P. Fernbach, G. Buondonno, C. Mastalli, and O. Stasse, “Motion planning with multi-contact and visual servoing on humanoid robots,” in _SII_. IEEE, 2020. * [15] A. B. Zoss, H. Kazerooni, and A. Chu, “Biomechanical design of the berkeley lower extremity exoskeleton (bleex),” _IEEE/ASME Transactions on Mechatronics_ , vol.
11, no. 2, pp. 128–138, April 2006. * [16] S. Kumar, A. Nayak, H. Peters, C. Schulz, A. Müller, and F. Kirchner, “Kinematic analysis of a novel parallel 2sprr+1u ankle mechanism in humanoid robot,” in _Advances in Robot Kinematics 2018_ , J. Lenarcic and V. Parenti-Castelli, Eds., 2019, pp. 431–439. * [17] S. Kumar, “Modular and analytical methods for solving kinematics and dynamics of series-parallel hybrid robots,” Ph.D. dissertation, Universität Bremen, 2019. * [18] C. Mastalli, R. Budhiraja, W. Merkt, G. Saurel, B. Hammoud, M. Naveau, J. Carpentier, S. Vijayakumar, and N. Mansard, “Crocoddyl: An Efficient and Versatile Framework for Multi-Contact Optimal Control,” in _ICRA_ , 2020. * [19] S. Caron, Q.-C. Pham, and Y. Nakamura, “Stability of surface contacts for humanoid robots: Closed-form formulae of the contact wrench cone for rectangular support areas,” in _ICRA_. IEEE, 2015. * [20] M. Vukobratović and J. Stepanenko, “On the stability of anthropomorphic systems,” _Mathematical biosciences_ , vol. 15, no. 1-2, 1972. * [21] J. Esser, “Highly-dynamic movements of a humanoid robot using whole-body trajectory optimization,” Master’s thesis, University of Duisburg-Essen, Nov 2020. * [22] S. Kumar, J. Martensen, A. Mueller, and F. Kirchner, “Model simplification for dynamic control of series-parallel hybrid robots-a representative study on the effects of neglected dynamics,” in _IROS_. IEEE, 2019. * [23] S. Kumar, K. A. v. Szadkowski, A. Mueller, and F. Kirchner, “An analytical and modular software workbench for solving kinematics and dynamics of series-parallel hybrid robots,” _Journal of Mechanisms and Robotics_ , vol. 12, no. 2, 2020.
Characterization of the pseudo-scaling functions on Vilenkin group Prasadini Mahapatra ###### Abstract The study of wavelets originated from applications in diverse fields and is unified by their mathematical properties. Initially, wavelets and their variants were studied in the real space $\mathbb{R}^{n}$, but they are now being investigated in various abstract settings; the present paper contributes to this extension. Vilenkin groups, introduced by N. Ya. Vilenkin, form a class of locally compact abelian groups. In this paper, Parseval frame multiwavelets associated to multiresolution analysis (MRA) are characterized in $L^{2}(G)$, where $G$ is a Vilenkin group. Further, we introduce the pseudo-scaling function along with a class of generalized low-pass filters and study their properties on Vilenkin groups. ## 1 Introduction During the last two decades, several authors have studied generalizations and extensions of wavelets. Multiresolution analysis (MRA) is a fundamental tool for constructing scaling functions, which have appeared in very different contexts. This paper is related to one such generalization of wavelets. In 1923, J. Walsh introduced the Walsh functions as linear combinations of Haar functions. Later, N. J. Fine and N. Ya. Vilenkin independently determined that the Walsh system is the group of characters of the Cantor dyadic group. Vilenkin introduced a large class of locally compact abelian groups, now called Vilenkin groups, of which the Cantor dyadic group is a particular case. A refinement equation gives a refinable function, which generates an MRA and hence wavelets if the mask satisfies certain conditions. Necessary and sufficient conditions on the mask of a scaling function $\phi$, in terms of a modified Cohen's condition and blocked sets, have been given for $\phi$ to generate an MRA. Work on wavelets and multiwavelets on Vilenkin groups has been done in several papers.
In the case of a Vilenkin group whose associated prime $p$ is greater than 2, an MRA generates a multiwavelet set consisting of $p-1$ functions. In Section 2, we introduce pseudo-scaling functions associated with filters and generalized filters. We then characterize the generalized low pass filters, which motivates the construction of a subclass of MRA Parseval frame multiwavelets. Furthermore, we study the associated class of pseudo-scaling functions, which are not necessarily obtained from a multiresolution analysis. ### 1.1 Preliminaries The Vilenkin group $G$ is defined as the group of sequences $x=(x_{j})=(...,0,0,x_{k},x_{k+1},x_{k+2},...),$ where $x_{j}\in\\{0,1,...,p-1\\}$, $p$ is prime, for $j\in\mathbb{Z}$ and $x_{j}=0$, for $j<k=k(x)$. The group operation on $G$, denoted by $\oplus$, is defined as coordinatewise addition modulo $p$: $(z_{j})=(x_{j})\oplus(y_{j})\Leftrightarrow z_{j}=x_{j}+y_{j}(\text{mod}\>p),\text{ for }j\in\mathbb{Z}.$ $\theta$ denotes the identity element (zero) of $G$. Let $U_{l}=\\{(x_{j})\in G:x_{j}=0\>\text{for}\>j\leq l\\},\qquad l\in\mathbb{Z},$ be a system of neighbourhoods of zero in $G$. In a topological group, once a neighbourhood system $\\{U_{l}\\}_{l\in\mathbb{Z}}$ of zero is known, a neighbourhood system of every point $x=(x_{j})\in G$ is given by $\\{U_{l}\oplus x\\}_{l\in\mathbb{Z}}$, which in turn generates a topology on $G$. Let $U=U_{0}$ and let $\ominus$ denote the inverse operation of $\oplus$. The Lebesgue spaces $L^{q}(G),\>1\leq q\leq\infty$, are defined with respect to the Haar measure $\mu$ on Borel subsets of $G$, normalized by $\mu(U)=1$. The group dual to $G$ is denoted by $G^{*}$ and consists of all sequences of the form $\omega=(\omega_{j})=(...,0,0,\omega_{k},\omega_{k+1},\omega_{k+2},...),$ where $\omega_{j}\in\\{0,1,...,p-1\\}$, for $j\in\mathbb{Z}$ and $\omega_{j}=0$, for $j<k=k(\omega)$.
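As a concrete illustration (a small Python sketch, not part of the paper; the finite-dictionary representation of group elements is an assumption made for illustration), an element of $G$ can be modeled as a map from indices $j$ to digits $x_{j}\in\{0,\dots,p-1\}$ with finitely many nonzero entries; the operation $\oplus$ is then carry-free digitwise addition modulo $p$:

```python
# Model elements of the Vilenkin group G as dicts {index j: digit x_j},
# storing only the nonzero digits (all omitted digits are 0).
p = 3  # the associated prime

def oplus(x, y):
    """Group operation: coordinatewise addition modulo p (no carries)."""
    z = {}
    for j in set(x) | set(y):
        d = (x.get(j, 0) + y.get(j, 0)) % p
        if d:
            z[j] = d
    return z

def ominus(x, y):
    """Inverse operation: coordinatewise subtraction modulo p."""
    neg_y = {j: (-d) % p for j, d in y.items()}
    return oplus(x, neg_y)

def in_U_l(x, l):
    """Membership in the neighbourhood U_l = {x : x_j = 0 for j <= l}."""
    return all(j > l for j in x)

x = {1: 2, 3: 1}   # digits x_1 = 2, x_3 = 1
y = {1: 2, 2: 1}   # digits y_1 = 2, y_2 = 1
z = oplus(x, y)
print(z == {1: 1, 2: 1, 3: 1})     # True: 2 + 2 = 4 = 1 mod 3, with no carry
print(ominus(z, y) == x)           # True: oplus has a well-defined inverse
print(in_U_l(x, 0), in_U_l(x, 1))  # True False: x lies in U = U_0 but not in U_1
```

The absence of carries is exactly what distinguishes $\oplus$ from ordinary $p$-adic addition and makes each $U_{l}$ a subgroup.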
The operations of addition and subtraction, the neighbourhoods $\\{U_{l}^{*}\\}$ and the Haar measure $\mu^{*}$ for $G^{*}$ are defined as above for $G$. Each character on $G$ is of the form $\chi(x,\omega)=\exp\bigg{(}\frac{2\pi i}{p}\sum_{j\in\mathbb{Z}}{x_{j}\omega_{1-j}}\bigg{)},\quad x\in G,$ for some $\omega\in G^{*}$. Let $H=\\{(x_{j})\in G\>|\>x_{j}=0,\;\text{ for }j>0\\}$ be a discrete subgroup of $G$ and let $A$ be the automorphism of $G$ defined by $(Ax)_{j}=x_{j+1}$, for $x=(x_{j})\in G$. From the definition of the annihilator and the above form of the characters $\chi$, it follows that the annihilator $H^{\perp}$ of the subgroup $H$ consists of all sequences $(\omega_{j})\in G^{*}$ which satisfy $\omega_{j}=0$ for $j>0$. Let $\lambda:G\longrightarrow\mathbb{R}_{+}$ be defined by $\lambda(x)=\sum_{j\in\mathbb{Z}}{x_{j}p^{-j}},\qquad x=(x_{j})\in G.$ The image of $H$ under $\lambda$ is clearly the set of non-negative integers $\mathbb{Z}_{+}$. For every $\alpha\in\mathbb{Z}_{+}$, let $h_{[\alpha]}$ denote the element of $H$ such that $\lambda(h_{[\alpha]})=\alpha$. For $G^{*}$, the map $\lambda^{*}:G^{*}\longrightarrow\mathbb{R}_{+}$, the automorphism $B\in\text{Aut }G^{*}$, the subgroup $U^{*}$ and the elements $\omega_{[\alpha]}$ of $H^{\perp}$ are defined similarly to $\lambda$, $A$, $U$ and $h_{[\alpha]}$, respectively. The generalised Walsh functions for $G$ are defined by $W_{\alpha}(x)=\chi(x,\omega_{[\alpha]}),\qquad\alpha\in\mathbb{Z}_{+},x\in G.$ These functions form an orthonormal set in $L^{2}(U)$, that is, $\int_{U}{W_{\alpha}(x)\overline{W_{\beta}(x)}d\mu(x)}=\delta_{\alpha,\beta},\qquad\alpha,\beta\in\mathbb{Z}_{+},$ where $\delta_{\alpha,\beta}$ is the Kronecker delta. The system $\\{W_{\alpha}\\}$ is complete in $L^{2}(U)$. The corresponding system for $G^{*}$ is defined by $W_{\alpha}^{*}(\omega)=\chi(h_{[\alpha]},\omega),\qquad\alpha\in\mathbb{Z}_{+},\omega\in G^{*}.$ The system $\\{W_{\alpha}^{*}\\}$ is an orthonormal basis of $L^{2}(U^{*})$.
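The orthonormality of the generalised Walsh functions can be checked numerically (an illustrative Python sketch, not from the paper). For $x\in U$ only the digits $x_{j}$ with $j\geq 1$ are nonzero, and if $\alpha=\sum_{k\geq 0}a_{k}p^{k}$ is the base-$p$ expansion, then $\omega_{[\alpha]}$ carries the digit $a_{k}$ at index $-k$ (an assumption matching the definition of $\lambda^{*}$ above), so $W_{\alpha}(x)=\exp\!\big(\tfrac{2\pi i}{p}\sum_{j\geq 1}x_{j}a_{j-1}\big)$:

```python
import cmath

p = 3  # the associated prime (illustrative choice)
n = 2  # number of digits retained; we sample U at p**n points

def digits(alpha, n, p):
    """Base-p digits a_0, ..., a_{n-1} of a non-negative integer alpha."""
    return [(alpha // p**k) % p for k in range(n)]

def walsh(alpha, x, p):
    """W_alpha at a point of U given by its digits x = (x_1, ..., x_n)."""
    a = digits(alpha, len(x), p)
    s = sum(xj * aj for xj, aj in zip(x, a))  # pairs x_j with a_{j-1}
    return cmath.exp(2j * cmath.pi * s / p)

def inner(alpha, beta, n, p):
    """Discrete version of the integral over U (each point has mass p**-n)."""
    total = 0
    for m in range(p**n):
        x = digits(m, n, p)  # reuse the digit expansion to enumerate U
        total += walsh(alpha, x, p) * walsh(beta, x, p).conjugate()
    return total / p**n

# Orthonormality of W_0, ..., W_{p^n - 1} on U: the Gram matrix is the identity.
gram = [[inner(a, b, n, p) for b in range(p**n)] for a in range(p**n)]
ok = all(abs(gram[a][b] - (a == b)) < 1e-12
         for a in range(p**n) for b in range(p**n))
print(ok)  # True
```

For $p=2$ these are the classical $\pm 1$-valued Walsh functions on $[0,1]$.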
For positive integers $n$ and $\alpha$, set $U_{n,\alpha}=A^{-n}(h_{[\alpha]})\oplus A^{-n}(U).$ ### 1.2 Wavelets on Vilenkin group In [3], Farkov considered the Strang-Fix condition, the partition of unity property and the stability of scaling functions on Vilenkin groups. Necessary and sufficient conditions are given for scaling functions to generate an MRA in the $L^{2}$ space on Vilenkin groups by using a modified Cohen's condition and blocked sets. #### 1.2.1 Refinable function and mask ###### Definition 1.1. Let $L_{c}^{2}(G)$ be the set of all compactly supported functions in $L^{2}(G)$. A function $\phi\in L_{c}^{2}(G)$ is said to be a refinable function if it satisfies an equation of the type $\phi(x)=p\sum_{\alpha=0}^{p^{n}-1}{a_{\alpha}\phi(Ax\ominus h_{[\alpha]})}.$ (1) The above functional equation is called the refinement equation. The generalized Walsh polynomial $m(\omega)=\sum_{\alpha=0}^{p^{n}-1}{a_{\alpha}\overline{W_{\alpha}^{*}{(\omega)}}}$ (2) is called the mask of the refinement equation (or the mask of its solution $\phi$). ###### Theorem 1.2. Let $\phi\in L_{c}^{2}{(G)}$ be a solution of the refinement equation with $\widehat{\phi}(\theta)=1$. Then $\sum_{\alpha=0}^{p^{n}-1}{a_{\alpha}}=1,\qquad\text{supp }\phi\subset U_{1-n},$ and $\widehat{\phi}(\omega)=\prod_{j=1}^{\infty}{m(B^{-j}\omega)}.$ Moreover, the following properties hold: $1$. $\widehat{\phi}(h^{*})=0$, for all $h^{*}\in H^{\perp}\setminus{\\{\theta\\}}$ (the modified Strang-Fix condition), $2$. $\sum_{h\in H}{\phi(x\oplus h)}=1$, for almost every $x\in G$ (the partition of unity property). ###### Definition 1.3.
A set $M\subset U^{*}$ is said to be blocked (for the mask $m$) if it coincides with some union of the sets $U_{n-1,s}^{*}$, $0\leq s\leq p^{n-1}-1$, does not contain the set $U_{n-1,0}^{*}$, and satisfies the condition $T_{p}{M}\subset M\cup\\{\omega\in U^{*}:m(\omega)=0\\},$ where $T_{p}M=\bigcup_{l=0}^{p-1}\\{B^{-l}\omega_{[l]}+B^{-1}(\omega):\omega\in M\\}.$ #### 1.2.2 Multiresolution analysis ###### Definition 1.4. A collection of closed subspaces $V_{j}\subset L^{2}{(G)},j\in\mathbb{Z}$, is called a multiresolution analysis (MRA) in $L^{2}{(G)}$ if the following hold: (i) $V_{j}\subset V_{j+1}$, for all $j\in\mathbb{Z}$ (ii) $\overline{\cup_{j\in\mathbb{Z}}V_{j}}=L^{2}{(G)}$ and $\cap_{j\in\mathbb{Z}}V_{j}=\\{0\\}$ (iii) $f(\cdot)\in V_{j}\Leftrightarrow f(A\cdot)\in V_{j+1}$, for all $j\in\mathbb{Z}$ (iv) $f(\cdot)\in V_{0}\Rightarrow f(\cdot\ominus h)\in V_{0}$, for all $h\in H$ (v) there is a function $\phi\in L^{2}{(G)}$ such that the system $\\{\phi(\cdot\ominus h)|h\in H\\}$ is an orthonormal basis of $V_{0}$. The function $\phi$ in condition (v) is called a scaling function of the MRA $(V_{j})_{j\in\mathbb{Z}}$. For $\phi\in L^{2}{(G)}$, set $\phi_{j,h}(x)=p^{j/2}{\phi(A^{j}{x}\ominus h)},\qquad j\in\mathbb{Z},\;\;h\in H.$ If $\phi$ is a scaling function, the system $\\{\phi_{j,h}:h\in H\\}$ forms an orthonormal basis of $V_{j}$, for every $j\in\mathbb{Z}$. ###### Theorem 1.5. A function $\phi\in L^{2}(G)$ is a scaling function for an MRA of $L^{2}(G)$ if and only if 1. 1. $\sum_{h\in H^{\perp}}|\hat{\phi}(\omega\oplus h)|^{2}=1$, for a.e. $\omega\in G^{*}$ 2. 2. $\lim_{j\rightarrow\infty}|\hat{\phi}(B^{-j}\omega)|=1$, for a.e. $\omega\in G^{*}$ 3. 3. $\hat{\phi}(B\omega)=m(\omega)\hat{\phi}(\omega)$, for a.e. $\omega\in G^{*}$.
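As a worked illustration (standard, though not spelled out in the paper), the Haar scaling function $\phi=\mathbf{1}_{U}$ satisfies the conditions of Theorem 1.5. The following sketch uses the conventions above:

```latex
% Haar scaling function on the Vilenkin group G.
% The sets A^{-1}(U \oplus h_{[\alpha]}), 0 \le \alpha \le p-1, partition U, so
\phi(x) = \mathbf{1}_U(x) = \sum_{\alpha=0}^{p-1} \phi(Ax \ominus h_{[\alpha]}),
\qquad \text{i.e. } a_\alpha = \tfrac{1}{p} \text{ in (1)}.
% Since h_{[\alpha]} has the single nonzero digit \alpha at index 0,
% W^*_\alpha(\omega) = e^{2\pi i \alpha \omega_1 / p}, and the mask (2) is
m(\omega) = \frac{1}{p}\sum_{\alpha=0}^{p-1} e^{-2\pi i \alpha \omega_1 / p}
          = \begin{cases} 1, & \omega_1 = 0,\\ 0, & \text{otherwise.}\end{cases}
% Using (B^{-j}\omega)_1 = \omega_{1-j}, the infinite product of Theorem 1.2 gives
\widehat{\phi}(\omega) = \prod_{j=1}^{\infty} m(B^{-j}\omega)
                       = \mathbf{1}_{U^*}(\omega).
% Conditions (1)-(3) of Theorem 1.5 are now immediate: the H^\perp-translates
% of U^* tile G^*, B^{-j}\omega eventually lies in U^*, and
% \mathbf{1}_{U^*}(B\omega) = m(\omega)\,\mathbf{1}_{U^*}(\omega).
```

For $p=2$ this recovers the classical Haar MRA on the Cantor dyadic group.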
A function $\phi$ is said to generate an MRA in $L^{2}{(G)}$ if the system $\\{\phi(\cdot\ominus h)|h\in H\\}$ is orthonormal in $L^{2}{(G)}$ and the family of subspaces $V_{j}=\text{clos}_{L^{2}{(G)}}\text{span}\\{\phi_{j,h}:h\in H\\},\qquad j\in\mathbb{Z},$ is an MRA in $L^{2}{(G)}$ with scaling function $\phi$. Farkov gave the following condition under which a compactly supported function $\phi\in L^{2}(G)$ generates an MRA in $L^{2}(G)$. ###### Theorem 1.6. Suppose that the refinement equation possesses a solution $\phi$ such that $\widehat{\phi}(\theta)=1$ and the corresponding mask $m$ satisfies the conditions $m(\theta)=1,\qquad\sum_{l=0}^{p-1}|m(\omega\oplus\delta_{l})|^{2}=1,\;\;\omega\in G^{*},$ where $\delta_{l}$ is the sequence $\omega=(\omega_{j})$ such that $\omega_{1}=l$ and $\omega_{j}=0$ for $j\neq 1$. Then the following are equivalent: (a) $\phi$ generates an MRA in $L^{2}(G)$. (b) $m$ satisfies the modified Cohen's condition. (c) $m$ has no blocked sets. Using the above characterization of refinable functions and the matrix extension method, Farkov gave a procedure for the construction of orthonormal wavelets $\psi_{1},...,\psi_{p-1}$ such that the functions $\psi_{l,j,h}(x)=p^{j/2}{\psi_{l}{(A^{j}x\ominus h)}},\qquad 1\leq l\leq p-1,\;j\in\mathbb{Z},\;\;h\in H,$ form an orthonormal basis of $L^{2}{(G)}$. ## 2 Characterization of the pseudo-scaling functions In this section, we introduce the notion of a pseudo-scaling function and its connection with generalized filters. For a multiwavelet $\Psi:=\\{\psi_{1},\psi_{2},...,\psi_{p-1}\\}\subset L^{2}(G)$, the affine system $\mathcal{A}(\Psi)$ is defined by $\mathcal{A}(\Psi)=\\{\psi_{j,h}^{l}(x)|\psi_{j,h}^{l}(x)=p^{j/2}\psi_{l}(A^{j}x\ominus h):j\in\mathbb{Z},h\in H,l=1,2,...,p-1\\}.$ (3) We now define multiwavelet frames and multiwavelet Parseval frames. ###### Definition 2.1.
The affine system $\mathcal{A}(\Psi)\subset L^{2}(G)$ is said to be a multiwavelet frame if the system (3) is a frame for $L^{2}(G)$. ###### Definition 2.2. The affine system $\mathcal{A}(\Psi)\subset L^{2}(G)$ is said to be a multiwavelet Parseval frame if the system (3) is a Parseval frame for $L^{2}(G)$. The following theorem characterizes Parseval frames for the Vilenkin group, which is a particular case of a local field of positive characteristic. ###### Theorem 2.3. Suppose $\Psi=\\{\psi_{1},\psi_{2},...,\psi_{p-1}\\}\subset L^{2}(G)$. Then the affine system $\mathcal{A}(\Psi)$ is a Parseval frame for $L^{2}(G)$ if and only if for a.e. $\omega$, the following hold: * 1. $\sum_{i=1}^{p-1}\sum_{j\in\mathbb{Z}}|\hat{\psi}_{i}(B^{j}\omega)|^{2}=1$ * 2. $\sum_{i=1}^{p-1}\sum_{j=0}^{\infty}\hat{\psi}_{i}(B^{j}\omega)\overline{\hat{\psi}_{i}(B^{j}(\omega+\gamma))}=0$, for $\gamma\in H^{\perp}\setminus BH^{\perp}$. The proof of this theorem can be found in [1]. ###### Definition 2.4. Let $M=\\{m_{0},m_{1},...,m_{p-1}\\}\subset L^{\infty}(U)$ be a set of $H$-periodic functions. $M$ is called a generalized filter if it satisfies the following equations: $\sum_{i=0}^{p-1}|m_{i}(\omega)|^{2}=1,\text{ a.e. }\omega,$ (4) $m_{0}(\omega)\overline{m_{0}(\omega+\beta)}-(\sum_{i=1}^{p-1}m_{i}(\omega)\overline{m_{i}(\omega+\beta)})=0\text{ a.e. }\omega,\text{ where }\beta=B^{-1}\gamma\text{ and }\gamma\text{ is as in Theorem 2.3}.$ (5) The set of generalized filters is denoted by $\tilde{F}$, and we let $\tilde{F}^{+}=\\{M\in\tilde{F}:m_{0}\geq 0,m_{0}\in M\\}$. Notice that for $M\in\tilde{F}$, $M_{|m_{0}|}=\\{|m_{0}|,m_{1},...,m_{p-1}\\}\in\tilde{F}^{+}$. ###### Definition 2.5. A function $\phi\in L^{2}(G)$ is called a pseudo-scaling function if there exists a generalized filter $M=\\{m_{0},m_{1},...,m_{p-1}\\}\in\tilde{F}$ such that $\hat{\phi}(B\omega)=m_{0}(\omega)\hat{\phi}(\omega),\text{ a.e. }\omega.$ (6) We observe that $M$ is not uniquely determined by the pseudo-scaling function $\phi$.
Therefore, we consider the set of all $M\in\tilde{F}$ satisfying (6) for $\phi$, denoted by $\tilde{F}_{\phi}$. In particular, if $\phi=0$, then $\tilde{F}_{\phi}=\tilde{F}$. If $M\in\tilde{F}^{+}$, then $\hat{\phi}_{m_{0}}(\omega)=\prod_{j=1}^{\infty}m_{0}(B^{-j}\omega)\quad\text{ is well defined a.e.},$ since $0\leq m_{0}(\omega)\leq 1$, a.e. $\omega$. Furthermore, we get $\hat{\phi}_{m_{0}}(B\omega)=m_{0}(\omega)\hat{\phi}_{m_{0}}(\omega),\text{ a.e. }\omega.$ (7) ###### Definition 2.6. Suppose that $M=\\{m_{0},m_{1},...,m_{p-1}\\}\in\tilde{F}_{\phi_{m_{0}}}$. Let us define $N_{0}(m_{0})=\\{\omega\in G^{*}:\lim_{j\rightarrow\infty}\hat{\phi}_{m_{0}}(B^{-j}\omega)=0\\}.$ (8) If $|N_{0}(|m_{0}|)|=0$, then $M$ is called a generalized low pass filter. The set of all generalized low pass filters is denoted by $\tilde{F}_{0}$. ###### Lemma 2.7. If $f\in L^{2}(G^{*})$, then $\lim_{j\rightarrow\infty}|f(B^{j}\omega)|=0$, for a.e. $\omega\in G^{*}$. ###### Proof. Let $f\in L^{2}(G^{*})$. By the monotone convergence theorem, $\displaystyle\int_{G^{*}}\sum_{j\in\mathbb{Z}^{+}}|f(B^{j}\omega)|^{2}d\omega$ $\displaystyle=\sum_{j\in\mathbb{Z}^{+}}\int_{G^{*}}|f(B^{j}\omega)|^{2}d\omega$ $\displaystyle=\sum_{j\in\mathbb{Z}^{+}}p^{-j}\int_{G^{*}}|f(\omega)|^{2}d\omega$ $\displaystyle=\frac{1}{p-1}||f||^{2}<\infty.$ This implies that $\sum_{j\in\mathbb{Z}^{+}}|f(B^{j}\omega)|^{2}$ is finite for a.e. $\omega\in G^{*}$. Thus, for a.e. $\omega\in G^{*}$, $\lim_{j\rightarrow\infty}|f(B^{j}\omega)|=0$. ∎ ###### Definition 2.8.
A Parseval frame multiwavelet (PFMW) $\Psi=\\{\psi_{1},\psi_{2},...,\psi_{p-1}\\}$ is called an MRA PFMW if there exist a pseudo-scaling function $\phi$, $M\in\tilde{F}_{\phi}$ and unimodular functions $s_{i}$, $1\leq i\leq p-1$, such that $\hat{\psi}_{i}(B\omega)=W_{\alpha}(\omega)s_{i}(B\omega)\overline{m_{i}(\omega)}\hat{\phi}(\omega),\text{ a.e. }\omega.$ (9) The following theorem gives a characterization of the generalized low pass filter. ###### Theorem 2.9. Suppose $\Psi=\\{\psi_{1},\psi_{2},...,\psi_{p-1}\\}$ is an MRA PFMW and $\phi$ is a pseudo-scaling function satisfying (6). If $M$ is defined by (9), then $M\in\tilde{F}_{0}$. ###### Proof. Since $\Psi$ is an MRA PFMW, by Theorem 2.3, (4), (5) and (9), we have $\displaystyle 1$ $\displaystyle=\sum_{i=1}^{p-1}\sum_{j\in\mathbb{Z}}|\hat{\psi}_{i}(B^{j}\omega)|^{2}$ $\displaystyle=\sum_{i=1}^{p-1}\sum_{j\in\mathbb{Z}}|W_{\alpha}(B^{j-1}\omega)s_{i}(B^{j}\omega)\overline{m_{i}(B^{j-1}\omega)}|^{2}|\hat{\phi}(B^{j-1}\omega)|^{2}$ $\displaystyle=\sum_{i=1}^{p-1}\sum_{j\in\mathbb{Z}}|m_{i}(B^{j-1}\omega)|^{2}|\hat{\phi}(B^{j-1}\omega)|^{2}$ $\displaystyle=\lim_{n\rightarrow\infty}\sum_{j=-n}^{n}(\sum_{i=1}^{p-1}|m_{i}(B^{j-1}\omega)|^{2})|\hat{\phi}(B^{j-1}\omega)|^{2}$ $\displaystyle=\lim_{n\rightarrow\infty}\sum_{j=-n}^{n}(1-|m_{0}(B^{j-1}\omega)|^{2})|\hat{\phi}(B^{j-1}\omega)|^{2}$ $\displaystyle=\lim_{n\rightarrow\infty}\\{|\hat{\phi}(B^{-n-1}\omega)|^{2}-|\hat{\phi}(B^{n}\omega)|^{2}\\}.$ Applying Lemma 2.7 to $\hat{\phi}\in L^{2}(G^{*})$, we get $\lim_{n\rightarrow\infty}|\hat{\phi}(B^{n}\omega)|^{2}=0$ for a.e. $\omega$. Therefore, $\lim_{n\rightarrow\infty}|\hat{\phi}(B^{-n-1}\omega)|^{2}=1$ holds for a.e. $\omega$. From (6), we have $|\hat{\phi}(\omega)|=\prod_{j=1}^{n}|m_{0}(B^{-j}\omega)|\,|\hat{\phi}(B^{-n}\omega)|,\text{ a.e. }\omega.$ Letting $n\rightarrow\infty$ and using (7), we obtain $|\hat{\phi}(\omega)|=\hat{\phi}_{|m_{0}|}(\omega)$, so that $|N_{0}(|m_{0}|)|=0$ is clearly satisfied. Thus, by Definition 2.6, we have $M\in\tilde{F}_{0}$.
∎ ###### Lemma 2.10. Let $\mu$ be an $H$-periodic, unimodular function. Then there exists a unimodular function $v$ satisfying $\mu(\omega)=v(B\omega)\overline{v(\omega)},\quad\text{ a.e. }\omega.$ (10) ###### Theorem 2.11. Let $M\in\tilde{F}_{0}$ be a generalized low pass filter. Then there exist a pseudo-scaling function $\phi$ and an MRA PFMW $\Psi=\\{\psi_{1},\psi_{2},...,\psi_{p-1}\\}$ satisfying (9). ###### Proof. Suppose that $M\in\tilde{F}_{0}$ is a generalized low pass filter. For $m_{0}\in M$, define the signum function $\mu$ by $\mu(\omega)=\left\\{\begin{matrix}\frac{m_{0}(\omega)}{|m_{0}(\omega)|}&m_{0}(\omega)\neq 0;\\\ 1&m_{0}(\omega)=0.\end{matrix}\right.$ Clearly, $\mu$ is a measurable unimodular function, and the following equation holds for a.e. $\omega$: $m_{0}(\omega)=\mu(\omega)|m_{0}(\omega)|.$ By Lemma 2.10, there exists a unimodular measurable function $v$ such that $\mu(\omega)=v(B\omega)\overline{v(\omega)},\quad\text{ a.e. }\omega.$ By (7), we obtain the function $\hat{\phi}_{|m_{0}|}(\omega)$ associated with $|m_{0}|$. Let $\hat{\phi}(\omega)=v(\omega)\hat{\phi}_{|m_{0}|}(\omega);$ (11) then $\phi\in L^{2}(G)$. Using (7), (10), (11) and the definition of the signum function $\mu$, we have $\displaystyle\hat{\phi}(B\omega)$ $\displaystyle=v(B\omega)\hat{\phi}_{|m_{0}|}(B\omega)$ $\displaystyle=v(B\omega)|m_{0}(\omega)|\hat{\phi}_{|m_{0}|}(\omega)$ $\displaystyle=v(B\omega)|m_{0}(\omega)|\overline{v(\omega)}\hat{\phi}(\omega)$ $\displaystyle=\mu(\omega)|m_{0}(\omega)|\hat{\phi}(\omega)$ $\displaystyle=m_{0}(\omega)\hat{\phi}(\omega).$ Thus, $\phi$ is a pseudo-scaling function. For $\Psi=\\{\psi_{1},\psi_{2},...,\psi_{p-1}\\}$, let $\hat{\psi}_{i}(B\omega)=W_{\alpha}(\omega)s_{i}(B\omega)\overline{m_{i}(\omega)}\hat{\phi}(\omega)$, $1\leq i\leq p-1$, a.e. $\omega$, where the $s_{i}$ are unimodular and $m_{i}\in M$. It remains to verify, via Theorem 2.3, that $\Psi$ is an MRA PFMW.
Note that $m_{0}$ is a generalized low pass filter, which implies that $\lim_{j\rightarrow\infty}|\hat{\phi}(B^{-j}\omega)|=1$ for a.e. $\omega$. Using Lemma 2.7, we get $\displaystyle\sum_{i=1}^{p-1}\sum_{j\in\mathbb{Z}}|\hat{\psi}_{i}(B^{j}\omega)|^{2}$ $\displaystyle=\sum_{i=1}^{p-1}\sum_{j\in\mathbb{Z}}|m_{i}(B^{j-1}\omega)|^{2}|\hat{\phi}(B^{j-1}\omega)|^{2}$ $\displaystyle=\lim_{n\rightarrow\infty}\sum_{j=-n}^{n}(\sum_{i=1}^{p-1}|m_{i}(B^{j-1}\omega)|^{2})|\hat{\phi}(B^{j-1}\omega)|^{2}$ $\displaystyle=\lim_{n\rightarrow\infty}\sum_{j=-n}^{n}(1-|m_{0}(B^{j-1}\omega)|^{2})|\hat{\phi}(B^{j-1}\omega)|^{2}$ $\displaystyle=\lim_{n\rightarrow\infty}\\{|\hat{\phi}(B^{-n-1}\omega)|^{2}-|\hat{\phi}(B^{n}\omega)|^{2}\\}.$ Since $\phi\in L^{2}(G)$, Lemma 2.7 implies $\lim_{n\rightarrow\infty}|\hat{\phi}(B^{n}\omega)|^{2}=0$ for a.e. $\omega$. Then for a.e. $\omega$, $\lim_{n\rightarrow\infty}|\hat{\phi}(B^{-n-1}\omega)|^{2}=1$ holds, so the first condition of Theorem 2.3 is satisfied. We now show that $\Psi$ satisfies the second condition of Theorem 2.3. Fix $\omega$ and $q=Bh\oplus\gamma,\text{ where }h\in H^{\perp},\gamma\in H^{\perp}\setminus BH^{\perp}$, and split the sum as $\sum_{i=1}^{p-1}\sum_{j=0}^{\infty}\hat{\psi}_{i}(B^{j}\omega)\overline{\hat{\psi}_{i}(B^{j}(\omega+q))}$ $=\sum_{i=1}^{p-1}\hat{\psi}_{i}(\omega)\overline{\hat{\psi}_{i}(\omega+q)}+\sum_{i=1}^{p-1}\sum_{j=1}^{\infty}\hat{\psi}_{i}(B^{j}\omega)\overline{\hat{\psi}_{i}(B^{j}(\omega+q))}$ (12) Using (5), (6) and Lemma 2.7, the first term on the right-hand side of (12) can be computed as follows.
$\displaystyle\sum_{i=1}^{p-1}\hat{\psi}_{i}(\omega)\overline{\hat{\psi}_{i}(\omega+q)}$ $\displaystyle=\sum_{i=1}^{p-1}W_{\alpha}(B^{-1}\omega)s_{i}(\omega)m_{i}(B^{-1}\omega)\hat{\phi}(B^{-1}\omega)$ $\displaystyle*\overline{W_{\alpha}(B^{-1}(\omega+q))}\,\overline{s_{i}(\omega+q)m_{i}(B^{-1}(\omega+q))\hat{\phi}(B^{-1}(\omega+q))}$ $\displaystyle=\overline{W_{\alpha}(B^{-1}q)}(\sum_{i=1}^{p-1}|s_{i}(\omega)|^{2}m_{i}(B^{-1}\omega)\overline{m_{i}(B^{-1}(\omega+q))})*\hat{\phi}(B^{-1}\omega)\overline{\hat{\phi}(B^{-1}(\omega+q))}$ $\displaystyle=\overline{W_{\alpha}(B^{-1}q)}\,m_{0}(B^{-1}\omega)\overline{m_{0}(B^{-1}(\omega+q))}*\hat{\phi}(B^{-1}\omega)\overline{\hat{\phi}(B^{-1}(\omega+q))}$ $\displaystyle=-\hat{\phi}(\omega)\overline{\hat{\phi}(\omega+q)}.$ By (4), (6), (9) and Lemma 2.7, for the second term on the right-hand side of (12), we have $\displaystyle\sum_{i=1}^{p-1}\sum_{j=1}^{\infty}\hat{\psi}_{i}(B^{j}\omega)\overline{\hat{\psi}_{i}(B^{j}(\omega+q))}$ $\displaystyle=\sum_{i=1}^{p-1}\sum_{j=1}^{\infty}W_{\alpha}(B^{j-1}\omega)s_{i}(B^{j}\omega)m_{i}(B^{j-1}\omega)\hat{\phi}(B^{j-1}\omega)$ $\displaystyle*\overline{W_{\alpha}(B^{j-1}(\omega+q))}\,\overline{s_{i}(B^{j}(\omega+q))m_{i}(B^{j-1}(\omega+q))\hat{\phi}(B^{j-1}(\omega+q))}$ $\displaystyle=\sum_{j=1}^{\infty}\overline{W_{\alpha}(B^{j-1}q)}(\sum_{i=1}^{p-1}|s_{i}(B^{j}\omega)|^{2}m_{i}(B^{j-1}\omega)\overline{m_{i}(B^{j-1}(\omega+q))})$ $\displaystyle*\hat{\phi}(B^{j-1}\omega)\overline{\hat{\phi}(B^{j-1}(\omega+q))}$ $\displaystyle=\sum_{j=1}^{\infty}(1-|m_{0}(B^{j-1}\omega)|^{2})\hat{\phi}(B^{j-1}\omega)\overline{\hat{\phi}(B^{j-1}(\omega+q))}$ $\displaystyle=\sum_{j=1}^{\infty}\\{\hat{\phi}(B^{j-1}\omega)\overline{\hat{\phi}(B^{j-1}(\omega+q))}-\hat{\phi}(B^{j}\omega)\overline{\hat{\phi}(B^{j}(\omega+q))}\\}$ $\displaystyle=\hat{\phi}(\omega)\overline{\hat{\phi}(\omega+q)}-\lim_{n\rightarrow\infty}\hat{\phi}(B^{n}\omega)\overline{\hat{\phi}(B^{n}(\omega+q))}$
$\displaystyle=\hat{\phi}(\omega)\overline{\hat{\phi}(\omega+q)}.$ Adding the two results obtained above, the right-hand side of (12) is equal to 0. Hence, $\Psi$ is a PFMW by Theorem 2.3. ∎ ## References * [1] Behera, B., Jahan, Q.: Multiresolution analysis on local fields and characterization of scaling functions. Adv. Pure Appl. Math. 3, 181–202 (2012) * [2] Christensen, O.: An Introduction to Frames and Riesz Bases. Birkhäuser, Boston (2016) * [3] Farkov, Y.A.: Multiresolution analysis and wavelets on Vilenkin groups. Facta Univ. 21, 309–325 (2008) * [4] Farkov, Y.A., Lebedeva, E., Skopina, M.: Wavelet frames on Vilenkin groups and their approximation properties. Intern. J. Wavelets, Multiresolution Anal. and Information Processing 13, 1550036 (2015) * [5] Farkov, Y.A., Manchanda, P., Siddiqi, A.H.: Construction of Wavelets through Walsh Functions. Springer (2019) * [6] Hernández, E., Šikić, H., Wilson, E., Weiss, G.: On the properties of the integer translates of a square integrable function. Contemporary Mathematics 505, 233–249 (2010) * [7] Hernández, E., Weiss, G.: A First Course on Wavelets. CRC Press (1996)
# $\star$-Cohomology, Connes-Chern Characters, and Anomalies in General Translation-Invariant Noncommutative Yang-Mills Amir Abbass Varshovi ab.varshovi@sci.ui.ac.ir Department of Mathematics, University of Isfahan, Isfahan, IRAN. School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran, IRAN. ###### Abstract Abstract: The topological structure of translation-invariant noncommutative Yang-Mills theories is studied by means of a cohomology theory, the so-called $\star$-cohomology, which plays an intermediate role between de Rham and cyclic (co)homology theory for noncommutative algebras and gives rise to a cohomological formulation comparable to the Seiberg-Witten map. Keywords: Translation-Invariant Star Product, Noncommutative Yang-Mills, Spectral Triple, Chern Character, Connes-Chern Character, Family Index Theory, Topological Anomaly, BRST. ## I Introduction Noncommutative geometry is one of the most prominent topics in theoretical physics. Over the last three decades it has been widely believed that the fundamental forces of nature could be interpreted with more success via the machinery of noncommutative geometry and its different viewpoints maeda ; connes-book2 ; connes-ch ; lizzi . Moyal noncommutative fields, inspired by fascinating formulations of strings, were proposed as a realization of this idea in order to tame the singular behaviors of quantum field theories, especially quantum gravity.111See seiberg-witten for a proper overview of this topic. The appearance of $\mathrm{UV/IR}$ mixing as a pathological feature of Moyal quantum fields led to a concrete generalization of the Moyal product to general translation-invariant noncommutative star products.222See lizzi-galluccio and the references therein for a complete list of references and a short history of this topic.
However, the topology and the geometry of noncommutative field theories with general translation-invariant star products have not been thoroughly studied yet. Actually, on the one hand this is a problem in noncommutative geometry, and on the other hand it is a physical behavior correlated to the topology and the geometry of the underlying spacetime and the corresponding fiber bundles. But, in contrast to commutative algebras, the correlation between the machinery of noncommutative geometry, such as cyclic (co)homology, and the ordinary differential structures of commutative geometry, such as the underlying spacetime, is not clear for noncommutative algebras. The former demands a point-free formulation, while the latter is based on ordinary topology and differentiation. In this article we try to fill this gap by introducing a new cohomology theory, the so-called $\star$-cohomology, which can play an intermediate role between noncommutative and commutative differential geometry for noncommutative algebras. In the following we give a brief introduction to the basic algebraic structures of $\star$-cohomology, and in the next sections we will develop it to study and formulate the topology of general translation-invariant noncommutative Yang-Mills theories. Suppose that $X=Y\times Z$ is a $2n$-dimensional closed spin manifold with $\text{dim}Y=D$ and $Z=\mathbb{T}^{2m}$. Assume that there is a dense contractible open set, say $U\subset X$, which defines a chart with coordinate functions $x^{\mu}$, $\mu=0,\cdots,2n-1$. Let the $x^{\mu}$s split $X$, that is, $x^{\mu}$ belongs to $Y$ for $\mu=0,\cdots,D-1$, and $x^{i}$ is a canonical torus coordinate for $D\leq i\leq 2n-1$. Here $x^{0}=t$ is the time parameter, which varies from $-\infty$ to $+\infty$ on $U$. We also equip $X$ with a metric which is unit diagonal in the coordinates $x^{\mu}$.
Now consider a general translation-invariant noncommutative star product on $Z$, say $\star$, defined for a 2-cocycle $\alpha$ as lizzi-galluccio ; galluccio ; $(f\star g)(x)=\sum_{p,q\in\mathbb{Z}^{2m}}\tilde{f}(p)\tilde{g}(q)~{}e^{\alpha(p+q,p)}~{}e^{i(p+q).x/R}~{},~{}~{}~{}~{}~{}f,g\in C^{\infty}(Z)~{},$ (I.1) for the Fourier transformation $\tilde{f}(p)=\frac{{1}}{{(2\pi R)^{2m}}}\int_{Z}f(x)~{}e_{-p}(x)~{},~{}~{}~{}~{}~{}f(x)=\sum_{p\in\mathbb{Z}^{2m}}\tilde{f}(p)~{}e_{p}(x)~{},$ (I.2) with the Fourier basis $e_{p}(x)=e^{ip.x/R}$, $p\in\mathbb{Z}^{2m}$. The integration in (I.2) is taken with respect to the Riemannian volume form of $Z$. Due to the Hodge decomposition theorem in $\alpha^{*}$-cohomology varshovi1 ; varshovi2 one readily finds $\alpha=\alpha_{M}+\partial\beta$, where $\alpha_{M}$ is a Moyal 2-cocycle providing a Moyal star product $\star_{M}$, and $\beta$ is a 1-cocycle ($\beta(0)=0$ and $\overline{\beta}(p)=\beta(-p)$) with $\partial\beta(p,q)=\beta(p-q)-\beta(p)+\beta(q)$. In fact, $f^{\prime}\star_{M}g^{\prime}=(f\star g)^{\prime}$ where $f^{\prime}(x)=\sum_{p\in\mathbb{Z}^{2m}}\tilde{f}(p)~{}e^{\beta(p)}~{}e_{p}(x)~{},~{}~{}~{}~{}~{}f\in C^{\infty}(Z)~{}.$ (I.3) Assume $H$ is a separable Hilbert space with orthonormal basis $\\{\left|p\right\rangle\\}_{p\in\mathbb{Z}^{2m}}$. Then, it can be checked that $\pi:C^{\infty}(Z)\to\mathcal{L}(H)$ with $\pi(f):=\hat{f}:=\sum_{p\in\mathbb{Z}^{2m}}\tilde{f}^{\prime}(p)~{}\hat{e}_{p}~{},~{}~{}~{}~{}~{}(\hat{e}_{p})_{r,s}=e^{\alpha_{M}(r,s)}~{}\delta_{p,r-s}~{},$ (I.4) gives rise to a representation of $C^{\infty}_{\star}(Z)$ on $H$, i.e. $\hat{f}.\hat{g}=\widehat{f\star g}$, $f,g\in C^{\infty}(Z)$. This representation can be similarly extended to smooth functions on $X$. In this case, for any $f\in C^{\infty}(X)$ the mapped element $\hat{f}$ is an operator-valued function on $Y$. Let $\mathfrak{C}_{0}$ be the algebra generated by $\hat{f}$, $f\in C^{\infty}(X)$.
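As a quick numerical sanity check (an illustrative Python sketch, not part of the paper; the choice of the Moyal 2-cocycle $\alpha_{M}(p,q)=\frac{i\theta}{2}(p_{1}q_{2}-p_{2}q_{1})$ and all function names are assumptions), the product (I.1) can be implemented directly on the Fourier coefficients of trigonometric polynomials on $\mathbb{T}^{2}$, and its basic algebraic properties checked term by term:

```python
import cmath
import random

theta = 0.7  # deformation parameter (illustrative)

def alpha_M(p, q):
    """Moyal 2-cocycle on Z^2 (assumed for illustration)."""
    return 0.5j * theta * (p[0] * q[1] - p[1] * q[0])

def star(f, g):
    """Star product (I.1) on Fourier coefficients, stored as dicts {mode: coeff}."""
    h = {}
    for p, fp in f.items():
        for q, gq in g.items():
            k = (p[0] + q[0], p[1] + q[1])
            h[k] = h.get(k, 0) + fp * gq * cmath.exp(alpha_M(k, p))
    return h

random.seed(0)
modes = [(i, j) for i in range(-2, 3) for j in range(-2, 3)]
rand_poly = lambda: {m: complex(random.gauss(0, 1), random.gauss(0, 1)) for m in modes}
f, g, h = rand_poly(), rand_poly(), rand_poly()

# Associativity: (f*g)*h == f*(g*h), coefficient by coefficient.
lhs, rhs = star(star(f, g), h), star(f, star(g, h))
assoc_err = max(abs(lhs[k] - rhs[k]) for k in lhs)

# Noncommutativity: f*g differs from g*f whenever theta != 0.
fg, gf = star(f, g), star(g, f)
noncomm = max(abs(fg[k] - gf[k]) for k in fg)

# Trace property: alpha_M(0, p) = 0, so the zero mode (the integral, up to
# normalization) of f*g equals that of the ordinary pointwise product.
ordinary_zero_mode = sum(f[p] * g[(-p[0], -p[1])] for p in modes)
trace_err = abs(fg[(0, 0)] - ordinary_zero_mode)

print(assoc_err < 1e-9, noncomm > 1e-6, trace_err < 1e-9)  # True True True
```

Associativity here reflects the 2-cocycle condition on $\alpha$; the last check anticipates the closed-trace property of the integration used in Section II.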
Then $\mathfrak{C}_{0}$ is isomorphic to $C^{\infty}_{\star}(X)$ via $\pi$. This leads to the definition of noncommutative polynomials. If $P(x_{1},\cdots,x_{k})$ is a polynomial of possibly noncommutative variables $x_{1},\cdots,x_{k}$, the noncommutative version of $P$, denoted by $P_{\star}$, is defined as: $P_{\star}(f_{1},\cdots,f_{k})=\pi^{-1}(P(\hat{f}_{1},\cdots,\hat{f}_{k}))~{}.$ (I.5) In fact, $\mathfrak{C}_{0}$ is a unital $*$-algebra with $1=\hat{1}$ and $\hat{f}^{*}=\hat{f}^{{\dagger}}=\hat{\overline{f}}$. The domain of $\pi$ can be simply extended to vector and matrix valued smooth functions on $X$ with $v=(v_{i})\to\hat{v}=(\hat{v}_{i})$ and $g=(g_{ij})\to\hat{g}=(\hat{g}_{ij})$. Moreover, if $\\{t^{a}\\}$ is a basis for a Lie algebra $\mathfrak{g}$, then the $\mathfrak{g}$-valued smooth function $f=f^{a}t^{a}$ is mapped to $\hat{f}=\hat{f}^{a}t^{a}$ via $\pi$. ## II Star product, $\star$-Cohomology and Chern Characters The representation map $\pi$ can be simply defined for differential forms on $X$. Let us denote the image of $k$-forms by $\mathfrak{C}_{k}$ and set $\mathfrak{C}=\oplus_{k=0}^{2n}\mathfrak{C}_{k}$. Actually, $\mathfrak{C}_{k}$ is the space of operator-valued $k$-forms on $Y$ with extra dimensions on $Z$. Moreover, $\mathfrak{C}$ is obviously a graded algebra with the operator and wedge products. We define an exterior derivative operator on $\mathfrak{C}_{0}$ as $d\hat{f}=\widehat{\partial_{\mu}f}~{}dx^{\mu}~{},$ (II.1) and extend it to $\mathfrak{C}$. Then $(\mathfrak{C},d)$ is a differential graded algebra. The corresponding cohomology of $(\mathfrak{C},d)$ is referred to as $\star$-cohomology, and we denote its groups by $H_{\star}^{k}(X,\mathbb{C})$, $0\leq k\leq 2n$. Since $\pi$ is an isomorphism and $C^{\infty}(X)=C^{\infty}_{\star}(X)$ as sets, we readily find that $\pi:\Omega_{k}(X)\to\mathfrak{C}_{k}$ is a bijective map for any $k\geq 0$, where $\Omega_{k}(X)$ is the space of differential $k$-forms on $X$.
Thus any element of $\mathfrak{C}$ is simply denoted with an overall symbol $\hat{~}{}$. Moreover, we readily see that $\pi\circ d_{X}=d\circ\pi$ for $d_{X}$ the exterior differential operator on $X$. Therefore, the following diagram commutes $\begin{array}[]{*{20}{c}}0&\to&\Omega_{0}(X)&\to&\Omega_{1}(X)&\to&\cdots&\to&\Omega_{k}(X)&\to&\Omega_{k+1}(X)&\to&\cdots\\\ {}\hfil&{}\hfil&\pi\downarrow&{}\hfil&\pi\downarrow&{}\hfil&\cdots&{}\hfil&\pi\downarrow&{}\hfil&\pi\downarrow&{}\hfil&\cdots\\\ 0&\to&\mathfrak{C}_{0}&\to&\mathfrak{C}_{1}&\to&\cdots&\to&\mathfrak{C}_{k}&\to&\mathfrak{C}_{k+1}&\to&\cdots\\\ \end{array}~{},$ (II.2) wherein the upper arrows stand for $d_{X}$ and the lower ones indicate $d$. This provides isomorphisms $\pi_{*}:H_{\mathrm{dR}}^{k}(X,\mathbb{C})\to H_{\star}^{k}(X,\mathbb{C})$. We are mostly interested in the integral group $H^{2n}_{\star}(X,\mathbb{Z})$. To define it, an integration structure is put on $\mathfrak{C}$ as $\int\hat{\omega}=\left\\{{\begin{array}[]{*{20}{c}}\int_{X}f~{},~{}~{}~{}~{}~{}\hat{\omega}=\hat{f}~{}d^{2n}x\in\mathfrak{C}_{2n}\\\ ~{}~{}~{}0~{},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\text{otherwise}{}{}{}{}{}\\\ \end{array}}\right.$ (II.3) where the integral is taken with respect to the Riemannian volume form of $X$. It is easily seen that (II.3) defines a graded closed trace over $(\mathfrak{C},d)$, since $\int d\hat{\omega}=0$ and $\int\hat{\omega}_{1}.\hat{\omega}_{2}=(-1)^{|\hat{\omega}_{1}||\hat{\omega}_{2}|}\int\hat{\omega}_{2}.\hat{\omega}_{1}$. The space of classes $[\hat{\omega}]\in H_{\star}^{2n}(X,\mathbb{C})$ with $\int\hat{\omega}\in\mathbb{Z}$ is referred to as the $2n^{th}$ integral $\star$-cohomology group and is denoted by $H^{2n}_{\star}(X,\mathbb{Z})$. It can be seen that $\pi_{*}:H_{\mathrm{dR}}^{2n}(X,\mathbb{Z})\to H_{\star}^{2n}(X,\mathbb{Z})$ is also an isomorphism.
Therefore, due to Chern-Weil theory, $H_{\star}^{2n}(X,\mathbb{Z})$ is generated by the $\pi$ image of the $n^{th}$ Chern character $\mathrm{ch}_{n}(E)$ for vector bundles $E\to X$. The noncommutativity via $\pi$ allows us to define three types of Chern characters. The first type Chern character is simply $\hat{\mathrm{ch}}_{n}(E)=\pi(\mathrm{ch}_{n}(E))$, which as mentioned above represents an integral class in $H_{\star}^{2n}(X,\mathbb{Z})$. Let us denote it by $\hat{\mathrm{ch}}^{(1)}_{n}(E)$ to stress its type. The second type Chern character is333From now on the notation $\mathrm{Tr}$ is used for the trace over vector bundle dimensions or gauge group colors. $\hat{\mathrm{ch}}_{n}^{(2)}(E)=\frac{{1}}{{(4\pi)^{n}n!}}~{}\mathrm{Tr}\left\\{\hat{F}^{n}\right\\}$ (II.4) wherein $\nabla^{2}=-\frac{{i}}{{2}}F$ is the curvature of a connection $\nabla$ on $\mathbb{C}^{k}\to E\to X$ and $\mathrm{Tr}$ is over $\mathbb{C}^{k}$. Hence; $\hat{\mathrm{ch}}_{n}^{(2)}(E)=\frac{{1}}{{(4\pi)^{n}n!}}~{}\epsilon^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}~{}\pi(\mathrm{Tr}\\{F_{\mu_{1}\nu_{1}}\star\cdots\star F_{\mu_{n}\nu_{n}}\\})~{}d^{2n}x~{}.$ It is obvious that if $n=1$ then $\hat{\mathrm{ch}}_{n}^{(2)}(E)$ and $\hat{\mathrm{ch}}_{n}^{(1)}(E)$ coincide, and hence the cohomology class of $\hat{\mathrm{ch}}_{n}^{(2)}(E)$ is independent of the connection $\nabla$. However, this is not the case in general;444Similar results for the Moyal product are worked out in varshovi-dR-moyal . Theorem 1; _Suppose $\star$ is a general translation-invariant noncommutative star product on $X$ with $n\geq 2$. Then, the cohomology class of $\hat{\mathrm{ch}}^{(2)}_{n}(E)$ is independent of the connection $\nabla$ if and only if $\star$ is a Moyal star product. In that case;_ a) _$\hat{\mathrm{ch}}^{(2)}_{n}(E)$ defines an integral cohomology class in $H_{\star}^{2n}(X,\mathbb{Z})$._ b) _$\hat{\mathrm{ch}}^{(2)}_{n}(E)$ and $\hat{\mathrm{ch}}^{(1)}_{n}(E)$ are cohomologous.
That is;_ $\int\hat{\mathrm{ch}}^{(2)}_{n}(E)=\int\hat{\mathrm{ch}}^{(1)}_{n}(E)=\int_{X}\mathrm{ch}_{n}(E)\in\mathbb{Z}~{}.$ (II.5) Hint to the Proof; Actually, according to Chern-Weil theory it is enough to prove equality (II.5). The integral on the far left reduces, according to (I.3), to the following integral; $\int_{X}\epsilon^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}~{}\mathrm{Tr}\left\\{F^{\prime}_{\mu_{1}\nu_{1}}\star_{M}\cdots\star_{M}F^{\prime}_{\mu_{n}\nu_{n}}\right\\}~{},$ where $\star_{M}$ is the Moyal product cohomologous to $\star$ via the Hodge decomposition varshovi1 . It is also equal to $\int_{X}\epsilon^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}~{}\mathrm{Tr}\left\\{F^{\prime}_{\mu_{1}\nu_{1}}\cdots F^{\prime}_{\mu_{n}\nu_{n}}\right\\}~{},$ since the Moyal product can be replaced by the ordinary product when integrating symmetric polynomials varshovi-dR-moyal . This integral is independent of the connection if and only if there exists some $C\in\mathbb{C}$ so that; $\int_{X}\epsilon^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}~{}\mathrm{Tr}\left\\{F^{\prime}_{\mu_{1}\nu_{1}}\cdots F^{\prime}_{\mu_{n}\nu_{n}}\right\\}=C\int_{X}\epsilon^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}~{}\mathrm{Tr}\left\\{F_{\mu_{1}\nu_{1}}\cdots F_{\mu_{n}\nu_{n}}\right\\}~{}.$ But it is easily seen that the above equation holds if and only if the 1-cocycle $\beta$ is linear and $C=1$. That is, $\star=\star_{M}$. The rest of the proof is due to Chern-Weil theory. This finishes the theorem. Q.E.D To define the third type Chern character we represent the vector bundle $\mathbb{C}^{k}\to E\to X$ via $\pi$. Set $\hat{E}=\\{\hat{V};~{}V\in C^{\infty}(E)\\}~{}.$ (II.6) Then, the connection $\nabla=d_{X}+A$, for $A\in\Omega_{1}(X)\otimes\mathbb{M}_{k}(\mathbb{C})$, is mapped to $\hat{\nabla}=d+\hat{A}$ on $\hat{E}$, and its curvature is an element of $\mathfrak{C}_{2}\otimes\mathbb{M}_{k}(\mathbb{C})$, equal to $\hat{\nabla}^{2}=d\hat{A}+\hat{A}^{2}=-\frac{{i}}{{2}}\hat{F}_{\star}$.
The third type Chern character is then defined with $\hat{\nabla}^{2}$ as; $\hat{\mathrm{ch}}_{n}^{(3)}(E)=\frac{{1}}{{(4\pi)^{n}n!}}~{}\mathrm{Tr}\left\\{\hat{F}_{\star}^{n}\right\\}=\frac{{1}}{{(4\pi)^{n}n!}}~{}\epsilon^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}~{}\pi\left(\mathrm{Tr}\left\\{F_{\star\mu_{1}\nu_{1}}\star\cdots\star F_{\star\mu_{n}\nu_{n}}\right\\}\right)~{}d^{2n}x~{}.$ (II.7) We denote $\hat{\mathrm{ch}}^{(3)}_{n}(E)$ by $\check{\mathrm{ch}}_{n}(E)$ for simplicity. The next theorem is central to our formalism. Theorem 2; _For any general translation-invariant noncommutative star product $\star$ on $X$ we have;_ a) _The cohomology class of $\check{\mathrm{ch}}_{n}(E)$ in $H^{2n}_{\star}(X,\mathbb{C})$ is independent of the connection $\nabla$ ($\hat{\nabla}$)._ b) _$\check{\mathrm{ch}}_{n}(E)$ represents an integral cohomology class in $H_{\star}^{2n}(X,\mathbb{Z})$._ c) _$\check{\mathrm{ch}}_{n}(E)$ and $\hat{\mathrm{ch}}^{(1)}_{n}(E)$ are cohomologous. That is;_ $\int\check{\mathrm{ch}}_{n}(E)=\int\hat{\mathrm{ch}}^{(1)}_{n}(E)=\int_{X}\mathrm{ch}_{n}(E)\in\mathbb{Z}~{}.$ (II.8) Hint to the Proof; For a connection $\nabla=d_{X}+A$ we obtain; $\int\check{\mathrm{ch}}_{n}(E,A;\star)=\int\check{\mathrm{ch}}_{n}(E,A^{\prime};\star_{M})=\int_{X}\mathrm{ch}_{n}(E,A^{\prime})=\int_{X}\mathrm{ch}_{n}(E)~{},$ where the first equality is due to (I.3), the second is the replacement of $\star_{M}$ with the ordinary product when integrating symmetric polynomials, and the last equality is an immediate consequence of Chern-Weil theory. This proves (II.8) and thus the theorem follows. Q.E.D ## III Index Theorem in $\star$-Cohomology and Abelian Anomaly Assume a vector bundle $\mathbb{C}^{N}\to E\to X$ with structure group $\mathrm{U}(N)$ for some $N$, in the fundamental representation.
The generators of $\mathfrak{u}(N)$, say $t^{a}$s, are supposed to be Hermitian, with totally anti-symmetric structure constants $if^{abc}$, anti-commutators $\\{t^{a},t^{b}\\}=-c^{abc}t^{c}$ and the normalization condition $\mathrm{Tr}(t^{a}t^{b})=\frac{{1}}{{2}}\delta^{ab}$. We denote the space of $\mathfrak{g}$-valued differential forms on $X$ by $\tilde{\Omega}(X)=\oplus_{k=0}^{2n}~{}\tilde{\Omega}_{k}(X)$, where $\mathfrak{g}=\mathfrak{u}(N)$. Similarly, the $\mathfrak{C}_{0}\otimes\mathfrak{g}$-valued differential forms on $Y$ with extra dimensions on $Z$ are denoted by $\tilde{\mathfrak{C}}$. We have the natural grading $\tilde{\mathfrak{C}}=\oplus_{k=0}^{2n}\tilde{\mathfrak{C}}_{k}$, with $d$ defined accordingly. Thus, $(\tilde{\mathfrak{C}},d)$ is a differential graded algebra on which $\mathrm{Tr}\otimes\int$ defines an integration structure, i.e. a closed graded trace. It is easy to see that $\tilde{\mathfrak{C}}_{0}$ is a unital $*$-algebra with $1=\hat{1}\mathbb{I}$ and $(\hat{f}^{a}t^{a})^{*}=\hat{f^{a}}^{{\dagger}}{t^{a}}^{{\dagger}}=\hat{\overline{f^{a}}}t^{a}$. We refer to it as $\cal{A}$. The involution $*$ extends naturally to $\tilde{\mathfrak{C}}_{k}$ for any $k\geq 0$. Conventionally, $\sigma\in\tilde{\mathfrak{C}}$ is said to be Hermitian if $\sigma^{*}=\sigma$, and anti-Hermitian if $\sigma^{*}=-\sigma$. The set of anti-Hermitian elements of $\mathcal{A}$, denoted by $\tilde{\mathfrak{g}}$, is in fact a Lie algebra. Each element of $\tilde{\mathfrak{g}}$ is known as an infinitesimal gauge transformation. The Lie group generated by exponentials of elements of $\tilde{\mathfrak{g}}$, denoted by $\tilde{G}$, is the gauge transformation group. Accordingly, the space of anti-Hermitian elements in $\tilde{\mathfrak{C}}_{1}$, denoted by $\Gamma$, is called the connection space.
Any element of $\Gamma$, say $\hat{A}=-i\hat{A}^{a}_{\mu}t^{a}~{}dx^{\mu}$, for real functions $A^{a}_{\mu}\in C^{\infty}(X)$, is a connection form on the mapped vector bundle $\hat{E}$ for the connection $\hat{\nabla}=d+\hat{A}$ due to (II.6). The gauge transformation group $\tilde{G}$ (resp. $\tilde{\mathfrak{g}}$) acts on $\Gamma$ from the right: $A\triangleleft g=g^{-1}dg+g^{-1}Ag~{},~{}~{}~{}A\in\Gamma,~{}g\in\tilde{G}~{}~{}(\text{resp.~{}}A\triangleleft\alpha=d\alpha+[A,\alpha]~{},~{}~{}~{}\alpha\in\tilde{\mathfrak{g}}~{})~{}.$ (III.1) The Yang-Mills theory of the vector bundle $SE:=S(X)\otimes E\to X$ is mapped to that of $\widehat{SE}$, which by definition is equipped with the connection $\hat{\nabla}=d-i\hat{A}^{a}_{\mu}t^{a}~{}dx^{\mu}$ and curvature $\hat{\nabla}^{2}=-\frac{{i}}{{2}}\hat{F}_{\star\mu\nu}^{a}t^{a}~{}dx^{\mu}dx^{\nu}$. The Lagrangian and the action of the $\mathrm{U}(N)$-Yang-Mills theory are then given by;555For the $\mathrm{U}(1)$ gauge theory one needs an overall factor of $\frac{{1}}{{2}}$. This is mandatory to compensate the normalization condition $\mathrm{Tr}(t^{a}t^{b})=\frac{{1}}{{2}}\delta^{ab}$ for $N>1$. $L_{\mathrm{Y-M}}(\widehat{SE},\hat{\nabla})=\mathrm{Tr}\left\\{\hat{\nabla}^{2}.*\hat{\nabla}^{2}\right\\}\in\mathfrak{C}_{2n}~{},~{}~{}~{}~{}~{}S_{\mathrm{Y-M}}=\int L_{\mathrm{Y-M}}(\widehat{SE},\hat{\nabla})~{},$ (III.2) for $*$ the Hodge star. The vector bundle $\widehat{SE}$, the space of spinors, is subject to the Dirac operator $\mathcal{D}_{0}$ as $\mathcal{D}_{0}\hat{\psi}=i\gamma^{\mu}\partial_{\mu}\hat{\psi}~{},$ (III.3) on $U$, for Dirac matrices $\gamma^{\mu}$.
The Dirac operator $\mathcal{D}_{0}$ is usually perturbed to $\mathcal{D}_{A}$, for $A=-i\hat{A}^{a}_{\mu}t^{a}~{}dx^{\mu}\in\Gamma$; $\mathcal{D}_{A}\hat{\psi}=i\gamma^{\mu}\partial_{\mu}\hat{\psi}+\gamma^{\mu}t^{a}\hat{A}^{a}_{\mu}\hat{\psi}~{}.$ (III.4) The relevant Lagrangians and actions are respectively $L_{\mathcal{D}_{0}}(\widehat{SE})=\overline{\hat{\psi}}.\mathcal{D}_{0}\hat{\psi}~{}d^{2n}x=\hat{\psi}^{{\dagger}}\gamma^{0}\mathcal{D}_{0}\hat{\psi}~{}d^{2n}x\in\mathfrak{C}_{2n}~{},~{}~{}~{}~{}~{}S_{\mathcal{D}_{0}}=\int L_{\mathcal{D}_{0}}(\widehat{SE})~{},$ (III.5) $L_{\mathrm{int}}(\widehat{SE},\hat{\nabla})=\overline{\hat{\psi}}\gamma^{\mu}\hat{A}^{a}_{\mu}t^{a}\hat{\psi}~{}d^{2n}x\in\mathfrak{C}_{2n}~{},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}S_{\mathrm{int}}=\int L_{\mathrm{int}}(\widehat{SE},\hat{\nabla})~{}.$ (III.6) The total action then defines the well-known noncommutative $\mathrm{U}(N)$-Yang-Mills theory; $S_{\mathrm{total}}=S_{\mathrm{Y-M}}+S_{\mathcal{D}_{0}}+S_{\mathrm{int}}=-\frac{{1}}{{4}}\int_{X}F_{\star\mu\nu}F_{\star}^{\mu\nu}+i\int_{X}\overline{\psi}\gamma^{\mu}\partial_{\mu}\psi+\int_{X}j^{a}_{\star}A^{a}_{\mu}~{},$ (III.7) for $j^{a\mu}_{\star}=\psi_{\beta,j}\star\overline{\psi}_{\alpha,i}~{}\gamma^{\mu}_{ij}~{}t^{a}_{\alpha\beta}$. Replacing $t^{a}$ and $\gamma^{\mu}$ respectively with $\mathbb{I}$ and $\gamma^{\mu}P_{\pm}$, for $P_{\pm}=\frac{{1\pm\gamma}}{{2}}$ and $\gamma=\gamma_{2n+1}:=i^{n-1}\gamma^{0}\cdots\gamma^{2n-1}~{},$ (III.8) leads to the chiral singlet currents $j^{\mu}_{\star\pm}$, which obey the classical equation $\partial_{\mu}j^{\mu}_{\star\pm}+i[A^{a}_{\mu},j^{a\mu}_{\star\pm}]_{\star}=0~{}.$ (III.9) Suppose this equation is anomalously broken at the quantum level, with $\mathcal{A}_{\star\pm}$ appearing on the right-hand side.
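For orientation, in four dimensions ($n=2$) the chirality matrix (III.8) reduces to the familiar objects of 4d Dirac theory, and in the commutative limit the anomalous breaking of the singlet current conservation law is the classic Adler-Bell-Jackiw result; the signs below follow the conventions used here and are otherwise convention dependent.

```latex
% n = 2 (four-dimensional spacetime): (III.8) gives the standard chirality matrix
\gamma \;=\; \gamma_{5} \;=\; i\,\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3},
\qquad P_{\pm} \;=\; \frac{1 \pm \gamma_{5}}{2},
% and in the commutative limit the anomalous breaking of (III.9) takes the
% familiar Adler-Bell-Jackiw form
\partial_{\mu} j^{\mu}_{\pm}
  \;=\; \mp\,\frac{1}{32\pi^{2}}\,
  \epsilon^{\mu\nu\rho\sigma}\,\mathrm{Tr}\big\{ F_{\mu\nu} F_{\rho\sigma} \big\},
% where 1/(32\pi^{2}) is the n = 2 value of (4\pi)^{-n}/n! from (II.7).
```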
Then, since any translation-invariant noncommutative star product $\star$ is cyclic under integration, the corresponding charges $Q_{\star\pm}(t)$ receive variations from $t=-\infty$ to $t=+\infty$ as; $\Delta Q_{\star\pm}=Q_{\star\pm}(+\infty)-Q_{\star\pm}(-\infty)=\int_{-\infty}^{+\infty}\frac{{d}}{{dt}}Q_{\star\pm}(t)=\int_{X}\mathcal{A}_{\star\pm}=\int\hat{\mathcal{A}}_{\star\pm}~{}.$ (III.10) Now let us make an _ansatz_ here. By this ansatz, the so-called _physical consistency_ , we assume that despite the appearance of the consistent anomaly $\mathcal{A}_{\star\pm}$, the charge variation $\Delta Q_{\star\pm}$ respects the theory, so that it is equal to an integer times its unit charge. Actually, the results achieved with various methods of anomaly derivation in Moyal noncommutative gauge theories, such as noncommutative calculus langmann , perturbative loop calculations ardalan ; bonora ; bonora2 , the Seiberg-Witten map brandt and matrix models varshovi-consistent ,666We should emphasize that the results of varshovi-consistent are in fact for general translation-invariant star products. See also varshovi-dR-moyal and the references therein for a more complete list of such papers. confirm the plausibility of the ansatz for the Moyal case. However, the idea of generalizing the results derived for the Moyal product to general translation-invariant star products comes principally from the quantum equivalence theorem introduced in varshovi1 ; varshovi2 , which asserts that the quantum behavior of a noncommutative quantum field theory with an arbitrary translation-invariant star product coincides precisely with that of the Moyal-product theory in the same $\alpha^{*}$-cohomology class. Thus, according to the physical consistency ansatz, we demand that the integrals in (III.10) be integers for any given translation-invariant noncommutative star product $\star$.
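The cyclicity of translation-invariant star products under integration, invoked in the derivation of (III.10), can be made concrete in Fourier space: the Moyal phase cancels on the zero mode, so $\int f\star g=\int fg=\int g\star f$. The following is a hypothetical numerical sketch on a 2-torus (illustrative code, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7  # noncommutativity parameter: theta^{12} = -theta^{21} = theta

# Random Fourier data on a 2-torus: f(x) = sum_p f_p e^{i p.x}, p in Z^2, |p_i| <= 3.
modes = [(p1, p2) for p1 in range(-3, 4) for p2 in range(-3, 4)]
f = {p: complex(rng.normal(), rng.normal()) for p in modes}
g = {p: complex(rng.normal(), rng.normal()) for p in modes}

def moyal(f, g):
    """Moyal product in Fourier space:
    (f * g)_p = sum_{q+r=p} f_q g_r exp(-(i/2) theta (q wedge r))."""
    h = {}
    for q, fq in f.items():
        for r, gr in g.items():
            p = (q[0] + r[0], q[1] + r[1])
            phase = np.exp(-0.5j * theta * (q[0] * r[1] - q[1] * r[0]))
            h[p] = h.get(p, 0.0) + fq * gr * phase
    return h

def integrate(h):
    """Integration over the unit-volume torus picks out the zero Fourier mode."""
    return h.get((0, 0), 0.0)

# The wedge q x r vanishes when r = -q, so the star product integrates
# exactly like the pointwise product: int f*g = int fg = int g*f.
plain = sum(f[p] * g[(-p[0], -p[1])] for p in modes)
assert abs(integrate(moyal(f, g)) - plain) < 1e-9
assert abs(integrate(moyal(f, g)) - integrate(moyal(g, f))) < 1e-9
```

The same zero-mode cancellation is what allows the Moyal product to be traded for the ordinary product under integrals of symmetric polynomials.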
Hence, for a homotopy of 2-cocycles $s\alpha$ with $s\in[0,1]$ and corresponding star products $\star_{s}$, we obtain $\int_{X}\mathcal{A}_{\star_{s}\pm}\in\mathbb{Z}$, which leads to $\frac{{d}}{{ds}}\int_{X}\mathcal{A}_{\star_{s}\pm}=0$ by continuity. Furthermore, for the commutative fields, i.e. $s=0$, we have $\mathcal{A}_{\pm}=\mp\mathrm{ch}_{n}(SE)$. Therefore, $\hat{\mathcal{A}}_{\star\pm}$ and $\mp\hat{\mathrm{ch}}_{n}(SE)$ must be cohomologous in $H^{2n}_{\star}(X,\mathbb{Z})$. On the other hand, the left-hand side of (III.9) transforms covariantly under infinitesimal gauge transformations via (III.1) and $\hat{\psi}\to\alpha\triangleright\hat{\psi}:=-i\hat{\alpha}^{a}.t^{a}\psi$ for $\alpha=-i\hat{\alpha}^{a}~{}t^{a}\in\tilde{\mathfrak{g}}$. Hence, $\hat{\mathcal{A}}_{\star\pm}$ is an equivariant form. That is, $\hat{\omega}=\hat{\mathcal{A}}_{\star\pm}\pm\check{\mathrm{ch}}_{n}(SE)$ is an equivariant form, and thus it vanishes on $U$ by triviality of $E$ over it. Since $\hat{\omega}$ represents the null integral cohomology class in $H^{2n}_{\star}(X,\mathbb{Z})$, we readily conclude $\hat{\omega}=0$. Thus, we have already established the following theorem. Theorem 3; _The chiral Abelian anomaly in noncommutative Yang-Mills theories with a general translation-invariant noncommutative star product $\star$ is given by $\mathcal{A}_{\star\pm}=\mp\check{\mathrm{ch}}_{n\star}(SE)$._ It is well-known that the Dirac operator in the ordinary commutative case, i.e.
$D_{A}=i\gamma^{\mu}\nabla_{\mu}$, $\nabla=d_{X}+A$, via the $\mathbb{Z}_{2}$-grading of $SE\to X$ due to $\gamma$ of (III.8), is given as $\gamma=\left({\begin{array}[]{*{20}{c}}1&0\\\ 0&-1\\\ \end{array}}\right)~{},~{}~{}~{}~{}~{}D_{A}=\left({\begin{array}[]{*{20}{c}}0&D_{A}^{-}\\\ D_{A}^{+}&0\\\ \end{array}}\right)~{},~{}~{}~{}~{}~{}\gamma D_{A}=-D_{A}\gamma~{},~{}~{}~{}~{}~{}{D_{A}^{+}}^{{\dagger}}=D_{A}^{-}~{}.$ (III.11) Following the main approach of the Atiyah-Singer index theorem atiyah-dirac 777See also the approach of getzler . adapted to $\star$-cohomology, and employing Theorem 3, we readily find an index formula for translation-invariant noncommutative anomalies via the machinery of $\star$-cohomology. Denote $\pi(D_{A}^{(\mp)})$ by $\mathcal{D}_{A}^{(\mp)}$. Theorem 4; _For any noncommutative $\mathrm{U}(N)$-Yang-Mills theory with a general translation-invariant noncommutative star product $\star$, the topological index of $\mathcal{D}_{A}^{\mp}$ is given by $\check{\mathrm{ch}}_{n}(SE)$ as;_ $\int_{X}\mathcal{A}_{\star\pm}=\mp\int\check{\mathrm{ch}}_{n}(SE)=\emph{\emph{Index}}(\mathcal{D}_{A}^{\mp})~{}.$ (III.12) The topological index is given for the integral de Rham class in $H_{\mathrm{dR}}^{2n}(X,\mathbb{Z})$ with $\check{\mathrm{ch}}_{n\star}(SE)$ via (I.5); $\emph{\emph{Index}}(\mathcal{D}_{A}^{\mp})=\int_{X}\mathcal{A}_{\star\pm}=\mp\int_{X}\check{\mathrm{ch}}_{n\star}(SE)~{}.$ ## IV Anomalies, $\star$-Cohomology and the Connes-Chern Characters In the previous section we established an intimate correlation between $\star$\- and de Rham cohomology, revealing the topological structure in the background of a translation-invariant noncommutative Yang-Mills theory. In this section we try to demonstrate a similar relation for cyclic (co)homology and the corresponding Connes-Chern character.888See varshovi-dR-moyal for the special case of the Moyal star product.
One should remember that since $\mathcal{A}$ (or $C^{\infty}_{\star}(X)$) is a noncommutative algebra, there is no definite coincidence of de Rham and cyclic (co)homology (in the sense of connes-book via the Hochschild-Kostant-Rosenberg formula999See also connes-differential ; khalkhali .). Therefore, a topological interpretation of noncommutative anomalies needs a bridge between the cyclic (co)homology of $\mathcal{A}$ on the one hand and the de Rham cohomology of the topologically commutative underlying spacetime $X$ on the other hand. In this section we prove that this objective is properly achieved by applying the machinery of $\star$-cohomology. Consider a trivial vector bundle over $X$, say $\mathbb{C}^{N^{\prime}}\to E^{\prime}=\mathbb{C}^{N^{\prime}}\times X\to X$, for some large enough $N^{\prime}$, so that $E\to X$ is embedded in it via the image of an idempotent $e\in\mathbb{M}_{N^{\prime}}(C^{\infty}(X))$, i.e.; $C^{\infty}(E)=\\{\sigma\in C^{\infty}(E^{\prime});~{}e.\sigma=\sigma\\}~{}.$ Then, ${SE}^{\prime}:=S(X)\otimes E^{\prime}\to X$ is subject to the Dirac operator $D=i\gamma^{\mu}\partial_{\mu}$ on $U$. Actually, $\widehat{SE}^{\prime}$ can be completed to a Hilbert space, $\cal H$, with the ordinary inner product $\left\langle{\rho(\psi_{1})}\mathrel{\left|{\vphantom{\rho(\psi_{1})\rho(\psi_{2})}}\right.\kern-1.2pt}{\rho(\psi_{2})}\right\rangle=\int_{X}\overline{\psi}_{1}\psi_{2}$. We see that $\pi$ intertwines the Dirac operators, i.e. $\pi\circ D=\mathcal{D}\circ\pi$ for $\mathcal{D}=i\gamma^{\mu}\partial_{\mu}$, and therefore the spectrum of $\mathcal{D}$ coincides with that of $D$. Hence, the Dirac operator $\mathcal{D}$ is a densely defined unbounded Hermitian operator on $\mathcal{H}$. Thus $(\mathcal{A},\mathcal{H},\mathcal{D})$ is a spectral triple. The action of the unital $*$-algebra $\cal{A}$ on $\cal H$ is also given as $(\hat{\alpha}^{a}t^{a})\triangleright\hat{\psi}=\hat{\alpha}^{a}.\hat{t}^{a}\psi$.
In addition, we have a $\mathbb{Z}_{2}$-grading of $\mathcal{H}$ as $\mathcal{H}=\mathcal{H}^{+}\oplus\mathcal{H}^{-}$ with; $\mathcal{D}=\left({\begin{array}[]{*{20}{c}}0&\mathcal{D}^{-}\\\ \mathcal{D}^{+}&0\\\ \end{array}}\right)~{},~{}~{}~{}~{}~{}\gamma\mathcal{D}=-\mathcal{D}\gamma~{},~{}~{}~{}~{}~{}{\mathcal{D}^{+}}^{{\dagger}}=\mathcal{D}^{-}~{},$ (IV.1) for $\gamma$ of (III.11). It is also clear that $\gamma\alpha=\alpha\gamma$, $\alpha\in\mathcal{A}$. Now we employ the homotopy of the last section for star products $\star_{s}$. Then, it can be seen that for any $\hat{a}\in\mathcal{A}$, the operator $[\mathcal{D},\hat{a}]$ is homotopic to $[D,a]$ and thus is a densely defined operator which can be extended to a bounded operator on $\mathcal{H}$. Also, for any $p>2n$ we have $(1+\mathcal{D}^{2})^{-1}\in\mathcal{L}^{p/2}(\mathcal{H})$ since $\emph{\emph{spec}}(\mathcal{D})=\emph{\emph{spec}}(D)$ gilky . Set $F=\mathcal{D}/|\mathcal{D}|$. Actually, $F$ is a bounded operator with $\gamma F=-F\gamma$ and $F^{2}=1$. Hence, $(\mathcal{H},F,\gamma)$ is an even $p$-summable Fredholm module over $\mathcal{A}$, and an element of the $K$-homology group $K^{0}(\mathcal{A})$. The vector bundle $\widehat{SE}$ is in fact a dual element with respect to $(\mathcal{H},F,\gamma)$ in the $K$-theory group $K_{0}(\mathcal{A})$. Assume an idempotent $e\in\mathbb{M}_{N^{\prime}}(\mathcal{A})$ so that $\widehat{SE}$ embeds in $\widehat{SE}^{\prime}$ as $\widehat{SE}=\\{\hat{\psi}\in\widehat{SE}^{\prime};~{}e\hat{\psi}=\hat{\psi}\\}$. The connection on $\widehat{SE}$ is canonically defined by $\hat{\nabla}\hat{\psi}=e.d\hat{\psi}$.
However, it is seen that if for a local basis over $U$ we define $\hat{\nabla}=d+A=d-i\hat{A}^{a}_{\mu}t^{a}~{}dx^{\mu}$, then $\emph{\emph{Index}}(F^{+}_{e})=\emph{\emph{Index}}(\mathcal{D}_{A}^{+})$, in which $F=\left({\begin{array}[]{*{20}{c}}0&F^{-}\\\ F^{+}&0\\\ \end{array}}\right)~{},$ for the $\mathbb{Z}_{2}$-grading of $\gamma$ and $F^{+}_{e}=eF^{+}e:e\mathcal{H}^{+}\to e\mathcal{H}^{-}$. Hence, according to the noncommutative index theorem and Theorem 4 we find; Theorem 5; _The integral cohomology class of the third type Chern character $\check{\mathrm{ch}}_{n}(SE)$ in $H_{\star}^{2n}(X,\mathbb{Z})$ (and the corresponding topological index of the chiral Abelian anomaly) is given by the pairing of the Connes-Chern characters of the $2n^{th}$ cyclic (co)homology groups due to $K$-theory and $K$-homology. That is;_101010The notation $\mathrm{Trace}$ is used for the trace of operators on the corresponding Hilbert space of the Dirac operator. $\int\check{\mathrm{ch}}_{n}(SE)=\left\langle\emph{\emph{Ch}}^{2n}(\mathcal{H},F,\gamma),\emph{\emph{Ch}}_{2n}[e]\right\rangle=\frac{{(-1)^{n}}}{{2}}\mathrm{Trace}\\{\gamma F[F,e][F,e]\cdots[F,e]\\}~{},$ (IV.2) _for $2n+1$ copies of $e$ and for $\mathrm{Trace}$ the trace of operators._ We recall that $\emph{\emph{Ch}}^{2n}(\mathcal{H},F,\gamma)$ in (IV.2) is the Connes-Chern character of $(\mathcal{H},F,\gamma)\in K^{0}(\mathcal{A})$ as $\emph{\emph{Ch}}^{2n}(\mathcal{H},F,\gamma)(a_{0},a_{1},\cdots,a_{2n})=\left(-1\right)^{n}\left(\frac{{n!}}{{2}}\right)\mathrm{Trace}\left\\{\gamma F[F,a_{0}][F,a_{1}]\cdots[F,a_{2n}]\right\\}~{},$ (IV.3) for $a_{0},a_{1},\cdots,a_{2n}\in\mathcal{A}$, and $\emph{\emph{Ch}}_{2n}[e]=\sum_{k=0}^{n}(-1)^{k}\frac{{(2k)!}}{{k!}}~{}\mathrm{tr}\left\\{\left(e-\frac{{1}}{{2}}\right)\otimes e^{2k\otimes}\right\\}~{},$ (IV.4) wherein $e\in\mathbb{M}_{N^{\prime}}(\mathcal{A})$ represents the $K$-theory class $[e]$ in $K_{0}(\mathcal{A})$ and $\mathrm{tr}$ is the trace on $\mathbb{C}^{N^{\prime}}$.
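For concreteness, evaluating the coefficients of (IV.4) at its lowest orders gives:

```latex
% Expanding (IV.4): the k = 0 and k = 1 terms are
\mathrm{Ch}_{2n}[e]
  \;=\; \mathrm{tr}\Big\{ e - \tfrac{1}{2} \Big\}
  \;-\; 2\,\mathrm{tr}\Big\{ \big(e - \tfrac{1}{2}\big) \otimes e \otimes e \Big\}
  \;+\; \cdots,
% since the coefficient (-1)^{k}\,(2k)!/k! equals 1 for k = 0 and -2 for k = 1.
```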
As we mentioned above, $\star$-cohomology plays an intermediary role between de Rham and cyclic (co)homology for any general translation-invariant noncommutative star product $\star$. In the previous sections we established an intimate correlation between $\star$\- and de Rham cohomology theories. Now, by employing the special abilities of $\star$-cohomology, due to its partly commutative geometric structure, we can also prove a geometric correspondence between $\star$\- and cyclic (co)homology. We emphasize that this relation is implemented in the same setting used to demonstrate the correspondence between de Rham and cyclic (co)homology for commutative algebras. By means of $\star$-cohomology we find that the same arguments hold even for the noncommutative algebra $\mathcal{A}$ (or $\mathfrak{C}_{0}$). This correlation is easy to see within familiar concepts of noncommutative geometry. Actually, the well-known Hochschild-Kostant-Rosenberg map $\alpha:HH_{k}(\mathcal{A})\to\Omega_{k}(X)$, for $HH_{*}(\mathcal{A})$ the Hochschild homology group, leads to an isomorphism, similarly denoted by $\alpha$, between the cyclic homology group $HC_{2n}(\mathcal{A})$ and $\oplus_{j=0}^{n}H_{\star}^{2j}(X,\mathbb{C})$, due to the commutativity $d\circ\pi=\pi\circ d_{X}$. It is not hard to see that $\alpha$ produces the third type Chern character $\check{\mathrm{ch}}_{n}(SE)$ from the Connes-Chern character $\mathrm{Ch}_{2n}[e]\in HC_{2n}(\mathcal{A})$.
In fact, the composition of the isomorphism $\alpha:HC_{2n}(\mathcal{A})\to\oplus_{j=0}^{n}H_{\star}^{2j}(X,\mathbb{C})$ and the canonical projection $\pi_{2n}:\oplus_{j=0}^{n}H^{2j}_{\star}(X,\mathbb{C})\to H^{2n}_{\star}(X,\mathbb{C})$ leads to the following result; $\Xi=\left(\frac{{1}}{{2i\pi}}\right)^{n}\frac{{1}}{{(2n)!}}~{}\pi_{2n}\circ\alpha:HC_{2n}(\mathcal{A})\to H^{2n}_{\star}(X,\mathbb{C})~{},~{}~{}~{}~{}~{}\Xi([\mathrm{Ch}_{2n}[e]])=[\check{\mathrm{ch}}_{n}(SE)]~{},$ (IV.5) for the corresponding cohomology classes $[\mathrm{Ch}_{2n}[e]]$ and $[\check{\mathrm{ch}}_{n}(SE)]$. The homomorphism $\Xi$ can be derived in a more detailed formalism. To see this, we note that for the connection $\hat{\nabla}=e.d=d-i\hat{A}^{a}_{\mu}t^{a}~{}dx^{\mu}$ the curvature $\hat{\nabla}^{2}$ is actually $e.de.de=-\frac{{i}}{{2}}\hat{F}_{\star\mu\nu}~{}dx^{\mu}dx^{\nu}$. However, with $(e.de.de)^{n}=e.(de)^{2n}$ we readily find $\alpha\left(\mathrm{Ch}_{2n}[e]\right)=\left(2i\pi\right)^{n}(2n)!~{}\check{\mathrm{ch}}_{n}(SE)+\Delta$, where $\Delta$ is a direct sum of an exact $2n$-form and closed forms of lower even orders in $\mathfrak{C}$. Now let $\Omega^{*}(\mathcal{A})=\oplus_{k\geq 0}\mathcal{A}^{(k+1)\otimes}$ and consider $\xi:\Omega^{*}(\mathcal{A})\to\mathfrak{C}$, with $\xi=0$ on $\mathcal{A}^{(k+1)\otimes}$ for $k>2n$, and $\xi\left(a_{0}\otimes\cdots\otimes a_{k}\right)=-\left(\frac{{-1}}{{2\pi}}\right)^{n}\frac{{i}}{{2^{n}(2n)!}}~{}{\mathrm{Tr}}\\{\gamma a_{0}[\mathcal{D},a_{1}]\cdots[\mathcal{D},a_{k}]\\}~{}d^{2n}x~{},$ (IV.6) for $k\leq 2n$. Here $\mathrm{Tr}$ is the trace over the Dirac matrices and the colors. Thereby, we readily find that $\xi(\emph{\emph{Ch}}_{2n}[e])=\check{\mathrm{ch}}_{n}(SE)+d\phi$ for some $\phi\in\mathfrak{C}_{2n-1}$. We have the following lemma. Lemma 1; _The linear map $\xi$ of (IV.6) leads to a surjection from the cyclic homology group $HC_{2n}(\mathcal{A})$ to $H^{2n}_{\star}(X,\mathbb{C})$, i.e.
$\xi_{*}:HC_{2n}(\mathcal{A})\to H^{2n}_{\star}(X,\mathbb{C})$ for any general translation-invariant noncommutative star product $\star$. Moreover, $\xi_{*}$ is in fact a redefinition of the Hochschild-Kostant-Rosenberg map due to (IV.5), with $\xi_{*}=\Xi$._ Proof; First note that $[\mathcal{D},a]=i\gamma^{\mu}\partial_{\mu}a$, $a\in\mathcal{A}$. Then, since $\mathrm{Tr}\left(\gamma\gamma^{\mu_{1}}\cdots\gamma^{\mu_{2n}}\right)=(-i)^{n-1}2^{n}~{}\epsilon^{\mu_{1}\cdots\mu_{2n}}~{},~{}~{}~{}~{}~{}\mathrm{Tr}\left(\gamma\gamma^{\mu_{1}}\cdots\gamma^{\mu_{k}}\right)=0~{},~{}~{}~{}~{}~{}k<2n~{},$ it is seen that $\xi$ vanishes on $\mathcal{A}^{(k+1)\otimes}$ for all $k\neq 2n$. On the other hand, $\phi:=\int\circ\xi$ is a closed graded trace on $\Omega(\mathcal{A})$ with support in $\mathcal{A}^{(2n+1)\otimes}$. Thus, it defines a cyclic cocycle, i.e. a class in the cyclic cohomology group $HC^{2n}(\mathcal{A})$. This proves that Connes' $(b,B)$-bicomplex is compatible with the de Rham complex via $\xi$, so that $\xi\circ b=0$ and $\xi\circ B\in d\mathfrak{C}$. Thus $\xi$ reduces to a well-defined map $\xi_{*}:HC_{2n}(\mathcal{A})\to H^{2n}_{\star}(X,\mathbb{C})$. Direct calculation shows that $\xi_{*}=\Xi$. This finishes the lemma. Q.E.D Therefore, we have already established the following theorem. Theorem 6; _For any general translation-invariant noncommutative star product $\star$ on $X$, the integral cohomology class of the third type Chern character $\check{\mathrm{ch}}_{n}(SE)$ in $H_{\star}^{2n}(X,\mathbb{Z})$ is the image of that of $\emph{Ch}_{2n}[e]$ in the cyclic homology group $HC_{2n}(\mathcal{A})$ via $\xi$ due to (IV.6)._ Corollary 1; _The topological index of the chiral Abelian anomaly for any general translation-invariant noncommutative $\mathrm{U}(N)$-Yang-Mills theory is given by the Connes-Chern character of the corresponding vector bundle $\widehat{SE}$.
That is;_ $\text{Index}(\mathcal{D}_{A}^{\mp})=\int\hat{\mathcal{A}}_{\star\pm}=\mp\int\xi\left(\emph{\emph{Ch}}_{2n}[e]\right)~{}.$ ## V Family Index and Homotopy Class of Topological Anomaly Topological or consistent anomalies in noncommutative field theories have been considered by several authors langmann ; bonora ; brandt ; varshovi-consistent ,111111In langmann ; bonora ; brandt the authors considered the Moyal product, while in varshovi-consistent general translation-invariant noncommutative star products were considered via a matrix model. but the topological/geometric meaning of the solutions and of the corresponding formulations remained unclear, especially for the case of general translation-invariant noncommutative star products. Thus, in this section the objective is to study topological anomalies of translation-invariant noncommutative Yang-Mills theories via homotopy classes due to the Bismut-Freed determinant bundle and the family index approach bismut-freed1 ; bismut-freed2 ; varshovi-cyclic-moyal . Let us assume the action of $\tilde{G}$ on $\Gamma$ of (III.1) is free, so that $\Gamma\to\Gamma/\tilde{G}$ provides a principal $\tilde{G}$-bundle. We also suppose that $\tilde{X}:=\Gamma/\tilde{G}$ is a smooth manifold. Remember that for any $A\in\Gamma$, the Dirac operator is perturbed to the unbounded Hermitian operator $\mathcal{D}_{A}:=\mathcal{D}+A$. Thus, $(\mathcal{H},\mathcal{D}_{A},\gamma)$ is also regarded as an even $p$-summable Fredholm module homotopic to $(\mathcal{H},\mathcal{D},\gamma)$. Also $\mathcal{D}_{A}=\left({\begin{array}[]{*{20}{c}}0&\mathcal{D}_{A}^{-}\\\ \mathcal{D}_{A}^{+}&0\\\ \end{array}}\right)~{},$ according to the $\mathbb{Z}_{2}$-grading of $\gamma$. Thus, $\mathrm{Ker}(\mathcal{D}_{A}^{+})\subset\mathcal{H}^{+}$ and $\mathrm{Ker}(\mathcal{D}_{A}^{-})\subset\mathcal{H}^{-}$ are finite dimensional.
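As a reminder of the standard construction underlying what follows (a general fact about Fredholm operators, not specific to this setting), the determinant line of a single Fredholm operator is built from its finite-dimensional kernel and cokernel:

```latex
% Determinant line of a Fredholm operator T:
\mathrm{det}(T) \;=\; \Lambda^{\max}\big(\mathrm{Ker}\,T\big)^{*}
  \,\otimes\, \Lambda^{\max}\big(\mathrm{Coker}\,T\big),
% a one-dimensional complex vector space. Since {D_A^+}^\dagger = D_A^-, the
% cokernel of D_A^+ is identified with Ker(D_A^-), and these lines assemble
% over A in \Gamma into the Bismut-Freed determinant line bundle.
```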
Consider the Bismut-Freed determinant line bundle bismut-freed1 ; bismut-freed2 $\emph{\emph{Det}}:=\emph{\emph{Det}}(\mathcal{H},\mathcal{D},\gamma):=\emph{\emph{det}}(\mathrm{Ker}(\mathcal{D}_{A}^{+}))\otimes\emph{\emph{det}}(\mathrm{Ker}(\mathcal{D}_{A}^{-}))~{},$ (V.1) which has a natural metric and unitary connection, say $\nabla^{\emph{\emph{Det}}}=d_{\Gamma}-2\pi i\Pi(A)$, where $\Pi(A)$ is a one-form on $\Gamma$, and $A\in\Gamma$. Here, $d_{\Gamma}$ is the exterior differential operator on $\Gamma$. Actually, $\Pi(A)$ is a closed form when restricted to orbits of $\tilde{G}$ bismut-freed1 . Therefore, the connection $\nabla^{\emph{\emph{Det}}}$ is flat along these orbits. That is; $\displaystyle 2\pi i~{}\Pi(A)=\delta W(A)~{},~{}~{}~{}~{}~{}\delta\Pi(A)=0~{},$ (V.2) where $\delta$ is the exterior differential operator on $\tilde{G}$, the BRST operator. Due to the Bismut-Freed results, $\Pi$ represents a non-trivial cohomology class in $H^{1}_{\mathrm{dR}}(\tilde{G},\mathbb{C})$, since in fact $W(A)$, the quantum action, is not in general a smooth function on $\Gamma$. The BRST closedness of $\Pi$, the Wess-Zumino consistency condition, is then an immediate consequence of (V.2). Actually, $\mathfrak{G}(A):=2\pi i~{}\Pi(A)$ is the topological anomaly. The flatness of $\nabla^{\emph{\emph{Det}}}$ on orbits of $\tilde{G}$ implies that the parallelism structure on Det provides a covering space for the Lie group $\tilde{G}$. Thus, if $\mathfrak{i}:S^{1}\to\tilde{G}$ is a smooth map, then;121212Actually, for commutative $\mathrm{SU}(N)$-Yang-Mills theories $\mathbb{Q}$ can be replaced with $\mathbb{Z}$. But for the noncommutative case the gauge group $\mathrm{SU}(N)$ has to be replaced with $\mathrm{U}(N)$, since $F_{\star}$ and the Lagrangian contain anti-commutators of the corresponding Lie algebra elements. This leads to a non-trivial fundamental group for the gauge transformation group $\tilde{G}$ hatcher .
Thus, in contrast to ordinary Yang-Mills theories, for those with noncommutative star products we should deal with the rational cohomology $H^{*}_{\mathrm{dR}}(\tilde{G},\mathbb{Q})$ instead of $H_{\mathrm{dR}}^{*}(\tilde{G},\mathbb{Z})$ for topological anomalies. This is in fact due to the $\mathrm{U}(1)$ component in $\mathrm{U}(N)=\mathrm{U}(1)\times\mathrm{SU}(N)$. The only finite covering of $\mathrm{U}(1)$ is $\mathrm{U}(1)$ itself. Therefore, the result of (V.3) belongs to $\frac{{1}}{{l}}\mathbb{Z}$ for an $l$-fold covering. For more details see bertlman ; varshovi-cyclic-moyal ; griffith . $\int_{S^{1}}\mathfrak{i}^{*}(\Pi)\in\mathbb{Q}~{}.$ (V.3) Therefore, we readily find that $\Pi\in H^{1}_{\mathrm{dR}}(\tilde{G},\mathbb{Q})$. In principle, the equality (V.3) should be invariant with respect to the homotopy of 2-cocycles $s\alpha$ for $s\in[0,1]$ and its corresponding translation-invariant noncommutative star products $\star_{s}$, with $\star_{1}=\star$ and $\star_{0}$ the ordinary product. To see this, let $\tilde{G}_{s}$ be the gauge transformation group defined with $\star_{s}$. That is, $\tilde{G}=\tilde{G}_{1}$ and $\tilde{G}_{0}$ is the gauge transformation group of the commutative $\mathrm{U}(N)$-Yang-Mills theory. Then $M=\\{\tilde{G}_{s};~{}s\in I=[0,1]\\}$ is topologically a cylindrical space with ends $\tilde{G}$ and $\tilde{G}_{0}$. Actually, $M$ is topologically equivalent to $I\times\tilde{G}$ and therefore deformation retracts onto both $\tilde{G}$ and $\tilde{G}_{0}$ hatcher ; varshovi-cyclic-moyal . Put a smooth structure on $M$ with $d_{M}=\delta+ds\otimes\partial/\partial s$. Note that here $\delta$ is the exterior derivative operator on $\tilde{G}_{s}$ for any $s\in[0,1]$. Now, $\Pi$ can be regarded as a one-form on $M$. By the homotopy invariance of (V.3) we see that there is some $\Phi\in C^{\infty}(M)$ so that $\frac{{d}}{{ds}}\Pi=\delta\Phi$.
In other words, if $\Pi_{0}$ is the Bismut-Freed connection of the commutative Yang-Mills theory, then; $\Pi-\Pi_{0}=\delta\left(\int_{0}^{1}\Phi~{}ds\right)~{}.$ (V.4) Moreover, $\tilde{G}$ and $\tilde{G}_{0}$ are homotopy equivalent. Let $f:\tilde{G}\to\tilde{G}_{0}$ define this equivalence. Then (V.4) leads to the following theorem. Theorem 7; _Let $\star$ be a general translation-invariant noncommutative star product on $X$. Also suppose that $[\mathfrak{G}_{0}(A)]$ (resp. $[\mathfrak{G}(A)]$) is the cohomology class of the topological anomaly of the commutative (resp. $\star$-noncommutative) $\mathrm{U}(N)$-Yang-Mills theory in $H^{1}_{\mathrm{dR}}(\tilde{G}_{0},\mathbb{C})$ (resp. $H^{1}_{\mathrm{dR}}(\tilde{G},\mathbb{C})$). Then, we have $f^{*}([\mathfrak{G}_{0}(A)])=[\mathfrak{G}(A)]$. Moreover, $\mathfrak{G}_{0}(A)/2\pi i$ (resp. $\mathfrak{G}(A)/2\pi i$) defines a rational cohomology class in $H^{1}_{\mathrm{dR}}(\tilde{G}_{0},\mathbb{Q})$ (resp. $H^{1}_{\mathrm{dR}}(\tilde{G},\mathbb{Q})$)._ We should explain $f$ in more detail. Let $\tilde{\mathfrak{g}}_{s}$ be the Lie algebra of infinitesimal gauge transformations of $\tilde{G}_{s}$. Moreover, let $\pi_{s}$ be the corresponding representation map for the star product $\star_{s}$. Therefore, $\\{\pi_{s}(e_{p})t^{a}\\}$ for $p\in\mathbb{Z}^{2m}$ and $a=0,\cdots,N^{2}-1$ provides a basis for $\tilde{\mathfrak{g}}_{s}$. Set $df:\tilde{\mathfrak{g}}\to\tilde{\mathfrak{g}}_{0}$ with $df(\pi(e_{p})t^{a})=\pi_{0}(e_{p})t^{a}$. Then, it is seen that its integral provides a group isomorphism $f:\tilde{G}\to\tilde{G}_{0}$.131313For more details of this proof see hall . In varshovi-cyclic-moyal we also proved similar results for the special case of the Moyal product. Actually, $f^{*}$ replaces the ordinary product with the translation-invariant noncommutative star product $\star$ within differential forms. This leads us to the following corollary.
Corollary 2; _Assume that the topological anomaly of commutative $\mathrm{U}(N)$-Yang-Mills gauge theory, $\mathfrak{G}_{0}(A)$, is given by integration of a polynomial $P(A,C)$ over the spacetime $X$ as $\mathfrak{G}_{0}(A)=\int_{X}P(A,C)$, for $C$ the ghost field. Then, the topological anomaly of noncommutative $\mathrm{U}(N)$-Yang-Mills theory for any translation-invariant noncommutative star product $\star$ is;_ $\mathfrak{G}(A)=\int_{X}P_{\star}(A,C),$ (V.5) _wherein $P_{\star}$ is the noncommutative polynomial of $P$ due to (I.5)._ Note that according to Theorem 7 we know that $\mathfrak{G}(A)=\int_{X}P_{\star}(A,C)+\delta\Phi(A)$ for a BRST-exact term $\delta\Phi(A)$. Adding $-\Phi(A)$ to the quantum action $W(A)$ as a counterterm, i.e. $W^{\prime}(A)=W(A)-\Phi(A)$, and renormalizing the theory accordingly, we obtain the topological anomaly $\mathfrak{G}(A)$ as described in (V.5). In principle, the topological anomaly of noncommutative $\mathrm{U}(N)$-Yang-Mills gauge theory on a $4$-dimensional spacetime $X$ and for any translation-invariant noncommutative star product $\star$ is given for Weyl fermions as (here the notation $\mathrm{tr}$ is used for the trace on both group colors and the matrix representation due to $\pi$); $\mathfrak{G}(A)=\frac{1}{24\pi^{2}}\int\mathrm{tr}\{C\star d(A\star dA+\frac{1}{2}A^{3})\}.$ (V.6) Up to now we have studied the topological anomaly within homotopy classes in $H^{1}_{\mathrm{dR}}(\tilde{G},\mathbb{Q})$ via the family index theory of the Bismut-Freed determinant line bundle. However, it can also be regarded as a noncommutative geometric problem through the machinery of cyclic (co)homology perrot ; perrot2 ; perrot1 . To see this we note that the inclusion $\check{g}_{A}:\tilde{G}\to\Gamma$ of the orbit which passes through $A\in\Gamma$ is an invertible element of $C^{\infty}(\Gamma)\otimes\mathcal{A}$, and hence defines a class in $K_{1}^{\emph{\emph{alg}}}(C^{\infty}(\Gamma)\otimes\mathcal{A})$.
Its Chern character in odd periodic cyclic homology $HP_{\mathrm{odd}}(C^{\infty}(\tilde{G})\otimes\mathcal{A})$ is in fact an element of the differential graded algebra $(\Lambda^{\prime},\hat{d})$ with $\Lambda^{\prime}:=\oplus_{k\geq 0}\Lambda^{\prime}_{k}(C^{\infty}(\tilde{G})\otimes\mathcal{A})$ and $\hat{d}=\delta+d$. For the Maurer-Cartan form $\omega_{A}=\check{g}_{A}^{-1}\hat{d}\check{g}_{A}$ we read $\emph{\emph{Ch}}^{1}_{*}[\check{g}_{A}]=\sum_{k\geq 0}(-1)^{k}\frac{k!}{(2k+1)!}\omega_{A}^{2k+1}\in HP_{\mathrm{odd}}(C^{\infty}(\tilde{G})\otimes\mathcal{A}),$ (V.7) where $\emph{\emph{Ch}}^{1}_{*}[\check{g}_{A}]$ is the corresponding Connes-Chern character of $\check{g}_{A}$ in odd periodic cyclic homology. An important formula is perrot ; connes-moscovici-cs ; $CS(\mathcal{H},\mathcal{D},\gamma)=\left\langle{\emph{\emph{Ch}}_{*}^{1}[\check{g}_{A}],\emph{\emph{Ch}}_{1}^{*}(\mathcal{H},\mathcal{D},\gamma)}\right\rangle~{},$ (V.8) wherein $\emph{\emph{Ch}}_{1}^{*}(\mathcal{H},\mathcal{D},\gamma)$ is the class of the Connes-Chern character in odd periodic cyclic cohomology $HP^{\mathrm{odd}}(C^{\infty}(\tilde{G})\otimes\mathcal{A})$ via (IV.3) and $CS(\mathcal{H},\mathcal{D},\gamma)$ is the corresponding Chern-Simons form. The pairing (V.8) in fact takes place via the cup product $\cup:H_{k}(\tilde{G},\mathbb{C})\otimes HC^{j}(\mathcal{A})\to HC^{k+j}(C^{\infty}(\tilde{G})\otimes\mathcal{A})~{},$ and hence it represents de Rham cohomology classes in $H_{\mathrm{dR}}^{\mathrm{odd}}(\tilde{G},\mathbb{C})$. The main result is that the component in $H_{\mathrm{dR}}^{1}(\tilde{G},\mathbb{C})$ of the pairing (V.8) coincides with $\Pi(A)$ up to some exact form on $\tilde{G}$.
Therefore, if $i:S^{1}\to\tilde{G}$ is a smooth map, then we find; $\int_{S^{1}}i^{*}(\mathfrak{G}(A))=2\pi i\left\langle{\emph{\emph{Ch}}_{*}^{1}[\check{g}_{A}],[S^{1}]\cup\emph{\emph{Ch}}_{2}^{*}(\mathcal{H},\mathcal{D},\gamma)}\right\rangle\in 2\pi i\,\mathbb{Q}~{},$ (V.9) where $i:S^{1}\to\tilde{G}$ is considered to represent a class in $H_{1}(\tilde{G},\mathbb{C})$, say $[S^{1}]$. Calculating (V.9) is actually accomplished by using the Connes-Moscovici local index formula as a residue of some zeta function. (See perrot for more details.) ## VI Summary and Conclusions In this paper we introduced a cohomology theory, the so-called $\star$-cohomology, with cohomology groups $H_{\star}^{k}(X,\mathbb{C})$ on the spacetime manifold $X$, to describe general translation-invariant noncommutative quantum field theories by means of both commutative and noncommutative geometric structures. In fact, $\star$-cohomology plays an intermediate role between de Rham and cyclic (co)homology theories for noncommutative algebras. It provides a bridge between commutative and noncommutative quantum field theories and is thus comparable to the Seiberg-Witten map. Employing this framework for Chern-Weil theory, we introduced three types of Chern characters, of which the third type, $\check{\mathrm{ch}}_{*}$, is shown to belong to $H_{\star}^{*}(X,\mathbb{Z})$ and is intimately related to the Connes-Chern characters in cyclic (co)homology groups. On the other hand, $\check{\emph{\emph{ch}}}_{*}$ induces integral classes in $H_{\mathrm{dR}}^{*}(X,\mathbb{Z})$ which carry significant information about the topology of translation-invariant noncommutative Yang-Mills theories over $X$. Therefore, the topology of Abelian and topological anomalies of translation-invariant noncommutative Yang-Mills theories was studied thoroughly in relation to the Connes-Chern characters in cyclic (co)homology groups and the machinery of noncommutative geometry.
## VII Acknowledgments The author expresses his gratitude to S. Ziaee, who was the main reason for the appearance of this article. Moreover, the author wishes to dedicate this work to Mohammad Reza Shajarian for all he has done for Iranian art and culture over the last fifty years. Finally, my special thanks and highest regards go to the esteemed referee and the respectable editor of ROMP for all their honest considerations. ## References * (1) Y. Maeda, H. Moriyoshi, M. Kotani, and S. Watamura (Eds.) _Noncommutative Geometry and Physics 4_ , World Scientific, 2017. * (2) A. Connes, and M. Marcolli, _Noncommutative Geometry, Quantum Fields and Motives_ , American Mathematical Society, Hindustan Book Agency, 2007. * (3) A. Connes, and A. H. Chamseddine, _Conceptual Explanation for the Algebra in the Noncommutative Approach to the Standard Model_ , Phys. Rev. Lett. 99: 191601, 2007 [arXiv:0706.3690 [hep-th]]. * (4) F. Lizzi, _Noncommutative Geometry and Particle Physics_ , PoS CORFU 2017, 133, 40 pages, 2018 [arXiv:1805.00411 [hep-th]]. * (5) N. Seiberg, and E. Witten, _String Theory and Noncommutative Geometry_ , JHEP 09, 032, 93 pages, 1999 [arXiv:hep-th/9908142]. * (6) S. Galluccio, F. Lizzi, and P. Vitale, _Translation Invariance, Commutation Relations and Ultraviolet/Infrared Mixing_ , JHEP 0909: 054, 2009 [arXiv:0907.3640 [hep-th]]. * (7) S. Galluccio, _Non-Commutative Field Theory, Translation Invariant Products and Ultraviolet/Infrared Mixing_ , PhD thesis, 2010 [arXiv:1004.4655 [hep-th]]. * (8) A. A. Varshovi, _Groenewold-Moyal Product, $\alpha^{*}$-Cohomology, and Classification of Translation-Invariant Non-Commutative Structures_, J. Math. Phys. 54: 172301, 2013 [arXiv:1210.1004 [math-ph]]. * (9) A. A. Varshovi, _$\alpha^{*}$ -Cohomology and Classification of Translation-Invariant Non-Commutative Quantum Field Theories_, J. Geom. Phys. 83: 53-68, 2014 [arXiv:1210.0695 [math-ph]]. * (10) A. A.
Varshovi, _$\star$ -Cohomology, Third Type Chern Character and Anomalies in General Translation-Invariant Noncommutative Yang-Mills_, preprint, 2019. * (11) E. Langmann, _Descent Equations of Yang-Mills Anomalies in Noncommutative Geometry_ , J. Geom. Phys. 22: 259-279, 1997 [arXiv:hep-th/9508003]. * (12) L. Bonora, M. Schnabl, and A. Tomasiello, _A Note on Consistent Anomalies in Noncommutative YM Theories_ , Phys. Lett. B 485: 311-313, 2000 [arXiv:hep-th/0002210]. * (13) L. Bonora and A. Sorin, _Chiral Anomalies in Noncommutative YM Theories_ , Phys. Lett. B 521: 421-428, 2001 [arXiv:hep-th/0109204]. * (14) F. Ardalan, H. Arfaei, and N. Sadooghi, _On the Anomalies and the Schwinger Terms in Noncommutative Gauge Theories_ , Int. J. Mod. Phys. A 21: 4161-4183, 2006 [arXiv:0507230 [hep-th]]. * (15) F. Brandt, _Seiberg-Witten Maps and Anomalies in Noncommutative Yang-Mills Theories_ , In _Particle Physics and the Universe_ , Springer Proceeding in Physics, 98, Editors: J. Trampetic, and J. Wess, 2005 [arXiv:0403143 [hep-th]]. * (16) A. A. Varshovi, _Consistent Anomalies in Translation-Invariant Noncommutative Gauge Theories_ , J. Math. Phys. 53: 042303, 2012 [arXiv:1102.4059 [hep-th]]. * (17) M. F. Atiyah, I. M. Singer, _Dirac Operators Coupled to Vector Potentials_ , Proc. Nat. Acad. Sci. USA, t. 81: 2597, 1984. * (18) N. Berline, E. Getzler, and M. Vergne, _Heat Kernels and Dirac Operators_ , Springer-Verlag, 1992. * (19) A. Connes, _Noncommutative Geometry_ , Academic Press, Inc. 1994. * (20) A. Connes, _Noncommutative Differential Geometry_ , Publ. Math. IHES 62: 41-144, 1985. * (21) M. Khalkhali, _Basic Noncommutative Geometry_ , European Mathematical Society, 2013. * (22) P. B. Gilkey, _Invariance Theory, The Heat Equation, and the Atiyah-Singer Index Theorem_ , Publish or Perish, 1984. * (23) J. M. Bismut, and D. S. Freed, _The Analysis of Elliptic Families I_ , Commun. Math. Phys. 106: 159-176, 1986. * (24) J. M. Bismut, and D. S. 
Freed, _The Analysis of Elliptic Families II_ , Commun. Math. Phys. 107: 103-163, 1986. * (25) A. A. Varshovi, _$\star$ -Cohomology, Noncommutative Geometry, and the Topology of Anomalies in Noncommutative Yang-Mills_, preprint, 2019. * (26) A. Hatcher, _Algebraic Topology_ , Cambridge University Press, 2002. * (27) P. Griffiths, and J. Morgan, _Rational Homotopy Theory and Differential Forms_ , Birkhauser, 2013. * (28) R. A. Bertlmann, _Anomalies in Quantum Field Theory_ , Oxford University Press, 1996. * (29) B. Hall, _Lie Groups, Lie Algebras, and Representations_ , Graduate Texts in Mathematics, Springer, 2015. * (30) D. Perrot, _BRS Cohomology and the Chern Character in Noncommutative Geometry_ , Lett. Math. Phys. 50: 135-144, 1999 [arXiv:math-ph/9910044]. * (31) D. Perrot, _Anomalies and Noncommutative Index Theory_ , Contemp. Math. 343: 125-160, 2007 [arXiv:0603209 [hep-th]]. * (32) D. Perrot, _Chern character, Hopf Algebras, and BRS Cohomology_ , 2002 [arXiv:math-ph/0210043]. * (33) A. Connes, and H. Moscovici, _Transgression and the Chern Character of Finite Dimensional $K$-Cycles_, Commun. Math. Phys. 155: 103-122, 1993.
# Tverberg’s theorem for cell complexes Sho Hasui Department of Mathematical Sciences, Osaka Prefecture University, Sakai, 599-8531, Japan<EMAIL_ADDRESS>, Daisuke Kishimoto Department of Mathematics, Kyoto University, Kyoto, 606-8502, Japan <EMAIL_ADDRESS>, Masahiro Takeda Department of Mathematics, Kyoto University, Kyoto, 606-8502, Japan<EMAIL_ADDRESS>and M. Tsutaya Faculty of Mathematics, Kyushu University, Fukuoka 819-0395, Japan <EMAIL_ADDRESS> ###### Abstract. The topological Tverberg theorem states that any continuous map of a $(d+1)(r-1)$-simplex into the Euclidean $d$-space maps some points from $r$ pairwise disjoint faces of the simplex to the same point whenever $r$ is a prime power. We substantially generalize this theorem to continuous maps of certain CW complexes, including simplicial $((d+1)(r-1)-1)$-spheres, into the Euclidean $d$-space. We also discuss the atomicity of the Tverberg property. ###### Key words and phrases: topological Tverberg theorem, complementary acyclicity, simplicial sphere, discretized configuration space, homotopy colimit ###### 2010 Mathematics Subject Classification: 52A37, 55R80 ## 1\. Introduction Tverberg’s theorem [21] states that any given $(d+1)(r-1)+1$ points in $\mathbb{R}^{d}$ can be partitioned into $r$ disjoint subsets such that the convex hulls of these subsets have a point in common. This has been of great interest in combinatorics for over 50 years, and a variety of its generalizations have been obtained. See the comprehensive surveys [1, 3, 6] for the history and developments around Tverberg’s theorem. Among others, a topological generalization of Tverberg’s theorem was conjectured by Bárány in 1976, and was affirmatively resolved as follows; the result is now called the topological Tverberg theorem. Let $\Delta^{n}$ denote an $n$-simplex. ###### Theorem 1.1.
If $r$ is a prime power, then for any continuous map $f\colon\Delta^{(d+1)(r-1)}\to\mathbb{R}^{d}$, there are pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{r}$ of $\Delta^{(d+1)(r-1)}$ such that $f(\sigma_{1}),\ldots,f(\sigma_{r})$ have a point in common. The topological Tverberg theorem was proved by Bárány, Shlosman and Szűcs [4] when $r$ is a prime, and by Özaydin [19] and Volovikov [22] when $r$ is a prime power. When $r$ is a prime, the proof employs a generalization of the Borsuk-Ulam theorem, and when $r$ is a prime power, Volovikov [22] employed a cohomological index, which is essentially the same as the ideal-valued cohomological index of Fadell and Husseini [7]. Remarkably, Frick [9] proved that the condition that $r$ is a prime power is necessary. Let us consider a generalization of the topological Tverberg theorem. In [10], Tverberg asked whether or not it is possible to generalize the topological Tverberg theorem to continuous maps from $(d+1)(r-1)$-polytopes into $\mathbb{R}^{d}$. The answer is positive because the boundary of a convex $n$-polytope is a refinement of the boundary of an $n$-simplex as in [11, p. 200], so the result follows from the topological Tverberg theorem. Thus Tverberg’s question does not lead to a proper generalization of the topological Tverberg theorem, and so we further ask: ###### Question 1.2. For which CW complexes can we generalize the topological Tverberg theorem to continuous maps from them into Euclidean spaces? Recently, Bárány, Kalai and Meshulam [2] and Blagojević, Haase and Ziegler [5] constructed affirmative examples of matroid complexes for Question 1.2 in a purely combinatorial way. In this paper, we give a new affirmative class of regular CW complexes from a topological point of view, which will turn out to be substantial. To state the main theorem, we set notation and terminology. Let $X$ be a regular CW complex. A _face_ of $X$ means its closed cell.
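As an aside before the main definitions, the smallest interesting case of the classical statement is $d=2$, $r=2$, where $(d+1)(r-1)+1=4$ and Tverberg's theorem reduces to Radon's theorem. The following Python snippet (an illustration of ours, not part of the paper) verifies one such partition of four points via barycentric coordinates.

```python
# Radon partition for d = 2, r = 2: four points in the plane split into two
# sets whose convex hulls intersect.  Here the partition is {p3} versus
# {p0, p1, p2}: we check that p3 lies in the triangle p0 p1 p2 by solving
# for its barycentric coordinates with Cramer's rule.

def barycentric(p0, p1, p2, q):
    """Return (a, b, c) with a*p0 + b*p1 + c*p2 = q and a + b + c = 1."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)  # 2 * signed area
    b = ((q[0] - x0) * (y2 - y0) - (x2 - x0) * (q[1] - y0)) / det
    c = ((x1 - x0) * (q[1] - y0) - (q[0] - x0) * (y1 - y0)) / det
    return 1.0 - b - c, b, c

p0, p1, p2, p3 = (0.0, 0.0), (4.0, 0.0), (2.0, 3.0), (2.0, 1.0)
a, b, c = barycentric(p0, p1, p2, p3)
# Nonnegative coordinates mean p3 lies in conv{p0, p1, p2}, so the convex
# hulls of the two parts share the point p3.
assert min(a, b, c) >= 0 and abs(a + b + c - 1.0) < 1e-12
print(a, b, c)
```

This is the affine case of Theorem 1.1 for $r=2$, with $f$ the linear map sending the four vertices of $\Delta^{3}$ to these points; the theorem guarantees the same conclusion for arbitrary continuous maps.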
For faces $\sigma_{1},\ldots,\sigma_{k}$ of $X$, let $X(\sigma_{1},\ldots,\sigma_{k})$ denote the subcomplex of $X$ consisting of faces which do not intersect with $\sigma_{1},\ldots,\sigma_{k}$. Recall that a space $Y$ is called _$n$ -acyclic_ if $\widetilde{H}_{*}(Y)=0$ for $*\leq n$. For convenience, a non-empty space will be called $(-1)$-acyclic, so that any $n$-acyclic space for $n\geq 0$ will be assumed non-empty. We define a regular CW complex that we are going to consider in this paper. ###### Definition 1.3. We say that a regular CW complex $X$ is _$k$ -complementary $n$-acyclic_ if $X(\sigma_{1},\ldots,\sigma_{i})$ is $(n-\dim\sigma_{1}-\cdots-\dim\sigma_{i})$-acyclic for any pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{i}$ of $X$ such that $\dim\sigma_{1}+\cdots+\dim\sigma_{i}\leq n+1$ and $0\leq i\leq k$. Now we state the main theorem. ###### Theorem 1.4. If $X$ is an $(r-1)$-complementary $(d(r-1)-1)$-acyclic regular CW complex and $r$ is a prime power, then for any continuous map $f\colon X\to\mathbb{R}^{d}$, there are pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{r}$ of $X$ such that $f(\sigma_{1}),\ldots,f(\sigma_{r})$ have a point in common. Since an $(n+1)$-simplex is $k$-complementary $(n-k)$-acyclic for $1\leq k\leq n+1$, the topological Tverberg theorem is recovered from Theorem 1.4 by taking $n=(d+1)(r-1)-1$ and $k=r-1$. Moreover, we will prove that every simplicial $d$-sphere is $k$-complementary $(d-k)$-acyclic for $1\leq k\leq d+1$ (Proposition 2.3). Then we get: ###### Corollary 1.5. If $S$ is a simplicial $((d+1)(r-1)-1)$-sphere and $r$ is a prime power, then for any continuous map $f\colon S\to\mathbb{R}^{d}$, there are pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{r}$ of $S$ such that $f(\sigma_{1}),\ldots,f(\sigma_{r})$ have a point in common. Let us go back to Tverberg’s question mentioned above. As we saw above, the topological Tverberg theorem is generalized to maps out of polytopal spheres, where a polytopal sphere means the boundary of a convex polytope.
However, a generalization to simplicial spheres has not been proved. Grünbaum and Sreedharan [12] constructed a simplicial 3-sphere which is not polytopal. Moreover, Kalai [15] proved that for $d$ large, “most” simplicial $d$-spheres are not polytopal. Then Corollary 1.5, hence Theorem 1.4, is a substantial generalization of the topological Tverberg theorem. On the other hand, (the failure of) Tverberg’s question motivates us to consider the atomicity of the Tverberg property, that is, the Tverberg property which is not induced from subcomplexes or refinement. In Section 5, we will pose a counting problem of atomic complexes having the Tverberg property, and we will give some computations in special cases. We sketch the outline of the proof of Theorem 1.4. We will apply the standard topological method in combinatorics to prove Theorem 1.4 by using discretized configuration spaces, or deleted products, and the weight of a group action, which is essentially the same as Fadell and Husseini’s ideal-valued cohomological index [7]. Then the proof reduces to computing the acyclicity of discretized configuration spaces. We will give a description of discretized configuration spaces in terms of homotopy colimits (Theorem 4.2), which can be thought of as a combinatorial version of the Fadell-Neuwirth fibration [8] and has other applications such as [16]. This enables us to compute the acyclicity by a Bousfield-Kan type spectral sequence, which actually leads us to complementary acyclicity. ### Acknowledgement The authors were supported by JSPS KAKENHI Grant Numbers JP18K13414 (Hasui), JP17K05248 and JP19K03473 (Kishimoto), JP21J10117 (Takeda), and JP19K14535 (Tsutaya). ## 2\. Simplicial sphere This section proves that every simplicial $d$-sphere is $k$-complementary $(d-k)$-acyclic for $1\leq k\leq d+1$. We will use the nerve theorem, so we set notation and terminology for it. Let $\mathcal{U}$ be an open cover of a topological space $X$.
We say that $\mathcal{U}$ is _good_ if each intersection of finitely many members of $\mathcal{U}$ is either empty or contractible. Let $N(\mathcal{U})$ denote the nerve of $\mathcal{U}$, which is a simplicial complex such that vertices correspond to members of $\mathcal{U}$ and finitely many vertices form a simplex if the intersection of the corresponding members of $\mathcal{U}$ is non-empty. Now we state the nerve theorem. See [13, Corollary 4G.3] for the proof. ###### Lemma 2.1. If $\mathcal{U}$ is a good open cover of a paracompact space $X$, then $X\simeq N(\mathcal{U}).$ Let $K$ be a simplicial complex. For a simplex $\sigma$ of $K$, let $\operatorname{st}(\sigma)$ denote the open star of $\sigma$. Namely, $\operatorname{st}(\sigma)$ is the union of the interiors of all simplices including $\sigma$. Note that every open star is contractible. ###### Lemma 2.2. Let $K$ be a simplicial complex. For vertices $v_{1},\ldots,v_{n}$ of $K$, $\widetilde{H}^{*}(\operatorname{st}(v_{1})\cup\cdots\cup\operatorname{st}(v_{n}))=0\quad(*\geq n-1).$ ###### Proof. Let $U=\operatorname{st}(v_{1})\cup\cdots\cup\operatorname{st}(v_{n})$. For $n=1$, $U=\operatorname{st}(v_{1})$ is contractible, so the statement is obvious. Then we may assume $n\geq 2$, and it is sufficient to show that the homotopy dimension of $U$ is at most $n-2$. We consider an open cover $\mathcal{U}=\\{\operatorname{st}(v_{i})\\}_{i=1}^{n}$ of $U$. As in [18, p. 372], if $\operatorname{st}(v_{i_{1}})\cap\cdots\cap\operatorname{st}(v_{i_{k}})\neq\emptyset$, then vertices $v_{i_{1}},\ldots,v_{i_{k}}$ form a simplex $\sigma$ of $K$ such that $\operatorname{st}(v_{i_{1}})\cap\cdots\cap\operatorname{st}(v_{i_{k}})=\operatorname{st}(\sigma)$, which is contractible. Thus $\mathcal{U}$ is a good open cover of $U$, and by Lemma 2.1, we get $U\simeq N(\mathcal{U}).$ By definition, $N(\mathcal{U})$ is a simplicial complex with $n$ vertices, so $\dim N(\mathcal{U})\leq n-1$. 
If $\dim N(\mathcal{U})=n-1$, $N(\mathcal{U})$ is a simplex, which is contractible. Thus we obtain that the homotopy dimension of $N(\mathcal{U})$ is at most $n-2$, completing the proof. ∎ We are ready to prove: ###### Proposition 2.3. Every simplicial $d$-sphere is $k$-complementary $(d-k)$-acyclic for $1\leq k\leq d+1$. ###### Proof. Let $S$ be a simplicial $d$-sphere, and let $\sigma_{1},\ldots,\sigma_{i}$ be simplices of $S$ such that $\dim\sigma_{1}+\cdots+\dim\sigma_{i}\leq d-k+1$ and $0\leq i\leq k$. Let $v_{1},\ldots,v_{m}$ be the vertices of $\sigma_{1},\ldots,\sigma_{i}$. Then $m\leq i+\dim\sigma_{1}+\cdots+\dim\sigma_{i}$. Since $S$ is a simplicial $d$-sphere, it has at least $d+2$ vertices. On the other hand, we have $m\leq i+\dim\sigma_{1}+\cdots+\dim\sigma_{i}\leq d-k+i+1\leq d+1$. Then $S(\sigma_{1},\ldots,\sigma_{i})$ is non-empty. It remains to compute the homology of $S(\sigma_{1},\ldots,\sigma_{i})$. Let $U=\operatorname{st}(v_{1})\cup\cdots\cup\operatorname{st}(v_{m})$. Then we have $S(\sigma_{1},\ldots,\sigma_{i})=S-U.$ By shrinking $U$ slightly, we get a closed and locally contractible subset $C$ of $U$ such that $C\simeq U$ and $S-C\simeq S-U$. By the Alexander duality, we have $\widetilde{H}_{d-j-1}(S-C)\cong\widetilde{H}^{j}(C).$ By Lemma 2.2, $\widetilde{H}^{j}(C)\cong\widetilde{H}^{j}(U)=0$ for $j\geq m-1$, and so $\widetilde{H}_{*}(S(\sigma_{1},\ldots,\sigma_{i}))=\widetilde{H}_{*}(S-U)\cong\widetilde{H}_{*}(S-C)=0$ for $*\leq d-m$. Thus since $d-k-\dim\sigma_{1}-\cdots-\dim\sigma_{i}\leq d-i-\dim\sigma_{1}-\cdots-\dim\sigma_{i}\leq d-m$, the proof is finished. ∎ It is quite probable that every polyhedral $d$-sphere is $k$-complementary $(d-k)$-acyclic for $1\leq k\leq d+1$. However, the complementary acyclicity of regular CW spheres seems more complicated.
The minimal regular CW decomposition of a $d$-sphere $S^{d}=e_{-}^{0}\cup e_{+}^{0}\cup\cdots\cup e^{d}_{-}\cup e^{d}_{+}$ for $d\geq 1$ is not 1-complementary 0-acyclic because $S^{d}$ minus a 1-face is empty. On the other hand, there are non-polyhedral $d$-spheres which are $k$-complementary $(d-k)$-acyclic for $1\leq k\leq d+1$. For example, the regular CW 2-sphere depicted below is $k$-complementary $(2-k)$-acyclic for $1\leq k\leq 3$. It would be interesting to find a condition for a regular CW $d$-sphere being $k$-complementary $(d-k)$-acyclic for $1\leq k\leq d+1$. Figure 1. ## 3\. Group action This section connects Theorem 1.4 to equivariant topology by the standard topological method in combinatorics. Let $X$ be a regular CW complex. The _discretized configuration space_ $\mathtt{Conf}_{r}(X)$ is defined as the subcomplex of the direct product $X^{r}$ consisting of faces $\sigma_{1}\times\cdots\times\sigma_{r}$ such that $\sigma_{1},\ldots,\sigma_{r}$ are pairwise disjoint faces of $X$. In combinatorics, the discretized configuration space is often called the deleted product. Let $\Delta=\\{(x_{1},\ldots,x_{r})\in(\mathbb{R}^{d})^{r}\mid x_{1}=\cdots=x_{r}\\}$. There is a homotopy equivalence (3.1) $(\mathbb{R}^{d})^{r}-\Delta\simeq S^{d(r-1)-1}.$ Note that the symmetric group $\Sigma_{r}$ acts on $\mathtt{Conf}_{r}(X)$ and $(\mathbb{R}^{d})^{r}-\Delta$ by permuting entries. The following lemma is proved in [6, Theorem 3.9]. ###### Lemma 3.1. Let $X$ be a regular CW complex. If there is a continuous map $f\colon X\to\mathbb{R}^{d}$ such that $f(\sigma_{1})\cap\cdots\cap f(\sigma_{r})=\emptyset$ for all pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{r}$ of $X$, then there is a $\Sigma_{r}$-map $\mathtt{Conf}_{r}(X)\to(\mathbb{R}^{d})^{r}-\Delta.$ To apply Lemma 3.1, we will use the following invariant. Let $G=(\mathbb{Z}/p)^{k}$, and let $X$ be a $G$-space.
The invariant $\operatorname{wgt}_{G}(X)$ is defined to be the greatest integer $n$ such that the natural map $H^{n}(BG;\mathbb{Z}/p)\to H^{n}(EG\times_{G}X;\mathbb{Z}/p)$ is injective. We call $\operatorname{wgt}_{G}(X)$ the _weight_ from the view of the category weight [20], instead of the overused “index” as in [22]. It is easy to see that $\operatorname{wgt}_{G}(X)$ is essentially the same as the ideal-valued cohomological index for $EG\times_{G}X$ due to Fadell and Husseini [7]. We will use the following properties of weights, which were also used in [6, 22, 23]. ###### Lemma 3.2. For $G=(\mathbb{Z}/p)^{k}$, the following statements hold. 1. (1) If there is a $G$-map $X\to Y$ between $G$-spaces $X,Y$, then $\operatorname{wgt}_{G}(X)\leq\operatorname{wgt}_{G}(Y).$ 2. (2) If a $G$-space $X$ is $n$-acyclic, then $\operatorname{wgt}_{G}(X)\geq n+1.$ 3. (3) If a paracompact space $S$ satisfies $H^{*}(S)\cong H^{*}(S^{n})$ and $G$ acts on $S$ without fixed points, then $\operatorname{wgt}_{G}(S)=n.$ ###### Proof. (1) This is obvious from the definition of weights. (2) Consider the mod $p$ cohomology Serre spectral sequence for a fibration $X\to EG\times_{G}X\to BG$. Then since $X$ is $n$-acyclic, $E_{2}^{0,q}=0$ for $1\leq q\leq n$. Thus the map $H^{*}(BG;\mathbb{Z}/p)\to H^{*}(EG\times_{G}X;\mathbb{Z}/p)$ must be injective for $*\leq n+1$, implying $\operatorname{wgt}_{G}(X)\geq n+1$. (3) By (2), we only need to prove $\operatorname{wgt}_{G}(S)\leq n$. Since the $G$-action on $S$ is fixed point free, the map $H^{*}(BG;\mathbb{Z}/p)\to H^{*}(EG\times_{G}S;\mathbb{Z}/p)$ is not injective by [14, Corollary 1, Chapter IV]. Consider the mod $p$ cohomology Serre spectral sequence for a fibration $S\to EG\times_{G}S\to BG$. Since the group of automorphisms of $\mathbb{Z}/p$ is isomorphic to $\mathbb{Z}/(p-1)$ and there is no non-trivial homomorphism $G\to\mathbb{Z}/(p-1)$, the local coefficients in the spectral sequence are trivial.
Then since the map $H^{*}(BG;\mathbb{Z}/p)\to H^{*}(EG\times_{G}S;\mathbb{Z}/p)$ is not injective, the transgression $E_{n+1}^{0,n}\to E_{n+1}^{n+1,0}$ must be non-trivial. Thus we obtain that the map $H^{n+1}(BG;\mathbb{Z}/p)\to H^{n+1}(EG\times_{G}S;\mathbb{Z}/p)$ is not injective, implying $\operatorname{wgt}_{G}(S)\leq n$. Thus the proof is complete. ∎ Now we apply weights to Lemma 3.1. Let $r=p^{k}$ and $G=(\mathbb{Z}/p)^{k}$. The map $G\times G\to G,\quad(x,y)\mapsto x+y$ defines a faithful $G$-action on $G$ itself, so that we get a monomorphism $G\to\Sigma_{r}$. Then we can consider the induced $G$-action on a $\Sigma_{r}$-space through this monomorphism. We are ready to prove: ###### Proposition 3.3. Let $X$ be a regular CW complex such that $\mathtt{Conf}_{r}(X)$ is $(d(r-1)-1)$-acyclic. If $r$ is a prime power, then for any continuous map $f\colon X\to\mathbb{R}^{d}$, there are pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{r}$ of $X$ such that $f(\sigma_{1}),\ldots,f(\sigma_{r})$ have a point in common. ###### Proof. Let $r=p^{k}$ and $G=(\mathbb{Z}/p)^{k}$. Suppose that all pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{r}$ of $X$ satisfy $f(\sigma_{1})\cap\cdots\cap f(\sigma_{r})=\emptyset$. Then by Lemma 3.1, there is a $G$-map $\mathtt{Conf}_{r}(X)\to(\mathbb{R}^{d})^{r}-\Delta$, and so by Lemma 3.2, we get $d(r-1)\leq\operatorname{wgt}_{G}(\mathtt{Conf}_{r}(X))\leq\operatorname{wgt}_{G}((\mathbb{R}^{d})^{r}-\Delta).$ On the other hand, by (3.1) and Lemma 3.2, $\operatorname{wgt}_{G}((\mathbb{R}^{d})^{r}-\Delta)\leq d(r-1)-1.$ Thus we obtain a contradiction, finishing the proof. ∎ ## 4\. Homotopy colimit This section describes a discretized configuration space in terms of a homotopy colimit, and proves Theorem 1.4. We recall from [24] the definition of a homotopy colimit of a functor from a poset. Let $P$ be a poset.
Hereafter, we understand $P$ as a category such that objects are elements of $P$ and there is a unique morphism $x\to y$ for $x>y\in P$. For $x\in P$, let $P_{\leq x}=\\{y\in P\mid y\leq x\\}$. The order complex $\Delta(P)$ is the geometric realization of an abstract simplicial complex whose simplices are finite chains $x_{0}<x_{1}<\cdots<x_{n}$ in $P$. Let $F\colon P\to\mathbf{Top}$ be a functor. We define two maps $f,g\colon\coprod_{x<y\in P}\Delta(P_{\leq x})\times F(y)\to\coprod_{x\in P}\Delta(P_{\leq x})\times F(x)$ by $f=\coprod_{x<y\in P}1_{\Delta(P_{\leq x})}\times F(y>x)\quad\text{and}\quad g=\coprod_{x<y\in P}\iota_{x,y}\times 1_{F(y)},$ where $\iota_{x,y}\colon\Delta(P_{\leq x})\to\Delta(P_{\leq y})$ denotes the inclusion for $x<y$. The homotopy colimit $\operatorname{hocolim}F$ is defined to be the coequalizer of $f$ and $g$. By definition, there is a natural projection (4.1) $\pi\colon\operatorname{hocolim}F\to\Delta(P)$ We recall a property of regular CW complexes that we are going to use. For a CW complex $X$, let $P(X)$ denote its face poset. The following lemma is proved in [17, Theorem 1.6, Chapter III]. ###### Lemma 4.1. Let $X$ be a regular CW complex. Then there is a homeomorphism $\Delta(P(X))\xrightarrow{\cong}X$ which restricts to a homeomorphism $\Delta(P(X)_{\leq\sigma})\xrightarrow{\cong}\sigma$ for each face $\sigma$. Now we describe $\mathtt{Conf}_{r}(X)$ in terms of a homotopy colimit. Similarly to the Fadell-Neuwirth fibration [8], we consider the first projection $\pi\colon\mathtt{Conf}_{r}(X)\to X$. Then for each face $\sigma$ of $X$, we have $\pi^{-1}(\mathrm{Int}(\sigma))=\mathtt{Conf}_{r-1}(X(\sigma)).$ Thus since $X(\sigma)\subset X(\tau)$ for $\sigma>\tau$, $\mathtt{Conf}_{r}(X)$ is obtained by gluing $\sigma\times\mathtt{Conf}_{r-1}(X(\sigma))$ along the inclusions $\sigma\times\mathtt{Conf}_{r-1}(X(\sigma))\leftarrow\tau\times\mathtt{Conf}_{r-1}(X(\sigma))\to\tau\times\mathtt{Conf}_{r-1}(X(\tau))$ for $\sigma>\tau$. 
In other words, $\mathtt{Conf}_{r}(X)$ is homeomorphic to the coequalizer of two maps $f,g\colon\coprod_{\tau<\sigma\in P(X)}\tau\times\mathtt{Conf}_{r-1}(X(\sigma))\to\coprod_{\tau\in P(X)}\tau\times\mathtt{Conf}_{r-1}(X(\tau))$ defined by $f=\coprod_{\tau<\sigma\in P(X)}1_{\tau}\times\theta_{\sigma,\tau}\quad\text{and}\quad g=\coprod_{\tau<\sigma\in P(X)}\iota_{\tau,\sigma}\times 1_{\mathtt{Conf}_{r-1}(X(\sigma))},$ where $\theta_{\sigma,\tau}\colon\mathtt{Conf}_{r-1}(X(\sigma))\to\mathtt{Conf}_{r-1}(X(\tau))$ and $\iota_{\tau,\sigma}\colon\tau\to\sigma$ are inclusions for $\sigma>\tau$. Now we define a functor $F_{r}\colon P(X)\to\mathbf{Top}$ by $F_{r}(\sigma)=\mathtt{Conf}_{r-1}(X(\sigma))\quad\text{and}\quad F_{r}(\sigma>\tau)=\theta_{\sigma,\tau}.$ By Lemma 4.1, there is a natural homeomorphism $\Delta(P(X)_{\leq\sigma})\cong\sigma$ for each face $\sigma$ of $X$. Then by the above observation, we get: ###### Theorem 4.2. There is a homeomorphism $\mathtt{Conf}_{r}(X)\cong\operatorname{hocolim}F_{r}.$ We construct a spectral sequence which computes the homology of a homotopy colimit and is essentially the same as the Bousfield-Kan spectral sequence. Let $P$ be a poset. By definition, we have (4.2) $\Delta(P)=\bigcup_{x\in P}\Delta(P_{\leq x})$ so that there is a filtration of $P$ (4.3) $P_{0}\subset\cdots\subset P_{n}\subset P_{n+1}\subset\cdots,$ where $P_{n}$ is the union of all $\Delta(P_{\leq x})$ for $\dim\Delta(P_{\leq x})\leq n$. Let $F\colon P\to\mathbf{Top}$ be a functor. Then by applying the projection (4.1), we get a filtration of $\operatorname{hocolim}F$ $\pi^{-1}(P_{0})\subset\cdots\subset\pi^{-1}(P_{n})\subset\pi^{-1}(P_{n+1})\subset\cdots.$ Suppose that $P=P(X)$ for a regular CW complex $X$. We describe the $E^{1}$-term of the spectral sequence associated with the above filtration. By Lemma 4.1, (4.2) is identified with the cell structure of $X$, and so the filtration (4.3) is identified with the skeletal filtration of $X$.
Then we get $E^{1}_{p,q}=H_{p+q}(\pi^{-1}(P(X)_{p}),\pi^{-1}(P(X)_{p-1}))\cong\bigoplus_{\begin{subarray}{c}\sigma\in P(X)\\\ \dim\sigma=p\end{subarray}}H_{q}(F(\sigma)).$ Summarizing, we obtain: ###### Proposition 4.3. Let $X$ be a regular CW complex, and let $F\colon P(X)\to\mathbf{Top}$ be a functor. Then there is a spectral sequence $E^{1}_{p,q}\cong\bigoplus_{\begin{subarray}{c}\sigma\in P(X)\\\ \dim\sigma=p\end{subarray}}H_{q}(F(\sigma))\quad\Longrightarrow\quad H_{p+q}(\operatorname{hocolim}F).$ We compute the acyclicity of $\mathtt{Conf}_{r}(X)$ by using the above spectral sequence. ###### Lemma 4.4. Let $X$ be a regular CW complex, and let $F\colon P(X)\to\mathbf{Top}$ be a functor such that $F(\sigma)$ is $(n-\dim\sigma)$-acyclic for each $\sigma\in P(X)$ with $\dim\sigma\leq n+1$. Then there is an isomorphism $H_{*}(\operatorname{hocolim}F)\cong H_{*}(X)$ for $*\leq n$. ###### Proof. We consider the spectral sequence of Proposition 4.3. Then by the assumption on the functor $F$, $E^{1}_{p,q}\cong\begin{cases}C_{p}&q=0\\\ 0&q>0\end{cases}\quad\text{and}\quad E^{1}_{n+1,0}\cong\bigoplus_{k}C_{n+1}$ for $p+q\leq n$, where $k\geq 1$ and $C_{*}$ denotes the cellular chain complex of $\Delta(P(X))$ associated with the cell decomposition (4.2). Then by the construction of the spectral sequence, $d^{1}\colon E^{1}_{p,0}\to E^{1}_{p-1,0}$ is identified with the boundary map $\partial\colon C_{p}\to C_{p-1}$ for $p\leq n$ and the sum of copies of the boundary map $\bigoplus_{k}\partial\colon\bigoplus_{k}C_{n+1}\to C_{n}$ for $p=n+1$. Clearly, we have $\mathrm{Im}\left\\{\bigoplus_{k}\partial\colon\bigoplus_{k}C_{n+1}\to C_{n}\right\\}=\mathrm{Im}\\{\partial\colon C_{n+1}\to C_{n}\\}.$ Then we get $E^{2}_{p,q}\cong\begin{cases}H_{p}(\Delta(P(X)))&q=0\\\ 0&q>0\end{cases}$ for $p+q\leq n$. Thus we obtain $E^{2}_{p,q}\cong E^{\infty}_{p,q}$ for $p+q\leq n$, and the extension of $E^{\infty}_{p,q}$ to $H_{*}(\operatorname{hocolim}F)$ is trivial for $p+q\leq n$.
Therefore by Lemma 4.1, the proof is complete. ∎ ###### Proposition 4.5. If $X$ is an $(r-1)$-complementary $n$-acyclic regular CW complex, then $\mathtt{Conf}_{r}(X)$ is $n$-acyclic. ###### Proof. We induct on $r$. For $r=1$, $\mathtt{Conf}_{1}(X)=X$ is $n$-acyclic by assumption. Suppose that $\mathtt{Conf}_{r-1}(Y)$ is $n$-acyclic for any $(r-2)$-complementary $n$-acyclic regular CW complex $Y$. Since $X$ is $(r-1)$-complementary $n$-acyclic, $X(\sigma)$ is $(r-2)$-complementary $(n-\dim\sigma)$-acyclic for $\dim\sigma\leq n+1$. Then $F_{r}(\sigma)=\mathtt{Conf}_{r-1}(X(\sigma))$ is $(n-\dim\sigma)$-acyclic for $\dim\sigma\leq n+1$, and so by Theorem 4.2 and Lemma 4.4, we obtain $H_{*}(\mathtt{Conf}_{r}(X))\cong H_{*}(X)$ for $*\leq n$. Thus since $X$ is $n$-acyclic, $\mathtt{Conf}_{r}(X)$ is $n$-acyclic, completing the induction. ∎ Now we are ready to prove Theorem 1.4. ###### Proof of Theorem 1.4. By Proposition 4.5, $\mathtt{Conf}_{r}(X)$ is $(d(r-1)-1)$-acyclic. Then by Proposition 3.3, the proof is finished. ∎ ## 5\. Atomicity Theorem 1.4 shows that the Tverberg property is possessed not only by a simplex but also by a variety of CW complexes. But the Tverberg property of some CW complexes can be deduced from that of other complexes. For example, as mentioned in Section 1, the Tverberg property of a polytopal sphere is deduced from that of a simplex. This section studies CW complexes whose Tverberg property is not induced from that of other CW complexes. We say that a regular CW complex $X$ is _$(d,r)$ -Tverberg_ if for any continuous map $f\colon X\to\mathbb{R}^{d}$, there are pairwise disjoint faces $\sigma_{1},\ldots,\sigma_{r}$ of $X$ such that $f(\sigma_{1}),\ldots,f(\sigma_{r})$ have a point in common. For example, by Theorem 1.4, $(r-1)$-complementary $(d(r-1)-1)$-acyclic regular CW complexes are $(d,r)$-Tverberg. Let $X$ be a $(d,r)$-Tverberg regular CW complex. 
Observe that a regular CW complex $Y$ is $(d,r)$-Tverberg if either of the following conditions is satisfied: 1. (1) $X$ is a subcomplex of $Y$; 2. (2) $Y$ is a refinement of $X$, that is, $X\cong Y$ and each face of $X$ is the union of faces of $Y$. This observation leads us to: ###### Definition 5.1. A $(d,r)$-Tverberg regular CW complex is called _atomic_ if it neither includes a proper subcomplex which is $(d,r)$-Tverberg nor is a proper refinement of another $(d,r)$-Tverberg complex. Here is a fundamental problem on $(d,r)$-Tverberg complexes. ###### Problem 5.2. Given $d,r$ and $n$, are there only finitely many atomic $(d,r)$-Tverberg finite complexes of dimension $n$? First, we consider 1-dimensional $(1,2)$-Tverberg finite complexes. Let $C_{n}$ denote the cycle graph with $n$ vertices for $n\geq 3$. Then by Corollary 1.5, $C_{n}$ is $(1,2)$-Tverberg. Let $Y$ be the $Y$-shaped graph, that is, three edges joined at a single center vertex. Then by the intermediate value theorem, we can see that $Y$ is $(1,2)$-Tverberg too. ###### Proposition 5.3. The only atomic 1-dimensional $(1,2)$-Tverberg finite complexes are $C_{3}$ and $Y$. ###### Proof. Clearly, no proper subcomplex of $C_{3}$ or $Y$ is $(1,2)$-Tverberg, and $C_{3}$ and $Y$ are not refinements of other regular CW complexes. Then $C_{3}$ and $Y$ are atomic $(1,2)$-Tverberg finite complexes of dimension 1. Let $X$ be a $(1,2)$-Tverberg finite complex of dimension 1. Suppose $X$ is not a forest. Then it includes $C_{n}$ for some $n\geq 2$. If $X$ includes $C_{n}$ for some $n\geq 3$, then it is $(1,2)$-Tverberg. If $X$ includes only $C_{2}$, then $X$ is a disjoint union of finitely many path graphs with multiple edges. So $X$ is not $(1,2)$-Tverberg. Suppose $X$ is a forest. If $X$ does not include $Y$, then it is a disjoint union of finitely many path graphs, and so it is not $(1,2)$-Tverberg. Thus $X$ must include $Y$, completing the proof. ∎ ###### Remark 5.4. If we remove the center vertex of $Y$, then it becomes disconnected. 
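As a combinatorial sanity check related to the graph $Y$, one can enumerate the cells of its two-point configuration space directly. This is a sketch under the assumption that the cells of $\mathtt{Conf}_{2}(Y)$ are pairs of disjoint closed cells of $Y$, taken here unordered; the enumeration confirms a single 6-cycle, i.e. a hexagon:

```python
from itertools import combinations
from collections import Counter

# Cells of the Y-graph: center vertex "c" joined by an edge to each leaf.
vertices = ["a", "b", "c", "d"]
edges = [("c", "a"), ("c", "b"), ("c", "d")]

# 0-cells of the configuration space: unordered pairs of distinct vertices.
conf_vertices = [frozenset(pair) for pair in combinations(vertices, 2)]

# 1-cells: an edge paired with a vertex not lying on it (disjoint closed cells).
conf_edges = [(frozenset(e), v) for e in edges for v in vertices if v not in e]

# There are no 2-cells: every pair of distinct edges of Y shares the center c.
assert all(set(e1) & set(e2) for e1, e2 in combinations(edges, 2))

# A 1-cell (e, v) has boundary 0-cells {w, v} for the two endpoints w of e.
deg = Counter()
adj = {cv: set() for cv in conf_vertices}
for e, v in conf_edges:
    ends = [frozenset([w, v]) for w in e]
    deg[ends[0]] += 1
    deg[ends[1]] += 1
    adj[ends[0]].add(ends[1])
    adj[ends[1]].add(ends[0])

# Connectivity check: a connected graph with all degrees 2 is a single cycle.
seen, stack = {conf_vertices[0]}, [conf_vertices[0]]
while stack:
    for y in adj[stack.pop()]:
        if y not in seen:
            seen.add(y)
            stack.append(y)

is_hexagon = (len(conf_vertices) == 6 and len(conf_edges) == 6
              and all(deg[cv] == 2 for cv in conf_vertices)
              and len(seen) == len(conf_vertices))
print(len(conf_vertices), len(conf_edges), is_hexagon)
```

The count (6 vertices, 6 edges, all degrees 2, connected) identifies the complex as a 6-cycle.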
Hence $Y$ is not 1-complementary 0-acyclic, so that we cannot apply Theorem 1.4 for $d=1$ and $r=2$ to deduce that $Y$ is $(1,2)$-Tverberg. However, we can directly see $\operatorname{wgt}_{\mathbb{Z}/2}(\mathtt{Conf}_{2}(Y))=1$, implying that $Y$ is $(1,2)$-Tverberg, because $\mathtt{Conf}_{2}(Y)$ is a hexagon, so Lemma 3.2 applies. Next, we consider $(2,2)$-Tverberg polyhedral 2-spheres. Let $\partial\Delta^{n}$ denote the boundary of an $n$-simplex. ###### Proposition 5.5. The only atomic $(2,2)$-Tverberg polyhedral 2-sphere is $\partial\Delta^{3}$. ###### Proof. By Steinitz’s theorem, every polyhedral 2-sphere is polytopal, and as in [11, p. 200], every polytopal sphere is a refinement of the boundary of a simplex. On the other hand, $\partial\Delta^{3}$ is $(2,2)$-Tverberg by the topological Tverberg theorem, and no proper subcomplex of $\partial\Delta^{3}$ is $(2,2)$-Tverberg, completing the proof. ∎ The 2-sphere in Figure 1 is an atomic $(2,2)$-Tverberg complex, and there may be other atomic $(2,2)$-Tverberg 2-spheres which are not polyhedral. We therefore pose a problem much weaker than Problem 5.2 but still interesting. ###### Problem 5.6. Are there only finitely many atomic $(2,2)$-Tverberg 2-spheres? ## References * [1] I. Bárány, P.V.M. Blagojević, and G.M. Ziegler, Tverberg’s theorem at 50: extensions and counterexamples, Notices Amer. Math. Soc. 73 (2016), no. 3, 732-739. * [2] I. Bárány, G. Kalai, and R. Meshulam, A Tverberg type theorem for matroids, A journey through discrete mathematics, pp. 115-121, Springer, Cham, 2017. * [3] I. Bárány and P. Soberón, Tverberg’s theorem is 50 years old: A survey, Bull. Amer. Math. Soc. 55 (2018), no. 4, 459-492. * [4] I. Bárány, S. B. Shlosman, and A. Szűcs, On a topological generalization of a theorem of Tverberg, J. London Math. Soc. (2) 23 (1981), no. 1, 158-164. * [5] P.V.M. Blagojević, A. Haase, and G.M. 
Ziegler, Tverberg-type theorems for matroids: a counterexample and a proof, Combinatorica 39 (2019), no. 3, 477-500. * [6] P.V.M. Blagojević and G.M. Ziegler, Beyond the Borsuk-Ulam theorem: The topological Tverberg story, A journey through discrete mathematics, pp. 273-341, Springer, Cham, 2017. * [7] E. Fadell and S. Husseini, An ideal-valued cohomological index theory with applications to Borsuk-Ulam and Bourgin-Yang theorems, Ergodic Theory Dynam. Systems, 8 (1988), 73-85. * [8] E. Fadell and L. Neuwirth, Configuration spaces, Math. Scand. 10 (1962), no. 4, 111-118. * [9] F. Frick, Counterexamples to the topological Tverberg conjecture, Oberwolfach Reports 12 (2015), no. 1, 318-321. * [10] P.M. Gruber and R. Schneider, Problems in geometric convexity, Contributions to geometry (Proc. Geom. Sympos., Siegen, 1978), pp. 255-278, Birkhäuser, Basel-Boston, Mass., 1979. * [11] B. Grünbaum, Convex Polytopes, Graduate Texts in Mathematics 221, Springer-Verlag, New York, second edition, 2003. * [12] B. Grünbaum and V.P. Sreedharan, An enumeration of simplicial 4-polytopes with 8 vertices, J. Comb. Theory 2 (1967), 437-465. * [13] A. Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, 2002. * [14] W.Y. Hsiang, Cohomology Theory of Topological Transformation Groups, Ergebnisse der Mathematik und ihrer Grenzgebiete 85, Springer-Verlag, New York, 1975. * [15] G. Kalai, Many triangulated spheres, Discrete Comput. Geom. 3 (1988), 1–14. * [16] D. Kishimoto and T. Matsushita, Van Kampen-Flores theorem for cell complexes, https://arxiv.org/abs/2109.09919. * [17] A.T. Lundell and S. Weingram, The Topology of CW Complexes, van Nostrand, New York, 1969. * [18] J.R. Munkres, Elements of Algebraic Topology, Addison-Wesley, 1984. * [19] M. Özaydin, Equivariant maps for the symmetric group, Unpublished preprint, University of Wisconsin-Madison, 1987. * [20] Y.B. Rudyak, On category weight and its applications, Topology 38 (1999), no. 1, 37-55. * [21] H. 
Tverberg, A generalization of Radon’s theorem. II, Bull. Austral. Math. Soc. 24 (1981), no. 3, 321-325. * [22] A.Y. Volovikov, On a topological generalization of Tverberg’s theorem, Mat. Zametki 59 (1996), no. 3, 454-456, (English transl.) Math. Notes 59 (1996), no. 3-4, 324-325. * [23] A.Y. Volovikov, On the van Kampen-Flores theorem, Math. Notes 59 (1996), no. 5, 477-481. * [24] G. Ziegler and R. Živaljević, Homotopy types of subspace arrangements via diagrams of spaces, Math. Ann. 295 (1993), 527-548.
# Post-Newtonian properties of EMRI with Power Law Potential Chinmay N. Gandevikar, BITS Pilani K.K. Birla Goa Campus, Sancoale, Goa 403726, India; Divyesh N. Solanki, Sardar Vallabhbhai National Institute of Technology, Surat GUJ 395007, India; Dipanjan Dey, International Center for Cosmology, Charusat University, Gujarat 388421, India ###### Abstract There are many astrophysical scenarios where extreme mass ratio inspiral (EMRI) binaries can be surrounded by a matter distribution. The distribution of mass can affect the dynamical properties (e.g. orbital frequency, average energy radiation rate, etc.) of the EMRI. Within such a matter distribution, instead of the Kepler-Newton potential, one may consider a more general form of potential, namely the power law potential. Moreover, with the power law potential, the velocity profile of test particles does not fall off as steeply as predicted by the Kepler-Newton potential, and this feature of the velocity profile may be observationally important. In this study, we obtain the first post-Newtonian (1PN) expressions for the dynamical quantities and the average energy radiation rate from a circular orbit EMRI which is surrounded by a matter distribution. We show that the energy radiation rate and orbital frequency of the EMRI can be significantly different in the presence of the power law potential as compared to the Kepler-Newton potential, signatures of which may be observed in gravitational waves from EMRIs. Keywords: Power law potential, Gravitational energy radiation, Post-Newtonian approximation, EMRI. ## I Introduction The dynamics of bodies around the central object of a galaxy has been of great interest for the last four decades. According to the Kepler-Newton potential, the velocities of the stars about a central supermassive object in a galaxy should drop drastically with increasing radial distance. However, observations suggest otherwise. 
The familiar Kepler-Newton expression for the gravitational potential ($\frac{Gm}{r}$, also denoted the KN potential) can be obtained from the effective potential of the Schwarzschild spacetime at a large distance from the central body. However, the Schwarzschild metric is a vacuum solution of Einstein’s field equations, while at the galactic scale the system is non-vacuum due to dark matter and baryonic matter. Because of this limitation, the Schwarzschild metric or the KN potential cannot theoretically explain galactic dynamics. Observations from the late 1970s and early 1980s, extensively reviewed by Sofue and Rubin in 2001 [1], confirmed that most of the stars in a galaxy rotate at nearly the same velocities. This means that the velocities of these stars are independent of their distance from the central mass. Investigation of this novel observation has been an active topic for several decades. There have been multiple proposals, such as modified Newtonian dynamics (MOND) [2], the existence of dark matter, modified gravity, etc. [3, 4, 5]. In 1933, Zwicky [6] suggested “missing mass” to account for the orbital velocities of galaxies in clusters. Later, V. Rubin and collaborators studied the possibility of dark matter around galaxies [7, 8]. In [9, 10], the authors study the mass density distribution in galaxies and compare it with observations. The density profile of dark matter in a galaxy, famously known as the Navarro-Frenk-White (NFW) profile [11], accounts for the shape of the observed rotation curves. Dey et al. have worked on developing a general relativistic approach to understanding galactic dynamics. Bertrand spacetimes are a class of non-vacuum spacetimes which can be thought of as seeded by dark matter [14, 15, 12, 13]. In the Newtonian limit, these spacetimes can effectively give the NFW dark matter density profile and hence can explain the observed behaviour of rotational velocity curves. 
In 1983, Milgrom suggested modified Newtonian dynamics (MOND) as an alternative way to explain the curves. According to MOND, Newton’s laws of gravity get modified at large distance scales. The theoretical predictions of MOND satisfactorily fit the observed velocity profiles of stars for a large number of galaxies [16]. Later, relativistic studies of MOND were carried out by Bekenstein [17], Moffat [19, 18] and Brownstein [20]. A logarithmic potential or an appropriate power law potential can lead to rotational velocities (matching the observations) higher than those in the case of the KN potential. The velocity curve obtained from the logarithmic potential is completely flat. Recently, Munera and Delgado-Correal [21] have derived the logarithmic potential as a solution of a non-homogeneous Laplace equation. Power law forces are discussed in classical textbooks such as Goldstein [22] and Danby [23]. The power law potentials (also denoted as PL potentials) have been extensively studied in the non-relativistic case; in [24, 26, 25], the nature of orbits in the presence of a power law potential is analyzed in detail. The KN potential can be regarded as an approximate potential derived from the Schwarzschild solution of the Einstein field equations, which is only valid in a vacuum region. Hence, it is not entirely applicable to binary systems in galaxies or systems with a dark matter distribution around them. A power law potential can incorporate the effect of this mass distribution and hence can also account for the behaviour of the velocity profile. In this paper, we consider an extreme mass ratio inspiral (EMRI) binary consisting of a stellar object orbiting a central supermassive object in a power law potential. The orbital motion makes the EMRI radiate energy, which propagates as gravitational waves. We consider this stellar object to be in the faraway region from the central supermassive object. 
In this region, post-Newtonian (PN) theory can be used to obtain more precise expressions for the quantities describing the system, e.g., the orbital velocity of the object, the orbital radius, the acceleration of the object, etc. Subsequently, we predict the average energy radiation rate from the system. The upcoming advanced gravitational-wave detectors like LISA may be capable of detecting the gravitational radiation from EMRI binaries [27, 28, 29, 30]. We organize the paper in the following way. In Sec. (II), we derive the acceleration of a body up to the 1PN approximation using the power law potential instead of the KN potential. The binary system and the PN corrected expressions for the dynamical quantities are discussed in Sec. (III). In Sec. (IV), we discuss the PN corrected average energy radiation rate. In Sec. (V), the significant dynamical variables like the velocity and the orbital frequency, and the energy radiation in the case of the power law potential, are compared with those of the KN potential system. In Sec. (VI), we discuss the important results and possible future works. ## II Acceleration of the reference body in the presence of power law potential The first-order approximate expression for the acceleration vector ($a_{P}^{j}$) of a body, named body-P, can be written as [31], $\displaystyle a^{j}_{P}=\partial^{j}U_{P}+\frac{1}{c^{2}}[(v_{P}^{2}-4U_{P})\partial^{j}U_{P}-4v_{P}^{j}v_{P}^{k}\partial_{k}U_{P}$ $\displaystyle-3v_{P}^{j}\dot{U_{P}}+\partial^{j}\psi_{P}+\frac{1}{2}\partial^{j}\ddot{X_{P}}+4\dot{U_{P}}^{j}$ $\displaystyle-4(\partial^{j}U_{P}^{k}-\partial^{k}U_{P}^{j})v_{P,k}]$ $\displaystyle+O(c^{-4}),$ (1) where $U_{P}$, $U^{j}_{P}$, $\psi_{P}$ and $X_{P}$ are the external scalar gravitational potential, the vector gravitational potential, the post-Newtonian correction to the scalar gravitational potential and the super-potential respectively. 
The components of the EMRI under consideration are a supermassive compact object-$Q$ of mass $M_{Q}$ and a stellar mass object-$P$ of mass $M_{P}$ (which is much less than that of body-$Q$). We use the following notations throughout the paper: $\mathbf{y}_{Q}$ and $\mathbf{y}_{P}$ are the position vectors of the bodies Q and P in the center of mass frame; $r_{PQ}$ is the distance between the two bodies, also given as $|\mathbf{y}_{P}-\mathbf{y}_{Q}|$; $\mathbf{v}_{Q}$ and $\mathbf{v}_{P}$ are the absolute velocities of the bodies; $\mathbf{a}_{Q}$ and $\mathbf{a}_{P}$ are the absolute accelerations of the bodies; and $\mathbf{x}_{S}$ is the position vector of any field point ‘$S$’ where the potential is being evaluated. The unit vectors in the $\mathbf{x}_{SQ}$ and $\mathbf{x}_{PQ}$ directions can be written as $\mathbf{\hat{n}}_{SQ}=\frac{\mathbf{x_{SQ}}}{|\mathbf{x_{SQ}}|}=\frac{\mathbf{x}_{S}-\mathbf{y}_{Q}}{|\mathbf{x}_{S}-\mathbf{y}_{Q}|}$ and $\mathbf{\hat{n}}_{PQ}=\frac{\mathbf{x_{PQ}}}{|\mathbf{x_{PQ}}|}=\frac{\mathbf{y}_{P}-\mathbf{y}_{Q}}{|\mathbf{y}_{P}-\mathbf{y}_{Q}|}$ respectively. According to Kepler’s laws, the rotational velocities are expected to fall to nearly zero with increasing radial distance from the central supermassive object. However, in order to have higher values of rotational velocities (as generally observed on galactic scales [1, 8, 7]), one can use specific forms of the power law potential [33, 32]. A general form of the expression for the Newtonian order (i.e. 0PN order) gravitational acceleration $\mathbf{a}_{P,N}(r)$ for the power law potential can be written as, $\mathbf{a}_{P,N}(r_{PQ})=\partial^{j}U_{P}(r_{PQ})=-\frac{GM^{*}_{Q}r_{PQ}}{(\epsilon^{2}+r_{PQ}^{2})^{\delta+1}}\mathbf{\hat{n}}_{PQ}.$ (2) Here, $M^{*}_{Q}$ is the constant scale mass of the potential due to body-Q, $\epsilon$ is the scale radius of the source and $\delta$ can have any real value [34]. 
Note that the subscript in the acceleration is “$P,N$”, denoting the acceleration of body-P at Newtonian order, which is different from “$PN$”, which denotes the post-Newtonian corrections. Two interesting special cases are: (1) the circular velocity is constant ($v_{circ}=\sqrt{|\mathbf{a}|r}=$ const), i.e. $\delta=0$; and (2) the circular velocity is inversely proportional to the square root of the radial distance ($v_{circ}=\sqrt{|\mathbf{a}|r}\propto r^{-1/2}$), i.e. $\delta=\frac{1}{2}$. The above two cases correspond to flat and Keplerian rotation curves, respectively. Exactly flat rotation curves are obtained at $\delta=0$; using Eq. (2), it can be seen that the corresponding potential is the logarithmic potential. In this study, the general power law potential is considered, excluding the logarithmic potential case. The potential ($U_{P}$) corresponding to the above mentioned acceleration (Eq. (2)) at point P due to the super-massive body-Q [33] is $U_{P}=\frac{GM^{*}_{Q}}{2\delta(\epsilon^{2}+r_{PQ}^{2})^{\delta}}.$ (3) The corresponding vector potential ($U_{P}^{j}$) [31] is $U^{j}_{P}=\frac{GM^{*}_{Q}}{2\delta(\epsilon^{2}+r_{PQ}^{2})^{\delta}}{v}^{j}_{Q}.$ (4) ### II.1 External potentials and their derivatives In this sub-section, we derive the post-Newtonian potentials and their derivatives as introduced in Eq. (II). It is considered that the distance between the two bodies is much larger than the sizes of the two bodies. From here on, external potentials are referred to as just “potentials”. 
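As a quick numerical sanity check, one can verify that Eq. (2) is indeed the radial gradient of the potential in Eq. (3), and illustrate the two rotation-curve regimes noted above. This is a minimal sketch with illustrative unit values ($G=M^{*}_{Q}=1$ and arbitrary $\delta$, $\epsilon$, not taken from the paper):

```python
G, M_star = 1.0, 1.0  # illustrative units, not from the paper

def U(r, delta, eps):
    # Power law potential, Eq. (3) (delta != 0, excluding the logarithmic case)
    return G * M_star / (2 * delta * (eps**2 + r**2) ** delta)

def a_radial(r, delta, eps):
    # Radial Newtonian acceleration, Eq. (2)
    return -G * M_star * r / (eps**2 + r**2) ** (delta + 1)

# Check that Eq. (2) equals dU/dr of Eq. (3) via a central finite difference.
r, h, delta, eps = 2.3, 1e-6, 0.8, 0.7
dU_dr = (U(r + h, delta, eps) - U(r - h, delta, eps)) / (2 * h)
grad_error = abs(dU_dr - a_radial(r, delta, eps))

# Circular velocity v = sqrt(|a| r): flat for delta = 0, Keplerian for delta = 1/2.
def v_circ(r, delta, eps=1e-3):
    return (abs(a_radial(r, delta, eps)) * r) ** 0.5

flat_ratio = v_circ(40.0, 0.0) / v_circ(10.0, 0.0)    # ~1: flat rotation curve
kepler_ratio = v_circ(40.0, 0.5) / v_circ(10.0, 0.5)  # ~0.5: v proportional to r^(-1/2)
print(grad_error, flat_ratio, kepler_ratio)
```

With $\epsilon$ much smaller than $r$, the $\delta=0$ curve is flat to high accuracy and the $\delta=1/2$ curve reproduces the Keplerian fall-off.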
Before introducing the other external potentials and their derivatives, we list the following identities which are used to obtain them: $\displaystyle\partial^{j}r_{SQ}$ $\displaystyle=$ $\displaystyle n_{SQ}^{j},$ (5) $\displaystyle\partial_{j}\mathbf{x}^{k}_{SQ}$ $\displaystyle=$ $\displaystyle\delta_{j}^{k},$ (6) $\displaystyle\dot{\mathbf{x}}_{SQ}$ $\displaystyle=$ $\displaystyle-\mathbf{v}_{Q},$ (7) $\displaystyle\dot{{r}}_{SQ}$ $\displaystyle=$ $\displaystyle-\mathbf{\hat{n}}_{SQ}.\mathbf{v}_{Q},$ (8) $\displaystyle\partial^{j}n_{SQ}^{k}$ $\displaystyle=$ $\displaystyle\frac{\delta^{jk}}{r_{SQ}}-\frac{n_{SQ}^{j}n_{SQ}^{k}}{r_{SQ}},$ (9) $\displaystyle\partial^{j}v_{Q}^{k}$ $\displaystyle=$ $\displaystyle 0.$ (10) The over-dot represents the time derivative of the quantity. The following are the expressions for the derivatives of the two potentials given in Eq. (3) and Eq. (4) [31]. $\displaystyle\dot{U}_{P}$ $\displaystyle=$ $\displaystyle\frac{GM^{*}_{Q}r_{PQ}}{(r_{PQ}^{2}+\epsilon^{2})^{\delta+1}}(\mathbf{\hat{n}}_{PQ}.\mathbf{v}_{Q}),$ (11) $\displaystyle\dot{U}^{j}_{P}$ $\displaystyle=$ $\displaystyle\frac{GM^{*}_{Q}r_{PQ}}{2\delta(r_{PQ}^{2}+\epsilon^{2})^{\delta+1}}\bigg{[}2\delta(\mathbf{\hat{n}}_{PQ}.\mathbf{v}_{Q})v_{Q}^{j}$ (12) $\displaystyle\text{ }+\frac{GM_{P}^{*}}{(r_{PQ}^{2}+\epsilon^{2})^{\delta}}{n}^{j}_{PQ}\bigg{]},$ $\displaystyle\partial^{j}U_{P}$ $\displaystyle=$ $\displaystyle-\frac{GM_{Q}^{*}r_{PQ}}{(r_{PQ}^{2}+\epsilon^{2})^{\delta+1}}n^{j}_{PQ},$ (13) $\displaystyle\partial^{j}U^{k}_{P}$ $\displaystyle=$ $\displaystyle-\frac{GM^{*}_{Q}r_{PQ}}{(r_{PQ}^{2}+\epsilon^{2})^{\delta+1}}{n}^{j}_{PQ}{v}^{k}_{Q}.$ (14) The post-Newtonian correction ($\psi_{P}$) to the Newtonian potential is $\psi_{P}=U_{P}\left(\frac{3}{2}v_{Q}^{2}-U_{Q}\right)\,\,,$ (15) where $U_{Q}$ is the potential at body-Q. 
The spatial derivative of $\psi_{P}$ is $\displaystyle\partial^{j}\psi_{P}$ $\displaystyle=$ $\displaystyle\partial^{j}\left[U_{P}\left(\frac{3}{2}v_{Q}^{2}-U_{Q}\right)\right]$ (16) $\displaystyle=$ $\displaystyle-\frac{3}{2}\frac{GM^{*}_{Q}r_{PQ}}{(r_{PQ}^{2}+\epsilon^{2})^{\delta+1}}v_{Q}^{2}n_{PQ}^{j}.$ Mathematically, a potential is the quantity obtained from Poisson’s equation with a source term: the mass density yields the gravitational potential ($U$), and using this Newtonian gravitational potential itself as the source, a higher-order potential called the super-potential ($X$) can be obtained. In other words, the super-potential is the potential generated by the external potential. Here we briefly explain the derivation. The Poisson’s equation for the super-potential ($X$) is given as $\nabla^{2}X=2U.$ (17) The source term contains the potential $U$. The potential due to the mass $Q$ at a field point ($\mathbf{x^{\prime}}$) is a function of the separation between the field point and the source mass: $U(\mathbf{x^{\prime}})=U(\mathbf{x^{\prime}}-\mathbf{y}_{Q}).$ (18) The corresponding super-potential ($X(\mathbf{x}_{S})$) at another field point ($\mathbf{x}_{S}$) due to the potential $U(\mathbf{x^{\prime}})$ is the solution of the Poisson’s equation $\nabla^{2}X(\mathbf{x}_{S})=2U(\mathbf{x}_{S}-\mathbf{y}_{Q})$. $X(\mathbf{x}_{S})$ is spherically symmetric, hence its dependence on the vector ($\mathbf{x}_{S}-\mathbf{y}_{Q}$) can be simplified into a dependence on the magnitude of this vector, which we denote as ‘$r$’. The derivative of this super-potential as required in Eq. 
(II) is formalized as $\displaystyle\partial^{j}\ddot{X}(\mathbf{x_{S}})\ $ $\displaystyle=$ $\displaystyle\partial^{j}(\partial_{tt}X(r))$ $\displaystyle=$ $\displaystyle\partial^{j}\left(X^{\prime\prime}(r)\dot{r}^{2}+X^{\prime}(r)\ddot{r}\right)$ $\displaystyle=$ $\displaystyle X^{\prime\prime\prime}(r)\dot{r}^{2}\partial^{j}r+2X^{\prime\prime}(r)\dot{r}\partial^{j}\dot{r}$ (19) $\displaystyle+X^{\prime\prime}(r)\ddot{r}\partial^{j}r+X^{\prime}(r)\partial^{j}\ddot{r}.$ Here, the primes denote radial derivatives and the over-dots time derivatives. The general solution to the Poisson’s equation is given as $X(\mathbf{x}_{S})=-\frac{1}{2\pi}\int\frac{U(\mathbf{x}^{\prime})}{|\mathbf{x}_{S}-\mathbf{x^{\prime}}|}d^{3}x^{\prime}+X_{0}(\mathbf{x}_{S}).$ (20) Here, $X_{0}$ is the constant of integration, which is a solution of the Laplace equation $\nabla^{2}X_{0}=0$. As prescribed in [31] (Box 7.3), the domain of integration is truncated to the spatial near zone. This changes the form of the Poisson’s equation to $\nabla^{2}X=2U\Theta(R-r)$, where $\Theta(R-r)$ is the Heaviside step function and $R$ is the radius of the boundary of the near zone (whose value is immaterial, a fact which will be used later). According to our assumption, body-Q is heavy enough compared to body-P that the effect due to body-P is negligible, and we consider only the effect due to body-Q: $U(\mathbf{x^{\prime}})\approx U(\mathbf{x^{\prime}}-\mathbf{y}_{Q})=\frac{GM^{*}_{Q}}{2\delta(r_{1Q}^{2}+\epsilon^{2})^{\delta}},$ (21) here, $r_{1Q}=|\mathbf{x^{\prime}}-\mathbf{y}_{Q}|$. Substituting Eq. (21) into Eq. (20), we obtain $X(\mathbf{x}_{S})=-\dfrac{GM^{*}_{Q}}{4\pi\delta}\int\frac{\Theta(R-r^{\prime})}{|\mathbf{x}_{S}-\mathbf{x^{\prime}}|(r^{2}_{1Q}+\epsilon^{2})^{\delta}}d^{3}x^{\prime}+X_{0}(\mathbf{x}_{S}).$ (22) From here on, $r^{\prime}=|\mathbf{x^{\prime}}|$. If the distance between points, $r_{1Q}$, is very large compared to the scale length $\epsilon$, i.e. 
$\epsilon\ll r_{1Q}$, we can approximate the potential as $U(\mathbf{x^{\prime}-y}_{Q})\approx\frac{GM^{*}_{Q}}{2\delta(r_{1Q})^{2\delta}}.$ (23) Therefore, $X(\mathbf{x}_{S})=-\dfrac{GM^{*}_{Q}}{4\pi\delta}\int\frac{\Theta(R-r^{\prime})}{|\mathbf{x}_{S}-\mathbf{x^{\prime}}|(r_{1Q})^{2\delta}}d^{3}x^{\prime}+X_{0}(\mathbf{x}_{S}).$ (24) For mathematical simplicity, we take $\mathbf{y}_{Q}=0$. At the end of this section, we generalize our result by reintroducing $\mathbf{y}_{Q}$. We set up Cartesian coordinates such that the Z-axis aligns with the position vector $\mathbf{x_{S}}$. Therefore, $\mathbf{x_{S}}=r\mathbf{\hat{z}}$, $\mathbf{x_{S}}.\mathbf{x^{\prime}}=rr^{\prime}\cos\theta^{\prime}$, and $|\mathbf{x}_{S}-\mathbf{x^{\prime}}|=\sqrt{r^{2}+r^{\prime 2}-2rr^{\prime}\cos\theta^{\prime}}$. Hence $X(\mathbf{x}_{S})=-\dfrac{GM^{*}_{Q}}{4\pi\delta}\int\frac{\Theta(R-r^{\prime})r^{\prime 2}\sin\theta^{\prime}dr^{\prime}d\theta^{\prime}d\phi^{\prime}}{r^{\prime 2\delta}\sqrt{r^{2}+r^{\prime 2}-2rr^{\prime}\cos\theta^{\prime}}}+X_{0}.$ (25) The integrand is independent of $\phi^{\prime}$, thus the $\phi^{\prime}$ integral gives a factor $2\pi$. We integrate the $\theta^{\prime}$ part first, which we define as $I(r,r^{\prime})$. $\displaystyle I(r,r^{\prime})$ $\displaystyle=$ $\displaystyle\int_{0}^{\pi}(r^{2}+r^{\prime 2}-2rr^{\prime}\cos\theta^{\prime})^{-1/2}\sin\theta^{\prime}d\theta^{\prime}$ (26) $\displaystyle=$ $\displaystyle\dfrac{1}{rr^{\prime}}\{(r+r^{\prime})-|r-r^{\prime}|\}$ (27) Now we integrate over $r^{\prime}$. 
$\displaystyle X(\mathbf{x}_{S})$ $\displaystyle=$ $\displaystyle-\dfrac{GM^{*}_{Q}}{2\delta}\int_{0}^{R}r^{\prime 2(1-\delta)}I(r,r^{\prime})dr^{\prime}+X_{0}$ (28) $\displaystyle=$ $\displaystyle-\dfrac{GM^{*}_{Q}}{2\delta}\bigg{(}\int_{0}^{r}r^{\prime 2(1-\delta)}\dfrac{2}{r}dr^{\prime}+\int_{r}^{R}r^{\prime 2(1-\delta)}\dfrac{2}{r^{\prime}}dr^{\prime}\bigg{)}+X_{0}$ $\displaystyle=$ $\displaystyle\dfrac{GM^{*}_{Q}}{2\delta(1-\delta)}\bigg{(}\dfrac{r^{2(1-\delta)}}{3-2\delta}-R^{2(1-\delta)}\bigg{)}+X_{0}.$ Choosing $X_{0}=\dfrac{GM^{*}_{Q}}{2\delta(1-\delta)}R^{2(1-\delta)}$ eliminates the terms dependent on the boundary of the near zone. One then gets the following expression for the super-potential $X(\mathbf{x}_{S})=\dfrac{GM^{*}_{Q}}{2\delta(1-\delta)(3-2\delta)}(r_{SQ})^{2(1-\delta)}\,\,,$ (29) where we replace $\mathbf{x}_{S}$ by $\mathbf{x}_{S}-\mathbf{y}_{Q}$ to generalize the result (Eq. (28)) and drop the assumption $\mathbf{y}_{Q}=0$. Placing body-P at the point S, it can be shown that $\partial^{j}\ddot{X}_{P}=\dfrac{2GM_{Q}^{*}}{(3-2\delta)(r_{PQ})^{2\delta+1}}\Bigg{[}2(\delta+1)(\mathbf{v}_{Q}.\mathbf{\hat{n}}_{PQ})^{2}n^{j}_{PQ}-v_{Q}^{2}n^{j}_{PQ}-2(\mathbf{v}_{Q}.\mathbf{\hat{n}}_{PQ})v_{Q}^{j}+\dfrac{2GM^{*}_{P}}{(r_{PQ})^{2\delta}}n^{j}_{PQ}\Bigg{]}.$ (30) The acceleration of body-P can be obtained by substituting the expressions for the derivatives of the potentials as derived in Eq. (11), (12), (13), (14), (16), and (30) into Eq. (II). As previously mentioned, we assume that body-Q is much heavier than body-P. Therefore, for mathematical simplicity, we can consider body-Q to be always stationary, which implies $\mathbf{v}_{Q}\approx 0$ and $\mathbf{a}_{Q}\approx 0$. For notational simplicity, we use $r_{PQ}=r$, $\mathbf{\hat{n}}_{PQ}=\mathbf{\hat{n}}$, $\mathbf{v}_{P}=\mathbf{v}$, and $\dfrac{GM_{Q}^{*}}{2}=K_{Q}$. 
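The closed-form super-potential of Eq. (29) can be cross-checked against a direct numerical evaluation of the truncated integral in Eq. (28). This is a sketch with illustrative unit values ($G=M^{*}_{Q}=1$, $\delta=0.8$, chosen to avoid $\delta=1$ and $\delta=3/2$; none of these values are from the paper):

```python
G, M_star, delta = 1.0, 1.0, 0.8  # illustrative; delta != 1, 3/2
R, r = 10.0, 3.0                  # near-zone radius and field radius

def I(r, rp):
    # Angular integral, Eq. (27): (1/(r r')) * ((r + r') - |r - r'|)
    return ((r + rp) - abs(r - rp)) / (r * rp)

# Midpoint-rule evaluation of the radial integral in Eq. (28).
N = 200_000
h = R / N
integral = sum(((i + 0.5) * h) ** (2 * (1 - delta)) * I(r, (i + 0.5) * h)
               for i in range(N)) * h

# X0 chosen as in the text to cancel the near-zone boundary term.
X0 = G * M_star / (2 * delta * (1 - delta)) * R ** (2 * (1 - delta))
X_numeric = -G * M_star / (2 * delta) * integral + X0

# Closed form, Eq. (29), with r_SQ = r.
X_closed = (G * M_star / (2 * delta * (1 - delta) * (3 - 2 * delta))
            * r ** (2 * (1 - delta)))
print(X_numeric, X_closed)
```

The two values agree to within the quadrature error, and $R$ drops out of the final answer, illustrating that the near-zone boundary is indeed immaterial.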
Therefore, the potential and its derivatives reduce to the expressions listed below. $\displaystyle U_{P}$ $\displaystyle=$ $\displaystyle\dfrac{K_{Q}}{\delta r^{2\delta}},$ (31) $\displaystyle\partial^{j}U_{P}$ $\displaystyle=$ $\displaystyle\dfrac{-2K_{Q}}{r^{2\delta+1}}n^{j},$ (32) $\displaystyle U_{P}^{j}$ $\displaystyle=$ $\displaystyle\dfrac{K_{Q}v_{Q}^{j}}{\delta r^{2\delta}}=0,$ (33) $\displaystyle\dot{U}_{P}$ $\displaystyle=$ $\displaystyle\dfrac{2K_{Q}(\mathbf{v}_{Q}.\mathbf{\hat{n}})}{r^{2\delta+1}}=0,$ (34) $\displaystyle\partial^{j}\psi_{P}$ $\displaystyle=$ $\displaystyle\dfrac{-3K_{Q}v_{Q}^{2}}{r^{2\delta+1}}n^{j}=0,$ (35) $\displaystyle\dot{U}_{P}^{j}$ $\displaystyle=$ $\displaystyle\dfrac{2K_{P}K_{Q}}{\delta r^{4\delta+1}}n^{j}+\dfrac{2K_{Q}(\mathbf{v}_{Q}.\mathbf{\hat{n}})}{r^{2\delta+1}}v_{Q}^{j}$ (36) $\displaystyle=$ $\displaystyle\dfrac{2K_{P}K_{Q}}{\delta r^{4\delta+1}}n^{j},$ $\displaystyle\partial^{j}U_{P}^{i}$ $\displaystyle=$ $\displaystyle\dfrac{-2K_{Q}}{r^{2\delta+1}}n^{j}v_{Q}^{i}=0.$ (37) One can substitute the above expressions (Eqs. (31) to (37)), along with the expression for the derivative of the super-potential as given in Eq. (30), into Eq. (II) to obtain the acceleration ($a_{P}^{j}$) as follows. $a_{P}^{j}=-\dfrac{GM_{Q}\epsilon^{2\delta-1}}{r^{2\delta+1}}n^{j}-\dfrac{GM_{Q}\epsilon^{2\delta-1}}{c^{2}r^{2\delta+1}}\bigg{[}-4(\mathbf{v}_{P}.\mathbf{\hat{n}})v_{P}^{j}+\bigg{(}v_{P}^{2}-\dfrac{2GM_{Q}\epsilon^{2\delta-1}}{\delta r^{2\delta}}\bigg{)}n^{j}\bigg{]},$ (38) where the mass of body-Q ($M_{Q}$) is related to the scale mass $M_{Q}^{*}$ by $M^{*}_{Q}=M_{Q}\epsilon^{2\delta-1}$. In the next section, we derive the dynamical variables corresponding to the binary. ## III THE BINARY SYSTEM AND THE DYNAMICAL QUANTITIES The EMRI under consideration comprises a supermassive compact object (body-Q) and a stellar mass object (body-P) moving in a circular orbit around body-Q. 
The position $\mathbf{x}$ of body-P is written as $\mathbf{x}=r(\cos{\phi}\mathbf{\hat{i}}+\sin{\phi}\mathbf{\hat{j}}),$ (39) where $\phi$ is the azimuthal angle and $r$ is the radius. Since we do not need to work with the dynamics of any body other than body-P, we drop the subscript “P”, i.e. the quantities $r_{P}$, $\mathbf{x}_{P}$, $\mathbf{v}_{P}$ and $\mathbf{a}_{P}$ will be denoted as $r$, $\mathbf{x}$, $\mathbf{v}$ and $\mathbf{a}$. The subscript “PQ” in the quantities that are measured with reference to body-Q is also dropped, i.e. $r_{PQ}$, $\mathbf{x}_{PQ}$ and $\mathbf{\hat{n}}_{PQ}$ are henceforth denoted as $r$, $\mathbf{x}$, $\mathbf{\hat{n}}$. In this study, we have used the following sign convention: $U(r)=+\dfrac{GM_{Q}\epsilon^{2\delta-1}}{2\delta r^{2\delta}}$ (cf. Eq. (31)), and $a(r)=+\dfrac{dU}{dr}$. The differential orbit equation (for body-P) can be written as [22, 32, 33, 35] $\dfrac{d^{2}u}{d\phi^{2}}+u=\dfrac{M_{P}}{L^{2}}\dfrac{dU(1/u)}{du},$ (40) where $u=1/r$ and $L$ is the conserved angular momentum of body-P. Therefore $\dfrac{d^{2}u}{d\phi^{2}}+u=\dfrac{GM_{Q}M_{P}}{L^{2}}\epsilon^{2\delta-1}u^{2\delta-1}.$ (41) For circular orbits, the solution of the above differential orbit equation is $u=1/p$. Therefore, we obtain $p^{2(\delta-1)}=\dfrac{GM_{Q}M_{P}}{L^{2}}\epsilon^{2\delta-1}.$ (42) ### III.1 Dynamical quantities at Newtonian order Henceforth, the dynamical quantities, say F, will appear in the form $F=F_{N}+\frac{1}{c^{2}}F_{PN}$, where $F_{N}$ denotes the Newtonian order or 0PN term, derived in this sub-section (III.1), and $F_{PN}$ denotes the 1PN correction to the quantity F, derived in the next sub-section (III.2). For example, the Newtonian order and post-Newtonian order expressions for the velocity of body-P are denoted as $\mathbf{v}_{N}$ and $\mathbf{v}_{PN}$ respectively, while the complete PN-corrected term is denoted as $\mathbf{v}$. 
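The circular solution $u=1/p$ of Eq. (41), with $p$ fixed by Eq. (42), can be checked by integrating the orbit equation numerically and confirming that $u(\phi)$ stays at $1/p$. This is a sketch with illustrative unit values ($\delta=0.8$, $G=M_{Q}=M_{P}=\epsilon=1$, $L=1.3$, none from the paper):

```python
# Illustrative unit values (not from the paper); delta != 1 so Eq. (42) is invertible.
G, M_Q, M_P, eps, delta, L = 1.0, 1.0, 1.0, 1.0, 0.8, 1.3
C = G * M_Q * M_P * eps ** (2 * delta - 1) / L**2

# Circular-orbit radius from Eq. (42): p^(2(delta - 1)) = C
p = C ** (1.0 / (2 * (delta - 1)))

def rhs(u, w):
    # Orbit equation (41) as a first-order system: u' = w, w' = C u^(2 delta - 1) - u
    return w, C * u ** (2 * delta - 1) - u

# RK4 integration over one revolution in phi, starting on the circular solution.
u, w = 1.0 / p, 0.0
n = 2000
dphi = 2 * 3.141592653589793 / n
max_dev = 0.0
for _ in range(n):
    k1u, k1w = rhs(u, w)
    k2u, k2w = rhs(u + 0.5 * dphi * k1u, w + 0.5 * dphi * k1w)
    k3u, k3w = rhs(u + 0.5 * dphi * k2u, w + 0.5 * dphi * k2w)
    k4u, k4w = rhs(u + dphi * k3u, w + dphi * k3w)
    u += dphi / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
    w += dphi / 6 * (k1w + 2 * k2w + 2 * k3w + k4w)
    max_dev = max(max_dev, abs(u - 1.0 / p))
print(max_dev)
```

The deviation from $u=1/p$ remains at the level of floating-point round-off, as expected for an exact equilibrium of the orbit equation.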
We derive Kepler’s third law for the power law potential by balancing the centrifugal acceleration with the radial attractive acceleration due to gravity, $\dfrac{v_{N}^{2}}{r}=\dfrac{GM_{Q}}{r^{2\delta+1}}\epsilon^{2\delta-1}.$ (43) Here, $v_{N}=\dfrac{2\pi r}{T}$ is the 0PN or Newtonian order expression for the velocity of the body-P and $T$ is the time period of its orbit. Therefore, $\dfrac{4\pi^{2}}{T^{2}}=\dfrac{GM_{Q}}{r^{2(\delta+1)}}\epsilon^{2\delta-1}.$ (44) Kepler’s third law in the case of the power law potential becomes $T^{2}\propto r^{2(\delta+1)}$. Since we have restricted our attention to circular orbits, $r=p$ (constant). Thus the angular velocity $\dot{\phi}_{N}$ becomes $\dot{\phi}_{N}=\dfrac{2\pi}{T}=\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2(\delta+1)}}}.$ (45) Using Eq. (43), the 0PN velocity $\mathbf{v}_{N}$ of the body-P becomes $\mathbf{v}_{N}=\dfrac{d\mathbf{x}}{dt}=\dfrac{d\mathbf{x}}{d\phi}\dot{\phi}=p\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2(\delta+1)}}}\bm{\hat{\nu}}$ (46) where $\bm{\hat{\nu}}=\dfrac{d\mathbf{\hat{n}}}{d\phi}=-\sin{\phi}\mathbf{\hat{i}}+\cos{\phi}\mathbf{\hat{j}}$. The Newtonian order term of the acceleration is $\mathbf{a}_{N}=\dfrac{\partial U}{\partial r}\bigg{|}_{r=p}=-\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta+1}}\mathbf{\hat{n}}\,\,.$ (47) ### III.2 Dynamical quantities with Post-Newtonian correction We use the 1PN corrected acceleration (38) to determine the 1PN correction in the other dynamical variables, i.e. velocity, radial coordinate and angular frequency.
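Before moving on to the 1PN corrections, the Newtonian relations of Sec. III.1 admit a quick numerical check. The Python sketch below (illustrative; names and test values are ours) evaluates Eq. (45) and confirms that $\delta=1/2$ with $\epsilon=1$ recovers the familiar Kepler relation $T^{2}=4\pi^{2}p^{3}/GM_{Q}$:

```python
import math

def phi_dot_newtonian(G, M_Q, eps, p, delta):
    # Eq. (45): Newtonian angular velocity of body-P on a circular
    # orbit of radius p in the power law potential.
    return math.sqrt(G * M_Q * eps**(2 * delta - 1) / p**(2 * (delta + 1)))

def orbital_period(G, M_Q, eps, p, delta):
    # T = 2*pi / phi_dot, so T^2 is proportional to p^(2*(delta+1)),
    # the generalized Kepler third law of Eq. (44).
    return 2 * math.pi / phi_dot_newtonian(G, M_Q, eps, p, delta)

# Kepler-Newton limit: delta = 1/2 (eps drops out since eps**0 = 1).
G, M_Q, p = 1.0, 1.0, 4.0
T = orbital_period(G, M_Q, eps=1.0, p=p, delta=0.5)
assert abs(T**2 - 4 * math.pi**2 * p**3 / (G * M_Q)) < 1e-9
```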
In the case of circular orbits, $\mathbf{v}_{P}.\mathbf{\hat{n}}=0$, and the acceleration (38) reduces to $\mathbf{a}=-\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta+1}}\mathbf{\hat{n}}+\dfrac{1}{c^{2}}\dfrac{G^{2}M_{Q}^{2}\epsilon^{2(2\delta-1)}}{\delta p^{4\delta+1}}(2-\delta)\mathbf{\hat{n}},$ (48) which can be written as $\mathbf{a}=\mathbf{a}_{N}+\dfrac{1}{c^{2}}\mathbf{a}_{PN},$ (49) where the 1PN correction in the acceleration is $\mathbf{a}_{PN}=\dfrac{G^{2}M_{Q}^{2}\epsilon^{2(2\delta-1)}}{\delta p^{4\delta+1}}(2-\delta)\mathbf{\hat{n}}.$ (50) The expression for the velocity at the Newtonian order (0PN) is $v_{N}^{2}=\dfrac{GM_{Q}}{p^{2\delta}}\epsilon^{2\delta-1}$, as derived in Eq. (46). After substituting this expression, the post-Newtonian parameter ($v_{N}^{2}/c^{2}$) becomes apparent. Hence we can simplify Eq. (48), $\mathbf{a}=-\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta+1}}\bigg{[}1-\dfrac{v_{N}^{2}}{c^{2}}\dfrac{(2-\delta)}{\delta}\bigg{]}\mathbf{\hat{n}}.$ (51) In the Kepler-Newton (KN) case, $\delta=1/2$, and the acceleration ($\mathbf{a}_{KN}$) reduces to ([36], [37]) $\mathbf{a}_{KN}=-\dfrac{GM_{Q}}{p^{2}}\bigg{[}1-3\dfrac{v_{KN}^{2}}{c^{2}}\bigg{]}\mathbf{\hat{n}}\,\,,$ (52) where $v_{KN}^{2}=\dfrac{GM_{Q}}{p}$. Post-Newtonian corrections are only valid as long as $(\frac{v_{N}}{c})^{2}\ll 1$ and the PN correction is much smaller than the Newtonian order part. This can be simplified to give the following condition $1\gg\big{(}\frac{v_{N}}{c}\big{)}^{2}\big{(}\frac{2-\delta}{\delta}\big{)}=\frac{GM_{Q}\epsilon^{(2\delta-1)}}{p^{2\delta}c^{2}}\frac{2-\delta}{\delta}\implies\delta>0.$ (53) We shall consider the cases where $\delta>0$. Now, we proceed to determine the PN corrections in the other dynamical variables.
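The reduction of Eq. (51) to the Kepler-Newton form (52) at $\delta=1/2$, where $(2-\delta)/\delta=3$, can be checked numerically. The sketch below is illustrative (our own names; arbitrary geometrized test values):

```python
import math

def accel_1pn(G, M_Q, eps, p, delta, c):
    # Eq. (51): magnitude of the acceleration (directed along -n̂) of
    # body-P on a circular orbit, including the 1PN correction.
    vN2 = G * M_Q * eps**(2 * delta - 1) / p**(2 * delta)   # Eq. (46)
    newtonian = G * M_Q * eps**(2 * delta - 1) / p**(2 * delta + 1)
    return newtonian * (1 - (vN2 / c**2) * (2 - delta) / delta)

# delta = 1/2: (2 - delta)/delta = 3, recovering Eq. (52).
G, M_Q, p, c = 1.0, 1.0, 50.0, 10.0
v_kn2 = G * M_Q / p
expected_kn = (G * M_Q / p**2) * (1 - 3 * v_kn2 / c**2)
assert abs(accel_1pn(G, M_Q, 1.0, p, 0.5, c) - expected_kn) < 1e-12
```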
Let the PN corrected angular momentum per unit mass ($r^{2}\dot{\phi}$) and the PN corrected velocity ($\mathbf{v}_{P}$) be written as $\displaystyle r^{2}\dot{\phi}$ $\displaystyle=$ $\displaystyle|\mathbf{r}\times\mathbf{v}|=p\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}(1+\delta h)\,,$ (54) $\displaystyle\mathbf{v}$ $\displaystyle=$ $\displaystyle\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\bm{\hat{\nu}}+\delta\mathbf{v}\,,$ (55) where $p\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\delta h$ and $\delta\mathbf{v}$ are the PN correction terms. Since we consider circular orbits in this paper, the angular momentum of the body is constant for a particular orbit. Therefore, Eq. (54) becomes [37] $r^{2}\dot{\phi}=p\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}},$ (56) where $r$ and $\dot{\phi}$ are corrected up to 1-PN. Since circular orbits are considered, the radial component of the velocity is zero, and hence the velocity can be written as $v=r\dot{\phi}$. Also, the relation between the acceleration $\mathbf{a}$ and the velocity $\mathbf{v}$ can be given as: $\mathbf{a}=\dfrac{d\mathbf{v}}{dt}=\dfrac{d\mathbf{v}}{d\phi}\dfrac{d\phi}{dt}=\dfrac{d\mathbf{v}}{d\phi}\dot{\phi}.$ (57) Using Eq. (55) and Eq.
(51), the above equation can be re-written as, $\displaystyle-\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta+1}}\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\frac{(2-\delta)}{\delta}\bigg{]}\mathbf{\hat{n}}$ $\displaystyle=$ $\displaystyle\bigg{(}\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\dfrac{d\bm{\hat{\nu}}}{d\phi}+\dfrac{d\delta\mathbf{v}}{d\phi}\bigg{)}\dot{\phi}$ (58) $\displaystyle=$ $\displaystyle\bigg{(}\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\dfrac{d\bm{\hat{\nu}}}{d\phi}+\dfrac{d\delta\mathbf{v}}{d\phi}\bigg{)}\dfrac{1}{p}\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}$ $\displaystyle=$ $\displaystyle-\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta+1}}\mathbf{\hat{n}}+\dfrac{1}{p}\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\dfrac{d\delta\mathbf{v}}{d\phi}.$ Equating the 1-PN terms, we get $\displaystyle\dfrac{2-\delta}{c^{2}}\dfrac{G^{2}M_{Q}^{2}\epsilon^{2(2\delta-1)}}{\delta p^{4\delta+1}}\mathbf{\hat{n}}$ $\displaystyle=$ $\displaystyle\dfrac{1}{p}\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\dfrac{d\delta\mathbf{v}}{d\phi}.$ (59) Therefore, $\displaystyle\dfrac{d\delta\mathbf{v}}{d\phi}$ $\displaystyle=$ $\displaystyle\dfrac{1}{c^{2}}\dfrac{(2-\delta)}{\delta}\bigg{[}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\bigg{]}^{3/2}\mathbf{\hat{n}},$ (60) $\displaystyle\therefore\delta\mathbf{v}$ $\displaystyle=$ $\displaystyle-\dfrac{1}{c^{2}}\dfrac{(2-\delta)}{\delta}\bigg{[}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\bigg{]}^{3/2}\bm{\hat{\nu}}.$ (61) Hence, the velocity $\mathbf{v}_{P}$ of the body-P corrected up to the first order Post-Newtonian term is $\therefore\mathbf{v}_{P}=\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\frac{(2-\delta)}{\delta}\bigg{]}\bm{\hat{\nu}}.$ (62) Substituting Eq. 
(62) in the following relation $\mathbf{v}_{P}=\dfrac{d\mathbf{x}}{dt}=\dfrac{d\mathbf{x}}{d\phi}\dfrac{d\phi}{dt}=r\dot{\phi}\bm{\hat{\nu}}=\dfrac{r^{2}\dot{\phi}}{r}\bm{\hat{\nu}},$ (63) and using Eq. (56), we obtain $\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\frac{(2-\delta)}{\delta}\bigg{]}\bm{\hat{\nu}}\\\ =\dfrac{r^{2}\dot{\phi}}{r}\bm{\hat{\nu}}=\dfrac{p}{r}\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\bm{\hat{\nu}},$ (64) which implies $\dfrac{p}{r}=\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\frac{(2-\delta)}{\delta}\bigg{]}.$ (65) Therefore, the radial coordinate ($r$) corrected up to the first order Post-Newtonian term is $r=p\bigg{[}1+\dfrac{1}{c^{2}}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\frac{(2-\delta)}{\delta}\bigg{]}+O(c^{-4}).$ (66) Next, we derive the expression for the angular velocity $\dot{\phi}$. Substituting Eq. (66) into Eq. (56) and simplifying for $\dot{\phi}$, we obtain $\dot{\phi}=\dfrac{1}{p}\sqrt{\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}}\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{2GM_{Q}\epsilon^{2\delta-1}}{\delta p^{2\delta}}(2-\delta)\bigg{]}.$ (67) In the next section, we discuss the average energy radiation rate of an EMRI using the expressions of the PN corrected quantities derived here. ## IV Average Energy radiation rate from an EMRI The average energy radiation rate $\langle\frac{dE}{dt}\rangle$ from a binary is given by the following expression [37, 36, 31].
$\bigg{\langle}\frac{dE}{dt}\bigg{\rangle}=-\frac{G}{45c^{5}}\frac{1}{T}\int^{T}_{0}(\dddot{D}^{\alpha\beta})^{2}dt,$ (68) where the mass quadrupole tensor $D^{\alpha\beta}$ is defined as [36] $D^{\alpha\beta}=M_{P}(3x^{\alpha}x^{\beta}-|\mathbf{x}|^{2}\delta^{\alpha\beta}).$ (69) Therefore, $\displaystyle\dddot{D}^{\alpha\beta}$ $\displaystyle=$ $\displaystyle\dfrac{d^{3}}{dt^{3}}\big{[}M_{P}(3x^{\alpha}x^{\beta}-|\mathbf{x}|^{2}\delta^{\alpha\beta})\big{]}$ (70) $\displaystyle=$ $\displaystyle M_{P}\dfrac{d^{2}}{dt^{2}}\big{[}3\dot{x}^{\alpha}x^{\beta}+3x^{\alpha}\dot{x}^{\beta}-2\mathbf{x}.\dot{\mathbf{x}}\delta^{\alpha\beta}\big{]}$ $\displaystyle=$ $\displaystyle M_{P}\dfrac{d}{dt}\big{[}3\ddot{x}^{\alpha}x^{\beta}+6\dot{x}^{\alpha}\dot{x}^{\beta}+3x^{\alpha}\ddot{x}^{\beta}-2\dot{x}^{2}\delta^{\alpha\beta}-2\mathbf{x}.\ddot{\mathbf{x}}\delta^{\alpha\beta}\big{]}$ $\displaystyle=$ $\displaystyle M_{P}\big{[}3\dddot{x}^{\alpha}x^{\beta}+9\ddot{x}^{\alpha}\dot{x}^{\beta}+9\dot{x}^{\alpha}\ddot{x}^{\beta}+3x^{\alpha}\dddot{x}^{\beta}-6\dot{\mathbf{x}}.\ddot{\mathbf{x}}\delta^{\alpha\beta}-2\mathbf{x}.\dddot{\mathbf{x}}\delta^{\alpha\beta}\big{]}$ Hence, $(\dddot{D}^{\alpha\beta})^{2}$ can be written as $\displaystyle(\dddot{D}^{\alpha\beta})^{2}$ $\displaystyle=$ $\displaystyle(\dddot{D}^{\alpha\beta})(\dddot{D}_{\alpha\beta})$ (71) $\displaystyle=$ $\displaystyle 6M_{P}^{2}\big{[}3\dddot{x}^{2}x^{2}+18(\dddot{\mathbf{x}}.\ddot{\mathbf{x}})(\dot{\mathbf{x}}.\mathbf{x})+18(\dddot{\mathbf{x}}.\dot{\mathbf{x}})(\ddot{\mathbf{x}}.\mathbf{x})+(\mathbf{x}.\dddot{\mathbf{x}})^{2}+27\dot{\mathbf{x}}^{2}\ddot{\mathbf{x}}^{2}+9(\dot{\mathbf{x}}.\ddot{\mathbf{x}})^{2}-12(\dddot{\mathbf{x}}.\mathbf{x})(\ddot{\mathbf{x}}.\dot{\mathbf{x}})\big{]},$ where the acceleration terms ($\ddot{\mathbf{x}}$) can be written in terms of the gradient of the potential; the magnitude of the force corresponding to this acceleration is $\mathbf{F_{a}}=M_{P}|\mathbf{a}|=\dfrac{\partial
V}{\partial r}$, where $V$ is the potential energy. Therefore, the acceleration of the body-P becomes $\mathbf{a}=\frac{1}{M_{P}}\dfrac{\partial V}{\partial r}\mathbf{\hat{n}}+\dfrac{1}{c^{2}}\mathbf{a}_{PN}.$ (72) Substituting the above expression of $\mathbf{a}$ into Eq. (71) and after some simplification, we obtain the following expression for $(\dddot{D}^{\alpha\beta})^{2}$ ([38, 36]): $\displaystyle(\dddot{D}^{\alpha\beta})^{2}=24\Bigg{[}\bigg{(}r\dot{r}\dfrac{\partial^{2}V}{\partial r^{2}}+3\dot{r}\dfrac{\partial V}{\partial r}\bigg{)}^{2}$ $\displaystyle+$ $\displaystyle 12r^{2}\dot{\phi}^{2}\bigg{(}\dfrac{\partial V}{\partial r}\bigg{)}^{2}\Bigg{]}+\dfrac{24M_{P}}{c^{2}}\Bigg{[}2r\bigg{(}\dot{r}\dfrac{\partial^{2}V}{\partial r^{2}}\mathbf{x}+3\dfrac{\partial V}{\partial r}\dot{\mathbf{x}}\bigg{)}.\dot{\mathbf{a}}_{PN}$ (73) $\displaystyle+$ $\displaystyle\bigg{\\{}9\dfrac{\partial V}{\partial r}\big{(}\dot{r}^{2}+2r^{2}\dot{\phi}^{2}\big{)}\mathbf{\hat{n}}+3\dot{r}\dfrac{\partial^{2}V}{\partial r^{2}}\big{(}3\dot{r}\mathbf{x}-r\dot{\mathbf{x}}\big{)}+9\dot{r}\dfrac{\partial V}{\partial r}\dot{\mathbf{x}}\bigg{\\}}.\mathbf{a}_{PN}\Bigg{]},$ For circular orbits $\dot{r}=0$, therefore Eq. (73) reduces to the following expression. 
$\displaystyle(\dddot{D}^{\alpha\beta})^{2}$ $\displaystyle=$ $\displaystyle 24\Bigg{[}12\bigg{(}\dfrac{\partial V}{\partial r}\bigg{)}^{2}r^{2}\dot{\phi}^{2}$ $\displaystyle+$ $\displaystyle\dfrac{M_{P}}{c^{2}}\bigg{\\{}18r^{2}\dot{\phi}^{2}\dfrac{\partial V}{\partial r}\mathbf{\hat{n}}.\mathbf{a}_{PN}+6r\dfrac{\partial V}{\partial r}\mathbf{v}_{P}.\dot{\mathbf{a}}_{PN}\bigg{\\}}\Bigg{]}.$ (74) Here, the time derivative of the PN correction term $\dot{\mathbf{a}}_{PN}$ of the acceleration is $\dot{\mathbf{a}}_{PN}=\dfrac{d\mathbf{a}_{PN}}{dt}=\dfrac{d\mathbf{a}_{PN}}{d\phi}\dfrac{d\phi}{dt}=\dfrac{d\mathbf{a}_{PN}}{d\phi}\dot{\phi}.$ (75) Therefore, $\dot{\mathbf{a}}_{PN}=\bigg{[}\dfrac{GM_{Q}\epsilon^{2\delta-1}}{p^{2\delta}}\bigg{]}^{5/2}\dfrac{(2-\delta)}{\delta p^{2}}\bm{\hat{\nu}}+O(c^{-2}),$ (76) where the higher order terms are ignored as they contribute to the 2PN or higher orders, which is beyond the scope of the current study. We can also write, $\displaystyle V(r)$ $\displaystyle=$ $\displaystyle\dfrac{GM_{Q}M_{P}}{2\delta r^{2\delta}}\epsilon^{2\delta-1},$ (77) $\displaystyle\mathbf{v}_{P}.\dot{\mathbf{a}}_{PN}$ $\displaystyle=$ $\displaystyle\dfrac{G^{3}M_{Q}^{3}\epsilon^{3(2\delta-1)}}{\delta p^{2(3\delta+1)}}(2-\delta)+O(c^{-2}),$ (78) $\displaystyle\mathbf{\hat{n}}.\mathbf{a}_{PN}$ $\displaystyle=$ $\displaystyle\dfrac{G^{2}M_{Q}^{2}\epsilon^{2(2\delta-1)}}{\delta p^{4\delta+1}}(2-\delta)+O(c^{-2}).$ (79) Now substituting Eq. (66), (67), and the above expressions into Eq. (74), we obtain $(\dddot{D}^{\alpha\beta})^{2}=\dfrac{288G^{3}M_{Q}^{3}M_{P}^{2}\epsilon^{3(2\delta-1)}}{p^{2(3\delta+1)}}\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{2GM_{Q}\epsilon^{2\delta-1}}{\delta p^{2\delta}}(2-\delta)(3+2\delta)\bigg{]}.$ (80) The average rate of the energy radiation mentioned in Eq.
(68) can be written as, $\displaystyle\bigg{\langle}\frac{dE}{dt}\bigg{\rangle}=-\frac{G}{45c^{5}}\frac{1}{2\pi}\int^{2\pi}_{0}(\dddot{D}^{\alpha\beta})^{2}d\phi\,\,,$ (81) where $\dot{\phi}=2\pi/T$. Figure 1: Velocity of the smaller object in the extreme mass ratio binary in the power law (PL) potential (orange curve) and in the Kepler-Newton (KN) potential (blue curve) with respect to the radial distance between the two bodies, for (a) $\delta=0.27$ (i.e. $\delta<\delta_{0}$), (b) $\delta=0.01$ (i.e. $\delta\approx 0$) and (c) $\delta=0.535$ (i.e. $\delta>0.5$). In the case of circular orbits, the integrand is independent of the variable $\phi$. Therefore, the above expression of the average energy radiation rate reduces to $\bigg{\langle}\frac{dE}{dt}\bigg{\rangle}=-\dfrac{G}{45c^{5}}(\dddot{D}^{\alpha\beta})^{2}.$ (82) Thus $\bigg{\langle}\frac{dE}{dt}\bigg{\rangle}=-\dfrac{32G^{4}M_{Q}^{3}M_{P}^{2}\epsilon^{3(2\delta-1)}}{5c^{5}p^{2(3\delta+1)}}\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{2GM_{Q}\epsilon^{2\delta-1}}{\delta p^{2\delta}}(2-\delta)(3+2\delta)\bigg{]}.$ (83) The condition for using PN approximations to evaluate the average energy radiation rate for the power law potential is $1\gg\frac{GM_{Q}\epsilon^{2\delta-1}}{c^{2}p^{2\delta}}\frac{(2-\delta)(3+2\delta)}{\delta}.$ (84) For a given EMRI and in a range of radial distance, this condition can be narrowed down to a cut-off $\delta_{0}$ below which PN approximations cannot be used to work out the average energy radiation rate. This cut-off depends on the properties of the binary under consideration.
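Equation (83) can be exercised numerically; in particular, at $\delta=1/2$ the factor $2(2-\delta)(3+2\delta)/\delta$ equals $24$, recovering the standard Kepler-Newton quadrupole result $-\frac{32G^{4}M_{Q}^{3}M_{P}^{2}}{5c^{5}p^{5}}\big{(}1-\frac{24GM_{Q}}{c^{2}p}\big{)}$. A minimal Python sketch (illustrative; arbitrary geometrized test values):

```python
def dEdt_avg(G, M_Q, M_P, eps, p, delta, c):
    # Eq. (83): 1PN-corrected average energy radiation rate from the EMRI.
    newt = -32 * G**4 * M_Q**3 * M_P**2 * eps**(3 * (2 * delta - 1)) \
           / (5 * c**5 * p**(2 * (3 * delta + 1)))
    corr = 1 - (2 * G * M_Q * eps**(2 * delta - 1)
                / (delta * c**2 * p**(2 * delta))) * (2 - delta) * (3 + 2 * delta)
    return newt * corr

# Kepler-Newton limit: delta = 1/2, eps drops out, and the 1PN factor
# becomes (1 - 24 G M_Q / (c^2 p)).
G, M_Q, M_P, p, c = 1.0, 1.0, 1e-5, 100.0, 10.0
kn = -32 * G**4 * M_Q**3 * M_P**2 / (5 * c**5 * p**5) \
     * (1 - 24 * G * M_Q / (c**2 * p))
assert abs(dEdt_avg(G, M_Q, M_P, 1.0, p, 0.5, c) - kn) < abs(kn) * 1e-9
```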
Figure 2: Variation of the 1-PN corrected rate of energy radiation from an extreme mass ratio binary in the power law (PL) potential (orange curve) and in the Kepler-Newton (KN) potential (blue curve) with respect to the radial distance ($p$) between the two bodies, for (a) $\delta=0.27$ (i.e. $\delta<\delta_{0}$), (b) $\delta=0.395$ (i.e. $\delta_{0}<\delta<0.5$) and (c) $\delta=0.535$ (i.e. $\delta>0.5$). Figure 3: Comparison of the orbital frequency of the extreme mass ratio binary in the power law (PL) potential (orange curve) and in the Kepler-Newton (KN) potential (blue curve) with respect to the radial distance ($p$) between the two bodies, for (a) $\delta=0.395$ (i.e. $\delta_{0}<\delta<0.5$), $\Delta=0.105$ and (b) $\delta=0.535$ (i.e. $\delta>0.5$), $\Delta=-0.035$. Eq. (83) gives the energy radiation from binaries following the power law potential under the assumptions that (i) the scale length ($\epsilon$) is very small compared to the distance between the two bodies, i.e. $r_{PQ}$, (ii) body-Q is much heavier than body-P, so the motion of body-Q is negligible and it can be treated as a stationary body, and (iii) body-P moves in a circular orbit around body-Q. ## V Comparison of Power Law potential with the Kepler-Newton Potential Consider an example of an EMRI system consisting of a supermassive black hole and a star as massive as the sun. Let the properties of the supermassive compact object be: mass $M_{Q}=4.1\times 10^{6}M_{\odot}=8.2\times 10^{36}\,kg$ and scale radius $\epsilon=5.8\times 10^{-4}\,pc$; and the properties of the star or the stellar mass object be: mass $M_{P}=1M_{\odot}=2\times 10^{30}\,kg$.
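With the parameters just quoted, the Newtonian circular velocity of Eq. (46), $v_{N}=\sqrt{GM_{Q}\epsilon^{2\delta-1}/p^{2\delta}}$, can be evaluated directly. The short script below (illustrative; the unit conversions are ours) also checks that, since $v_{N}\propto p^{-\delta}$, the velocity falls off more slowly with distance for $\delta<1/2$ than in the Kepler-Newton case:

```python
import math

# EMRI parameters from the text, converted to SI units.
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M_Q = 8.2e36             # kg (4.1e6 solar masses)
eps = 5.8e-4 * 3.086e16  # scale radius in metres (1 pc ~ 3.086e16 m)

def v_newtonian(p, delta):
    # Magnitude of Eq. (46): v_N = sqrt(G M_Q eps^(2d-1) / p^(2d)).
    return math.sqrt(G * M_Q * eps**(2 * delta - 1) / p**(2 * delta))

p = 1e16  # m, the region of interest quoted in the text
for delta in (0.27, 0.5, 0.535):
    assert v_newtonian(p, delta) < c  # sanity: orbits stay non-relativistic

# v_N scales as p^(-delta), so the falloff is slower for delta < 1/2:
assert v_newtonian(2 * p, 0.27) / v_newtonian(p, 0.27) > \
       v_newtonian(2 * p, 0.5) / v_newtonian(p, 0.5)
```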
In this section, we compare the velocity curves, the average energy radiation rate and the orbital frequency of the binary in the PL potential and in the KN potential. The PN corrections are much smaller than the Newtonian values of the evaluated quantities; hence, on the magnitude scales considered below, the graphs of the Newtonian and the PN corrected quantities effectively overlap. ### V.1 Velocity profile This subsection compares the velocity curves for the power law potential (Eq. (62)) and the Kepler-Newton potential. For $0<\delta<0.5$, the velocity falls more slowly in the power law (PL) potential case than in the KN potential case, which is what is usually observed in astrophysical scenarios (Fig. 1(a)). In fact, for $\delta=0.01$ (i.e. $\delta\approx 0$), the velocity curve is nearly flat (Fig. 1(b)). For $\delta>0.5$, the velocity in the PL potential drops faster with increasing radial distance (Fig. 1(c)). Therefore, $\delta>0.5$ is astrophysically improbable. ### V.2 Average energy radiation rate Next, we compare the average energy radiation rate from an EMRI in the PL potential with that in the KN potential. The average energy radiation rate for a system in the KN potential can be determined by substituting $\delta=1/2$ in our final result (Eq. (83)): $\bigg{\langle}\frac{dE}{dt}\bigg{\rangle}_{KNP}=-\frac{32G^{4}M_{Q}^{3}M_{P}^{2}}{5c^{5}p^{5}}\bigg{(}1-\dfrac{1}{c^{2}}\dfrac{24GM_{Q}}{p}\bigg{)}.$ (85) The comparison of the two energy radiation rates, Eq. (85) and Eq. (83) with $\delta\neq 0.5$, is shown in Fig. 2 for different values of $\delta$. In the EMRI considered above, we focus on the region where the radial distance is $p\approx 10^{16}$ meter; the corresponding cut-off is $\delta_{0}\approx 0.38$. For $0<\delta<0.38$, the 1-PN term becomes comparable to, and even much larger than, the 0-PN (or Newtonian) part. Therefore, the 1-PN part blows up and it looks as if the EMRI is absorbing energy instead of radiating (Fig. 2(a)).
Hence, as far as the energy radiation rate is concerned, we have to limit our study to $\delta>\delta_{0}:=0.38$. For $0.38<\delta<0.5$, the energy radiation rate is greater than that in the KN potential (Fig. 2(b)). For $\delta>0.5$, the energy radiation rate is smaller than that in the KN potential (Fig. 2(c)). However, as mentioned before, $\delta>0.5$ is astrophysically improbable. ### V.3 Orbital frequency The presence of a matter distribution around a binary affects the dynamics of the binary. The effect of dynamical friction as studied in [39, 40, 41] is ignored here. Let us consider a power law potential whose power $\delta$ has a small deviation ($\Delta$) from the Kepler-Newton value ($\delta_{KN}=1/2$), i.e. $\delta=1/2-\Delta$, which could arise from the presence of a matter distribution around the EMRI. Accordingly, from Eq. (67), the orbital frequency $\dot{\phi}$ can be written as $\dot{\phi}=\dfrac{1}{p}\sqrt{\dfrac{GM_{Q}\epsilon^{-2\Delta}}{p^{(1-2\Delta)}}}\bigg{[}1-\dfrac{1}{c^{2}}\dfrac{2GM_{Q}\epsilon^{-2\Delta}}{(1/2-\Delta)p^{(1-2\Delta)}}(3/2+\Delta)\bigg{]}.$ (86) When $\Delta>0$, which is also the case where we expect a non-vacuum environment, the orbital frequency is increased (Fig. 3(a)). In the case where $\Delta<0$, we get an orbital frequency lower than that of an EMRI in the KN potential, but this case is astrophysically improbable. From the observational data of the orbital frequency of an EMRI, the value of $\Delta$ can be obtained; this gives the precise power law potential, which in turn can be used to obtain the matter density profile around the central supermassive compact object. ## VI Conclusion In this paper, we consider an extreme mass ratio inspiral (EMRI) in a system where the supermassive object lies in a non-vacuum region such that we can use a power law potential. The main motivation for considering a power law potential is to obtain the nearly flat profile of orbital velocity which is observed in astrophysical scenarios (e.g. galactic rotation curves).
We derive the 1-PN corrected dynamical variables of the orbiting stellar mass object, considering circular orbits for ease of calculation. Using the Newtonian gravitational potential ($U$) and the PN potentials ($U^{j},$ $\Psi$ and $X$), we derive the acceleration of a body in the PL potential. Using this, we derive the other PN corrected dynamical variables. Subsequently, we obtain the mass quadrupole tensor, which is ultimately used to obtain the average energy radiation rate. Next, we discuss an example of an EMRI. We calculate the average energy radiation rate from this EMRI in the power law potential at different values of $\delta$ and compare them with that from the same EMRI at $\delta=0.5$, i.e. the KN potential case. When $\delta$ is less than a particular cut-off $\delta_{0}$, we cannot use PN approximations because the condition that the 1-PN part be much smaller than the 0-PN part is violated. The important results we obtain here are: * • The average energy radiation rate that we have derived is applicable to every EMRI which is surrounded by a matter distribution that gives rise to a power law potential in the region around the supermassive compact object. We show that the average energy radiation rate is higher in the general PL potential case than in the KN potential case. * • We use the comparison of the orbital frequency of an EMRI in the PL and KN potentials to show how a matter distribution (e.g. a dark matter distribution) changes the signals. This can be used to further investigate quantities like the frequencies of the gravitational waves radiated from such an EMRI. The deviation of the observed orbital frequencies from the frequency of a similar binary in the KN potential can be used to evaluate the value of $\delta$; that in turn can be used to find the effective potential and, subsequently, to calculate the mass density profile around the binary. ## VII Acknowledgement C.N.G.
would like to acknowledge the support of the International Center for Cosmology, CHARUSAT, India for funding the work. ## References * [1] Sofue. Y and Rubin. V, Annual Review of Astronomy and Astrophysics 39, no. 1, 137 (2001). * [2] Milgrom, M., Astrophysical Journal, Vol. 270, p. 365-370 (1983). * [3] Evstigneeva, E. A., Yu.A. Nikolaev publisher,p. 180-196, (2004). * [4] Rodrigues, D. C., Letelier, P. S., & Shapiro, I. L., Journal of Cosmology and Astroparticle Physics 2010, no. 04, 020 (2010). * [5] Boris E. Meierovich, Phys. Rev. D 87, 103510 (2013). * [6] Zwicky, F., General Relativity and Gravitation, 41, no. 1, pp.207-224 (2009). * [7] Rubin, V. C., Ford, W. K., & Thonnard, N., Astrophysical Journal, 238, p. 471-487 (1980). * [8] Rubin, V. C. & Ford, W. K., Astrophysical Journal, 159, p.379 (1970). * [9] Díaz E., Zandivarez A., Merchán M. E., et al. Astrophys. J. , 629, 158, (2005). * [10] Donato, F., Gentile, G., Salucci, P. Mon. Not. Roy. Astron. Soc., 353, L17, (2004). * [11] Navarro, J. F., Frenk, C. S., & White, S. D. M., Astrophysical Journal 462, p. 563 (1996). * [12] Banik, U., Dey, D., Bhattacharya, K., et al. , General Relativity and Gravitation, 49, 116, (2017). * [13] Dey, D., Bhattacharya, K., & Sarkar, T., General Relativity and Gravitation, 47, 103 (2015). * [14] Dey, D., Bhattacharya, K., & Sarkar, T., Physical Review D, 87, no. 10, 103505 (2013). * [15] Dey, D., Bhattacharya, K., & Sarkar, T., Physical Review D, 88, no. 8, 083532 (2013). * [16] The, L. S. & White, S. D. M., Astronomical Journal 95, p. 1642 (1988). * [17] Bekenstein, J. D., Physical Review D, 70, no. 8, 083509 (2004). * [18] J W Moffat, Journal of Cosmology and Astroparticle Physics, 2005, no. 5, (2005). * [19] Moffat, J. W., Journal of Cosmology and Astroparticle Physics, 2006, no. 03, 004 (2006). * [20] Brownstein, J. R. and Moffat, J. W., The Astrophysical Journal, 636, no. 2, (2006) * [21] Munera, H. A. & Delgado-Correal, C., Research Notes of the AAS, 4, no. 
9, 151 (2020). * [22] Goldstein, H., Addison-Wesley (1950). * [23] Danby, J. M. A., Willmann-Bell, 2nd ed., rev. & enl. (1988). * [24] Valluri, S. R. and Wiegert, P. A. and Drozd, J. and Da Silva, M., Mon. Not. Roy. Astron. Soc. 427, no. 3, 2392 (2012). * [25] Lynden-Bell, D. and Jin, S., Mon. Not. Roy. Astron. Soc. 386, no. 1, 245 (2008). * [26] Valluri, S. R. and Yu, P. and Smith, G. E. and Wiegert, P. A., Mon. Not. Roy. Astron. Soc. 358, no. 4, 1273 (2005). * [27] Barack, L. and Cutler, C., Physical Review D, 69, no. 8, 1550-2368 (2004). * [28] Amaro-Seoane, P., Aoudia, S., Babak, S., et al., Classical and Quantum Gravity, 29, 124016 (2012). * [29] Glampedakis, K., Classical and Quantum Gravity, 22, no. 15, S605–S659 (2005). * [30] Gair, J. R., Vallisneri, M., Larson, S. L. et al., Living Reviews in Relativity, 16, no. 1, 1433-8351 (2013). * [31] Poisson, E. and Will, C. M., Cambridge University Press (2014). * [32] Struck, Curtis, Mon. Not. Roy. Astron. Soc. 446, no. 3, 3139 (2015). * [33] Struck, C., The Astron. Journ. 131, no. 3, 1347 (2006). * [34] J. Binney and S. Tremaine, Princeton, N.J.: Princeton University Press (1987). * [35] P. Bambhaniya, A. B. Joshi, D. Dey and P. S. Joshi, Phys. Rev. D 100, no.12, 124020 (2019). * [36] Dionysiou, D. D., Astrophysics and Space Science 125, no. 1, pp.115-125 (1986). * [37] Wagoner, R. V. and Will, C. M., Astrophysical Journal, vol. 210, Dec. 15, pt. 1, p. 764-775 (1976). * [38] Dehnen, H. and Ghaboussi, F., Nuovo Cimento B, Serie 11 (ISSN 0369-3554), vol. 89 B, Oct. 11, p. 131-138 (1985). * [39] Bekenstein, J. D. & Zamir, R., Astrophys. J., 359, 427 (1990). * [40] Gómez, L. G. & Rueda, J. A., Phys. Rev. D, 96, 063001 (2017). * [41] A. Caputo, J. Zavala and D. Blas, Phys. Dark Univ. 19, 1-11 (2018) [arXiv:1709.03991 [astro-ph.HE]].
Rupal R. Agravat, School of Engineering and Applied Science, Ahmedabad University, Ahmedabad, India. Mehul S. Raval, School of Engineering and Applied Science, Ahmedabad University, Ahmedabad, India. # A Survey and Analysis on Automated Glioma Brain Tumor Segmentation and Overall Patient Survival Prediction ††thanks: The article has been published in the Springer journal “Archives of Computational Methods in Engineering”, which is accessible using the link: https://link.springer.com/article/10.1007/s11831-021-09559-w. One can access the article with complementary access rights using the link: https://rdcu.be/cf2Zz. Rupal R. Agravat Mehul S. Raval (Received: 3 May, 2020 / Accepted: 28 January, 2021) ###### Abstract Glioma is the deadliest brain tumor with high mortality. Treatment planning by human experts depends on the proper diagnosis of physical symptoms along with Magnetic Resonance (MR) image analysis. The high variability of a brain tumor in terms of size, shape and location, together with the high volume of MR images, makes the analysis time-consuming. Automatic segmentation methods reduce analysis time and give excellent reproducible results. This article surveys the advancement of automated methods for Glioma brain tumor segmentation. It is also essential to make an objective evaluation of various models based on a benchmark; therefore, the 2012–2019 BraTS challenges are used to evaluate the state-of-the-art methods. The complexity of the tasks in this challenge has grown from segmentation (Task 1) to overall survival prediction (Task 2) to uncertainty prediction for classification (Task 3). The paper covers the complete gamut of brain tumor segmentation, from handcrafted features to deep neural network models, for Task 1. The aim is to showcase the complete change of trends in automated brain tumor models.
The paper also covers end-to-end joint models involving brain tumor segmentation and overall survival prediction. All the methods are probed, and parameters that affect performance are tabulated and analyzed. ###### Keywords: Brain Tumor Segmentation Deep Learning Magnetic Resonance Imaging Medical Image Analysis Overall Survival Prediction ††journal: Archives of Computational Methods in Engineering ## 1 Introduction Since the days medical images began to be captured by imaging devices and digitally preserved, researchers have built computerized automated and semi-automated analysis techniques to address a variety of problems like detection, segmentation, classification, and registration. Until the 1990s, medical image analysis was done with pixel processing techniques: detecting edges/lines with filters, finding regions based on the similarity of pixels, or fitting mathematical models to detect lines/elliptical shapes. Later, shape models, atlas models and probabilistic models became successful for medical image analysis, where the model learns to predict the unseen data based on observations (training data). This trend moved towards machine learning models, where features were extracted from the data and fed into the computer to make it learn the underlying class/pattern from the input features and make predictions about unseen data in the future. This approach changed the trend from human-dependent systems to machine-dependent systems, where the machine learns from example data. Such algorithms work very well in high dimensional feature spaces to find the optimal decision boundary. Here the only thing not done by the computer is feature extraction, which leads to the era of deep learning, where the computer learns the optimal set of features for the problem at hand.
Deep learning models transform input data (image/audio/video/text) to output data (location/presence/spread) and incrementally learn high dimensional features with the help of a set of intermediate layers between the input and output layers. Medical image analysis has reached the extent where such algorithms play a significant role in the early detection of a disease based on the initial symptoms, leading to better treatments. Deadly diseases like cancer (Glioma, the cancerous brain tumor), if detected in the early stages, can increase the life expectancy of the patient. ### 1.1 Brain Tumors and Their Types When the natural cycle of the tissues in the brain breaks and growth becomes uncontrollable, it results in a brain tumor Hopkins (2019, accessed April 6, 2020). Brain tumors are of two types: (1) primary, and (2) secondary. A primary tumor starts in the brain tissues and grows within the brain, whereas a secondary tumor spreads to the brain from other cancerous organs Center (2019, accessed April 6, 2020). More than 100 types of brain tumors are named based on the tissue and the brain part where they start to grow. Of all these tumors, Glioma is the most life-threatening brain tumor. It occurs in the glial cells of the brain. The severity grades of Glioma tumors depend on of Radiology (1999, accessed April 10, 2020): * • the tumorous cell growth rate. * • blood supply to the tumorous cells. * • presence of necrosis (dead cells at the center of the tumor). * • location of the tumor within the brain. * • confined area of the tumor. * • its structural similarity with the healthy cells. Grades I and II are known as Low-Grade Glioma (LGG), which are benign tumors. Grades III and IV are known as High-Grade Glioma (HGG), which are malignant tumors.
When symptoms like nausea, vomiting, fatigue, loss of sensation or movement, or difficulty in balancing persist for a longer duration, it is advisable to go through image screening to know the internal structural changes in the brain due to the tumor. ### 1.2 Brain Imaging Modalities Various imaging techniques are used for brain tumor screening, including positron emission tomography (PET), Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). In PET, a radioactive tracer injected into the body captures a high level of chemical activity in the disease-affected part of the body. In CT, an X-ray tube rotates around the patient’s body and emits a narrow X-ray beam, which is passed through the patient’s body to generate cross-sectional images of the brain. An MRI uses a strong magnetic field around the patient’s body, which aligns the protons in the body to the generated magnetic field; this is followed by passing radiofrequency signals into the body. When the current is off, the protons emit the energy and try to realign with the magnetic field. The emitted energy in an image records the response of various tissues of the brain. There are two types of MRI: 1) fMRI (Functional MRI), which measures brain activity from changes in blood flow, and 2) sMRI (Structural MRI), which captures the anatomy and pathology of the brain. This article uses sMRI as the focus to deal with pathology in the brain. Various modalities of sMRI capture responses of the tissues which lead to distinct biological information in the images. The modalities of sMRI are: * • Diffusion Weighted Image (DWI) MRI: an MR imaging technique measuring the diffusion of water molecules within tissue voxels. DWI is often used to visualize hyperintensities. * • FLAIR MRI: an MRI pulse sequence which suppresses the fluid (mainly cerebrospinal fluid (CSF)) and enhances edema.
* • T1w MRI: a basic MRI pulse sequence that captures differences in the longitudinal relaxation time (the time constant required for excited protons to return to equilibrium) of tissues.
* • T1Gd MRI: a contrast-enhancing agent, Gadolinium, is injected into the body, after which a T1 sequence is acquired. The contrast-enhancing agent shortens the T1 time, which results in a bright appearance of blood vessels and pathologies such as tumors.
* • T2w MRI: a basic MRI pulse sequence that captures differences in the transverse relaxation time (T2) of tissues.

In general, for brain tumors, CT and MRI are the commonly used techniques. Both imaging techniques are essential, and the differences between CT and MRI are listed in Table 1. Fig. 1 shows the imaging difference between a CT and an MRI, along with their various modalities. Healthy brain tissues fall into three classes: 1) Gray Matter (GM), 2) White Matter (WM), and 3) Cerebrospinal Fluid (CSF). Such soft tissue detail is captured more clearly in an MRI than in a CT.

Table 1: Differences between CT and MRI Media (2004, accessed April 10, 2020).

Advantages and Disadvantages

| CT | MRI |
|---|---|
| Non-invasive. | Non-invasive. |
| Fast processing. | Slow processing. |
| Cheap. | Expensive. |
| Less accurate. | More accurate. |
| Coarse tissue details. | Fine tissue details. |
| Single modality. | Various modalities to record the reaction of different tissues differently (T1, T2, T1c, T2c, FLAIR, DWI). |
| Generates 3D images. | Generates 3D images. |
| Images can be in axial, coronal and sagittal views. | Images can be in axial, coronal and sagittal views. |

Risks

| CT | MRI |
|---|---|
| Harms unborn babies. | Reacts with metals in the body due to the magnetic field. |
| A small dosage of radiation. | The loud noise of the machine causes hearing issues. |
| Reacts to the use of dyes. | Increases body temperature when exposed for a longer duration. |
| | Imaging difficulties in case of claustrophobia. |
Figure 1: Imaging modalities Healthcareplex (2016, accessed April 10, 2020): a) CT, b) T1 MRI, c) T2 MRI, and d) DWI MRI.

There are other MRI modalities that capture the responses of various brain tissues differently. The tumor adds one more tissue class to the brain. Fig. 2 shows the difference in the appearance of a tumor in a CT and an MRI.

Figure 2: Appearance of a tumor in a) CT and b) MRI Janssen and Hoff (2012).

Table 2 shows the intensity variation of a tumor in different MRI modalities. The tumor class overlaps with the normal tissue intensities; e.g., in T1 MR images the GM, the CSF, and the tumor appear dark, whereas in T2 the GM, the CSF, and the tumor appear bright. It is therefore desirable to use a combination of MRI modalities for analysis Agravat and Raval (2016). The rest of the paper focuses on methods based on MR images.

Table 2: The appearance of normal brain tissues and tumor in various MRI modalities.

| | GM | WM | CSF | Tumor |
|---|---|---|---|---|
| T1 | Dark | White | Dark | Dark |
| T2 | Light | Dark | White | Bright |
| T1c | Dark | White | Dark | Bright |
| FLAIR | Light | Dark | Dark | Bright |

The availability of a benchmark dataset has boosted research in the area of computer-assisted analysis for brain tumor segmentation. Methods for the segmentation task can be semi-automated or automated. The semi-automated methods require user input to initiate the process. Cellular automata and other random field methods require a seed point, a diameter, or a rough boundary selection for further computation. Atlas-based methods try to fit the pathological image to a healthy image to locate the abnormal brain area. Pathological atlas creation is another approach to determine the abnormality in the brain. Expectation maximization methods iteratively refine the categories of the brain voxels starting from Gaussian Mixture Models or an atlas.
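As a minimal illustration of the expectation-maximization idea above, the following sketch fits a two-component 1-D Gaussian mixture to synthetic voxel intensities. The data, component count, and iteration budget are illustrative assumptions, not taken from any BraTS pipeline:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Minimal EM for a two-component 1-D Gaussian mixture, illustrating
    how voxel intensities can be split into two tissue classes."""
    # Initialize the component means at the data extremes, with equal
    # weights and the overall variance for both components.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel.
        dens = (pi / np.sqrt(2 * np.pi * var)) * \
               np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi, resp.argmax(axis=1)

rng = np.random.default_rng(0)
# Synthetic "healthy" (dark) and "tumorous" (bright) voxel intensities.
x = np.concatenate([rng.normal(0.3, 0.05, 500), rng.normal(0.8, 0.05, 100)])
mu, var, pi, labels = em_gmm_1d(x)
```

In a real pipeline the mixture is fit over multi-modal intensities and combined with atlas priors, but the alternating E- and M-steps are the same.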
The automated methods include machine learning methods, which use image features for voxel classification. Later, deep learning methods, specifically Convolutional Neural Network (CNN) based methods, showed success in the field of semantic segmentation, and such methods have been widely adopted for brain tumor segmentation. CNN methods with various architectures, as well as ensemble approaches, have proven to be the best methods for the segmentation task. In addition to the segmentation task, the survival prediction task predicts the survival days of patients. The contributions of the paper are as follows:

1. It is the most exhaustive review covering brain tumor imaging modalities, challenges in medical image analysis, evaluation metrics, BraTS dataset evolution, pre-processing and post-processing methods, segmentation methods, proposed architectures, hardware and software for implementation, overall survival prediction, and limitations.
2. It exhaustively traces the development of brain tumor segmentation by covering models based on handcrafted features through to deep neural networks. This helps to understand state-of-the-art developments more comprehensively.
3. A fair comparison among models is made by covering the BraTS benchmark dataset. The methods are classified, and their parameters are tabulated and analyzed for performance.
4. The paper also covers a survey of end-to-end methods for brain tumor segmentation and overall survival prediction. This helps to understand the impact of segmentation on overall survival prediction.

The flow of the paper is as follows: Section 2 covers the challenges for computer-aided medical image analysis. Section 3 covers the problem statement, dataset, and evaluation framework.
Section 4 covers segmentation methods using hand-crafted features and their limitations, Section 5 covers segmentation and Overall Survival (OS) prediction using CNN methods, and Section 6 covers the limitations of tumor segmentation and OS prediction methods, followed by a conclusion and discussion in Section 7.

## 2 Challenges in Medical Image Analysis

Volumetric brain MRI images are analyzed and interpreted by human experts (neurologists, radiologists) to segment various brain tissues as well as to locate the tumor. This analysis is time-consuming. Moreover, this type of segmentation is not reproducible. The accuracy of brain tumor segmentation, which is needed to plan proper treatment such as medication or surgery, depends mainly on the precision of the human expert. Computer-aided analysis helps a human expert locate the tumor in less time and produces reproducible results. The intended analysis by computerized methods requires appropriate input and a correctly working method. The input to a method may face the following challenges:

1. Low signal-to-noise ratio (SNR) and artifacts in raw MRI data, mainly due to electronic interference in the receiver circuits, radiofrequency emissions from thermal motion of the ions in the patient's body, and the coils and electronic circuits in MRI scanners. This random fluctuation reduces image contrast through signal-dependent data bias Goyal et al. (2018).
2. Non-uniformity, an irrelevant additional intensity variation throughout the MRI signal. Possible causes of non-uniformity are the radio-frequency coils, the acquisition pulse sequence, and the geometry and nature of the sample.
3. Unwanted information such as the skull, fat, and skin acquired by MR machines along with the brain images.
4. The intensity profile of MR images may vary due to the variety of MRI machine configurations.
5. Publicly available brain tumor images for computer-aided analysis are very few.
The collection of MR images from various hospitals has privacy and confidentiality related issues.

6. The class imbalance problem is another major issue in medical image analysis. Images of an abnormal class can be challenging to find because abnormal classes are rare compared to normal classes.

## 3 Glioma Brain Tumor Segmentation

Focus on methods for solving medical issues has increased since the late 1990s, as is apparent from the gradual increase in semi-automated and automated methods for tumor segmentation, shown in Fig. 3. Accordingly, the main focus of this article is Glioma brain tumor segmentation. An additional task concerns techniques for predicting the survival of patients suffering from Glioma.

Figure 3: Papers on PubMed for Biotechnology Information (2020, accessed December 30, 2020) with keywords ‘Brain Tumor Segmentation’.

### 3.1 Brain Tumor Segmentation (BraTS) Challenge Dataset

The literature proposes various methods for brain tumor segmentation, each claiming its own superiority and usefulness in some way. Initially, all such methods worked on images taken from particular hospitals or radiology laboratories, which were private, and disclosure of those images to other researchers was not allowed. This prevented comparison of different methods. A publicly available dataset and evaluation framework allow methods to be compared and evaluated on the same measures. The BraTS challenge dataset Bakas et al.
(2017a, b, c) has the following characteristics:

* • It contains multi-parametric MRI pre- and post-operative scans in T1, T1Gd, T2, and T2-FLAIR volumes (the post-operative scans have been omitted since 2014);
* • The dataset contains images with different clinical protocols (2D or 3D) and various scanners (1.5T or 3T) from multiple institutions;
* • The dataset includes images pre-processed for harmonization and standardization without affecting the apparent image information;
* • It has co-registration to the same anatomical template, interpolation to a uniform isotropic resolution ($1mm^{3}$), and skull-stripping.

Initially, the clinical images in the dataset were very few, and it was challenging to compare methods based on the results of such a small number of images. Comparison became possible with the increase in the number of sample images and the accurate generation of the ground truth images. Ground truth is generated based on the evaluation of more than one expert to avoid inter-observer variability. The growth of the dataset from its inception is shown in Table 3.

Table 3: Growth of the BraTS dataset Bakas et al. (2017a, b, c).
| Year | Total Images | Training Images | Validation Images | Test Images | Tasks | Type of Data |
|---|---|---|---|---|---|---|
| 2012 | Clinical: 45, Synthetic: 65 | Clinical data: 30 (20 HGG + 10 LGG), Synthetic data: 50 (25 HGG + 25 LGG) | N/A | Clinical data: 15, Synthetic data: 15 | Segmentation | Pre- and post-operative scans |
| 2013 | 65 | Clinical data from BraTS 2012 | N/A | Leaderboard: Clinical: 25 (15 from BraTS 2012 + 10 new), Challenge: 10 | Segmentation | Pre- and post-operative scans |
| 2014 | 238 | 200 | N/A | 38 | Segmentation, Disease Progression | Pre-operative, Longitudinal |
| 2015 | 253 | 200 | N/A | 53 | Segmentation, Disease Progression | Pre-operative, Longitudinal |
| 2016 | 391 | 200 | N/A | 191 | Segmentation, Disease Progression | Pre-operative, Longitudinal |
| 2017 | 477 | 285 | 46 | 146 | Segmentation, Survival Prediction | Pre-operative, Longitudinal |
| 2018 | 542 | 285 | 66 | 191 | Segmentation, Survival Prediction | Pre-operative, Longitudinal |
| 2019 | 626 | 335 | 125 | 166 | Segmentation, Survival Prediction, Uncertainty Prediction | Pre-operative, Longitudinal |

Four different types of intra-tumoral structures are used for ground truth generation: edema, enhancing core, non-enhancing (solid) core, and necrotic (or fluid-filled) core, as shown in Fig. 4. An annotation protocol was used by expert raters to annotate each case manually. The segmentation results from all raters were then fused to obtain a single unanimous segmentation for each subject as the ground truth. The validation of the segmentation methods is based on: 1) Whole Tumor (WT): all intra-tumoral substructures, 2) Tumor Core (TC): enhancing tumor, necrosis, and non-enhancing tumor substructures, and 3) Enhancing Tumor (ET): only the enhancing substructure.

Figure 4: Intra-tumoral structures appearance on three imaging modalities with manual annotations.
(a) Top: whole tumor (yellow), Bottom: FLAIR; (b) Top: tumor core (red), Bottom: T2; (c) Top: enhancing tumor structures (light blue) surrounding the cystic/necrotic components of the core (green), Bottom: T1c; (d) Fusion of the three labels Menze et al. (2014).

#### 3.1.1 Overall Survival Prediction

With the availability of a sufficiently large dataset, the additional task of OS prediction has been included in the BraTS challenge since 2017. This task focuses on the OS prediction of HGG patients. In addition to the images, the dataset includes age, survival days, and resection status, i.e., Gross Total Resection (GTR) or Sub-Total Resection (STR), for HGG patients. The task is to classify the patients into long-survivors ($>$15 months), mid-survivors (between 10 and 15 months), and short-survivors ($<$10 months) Bakas et al. (2018). A detailed description of the OS prediction task is given in Table 4. In 2019, an additional task was added: the quantification of uncertainty in the predictions in the context of glioma tumor segmentation.

Table 4: The distribution of BraTS dataset Bakas et al. (2017a, b, c) features in survival classes.
| Year | # records | Features | Survival class | Count | Age ($\mu\pm\sigma$) | OS days ($\mu\pm\sigma$) |
|---|---|---|---|---|---|---|
| 2017 and 2018 | 163 | Age | Short survivors (<10 months) | 65 | 65.44 $\pm$ 10.68 | 147.44 $\pm$ 83.08 |
| | | | Mid survivors (10 to 15 months) | 50 | 58.70 $\pm$ 11.26 | 394 $\pm$ 49.32 |
| | | | Long survivors (>15 months) | 48 | 55.11 $\pm$ 12.19 | 826.23 $\pm$ 370.91 |
| 2019 | 212 | Age, Resection status | Short survivors (<10 months) | 82 | 66.66 $\pm$ 11.42 | 150.21 $\pm$ 84.72 |
| | | | Mid survivors (10 to 15 months) | 54 | 59.14 $\pm$ 10.98 | 377.43 $\pm$ 40.44 |
| | | | Long survivors (>15 months) | 76 | 57.16 $\pm$ 11.84 | 796.38 $\pm$ 354.32 |

### 3.2 Evaluation Metrics for Brain Tumour Segmentation and OS Prediction

The standard evaluation framework for tumor segmentation and OS prediction includes the following metrics.

1. Dice Similarity Coefficient (DSC) (or F1 measure): the overlap of two objects divided by the total size of both objects. True Positive (TP) is the outcome where the model correctly predicts the positive class. False Positive (FP) is the outcome where the model incorrectly predicts the negative class as positive, and False Negative (FN) is the outcome where the model incorrectly predicts the positive class as negative. The DSC is shown in Equation 1.

$DSC=\frac{2TP}{2TP+FP+FN}$ (1)

2. Jaccard Similarity Coefficient: the intersection over the union of the two sets, shown in Equation 2.

$Jaccard=\frac{TP}{TP+FP+FN}$ (2)

3. Sensitivity: the fraction of tumorous voxels that are correctly identified, shown in Equation 3.

$Sensitivity=\frac{TP}{TP+FN}$ (3)

4. Hausdorff distance: it measures how far two subsets of a metric space are from each other.
If $X$ and $Y$ are two non-empty subsets of a metric space $(M,d)$, their Hausdorff distance $d_{H}(X,Y)$ is defined by:

$d_{H}(X,Y)=\max\left\{\sup_{x\in X}\,\inf_{y\in Y}\,d(x,y),\;\sup_{y\in Y}\,\inf_{x\in X}\,d(x,y)\right\}$ (4)

where $\sup$ represents the supremum and $\inf$ the infimum.

5. The OS prediction is evaluated with accuracy, the fraction of patients assigned to the correct survival class.

As DSC is the most commonly used evaluation metric, this article compares all methods based on DSC unless specified explicitly.

### 3.3 Image Pre-processing

Medical image pre-processing plays a significant role in providing appropriate input to computer-assisted analysis techniques. Pre-processed images help to produce accurate outputs, as such images exhibit the proper voxel relationships. As mentioned in Section 3.1, the dataset has images acquired with different clinical protocols and scanners; this variability has to be standardized, and it is necessary to put all the scans on a single scale. In addition to image registration, uniform isotropic resolution, and skull-stripping, the following types of pre-processing further improve the image input:

* • Bias Field Correction (BFC): the bias field is a multiplicative field introduced into the image by the magnetic field and radio signals in the MR machine. The authors in Tustison et al. (2010) suggested a bias field correction technique.
* • Intensity Normalization (IN): images of different modalities have separate intensity scales and must be mapped to the same range. All scans are standardized to zero mean and unit variance.
* • Histogram Matching (HM): due to the different configurations of MR machines, the intensity profiles of the acquired images may vary. Intensity profiles are brought to the same scale using the histogram matching process.
* • Noise Removal (NR): noise in an MR image at the time of acquisition is due to the radio signal or the magnetic field.
Various noise filtering techniques are useful for the removal of this noise.

### 3.4 Image Post-processing

The segmentation output generated by computer-assisted methods may contain false segmentations due to improper or incorrect feature selection. Segmentation can be improved by applying post-processing techniques such as:

1. Connected Component Analysis (CCA): CCA groups voxels based on connectivity, depending on similar voxel intensity values. Very small connected components are excluded from the result, as such components are considered false positives arising from spurious segmentation.
2. Conditional Random Field (CRF): a classifier predicts the voxel class based on features of that voxel alone, independently of the relationship of that voxel with nearby voxels. A CRF takes this relationship into consideration and builds a graphical model to implement dependencies between predictions.
3. Morphological Operations: such operations adjust a voxel's value based on the values of the other voxels in its neighbourhood, according to the size and shape of a structuring element.

## 4 Segmentation Methods using Handcrafted Features

The methods in this section are divided into two categories: interactive and non-interactive. Interactive methods require user input in the form of tumor diameter, boundary, or seed point selection. Random field-based methods belong to this category. Non-interactive methods do not require user input. The first group of this category detects abnormality. In atlas-based methods, the abnormal image is non-linearly mapped to a normal/input-specific atlas to identify the abnormality, which in general is the area of the image where atlas mapping fails. The other approach is Expectation Maximisation (EM), where the intensity distribution of normal and abnormal voxels is learnt with Gaussian Mixture Models (GMM) or probabilistic atlases. The second group of this category is machine learning approaches.
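As a concrete reference for the comparisons throughout this section, the evaluation metrics of Section 3.2 reduce to a few lines over binary masks. This is a minimal numpy/scipy sketch; the example masks and point sets are illustrative, not taken from any dataset:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def overlap_metrics(pred, truth):
    """DSC, Jaccard and Sensitivity from boolean masks (Eqs. 1-3)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    return dsc, jaccard, sensitivity

def hausdorff(points_x, points_y):
    """Symmetric Hausdorff distance (Eq. 4): the maximum of the
    two directed Hausdorff distances between the point sets."""
    d_xy = directed_hausdorff(points_x, points_y)[0]
    d_yx = directed_hausdorff(points_y, points_x)[0]
    return max(d_xy, d_yx)

# Tiny illustrative example: a 1-D "segmentation" of 8 voxels.
pred  = np.array([0, 1, 1, 1, 0, 0, 1, 0])
truth = np.array([0, 1, 1, 0, 0, 0, 1, 1])
dsc, jac, sens = overlap_metrics(pred, truth)  # TP=3, FP=1, FN=1
```

In practice these metrics are computed per sub-region (WT, TC, ET) over full 3D volumes, but the formulas are unchanged.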
The clustering approach groups the voxels into a number of clusters such that one of these clusters contains the tumorous voxels. In random forest (RF) and neural network (NN) approaches, high-dimensional features of the images are given for training, and the trained model then classifies unseen voxels. A detailed description of all these approaches is given in the following sub-sections.

### 4.1 Random Field Based Methods

The authors in Hamamci et al. (2011) took as user input the largest possible tumor diameter from HGG images to find the Volume of Interest (VoI) for the tumor and background in T1C MRI images, followed by Cellular Automata (CA) to obtain probability maps for both regions. A level set surface was then applied to those probability maps to obtain the final probability maps. They further extended their approach in Hamamci and Unal (2012) to consider multimodal images (T1C and FLAIR) and to segment tumors and edemas as well as LGG images. A semi-automatic method in Guo et al. (2013) took a rough tumor region boundary as user input and fine-tuned it with a global and local active contour-based model. The tumor region was broken into sub-regions with adaptive thresholding, based on a statistical analysis of the intensities of the various tumor regions, to separate the edema from the active tumor core. The process was repeated for all slices of the MRI of a patient. In Corso et al. (2008), the main contribution was to incorporate soft model assignments into the calculation of model-free affinities, which were then integrated with model-aware affinities for multilevel segmentation by a weighted aggregation algorithm. In Xiao and Hu (2012), an interactive and iterative Random Walk (RW) based method was applied to fine-tune the tumor boundary. RW was applied as an edge-weighted graph in the discrete feature space, based on the variation of the distribution density of the voxels in the feature space.
The user made an initial seed selection for the tumors as well as the edema. Next, RW was applied to the feature space as well as to the image. If the user did not approve the results, the segmentation process was reinitiated. In Doyle et al. (2013), a Hidden Markov Random Field (HMRF) based model was used with a modified Pott's model to penalize neighboring pixels belonging to different classes. In Taylor et al. (2013), intensity images of various modalities, along with their neighbourhood voxel intensities, were fed into a map-reduced Hidden Markov Model (HMM), and the model was corrected iteratively based on the class labels. In Subbanna and Arbel (2012), Gabor filter bank-based Bayesian classification was followed by MRF classification. Initially, each voxel was assigned to its constituent class by applying the Gabor filter bank to the input vector (made up of the intensities of the four modalities at a voxel) to classify the voxel into five different classes (GM, WM, CSF, tumor, and edema). Next, an MRF-based classifier was applied to the tumor and edema classes, using voxel intensity and spatial intensity differences over the neighbouring voxels. The authors in Dera et al. (2016) used Non-Negative Matrix Factorization to find voxel clusters indicating the tumor, and level set methods to fine-tune the region boundary.

### 4.2 Atlas Based Methods

The authors in Prastawa et al. (2004) initially identified abnormal brain tissues by registering a tumorous brain image with a healthy-tissue brain atlas. This step was followed by identifying the presence of an edema using T2 images; finally, geometric and spatial constraints were applied to detect the tumor and edema regions. The authors in Cuadra et al. (2004) applied an affine transformation to the atlas image to globally match the patient. The lesion was segmented using the Adaptive Template Moderated Spatially Varying statistical Classification (ATM SVC) algorithm.
The atlas was then manually seeded by an expert with a single voxel placed on the estimated origin of the patient's lesion, followed by the nonlinear demons registration algorithm with a model of lesion growth to deform the seeded atlas to match the patient. The model was implemented on four contrast-enhanced volumes with meningioma. The paper Kwon et al. (2014) applied a semi-automatic method, which required user input to give a seed point for the tumor, a radius for each tumor, and a seed point for each regular tissue class. A random walk generated tumor priors using the initial tumor seeds. The patient-specific atlas was modified to accommodate tumor classes using a tumor growth model. An Empirical Bayes model used the EM framework to update the posterior of the tumor, the growth model parameters, and the patient-specific atlas. This work was extended in Bakas et al. (2015), where, instead of a single seed point for each label, multiple seed points were taken into consideration to find the intensity mean and variance of a specific label. This work focused on preoperative MRI scans. It was further extended in Zeng et al. (2016) to include post-operative scans, along with additional features in the GMM; here the need for manual selection of the seed point was also removed.

### 4.3 Expectation Maximisation Based Methods

The authors in Menze et al. (2012) used a spatially varying probability prior (atlas) to label healthy tissues in the brain. A latent probabilistic atlas ‘alpha’ gives the probability of having a tumor at each voxel. Gaussian distributions for the healthy tissue classes as well as the tumorous tissue class were used to identify the tumorous tissues in the image by expectation maximization. For spatial tumor regularization, the latent atlas alpha used an MRF with channel-specific regularization parameters. In contrast, the authors in Raviv et al.
(2012) used probability priors to predict voxel probabilities with the EM algorithm, followed by a level set framework with a gradient descent approach for parameter estimation. The authors in Zhao et al. (2012) applied a Gaussian Mixture Model with EM to divide the data into five probable classes. Afterwards, the snake method was used to find the subtle boundary between the tumor and the edema. In Tomas-Fernandez and Warfield (2012), a Bayesian tissue intensity model was estimated via expectation maximization using a trimmed likelihood estimator (TLE), which is robust against outliers. This step was followed by a graph cut algorithm to differentiate tumorous tissues from false positives. The authors in Zhao et al. (2013) selected supervoxels using Simple Linear Iterative Clustering (SLIC) 3D, based on the colour and proximity of the voxels. A graph cut on an MRF was implemented for initial segmentation, followed by histogram construction, histogram matching, and likelihood estimation to predict the probability of voxels. In Piedra et al. (2016), the posterior probability was calculated using the Bayes theorem, followed by supervoxel partitioning using the SLIC algorithm.

### 4.4 Clustering Based Methods

In Cordier et al. (2013), 3D patches of size 3x3x3 were selected from all the images to build a database. A single patch of this small cube used all four modalities. Once the database was ready, patches of the same size were extracted from the test dataset and mapped to the database patches with $k$-nearest neighbors, where $k=5$. These five nearest neighbors generated the test patch label. The authors in Clark et al. (1998) used knowledge-based multispectral analysis on top of an unsupervised clustering technique. A rule-based expert system then extracted the intracranial region. Training was performed on three T1, Proton Density (PD), and T2 volumes, whereas testing was done on thirteen such volumes. In Saha et al.
(2016), the authors used rough-set approximation to fine-tune the prediction made by k-means clustering. In Rajendran and Dhanasekaran (2012), enhanced probabilistic fuzzy C-means clustering was first applied to obtain a rough estimate of the tumor region. This estimate and the cluster centroid were given to the gradient-vector-flow snake model to refine the tumor boundary. Shin (2012) learnt a sparse dictionary of size 4x4 for different tissue types using four image modalities, followed by logistic regression for tissue classification. This initial stage classified the image voxels into various classes. This step was followed by k-means clustering, which used a very high dimensional feature vector as input to classify the tumor and edema regions as an overlap with the output of the previous step.

### 4.5 Random Forest Based Methods

Meier et al. (2013) implemented a generative-discriminative model. The generative model estimated tissue probabilities using a density forest (similar to a GMM). The discriminative model implemented a classification forest, which took 51 features (gradient information, first-order texture features, symmetry-based features, and prior tissue probabilities based on the density forest) as input to generate the tissue probability. This probability was then supplied to a CRF to fine-tune the result. The authors in Zikic et al. (2012a) used context-aware spatial features, along with the tissue appearance probability generated by a Gaussian Mixture Model, to train a decision forest. The authors in Bauer et al. (2012) used random forest classification with CRF regularization to predict the probability of a tissue belonging to multiple classes, i.e., GM, WM, CSF, edema, necrotic core, and enhancing tumor.
A 28-dimensional feature vector, including the intensity of each modality along with first-order statistics such as mean, variance, skewness, kurtosis, energy, and entropy, was computed from local patches around each voxel in each modality. In Geremia et al. (2012), each voxel was characterized by signal modality as well as spatial priors. Averaging across a 3x3x3 cube removed noise. The random forest was trained using local features (intensities or priors) as well as context-specific features (region-based or symmetry-based features). The forest was made up of 30 trees with a depth of 20 each and was trained on synthetic data. Zikic et al. (2012b) used a discriminative multiclass classification forest with spatially non-local, context-sensitive, high-dimensional features, along with the prior probability of the tissue to be classified, obtained from the GMM. It classified voxels into three classes, i.e., background, edema, and tumor. Three types of features trained 40 trees with a depth of 20 each: intensity difference, intensity mean difference, and intensity range along a 3D line to check for structural changes. 2000 feature combinations were selected to design the decision trees. Festa et al. (2013) trained a random forest of 50 trees, each of depth 25, on 120000 samples with 324 features comprising intensity, neighbourhood, context, and texture information. In Reza and Iftekharuddin (2013), a random forest classifier used intensities, differences of intensities (T1 - T2, T1 - FLAIR, T1 - T1c), and texture-based features such as fractal and texton features, trained using three-fold cross-validation. The authors further extended their work in Reza and Iftekharuddin (2014) with post-processing, where connected component analysis removed tiny regions in the 3D volume and filled holes between the various parts of the segmented tumor according to the neighbouring region.
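The connected component clean-up used for post-processing in several of these pipelines can be sketched with scipy.ndimage. The mask and the minimum-size threshold below are illustrative assumptions, not values from any cited method:

```python
import numpy as np
from scipy import ndimage

def remove_small_components(mask, min_voxels=8):
    """Drop connected components smaller than `min_voxels` from a
    binary segmentation mask, treating them as spurious detections."""
    labeled, n = ndimage.label(mask)  # 6-connectivity by default in 3D
    # Voxel count of each component (component labels start at 1).
    sizes = ndimage.sum(mask, labeled, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labeled, keep)

# Illustrative 3D mask: one large blob and one single-voxel speck.
mask = np.zeros((6, 6, 6), dtype=bool)
mask[1:4, 1:4, 1:4] = True   # 27-voxel component: kept
mask[5, 5, 5] = True         # 1-voxel component: removed
cleaned = remove_small_components(mask)
```

Real pipelines choose the size threshold relative to plausible tumor volumes; the grouping-and-filtering logic is the same.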
In Goetz et al. (2014), Extremely Randomized Trees (ExtraTrees) were used, which introduce more randomness at training time. The classifier was trained on 208 features extracted from all four modalities, including intensity values, local histograms, first-order statistics, second-order statistics, and basic histogram-based segmentation. In this paper, the ExtraTrees were trained with the best threshold rather than thresholds derived from individual features. In Kleesiek et al. (2014), pixel classification was done with ten random forests of ten trees each, which were trained in parallel to reduce the training time and finally merged into a single forest using Gini impurity. One thousand samples of the tumorous class and 1000 samples of the non-tumorous class were used to train the RF. A classification forest in Meier et al. (2014) used 237 features, which included appearance-specific features (image intensities, first-order texture features, and gradient features) and context-sensitive features (ray features, symmetry intensity difference features). The authors in Ellwaa et al. (2016), Le Folgoc et al. (2016), Lefkovits et al. (2016), Maier et al. (2015), Meier et al. (2015), Meier et al. (2016), Phophalia and Maji (2017), and Song et al. (2016) used a random forest classifier with a combination of various intensity-based, gradient-based, texture-based, and rotation-invariant features. In Bharath et al. (2017), tensor features were extracted along with mean, entropy, and standard deviation features. The authors in Serrano-Rubio and Everson (2019) extracted features for supervoxels from multi-scale images and created sparse feature vectors to segment the whole tumor. Subregions of the tumor were then separated using a CRF.

### 4.6 Neural Network Based Methods

In Buendia et al.
(2013), the Grouping Artificial Immune Network (GAIN) took voxel intensities from 2D as well as 3D slices as input, along with statistical and texture features, for both training and segmentation of brain MRI images; the information was encoded as bits from the various image modalities. In Agn et al. (2015), a convolutional Restricted Boltzmann Machine was trained together with a GMM and spatial tissue priors. Table 4 summarizes the methods in terms of the type of pre-processing, the dataset, the number of images used, and the DSC achieved. The DSC is reported for the various tumor sub-regions, including WT, ET, TC, edema (ED), and necrosis (NC), on the training, validation, and test datasets. A comparison of the DSC of the different methods for tumor segmentation is shown in Fig. 5; the comparison is based on the TC region of the validation or test set, using either all images or only the HGG images. The atlas- and RF-based methods performed well compared to all other approaches. Random field, atlas-based, expectation maximization, and clustering methods do not use any post-processing techniques. Random forest-based methods were used for segmentation by the authors in Ellwaa et al. (2016), Festa et al. (2013), Kleesiek et al. (2014), Maier et al. (2015), Meier et al. (2014), Meier et al. (2015), Reza and Iftekharuddin (2014), and Song et al. (2016). To refine the segmentation output, Festa et al. (2013), Kleesiek et al. (2014), and Reza and Iftekharuddin (2014) used connected component analysis; Meier et al. (2014), Meier et al. (2015), and Song et al. (2016) applied spatial regularization; and the authors in Ellwaa et al. (2016) and Maier et al. (2015) applied morphological operations.

Table 4: Summarization of segmentation methods using handcrafted features.

| Ref. | Pre-processing | Dataset | # Images | DSC Mean |
|---|---|---|---|---|
| Random Field Methods | | | | |
| Hamamci et al. (2011) | - | Custom | 29 | Training TC:0.89; Test TC:0.80 |
| Hamamci and Unal (2012) | - | BraTS 2012 | 30 | TC:0.69, ED:0.37 |
| Guo et al. (2013) | - | BraTS 2013 | 30 | TC:0.82 |
| Corso et al. (2008) | IN | Custom | 20 | Training TC:0.70, ED:0.66; Test TC:0.66, ED:0.61 |
| Xiao and Hu (2012) | NR | BraTS 2012 | 30 | TC:0.53, ED:0.25 |
| Doyle et al. (2013) | - | BraTS 2013 | 30 | HGG WT:0.84, TC:0.54, ET:0.67; LGG WT:0.81, TC:0.54, ET:0.11 |
| Subbanna and Arbel (2012) | - | BraTS 2012 | 28 | TC:0.66, ED:0.56 |
| Taylor et al. (2013) | BFC | BraTS 2013 | 30 | HG TC:0.62, ED:0.59 |
| Atlas Based Methods | | | | |
| Kwon et al. (2014) | BFC, IN | BraTS 2014 | 200 | Validation WT:0.86, TC:0.79, ET:0.59; Test WT:0.88, TC:0.83, ET:0.72 |
| Bakas et al. (2015) | BFC, IN | BraTS 2015 | 186 | Training WT:0.88, TC:0.77, ET:0.68 |
| Zeng et al. (2016) | NR, HM | BraTS 2016 | 200 | WT:0.89, TC:0.77, ET:0.67 |
| Expectation Maximization Based Methods | | | | |
| Menze et al. (2012) | - | BraTS 2012 | 30 | HGG TC:0.55, ED:0.57; LGG TC:0.24, ED:0.42 |
| Raviv et al. (2012) | - | BraTS 2012 | 30 | HGG TC:0.58, ED:0.60; LGG TC:0.32, ED:0.36 |
| Zhao et al. (2012) | - | BraTS 2012 | 30 | TC:0.31, ED:0.35 |
| Tomas-Fernandez and Warfield (2012) | - | BraTS 2012 | 30 | TC:0.43, ED:0.55 |
| Zhao et al. (2013) | IN | BraTS 2013 | 30 | HGG WT:0.83, TC:0.74, ET:0.68; LGG WT:0.83, TC:0.58, ET:0.51 |
| Piedra et al. (2016) | - | BraTS 2015 | 200 | Training WT:0.74, TC:0.55, ET:0.54 |
| Clustering Based Methods | | | | |
| Cordier et al. (2013) | - | BraTS 2013 | 30 | HGG WT:0.79, TC:0.60, ET:0.59; LGG WT:0.76, TC:0.64, ET:0.44 |
| Saha et al. (2016) | - | BraTS 2013, BraTS 2015 | 200 | WT:0.82, TC:0.71, ET:0.72 |
| Rajendran and Dhanasekaran (2012) | - | Custom | 15 | TC:0.82 |
| Shin (2012) | - | BraTS 2012 | 30 | TC:0.30, ED:0.39 |
| Random Forest Based Methods | | | | |
| Meier et al. (2013) | - | BraTS 2013 | 30 | HGG WT:0.80, TC:0.69, ET:0.69; LGG WT:0.76, TC:0.58, ET:0.20 |
| Zikic et al. (2012a) | - | Custom | 40 | TC:0.85, NC:0.75, ET:0.80 |
| Bauer et al. (2012) | BFC, IN, HM | BraTS 2012 | 30 | HGG-real TC:0.62, ED:0.61; LGG-real TC:0.49, ED:0.35 |
| Geremia et al. (2012) | HM | BraTS 2012 | 30 | HGG-real TC:0.68, ED:0.56; LGG-real TC:0.52, ED:0.29 |
| Zikic et al. (2012b) | BFC | BraTS 2012 | 30 | HGG-real TC:0.71, ED:0.70; LGG-real TC:0.62, ED:0.44 |
| Festa et al. (2013) | BFC, HM | BraTS 2013 | 30 | HGG WT:0.83, TC:0.70, ET:0.75; LGG WT:0.72, TC:0.47, ET:0.21 |
| Reza and Iftekharuddin (2013) | BFC, HM | BraTS 2013 | 30 | HGG WT:0.92, TC:0.91, ET:0.88; LGG WT:0.92, TC:0.91, ET:0.88 |
| Reza and Iftekharuddin (2014) | - | BraTS 2014 | 200 | Training WT:0.81, TC:0.66, ET:0.71 |
| Goetz et al. (2014) | BFC, HM | BraTS 2013 | 208 | WT:0.83, TC:0.71, ET:0.68 |
| Kleesiek et al. (2014) | HM, IN | BraTS 2014 | 200 | Training WT:0.84, TC:0.68, ET:0.72; Valid/Test WT:0.87, TC:0.76, ET:0.64 |
| Meier et al. (2014) | NR, IN, BFC | BraTS 2013 | 30 | Training WT:0.83, TC:0.66, ET:0.58; Challenge WT:0.84, TC:0.73, ET:0.68 |
| Maier et al. (2015) | BFC, IN | BraTS 2015 | 252 | WT:0.75, TC:0.60, ET:0.56 |
| Meier et al. (2015) | - | BraTS 2015 | 65 | WT:0.83, TC:0.69, ET:0.63 |
| Lefkovits et al. (2016) | BFC, NR, IN | BraTS 2013 | 30 | WT:0.87, TC:0.88 |
| Meier et al. (2016) | - | BraTS 2013 | 30 | Valid WT:0.79, TC:0.75, ET:0.66; Challenge WT:0.83, TC:0.76, ET:0.71 |
| Ellwaa et al. (2016) | BFC, HM | BraTS 2016 | 200 | WT:0.80, TC:0.72, ET:0.73 |
| Song et al. (2016) | IN | BraTS 2015 | 274 | Training WT:0.87, TC:0.72, ET:0.75 |
| Le Folgoc et al. (2016) | IN | BraTS 2015 | 274 | Training WT:0.84, TC:0.72, ET:0.71 |
| Phophalia and Maji (2017) | BFC | BraTS 2017 | 285 | Training WT:0.64, TC:0.49, ET:0.47; Test WT:0.63, TC:0.41, ET:0.42 |
| Bharath et al. (2017) | IN, HM | BraTS 2017 | 285 | Validation WT:0.79, TC:0.67, ET:0.61; Test WT:0.77, TC:0.61, ET:0.50 |
| Serrano-Rubio and Everson (2019) | - | BraTS 2018 | 285 | Validation WT:0.80, TC:0.63, ET:0.57; Test WT:0.73, TC:0.58, ET:0.50 |
| Neural Network Based Methods | | | | |
| Buendia et al. (2013) | - | BraTS 2013 | 30 | WT:0.73, TC:0.59, ET:0.63 |
| Agn et al. (2015) | - | BraTS 2015 | 200 | Test WT:0.81, TC:0.68, ET:0.65 |

Figure 5: Comparison of segmentation methods using hand-crafted features (DSC for validation/test for TC of all images/HGG).

The limitations associated with methods working on handcrafted features are as follows:
1. Identifying tissue probability classes: tumor tissue intensities overlap with those of healthy tissues, as mentioned in Table 2; in such cases, identifying the probable class of a tumorous tissue is quite challenging.
2. Atlas matching (healthy or tumorous atlas): usually, a brain atlas contains the normal brain tissue distribution map. Because the tumor deforms the healthy tissue, atlas matching of a tumorous brain may produce a wrong map.
3. Manual seed point identification for the tumor or its subparts: almost all semi-automated methods require some initial selection of a tumorous voxel, its diameter, or its rough outer boundary. This selection depends on the expert, and repeating it over all slices of the brain is time-consuming.
4. Feature extraction from the images: RF training depends on the features extracted from the brain images. The MRI modalities carry different biological information, and this variation complicates both feature extraction and feature selection for training the RF.
5. Discontinuity: the results generated by such methods can be spurious, which increases the chance of false positives; proper post-processing techniques are required to fine-tune them.

## 5 Deep Neural Network

A Deep Neural Network (DNN) is an artificial intelligence technique that mimics the human brain in processing data and creating patterns for decision making. There are mainly four reasons contributing to its success:

1. DNN models solve problems in an end-to-end manner. They learn features from the data automatically, progressing from simple features at the initial layers to complex features at the deeper layers of the model; automatic feature learning has eliminated the need for domain expertise.
2.
Computational capabilities of modern hardware, in particular GPUs, together with efficient GPU implementations of the models in various open-source libraries, have made training a DNN 10 to 50 times faster than on a CPU.
3. Efficient optimization techniques for robust learning contribute to the success of DNNs by enabling optimal network performance.
4. The availability of benchmark datasets allows various deep learning models to be trained and tested successfully.

The exponential growth in the use of DNN techniques to solve a variety of problems is shown in Fig. 6; a similar growth pattern can be seen for the brain tumor segmentation problem, as shown in Fig. 7.

Figure 6: The growth rate of research papers with the keyword 'deep learning' on PubMed, National Center for Biotechnology Information (accessed December 30, 2020).

Figure 7: The growth rate of research papers with the keywords 'deep learning' and 'brain tumor segmentation' on PubMed, National Center for Biotechnology Information (accessed December 30, 2020).

The general block diagram of a deep learning technique is shown in Fig. 8. The crucial first task is to obtain a labelled dataset. The dataset is then divided into training and validation sets, followed by the pre-processing appropriate for the task at hand. The DNN is applied to the training data, from which it learns the network parameters. Because the output of a DNN can be spurious in some brain areas, post-processing fine-tunes the segmentation result. Finally, an evaluation framework measures the performance of the network.

Figure 8: Generalized DNN network.

### 5.1 Evolution of DNN

In medical image analysis, semantic segmentation tasks are common, e.g., segmentation of an organ or a lesion. The Convolutional Neural Network (CNN), the most prominent DNN architecture here, gained popularity from 1990 with LeNet LeCun et al. (1998), a two-layer architecture.
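A LeNet-style stack of two convolution-pooling stages can be sketched directly in NumPy. This is a toy forward pass only (no training); the kernel sizes and random weights are illustrative, not LeNet's actual configuration.

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D 'convolution' (cross-correlation, as in most DL libraries)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping max pooling with stride s."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.normal(size=(28, 28))
k1, k2 = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))

h = max_pool(relu(conv2d(img, k1)))    # 28 -> 24 -> 12
out = max_pool(relu(conv2d(h, k2)))    # 12 -> 8 -> 4
print(out.shape)  # (4, 4)
```

A classifier such as LeNet would flatten `out` and pass it through fully-connected layers to produce class scores.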
After the availability of fast GPUs and other computing facilities, over fifteen years later, AlexNet was proposed by the authors in Krizhevsky et al. (2012), with five convolutional layers. CNNs are designed from a variety of layers (convolution layers, non-linearity layers, pooling layers, fully-connected layers), together with regularization, optimization and normalization, loss functions, and network parameter initialization. The authors in Bernal et al. (2019); Litjens et al. (2017) explain the architectural elements of a CNN, which are as follows:

* Convolution layer: extracts representative features from the input. It 1) uses a weight-sharing mechanism, 2) exploits the local connectivity of the input, and 3) provides shift invariance to some extent.
* Non-linearity layer: provides a sparse representation of the input space, which achieves invariance to data variability and a computationally efficient representation. Types of non-linearity layers are the Rectified Linear Unit (ReLU), Leaky ReLU (LReLU), Parametric ReLU (PReLU), S-shaped ReLU (SReLU), Maxout and its variants, and the Exponential Linear Unit (ELU) and its variants.
* Pooling/subsampling layer: extracts prominent features from a non-overlapping neighbourhood. It is used to 1) reduce the number of parameters, 2) reduce over-fitting, and 3) achieve translation invariance. Commonly used pooling techniques are max pooling and average pooling.
* Fully connected layer: converts 2D features to a 1D feature vector and helps to predict the class label of an input image.
* Loss functions: improve the learning process by improving within-class similarity and between-class separability.
* Regularization: deals with over-fitting. Commonly used regularization techniques are L1 and L2 regularization, dropout, early stopping, and batch normalization.
* Optimization: used for proper updates of the network parameters during backpropagation.
Various techniques of optimization include Nesterov accelerated gradient descent, the adaptive gradient algorithm (Adagrad), and Root Mean Square Propagation (RMSProp).
* Weight initialization and normalization: boosts the learning process by starting the weight updates from proper initial values.

The convolution layers extract features from the input by applying kernels to it; the output feature map depends on the type of kernel and its size. At the initial layers, simple features such as edges or lines are extracted from the input. Gradually increasing the network depth requires a higher number of feature maps to extract complex shapes Agravat and Raval (2018). The activation function is applied to the feature maps to learn the non-linear relationships within the data and allows errors to backpropagate to the initial layers for accurate parameter updates. An increase in network depth exponentially increases the number of network parameters, which is computationally very expensive. Pooling layers are introduced to downsample the input feature maps and reduce their spatial size by keeping only the prominent features; this balances the growth of the network parameters. The fully connected layers at the end of the network flatten the result of the preceding layers before the actual classification. The loss function at the classification layer calculates the error in the prediction; based on this error, the network parameters are updated by gradient descent through backpropagation. Commonly used loss functions are:

* Cross Entropy loss function: $J=-\frac{1}{N}\left(\sum_{voxels}y_{true}\cdot log\,\hat{y}_{pred}\right)$ (5)
* Dice Loss function: $J=1-\frac{2\sum_{voxels}y_{true}\,y_{pred}+\epsilon}{\sum_{voxels}y_{true}^{2}+\sum_{voxels}y_{pred}^{2}+\epsilon}$ (6)

Here, N is the number of voxels, $y_{true}$ the ground truth label, $y_{pred}$ the label predicted by the network, and $\epsilon$ avoids a zero denominator.
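Equations 5 and 6 translate directly into NumPy. This is a plain sketch for binary labels; $\epsilon$ plays the same stabilising role as in equation 6, and the example arrays are illustrative.

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-8):
    """Equation (5): mean negative log-likelihood over voxels."""
    return -np.mean(y_true * np.log(y_pred + eps))

def dice_loss(y_true, y_pred, eps=1e-8):
    """Equation (6): 1 minus the soft Dice overlap."""
    num = 2.0 * np.sum(y_true * y_pred) + eps
    den = np.sum(y_true ** 2) + np.sum(y_pred ** 2) + eps
    return 1.0 - num / den

y = np.array([0.0, 0.0, 1.0, 1.0])       # ground-truth voxel labels
perfect = y.copy()                        # perfect prediction
poor = np.array([0.9, 0.9, 0.1, 0.1])     # confidently wrong prediction

print(dice_loss(y, perfect))              # near 0 for a perfect prediction
print(dice_loss(y, poor))                 # much larger for the poor one
```

Note that the Dice loss depends only on the overlap between prediction and ground truth, which is why it is less sensitive to class imbalance than the plain cross-entropy (Section 5.2).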
CNNs with convolution layers followed by fully-connected layers classify an entire image into a single category. The GoogLeNet (Inception) Szegedy et al. (2015) and InceptionV3 Szegedy et al. (2016) networks introduced the inception module, which applies kernels of different sizes to reduce the number of network parameters. ResNet Targ et al. (2016) introduced residual connections between convolution layers so that the network learns an identity function, which allows the effective training of deeper networks. In DenseNet Huang et al. (2017), the layers are very narrow and add only a small number of feature maps to the network, which again allows deeper architectures to be designed and trained efficiently, as each layer has direct access to the gradient of the loss function. For semantic segmentation, a CNN can simply classify each voxel of the image individually by presenting it with patches extracted around the particular voxel; classifying every voxel in this way segments the entire image. This 'sliding-window' approach repeats the convolution operations for adjacent patches of neighbouring voxels. An improvement to this approach is the replacement of the fully-connected layers with convolution layers, which generates a probability map for the entire input image rather than an output for a single voxel; such networks are known as Fully Convolutional Neural Networks (FCNN). FCN Long et al. (2015) is a type of FCNN in which skip connections are introduced to reconstruct a high-resolution image. U-Net Ronneberger et al. (2015), a well-known and highly adapted network architecture for tumor segmentation, takes an encoder-decoder approach where every encoding layer is connected to its peer decoding layer by a skip connection, both to reconstruct the spatial dimensions and to recover the spatial information from the encoding layer. SegNet Badrinarayanan et al. (2017) and DeepLab Chen et al.
(2017) are other FCNN architectures adopted to solve the brain tumor segmentation problem.

### 5.2 Handling Class Imbalance Problem

In medical image analysis, abnormal images are far scarcer than normal ones, because an abnormality such as a tumor is rare; this problem is called 'class imbalance'. Although all images considered in this article contain a tumor, the class imbalance issue persists because, in a single brain volume, the tumor occupies only a small proportion of the brain; even on a single brain slice, the tumor area is small compared to the brain area. The proportions of brain volume (BV) and background volume (BGV) with respect to the tumor volume across all slices of the BraTS 2019 dataset are shown in Table 5.

Table 5: The proportion of non-tumor brain volume (NTBV) and non-tumor background volume (NTBGV) vs. tumorous volume (TV).

| | % NTBV | % Necrosis | % Edema | % ET |
|---|---|---|---|---|
| BV | 75.75 | 0.70 | 17.78 | 5.77 |
| BGV | 99.11 | 0.03 | 0.65 | 0.21 |

The following approaches address the data imbalance problem.

* Patch sampling: patch sampling-based methods can mitigate the imbalanced data problem. The sampling process draws equiprobable patches from all the tumorous regions as well as from the non-tumorous regions.
* Improvement of loss functions: some loss functions in their raw form may not suit the tumor segmentation task, as they assume balanced datasets. They can be adapted to imbalanced datasets with the following modifications:
1. The weighted cross-entropy loss function: voxel-wise class predictions averaged over all voxels may lead to errors if the classes in the image are imbalanced, because the plain cross-entropy loss weighs every voxel equally and therefore does not address class imbalance.
Since the background regions dominate the training set, it is reasonable to incorporate per-class weights into the cross-entropy such that more weight is given to the voxels of the positive class. This has been shown in equation 7. $Loss_{WCE}=-\sum_{voxels}\sum_{classes}w_{class}\,y_{true}\cdot log\,\hat{y}_{pred}$ (7)
2. Generalized Dice Loss Function: the authors in Sudre et al. (2017) propose using the class re-balancing properties of the generalized Dice overlap, which provides a robust and accurate deep-learning loss function for unbalanced tasks. This has been shown in equation 8. $Loss_{GDL}=1-2\frac{\sum_{classes}w\sum_{voxels}y_{true}\,y_{pred}}{\sum_{classes}w\sum_{voxels}\left(y_{true}+y_{pred}\right)}$ (8)
3. Focal Loss Function: the focal loss originates from detection tasks. It encourages the model to down-weight easy examples and focuses training on hard negatives. Formally, the focal loss adds a modulating factor to the cross-entropy loss and a parameter for class balancing Lin et al. (2017). It has been shown in equation 9. $Loss_{FL}(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}log(p_{t})$ (9) $p_{t}=\begin{cases}p&\text{if }y=1\\ 1-p&\text{otherwise}\end{cases}$ where $y\in\{1,-1\}$ is the ground-truth class and $p\in[0,1]$ is the estimated probability for the class with label $y=1$. The weighting parameter $\alpha$ deals with the imbalanced dataset, while the focusing parameter $\gamma$ smoothly adjusts the rate at which easy examples are down-weighted. Setting $\gamma>0$ reduces the relative loss for well-classified examples and places the focus on hard, misclassified examples; the focal loss reduces to the original cross-entropy loss when $\gamma=0$.
* Augmentation techniques: most of the time, a large number of training labels is not available, for several reasons; labelling the dataset requires an expert in the field, which is expensive and time-consuming.
Training large neural networks from limited training data causes over-fitting. Data augmentation is a way to reduce over-fitting and increase the amount of training data: it creates new images by transforming those in the training dataset (rotation, translation, scaling, flipping, distortion, and the addition of noise such as Gaussian noise), and both the original and the created images are input to the neural network. For example, a large variety of data augmentation techniques, including random rotations, random scaling, random elastic deformations, gamma correction, and mirroring, can be applied on the fly during training.

### 5.3 CNN Methods Classification for Tumor Segmentation

Figure 9: Design aspects of CNN architectures.

The classification of CNNs for tumor segmentation uses the combination of design aspects shown in Fig. 9.

Input type: the network may take 2D/3D input in the form of patches or images. CNNs with fully-connected layers classify the centre voxel of the patch, whereas an FCNN predicts multiple or all voxels of the patch/image. The network may take multi-scale patches to extract coarse and fine details of the input.

Output type: the output of the network depends on the problem to be solved; it predicts a single output for a classification problem and multiple voxel outputs for a semantic segmentation problem.

Type of network: the CNN approach indicates a convolution network with fully-connected layers at the end, whereas FCN indicates a network consisting only of convolution layers.

Ensemble approach: ensemble approaches can be classified into serial and parallel approaches. In the serial approach, multiple networks are combined in a series to fine-tune the final output, with the input of one network depending on the output of another. In the parallel approach, multiple networks work in parallel and take the same or different inputs to gather comprehensive details from the input.
The final output of the network is decided by majority voting or by averaging over all the network outputs.

Figure 10: Variational growth of CNN for tumor segmentation.

The evolution of CNN-based methods for tumor segmentation is shown in Fig. 10. Some well-known CNN architectures for brain tumor segmentation are represented in Fig. 11, Fig. 12, Fig. 13, and Fig. 14.

Figure 11: Two-pathway cascaded CNN architecture Havaei et al. (2017).

The architecture of Havaei et al. (2017) was a two-pathway CNN which took 2D multi-resolution input patches, applied convolution operations, and concatenated the outputs of both pathways. DeepMedic Kamnitsas et al. (2016) also followed two pathways, with 3D multi-resolution input patches, incorporated residual connections, and predicted the output for multiple voxels at a time.

Figure 12: DeepMedic architecture Kamnitsas et al. (2016).

Figure 13: U-net architecture Ronneberger et al. (2015).

U-net Ronneberger et al. (2015) is an encoder-decoder architecture with skip-connections between the peer layers of the analysis and synthesis paths; this architecture has gained the most popularity. The Anisotropic architecture Wang et al. (2017) follows a serial ensemble approach: the first network segments the whole tumor, the second segments the tumor core using the output of the first network, and the third segments the enhancing tumor with the help of the output of the second network.

Figure 14: Anisotropic cascaded architecture Wang et al. (2017).

Initially, shallow CNNs performed voxel-based image segmentation. The authors in Havaei et al. (2017) proposed voxel-wise classification using a multi-pathway CNN architecture: one pathway used 2D patches of size 32x32, and the other used a 5x5 input patch with the same centre pixel as the 32x32 patch. Patch selection was made such that the labels were equiprobable.
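The equiprobable patch selection mentioned above can be sketched as follows: patch centres are drawn so that every class contributes the same number of training patches. This is an illustrative sketch of the general idea, not the exact scheme of any one paper.

```python
import numpy as np

def sample_equiprobable(labels, n_per_class, patch=16, rng=None):
    """Return (row, col, class) patch centres, drawn equally often per class.
    Patch centres are kept far enough from the border for a full patch."""
    if rng is None:
        rng = np.random.default_rng(0)
    half = patch // 2
    inner = labels[half:-half, half:-half]  # valid centre positions only
    centres = []
    for c in np.unique(inner):
        ys, xs = np.nonzero(inner == c)
        idx = rng.choice(len(ys), size=n_per_class, replace=True)
        centres += [(int(ys[i]) + half, int(xs[i]) + half, int(c)) for i in idx]
    return centres

labels = np.zeros((64, 64), dtype=int)
labels[20:40, 20:40] = 1                       # small "tumour" class
centres = sample_equiprobable(labels, n_per_class=50)
counts = np.bincount([c for _, _, c in centres])
print(counts)  # [50 50]
```

Even though class 1 covers under 10% of the image, it supplies half of the training patches, which is exactly how patch sampling mitigates the class imbalance described in Section 5.2.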
L1 and L2 regularization were used, and are still used, to overcome overfitting. In Urban et al. (2014), voxel-wise class probability prediction used separate 3D CNNs for HGG and LGG images, and the final probabilities classified each voxel into one of six classes. In Zikic et al. (2014), a five-layer-deep 2D CNN architecture performed voxel-wise classification. Gradually, the depth of CNNs increased to accommodate more layers in the network. In Pereira et al. (2015), a 2D deep CNN with fully connected output layers treated HGG and LGG separately. This approach was further extended by Randhawa et al. (2016) with a two-phase process and a weighted loss function: initially, the network was trained using equiprobable patches, followed by training on the actual patch distribution, which avoids the class imbalance problem. In Havaei et al. (2015), the authors designed a 2D Input Cascaded CNN, which fed the output of a Two Path CNN, together with the input images, into further 2D CNNs. After the successful introduction of the FCN by the authors in Long et al. (2015), the authors in Kamnitsas et al. (2016) proposed a two-pathway architecture in which both pathways included residual connections and were trained on different input patch sizes; as the network was fully convolutional, multiple voxels of the input patch were labelled at a time. Zhao et al. (2016) used a 2D FCNN approach along with a CRF, where the FCNN was trained on patches and the CRF on slices. In Chang (2016), cascaded encoder-decoder FCNNs with residual connections were used for segmentation: the first FCNN segmented the whole tumor, followed by the internal tumor regions segmented by the second FCNN. The authors in McKinley et al. (2016) proposed an encoder-decoder FCNN-based architecture to segment the various tumor sub-regions. In Casamitjana et al. (2016), the authors proposed three different FCNN architectures and showed that the architectures with multi-resolution features performed better than a single-resolution architecture.
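The multi-resolution input used by the two-pathway architectures above can be sketched as a pair of patches centred on the same voxel: a fine patch at native resolution and a larger context patch downsampled to the same size. The sizes and the striding-based downsampling here are illustrative.

```python
import numpy as np

def multiscale_patches(img, cy, cx, fine=32, coarse=64, factor=2):
    """Extract a fine patch and a downsampled larger-context patch, both
    centred on the voxel (cy, cx)."""
    def crop(size):
        h = size // 2
        return img[cy - h:cy + h, cx - h:cx + h]
    fine_p = crop(fine)
    coarse_p = crop(coarse)[::factor, ::factor]  # naive downsampling by striding
    return fine_p, coarse_p

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
f, c = multiscale_patches(img, 64, 64)
print(f.shape, c.shape)  # (32, 32) (32, 32)
```

Both patches end up the same shape, so they can feed two parallel pathways whose feature maps are later concatenated; the coarse patch supplies the surrounding anatomical context that the fine patch alone lacks.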
The authors in Lopez and Ventura (2017) implemented a Dilated Residual Network for patch-based training, where equiprobable patches were supplied to the network. The authors in Dong et al. (2017b) adopted a U-net architecture for brain tumor segmentation, and the authors in Feng and Meyer (2017) modified the U-net to take 3D input, with the depth of the network reduced to three. The authors in [64] optimized the training of the network proposed in their previous work Isensee et al. (2017). In Chen et al. (2018a), the authors proposed a novel encoder-decoder architecture that worked well on multiple biomedical image segmentation problems. Various other CNN-based approaches in which a single CNN was used for the segmentation task are Casamitjana et al. (2017), Castillo et al. (2017), Hu and Xia (2017), Islam and Ren (2017), Jesson and Arbel (2017), Kim (2017), Marcinkiewicz et al. (2018), Mehta and Arbel (2018), Nuechterlein and Mehta (2019), Pawar et al. (2017), Pourreza et al. (2017), and Shaikh et al. (2017). An ensemble of CNNs performs better than a single CNN, as in Kamnitsas et al. (2017), where the authors implemented an ensemble of seven networks based on DeepMedic, FCN, and U-net along with variations of those three networks, and also tried three different pre-processing approaches for all of them; the final label was generated from the outputs of the individual networks under all the pre-processing variants. The authors in McKinley et al. (2017) extended their work proposed in McKinley et al. (2016) by introducing a dense module and a dilated module in the encoder-decoder cascaded architecture of two networks, with the pooling layers replaced by dilated convolution layers. The authors in Myronenko (2018) implemented an ensemble of ten encoder-decoder based architectures, which included an auto-encoder stream to reconstruct the original image for additional information and for regularization purposes. The authors in McKinley et al.
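The fusion step shared by these ensembles, combining per-network label maps into one segmentation, can be sketched with voxel-wise majority voting (averaging class probabilities is the common alternative; the label maps below are illustrative).

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse per-model label maps by voxel-wise majority vote."""
    stacked = np.stack(label_maps)                 # (models, H, W)
    n_classes = int(stacked.max()) + 1
    # votes[c] counts, per voxel, how many models predicted class c
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three toy 2x2 label maps from three "networks" (0=background, 1=edema, 2=core)
m1 = np.array([[0, 1], [2, 2]])
m2 = np.array([[0, 1], [1, 2]])
m3 = np.array([[1, 1], [2, 0]])
fused = majority_vote([m1, m2, m3])
print(fused)
# [[0 1]
#  [2 2]]
```

On ties, `argmax` picks the lowest class index; a real system might instead break ties by averaged softmax probabilities.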
(2018) extended the approach proposed in Jungo et al. (2017), using a combination of U-net and DenseNet: a U-net-like architecture containing dense blocks of dilated convolutions. The network of Zhao et al. (2016) was extended in Zhao et al. (2017) to create an ensemble of three networks trained on three image views. A two-network cascaded path was used in Colmeiro et al. (2017), where a coarse segmentation network segmented the WT and a fine segmentation network segmented its sub-regions; both networks used a four-level-deep 3D U-net. The authors in Wang et al. (2017) used three networks (WNet, TNet, and ENet) to form a cascaded path; the networks were trained on three image views (axial, coronal, sagittal), the multi-scale predictions of each network were averaged, and the averaged results from the views were used to generate the final segmentation output. In Zhou et al. (2018), a cascaded network was proposed which initially segmented the whole tumor, followed by the tumor core, and then refined the enhancing tumor segmentation. In Xu et al. (2018), the authors proposed a cascaded U-Net with three networks, where each network processed a downsampled input and its output was passed to the next network in the cascade. The authors in Xu et al. (2018) also proposed multi-scale mask 3D U-nets with atrous spatial pyramid pooling layers, where the WT segmentation generated by the first network was passed to the second for TC generation, which in turn was passed to the final network to generate the ET output. Other ensemble-based CNN approaches are explained in Albiol et al. (2018), Chandra et al. (2019), Choudhury et al. (2018), Hua et al. (2018), Kermi et al. (2018), Ma and Yang (2018), Wang et al. (2018), and Yao et al. (2019). The comparison of the methods summarized in Table 6 is based on the pre-processing techniques, DNN architecture, activation function, loss function, post-processing, and the DSC achieved.
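Connected component analysis, a post-processing step used by several of the random-forest and CNN methods above, can be sketched with `scipy.ndimage`; the volume threshold below is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

def remove_small_components(mask, min_voxels):
    """Keep only connected components of `mask` with at least `min_voxels`
    voxels, discarding small spurious detections."""
    labelled, n = ndimage.label(mask)
    sizes = np.bincount(labelled.ravel())
    keep = [i for i in range(1, n + 1) if sizes[i] >= min_voxels]
    return np.isin(labelled, keep)

# Toy binary segmentation: one real 16-voxel region and one 1-voxel speckle.
mask = np.zeros((10, 10), dtype=bool)
mask[1:5, 1:5] = True    # kept (16 voxels)
mask[8, 8] = True        # removed (1 voxel)
out = remove_small_components(mask, min_voxels=5)
print(int(out.sum()))  # 16
```

The same call works unchanged on 3D volumes, which is how such filtering is typically applied to the segmented tumor masks.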
The pre-processing techniques considered for the comparison are:

1. Intensity clipping: 1% of the highest and lowest frequencies are clipped.
2. Bias field correction.
3. Z-score normalization: $Z=(x-\mu)/\sigma$.
4. Histogram matching: the histogram of every image is matched with a reference histogram.
5. Image normalization: min-max normalization.
6. Intensity standardization with the Nyul approach Nyúl and Udupa (1999).
7. Image denoising: noise filtering, e.g., Gaussian noise filtering.
8. Intensity rescaling: rescaling the intensity range between some specific limits.

The post-processing techniques used to improve the segmentation result are:

1. Connected component analysis: analyses the connected components and removes components with a volume below some threshold.
2. Conditional random field.
3. Morphological operators to remove false positives and fill holes.
4. Relabelling of the output labels: enhancing tumor regions below some threshold are relabelled as necrosis.

Fig. 15 shows a pictorial representation of the DSC of various CNN methods for the whole tumor region of the validation set. The average DSC of a CNN is 0.86; for a deep CNN and an FCN it is 0.87, and for an ensemble approach it is 0.89. An ensemble of CNNs/FCNNs learns robust features from the input.

### 5.4 Hardware and Software for DNN

Hardware: DNN methods gained popularity after the availability of Graphical Processing Units (GPUs). Nowadays, efficient parallel processing for the manipulation of large amounts of data is possible with General-Purpose GPUs (GPGPUs), and computing libraries like CUDA and OpenGL allow the efficient implementation of the processing code on a GPU. The performance of a GPU mainly depends on its computing cores (CUDA cores), tensor processing cores, Thermal Design Power (TDP), and on-board GPU memory. The various types of GPUs used to implement the CNN methods for segmentation tasks are shown in Table 6.
As the computing capacity of the GPU increases, it allows more complex networks with a higher number of parameters to be trained in less time.

Table 6: GPU specifications Nvidia (2020, accessed April 28, 2020).

| Year | GPU Type | CUDA Cores | Tensor Cores | TDP (Watts) | RAM (GB) |
|---|---|---|---|---|---|
| 2016 | Tesla K80 | 2496 | N/A | 300 | 12 |
| 2016 | GeForce GTX 980 Ti | 2816 | N/A | 165 | 6 |
| 2016 - 2017 | GeForce GTX 1080 Ti | 3584 | N/A | 250 | 11 |
| 2017 | GTX Titan X | 3072 | N/A | 240 | 12 |
| 2018 | Quadro P4000 | 1792 | N/A | 105 | 8 |
| 2018 | Quadro P5000 | 2560 | N/A | 180 | 16 |
| 2018 | Titan Xp | 3840 | N/A | 250 | 12 |
| 2018 | Tesla P100 | 3584 | N/A | 250 | 16 |
| 2018 | Tesla V100 | 5120 | 640 | 300 | 16/32 |
| 2018 | DGX-1 | 40960 (8xV100) | 5120 | 3500 | 128 (16/GPU) |
| 2020 | RTX 2080 Ti | 4352 | 544 | 250 | 11 |

Software: open software library packages provide implementations of the various CNN operations, such as convolution. The most popular Python library packages for CNN implementation are Caffe Jia et al. (2014), Tensorflow Abadi et al. (2016), Theano Bastien et al. (2012), and PyTorch Paszke et al. (2019). Third-party packages which work on top of these are Keras Chollet et al. (2018), Lasagne Dieleman et al. (2015), and TensorLayer Dong et al. (2017a).

Table 6: Summarization of segmentation methods using CNN architectures.

| Ref. | Pre-processing | Input Modality + Augmentation | Patch/Image | Input view | Network Architecture | # Networks | Ensemble Type | Loss Function | Post-processing | Dataset | DSC Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CNN | | | | | | | | | | | |
| Urban et al. (2014) | 5 | T1, T2, T1c, FLAIR | 3D patches | axial | 3D CNN | 1 | - | softmax | 1 | BraTS 2013 | Test set: WT:0.87, TC:0.77, ET:0.73 |
| Zikic et al. (2014) | 2, downsampling | T1, T2, T1c, FLAIR | 2D patches | axial | 2D CNN | 1 | - | softmax | - | BraTS 2013 | Training set: WT:0.84, TC:0.74, ET:0.69 |
| Deep CNN | | | | | | | | | | | |
| Pereira et al. (2015) | 2, 6 | T1, T2, T1c, FLAIR, rotation at certain angles | 2D patches | axial | 2D CNN | 2 | - | categorical cross entropy | 3 | BraTS 2015 | Training set: WT:0.87, TC:0.73, ET:0.68 |
| Havaei et al. (2015) | 1, 2, 3 | T1, T2, T1c, FLAIR | 2D multiscale patches | axial | 2D CNN | 4 | cascaded, parallel | softmax | 1 | BraTS 2013 | Test set: WT:0.88, TC:0.79, ET:0.73 |
| Randhawa et al. (2016) | 1, 2, 3 | T1, T2, T1c, FLAIR, horizontal and vertical flipping | 2D patches | axial | 2D CNN | 2 | - | weighted cross entropy with L1 and L2 regularization | 1 | BraTS 2016 | Test set: WT:0.87, TC:0.75, ET:0.71 |
| FCNN | | | | | | | | | | | |
| Kamnitsas et al. (2016) | 3 | T1, T2, T1c, FLAIR, flipping around a mid-sagittal plane | 3D multiscale patches | axial | 3D FCNN | 2 | parallel | - | 2 | BraTS 2015 | Without residual connections: WT:0.90, TC:0.75, ET:0.72; with residual connections: WT:0.90, TC:0.76, ET:0.72 |
| Zhao et al. (2016) | 1, 5 | T1c, T2, FLAIR | 2D multiscale patches | axial | 2D FCNN | 2 | cascaded | - | 1 | BraTS 2013 | Validation set: WT:0.86, TC:0.73, ET:0.62; Test set: WT:0.87, TC:0.83, ET:0.76 |
| Chang (2016) | 5 | T1, T2, T1c, FLAIR, rotation at 45 degrees, bias field corrected images, additional images | 2D patches | axial | 2D FCNN | 2 | cascaded | softmax with L2 regularization | - | BraTS 2016 | Test set: WT:0.89, TC:0.83, ET:0.78 |
| McKinley et al. (2016) | - | T1, T2, T1c, FLAIR | 2D patches | axial | 2D FCNN | 1 | - | sigmoid | - | BraTS 2012 | Training set: WT:0.87, TC:0.69, ET:0.56 |
| Casamitjana et al. (2016) | - | T1, T2, T1c, FLAIR | 3D multiscale patches | axial | 3D FCNN | 2 | parallel | softmax | - | BraTS 2015 | Training set: 3DNet1 WT:0.90, TC:0.77, ET:0.63; 3DNet2 WT:0.92, TC:0.70, ET:0.74; 3DNet3 WT:0.92, TC:0.84, ET:0.77 |
| Lopez and Ventura (2017) | 5 | T1, T2, T1c, FLAIR | 2D patches | axial | 2D FCNN | 1 | - | dice | - | BraTS 2017 | Validation set: WT:0.78, TC:0.69, ET:0.57; Test set: WT:0.69, TC:0.61, ET:0.51 |
| Pawar et al. (2017) | 4 | T1, T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | softmax | - | BraTS 2017 | Validation set: WT:0.82, TC:0.63, ET:0.58; Test set: WT:0.78, TC:0.58, ET:0.50 |
| Islam and Ren (2017) | - | T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | - | - | BraTS 2017 | Validation set: WT:0.88, TC:0.76, ET:0.69; Test set: WT:0.86, TC:0.70, ET:0.62 |
| Shaikh et al. (2017) | 3 | T1, T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | dice + cross entropy | 1, 2 | BraTS 2017 | Validation set: WT:0.87, TC:0.68, ET:0.65; Test set: WT:0.83, TC:0.65, ET:0.65 |
| Pourreza et al. (2017) | 2, 4 | T1, T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | - | 3, 1 | BraTS 2017 | Validation set: WT:0.86, TC:0.60, ET:0.69; Test set: WT:0.80, TC:0.55, ET:0.55 |
| Dong et al. (2017b) | 3 | T1, T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | dice | - | BraTS 2015 | Training set: WT:0.86, TC:0.86, ET:0.65 |
| Feng and Meyer (2017) | 1, 2, 3 | T1, T2, T1c, FLAIR | 3D patches | axial | 3D FCNN | 1 | - | - | - | BraTS 2017 | Validation set: WT:0.84, TC:0.75, ET:0.66 |
| Jesson and Arbel (2017) | 3 | T1, T2, T1c, FLAIR | 3D images | axial | 3D FCNN | 1 | - | multi scale weighted | - | BraTS 2017 | Validation set: WT:0.90, TC:0.75, ET:0.71; Test set: WT:0.86, TC:0.78, ET:0.71 |
| Isensee et al. (2018) | 3 | T1, T2, T1c, FLAIR | 3D patches | axial | 3D FCNN | 1 | - | weighted dice + cross entropy | 4 | BraTS 2018 | Validation set: WT:0.91, TC:0.86, ET:0.81; Test set: WT:0.88, TC:0.81, ET:0.78 |
| Chen et al. (2018a) | - | T1, T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | cross entropy | - | BraTS 2017 | Training set: WT:0.83, TC:0.73, ET:0.65 |
| Kermi et al. (2018) | 1, 3 | T1, T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | weighted cross entropy + generalized dice loss | - | BraTS 2018 | Validation set: WT:0.87, TC:0.81, ET:0.78; Test set: WT:0.81, TC:0.73, ET:0.65 |
| Stawiaski (2018) | - | T1, T2, T1c, FLAIR | 2D images | axial | 2D FCNN | 1 | - | dice | - | BraTS 2018 | Validation set: WT:0.90, TC:0.85, ET:0.79; Test set: WT:0.88, TC:0.79, ET:0.78 |

Hu et al.
(2018a) \- T1, T2, T1c, FLAIR 3D images axial 3D FCNN 1 - cross entropy - BraTS 2018 Validation set: WT:0.86,TC:0.77,ET:0.72 Nuechterlein and Mehta (2019) 5 T1, T2, T1c, FLAIR 3D images axial 3D FCNN 1 - softmax - BraTS 2018 Test set: WT:0.85,TC:0.68,ET:0.67 Ensemble of CNNs Havaei et al. (2017) 1, 2 on (T1,T1c), 3 T1, T2, T1c, FLAIR 2D multiscale patches axial 2D CNN 3 cascaded, parallel softmax with L1 and L2 regularization - BraTS 2013 Training set-HGG: WT:0.79,TC:0.68,ET:0.57 Training set-LGG: WT:0.81,TC:0.75,ET:0.54 Kamnitsas et al. (2017) 3,2,5 and its different combination T1, T2, T1c, FLAIR 3D multiscale patches axial 3D FCNN 7 parallel - 1 BraTS 2017 Validation set: WT:0.90,TC:0.80,ET:0.74 Test set: WT:0.87,TC:0.79,ET:0.73 Zhao et al. (2017) 1, 5 T1c, T2, FLAIR 2D images axial, coronal, sagittal 2D FCNN 3 parallel - 1 BraTS 2017 Validation set: WT:0.89,TC:0.79,ET:0.75 Test set: WT:0.88,TC:0.75,ET:0.76 Myronenko (2018) 3, intensity scale, intensity shift, flip T1, T2, T1c, FLAIR 3D images axial 3D FCNN 1 - weighted average of L2, Dice,KL - BraTS 2018 Validation Set: WT:0.91,TC:0.87,ET:0.82 Test set: WT:0.88,TC:0.82,ET:0.77 McKinley et al. (2018) 3 on the individual volume T1, T2, T1c, FLAIR 2D images axial, coronal, sagittal 2D FCNN 3 parallel - - BraTS 2018 Validation set: WT:0.90,TC:0.85,ET:0.80 Test set: WT:0.89,TC:0.80,ET:0.73 Wang et al. (2017) 3 T1, T2, T1c, FLAIR 3D patches axial, coronal, sagittal 2D FCNN 3 parallel of cascades dice - BraTS 2017 Validation set: WT:0.90,TC:0.83,ET:0.78 Test set: WT:0.87,TC:0.77,ET:0.78 Zhou et al. (2018) \- - 3D multi scale patches axial - 7 parallel - 1, 4 BraTS 2018 Validation set: WT:0.91,TC:0.87,ET:0.81 Test set: WT:0.88,TC:0.80,ET:0.78 Lachinov et al. (2018) 3,1 T1, T2, T1c, FLAIR 2D images axial 2D FCNN 3 cascaded mean dice - BraTS 2018 Validation set: WT:0.91,TC:0.84,ET:0.77 Xu et al. 
(2018) 2,3 T1, T2, T1c, FLAIR 3D patches axial 3D FCNN 3 cascaded cross entropy - BraTS 2018 Validation set: WT:0.90,TC:0.83,ET:0.80 McKinley et al. (2017) 3 T1, T2, T1c, FLAIR 2D images axial 2D FCNN 2 cascaded - 1 BraTS 2017 Validation set: WT:0.88,TC:0.76,ET:0.71 Test set: WT:0.86,TC:0.71,ET:0.71 Colmeiro et al. (2017) 5,3 T1, T2, T1c, FLAIR 3D patches axial 3D FCNN 2 cascaded dice - BraTS 2017 Validation set: WT:0.86,TC:0.69,ET:0.66 Test set: WT:0.82,TC:0.67,ET:0.60 Castillo et al. (2017) 5 T1, T2, T1c, FLAIR 2D multiscale patches axial 3D CNN 4 parallel softmax - BraTS 2017 Validation set: WT:0.88,TC:0.68,ET:0.71 Test set: WT:0.86,TC:0.67,ET:0.65 Kim (2017) \- T1, T2, T1c, FLAIR 2D images axial, coronal, sagittal 2D FCNN 3 parallel dice - BraTS 2017 Validation set: WT:0.88,TC:0.73,ET:0.75 Test set: WT:0.86,TC:0.73,ET:0.72 Casamitjana et al. (2017) \- T1, T2, T1c, FLAIR 3D images axial 3D FCNN 2 cascaded weighted dice + cross entropy - BraTS 2017 Validation set: WT:0.88,TC:0.64,ET:0.71 Hu and Xia (2017) 5 T1, T2, T1c, FLAIR 2D patches axial 2D FCNN 4 parallel to cascaded dice - BraTS 2017 Validation set: WT:0.85,TC:0.70,ET:0.65 Test set: WT:0.81,TC:0.69,ET:0.55 Marcinkiewicz et al. (2018) \- T2, T1c, FLAIR 2D images axial 2D FCNN 2 cascaded dice - BraTS 2018 Validation set: WT:0.90,TC:0.81,ET:0.75 Test set: WT:0.86,TC:0.73,ET:0.65 Ma and Yang (2018) 3,4,5 T2, T1c, FLAIR 3D images axial 3D FCNN 3 parallel to cascaded dice - BraTS 2018 Validation set: WT:0.87,TC:0.77,ET:0.74 Test set: WT:0.81,TC:0.73,ET:0.65 Wang et al. (2018) 3 T1, T2, T1c, FLAIR + transformation and noise blurring 3D images axial 3D FCNN(Test time augmentation) 3 cascaded dice - BraTS 2018 Validation set: WT:0.90,TC:0.86,ET:0.80 Test set: WT:0.88,TC:0.80,ET:0.75 Albiol et al. (2018) 3 T1, T2, T1c, FLAIR 3D patches axial 3D FCNN 4 parallel dice - BraTS 2018 Validation set: WT:0.88,TC:0.78,ET:0.77 Test set: WT:0.85,TC:0.74,ET:0.72 Choudhury et al. 
(2018) 8 T1, T2, T1c, FLAIR 2D images axial, coronal, sagittal 2D FCNN 18 parallel - - BraTS 2018 Validation set: WT:0.88,TC:0.79,ET:0.71
Hu et al. (2018b) 5 T1, T2, T1c, FLAIR 2D images axial, coronal, sagittal 2D FCNN 3 parallel - - BraTS 2018 Validation set: WT:0.88,TC:0.74,ET:0.69 Test set: WT:0.85,TC:0.72,ET:0.66
Mehta and Arbel (2018) 3, 8 T1, T2, T1c, FLAIR 3D images axial 3D FCNN 5 parallel weighted categorical cross entropy - BraTS 2018 Validation set: WT:0.91,TC:0.83,ET:0.79 Test set: WT:0.87,TC:0.77,ET:0.71
Chandra et al. (2019) 4 T1, T2, T1c, FLAIR 3D images axial 3D FCNN 3 parallel generalized dice 2 BraTS 2018 Validation set: WT:0.90,TC:0.81,ET:0.77 Test set: WT:0.83,TC:0.73,ET:0.62
Yao et al. (2019) \- T1, T2, T1c, FLAIR 3D images axial, coronal, sagittal 3D FCNN 3 parallel dice - BraTS 2018 Validation set: WT:0.88,TC:0.81,ET:0.78 Test set: WT:0.86,TC:0.77,ET:0.72
Chen et al. (2018b) 2,4,8 T1, T2, T1c, FLAIR 3D images axial, coronal, sagittal 3D FCNN 3 parallel multi class dice - BraTS 2018 Validation set: WT:0.89,TC:0.83,ET:0.75 Test set: WT:0.84,TC:0.78,ET:0.69
Puch et al. (2019) T1, T2, T1c, FLAIR 3D images axial 3D FCNN 2 parallel categorical cross entropy - BraTS 2018 Validation set: WT:0.90,TC:0.77,ET:0.76 Test set: WT:0.86,TC:0.75,ET:0.69
Kori et al. (2018) 3 T1, T2, T1c, FLAIR 2D images + 3D multi resolution patches axial 3D FCNN 3 parallel - 2,1 BraTS 2018 Validation set: WT:0.89,TC:0.76,ET:0.76 Test set: WT:0.83,TC:0.72,ET:0.69

Figure 15: Validation set DSC comparison for the whole tumor region of CNN methods.

### 5.5 Proposed Architecture for Tumor Segmentation

The authors of this article have adopted a 2D U-net architecture with three layers Agravat and Raval (2019a). Each encoder layer is replaced with a dense module as shown in Fig. 16. In the first phase, the network is trained for the whole tumor for 50 epochs with a dice loss function.
Further, the network parameters are initialized with the whole tumor weights to train the network for the necrosis, enhancing, and edema subregions. This inductive transfer learning approach Pan and Yang (2009) to parameter initialization reduces false positives and speeds up training convergence. In the second phase, these subregion weights are used as initial parameters to train the networks again for the same subregions using a focal loss function. Training continues for 50 epochs for each subregion. From the BraTS 2019 training dataset, 85% of the images are used for training and 15% are kept for validation. The input to the network is 2D slices of size 240x240 from all four modalities. Blank slices are removed as they do not contain any meaningful information. The following updates are applied to the network:

* • A bounding box is applied around the brain to exclude input regions containing only zero-intensity (blank) voxels.
* • A convolution layer is added to the dense module.

The results for the network are in Table 7, which covers minor variations of the network: different input image sizes, different loss functions, and different numbers of training images. The network is trained on 85% of the total images and validated on the remaining 15% of the BraTS 2019 dataset. An online tool provided by the organizers generates the evaluation metrics for the training set as well as for the separate validation set.

Figure 16: Three stage U-net architecture Agravat and Raval (2019a).

Table 7: Comparison of model variations of Agravat and Raval (2019a).
Model | Architectural change | Input Modalities | # Feature maps | Input image size | Loss function | DSC Training Set | DSC Validation Set
---|---|---|---|---|---|---|---
1 | Original | T1, T1c, T2, FLAIR | 32, 64, 128, 64, 32 | 240 x 240 | Dice Loss | WT:0.90 TC:0.86 ET:0.75 | WT:0.72 TC:0.63 ET:0.55
2 | Original | T1, T1c, T2, FLAIR | 32, 64, 128, 64, 32 | 156 x 200 | Dice Loss | WT:0.90 TC:0.86 ET:0.75 | WT:0.72 TC:0.63 ET:0.55
3 | dense module + convolution layer | T1, T1c, T2, FLAIR | 32, 64, 128, 64, 32 | 156 x 200 | Dice Loss | WT:0.89 TC:0.84 ET:0.71 | WT:0.71 TC:0.61 ET:0.56
4 | dense module + convolution layer | T1, T1c, T2, FLAIR | 32, 64, 128, 64, 32 | 156 x 200 | Focal Loss | WT:0.93 TC:0.90 ET:0.83 | WT:0.75 TC:0.65 ET:0.60
5 | dense module + convolution layer | T1c, T2, FLAIR | 32, 64, 128, 64, 32 | 156 x 200 | Dice Loss | WT:0.87 TC:0.81 ET:0.67 | WT:0.71 TC:0.58 ET:0.51
6 | dense module + convolution layer | T1c, T2, FLAIR | 32, 64, 128, 64, 32 | 156 x 200 | Focal Loss | WT:0.92 TC:0.89 ET:0.79 | WT:0.76 TC:0.66 ET:0.60

A DSC comparison of the training and validation sets for these variations is shown in Fig. 17. The segmentation results show that the method overfits the training data and does not generalize well to the unseen validation set. Segmentation results of sample images from the training set, for correct as well as incorrect segmentation, are shown in Fig. 18 and Fig. 19. The network fails to distinguish between subregions where the tumor appears homogeneous across all its subregions and where its intensity matches that of normal brain tissue.

(a) (b) (c) Figure 17: DSC comparison for training and validation set a) whole tumor b) tumor core c) enhancing tumor.

(a) (b) (c) (d) (e) (f) (g) (h) Figure 18: Training set image, correct segmentation a) Original FLAIR image b) ground truth segmentation c) model 1 d) model 2 e) model 3 f) model 4 g) model 5 h) model 6.
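The Dice and focal losses used by the model variants above can be written compactly. The following is a generic NumPy sketch of both, not the authors' exact implementation; gamma = 2 is a typical focal-loss setting:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - DSC, where DSC = 2|P∩T| / (|P| + |T|); pred holds probabilities."""
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Mean focal loss -(1 - p_t)^gamma * log(p_t), down-weighting easy voxels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    p_t = np.where(target == 1, pred, 1.0 - pred)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

pred = np.array([0.9, 0.8, 0.2, 0.1])    # predicted foreground probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # binary ground truth
```

The `(1 - p_t)^gamma` factor is what makes focal loss attractive for the class-imbalanced tumor/background setting: confidently classified background voxels contribute almost nothing to the gradient.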
(a) (b) (c) (d) (e) (f) (g) (h) Figure 19: Training set image, incorrect segmentation a) Original FLAIR image b) ground truth segmentation c) model 1 d) model 2 e) model 3 f) model 4 g) model 5 h) model 6.

### 5.6 End-to-End methods for tumor segmentation and OS prediction

Since 2017, the second task of survival prediction has been part of the challenge. Some methods offer end-to-end solutions to the challenge, i.e., segmentation followed by survival prediction. In Shboul et al. (2017), an ensemble of RF and CNN segments the tumor, and a random forest regressor (RFR) predicts the overall survival days using 240 out of 1366 different features (Kaplan-Meier analysis was used to find relevant and useful features). Authors in Jungo et al. (2017) modified U-net with Full Resolution Residual Network (FRRN) and Residual Unit (RU) units along with weight scaling dropout. The survival prediction ANN worked with a linear activation function on four selected features. A variant of U-net was used in Isensee et al. (2017), which took 3D input and included a context module and a localization module at each level of the architecture. The segmentation result was generated from an element-wise summation of the outputs of the decoder layers. Survival prediction was the average of an RFR and multilayer perceptrons (MLPs): the RFR was trained on 517 features extracted from three tumor sub-regions using the radiomics package Van Griethuysen et al. (2017), and the MLP output was the average of 15 MLP designs with three hidden layers, each with 64 neurons. Authors in Baid et al. (2018) implemented a 3D U-net with a three-stage encoder-decoder architecture. A regression-model-based radiomics feature selection trains the MLP for OS prediction. In Weninger et al. (2018), two 3D U-nets used a four-stage encoder-decoder architecture; the first network segmented the whole tumor, and the second one segmented the tumor subregions.
In addition to the four conventional modalities, they used an additional input image, the T1c-T1 subtracted image, which provided extra information for the tumor core region. They used only the age feature to predict the OS with a linear regressor. The approach presented in Li and Shen (2017) used an FCNN named FCRN-101, derived from pre-trained SegNet and U-net architectures. A three-path network combined the results of the three views, i.e., axial, coronal, and sagittal. The OS prediction used SPNet, a fully connected CNN, which took the four modalities and the network segmentation result as input to predict the OS probability. The authors in Feng et al. (2018) used an ensemble of six 3D U-net type networks with variations in the input size, number of encoding/decoding blocks, and feature maps at every layer. The OS prediction used linear regression with ground truth image volume, surface area, age, and resection status; features were input after z-score normalization. Authors in Puybareau et al. (2018) implemented an FCN and generated results for the three axes, using majority voting to produce the final segmentation. For OS prediction, ten features (focusing on necrosis and active tumor) were generated from the segmentation results, and their PCA means and standard deviations were used to train an RF on the GTR images. In Sun et al. (2018) an ensemble of three networks (U-net Ronneberger et al. (2015), DFKZNet Isensee et al. (2017), and CA-CNN Wang et al. (2017)) was used, and majority voting was applied for the final segmentation. The OS prediction used an RF with 14 radiomics features selected from the various modality images, Laplacian-of-Gaussian images, and wavelet-decomposed images. Authors in Agravat and Raval (2019b) implemented a 2D U-net architecture with three stages for tumor segmentation, and age, volumetric, and shape features of the whole tumor were used to predict the OS.
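Many of these OS pipelines reduce to fitting a simple regressor on a handful of scalar features, e.g. tumor volume, surface area, and age as in Feng et al. (2018). A minimal ordinary-least-squares sketch; the feature values and survival days below are synthetic, illustrative numbers, not data from any surveyed study:

```python
import numpy as np

# Hypothetical per-patient features: [tumor volume (cm^3), surface area (cm^2), age].
X = np.array([
    [60.0, 120.0, 55.0],
    [35.0,  80.0, 62.0],
    [90.0, 150.0, 48.0],
    [20.0,  60.0, 70.0],
])
y = np.array([250.0, 400.0, 150.0, 480.0])  # overall survival in days (synthetic)

# Ordinary least squares with an intercept column appended.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_os(features):
    """Predict survival days from [volume, area, age]."""
    return float(np.append(features, 1.0) @ coef)
```

The real submissions differ mainly in the regressor (RFR, MLP, SVM) and in how many radiomics features are fed in, but the overall shape of the pipeline is the same.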
None of these approaches used location-based information about the tumor and its sub-regions. In contrast, Kao et al. (2018) used twenty-one brain parcellation regions as input along with the four MR modalities, emphasizing the number of tumor regions in those specific parcellation areas. These twenty-five input channels were given as input to an ensemble of 3D U-nets as well as an ensemble of DeepMedic architectures with different kernel and input patch sizes. Tractographic features from the network-segmented regions trained SVM classifiers with a linear kernel to predict the OS. Authors in Agravat and Raval (2019a) implemented a 2D U-net of three stages with dense blocks at every encoder level, and used the feature set of Agravat and Raval (2019b) for the necrosis tumor sub-region for the OS prediction. The authors in Soltaninejad et al. (2017) combined features from a VGG16-based FCN with texton maps and supplied them to an RF classifier to generate the segmentation result. RF was also used for OS prediction, using volumetric features as well as the age of the patient. Authors in Agravat and Raval (2020), Dai et al. (2018), Hua et al. (2018), Islam et al. (2018), Raval et al. (2021), Zhou et al. (2017) attempted an end-to-end segmentation approach. Table 5.6 compares the segmentation results of end-to-end methods. The pre-processing and post-processing techniques are as specified in Section 5.3, and Table 8 provides details related to survival prediction.

End-to-end Methods, Task 1: Brain Tumor Segmentation (columns: Ref., Pre-processing, Input Modality + Augmentation, Patch/Image, Input view, Network Architecture, # Networks, Ensemble Type, Loss Function, Post-processing, Dataset, DSC Mean):
Li and Shen (2017) 6 T1, T2, T1c, FLAIR 2D images axial 2D FCNN 3 Parallel weighted focal - BraTS 2017 Validation set: ET:0.75,WT:0.88,TC:0.71 Test set: ET:0.69,WT:0.88,TC:0.71
Isensee et al. (2017) 3 followed by clipping and scaling T1, T2, T1c, FLAIR 3D patches axial 3D FCNN 1 - weighted dice - BraTS 2017 Validation set: WT:0.90,TC:0.80,ET:0.73 Test set: WT:0.86,TC:0.78,ET:0.65
Shboul et al. (2017) 2,3 T1, T2, T1c, FLAIR 2D patches axial 2D CNN + RF 1 - softmax - BraTS 2017 HGG Test set: ET:0.73,WT:0.83,TC:0.72
Jungo et al. (2017) \- T1, T2, T1c, FLAIR 2D images axial 2D FCNN 1 - - - BraTS 2017 Validation set: WT:0.90,TC:0.79,ET:0.75 Test set: WT:0.87,TC:0.74,ET:0.67
Soltaninejad et al. (2017) 1,3,4 T1, T2, T1c, FLAIR 2D images axial 2D FCNN 1 - - - BraTS 2017 Validation set: WT:0.86,TC:0.78,ET:0.66 Test set: WT:0.85,TC:0.69,ET:0.67
Zhou et al. (2017) 4,5 T1, T2, T1c, FLAIR 2D + 3D patches axial 2D FCNN 5 cascaded to parallel - 1,4 BraTS 2017 Training set: WT:0.88,TC:0.72,ET:0.73
Feng et al. (2018) 2,7 T1, T2, T1c, FLAIR, Random Flip 3D patches axial 3D FCNN 6 parallel weighted uniform - BraTS 2018 Validation set: WT:0.91,TC:0.84,ET:0.79
Puybareau et al. (2018) 8 T1, T2, T1c 2D images axial 2D FCNN 1 - softmax 3 BraTS 2018 Training set: WT:0.82
Sun et al. (2018) 3, random flipping, gaussian noise T1, T2, T1c, FLAIR 3D images axial 3D FCNN 3 parallel - - BraTS 2018 Validation set: WT:0.91,TC:0.85,ET:0.81 Test set: WT:0.88,TC:0.80,ET:0.72
Baid et al. (2018) 2,3 T1, T2, T1c, FLAIR 3D patches axial 3D FCNN 1 - \- 1,3, false positives removal BraTS 2018 Validation set: WT:0.88,TC:0.83,ET:0.75 Test set: WT:0.85,TC:0.77,ET:0.67
Weninger et al. (2018) 3 T1, T2, T1c, FLAIR, (T1c-T1) 3D patches axial 3D FCNN 2 cascaded dice - BraTS 2018 Validation set: WT:0.89,TC:0.76,ET:0.71 Test set: WT:0.84,TC:0.73,ET:0.62
Kao et al.
(2018) \- T1, T2, T1c, FLAIR, 21 binary brain parcellation images 3D patches axial 3D FCNN 26 parallel - - BraTS 2018 Validation set: WT:0.91,TC:0.81,ET:0.79 Test set: WT:0.88,TC:0.79,ET:0.75
Hua et al. (2018) 2,4,8 T1, T2, T1c, FLAIR 3D images axial 3D FCNN 5 parallel to cascaded focal 1 BraTS 2018 Validation set: WT:0.90,TC:0.84,ET:0.77 Test set: WT:0.87,TC:0.80,ET:0.74
Banerjee et al. (2018) \- T1, T2, T1c, FLAIR 2D images axial, coronal, sagittal 2D FCNN 3 parallel weighted cross entropy + generalized dice - BraTS 2018 Validation set: WT:0.88,TC:0.80,ET:0.77 Test set: WT:0.87,TC:0.80,ET:0.74
Islam et al. (2018) 3 T2, T1c, FLAIR 2D images axial 2D FCNN 1 - - - BraTS 2018 Validation set: WT:0.90,TC:0.83,ET:0.77 Test set: WT:0.87,TC:0.77,ET:0.70
Dai et al. (2018) 1,3 T1, T2, T1c, FLAIR 3D images axial 2D FCNN 9 parallel confusion + multi class dice 1,3 BraTS 2018 Validation set: WT:0.91,TC:0.85,ET:0.81 Test set: WT:0.87,TC:0.79,ET:0.74
Agravat and Raval (2019b) 3 T1, T2, T1c, FLAIR 2D images axial 2D FCNN 1 - dice - BraTS 2018 -
Agravat and Raval (2019a) 3 T1, T2, T1c, FLAIR 2D images axial 2D FCNN 1 - focal - BraTS 2019 Validation set: WT:0.73,TC:0.63,ET:0.59 Test set: WT:0.72,TC:0.66,ET:0.64

Table 8: End-to-end Methods, Task 2: OS Prediction.

Ref. | Method | # features/ type of network | Dataset | Accuracy (%)
---|---|---|---|---
Li and Shen (2017) | 2D CNN | - | BraTS 2017 | Valid:55 Test:45
Isensee et al. (2017) | RFR and MLP | 66 | BraTS 2017 | Valid:52.6
Shboul et al. (2017) | RFR | 240 | BraTS 2017 | Valid:66.7 Test:57.9
Jungo et al. (2017) | ANN | 4 | BraTS 2017 | Valid:42.4 Test:56.8
Feng et al. (2018) | Linear Regression Model | 9 | BraTS 2018 | Valid:32.1
Puybareau et al. (2018) | PCA + RF | 10 | BraTS 2018 | Test:61
Sun et al. (2018) | RF | 14 | BraTS 2018 | Valid:46.4 Test:61
Baid et al. (2018) | MLP | 468 | BraTS 2018 | Valid:57.1 Test:55.8
Weninger et al. (2018) | Linear Regressor | 1 | BraTS 2018 | Valid:50 Test:55.8
Kao et al.
(2018) | SVM with linear kernel | - | BraTS 2018 | Valid:35.7 Test:41.6
Agravat and Raval (2019b) | RF | 13 | BraTS 2018 | 59
Agravat and Raval (2019a) | RF | 13 | BraTS 2019 | Valid:58.6 Test:57.9
Soltaninejad et al. (2017) | RF | 4 | BraTS 2017 | Valid:48.5 Test:41.1
Zhou et al. (2017) | CNN+XGBoost | 183 + CNN features | BraTS 2017 | Training:63.3
Hua et al. (2018) | ensemble of XGBoost, SVM, MLP, DT, RF, LDA | 900 | BraTS 2018 | Test:51.9
Banerjee et al. (2018) | MLP | 83 | BraTS 2018 | Valid:54
Islam et al. (2018) | ANN | 50 | BraTS 2018 | Valid:67.9 Test:46.8
Dai et al. (2018) | XGBoost | 195 | BraTS 2017 | Valid:50

## 6 Limitations of tumor segmentation and OS prediction

The tumor segmentation result depends on the architectural design (from shallow CNNs to ensemble/cascaded CNNs), the amount of training data, input pre-processing, the type of input (2D/3D), network optimization, and post-processing of the generated output. Still, DL methods have certain limitations, which include:

* • Over-fitting: A common problem of DNN-based approaches is over-fitting, where a model performs excellently on the training dataset but does not perform well on new data. It may occur due to the unavailability of an adequate amount of labelled training data for brain tumor segmentation. The over-fitting problem can be handled either by reducing the network complexity (in terms of network layers and parameters) or by generating an ample amount of training data using image augmentation techniques. Augmentation techniques produce new images and the corresponding ground truth by applying transformations such as scaling, rotation, translation, brightness variation, elastic deformations, horizontal flipping, and mirroring.
* • Class imbalance: Class imbalance is another issue in tumor segmentation, where the background class dominates the foreground class (tumor).
Class imbalance can be handled by proper training data sampling, improved loss functions, and augmentation techniques.

The highest reported accuracy for the survival prediction task does not exceed 63% Feng et al. (2018). This is due to the dependency on features extracted from the segmentation results: incorrect segmentation leads to wrong feature extraction for survival prediction. The biological importance of the extracted features also plays an important role; if the relevance of the features is not known correctly, the survival prediction cannot be accurate. Besides, the importance of the tumor sub-regions plays a role in feature extraction. The dataset includes only pre-operative scans and does not give any other information, like the success of tumor removal, post-operative treatments, and the response of the patients to such treatments. The chances of tumor recurrence are high if the patient is exposed to a radiation environment; features related to this may further improve the OS prediction.

## 7 Conclusion and Future Research Directions

The availability of the benchmark dataset (BraTS) has grown the field of computer-assisted medical image analysis for brain tumor segmentation. This article covers a detailed literature survey of brain tumor segmentation techniques. Tumor segmentation is approached with various techniques, both semi-automated and automated. The semi-automated methods work on input provided by the user and suffer from limitations like manual seed point selection and atlas creation. Methods have gradually been improved to include machine learning techniques like clustering, RF, and ANN. The limitation of such methods is the selection of features for training, which requires knowledge of the biological information in the image. The need for domain knowledge is removed by CNNs, a class of deep neural networks.
A CNN successively extracts high-level features at deeper layers from the low-level features of the preceding layers. This feature learning has improved the performance of CNNs for tumor segmentation. Its variants, FCNNs and ensembles of CNNs/FCNNs, improve upon the basic CNN. Details of all the methods, along with pre-processing, post-processing, prominent highlights, and evaluation measures, are given in Table 4.6, Table 6, Table 5.6 and Table 8. An ensemble generates robust segmentation results as well as improvements in the accuracy of the network. All these methods can generate spurious segmentation results, which are improved with the help of post-processing techniques like connected component analysis, spatial regularization, and morphological operations to fine-tune the output.

Class imbalance is the primary concern in the training of CNNs for medical image analysis; balanced input selection and a loss function that emphasizes the positive class can resolve the issue. Due to the unavailability of a large amount of training data, the network may overfit the training data; regularization and dropout resolve such issues. Moreover, as Bayesian CNNs Shridhar et al. (2019) handle epistemic and aleatoric uncertainties in the presence of limited data and knowledge, this type of network is useful for semantic segmentation. The adaptive loss of Barron (2019) approximates a variety of loss functions with a single latent variable; this loss can also be considered for semantic segmentation. Despite their popularity, such methods have the following limitations:

* • Computational efficiency.
* • Memory requirement: As the depth of the network increases, the number of network parameters increases. This increase requires additional memory as well as time to tune the parameters in each epoch.
* • Requirement of an ample amount of annotated training data: Annotated data generation is itself a challenge, as the process is very time consuming and the annotation results may vary with the expertise of the annotator. In addition, specific annotation tools are required for proper delineation and annotation. In place of voxel-level annotation, image-level labelling for the presence/absence of a tumor is less time consuming and does not require as much expertise or specialized annotation tools. The use of mixed supervision Lee et al. (2019), i.e., image labelling in addition to voxel labelling, may help the network learn relevant features for segmentation with less annotated data.

###### Acknowledgements.

The authors would like to thank NVIDIA Corporation for donating the Quadro K5200 and Quadro P5000 GPUs used for this research, Dr. Krutarth Agravat (Medical Officer, Essar Ltd) for clearing our doubts related to medical concepts, Po-yu Kao and Ujjawal Baid for their continuous support and help, and Dr. Spyros and his entire team for the BraTS dataset. The authors acknowledge continuous support from Professor Sanjay Chaudhary, Professor N. Padmanabhan, and Professor Manjunath Joshi for this work.

## Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

## References

* Abadi et al. (2016) Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, Devin M, Ghemawat S, Irving G, Isard M, et al. (2016) Tensorflow: A system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp 265–283
* Agn et al. (2015) Agn M, Puonti O, af Rosenschöld PM, Law I, Van Leemput K (2015) Brain tumor segmentation using a generative model with an rbm prior on tumor shape. In: BrainLes 2015, Springer, pp 168–180
* Agravat and Raval (2019a) Agravat R, Raval MS (2019a) Brain tumor segmentation and survival prediction.
arXiv preprint arXiv:1909.09399
* Agravat and Raval (2020) Agravat R, Raval MS (2020) 3d semantic segmentation of brain tumor for overall survival prediction. arXiv preprint arXiv:2008.11576
* Agravat and Raval (2016) Agravat RR, Raval MS (2016) Brain tumor segmentation-towards a better life. CSI Communication 40:31–35
* Agravat and Raval (2018) Agravat RR, Raval MS (2018) Deep learning for automated brain tumor segmentation in mri images. In: Soft Computing Based Medical Image Analysis, Elsevier, pp 183–201
* Agravat and Raval (2019b) Agravat RR, Raval MS (2019b) Prediction of overall survival of brain tumor patients. In: TENCON 2019-2019 IEEE Region 10 Conference (TENCON), IEEE, pp 31–35
* Albiol et al. (2018) Albiol A, Albiol A, Albiol F (2018) Extending 2d deep learning architectures to 3d image segmentation problems. In: International MICCAI Brainlesion Workshop, Springer, pp 73–82
* Badrinarayanan et al. (2017) Badrinarayanan V, Kendall A, Cipolla R (2017) Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39(12):2481–2495
* Baid et al. (2018) Baid U, Talbar S, Rane S, Gupta S, Thakur MH, Moiyadi A, Thakur S, Mahajan A (2018) Deep learning radiomics algorithm for gliomas (drag) model: a novel approach using 3d unet based deep convolutional neural network for predicting survival in gliomas. In: International MICCAI Brainlesion Workshop, Springer, pp 369–379
* Bakas et al. (2015) Bakas S, Zeng K, Sotiras A, Rathore S, Akbari H, Gaonkar B, Rozycki M, Pati S, Davatzikos C (2015) Glistrboost: combining multimodal mri segmentation, registration, and biophysical tumor growth modeling with gradient boosting machines for glioma segmentation. In: BrainLes 2015, Springer, pp 144–155
* Bakas et al.
(2017a) Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby J, Freymann J, Farahani K, Davatzikos C (2017a) Segmentation labels and radiomic features for the pre-operative scans of the tcga-gbm collection. the cancer imaging archive (2017) * Bakas et al. (2017b) Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby J, Freymann J, Farahani K, Davatzikos C (2017b) Segmentation labels and radiomic features for the pre-operative scans of the tcga-lgg collection. The Cancer Imaging Archive 286 * Bakas et al. (2017c) Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby JS, Freymann JB, Farahani K, Davatzikos C (2017c) Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. Scientific data 4:170117 * Bakas et al. (2018) Bakas S, Reyes M, Jakab A, Bauer S, Rempfler M, Crimi A, Shinohara RT, Berger C, Ha SM, Rozycki M, et al. (2018) Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge. arXiv preprint arXiv:181102629 * Banerjee et al. (2018) Banerjee S, Mitra S, Shankar BU (2018) Multi-planar spatial-convnet for segmentation and survival prediction in brain cancer. In: International MICCAI Brainlesion Workshop, Springer, pp 94–104 * Barron (2019) Barron JT (2019) A general and adaptive robust loss function. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 4331–4339 * Bastien et al. (2012) Bastien F, Lamblin P, Pascanu R, Bergstra J, Goodfellow I, Bergeron A, Bouchard N, Warde-Farley D, Bengio Y (2012) Theano: new features and speed improvements. arXiv preprint arXiv:12115590 * Bauer et al. (2012) Bauer S, Fejes T, Slotboom J, Wiest R, Nolte LP, Reyes M (2012) Segmentation of brain tumor images based on integrated hierarchical classification and regularization. In: MICCAI BraTS Workshop. Nice: Miccai Society, p 11 * Bernal et al. 
(2019) Bernal J, Kushibar K, Asfaw DS, Valverde S, Oliver A, Martí R, Lladó X (2019) Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review. Artificial intelligence in medicine 95:64–81 * Bharath et al. (2017) Bharath HN, Colleman S, Sima DM, Van Huffel S (2017) Tumor segmentation from multimodal mri using random forest with superpixel and tensor based feature extraction. In: International MICCAI Brainlesion Workshop, Springer, pp 463–473 * National Center for Biotechnology Information (2020, accessed December 30, 2020) National Library of Medicine. URL https://pubmed.ncbi.nlm.nih.gov/ * Buendia et al. (2013) Buendia P, Taylor T, Ryan M, John N (2013) A grouping artificial immune network for segmentation of tumor images. Multimodal Brain Tumor Segmentation 1 * Casamitjana et al. (2016) Casamitjana A, Puch S, Aduriz A, Vilaplana V (2016) 3d convolutional neural networks for brain tumor segmentation: a comparison of multi-resolution architectures. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 150–161 * Casamitjana et al. (2017) Casamitjana A, Catà M, Sánchez I, Combalia M, Vilaplana V (2017) Cascaded v-net using roi masks for brain tumor segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 381–391 * Castillo et al. (2017) Castillo LS, Daza LA, Rivera LC, Arbeláez P (2017) Brain tumor segmentation and parsing on mris using multiresolution neural networks. In: International MICCAI Brainlesion Workshop, Springer, pp 332–343 * Rochester Medical Center (2019, accessed April 6, 2020) Health Encyclopedia. URL https://www.urmc.rochester.edu/encyclopedia/content.aspx * Chandra et al. (2019) Chandra S, Vakalopoulou M, Fidon L, Battistella E, Estienne T, Sun R, Robert C, Deutsch E, Paragios N (2019) Context aware 3d cnns for brain tumor segmentation.
brainles 2018. Springer LNCS 11384:299–310 * Chang (2016) Chang PD (2016) Fully convolutional deep residual neural networks for brain tumor segmentation. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 108–118 * Chen et al. (2018a) Chen L, Bentley P, Mori K, Misawa K, Fujiwara M, Rueckert D (2018a) Drinet for medical image segmentation. IEEE transactions on medical imaging 37(11):2453–2462 * Chen et al. (2017) Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2017) Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40(4):834–848 * Chen et al. (2018b) Chen W, Liu B, Peng S, Sun J, Qiao X (2018b) S3d-unet: separable 3d u-net for brain tumor segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 358–368 * Chollet et al. (2018) Chollet F, et al. (2018) Keras: The python deep learning library. Astrophysics Source Code Library * Choudhury et al. (2018) Choudhury AR, Vanguri R, Jambawalikar SR, Kumar P (2018) Segmentation of brain tumors using deeplabv3+. In: International MICCAI Brainlesion Workshop, Springer, pp 154–167 * Clark et al. (1998) Clark MC, Hall LO, Goldgof DB, Velthuizen R, Murtagh FR, Silbiger MS (1998) Automatic tumor segmentation using knowledge-based techniques. IEEE transactions on medical imaging 17(2):187–201 * Colmeiro et al. (2017) Colmeiro RR, Verrastro C, Grosges T (2017) Multimodal brain tumor segmentation using 3d convolutional networks. In: International MICCAI Brainlesion Workshop, Springer, pp 226–240 * Cordier et al. (2013) Cordier N, Menze B, Delingette H, Ayache N (2013) Patch-based segmentation of brain tissues * Corso et al. (2008) Corso JJ, Sharon E, Dube S, El-Saden S, Sinha U, Yuille A (2008) Efficient multilevel brain tumor segmentation with integrated bayesian model classification. 
IEEE transactions on medical imaging 27(5):629–640 * Cuadra et al. (2004) Cuadra MB, Pollo C, Bardera A, Cuisenaire O, Villemure JG, Thiran JP (2004) Atlas-based segmentation of pathological mr brain images using a model of lesion growth. IEEE transactions on medical imaging 23(10):1301–1314 * Dai et al. (2018) Dai L, Li T, Shu H, Zhong L, Shen H, Zhu H (2018) Automatic brain tumor segmentation with domain adaptation. In: International MICCAI Brainlesion Workshop, Springer, pp 380–392 * Dera et al. (2016) Dera D, Raman F, Bouaynaya N, Fathallah-Shaykh HM (2016) Interactive semi-automated method using non-negative matrix factorization and level set segmentation for the brats challenge. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 195–205 * Dieleman et al. (2015) Dieleman S, Schlüter J, Raffel C, Olson E, Sønderby SK, Nouri D, et al. (2015) Lasagne: First release. DOI 10.5281/zenodo.27878, URL http://dx.doi.org/10.5281/zenodo.27878 * Dong et al. (2017a) Dong H, Supratak A, Mai L, Liu F, Oehmichen A, Yu S, Guo Y (2017a) Tensorlayer: a versatile library for efficient deep learning development. In: Proceedings of the 25th ACM international conference on Multimedia, pp 1201–1204 * Dong et al. (2017b) Dong H, Yang G, Liu F, Mo Y, Guo Y (2017b) Automatic brain tumor detection and segmentation using u-net based fully convolutional networks. In: annual conference on medical image understanding and analysis, Springer, pp 506–517 * Doyle et al. (2013) Doyle S, Vasseur F, Dojat M, Forbes F (2013) Fully automatic brain tumor segmentation from multiple mr sequences using hidden markov fields and variational em. Procs NCI-MICCAI BraTS pp 18–22 * Ellwaa et al. (2016) Ellwaa A, Hussein A, AlNaggar E, Zidan M, Zaki M, Ismail MA, Ghanem NM (2016) Brain tumor segmantation using random forest trained on iteratively selected patients. 
In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 129–137 * Feng and Meyer (2017) Feng X, Meyer C (2017) Patch-based 3d u-net for brain tumor segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) * Feng et al. (2018) Feng X, Tustison N, Meyer C (2018) Brain tumor segmentation using an ensemble of 3d u-nets and overall survival prediction using radiomic features. In: International MICCAI Brainlesion Workshop, Springer, pp 279–288 * Festa et al. (2013) Festa J, Pereira S, Mariz JA, Sousa N, Silva CA (2013) Automatic brain tumor segmentation of multi-sequence mr images using random decision forests. Proceedings of NCI-MICCAI BRATS 1:23–26 * Geremia et al. (2012) Geremia E, Menze BH, Ayache N, et al. (2012) Spatial decision forests for glioma segmentation in multi-channel mr images. MICCAI Challenge on Multimodal Brain Tumor Segmentation 34 * Goetz et al. (2014) Goetz M, Weber C, Bloecher J, Stieltjes B, Meinzer HP, Maier-Hein K (2014) Extremely randomized trees based brain tumor segmentation. Proceeding of BRATS challenge-MICCAI pp 006–011 * Goyal et al. (2018) Goyal B, Agrawal S, Sohi B (2018) Noise issues prevailing in various types of medical images. Biomedical & Pharmacology Journal 11(3):1227 * Guo et al. (2013) Guo X, Schwartz L, Zhao B (2013) Semi-automatic segmentation of multimodal brain tumor using active contours. Multimodal Brain Tumor Segmentation 27 * Hamamci and Unal (2012) Hamamci A, Unal G (2012) Multimodal brain tumor segmentation using the tumor-cut method on the brats dataset. Proc MICCAI-BRATS pp 19–23 * Hamamci et al. (2011) Hamamci A, Kucuk N, Karaman K, Engin K, Unal G (2011) Tumor-cut: segmentation of brain tumors on contrast enhanced mr images for radiosurgery applications. IEEE transactions on medical imaging 31(3):790–804 * Havaei et al. 
(2015) Havaei M, Dutil F, Pal C, Larochelle H, Jodoin PM (2015) A convolutional neural network approach to brain tumor segmentation. In: BrainLes 2015, Springer, pp 195–208 * Havaei et al. (2017) Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, Pal C, Jodoin PM, Larochelle H (2017) Brain tumor segmentation with deep neural networks. Medical image analysis 35:18–31 * Healthcareplex (2016, accessed April 10, 2020) CT Scan vs. MRI. URL https://healthcareplex.com/mri-vs-ct-scan/ * Johns Hopkins Medicine (2019, accessed April 6, 2020) Health. URL https://www.hopkinsmedicine.org/health/conditions-and-diseases/basics-of-brain-tumors * Hu et al. (2018a) Hu X, Li H, Zhao Y, Dong C, Menze BH, Piraud M (2018a) Hierarchical multi-class segmentation of glioma images using networks with multi-level activation function. In: International MICCAI Brainlesion Workshop, Springer, pp 116–127 * Hu and Xia (2017) Hu Y, Xia Y (2017) 3d deep neural network-based brain tumor segmentation using multimodality magnetic resonance sequences. In: International MICCAI Brainlesion Workshop, Springer, pp 423–434 * Hu et al. (2018b) Hu Y, Liu X, Wen X, Niu C, Xia Y (2018b) Brain tumor segmentation on multimodal mr imaging using multi-level upsampling in decoder. In: International MICCAI Brainlesion Workshop, Springer, pp 168–177 * Hua et al. (2018) Hua R, Huo Q, Gao Y, Sun Y, Shi F (2018) Multimodal brain tumor segmentation using cascaded v-nets. In: International MICCAI Brainlesion Workshop, Springer, pp 49–60 * Huang et al. (2017) Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700–4708 * Isensee et al.
(2017) Isensee F, Kickingereder P, Wick W, Bendszus M, Maier-Hein KH (2017) Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge. In: International MICCAI Brainlesion Workshop, Springer, pp 287–297 * Isensee et al. (2018) Isensee F, Kickingereder P, Wick W, Bendszus M, Maier-Hein KH (2018) No new-net. In: International MICCAI Brainlesion Workshop, Springer, pp 234–244 * Islam and Ren (2017) Islam M, Ren H (2017) Multi-modal pixelnet for brain tumor segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 298–308 * Islam et al. (2018) Islam M, Jose VJM, Ren H (2018) Glioma prognosis: Segmentation of the tumor and survival prediction using shape, geometric and clinical information. In: International MICCAI Brainlesion Workshop, Springer, pp 142–153 * Janssen and Hoff (2012) Janssen PM, Hoff EI (2012) Teaching neuroimages: Subacute intracerebral hemorrhage mimicking brain tumor. Neurology 79(21):e183–e183 * Jesson and Arbel (2017) Jesson A, Arbel T (2017) Brain tumor segmentation using a 3d fcn with multi-scale loss. In: International MICCAI Brainlesion Workshop, Springer, pp 392–402 * Jia et al. (2014) Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, Guadarrama S, Darrell T (2014) Caffe: Convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM international conference on Multimedia, pp 675–678 * Jungo et al. (2017) Jungo A, McKinley R, Meier R, Knecht U, Vera L, Pérez-Beteta J, Molina-García D, Pérez-García VM, Wiest R, Reyes M (2017) Towards uncertainty-assisted brain tumor segmentation and survival prediction. In: International MICCAI Brainlesion Workshop, Springer, pp 474–485 * Kamnitsas et al. (2016) Kamnitsas K, Ferrante E, Parisot S, Ledig C, Nori AV, Criminisi A, Rueckert D, Glocker B (2016) Deepmedic for brain tumor segmentation. 
In: International workshop on Brainlesion: Glioma, multiple sclerosis, stroke and traumatic brain injuries, Springer, pp 138–149 * Kamnitsas et al. (2017) Kamnitsas K, Bai W, Ferrante E, McDonagh S, Sinclair M, Pawlowski N, Rajchl M, Lee M, Kainz B, Rueckert D, et al. (2017) Ensembles of multiple models and architectures for robust brain tumour segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 450–462 * Kao et al. (2018) Kao PY, Ngo T, Zhang A, Chen JW, Manjunath B (2018) Brain tumor segmentation and tractographic feature extraction from structural mr images for overall survival prediction. In: International MICCAI Brainlesion Workshop, Springer, pp 128–141 * Kermi et al. (2018) Kermi A, Mahmoudi I, Khadir MT (2018) Deep convolutional neural networks using u-net for automatic brain tumor segmentation in multimodal mri volumes. In: International MICCAI Brainlesion Workshop, Springer, pp 37–48 * Kim (2017) Kim G (2017) Brain tumor segmentation using deep fully convolutional neural networks. In: International MICCAI Brainlesion Workshop, Springer, pp 344–357 * Kleesiek et al. (2014) Kleesiek J, Biller A, Urban G, Kothe U, Bendszus M, Hamprecht F (2014) Ilastik for multi-modal brain tumor segmentation. Proceedings MICCAI BraTS (brain tumor segmentation challenge) pp 12–17 * Kori et al. (2018) Kori A, Soni M, Pranjal B, Khened M, Alex V, Krishnamurthi G (2018) Ensemble of fully convolutional neural network for brain tumor segmentation from magnetic resonance images. In: International MICCAI Brainlesion Workshop, Springer, pp 485–496 * Krizhevsky et al. (2012) Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105 * Kwon et al. (2014) Kwon D, Akbari H, Da X, Gaonkar B, Davatzikos C (2014) Multimodal brain tumor image segmentation using glistr. 
MICCAI brain tumor segmentation (BraTS) challenge manuscripts pp 18–19 * Lachinov et al. (2018) Lachinov D, Vasiliev E, Turlapov V (2018) Glioma segmentation with cascaded unet. In: International MICCAI Brainlesion Workshop, Springer, pp 189–198 * Le Folgoc et al. (2016) Le Folgoc L, Nori AV, Ancha S, Criminisi A (2016) Lifted auto-context forests for brain tumour segmentation. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 171–183 * LeCun et al. (1998) LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278–2324 * Lee et al. (2019) Lee J, Kim E, Lee S, Lee J, Yoon S (2019) Ficklenet: Weakly and semi-supervised semantic image segmentation using stochastic inference. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5267–5276 * Lefkovits et al. (2016) Lefkovits L, Lefkovits S, Szilágyi L (2016) Brain tumor segmentation with optimized random forest. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 88–99 * Li and Shen (2017) Li Y, Shen L (2017) Deep learning based multimodal brain tumor diagnosis. In: International MICCAI Brainlesion Workshop, Springer, pp 149–158 * Lin et al. (2017) Lin TY, Goyal P, Girshick R, He K, Dollár P (2017) Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision, pp 2980–2988 * Litjens et al. (2017) Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, Van Der Laak JA, Van Ginneken B, Sánchez CI (2017) A survey on deep learning in medical image analysis. Medical image analysis 42:60–88 * Long et al. (2015) Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3431–3440 * Lopez and Ventura (2017) Lopez MM, Ventura J (2017) Dilated convolutions for brain tumor segmentation in mri scans. In: International MICCAI Brainlesion Workshop, Springer, pp 253–262 * Ma and Yang (2018) Ma J, Yang X (2018) Automatic brain tumor segmentation by exploring the multi-modality complementary information and cascaded 3d lightweight cnns. In: International MICCAI Brainlesion Workshop, Springer, pp 25–36 * Maier et al. (2015) Maier O, Wilms M, Handels H (2015) Image features for brain lesion segmentation using random forests. In: BrainLes 2015, Springer, pp 119–130 * Marcinkiewicz et al. (2018) Marcinkiewicz M, Nalepa J, Lorenzo PR, Dudzik W, Mrukwa G (2018) Segmenting brain tumors from mri using cascaded multi-modal u-nets. In: International MICCAI Brainlesion Workshop, Springer, pp 13–24 * McKinley et al. (2016) McKinley R, Wepfer R, Gundersen T, Wagner F, Chan A, Wiest R, Reyes M (2016) Nabla-net: A deep dag-like convolutional architecture for biomedical image segmentation. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 119–128 * McKinley et al. (2017) McKinley R, Jungo A, Wiest R, Reyes M (2017) Pooling-free fully convolutional networks with dense skip connections for semantic segmentation, with application to brain tumor segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 169–177 * McKinley et al. (2018) McKinley R, Meier R, Wiest R (2018) Ensembles of densely-connected cnns with label-uncertainty for brain tumor segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 456–465 * Healthline Media (2004, accessed April 10, 2020) CT Scan vs. MRI. URL https://www.healthline.com/health/ct-scan-vs-mri/ * Mehta and Arbel (2018) Mehta R, Arbel T (2018) 3d u-net for brain tumour segmentation.
In: International MICCAI Brainlesion Workshop, Springer, pp 254–266 * Meier et al. (2013) Meier R, Bauer S, Slotboom J, Wiest R, Reyes M (2013) A hybrid model for multimodal brain tumor segmentation. Multimodal Brain Tumor Segmentation 31:31–37 * Meier et al. (2014) Meier R, Bauer S, Slotboom J, Wiest R, Reyes M (2014) Appearance-and context-sensitive features for brain tumor segmentation. Proceedings of MICCAI BRATS Challenge pp 020–026 * Meier et al. (2015) Meier R, Karamitsou V, Habegger S, Wiest R, Reyes M (2015) Parameter learning for crf-based tissue segmentation of brain tumors. In: BrainLes 2015, Springer, pp 156–167 * Meier et al. (2016) Meier R, Knecht U, Wiest R, Reyes M (2016) Crf-based brain tumor segmentation: alleviating the shrinking bias. In: International workshop on brainlesion: glioma, multiple sclerosis, stroke and traumatic brain injuries, Springer, pp 100–107 * Menze et al. (2012) Menze BH, Geremia E, Ayache N, Szekely G (2012) Segmenting glioma in multi-modal images using a generative-discriminative model for brain lesion segmentation. Proc MICCAI-BRATS (Multimodal Brain Tumor Segmentation Challenge) 8 * Menze et al. (2014) Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Burren Y, Porz N, Slotboom J, Wiest R, et al. (2014) The multimodal brain tumor image segmentation benchmark (brats). IEEE transactions on medical imaging 34(10):1993–2024 * Myronenko (2018) Myronenko A (2018) 3d mri brain tumor segmentation using autoencoder regularization. In: International MICCAI Brainlesion Workshop, Springer, pp 311–320 * Nuechterlein and Mehta (2019) Nuechterlein N, Mehta S (2019) 3d-espnet with pyramidal refinement for volumetric brain tumor image segmentation. brainles 2018. Springer LNCS 11384:245–253 * Nvidia (2020, accessed April 28, 2020) Nvidia. URL https://www.nvidia.com/en-in/ * Nyúl and Udupa (1999) Nyúl LG, Udupa JK (1999) On standardizing the mr image intensity scale.
Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine 42(6):1072–1081 * Pan and Yang (2009) Pan SJ, Yang Q (2009) A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22(10):1345–1359 * Paszke et al. (2019) Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, et al. (2019) Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems, pp 8024–8035 * Pawar et al. (2017) Pawar K, Chen Z, Shah NJ, Egan G (2017) Residual encoder and convolutional decoder neural network for glioma segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 263–273 * Pereira et al. (2015) Pereira S, Pinto A, Alves V, Silva CA (2015) Deep convolutional neural networks for the segmentation of gliomas in multi-sequence mri. In: BrainLes 2015, Springer, pp 131–143 * Phophalia and Maji (2017) Phophalia A, Maji P (2017) Multimodal brain tumor segmentation using ensemble of forest method. In: International MICCAI Brainlesion Workshop, Springer, pp 159–168 * Piedra et al. (2016) Piedra EAR, Ellingson BM, Taira RK, El-Saden S, Bui AA, Hsu W (2016) Brain tumor segmentation by variability characterization of tumor boundaries. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 206–216 * Pourreza et al. (2017) Pourreza R, Zhuge Y, Ning H, Miller R (2017) Brain tumor segmentation in mri scans using deeply-supervised neural networks. In: International MICCAI Brainlesion Workshop, Springer, pp 320–331 * Prastawa et al. (2004) Prastawa M, Bullitt E, Ho S, Gerig G (2004) A brain tumor segmentation framework based on outlier detection. Medical image analysis 8(3):275–283 * Puch et al. 
(2019) Puch S, Sánchez I, Hernández A, Piella G, Prćkovska V (2019) Global planar convolutions for improved context aggregation in brain tumor segmentation. brainles 2018. Springer LNCS 11384:393–405 * Puybareau et al. (2018) Puybareau E, Tochon G, Chazalon J, Fabrizio J (2018) Segmentation of gliomas and prediction of patient overall survival: a simple and fast procedure. In: International MICCAI Brainlesion Workshop, Springer, pp 199–209 * American College of Radiology (1999, accessed April 10, 2020) Brain Tumor Treatment. URL https://www.radiologyinfo.org/ * Rajendran and Dhanasekaran (2012) Rajendran A, Dhanasekaran R (2012) Brain tumor segmentation on mri brain images with fuzzy clustering and gvf snake model. International Journal of Computers Communications & Control 7(3):530–539 * Randhawa et al. (2016) Randhawa RS, Modi A, Jain P, Warier P (2016) Improving boundary classification for brain tumor segmentation and longitudinal disease progression. In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 65–74 * Raval et al. (2021) Raval M, Rajput S, Roy M, Agravat R (2021) Glioblastoma multiforme patient survival prediction * Raviv et al. (2012) Raviv TR, Leemput KV, Menze BH (2012) Multi-modal brain tumor segmentation via latent atlases. Proceeding MICCAIBRATS 64 * Reza and Iftekharuddin (2013) Reza S, Iftekharuddin K (2013) Multi-class abnormal brain tissue segmentation using texture. Multimodal Brain Tumor Segmentation 38 * Reza and Iftekharuddin (2014) Reza S, Iftekharuddin K (2014) Improved brain tumor tissue segmentation using texture features. Proceedings MICCAI BraTS (brain tumor segmentation challenge) pp 27–30 * Ronneberger et al. (2015) Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation.
In: International Conference on Medical image computing and computer-assisted intervention, Springer, pp 234–241 * Saha et al. (2016) Saha R, Phophalia A, Mitra SK (2016) Brain tumor segmentation from multimodal mr images using rough sets. In: International Conference on Computer Vision, Graphics, and Image processing, Springer, pp 133–144 * Serrano-Rubio and Everson (2019) Serrano-Rubio J, Everson R (2019) Brain tumour segmentation method based on supervoxels and sparse dictionaries. brainles 2018. Springer LNCS 11384:210–221 * Shaikh et al. (2017) Shaikh M, Anand G, Acharya G, Amrutkar A, Alex V, Krishnamurthi G (2017) Brain tumor segmentation using dense fully convolutional neural network. In: International MICCAI Brainlesion Workshop, Springer, pp 309–319 * Shboul et al. (2017) Shboul ZA, Vidyaratne L, Alam M, Iftekharuddin KM (2017) Glioblastoma and survival prediction. In: International MICCAI Brainlesion Workshop, Springer, pp 358–368 * Shin (2012) Shin HC (2012) Hybrid clustering and logistic regression for multi-modal brain tumor segmentation. In: Proc. of Workshops and Challanges in Medical Image Computing and Computer-Assisted Intervention (MICCAI’12) * Shridhar et al. (2019) Shridhar K, Laumann F, Liwicki M (2019) A comprehensive guide to bayesian convolutional neural network with variational inference. arXiv preprint arXiv:190102731 * Soltaninejad et al. (2017) Soltaninejad M, Zhang L, Lambrou T, Yang G, Allinson N, Ye X (2017) Mri brain tumor segmentation and patient survival prediction using random forests and fully convolutional networks. In: International MICCAI Brainlesion Workshop, Springer, pp 204–215 * Song et al. (2016) Song B, Chou CR, Chen X, Huang A, Liu MC (2016) Anatomy-guided brain tumor segmentation and classification. 
In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, pp 162–170 * Stawiaski (2018) Stawiaski J (2018) A pretrained densenet encoder for brain tumor segmentation. In: International MICCAI Brainlesion Workshop, Springer, pp 105–115 * Subbanna and Arbel (2012) Subbanna N, Arbel T (2012) Probabilistic gabor and markov random fields segmentation of brain tumours in mri volumes. Proc MICCAI Brain Tumor Segmentation Challenge (BRATS) pp 28–31 * Sudre et al. (2017) Sudre CH, Li W, Vercauteren T, Ourselin S, Cardoso MJ (2017) Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In: Deep learning in medical image analysis and multimodal learning for clinical decision support, Springer, pp 240–248 * Sun et al. (2018) Sun L, Zhang S, Luo L (2018) Tumor segmentation and survival prediction in glioma with deep learning. In: International MICCAI Brainlesion Workshop, Springer, pp 83–93 * Szegedy et al. (2015) Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1–9 * Szegedy et al. (2016) Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818–2826 * Targ et al. (2016) Targ S, Almeida D, Lyman K (2016) Resnet in resnet: Generalizing residual architectures. arXiv preprint arXiv:160308029 * Taylor et al. (2013) Taylor T, John N, Buendia P, Ryan M (2013) Map-reduce enabled hidden markov models for high throughput multimodal brain tumor segmentation. Multimodal Brain Tumor Segmentation 43 * Tomas-Fernandez and Warfield (2012) Tomas-Fernandez X, Warfield S (2012) Automatic brain tumor segmentation based on a coupled global-local intensity bayesian model.
# Enhancing the spin-photon coupling with a micromagnet Xin-Lei Hei, Xing-Liang Dong, Jia-Qiang Chen, Cai-Peng Shen, Yi-Fan Qiao, Peng-Bo Li Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an 710049, China ###### Abstract Hybrid quantum systems involving solid-state spins and superconducting microwave cavities play a crucial role in quantum science and technology, but improving the spin-photon coupling at the single-quantum level remains challenging in such systems. Here, we propose a simple technique to strongly couple a single solid-state spin to the microwave photons in a superconducting coplanar waveguide (CPW) cavity via a magnetic microsphere. We show that strong coupling at the single-spin level can be realized via virtual magnonic excitations of a nearby micromagnet. The spin-photon coupling strength can be enhanced by typically four orders of magnitude compared with the case without the micromagnet. This work can find applications in quantum information processing with strongly coupled solid-state spin-photonic systems. 
## I introduction Hybrid quantum architectures based on degrees of freedom of completely different nature have attracted much attention in recent years Xiang _et al._ (2013); Li _et al._ (2015); Das _et al._ (2017); Schütz _et al._ (2020); Li _et al._ (2020a); Bin _et al._ (2020); Kubo _et al._ (2011); Faraon _et al._ (2012); Bennett _et al._ (2013); Li _et al._ (2016); Li and Nori (2018); Li _et al._ (2020b); Blais _et al._ (2020); Li _et al._ (2021); Arcizet _et al._ (2011); Li _et al._ (2020c); Twamley and Barrett (2010); Amsüss _et al._ (2011); Grigoryan _et al._ (2018); Julsgaard _et al._ (2013); Imamoğlu (2009); Kubo _et al._ (2010), owing to a variety of applications in quantum technologies such as quantum networks Reiserer and Rempe (2015), information processing Galindo and Martín-Delgado (2002); Blais _et al._ (2020) and sensing Degen _et al._ (2017). The interactions between solid-state spins and microwave photons play a central role in hybrid quantum systems Kubo _et al._ (2010); Twamley and Barrett (2010); Amsüss _et al._ (2011); Grigoryan _et al._ (2018); Julsgaard _et al._ (2013); Imamoğlu (2009), with the solid-state spins serving as quantum memories Fuchs _et al._ (2011) and the photons as quantum information carriers Zhong _et al._ (2020). They are also valuable for fundamental investigations in quantum mechanics, solid-state physics and quantum optics, providing platforms and tools for probing as-yet-unexplored quantum physics Mi _et al._ (2018); Awschalom _et al._ (2018). To construct hybrid solid-state platforms, color centers in diamond are often employed. 
For example, NV centers, color centers with long coherence times and stable triplet ground states Aharonovich _et al._ (2011); Lee _et al._ (2017); Marcus _et al._ (2013); Doherty _et al._ (2014); Barry _et al._ (2020); Bar-Gill _et al._ (2013); Abobeih _et al._ (2018), are frequently used for quantum storage Fuchs _et al._ (2011) and sensing Dolde _et al._ (2011); Kolkowitz _et al._ (2012). While the realization of strong coupling in free space is difficult due to the large mismatch between the spatial extension of free-space photons and typical spins, solid-state spins can couple strongly to CPW resonators Roy _et al._ (2017); Twamley and Barrett (2010); Amsüss _et al._ (2011), in analogy to cavity quantum electrodynamics with atoms Liu _et al._ (2014). However, current experiments in hybrid systems often involve spin ensembles rather than single spins Verdú _et al._ (2009); Kubo _et al._ (2010); Amsüss _et al._ (2011); Rabl _et al._ (2006), since the single spin-photon coupling strength is only around 10 Hz, far from the strong coupling regime Mi _et al._ (2018); Blais _et al._ (2020). Here, we propose a feasible scheme to realize strong spin-photon coupling at the single-quantum level via virtual excitations of magnons in a micromagnet. 
Magnons, the energy quanta of spin waves, play an essential role in quantum information processing and quantum sensing Zhang _et al._ (2015); Stancil and Prabhakar (2009); Zhang _et al._ (2016a); Haigh _et al._ (2016); Lachance-Quirion _et al._ (2019); Wolski _et al._ (2020), hosted in magnets of different geometries such as spheres Soykal and Flatté (2010); Tabuchi _et al._ (2014); Zhang _et al._ (2014); Tabuchi _et al._ (2015); Lambert _et al._ (2015, 2016); Bourhill _et al._ (2016); Kostylev _et al._ (2016); Wang _et al._ (2018); Gonzalez-Ballestero _et al._ (2020a); Gieseler _et al._ (2020), thin films Sandweg _et al._ (2011); Huebl _et al._ (2013); Zhang _et al._ (2016b); Li _et al._ (2019a); Hou and Liu (2019); Wang _et al._ (2020); Zhang _et al._ (2020) and cylinders Almulhem _et al._ (2020). Thanks to the small mode volume and high spin density of the Kittel mode in a spherical micromagnet, the field energy is concentrated, enabling strong spin-magnon interaction Huillery _et al._ (2020). The Kittel mode of a magnetic sphere is therefore frequently used in magnon-cavity systems to achieve strong coupling between magnons and microwave cavity photons Simons and Simons (2001); Wolf _et al._ (2001); Verdú _et al._ (2009); Wesenberg _et al._ (2009); Tabuchi _et al._ (2016); Morris _et al._ (2017); Maier-Flaig _et al._ (2017); Dany _et al._ (2019); Li _et al._ (2019b); Martínez-Pérez and Zueco (2019). The nanoscale micromagnet considered in this work, an yttrium iron garnet (YIG) sphere, is well matched to both the NV spins and the CPW cavities, and is crucial for the strong interactions among the three subsystems. We consider efficient coupling of magnonic excitations to a single NV spin and a CPW cavity. We show that, even when the Kittel mode is only virtually populated, the magnon-mediated interaction between the NV spin and the microwave cavity mode can be modified significantly. 
The induced spin-photon coupling strength can be enhanced by typically four orders of magnitude compared with that in the absence of the micromagnet. This regime may provide a powerful tool for applications of quantum information processing based on strong spin-photon interactions at the single-quantum level. ## II description of the system ### II.1 The setup As illustrated in Fig. 1(a), we consider a hybrid tripartite system where a spherical micromagnet with radius $R$ is magnetically coupled to an NV center and a CPW resonator simultaneously. Here, the magnetic microsphere is a YIG sphere. The NV center is placed above the microsphere, which sits close to the surface of the CPW resonator. Figure 1: (color online). (a) Schematic of the NV center (red circle with arrow) coupled to a CPW cavity (blue) via a YIG sphere (gray). (b) Frequency splitting of the NV center spin states and resonant frequency of the Kittel mode as functions of the longitudinal magnetic field. The magnetic microsphere supports spin waves, which exist in ferrimagnetic and antiferromagnetic materials in the low-energy limit. The spin wave in a magnetic microsphere can be quantized under the dipolar, isotropic, and magnetostatic approximations Gonzalez-Ballestero _et al._ (2020a), with magnons as the quanta. In our setup, only the Kittel mode of the YIG sphere is considered, in which all the spins in the magnetic sphere precess in phase and with the same amplitude Tabuchi _et al._ (2016). The Hamiltonian of the Kittel mode (with the annihilation operator $\hat{s}_{K}$) can be expressed as $\displaystyle\hat{H}_{K}=\hbar\omega_{K}\hat{s}_{K}^{\dagger}\hat{s}_{K},$ (1) where the frequency $\omega_{K}=|\gamma|B_{z}$ is controlled by the external magnetic field, with the gyromagnetic ratio $\gamma=-1.76\times 10^{11}$ T$^{-1}$s$^{-1}$. 
For the NV center, the spin-triplet $S=1$ ground states {$|0\rangle$, $|{\pm 1}\rangle$} are the eigenstates of the spin operator $\hat{S}_{z}$, with $\hat{S}_{z}|{i}\rangle=i|{i}\rangle$ and $i=0,\pm 1$. In the presence of an external magnetic field $\vec{B}_{z}$ oriented along the NV symmetry axis Maze _et al._ (2011); Marcus _et al._ (2013); Lee _et al._ (2017); Li _et al._ (2019c); Barry _et al._ (2020), the degeneracy of the states $|{\pm 1}\rangle$ is lifted via the Zeeman effect. Taking $|{0}\rangle$ as the energy reference, the Hamiltonian of the NV center can be expressed as $\hat{H}_{NV}=\hbar\sum_{i=\pm 1}\omega_{i}|{i}\rangle\langle{i}|,$ (2) with $\omega_{\pm 1}=D_{0}\pm|\gamma|B_{z}$ and the zero-field splitting $D_{0}=2\pi\times 2.87$ GHz. With a proper magnetic field, we can choose the states $|{0}\rangle$ and $|{-1}\rangle$ as a spin qubit, selected to be resonant with the magnon mode. The Hamiltonian of the NV center then simplifies to $\displaystyle\hat{H}_{NV}=\frac{\hbar}{2}\omega_{NV}\hat{\sigma}_{z},$ (3) where we define the frequency $\omega_{NV}\equiv\omega_{-1}$ and the spin operator $\hat{\sigma}_{z}\equiv|{-1}\rangle\langle{-1}|-|{0}\rangle\langle{0}|$. The state $|{+1}\rangle$ can be safely excluded since it is far off resonance from the Kittel mode. The CPW resonator is a one-dimensional resonator whose frequency $\omega_{C}$ is set by the length of the transmission line segment. The current oscillating in the line cavity creates a fluctuating electromagnetic field, which can be quantized and interacts with other devices Wesenberg _et al._ (2009); Bushev _et al._ (2011); Jenkins _et al._ (2013, 2014); Martínez-Pérez and Zueco (2019). The corresponding Hamiltonian of photons in the CPW resonator can be written as $\displaystyle\hat{H}_{C}=\hbar\omega_{C}\hat{a}^{\dagger}\hat{a},$ (4) where $\hat{a}$ is the annihilation operator of the photon mode. 
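The resonance condition between the qubit transition and the Kittel mode fixes the bias field: $\omega_{NV}=D_{0}-|\gamma|B_{z}$ must equal $\omega_{K}=|\gamma|B_{z}$, i.e. $B_{z}=D_{0}/(2|\gamma|)$. A quick numerical check (a sketch, not the authors' code):

```python
import math

# Constants from the text
gamma = 1.76e11            # |gamma|, gyromagnetic ratio in rad s^-1 T^-1
D0 = 2 * math.pi * 2.87e9  # NV zero-field splitting, rad/s

# Resonance |0> <-> |-1> with the Kittel mode: D0 - |gamma|Bz = |gamma|Bz
Bz = D0 / (2 * gamma)      # required bias field
omega_K = gamma * Bz       # common frequency of qubit and Kittel mode

print(f"B_z = {Bz * 1e3:.1f} mT")
print(f"omega_K/2pi = {omega_K / (2 * math.pi) / 1e9:.2f} GHz")
```

This gives $B_{z}\approx 51$ mT and $\omega_{K}/2\pi\approx 1.43$ GHz, consistent with the $\sim 1.4$ GHz frequency used in Sec. III.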
The photon frequency is designed to match the frequencies of the magnons and spins; that is, in this setup the three subsystems share the same frequency. ### II.2 Interactions between spins and magnons We now consider the quantization of the magnetic field induced by the magnetic sphere. This magnetic field exists both inside and outside the micromagnet. Here, we focus on the latter because the spin qubit is located outside the sphere. From classical electrodynamics, the magnetic field of a magnetic sphere with magnetization $\vec{M}$ can be expressed as $\vec{B}_{m}(\vec{r})=\mu_{0}R^{3}/(3r^{3})\{3(\vec{M}\cdot\vec{r})\vec{r}/r^{2}-\vec{M}\},$ (5) where $\mu_{0}=4\pi\times 10^{-7}$ T$\cdot$m$/$A is the vacuum permeability, $R$ is the radius of the YIG sphere, and $\vec{r}=(r,\theta,\phi)$ is the position vector relative to the centre of the magnetic sphere. After quantizing the spin wave, the corresponding magnetization operator $\hat{\vec{M}}$ of the Kittel mode is $\displaystyle\hat{\vec{M}}={M_{K}}\left(\vec{\tilde{m}}_{K}\hat{s}_{K}+\vec{\tilde{m}}_{K}^{*}\hat{s}_{K}^{\dagger}\right),$ (6) where $M_{K}=\sqrt{\hbar|\gamma|M_{s}/2V}$ is the zero-point magnetization, $M_{s}$ is the saturation magnetization, and $V$ is the volume of the sphere. Meanwhile, we define the Kittel mode function as $\vec{\tilde{m}}_{K}=\vec{e}_{x}+i\vec{e}_{y}$ with the unit coordinate vectors $\vec{e}_{x}$ and $\vec{e}_{y}$ (see more details in Appendix A). Remarkably, there is no $z$ component in the mode function. 
From the above discussion, we obtain the quantized magnetic field $\hat{\vec{B}}_{m}$ as $\displaystyle\hat{\vec{B}}_{m}(\vec{r})=$ $\displaystyle\frac{\mu_{0}R^{3}M_{K}}{3r^{3}}\Big\{\big[(3C_{\theta}^{2}-1)\hat{X}+3S_{\theta}C_{\theta}\hat{P}\big]\vec{e}_{x}$ (7) $\displaystyle+\big[3S_{\theta}C_{\theta}\hat{X}+(3S_{\theta}^{2}-1)\hat{P}\big]\vec{e}_{y}\Big\},$ where we define $C_{\theta}=\cos\theta$, $S_{\theta}=\sin\theta$, $\hat{X}=\hat{s}_{K}+\hat{s}_{K}^{\dagger}$ and $\hat{P}=i(\hat{s}_{K}-\hat{s}_{K}^{\dagger})$ for convenience. We then consider the interaction between the spin qubit and the Kittel mode in the micromagnet. When a magnetic dipole is placed in a magnetic field, it experiences a torque which tends to align it parallel to the field. The Hamiltonian associated with this interaction can be naturally written as $\hat{H}_{N-K}=-g_{e}\mu_{B}\hat{\vec{B}}_{m}\cdot\hat{\vec{S}}/\hbar$ (8) with the electronic spin Landé factor $g_{e}$, the Bohr magneton $\mu_{B}$ and the spin operator $\hat{\vec{S}}=(\hat{S}_{x},\hat{S}_{y},\hat{S}_{z})$. In this work, we define the corresponding components of the spin operator as $\hat{S}_{x}=\hbar(|{-1}\rangle\langle{0}|+|{0}\rangle\langle{-1}|)/2$, $\hat{S}_{y}=\hbar(|{-1}\rangle\langle{0}|-|{0}\rangle\langle{-1}|)/2i$. Assuming that the coupling strength is much smaller than the resonance frequency, the spin-magnon interaction under the rotating-wave approximation can be described by $\displaystyle\hat{H}_{N-K}=-\hbar g(\hat{s}_{K}\hat{\sigma}^{+}+H.c.),$ (9) where the spin operators are $\hat{\sigma}^{+}=|{-1}\rangle\langle{0}|$ and $\hat{\sigma}^{-}=|{0}\rangle\langle{-1}|$. The corresponding spin-magnon coupling strength is expressed as $\displaystyle g=\sqrt{\frac{|\gamma|M_{s}}{12\pi}}\frac{g_{e}\mu_{0}\mu_{B}R^{3/2}}{(R+d)^{3}},$ (10) where $d=r-R$ is the distance between the spin qubit and the surface of the magnetic sphere (see Fig. 1(a)). 
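Eq. (10) can be checked numerically. A minimal sketch, assuming a YIG saturation magnetization $M_{s}\approx 1.4\times 10^{5}$ A/m and reading the prefactor as $\sqrt{|\gamma|M_{s}/(12\pi\hbar)}$ so that $g$ comes out in angular-frequency units (mirroring the $\hbar$ that appears explicitly in Eq. (12)):

```python
import math

# Physical constants
hbar = 1.0546e-34              # J s
mu0 = 4 * math.pi * 1e-7       # vacuum permeability, T m/A
muB = 9.274e-24                # Bohr magneton, J/T
ge = 2.0                       # electronic Lande factor
gamma = 1.76e11                # |gamma|, rad s^-1 T^-1
Ms = 1.4e5                     # assumed YIG saturation magnetization, A/m

def g_spin_magnon(R, d):
    """Spin-magnon coupling (rad/s), Eq. (10) with an explicit 1/sqrt(hbar)."""
    return (math.sqrt(gamma * Ms / (12 * math.pi * hbar))
            * ge * mu0 * muB * R**1.5 / (R + d)**3)

g = g_spin_magnon(R=50e-9, d=10e-9)
print(f"g/2pi = {g / (2 * math.pi) / 1e6:.2f} MHz")
```

For $R=50$ nm and $d=10$ nm this gives $g/2\pi\approx 0.5$ MHz, the order of magnitude quoted in the text.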
The coupling strength scales as the square root of the micromagnet volume for $d>R$, while it decreases slowly for $R>d$. In Fig. 2(a) and Fig. 2(b), we show the spin-magnon coupling as a function of $R$ and $d$. Figure 2: (color online). (a)-(b) The coupling strength $g/2\pi$ between the individual NV center spin qubit and the Kittel mode in the YIG sphere versus the distance $d$ and the radius $R$. (c) The coupling strength $\lambda/2\pi$ between the CPW resonator photon and the Kittel mode in the YIG sphere versus the radius $R$. The coupling rate keeps decreasing as $d$ increases, while it grows with $R$ until reaching a maximum. From the numerical results we find that if we choose a YIG sphere with $R\sim 50$ nm and a distance $d\sim 10$ nm, the coupling strength is about $\sim 1$ MHz. ### II.3 Interactions between photons and magnons We now consider the quantized form of the magnetic field generated by the CPW resonator. For a single mode, the magnetic field operator is $\hat{\vec{B}}_{c}=\vec{b}_{tr}(x,y)(\hat{a}+\hat{a}^{\dagger})/\sqrt{2}.$ (11) Here, the mode function $\vec{b}_{tr}(x,y)$ varies strongly in space depending on the CPW resonator geometry. To obtain a stronger interaction, the magnetic sphere is placed in close proximity to the surface of the CPW resonator. The mode function then depends on the radius of the sphere ($\vec{b}_{tr}(R)$). From the discussion in Sec. II.1, the radius is about tens of nanometers, which is much less than the width of the CPW resonator ($\sim 1$ $\mu$m). Therefore, the variation of the mode function over the sphere can be neglected, and $\vec{b}_{tr}(R)$ is treated as an approximately constant value of $35$-$40$ $\mu$G. We now describe the magnetic interaction between the CPW resonator and the micromagnet. 
The classical interaction energy of a magnetic dipole with magnetic moment $\vec{m}$ in a uniform magnetic field $\vec{B}_{c}$ is $U=-\vec{m}\cdot\vec{B}_{c}$. A uniformly magnetized sphere can be treated as a magnetic dipole with $\vec{m}=V\vec{M}$. After replacing the classical quantities with the magnetization operator $\hat{\vec{M}}$ and the magnetic field operator $\hat{\vec{B}}_{c}$ in the expression above, the photon-magnon interaction Hamiltonian is $\hat{H}_{C-K}=-\hbar\lambda(\hat{s}_{K}+\hat{s}_{K}^{\dagger})(\hat{a}+\hat{a}^{\dagger}).$ Here, the corresponding coupling strength is defined as $\displaystyle\lambda=\sqrt{\frac{\pi|\gamma|M_{s}}{3\hbar}}b_{x}(R)R^{3/2}.$ (12) We ignore the $y$ component of $\vec{b}_{tr}(R)$, since the magnetic field is parallel to the $x$ axis. The photon-magnon coupling strength $\lambda$ can then be enhanced to around $1$ MHz with a proper geometrical size of the magnetic microsphere (see Fig. 2(c)). Under the rotating-wave approximation, we obtain the photon-magnon interaction Hamiltonian $\displaystyle\hat{H}_{C-K}=-\hbar\lambda(\hat{s}_{K}^{\dagger}\hat{a}+H.c.).$ (13) ## III realistic consideration and experimental parameters From the above discussion, we obtain the total Hamiltonian of the hybrid spin-magnon-photon system, which can be expressed as $\displaystyle\hat{H}=$ $\displaystyle\hbar\omega_{C}\hat{a}^{\dagger}\hat{a}+\hbar\omega_{K}\hat{s}_{K}^{\dagger}\hat{s}_{K}+\frac{1}{2}\hbar\omega_{NV}\hat{\sigma}_{z}$ (14) $\displaystyle-\hbar g(\hat{s}_{K}\hat{\sigma}^{+}+H.c.)-\hbar\lambda(\hat{s}_{K}^{\dagger}\hat{a}+H.c.),$ where the first three terms correspond to the free Hamiltonians from Eq. (1), Eq. (3) and Eq. (4), and the last two terms describe two interactions: one between the Kittel mode and the NV center spin from Eq. (9), and the other between the Kittel mode and the CPW resonator photon from Eq. (13). 
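The photon-magnon coupling of Eq. (12) can likewise be estimated; a sketch assuming $b_{x}(R)\approx 37.5$ $\mu$G (the middle of the quoted 35-40 $\mu$G range) and the same YIG saturation magnetization $M_{s}\approx 1.4\times 10^{5}$ A/m:

```python
import math

hbar = 1.0546e-34   # J s
gamma = 1.76e11     # |gamma|, rad s^-1 T^-1
Ms = 1.4e5          # assumed YIG saturation magnetization, A/m
bx = 37.5e-10       # assumed mode-field amplitude b_x(R): 37.5 uG in tesla

def lam_photon_magnon(R):
    """Photon-magnon coupling lambda (rad/s), Eq. (12)."""
    return math.sqrt(math.pi * gamma * Ms / (3 * hbar)) * bx * R**1.5

lams = {R: lam_photon_magnon(R) for R in (50e-9, 100e-9)}
for R, lam in lams.items():
    print(f"R = {R * 1e9:.0f} nm: lambda/2pi = {lam / (2 * math.pi) / 1e6:.2f} MHz")
```

This yields roughly $0.1$-$0.3$ MHz for $R=50$-$100$ nm, rising to the MHz level for larger spheres, in line with the figure-2(c) discussion.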
As the Kittel mode is coupled both to the spin qubit and to the CPW resonator photon, it can serve as a quantum interface between them. We now discuss the dynamics of the system in a realistic situation. In this case, we take into account the dephasing of the NV center spin ($\gamma_{s}$) and the decays of the Kittel mode ($\gamma_{m}$) and the CPW resonator ($\kappa$). As a result, the master equation of the total system can be expressed as ${\dot{\hat{\rho}}(t)}=-i/\hbar[{\hat{H},\hat{\rho}}]+\mathcal{L}[\hat{\rho}],$ (15) where $\hat{\rho}$ is the density operator. The last term in the above equation is given by $\displaystyle\mathcal{L}[\hat{\rho}]$ $\displaystyle=$ $\displaystyle{\gamma_{s}}\mathcal{D}[{\hat{\sigma}_{z}}]\hat{\rho}+\sum_{j}\{(\bar{n}_{j}+1){\Gamma_{j}}\mathcal{D}[{{{\hat{o}}_{j}}}]\hat{\rho}$ (16) $\displaystyle+\bar{n}_{j}{\Gamma_{j}}\mathcal{D}[\hat{o}_{j}^{\dagger}]\hat{\rho}\}.$ For compactness we define $\{\Gamma_{m},\Gamma_{p}\}\equiv\{\gamma_{m},\kappa\}$, $\{\hat{o}_{m},\hat{o}_{p}\}\equiv\{\hat{s}_{K},\hat{a}\}$ and $\mathcal{D}[\hat{o}]\hat{\rho}\equiv\hat{o}\hat{\rho}\hat{o}^{\dagger}-\{\hat{o}^{\dagger}\hat{o},\hat{\rho}\}/2$. Here, we have introduced the thermal occupation number $\bar{n}_{j}=(e^{\hbar\omega_{j}/kT}-1)^{-1}$ with frequencies $\{\omega_{m},\omega_{p}\}=\{\omega_{K},\omega_{C}\}$ and environmental temperature $T$ Li _et al._ (2015); Gonzalez-Ballestero _et al._ (2020b). In the low-temperature limit ($T\sim 10$ mK), the thermal occupation number $\bar{n}_{j}$ can be safely neglected for frequencies $\omega_{j}/2\pi\sim 1.4$ GHz. The dissipation of the Kittel mode and the photon mode can therefore be simplified. 
The master equation then becomes ${\dot{\hat{\rho}}(t)}=-i/\hbar[{\hat{H},\hat{\rho}}]+{\gamma_{s}}\mathcal{D}[{\hat{\sigma}_{z}}]\hat{\rho}+\sum_{j}{\Gamma_{j}}\mathcal{D}[{{{\hat{o}}_{j}}}]\hat{\rho}.$ (17) In the frame rotating at the spin qubit frequency $\omega_{NV}$, the whole Hamiltonian is first transformed to $\displaystyle\hat{H}$ $\displaystyle=$ $\displaystyle\hbar\Delta_{1}\hat{s}_{K}^{\dagger}\hat{s}_{K}+\hbar\Delta_{2}\hat{a}^{\dagger}\hat{a}-\hbar g\hat{s}_{K}\hat{\sigma}^{+}$ $\displaystyle-\hbar\lambda\hat{s}_{K}^{\dagger}\hat{a}+H.c.,$ where $\Delta_{1}=\omega_{K}-\omega_{NV}$ and $\Delta_{2}=\omega_{C}-\omega_{NV}$ Liu _et al._ (2014). In view of the large detuning ($\Delta_{1}\gg g,$ $\lambda$), the Kittel mode can be adiabatically eliminated Li _et al._ (2012), and we obtain the effective interaction between the NV center spin qubit and the CPW resonator photon with the following Hamiltonian $\displaystyle\hat{H}_{\mathrm{eff}}=$ $\displaystyle\hbar(\Delta_{2}-\beta^{2}\Delta_{1})\hat{a}^{\dagger}\hat{a}-\frac{1}{2}\hbar\alpha^{2}\Delta_{1}\hat{\sigma}_{z}$ (19) $\displaystyle-\hbar g_{\mathrm{eff}}(\hat{a}^{\dagger}\hat{\sigma}^{-}+H.c.),$ where we define the dimensionless parameters $\alpha=g/\Delta_{1}$ and $\beta=\lambda/\Delta_{1}$, as well as the effective coupling strength $g_{\mathrm{eff}}=g\lambda/\Delta_{1}$. This result shows that an interface between the spin and the cavity photon is achieved by using the Kittel mode in the micromagnet as a mediator. We next discuss the effective coupled system. 
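The adiabatic elimination leading to Eq. (19) can be sketched explicitly (a schematic derivation using only quantities defined above):

```latex
% For \Delta_1 \gg g, \lambda, the Kittel mode follows the slow variables:
% setting \dot{\hat{s}}_K = -i\Delta_1\hat{s}_K + ig\hat{\sigma}^- + i\lambda\hat{a} \approx 0
% gives
\hat{s}_K \approx \frac{g}{\Delta_1}\,\hat{\sigma}^- + \frac{\lambda}{\Delta_1}\,\hat{a}
          = \alpha\,\hat{\sigma}^- + \beta\,\hat{a}.
% Substituting this back into the interaction terms yields the exchange term
% of Eq. (19),
-\hbar\,\frac{g\lambda}{\Delta_1}\left(\hat{a}^{\dagger}\hat{\sigma}^- + \mathrm{H.c.}\right)
  = -\hbar\,g_{\mathrm{eff}}\left(\hat{a}^{\dagger}\hat{\sigma}^- + \mathrm{H.c.}\right),
% together with the Stark-shift terms -\frac{1}{2}\hbar\alpha^2\Delta_1\hat{\sigma}_z
% and -\hbar\beta^2\Delta_1\hat{a}^{\dagger}\hat{a}.
```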
First, the dissipative interacting system can be described by a reduced effective master equation $\displaystyle\frac{{d{{\hat{\rho}}_{r}}(t)}}{{dt}}=$ $\displaystyle-\frac{i}{\hbar}[{{{\hat{H}}_{\mathrm{eff}}},{{\hat{\rho}}_{r}}}]+{\gamma_{s}}\mathcal{D}[{\hat{\sigma}_{z}}]{{\hat{\rho}}_{r}}$ (20) $\displaystyle+{\gamma_{\mathrm{eff}}}\mathcal{D}[{{{\hat{\sigma}}_{-}}}]{{\hat{\rho}}_{r}}+{\kappa_{\mathrm{eff}}}\mathcal{D}[{\hat{a}}]{{\hat{\rho}}_{r}},$ where ${\gamma_{\mathrm{eff}}}={\alpha^{2}}{\gamma_{m}}$ and ${\kappa_{\mathrm{eff}}}=\kappa+{\beta^{2}}{\gamma_{m}}$ are the effective decay rates, respectively. Here, the original dissipation rates are chosen as $\gamma_{s}/2\pi\sim 1$ kHz Li _et al._ (2019c), $\gamma_{m}/2\pi\sim 1$ MHz Gonzalez-Ballestero _et al._ (2020b) and $\kappa/2\pi\sim 6$ kHz Li _et al._ (2015). We take the dimensionless parameters $\alpha\sim\beta\sim 0.1$, satisfying the condition for eliminating the Kittel mode. We therefore estimate the effective decay rates as ${\gamma_{\mathrm{eff}}}/2\pi\sim 10$ kHz and ${\kappa_{\mathrm{eff}}}/2\pi\sim 16$ kHz. At the same time, the effective coupling strength can be approximately estimated as $g_{\mathrm{eff}}\sim 0.1g$, which is larger than the effective decay rates ${\gamma_{\mathrm{eff}}}$ and ${\kappa_{\mathrm{eff}}}$ as well as the dephasing rate of the NV center $\gamma_{s}$. Since the spin dephasing rate $\gamma_{s}$ is much smaller than the effective decay rates ${\gamma_{\mathrm{eff}}}$ and ${\kappa_{\mathrm{eff}}}$, its effect on the dynamics of the system can be neglected. We thus arrive at the central result of this work: the individual NV center spin qubit can strongly couple to the microwave photon in a CPW resonator under proper conditions. The spin-photon coupling is well within the strong coupling regime. Figure 3: (color online). 
The effective coupling strength $g_{\mathrm{eff}}/2\pi$ and the effective decay rates ${\gamma_{\mathrm{eff}}}/2\pi$ and ${\kappa_{\mathrm{eff}}}/2\pi$ as functions of the detuning $\Delta_{1}/2\pi$ for (a) $R=50$ nm and (b) $R=100$ nm, and as functions of the radius $R$ of the micromagnet sphere for (c) $\Delta_{1}/2\pi=10$ MHz and (d) $\Delta_{1}/2\pi=20$ MHz. The distance between the spin qubit and the surface of the magnetic microsphere is $d=30$ nm in these figures. More precisely, the effective coupling strength and decay rates depend on the detuning $\Delta_{1}$ and the geometrical radius $R$ of the magnetic sphere, as clearly shown in Fig. 3. The effective coupling rate $g_{\mathrm{eff}}/2\pi$ and decay rate ${\kappa_{\mathrm{eff}}}/2\pi$ increase with the detuning $\Delta_{1}/2\pi$, while they decrease with increasing $R$. The other effective decay rate ${\gamma_{\mathrm{eff}}/2\pi}$ decreases with increasing $\Delta_{1}/2\pi$ and $R$. There is always a large range in which the effective coupling strength exceeds both effective decay rates for various values of the detuning $\Delta_{1}/2\pi$ and the radius $R$. Thus, we can bring the effective spin-photon system into the strong coupling regime by tuning the relevant parameters. We proceed to analyze the effective spin-photon coupling from another point of view. In order to compare the effective coupling strength with the effective decay rates, we employ three figures of merit: $g_{\mathrm{eff}}/\gamma_{\mathrm{eff}}$, $g_{\mathrm{eff}}/\kappa_{\mathrm{eff}}$, and the cooperativity ${\cal C}=g_{\mathrm{eff}}^{2}/\gamma_{\mathrm{eff}}\kappa_{\mathrm{eff}}$. The parameter $g_{\mathrm{eff}}/\gamma_{\mathrm{eff}}$ increases with the detuning $\Delta_{1}$ and the radius $R$, as shown in Fig. 4(a). 
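The effective parameters quoted above can be reproduced directly; a minimal sketch using the stated rates and taking $g/2\pi\sim 1$ MHz as the order-of-magnitude spin-magnon coupling:

```python
import math

twopi = 2 * math.pi
# Rates quoted in the text (rad/s)
gamma_m = twopi * 1e6   # Kittel-mode decay rate
kappa   = twopi * 6e3   # bare CPW cavity decay rate
g       = twopi * 1e6   # spin-magnon coupling (order-of-magnitude value)
alpha = beta = 0.1      # g/Delta_1 and lambda/Delta_1

g_eff     = beta * g                  # g*lambda/Delta_1 = beta*g
gamma_eff = alpha**2 * gamma_m        # magnon-induced spin decay
kappa_eff = kappa + beta**2 * gamma_m # cavity decay plus magnon channel
C = g_eff**2 / (gamma_eff * kappa_eff)

print(f"g_eff/2pi     = {g_eff / twopi / 1e3:.0f} kHz")   # 100 kHz
print(f"gamma_eff/2pi = {gamma_eff / twopi / 1e3:.0f} kHz")  # 10 kHz
print(f"kappa_eff/2pi = {kappa_eff / twopi / 1e3:.0f} kHz")  # 16 kHz
print(f"cooperativity C = {C:.1f}")                          # 62.5
assert g_eff > gamma_eff and g_eff > kappa_eff  # strong-coupling hierarchy
```

The resulting hierarchy $g_{\mathrm{eff}}>\kappa_{\mathrm{eff}}>\gamma_{\mathrm{eff}}$ and a cooperativity well above 10 match the ranges displayed in Figs. 3 and 4.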
The results show that a larger magnetic microsphere allows for a stronger effective interaction between the spin qubit and the CPW resonator photon, which makes experimental realization more accessible. There is a range in which $g_{\mathrm{eff}}/\kappa_{\mathrm{eff}}>1$, as shown in Fig. 4(e). The cooperativity $\cal C$ increases with the detuning $\Delta_{1}$ and the radius $R$, and there is a very large range where the cooperativity even exceeds 10, as shown in Fig. 4(d). Figs. 4(b) and (c) respectively show that the three figures of merit exceed 1 for different detunings when the radius $R$ lies within certain limits. Furthermore, the results displayed in Fig. 3 and Fig. 4 are obtained when the NV spin qubit is close to the magnetic microsphere ($d=30$ nm). Figure 4: (color online). The contour maps show (a) $g_{\mathrm{eff}}/\gamma_{\mathrm{eff}}$, (d) the cooperativity and (e) $g_{\mathrm{eff}}/\kappa_{\mathrm{eff}}$ as functions of the detuning $\Delta_{1}/2\pi$ and the radius $R$. A more direct view of the relationship between the three figures of merit, the detuning and the radius is shown in (b) and (c). The distance between the spin qubit and the surface of the magnetic microsphere is again chosen as $d=30$ nm. The dotted horizontal lines in (b) and (c) mark the value 1. We now explain the significance of the magnetic microsphere in this spin-photon coupled system using numerical simulation. The direct interaction between an individual spin qubit and a single photon is very weak, because the magnetic field generated by a single microwave photon is small. Nevertheless, the magnon mode is sensitive to weak magnetic fields, and an individual spin qubit can strongly couple to the magnon mode at a small distance from the magnetic microsphere. Hence, the magnons play a pivotal role in the realization of strong spin-photon coupling. 
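This magnon-mediated dynamics can be illustrated by integrating the effective master equation (20) on a small Hilbert space. A minimal NumPy sketch (not the authors' QuTiP script; the cavity is truncated to a single photon, all detunings are set to zero, and rates are expressed in units of $g_{\mathrm{eff}}$):

```python
import numpy as np

# Effective parameters in units of g_eff (gamma_eff ~ 0.1, kappa_eff ~ 0.16)
g_eff, gamma_eff, kappa_eff = 1.0, 0.1, 0.16

sm = np.array([[0, 1], [0, 0]], complex)  # qubit lowering operator
a  = np.array([[0, 1], [0, 0]], complex)  # cavity mode, truncated to {0, 1}
I2 = np.eye(2, dtype=complex)
Sm, A = np.kron(sm, I2), np.kron(I2, a)   # embed in qubit (x) cavity space

H = -g_eff * (A.conj().T @ Sm + Sm.conj().T @ A)  # resonant exchange, Eq. (19)
Ls = [np.sqrt(gamma_eff) * Sm, np.sqrt(kappa_eff) * A]  # collapse operators

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad master equation (hbar = 1)."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        out += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho
                                             + rho @ L.conj().T @ L)
    return out

# Initial state: spin excited, cavity empty, |e,0><e,0|
rho = np.zeros((4, 4), complex)
rho[2, 2] = 1.0
dt, steps = 0.005, 2000
pe = []                                   # spin excitation vs time
for _ in range(steps):                    # 4th-order Runge-Kutta step
    k1 = lindblad_rhs(rho); k2 = lindblad_rhs(rho + dt / 2 * k1)
    k3 = lindblad_rhs(rho + dt / 2 * k2); k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    pe.append((Sm.conj().T @ Sm @ rho).trace().real)

print(f"min spin occupation over the first Rabi period: {min(pe[:700]):.3f}")
```

The excitation swings almost fully into the cavity and back (damped Rabi oscillations), qualitatively reproducing Fig. 5(a); replacing $g_{\mathrm{eff}}$ by a coupling far below the decay rates instead gives a monotonic decay, as in Fig. 5(b).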
The effective interaction can be enhanced proportionally with the volume of the magnetic microsphere within a certain range. We numerically simulate the time evolution of the occupations of the spin qubit, the Kittel magnon and the CPW resonator using the QuTiP package in Python Johansson _et al._ (2013), as shown in Fig. 5. Figure 5: (color online). (a) Occupations of the three subsystems as functions of the dimensionless time $gt/\pi$ with the parameters $\gamma_{m}\sim g$, $\gamma_{s}\sim 0.1g$, $\kappa\sim 0.5g$, $g\sim\lambda$ and $\Delta_{1}\sim 10g$. Panel (b) shows the direct interaction between the spin qubit and the CPW resonator photon without the magnon mode as an intermediary. The damped oscillations of the spin and photon occupations in Fig. 5(a) demonstrate the strong coupling between them in the presence of magnons, while in the absence of magnons, Fig. 5(b) shows an exponential decay without oscillations. This result confirms the validity of the effective master equation (20). Note that the population of the virtually excited Kittel mode remains nearly zero throughout the whole process. These results clearly show the significant role played by the magnons in realizing the strong coupling between the spin and the photon. ## IV Conclusion In conclusion, we have proposed a scheme for strongly coupling a single solid-state spin, such as an NV center, to the microwave photons in a CPW cavity via a magnetic microsphere. We have shown that strong coupling at the single-quantum level can be realized through virtual magnonic excitations of a nearby micromagnet. Compared with the case without the micromagnet, the spin-photon coupling strength is enhanced by typically four orders of magnitude. The employment of magnons here also opens up intriguing perspectives for magnonics and spintronics. 
This regime may facilitate much more powerful applications in quantum information processing based on strongly coupled solid-state spin-photonic systems. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grants No. 92065105 and No. 11774285, and the Natural Science Basic Research Program of Shaanxi (Program No. 2020JC-02). ## Appendix A Quantization of the spin wave In this Appendix, we present in full the quantization of the spin wave and obtain the theoretical model of the magnon in a ferromagnetic microsphere. First, in Sec. A.1, we introduce the original equation of motion for the magnetization, the general Landau-Lifshitz equation, which is simplified under several physical approximations. Then, we present the Walker modes in a ferrite sphere in Sec. A.2. Based on the magnetostatic energy density expression, we obtain the intrinsic Hamiltonian and the quantization of the magnon modes in Sec. A.3. ### A.1 Spin wave equations in a magnetic sphere First, we consider spin waves described by a continuous magnetization field $\vec{M}(\vec{r},t)$ and the corresponding electromagnetic fields $\vec{E}(\vec{r},t)$ and $\vec{H}(\vec{r},t)$. The dynamics of spin waves generally follows Maxwell's equations together with the constitutive relationship between the induced magnetization and the applied field. 
We start from the phenomenological Landau-Lifshitz equation of motion for the magnetization Stancil and Prabhakar (2009); Aharoni (2000) $\frac{d}{dt}\vec{M}(\vec{r},t)=-|\gamma|\mu_{0}\vec{M}(\vec{r},t)\times\vec{H}_{\mathrm{eff}}(\vec{M},\vec{r},t),$ (21) where the effective field $\vec{H}_{\mathrm{eff}}(\vec{M},\vec{r},t)$ comprises the Maxwellian field $\vec{H}(\vec{r},t)$ and the extra parts $\vec{H}^{\prime}(\vec{M},\vec{r},t)$ as follows Stancil and Prabhakar (2009) $\vec{H}^{\prime}(\vec{M},\vec{r},t)=\vec{H}_{\mathrm{ex}}(\vec{M},\vec{r},t)+\vec{H}_{\mathrm{an}}(\vec{M},\vec{r},t)+\vec{H}_{\mathrm{dm}}(\vec{M},\vec{r},t).$ (22) Here, $\vec{H}_{\mathrm{ex}}$, $\vec{H}_{\mathrm{an}}$ and $\vec{H}_{\mathrm{dm}}$ are the effective fields due to exchange, anisotropy and demagnetization induced by the magnetic dipole-dipole interactions, respectively. The extra part of the effective field evidently depends on the magnetization, which means that the Landau-Lifshitz equation is inhomogeneous. We then assume that the magnet is in the magnetically saturated state along the $z$ axis under the collinear field. The fluctuations of the magnetization and the field are then very small compared with the constant parts Gonzalez-Ballestero _et al._ (2020a). Naturally, they are written as $\displaystyle\vec{M}(\vec{r},t)$ $\displaystyle=M_{S}\vec{e}_{z}+\vec{m}(\vec{r},t),$ (23a) $\displaystyle\vec{H}(\vec{r},t)$ $\displaystyle=H_{0}\vec{e}_{z}+\vec{h}(\vec{r},t).$ (23b) Here, $\vec{m}\ll M_{S}$ and $\vec{h}\ll H_{0}$ are the dynamical variables to be solved for, and $\vec{e}_{z}$ is the unit vector along the $z$ axis. We then consider the extra terms in the effective field. The exchange field is associated with the domain-wall interaction, which is weaker than the dipole-dipole interaction when the micromagnet size is large compared with the domain-wall length. 
The magnetocrystalline anisotropy for a cubic material is also negligible Aharoni (2000); Stancil and Prabhakar (2009); Maier-Flaig _et al._ (2017). As for the demagnetizing field, we assume $\vec{H}_{\mathrm{dm}}=-(M_{S}/3)\vec{e}_{z}$, which is appropriate for a spherical magnet Aharoni (2000); Jackson (2007). The above expressions combined with Eq. (21) lead to a clearer form of the Landau-Lifshitz equation, $\displaystyle\frac{\dot{\vec{m}}(\vec{r},t)}{|\gamma|\mu_{0}M_{S}H_{0}}$ $\displaystyle-\vec{e}_{z}\times\left[\frac{\vec{m}(\vec{r},t)}{M_{S}}-\frac{\vec{h}(\vec{r},t)}{H_{0}}\right]+\left[\vec{e}_{z}+\frac{\vec{m}(\vec{r},t)}{M_{S}}\right]$ (24) $\displaystyle\times\frac{(H_{0}-M_{S}/3)\vec{e}_{z}}{H_{0}}=\frac{\vec{h}(\vec{r},t)}{H_{0}}\times\frac{\vec{m}(\vec{r},t)}{M_{S}}.$ Note that the variables $\vec{m}/M_{S}$ and $\vec{h}/H_{0}$ are small. Thus, we can safely neglect the second-order term in the above equation. The Landau-Lifshitz equation is then linearized as Stancil and Prabhakar (2009) $\left[\begin{array}{c}\dot{m}_{x}(\vec{r},t)\\ \dot{m}_{y}(\vec{r},t)\end{array}\right]=\left[\begin{array}{c}-\omega_{0}m_{y}(\vec{r},t)+\omega_{M}h_{y}(\vec{r},t)\\ \omega_{0}m_{x}(\vec{r},t)-\omega_{M}h_{x}(\vec{r},t)\end{array}\right],$ (25) where the two relevant system frequencies are $\displaystyle\omega_{\mathrm{M}}$ $\displaystyle=|\gamma|\mu_{0}M_{\mathrm{S}},$ (26a) $\displaystyle\omega_{0}$ $\displaystyle=|\gamma|\mu_{0}(H_{0}-M_{S}/3).$ (26b) We notice that the $z$ component vanishes because of the cross product with the vector $\vec{e}_{z}$. This is expected, since we consider only small fluctuations around the fully magnetized state along the $z$ axis. Finally, we apply the magnetostatic approximation $\nabla\times\vec{h}(\vec{r},t)\simeq 0$. Under this approximation, the electric field of the spin wave decouples from $\vec{h}$ in the Maxwell equations. 
In addition, the approximation makes it easy to introduce the magnetostatic potential through $\vec{h}(\vec{r},t)=-\nabla\psi(\vec{r},t)$. Combined with the zero-divergence condition $\nabla\cdot\vec{b}=0$ in the Maxwell equations, the relation $\vec{b}=\mu_{0}(\vec{h}+\vec{m})$ allows one to obtain the following equation for the scalar fields Stancil and Prabhakar (2009) $\nabla^{2}\psi(\vec{r},t)=\partial_{x}m_{x}(\vec{r},t)+\partial_{y}m_{y}(\vec{r},t),$ (27) which is the constraint inside the micromagnet, while the equation outside the micromagnet is $\nabla^{2}\psi=0$. Therefore, the linear scalar equations Eq. (25) and Eq. (27), together with the boundary conditions, i.e., the continuity of the appropriate components of $\vec{h}$ and $\vec{b}$ at the surface, completely describe the spin wave. The spin-wave eigenmodes solved from these equations are the magnetostatic dipolar spin waves, or Walker modes Aharoni (2000); Stancil and Prabhakar (2009). ### A.2 Walker modes and Kittel modes This section presents the calculation of the Walker modes. First, the magnetization and magnetic fields are expanded in terms of the eigenmodes as $\displaystyle\vec{m}(\vec{r},t)=\sum_{\beta}\left[s_{\beta}\vec{m}_{\beta}(\vec{r})e^{-i\omega_{\beta}t}+\mathrm{c.c.}\right],$ (28a) $\displaystyle\vec{h}(\vec{r},t)=\sum_{\beta}\left[s_{\beta}\vec{h}_{\beta}(\vec{r})e^{-i\omega_{\beta}t}+\mathrm{c.c.}\right].$ (28b) Here, the eigenmode fields $\vec{m}_{\beta}(\vec{r})$ and $\vec{h}_{\beta}(\vec{r})=-\nabla\psi_{\beta}(\vec{r})$ are characterized by a set of mode indices $\\{\beta\\}$, an eigenfrequency $\omega_{\beta}$ and a complex amplitude $s_{\beta}$ Fletcher and Bell (1959); Röschmann and Dötsch (1977). 
Then the linearized Landau-Lifshitz equations turn into the time-independent form $\displaystyle i\omega m_{x}(\vec{r})=\omega_{M}\partial_{y}\psi(\vec{r})+\omega_{0}m_{y}(\vec{r}),$ (29a) $\displaystyle i\omega m_{y}(\vec{r})=-\omega_{M}\partial_{x}\psi(\vec{r})-\omega_{0}m_{x}(\vec{r}).$ (29b) We can eliminate the scalar fields $m_{x}(\vec{r})$ and $m_{y}(\vec{r})$ using these equations and Eq. (27). The resulting equations contain only the magnetostatic potential: $\nabla^{2}\psi_{\text{out }}(\vec{r})=0,$ (30a) $\left(1+\chi_{p}\right)\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right)\psi_{\mathrm{in}}(\vec{r})+\frac{\partial^{2}}{\partial z^{2}}\psi_{\mathrm{in}}(\vec{r})=0,$ (30b) where $\psi_{\text{in }}$ and $\psi_{\text{out }}$ are the magnetostatic potentials inside and outside the micromagnet, respectively. Here, the diagonal element of the Polder susceptibility tensor is defined as Stancil and Prabhakar (2009) $\chi_{p}(\omega)\equiv\frac{\omega_{M}\omega_{0}}{\omega_{0}^{2}-\omega^{2}}.$ (31) Outside the sphere, the general solution in spherical coordinates is given by $\psi_{\text{out }}(\vec{r})=\sum_{lm}\left[\frac{A_{lm}}{r^{l+1}}+B_{lm}r^{l}\right]Y_{l}^{m}(\theta,\phi).$ (32) Here, the expansion coefficients $A_{lm}$, $B_{lm}$ are determined by the boundary conditions. The spherical harmonics $Y_{l}^{m}(\theta,\phi)$ reflect the symmetry of the spin-wave mode. The potential inside the sphere is harder to obtain. A set of nonorthogonal coordinates $\\{\xi,\eta,\phi\\}$ is introduced for convenience, defined by $\displaystyle x$ $\displaystyle=\sqrt{\chi_{p}}R\sqrt{\xi^{2}-1}\sin\eta\cos\phi,$ (33a) $\displaystyle y$ $\displaystyle=\sqrt{\chi_{p}}R\sqrt{\xi^{2}-1}\sin\eta\sin\phi,$ (33b) $\displaystyle z$ $\displaystyle=\sqrt{\frac{\chi_{p}}{1+\chi_{p}}}R\xi\cos\eta.$ (33c) In the above coordinates, the solution of Eq. 
(30b) becomes available, namely an expression involving Legendre polynomials and spherical harmonics Fletcher and Bell (1959); Röschmann and Dötsch (1977), $\psi_{\mathrm{in}}(\vec{r})=\sum_{lm}C_{lm}P_{l}^{m}(\xi)Y_{l}^{m}(\eta,\phi).$ (34) In each term of the summation, the coefficients $C_{lm}$ are likewise determined by the boundary conditions. We now determine all the coefficients. First, the term proportional to $B_{lm}$ diverges at infinity and must be removed to keep the potential $\psi$ regular; this gives the first condition, $B_{lm}=0$. On the surface of the sphere, the coordinates become $\xi\rightarrow\xi_{0}=\sqrt{\frac{1+\chi_{p}}{\chi_{p}}},\quad\\{\eta,\phi\\}\rightarrow\\{\theta,\phi\\}.$ (35) Applying these coordinates to the two solution expressions Eq. (32) and Eq. (34) and imposing the continuity of the potential across the surface, we obtain the second condition $A_{lm}=C_{lm}P_{l}^{m}\left(\xi_{0}\right)R^{l+1}.$ (36) Similarly, we can apply the continuity of the normal component of the $\vec{b}$ field to obtain the final condition Fletcher and Bell (1959) $\left.\frac{\partial\psi_{\text{out }}}{\partial r}\right|_{r=R}=\left.\frac{\xi_{0}}{R}\frac{\partial\psi_{\text{in }}}{\partial\xi}\right|_{r=R}-\left.i\frac{\kappa_{p}}{R}\frac{\partial\psi_{\text{in }}}{\partial\phi}\right|_{r=R}.$ (37) Here, $\kappa_{p}(\omega)=\omega_{M}\omega/(\omega_{0}^{2}-\omega^{2})$ is the off-diagonal element of the Polder susceptibility tensor Stancil and Prabhakar (2009). Applying the above formula and Eq. 
(36), the Walker-mode eigenfrequencies satisfy the equation Fletcher and Bell (1959); Röschmann and Dötsch (1977) $\xi_{0}(\omega)\frac{P_{l}^{\prime m}\left(\xi_{0}(\omega)\right)}{P_{l}^{m}\left(\xi_{0}(\omega)\right)}+m\kappa_{p}(\omega)+l+1=0.$ (38) Two results follow from this equation: (a) the eigenfrequency $\omega$ is independent of the radius $R$; (b) for some pairs $\\{l,m\\}$ ($l=0$, for instance) the solutions are not positive and thus unphysical. Generally, we can use the indices $\\{l,m\\}$ to label the allowed eigenmodes of the spin waves. The corresponding mode functions have been explicitly calculated in Ref. Fletcher and Bell (1959). ### A.3 The intrinsic Hamiltonian and quantization of the magnetostatic dipolar magnon modes We now show the quantization of the Walker modes starting from a phenomenological micromagnetic energy functional Gonzalez-Ballestero _et al._ (2020a) $E_{m}(\\{\vec{m}\\},\\{\vec{h}\\})=\frac{\mu_{0}}{2}\int dV\vec{m}(\vec{r},t)\cdot\left[\frac{H_{I}}{M_{S}}\vec{m}(\vec{r},t)-\vec{h}(\vec{r},t)\right].$ (39) For convenience, we apply the linearized Landau-Lifshitz equations (25) to the above expression, so that the energy depends only on $\vec{m}(\vec{r},t)$ in the form $\displaystyle E_{m}(\\{\vec{m}\\})=$ $\displaystyle\frac{1}{2|\gamma|M_{S}}\int dV\left(m_{x}(\vec{r},t)\frac{\partial m_{y}(\vec{r},t)}{\partial t}\right.$ (40) $\displaystyle\left.-m_{y}(\vec{r},t)\frac{\partial m_{x}(\vec{r},t)}{\partial t}\right).$ The key to quantization is to find the proper zero-point fluctuation and to bring the energy into the form of a harmonic oscillator. Thus, making use of the magnetization expanded in terms of the eigenmodes, Eq. 
(28a), we transform the above energy expression to Mills (2006); Walker (1957) $E_{m}=\frac{1}{2\hbar|\gamma|M_{S}}\sum_{\beta}\hbar\omega_{\beta}\Lambda_{\beta}\left[s_{\beta}s_{\beta}^{*}+s_{\beta}^{*}s_{\beta}\right],$ (41) where the coefficients have the form $\Lambda_{\beta}=2\mathrm{Im}\int dVm_{\beta y}(\vec{r})m_{\beta x}^{*}(\vec{r}).$ (42) By comparison with the Hamiltonian of the harmonic oscillator, we can choose the eigenmode normalization such that $\Lambda_{\beta}/(M_{S}|\gamma|\hbar)=1$, and the energy expression becomes $E_{m}=\hbar/2\sum_{\beta}\omega_{\beta}[s_{\beta}s_{\beta}^{*}+s_{\beta}^{*}s_{\beta}]$. After replacing the expansion coefficients with the bosonic magnon operators, i.e., $s_{\beta}\rightarrow\hat{s}_{\beta}$ and $s_{\beta}^{*}\rightarrow\hat{s}_{\beta}^{\dagger}$ with the commutation relation $[\hat{s}_{\beta},\hat{s}_{\beta}^{\dagger}]=1$, we obtain the quantized magnon-mode Hamiltonian $\hat{H}_{m}=\sum_{\beta}\hbar\omega_{\beta}[\hat{s}_{\beta}^{\dagger}\hat{s}_{\beta}+1/2]$, where the constant term is the analogue of the zero-point energy. 
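The radius independence of the Walker eigenfrequencies noted below Eq. (38) can be checked numerically. For the uniform $l=m=1$ (Kittel) mode, using $P_1^1(\xi)\propto\sqrt{\xi^2-1}$, Eq. (38) reduces to $\chi_p(\omega)+\kappa_p(\omega)+3=0$, whose root should be $\omega=\omega_0+\omega_M/3=|\gamma|\mu_0 H_0$. A minimal sketch with assumed YIG-like parameters, using a plain bisection so no external root-finder is needed:

```python
import numpy as np

gamma, mu0 = 1.76e11, 4e-7 * np.pi    # free-electron gyromagnetic ratio (assumed)
M_s, H_0 = 1.4e5, 3.0e5               # YIG-like parameters, A/m (assumed)
w_M = gamma * mu0 * M_s
w_0 = gamma * mu0 * (H_0 - M_s / 3)

def chi_p(w):    # Eq. (31), diagonal Polder element
    return w_M * w_0 / (w_0**2 - w**2)

def kappa_p(w):  # off-diagonal Polder element
    return w_M * w / (w_0**2 - w**2)

def f(w):        # l = m = 1 reduction of Eq. (38)
    return chi_p(w) + kappa_p(w) + 3.0

def bisect(g, a, b, n=200):
    """Simple bisection; assumes g changes sign on [a, b]."""
    ga = g(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if ga * g(m) <= 0:
            b = m
        else:
            a, ga = m, g(m)
    return 0.5 * (a + b)

# f is monotonic on the bracket (w_0, w_0 + w_M) and changes sign there.
w_11 = bisect(f, 1.001 * w_0, w_0 + 0.9 * w_M)
```

The root coincides with $\omega_0+\omega_M/3=|\gamma|\mu_0 H_0$ and contains no reference to $R$, in line with observation (a).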
For simplicity, after defining a zero-point magnetization $M_{0\beta}=\sqrt{\hbar|\gamma|M_{S}/\tilde{\Lambda}_{\beta}}$ and the normalization constant $\tilde{\Lambda}_{\beta}=2\mathrm{Im}\int dV\tilde{m}_{x}^{*}(\mathbf{r})\tilde{m}_{y}(\mathbf{r})$, the mode functions are replaced as Gonzalez-Ballestero _et al._ (2020a) $\left[\begin{array}[]{c}\vec{m}_{\beta}(\vec{r})\\\ \vec{h}_{\beta}(\vec{r})\end{array}\right]\rightarrow{M}_{0\beta}\left[\begin{array}[]{c}\tilde{\vec{m}}_{\beta}(\vec{r})\\\ \tilde{\vec{h}}_{\beta}(\vec{r})\end{array}\right].$ (43) The corresponding magnetization and magnetic field operators in the Schrödinger picture are written as $\displaystyle\hat{\vec{m}}(\vec{r})=\sum_{\beta}{M}_{0\beta}\left[\tilde{\vec{m}}_{\beta}(\vec{r})\hat{s}_{\beta}+\mathrm{H.c.}\right],$ (44a) $\displaystyle\hat{\vec{h}}(\vec{r})=\sum_{\beta}{M}_{0\beta}\left[\tilde{\vec{h}}_{\beta}(\vec{r})\hat{s}_{\beta}+\mathrm{H.c.}\right].$ (44b) For the Kittel mode in particular, the mode function is $\tilde{m}_{K}=\vec{e}_{x}+i\vec{e}_{y}$ with the unit vectors $\vec{e}_{x}$ and $\vec{e}_{y}$. The zero-point magnetization is $M_{K}=\sqrt{\hbar|\gamma|M_{S}/2V}$ with the saturation magnetization $M_{S}$ and the volume $V$ Gonzalez-Ballestero _et al._ (2020a). The corresponding magnetization operator is then $\hat{\vec{M}}={M_{K}}\left(\tilde{m}_{K}\hat{s}_{K}+\tilde{m}_{K}^{*}\hat{s}_{K}^{\dagger}\right).$ (45) ## Appendix B Quantization of the magnetic field generated by a magnetic sphere and the interaction Hamiltonian In this appendix, we present in detail the quantization of the magnetic field generated by a magnetic sphere. 
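The size scaling of the Kittel-mode zero-point magnetization $M_K=\sqrt{\hbar|\gamma|M_S/2V}$ is easy to make concrete; the sphere radii and material numbers below are assumed for illustration only:

```python
import numpy as np

hbar = 1.0546e-34             # J s
gamma = 1.76e11               # rad s^-1 T^-1 (assumed)
M_s = 1.4e5                   # A/m, YIG-like (assumed)

def M_K(R):
    """Zero-point magnetization of the Kittel mode for a sphere of radius R."""
    V = 4.0 / 3.0 * np.pi * R**3
    return np.sqrt(hbar * gamma * M_s / (2.0 * V))

M_K_big = M_K(0.5e-6)         # 0.5 um sphere
M_K_small = M_K(0.5e-7)       # 10x smaller sphere -> larger zero-point fluctuation
```

Since $M_K\propto V^{-1/2}\propto R^{-3/2}$, shrinking the magnet by a factor of 10 boosts the zero-point magnetization by $10^{3/2}\approx 31.6$, which is why micro- and nanomagnets are attractive for strong single-spin coupling.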
We start from the classical electromagnetism result that the field of a magnetic sphere with magnetization $\vec{M}$ can be described by $\vec{B}(\vec{r})=\frac{\mu_{0}}{3}\frac{R^{3}}{r^{3}}\left[\frac{3(\vec{M}\cdot\vec{r})\vec{r}}{r^{2}}-\vec{M}\right],$ (46) where $\vec{B}(\vec{r})$ is the magnetic field at $\vec{r}=r\cos\theta\vec{e}_{x}+r\sin\theta\vec{e}_{y}$. We then introduce the quantized magnetization operator from Eq. (45), i.e., $\hat{\vec{M}}=M_{K}\left[\left(\hat{s}_{K}+\hat{s}_{K}^{\dagger}\right)\vec{e}_{x}+i\left(\hat{s}_{K}-\hat{s}_{K}^{\dagger}\right)\vec{e}_{y}\right].$ (47) Applying the above expression to Eq. (46), we can obtain the operator expression of the magnetic field $\displaystyle\hat{\vec{B}}_{m}(\vec{r})=$ $\displaystyle\frac{\mu_{0}R^{3}M_{K}}{3r^{3}}\Big{\\{}\big{[}(3C_{\theta}^{2}-1)\hat{X}+3S_{\theta}C_{\theta}\hat{P}\big{]}\vec{e}_{x}$ (48) $\displaystyle+\big{[}3S_{\theta}C_{\theta}\hat{X}+(3S_{\theta}^{2}-1)\hat{P}\big{]}\vec{e}_{y}\Big{\\}},$ where we define $C_{\theta}=\cos\theta$, $S_{\theta}=\sin\theta$, $\hat{X}=\hat{s}_{K}+\hat{s}_{K}^{\dagger}$ and $\hat{P}=i(\hat{s}_{K}-\hat{s}_{K}^{\dagger})$ for convenience. 
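Combining the Kittel zero-point magnetization with the dipolar prefactor of Eq. (48) gives the scale of the zero-point magnetic field seen by a nearby spin; again, all numbers are assumed for illustration:

```python
import numpy as np

hbar, mu0 = 1.0546e-34, 4e-7 * np.pi
gamma = 1.76e11               # rad s^-1 T^-1 (assumed)
M_s = 1.4e5                   # A/m, YIG-like (assumed)
R = 0.5e-6                    # sphere radius, m (assumed)
d = 50e-9                     # sphere-surface-to-spin gap, m (assumed)

V = 4.0 / 3.0 * np.pi * R**3
M_K = np.sqrt(hbar * gamma * M_s / (2.0 * V))   # Kittel zero-point magnetization

def B_zp(r):
    """Prefactor of Eq. (48): zero-point field amplitude at distance r from the center."""
    return mu0 * R**3 * M_K / (3.0 * r**3)

B_near = B_zp(R + d)          # field at the spin position
B_far = B_zp(R + 2 * d)       # moving the spin away weakens the field as 1/r^3
```

The $1/r^3$ falloff makes the spin-magnet gap $d$ the key knob for the coupling strength in Eq. (50).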
Then we consider the interaction between the spin and the quantized magnetic field, described by $\hat{H}_{N-K}=(-g_{e}\mu_{B}/\hbar)\hat{\vec{B}}\cdot\hat{\vec{S}}$ with the spin operator $\hat{\vec{S}}=\hat{S}_{x}\vec{e}_{x}+\hat{S}_{y}\vec{e}_{y}+\hat{S}_{z}\vec{e}_{z}$, which leads to the Hamiltonian $\displaystyle\hat{H}_{N-K}=$ $\displaystyle-\hbar g\big{[}(3C_{\theta}^{2}-1)(\hat{s}_{K}+\hat{s}_{K}^{\dagger})(\hat{\sigma}^{+}+\hat{\sigma}^{-})$ (49) $\displaystyle+3iS_{\theta}C_{\theta}(\hat{s}_{K}-\hat{s}_{K}^{\dagger})(\hat{\sigma}^{+}+\hat{\sigma}^{-})$ $\displaystyle-3iS_{\theta}C_{\theta}(\hat{s}_{K}+\hat{s}_{K}^{\dagger})(\hat{\sigma}^{+}-\hat{\sigma}^{-})$ $\displaystyle+(3S_{\theta}^{2}-1)(\hat{s}_{K}-\hat{s}_{K}^{\dagger})(\hat{\sigma}^{+}-\hat{\sigma}^{-})\big{]},$ where the spin ladder operators are defined as $\hat{\sigma}^{\pm}=(\hat{S}_{x}\pm i\hat{S}_{y})/\hbar$. Under the rotating-wave approximation, the imaginary terms in the second and third lines of the above expression cancel each other, while the remaining terms simplify to the final Hamiltonian $\hat{H}_{N-K}\simeq-\hbar g(\hat{s}_{K}\hat{\sigma}^{+}+\mathrm{H.c.})$. Here, the coupling strength is defined as $\displaystyle g=\sqrt{\frac{|\gamma|M_{s}}{12\pi}}\frac{g_{e}\mu_{0}\mu_{B}R^{3/2}}{(R+d)^{3}},$ (50) where the distance between the spin qubit and the surface of the magnetic sphere is $d=r-R$ (see Fig. 1(a)). ## References * Xiang _et al._ (2013) Z.-L. Xiang, S. Ashhab, J. Q. You, and F. Nori, “Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems,” Rev. Mod. Phys. 85, 623–653 (2013). * Li _et al._ (2015) P.-B. Li, Y.-C. Liu, S.-Y. Gao, Z.-L. Xiang, P. Rabl, Y.-F. Xiao, and F.-L. Li, “Hybrid quantum device based on $nv$ centers in diamond nanomechanical resonators plus superconducting waveguide cavities,” Phys. Rev. Applied 4, 044003 (2015). * Das _et al._ (2017) S. Das, V. E. Elfving, S. Faez, and A. S. 
Sørensen, “Interfacing superconducting qubits and single optical photons using molecules in waveguides,” Phys. Rev. Lett. 118, 140501 (2017). * Schütz _et al._ (2020) S. Schütz, J. Schachenmayer, D. Hagenmüller, G. K. Brennen, T. Volz, V. Sandoghdar, T. W. Ebbesen, C. Genes, and G. Pupillo, “Ensemble-induced strong light-matter coupling of a single quantum emitter,” Phys. Rev. Lett. 124, 113602 (2020). * Li _et al._ (2020a) T. Li, Z. Wang, and K. Xia, “Multipartite quantum entanglement creation for distant stationary systems,” Opt. Express 28, 1316 (2020a). * Bin _et al._ (2020) Q. Bin, X.-Y. Lü, F. P. Laussy, F. Nori, and Y. Wu, “$n$-phonon bundle emission via the stokes process,” Phys. Rev. Lett. 124, 053601 (2020). * Kubo _et al._ (2011) Y. Kubo, C. Grezes, A. Dewes, T. Umeda, J. Isoya, H. Sumiya, N. Morishita, H. Abe, S. Onoda, T. Ohshima, V. Jacques, A. Dréau, J.-F. Roch, I. Diniz, A. Auffeves, D. Vion, D. Esteve, and P. Bertet, “Hybrid quantum circuit with a superconducting qubit coupled to a spin ensemble,” Phys. Rev. Lett. 107, 220501 (2011). * Faraon _et al._ (2012) A. Faraon, C. Santori, Z. Huang, V. M. Acosta, and R. G. Beausoleil, “Coupling of nitrogen-vacancy centers to photonic crystal cavities in monocrystalline diamond,” Phys. Rev. Lett. 109, 033604 (2012). * Bennett _et al._ (2013) S. D. Bennett, N. Y. Yao, J. Otterbach, P. Zoller, P. Rabl, and M. D. Lukin, “Phonon-induced spin-spin interactions in diamond nanostructures: Application to spin squeezing,” Phys. Rev. Lett. 110, 156402 (2013). * Li _et al._ (2016) P.-B. Li, Z.-L. Xiang, P. Rabl, and F. Nori, “Hybrid quantum device with nitrogen-vacancy centers in diamond coupled to carbon nanotubes,” Phys. Rev. Lett. 117, 015502 (2016). * Li and Nori (2018) P.-B. Li and F. Nori, “Hybrid quantum system with nitrogen-vacancy centers in diamond coupled to surface-phonon polaritons in piezomagnetic superlattices,” Phys. Rev. Applied 10, 024011 (2018). * Li _et al._ (2020b) X.-X. Li, B. Li, and P.-B. 
Li, “Simulation of topological phases with color center arrays in phononic crystals,” Phys. Rev. Research 2, 013121 (2020b). * Blais _et al._ (2020) A. Blais, S. M. Girvin, and W. D. Oliver, “Quantum information processing and quantum optics with circuit quantum electrodynamics,” Nat. Phys. 16, 247 (2020). * Li _et al._ (2021) X.-X. Li, P.-B. Li, H.-R. Li, H. Gao, and F.-L. Li, “Simulation of topological zak phase in spin-phononic crystal networks,” Phys. Rev. Research 3, 013025 (2021). * Arcizet _et al._ (2011) O. Arcizet, V. Jacques, A. Siria, P. Poncharal, P. Vincent, and S. Seidelin, “A single nitrogen-vacancy defect coupled to a nanomechanical oscillator,” Nat. Phys. 7, 879 (2011). * Li _et al._ (2020c) P.-B. Li, Y. Zhou, W.-B. Gao, and F. Nori, “Enhancing spin-phonon and spin-spin interactions using linear resources in a hybrid quantum system,” Phys. Rev. Lett. 125, 153602 (2020c). * Twamley and Barrett (2010) J. Twamley and S. D. Barrett, “Superconducting cavity bus for single nitrogen-vacancy defect centers in diamond,” Phys. Rev. B 81, 241202 (2010). * Amsüss _et al._ (2011) R. Amsüss, C. Koller, T. Nöbauer, S. Putz, S. Rotter, K. Sandner, S. Schneider, M. Schramböck, G. Steinhauser, H. Ritsch, J. Schmiedmayer, and J. Majer, “Cavity qed with magnetically coupled collective spin states,” Phys. Rev. Lett. 107, 060502 (2011). * Grigoryan _et al._ (2018) V. L. Grigoryan, K. Shen, and K. Xia, “Synchronized spin-photon coupling in a microwave cavity,” Phys. Rev. B 98, 024406 (2018). * Julsgaard _et al._ (2013) B. Julsgaard, C. Grezes, P. Bertet, and K. Mølmer, “Quantum memory for microwave photons in an inhomogeneously broadened spin ensemble,” Phys. Rev. Lett. 110, 250503 (2013). * Imamoğlu (2009) A. Imamoğlu, “Cavity qed based on collective magnetic dipole coupling: Spin ensembles as hybrid two-level systems,” Phys. Rev. Lett. 102, 083602 (2009). * Kubo _et al._ (2010) Y. Kubo, F. R. Ong, P. Bertet, D. Vion, V. Jacques, D. Zheng, A. Dréau, J.-F. Roch, A. 
Auffeves, F. Jelezko, J. Wrachtrup, M. F. Barthe, P. Bergonzo, and D. Esteve, “Strong coupling of a spin ensemble to a superconducting resonator,” Phys. Rev. Lett. 105, 140502 (2010). * Reiserer and Rempe (2015) A. Reiserer and G. Rempe, “Cavity-based quantum networks with single atoms and optical photons,” Rev. Mod. Phys. 87, 1379–1418 (2015). * Galindo and Martín-Delgado (2002) A. Galindo and M. A. Martín-Delgado, “Information and computation: Classical and quantum aspects,” Rev. Mod. Phys. 74, 347–423 (2002). * Degen _et al._ (2017) C. L. Degen, F. Reinhard, and P. Cappellaro, “Quantum sensing,” Rev. Mod. Phys. 89, 035002 (2017). * Fuchs _et al._ (2011) G. D. Fuchs, G. Burkard, P. V. Klimov, and D. D. Awschalom, “A quantum memory intrinsic to single nitrogen–vacancy centres in diamond,” Nat. Phys. 7, 789 (2011). * Zhong _et al._ (2020) H.-S. Zhong, H. Wang, Y.-H. Deng, M.-C. Chen, L.-C. Peng, Y.-H. Luo, J. Qin, D. Wu, X. Ding, Y. Hu, P. Hu, X.-Y. Yang, W.-J. Zhang, H. Li, Y. Li, X. Jiang, L. Gan, G. Yang, L. You, Z. Wang, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, “Quantum computational advantage using photons,” Science 370, 1460 (2020). * Mi _et al._ (2018) X. Mi, M. Benito, S. Putz, D. M. Zajac, J. M. Taylor, G. Burkard, and J. R. Petta, “A coherent spin–photon interface in silicon,” Nature (London) 555, 599 (2018). * Awschalom _et al._ (2018) D. D. Awschalom, R. Hanson, J. Wrachtrup, and B. B. Zhou, “Quantum technologies with optically interfaced solid-state spins,” Nat. Photonics 12, 516 (2018). * Aharonovich _et al._ (2011) I. Aharonovich, A. D. Greentree, and S. Prawer, “Diamond photonics,” Nat. Photonics 5, 397 (2011). * Lee _et al._ (2017) D. Lee, K. W. Lee, J. V. Cady, P. Ovartchaiyapong, and A. C. B. Jayich, “Topical review: spins and mechanics in diamond,” J. Opt. 19, 033001 (2017). * Marcus _et al._ (2013) M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. L. Hollenberg, “The nitrogen-vacancy colour centre in diamond,” Phys. Rep. 
528, 1 (2013). * Doherty _et al._ (2014) M. W. Doherty, V. V. Struzhkin, D. A. Simpson, L. P. McGuinness, Y. Meng, A. Stacey, T. J. Karle, R. J. Hemley, N. B. Manson, L. C. L. Hollenberg, and S. Prawer, “Electronic properties and metrology applications of the diamond ${\mathrm{nv}}^{-}$ center under pressure,” Phys. Rev. Lett. 112, 047601 (2014). * Barry _et al._ (2020) J. F. Barry, J. M. Schloss, E. Bauch, M. J. Turner, C. A. Hart, L. M. Pham, and R. L. Walsworth, “Sensitivity optimization for nv-diamond magnetometry,” Rev. Mod. Phys. 92, 015004 (2020). * Bar-Gill _et al._ (2013) N. Bar-Gill, L.M. Pham, A. Jarmola, D. Budker, and R.L. Walsworth, “Solid-state electronic spin coherence time approaching one second,” Nat. Commun. 4, 1743 (2013). * Abobeih _et al._ (2018) M. H. Abobeih, J. Cramer, M. A. Bakker, N. Kalb, M. Markham, D. J. Twitchen, and T. H. Taminiau, “One-second coherence for a single electron spin coupled to a multi-qubit nuclear-spin environment,” Nat. Commun. 9, 2552 (2018). * Dolde _et al._ (2011) F. Dolde, H. Fedder, M. W. Doherty, T. Nöbauer, F. Rempp, G. Balasubramanian, T. Wolf, F. Reinhard, L. C. L. Hollenberg, F. Jelezko, and J. Wrachtrup, “Electric-field sensing using single diamond spins,” Nat. Phys. 7, 459 (2011). * Kolkowitz _et al._ (2012) S. Kolkowitz, J. A. C. Bleszynski, Q. P. Unterreithmeier, S. D. Bennett, P. Rabl, J. G. E. Harris, and M. D. Lukin, “Coherent sensing of a mechanical resonator with a single-spin qubit,” Science 335, 1603 (2012). * Roy _et al._ (2017) D. Roy, C. M. Wilson, and O. Firstenberg, “Colloquium: Strongly interacting photons in one-dimensional continuum,” Rev. Mod. Phys. 89, 021001 (2017). * Liu _et al._ (2014) Y.-C. Liu, X. Luan, H.-K. Li, Q. Gong, C. W. Wong, and Y.-F. Xiao, “Coherent polariton dynamics in coupled highly dissipative cavities,” Phys. Rev. Lett. 112, 213602 (2014). * Verdú _et al._ (2009) J. Verdú, H. Zoubi, Ch. Koller, J. Majer, H. Ritsch, and J. 
Schmiedmayer, “Strong magnetic coupling of an ultracold gas to a superconducting waveguide cavity,” Phys. Rev. Lett. 103, 043603 (2009). * Rabl _et al._ (2006) P. Rabl, D. DeMille, J. M. Doyle, M. D. Lukin, R. J. Schoelkopf, and P. Zoller, “Hybrid quantum processors: Molecular ensembles as quantum memory for solid state circuits,” Phys. Rev. Lett. 97, 033003 (2006). * Zhang _et al._ (2015) D. Zhang, X.-M. Wang, T.-F. Li, X.-Q. Luo, W. Wu, F. Nori, and J. Q. You, “Cavity quantum electrodynamics with ferromagnetic magnons in a small yttrium-iron-garnet sphere,” npj Quantum Information 1, 15014 (2015). * Stancil and Prabhakar (2009) D. D. Stancil and A. Prabhakar, _Spin Waves: Theory and Applications_ (Springer, New York, 2009). * Zhang _et al._ (2016a) X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, “Cavity magnomechanics,” Sci. Adv. 2, e1501286 (2016a). * Haigh _et al._ (2016) J. A. Haigh, A. Nunnenkamp, A. J. Ramsay, and A. J. Ferguson, “Triple-resonant brillouin light scattering in magneto-optical cavities,” Phys. Rev. Lett. 117, 133602 (2016). * Lachance-Quirion _et al._ (2019) D. Lachance-Quirion, Y. Tabuchi, A. Gloppe, K. Usami, and Y. Nakamura, “Hybrid quantum systems based on magnonics,” Appl. Phys. Express 12, 070101 (2019). * Wolski _et al._ (2020) S. P. Wolski, D. Lachance-Quirion, Y. Tabuchi, S. Kono, A. Noguchi, K. Usami, and Y. Nakamura, “Dissipation-based quantum sensing of magnons with a superconducting qubit,” Phys. Rev. Lett. 125, 117701 (2020). * Soykal and Flatté (2010) Ö. O. Soykal and M. E. Flatté, “Strong field interactions between a nanomagnet and a photonic cavity,” Phys. Rev. Lett. 104, 077202 (2010). * Tabuchi _et al._ (2014) Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, “Hybridizing ferromagnetic magnons and microwave photons in the quantum limit,” Phys. Rev. Lett. 113, 083603 (2014). * Zhang _et al._ (2014) X. Zhang, C.-L. Zou, L. Jiang, and H. X. 
Tang, “Strongly coupled magnons and cavity microwave photons,” Phys. Rev. Lett. 113, 156401 (2014). * Tabuchi _et al._ (2015) Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, “Coherent coupling between a ferromagnetic magnon and a superconducting qubit,” Science 349, 405 (2015). * Lambert _et al._ (2015) N. J. Lambert, J. A. Haigh, and A. J. Ferguson, “Identification of spin wave modes in yttrium iron garnet strongly coupled to a co-axial cavity,” J. Appl. Phys. 117, 053910 (2015). * Lambert _et al._ (2016) N. J. Lambert, J. A. Haigh, S. Langenfeld, A. C. Doherty, and A. J. Ferguson, “Cavity-mediated coherent coupling of magnetic moments,” Phys. Rev. A 93, 021803 (2016). * Bourhill _et al._ (2016) J. Bourhill, N. Kostylev, M. Goryachev, D. L. Creedon, and M. E. Tobar, “Ultrahigh cooperativity interactions between magnons and resonant photons in a yig sphere,” Phys. Rev. B 93, 144420 (2016). * Kostylev _et al._ (2016) N. Kostylev, M. Goryachev, and M. E. Tobar, “Superstrong coupling of a microwave cavity to yttrium iron garnet magnons,” Appl. Phys. Lett. 108, 062402 (2016). * Wang _et al._ (2018) Y.-P. Wang, G.-Q. Zhang, D. Zhang, T.-F. Li, C.-M. Hu, and J. Q. You, “Bistability of cavity magnon polaritons,” Phys. Rev. Lett. 120, 057202 (2018). * Gonzalez-Ballestero _et al._ (2020a) C. Gonzalez-Ballestero, D. Hümmer, J. Gieseler, and O. Romero-Isart, “Theory of quantum acoustomagnonics and acoustomechanics with a micromagnet,” Phys. Rev. B 101, 125404 (2020a). * Gieseler _et al._ (2020) J. Gieseler, A. Kabcenell, E. Rosenfeld, J. D. Schaefer, A. Safira, M. J. A. Schuetz, C. Gonzalez-Ballestero, C. C. Rusconi, O. Romero-Isart, and M. D. Lukin, “Single-spin magnetomechanics with levitated micromagnets,” Phys. Rev. Lett. 124, 163604 (2020). * Sandweg _et al._ (2011) C. W. Sandweg, Y. Kajiwara, A. V. Chumak, A. A. Serga, V. I. Vasyuchka, M. B. Jungfleisch, E. Saitoh, and B. 
Hillebrands, “Spin pumping by parametrically excited exchange magnons,” Phys. Rev. Lett. 106, 216601 (2011). * Huebl _et al._ (2013) H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, “High cooperativity in coupled microwave resonator ferrimagnetic insulator hybrids,” Phys. Rev. Lett. 111, 127003 (2013). * Zhang _et al._ (2016b) X. Zhang, C. Zou, L. Jiang, and H. X. Tang, “Superstrong coupling of thin film magnetostatic waves with microwave cavity,” J. Appl. Phys. 119, 023905 (2016b). * Li _et al._ (2019a) Y. Li, T. Polakovic, Y.-L. Wang, J. Xu, S. Lendinez, Z. Zhang, J. Ding, T. Khaire, H. Saglam, R. Divan, J. Pearson, W.-K. Kwok, Z. Xiao, V. Novosad, A. Hoffmann, and W. Zhang, “Strong coupling between magnons and microwave photons in on-chip ferromagnet-superconductor thin-film devices,” Phys. Rev. Lett. 123, 107701 (2019a). * Hou and Liu (2019) J. T. Hou and L. Liu, “Strong coupling between microwave photons and nanomagnet magnons,” Phys. Rev. Lett. 123, 107702 (2019). * Wang _et al._ (2020) H. Wang, J. Chen, T. Liu, J. Zhang, K. Baumgaertl, C. Guo, Y. Li, C. Liu, P. Che, S. Tu, S. Liu, P. Gao, X. Han, D. Yu, M. Wu, D. Grundler, and H. Yu, “Chiral spin-wave velocities induced by all-garnet interfacial dzyaloshinskii-moriya interaction in ultrathin yttrium iron garnet films,” Phys. Rev. Lett. 124, 027203 (2020). * Zhang _et al._ (2020) X. Zhang, G. E. W. Bauer, and T. Yu, “Unidirectional pumping of phonons by magnetization dynamics,” Phys. Rev. Lett. 125, 077203 (2020). * Almulhem _et al._ (2020) N. Almulhem, M. Stebliy, J. Portal, A. Samardak, H. Beere, D. Ritchie, and A. Nogaret, “Photovoltage detection of spin excitation of a ferromagnetic stripe and disk at low temperature,” Jpn. J. Appl. Phys. 59, SEED02 (2020). * Huillery _et al._ (2020) P. Huillery, T. Delord, L. Nicolas, M. Van Den Bossche, M. Perdriat, and G. Hétet, “Spin mechanics with levitating ferromagnetic particles,” Phys. Rev. 
B 101, 134415 (2020). * Simons and Simons (2001) R. Simons and R. N. Simons, _Coplanar waveguide circuits, components, and systems_ , Vol. 15 (Wiley Online Library, 2001). * Wolf _et al._ (2001) S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnár, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, “Spintronics: A spin-based electronics vision for the future,” Science 294, 1488 (2001). * Wesenberg _et al._ (2009) J. H. Wesenberg, A. Ardavan, G. A. D. Briggs, J. J. L. Morton, R. J. Schoelkopf, D. I. Schuster, and K. Mølmer, “Quantum computing with an electron spin ensemble,” Phys. Rev. Lett. 103, 070502 (2009). * Tabuchi _et al._ (2016) Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, “Quantum magnonics: The magnon meets the superconducting qubit,” Comptes Rendus Physique 17, 729 (2016). * Morris _et al._ (2017) R. G. E. Morris, A. F. van Loo, S. Kosen, and A. D. Karenowska, “Strong coupling of magnons in a yig sphere to photons in a planar superconducting resonator in the quantum limit,” Sci. Rep. 7, 11511 (2017). * Maier-Flaig _et al._ (2017) H. Maier-Flaig, S. Klingler, C. Dubs, O. Surzhenko, R. Gross, M. Weiler, H. Huebl, and S. T. B. Goennenwein, “Temperature-dependent magnetic damping of yttrium iron garnet spheres,” Phys. Rev. B 95, 214423 (2017). * Dany _et al._ (2019) D. Lachance-Quirion, Y. Tabuchi, A. Gloppe, K. Usami, and Y. Nakamura, “Hybrid quantum systems based on magnonics,” Appl. Phys. Express 12, 070101 (2019). * Li _et al._ (2019b) J. Li, S.-Y. Zhu, and G. S. Agarwal, “Squeezed states of magnons and phonons in cavity magnomechanics,” Phys. Rev. A 99, 021801 (2019b). * Martínez-Pérez and Zueco (2019) M. J. Martínez-Pérez and D. Zueco, “Strong coupling of a single photon to a magnetic vortex,” ACS Photonics 6, 360 (2019). * Maze _et al._ (2011) J. R. Maze, A. Gali, E. Togan, Y. Chu, A. Trifonov, E. Kaxiras, and M. D. 
Lukin, “Properties of nitrogen-vacancy centers in diamond: the group theoretic approach,” New J. Phys. 13, 025025 (2011). * Li _et al._ (2019c) B. Li, P.-B. Li, Y. Zhou, J. Liu, H.-R. Li, and F.-L. Li, “Interfacing a topological qubit with a spin qubit in a hybrid quantum system,” Phys. Rev. Applied 11, 044026 (2019c). * Bushev _et al._ (2011) P. Bushev, A. K. Feofanov, H. Rotzinger, I. Protopopov, J. H. Cole, C. M. Wilson, G. Fischer, A. Lukashenko, and A. V. Ustinov, “Ultralow-power spectroscopy of a rare-earth spin ensemble using a superconducting resonator,” Phys. Rev. B 84, 060501 (2011). * Jenkins _et al._ (2013) M. Jenkins, T. Hümmer, M. J. Martínez-Pérez, J. García-Ripoll, D. Zueco, and F. Luis, “Coupling single-molecule magnets to quantum circuits,” New J. Phys. 15, 095007 (2013). * Jenkins _et al._ (2014) M. D. Jenkins, U. Naether, M. Ciria, J. Sesé, J. Atkinson, C. Sánchez-Azqueta, E. Barco, J. Majer, D. Zueco, and F. Luis, “Nanoscale constrictions in superconducting coplanar waveguide resonators,” Appl. Phys. Lett. 105, 162601 (2014). * Gonzalez-Ballestero _et al._ (2020b) C. Gonzalez-Ballestero, J. Gieseler, and O. Romero-Isart, “Quantum acoustomechanics with a micromagnet,” Phys. Rev. Lett. 124, 093602 (2020b). * Li _et al._ (2012) P.-B. Li, S.-Y. Gao, H.-R. Li, Sh.-L. Ma, and F.-L. Li, “Dissipative preparation of entangled states between two spatially separated nitrogen-vacancy centers,” Phys. Rev. A 85, 042306 (2012). * Johansson _et al._ (2013) J. R. Johansson, P. D. Nation, and F. Nori, “Qutip 2: A python framework for the dynamics of open quantum systems,” Comput. Phys. Commun. 184, 1234 (2013). * Aharoni (2000) A. Aharoni, _Introduction to the Theory of Ferromagnetism_ (Clarendon Press, 2000). * Jackson (2007) J. D. Jackson, _Classical electrodynamics_ (John Wiley & Sons, 2007). * Fletcher and Bell (1959) P. C. Fletcher and R. O. Bell, “Ferrimagnetic resonance modes in spheres,” J. Appl. Phys. 30, 687 (1959). * Röschmann and Dötsch (1977) P. 
Röschmann and H. Dötsch, “Properties of magnetostatic modes in ferrimagnetic spheroids,” Phys. Status Solidi B 82, 11 (1977). * Mills (2006) D.L. Mills, “Quantum theory of spin waves in finite samples,” J. Magn. Magn. Mater. 306, 16 (2006). * Walker (1957) L. R. Walker, “Magnetostatic modes in ferromagnetic resonance,” Phys. Rev. 105, 390–399 (1957).
# Supercurrent decay in ballistic magnetic Josephson junctions Hervé Ness Department of Physics, King’s College London, Strand Campus, London WC2R 2LS, UK Ivan A. Sadovskyy Microsoft Quantum, Microsoft Station Q, University of California, Santa Barbara, California 93106, USA Andrey E. Antipov Microsoft Quantum, Microsoft Station Q, University of California, Santa Barbara, California 93106, USA Mark van Schilfgaarde Department of Physics, King’s College London, Strand Campus, London WC2R 2LS, UK National Renewable Energy Laboratory, Golden, Colorado 80401, USA Roman M. Lutchyn Microsoft Quantum, Microsoft Station Q, University of California, Santa Barbara, California 93106, USA ###### Abstract We investigate transport properties of ballistic magnetic Josephson junctions and establish that suppression of the supercurrent is an intrinsic property of the junctions, even in the absence of disorder. By studying the role of ferromagnet thickness, magnetization, and crystal orientation, we show how the supercurrent decays exponentially with thickness and identify two mechanisms responsible for the effect: (i) large exchange splitting may gap out minority or majority carriers, leading to the suppression of Andreev reflection in the junction; (ii) loss of synchronization between different modes due to the significant dispersion of the quasiparticle velocity with the transverse momentum. Our results for Nb/Ni/Nb junctions are in good agreement with recent experimental studies. Our approach combines density functional theory and the Bogoliubov-de Gennes model, and opens a path for material composition optimization in magnetic Josephson junctions and superconducting magnetic spin valves. Magnetic Josephson junction, $\pi$-junction, JMRAM, ab initio calculations, scattering theory. ## 1 Introduction Coherent quantum tunneling of Cooper pairs through a thin barrier is one of the first examples of macroscopic quantum coherent phenomena. 
Predicted by Josephson more than 50 years ago [1], it has important applications in quantum circuits used in metrology, quantum sensing and quantum information processing [2]. Most of the previous studies focused on conventional Josephson junctions (JJs) consisting of two $s$-wave superconductors (S) that are connected by an insulating (I) or a normal (N) region [3, 4]. The flow of supercurrent through a JJ depends on the superconducting phase difference $\phi$ between the two superconductors and, in general, is characterized by the current-phase relationship (CPR) $I(\phi)$. In conventional JJs the CPR is $2\pi$-periodic, $I(\phi)=I(\phi+2\pi)$, which follows from BCS theory [5]. This result is a manifestation of the $2e$ charge of Cooper pairs, and is used in metrology to measure the electron charge. Time-reversal symmetry requires that $I(\phi)=-I(-\phi)$, which imposes the constraint that the supercurrent should be zero for $\phi=\pi n$, where $n$ is an integer. In general, the CPR can be expanded in Fourier harmonics, $I(\phi)=\sum_{n}I_{n}\sin(n\phi)$. In many cases, however, the CPR is well approximated by the first harmonic, $I(\phi)\approx I_{\mathrm{c}}\sin(\phi)$, with $I_{\mathrm{c}}$ the maximum supercurrent that can flow through the junction, i.e. the critical current. At a microscopic level, the supercurrent through a short SNS junction is determined by bound states forming in the constriction due to Andreev reflection at the NS interfaces. In the Andreev reflection process an incident electron-like quasiparticle with spin $\uparrow$ gets reflected at the NS interface as a hole-like quasiparticle with spin $\downarrow$ and a Cooper pair is emitted into the condensate. When time-reversal symmetry is not broken, electrons and holes propagate with the same velocity in the normal region. In this case no phase shift accumulates between this pair of quasiparticles along their trajectories in the N region, and the sign of $I_{\mathrm{c}}$ is fixed.
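The harmonic expansion above lends itself to a quick numerical illustration. The sketch below (our own illustration, with made-up harmonic amplitudes, not values from this paper) extracts the critical current $I_{\mathrm{c}}$ as the maximum of a CPR built from a few Fourier harmonics, and checks the time-reversal constraint $I(\phi)=-I(-\phi)$:

```python
import numpy as np

def cpr(phi, harmonics):
    """CPR from its Fourier expansion I(phi) = sum_n I_n sin(n phi).
    `harmonics` lists I_1, I_2, ... (illustrative values only)."""
    return sum(I_n * np.sin(n * phi) for n, I_n in enumerate(harmonics, start=1))

def critical_current(harmonics, n_phi=20001):
    """I_c = max over phi of I(phi), evaluated on a dense phase grid."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    return float(np.max(cpr(phi, harmonics)))

# A pure first harmonic gives I_c = I_1 exactly.
assert abs(critical_current([1.0]) - 1.0) < 1e-6
# A sine-only series automatically satisfies I(phi) = -I(-phi).
assert abs(cpr(0.7, [1.0, 0.3]) + cpr(-0.7, [1.0, 0.3])) < 1e-12
```

With higher harmonics present, the maximum shifts away from $\phi=\pi/2$ and $I_{\mathrm{c}}$ exceeds $I_{1}$.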
When $I_{\mathrm{c}}>0$ we refer to this case as a $0$-junction. Figure 1: (a) Schematic view of the SFS junction. (b) SFNFS junction. Arrows indicate possible magnetizations of the ferromagnets. The supercurrent through the spin-valve JJ depends on the relative magnetization of the ferromagnets, which governs the properties of the Josephson magnetic random-access memory (JMRAM) [6]. (c) Ball-and-stick representation of the Nb(110)/Ni(111)/Nb(110) junction with 5 layers of Ni. Nb atoms are light grey, Ni atoms are blue. The top and bottom atomic planes of Nb(110) are repeated periodically in the $z$-direction to create the semi-infinite Nb leads of the junction through which the current flows. Periodic boundary conditions are used in the $xy$-plane. The corresponding reciprocal space defines the two-dimensional $\mathbf{k}_{\parallel}=(k_{x},k_{y})$ vectors, i.e. the transverse modes, used in the calculations. In a magnetic Josephson junction (MJJ), exchange splitting breaks time-reversal symmetry and leads to an interesting interplay of superconductivity and magnetism [4, 7, 8, 9, 10]. In superconductor-ferromagnet-superconductor (SFS) junctions [Fig. 1] the correlated quasi-particles and quasi-holes forming Andreev bound states propagate through the junction under the exchange field of the ferromagnet (F). In many ferromagnets, such as Fe or Ni, the exchange splitting is large (of the order of eV) and significantly perturbs the band structure of the metal and, consequently, significantly modifies the Fermi velocities of minority (spin $\downarrow$) and majority (spin $\uparrow$) carriers. Strong time-reversal symmetry breaking leads to the appearance of characteristic superconducting correlations with an oscillatory dependence determined by the difference in wave numbers, $k_{\mathrm{\scriptscriptstyle F}}^{\uparrow}-k_{\mathrm{\scriptscriptstyle F}}^{\downarrow}$ [11, 12].
This effect opens the possibility of supercurrent reversal as a function of the thickness of the ferromagnetic region, the so-called Josephson $\pi$-junction [13, 14]. The correlation between the phase shift of the supercurrent and the magnetization provides a possibility for realizing magnetic spin valves, see Fig. 1, which may have promising novel applications for cryogenic superconducting digital technologies [15, 16, 6]. Understanding the microscopic physics of MJJs is of great scientific interest as well as technological importance. $0$-$\pi$ transitions in SFS junctions have been extensively studied experimentally since the early 2000s and have been observed in different material systems [14, 17, 18, 19, 20, 21, 22, 23, 24, 25]. While these observations are qualitatively consistent with previous phenomenological theories [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42], the role of the microscopic band structure arising from the atomic lattice in the supercurrent suppression with junction thickness remains unclear. Our primary goal is to address these essential points. Supercurrent suppression that is exponential in the junction length is often associated with the presence of disorder in the ferromagnetic region [43, 7]. However, significant supercurrent suppression can also appear in relatively clean metals (e.g., Ni) whose mean free path is larger than the junction thickness [44, 22, 23]. Here we study the suppression of the critical current in the MJJs shown in Fig. 1 and identify two microscopic mechanisms for this suppression, both a consequence of the band structure asymmetry between the majority and minority carriers in the F region. First, there is an asymmetry in the structure of the Fermi surface, see Fig. 2. For certain bands and momenta, the Fermi surface present in one spin channel may be absent in the other.
This is typical in ferromagnetic materials like Fe, Co, and Ni because the bandwidth for $d$-electrons is relatively small and is often comparable to the exchange splitting. As a result, the wave number of one of the constituent quasi-particles forming Andreev bound states in an MJJ becomes imaginary and the supercurrent is suppressed. We label this scenario as mechanism (i). In the second mechanism (ii), we show that there is a dephasing of the harmonic signals originating from different Fourier components of the supercurrent, due to the Fermi velocity dispersion, see Fig. 3. Both of these mechanisms lead to an exponential suppression of the supercurrent, which was previously believed to occur due to the presence of disorder in the magnetic region. We show that band structure effects are important and may even be dominant in many cases. In order to capture the realistic band structure, we develop a microscopic theory for the supercurrent in realistic MJJs. We use a combination of density functional theory (DFT) and a Bogoliubov-de Gennes (BdG) model to investigate the $0$-$\pi$ transition in realistic material stacks of Nb/Ni/Nb junctions in the clean limit. This method allows one to predict and explain key properties of MJJs such as the period and decay of the critical current oscillations with the ferromagnet thickness. Figure 2: (a) Simplified band structure of a ferromagnet, with majority and minority bands split by $V_{\mathrm{ex}}$. $k_{\mathrm{\scriptscriptstyle F}}^{\uparrow}$ and $k_{\mathrm{\scriptscriptstyle F}}^{\downarrow}$ are the Fermi momenta for majority and minority carriers, respectively. Fermi level $E_{\mathrm{\scriptscriptstyle F}}^{\mathrm{(i)}}$ corresponds to large $V_{\mathrm{ex}}$, where the minority band is pushed above $E_{\mathrm{\scriptscriptstyle F}}^{\mathrm{(i)}}$. $E_{\mathrm{\scriptscriptstyle F}}^{\mathrm{(ii)}}$ corresponds to small $V_{\mathrm{ex}}$.
Thus, the propagation of minority quasiparticles is characterized by an imaginary momentum $k_{\mathrm{\scriptscriptstyle F}}^{\downarrow}$ and is suppressed. (b) Supercurrent in an SFS junction is carried by Andreev bound states localized in the junction. The solid red line represents a quasi-classical trajectory corresponding to an Andreev bound state. The spectrum of Andreev states depends on the relative superconducting phase difference across the junction as well as the phase, $\delta\varphi=|k_{\mathrm{\scriptscriptstyle F}}^{\uparrow}-k_{\mathrm{\scriptscriptstyle F}}^{\downarrow}|w$, accumulated due to the difference of Fermi momenta for majority and minority carriers. Note that in scenario (i) the propagation of minority carriers is suppressed, leading to an overall exponential decay of the supercurrent with $w$. This is to be contrasted with the normal transport through the junction. The paper is organized as follows. We describe our findings and summarize our main results in Sec. 2. In Sec. 3 we discuss the numerical method we developed to perform first-principles supercurrent calculations. A detailed discussion of the main results is presented in Sec. 4. We conclude with Sec. 5. Some technical points are relegated to Appendices A–E. Figure 3: (a) First Fourier harmonic $J_{1}$ of the supercurrent density [Eq. (3), black circles] and its fit [Eq. (14), solid black line] as a function of Ni layer thickness, $w$, calculated for the Nb(110)/Ni(111)/Nb(110) junctions shown in Fig. 1. Green semitransparent curves correspond to $j_{1}^{\mathrm{fit}}(\mathbf{k}_{\parallel})$ for individual $\mathbf{k}_{\parallel}$. (b) Normal-state conductance per unit area as a function of $w$ for majority ($G_{\uparrow}$) and minority ($G_{\downarrow}$) spins [Eq. (10)] as well as $G=G_{\uparrow}+G_{\downarrow}$. $G$ does not depend on $w$ and is approximated by the single value $\langle G\rangle$.
(c)–(e) Supercurrent as a function of phase difference $\phi$ for (c) 4 atomic layers of Ni (strong $0$-junction regime), (d) 8 layers (intermediate regime), and (e) 13 layers (strong $\pi$-junction regime). In the $0$- and $\pi$-junction regimes, the $J_{1}$ component dominates. In the intermediate regime higher-order terms may prevail.

## 2 Qualitative discussion and main results

In this section we describe basic concepts for the supercurrent flow in MJJs and summarize our results. Our main qualitative conclusions are supported by microscopic calculations for Nb/Ni/Nb MJJs. Ni appears to have a fairly long mean free path, $l_{\mathrm{\scriptscriptstyle MFP}}\approx 60$ Å (see the estimates in Appendix A), which is comparable to or larger than the typical thickness of the ferromagnet used in recent experiments [22, 23]. Therefore, the motion of quasiparticles in the Nb/Ni/Nb junction is quasi-ballistic, and our method is applicable to this system. Most of this paper focuses on clean Nb/Ni/Nb junctions. First, it is illuminating to consider a toy model, a one-dimensional SFS junction, and calculate the supercurrent in such a system for different Fermi energies, see Fig. 2. Using the results of Ref. 45, one finds that when both majority and minority spin bands are occupied, i.e. in the limit $V_{\mathrm{ex}}/E_{\mathrm{\scriptscriptstyle F}}\ll 1$ [scenario (ii) in Fig. 2], the supercurrent does not decay with ferromagnet thickness at zero temperature, $\displaystyle I(\phi)=\frac{2e\Delta}{\hbar}\begin{cases}\cos\delta\varphi\sin\cfrac{\phi}{2},&0<\phi<\pi-2\delta\varphi,\\ -\sin\delta\varphi\cos\cfrac{\phi}{2},&\pi-2\delta\varphi<\phi<\pi+2\delta\varphi,\\ -\cos\delta\varphi\sin\cfrac{\phi}{2},&\pi+2\delta\varphi<\phi<2\pi.\end{cases}$ (1) Here perfect interface transparency $\mathcal{T}$ is assumed, $\mathcal{T}\approx 1$.
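A minimal numerical sketch of Eq. (1) (our own illustration, in units of $2e\Delta/\hbar$, for $0\le\delta\varphi\le\pi/2$ and perfect transparency) makes the $0$- to $\pi$-junction crossover explicit:

```python
import numpy as np

def toy_cpr(phi, dphi):
    """Eq. (1) in units of 2 e Delta / hbar: T = 0 CPR of the ballistic 1D
    SFS toy model at perfect transparency.  dphi = |kF_up - kF_dn| w is the
    accumulated phase offset; used here for 0 <= dphi <= pi/2."""
    phi = np.mod(phi, 2.0 * np.pi)
    if phi < np.pi - 2.0 * dphi:
        return np.cos(dphi) * np.sin(phi / 2.0)
    elif phi < np.pi + 2.0 * dphi:
        return -np.sin(dphi) * np.cos(phi / 2.0)
    return -np.cos(dphi) * np.sin(phi / 2.0)

# dphi = 0: positive current for 0 < phi < pi (0-junction);
# dphi = pi/2: the sign of the CPR is reversed (pi-junction).
assert toy_cpr(np.pi / 2, 0.0) > 0.0
assert toy_cpr(np.pi / 2, np.pi / 2) < 0.0
```

At intermediate $\delta\varphi$ the three branches produce the anharmonic CPR discussed next.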
The phase offset $\delta\varphi$ originates from the Fermi momentum difference of the quasi-particle and quasi-hole forming an Andreev bound state in the junction, see Fig. 2, and is given by $\delta\varphi=|k_{\mathrm{\scriptscriptstyle F}}^{\uparrow}-k_{\mathrm{\scriptscriptstyle F}}^{\downarrow}|w\approx V_{\mathrm{ex}}w/\hbar v_{\mathrm{\scriptscriptstyle F}}$. The $0$- and $\pi$-junction regimes can be clearly identified at $\delta\varphi=0$ and $\delta\varphi=\pi/2$, respectively. At intermediate values $0<\delta\varphi<\pi/2$, the CPR is anharmonic, which is a generic feature of the $0$-$\pi$ transition, as shown below. In the low-transparency regime, $\mathcal{T}\ll 1$, one would expect qualitatively similar results, with the maximal critical current suppressed, $I_{\mathrm{c}}\sim(e\Delta/\hbar)\mathcal{T}$, but still independent of the ferromagnet thickness, $w$. In the case of large exchange splitting $V_{\mathrm{ex}}/E_{\mathrm{\scriptscriptstyle F}}\gg 1$ [scenario (i) in Fig. 2] the minority band may become unoccupied. Given that minority carriers are gapped out and their propagation through the junction is suppressed, the supercurrent decays exponentially with $w$ [45], $I(\phi)\approx\frac{2e\Delta}{\hbar}\exp(-\kappa w)\Bigl[1-\frac{E_{\mathrm{\scriptscriptstyle F}}}{8V_{\mathrm{ex}}}\sin^{2}(kw)\Bigr]\sin\phi.$ (2) Here $\kappa=\sqrt{2m^{*}(V_{\mathrm{ex}}-E_{\mathrm{\scriptscriptstyle F}})}/\hbar$ and $k=\sqrt{2m^{*}(V_{\mathrm{ex}}+E_{\mathrm{\scriptscriptstyle F}})}/\hbar$, with $m^{*}$ the effective electron mass. One may notice the drastic difference between normal-state and superconducting transport in this case: the former is weakly affected (because majority and minority contributions are additive) whereas the supercurrent is strongly suppressed. In this case, the measurement of the normal-state junction resistance does not necessarily predict the magnitude of the supercurrent through the junction.
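For orientation, the decay constant of Eq. (2) can be evaluated numerically; the parameter values below are illustrative round numbers of our choosing, not fitted to Ni:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def decay_length_angstrom(v_ex_ev, e_f_ev, m_star=1.0):
    """1/kappa with kappa = sqrt(2 m* (V_ex - E_F)) / hbar from Eq. (2);
    applies in scenario (i), V_ex > E_F (minority band gapped out).
    m_star is in units of the bare electron mass."""
    kappa = np.sqrt(2.0 * m_star * M_E * (v_ex_ev - e_f_ev) * EV) / HBAR
    return 1e10 / kappa  # decay length in angstroms

# Illustrative: m* = m_e and V_ex - E_F = 1 eV give a decay length of
# about 2 angstroms, i.e. the supercurrent dies within a few atomic
# layers once the minority band is unoccupied.
length = decay_length_angstrom(v_ex_ev=1.5, e_f_ev=0.5)
assert 1.9 < length < 2.0
```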
We now generalize the above results to the realistic three-dimensional (3D) geometry and material composition of the MJJ. In the clean limit, the supercurrent $I(\phi)$ in the short-junction limit ($w$ much smaller than the coherence length of the superconductor) is obtained from the spectrum of the Andreev bound states $\varepsilon_{\nu}(\phi,\mathbf{k}_{\parallel})$ localized in the junction [3], which now also depends on the parallel momentum $\mathbf{k}_{\parallel}$. The supercurrent density $J(\phi)$ per junction area $A$ is $J(\phi)=I(\phi)/A$. For a junction with periodic atomic structure in the $xy$-plane the supercurrent density at zero temperature is given by $J(\phi)=-\frac{e}{\hbar}\int\limits_{\mathrm{\scriptscriptstyle BZ}}\!\frac{d\mathbf{k}_{\parallel}}{(2\pi)^{2}}\,\sum_{\nu>0}\frac{\partial\varepsilon_{\nu}(\phi,\mathbf{k}_{\parallel})}{\partial\phi},$ (3) where the $\mathbf{k}_{\parallel}$ integration is performed over the Brillouin zone (BZ) of the corresponding surface supercell of area $A$, and the sum is carried over positive quasiparticle energies, $\varepsilon_{\nu}(\phi,\mathbf{k}_{\parallel})>0$. Note that we use spin-resolved $\varepsilon_{\nu}$ and therefore omit the spin prefactor of 2 in Eq. (3). The derivative is periodic in $\phi$ and can be represented as a Fourier series $-\frac{e}{\hbar}\,\frac{\partial\varepsilon_{\nu}(\phi,\mathbf{k}_{\parallel})}{\partial\phi}=\sum_{n\geqslant 1}I_{n\nu}(\mathbf{k}_{\parallel})\sin\bigl[n\phi+\delta\varphi_{n\nu}(\mathbf{k}_{\parallel})\bigr],$ so that Eq. (3) can be written as $J(\phi)=\int\limits_{\mathrm{\scriptscriptstyle BZ}}\!\frac{d\mathbf{k}_{\parallel}}{(2\pi)^{2}}\,\sum_{\nu>0}\,\sum_{n\geqslant 1}I_{n\nu}\sin\bigl[n\phi+\delta\varphi_{n\nu}(\mathbf{k}_{\parallel})\bigr].$ As we will show below, away from the $0$-$\pi$ transition the supercurrent is dominated by the first ($n=1$) harmonic.
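Equation (3) can be mimicked with a toy spectrum. The sketch below (our own stand-in, not the ab initio calculation) substitutes Beenakker's short-junction Andreev level $\varepsilon(\phi)=\Delta\sqrt{1-\mathcal{T}\sin^{2}(\phi/2)}$ for $\varepsilon_{\nu}(\phi,\mathbf{k}_{\parallel})$ and takes a finite-difference $\phi$-derivative over a handful of made-up mode transparencies (units $e=\hbar=\Delta=1$):

```python
import numpy as np

def andreev_level(phi, T):
    """Short-junction Andreev bound-state energy (Beenakker's formula),
    standing in for the ab initio spectrum eps_nu(phi, k||); Delta = 1."""
    return np.sqrt(1.0 - T * np.sin(phi / 2.0) ** 2)

def supercurrent(phi, transparencies, dphi=1e-6):
    """Discrete analogue of Eq. (3): I = -sum_nu d eps_nu / d phi,
    with the derivative taken by central finite differences."""
    total = lambda p: sum(andreev_level(p, T) for T in transparencies)
    return -(total(phi + dphi) - total(phi - dphi)) / (2.0 * dphi)

modes = [0.9, 0.5, 0.2]                        # made-up transparencies
assert abs(supercurrent(0.0, modes)) < 1e-4    # no current at phi = 0
assert supercurrent(np.pi / 2, modes) > 0.0    # 0-junction behaviour
```

In the real calculation the sum over made-up transparencies is replaced by the $\mathbf{k}_{\parallel}$ integral over the surface-supercell Brillouin zone.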
Therefore, we focus henceforth on the first-harmonic contribution and drop the $n$ index in the following discussion. Next, one may notice that the supercurrent amplitudes $I_{\nu}(\mathbf{k}_{\parallel})$ and phase offsets $\delta\varphi_{\nu}(\mathbf{k}_{\parallel})$ depend on the parallel momentum $\mathbf{k}_{\parallel}$. One may include this dependence and define an effective Fermi energy $E_{\mathrm{\scriptscriptstyle F}}(\mathbf{k}_{\parallel})$ that counts the energy corresponding to $\mathbf{k}_{\parallel}$ in each band of the ferromagnet from the bottom of that band. Depending on $E_{\mathrm{\scriptscriptstyle F}}(\mathbf{k}_{\parallel})$ and $V_{\mathrm{ex}}$, either scenario (i) or (ii) of Fig. 2 may be realized. Thus, it is important to compare the exchange splitting with the bandwidth of $d$-character states in transition metals to make sure that a perturbation theory in $V_{\mathrm{ex}}$ is justified. Assuming this is the case, one can estimate the phase offset $\delta\varphi_{\nu}(\mathbf{k}_{\parallel})$ by expanding in the exchange splitting to find $\delta\varphi_{\nu}(\mathbf{k}_{\parallel})\approx V_{\mathrm{ex}}w/\hbar v_{z\nu}(\mathbf{k}_{\parallel}).$ (4) In general, the dependence of $v_{z\nu}$ on $\mathbf{k}_{\parallel}$ is complicated, especially in $spd$-transition metals. The combination of complicated amplitude and phase offset dependence on $\mathbf{k}_{\parallel}$ leads to a non-trivial supercurrent dependence on the ferromagnet thickness $w$. As shown in Fig. 3, the critical current decays with $w$ for the Nb(110)/Ni(111)/Nb(110) junctions. We analyze the details of the decay and perform an exponential fit in Sec. 4. At small thicknesses, below 30 Å, this decay originates from the evanescent modes corresponding to gapped-out minority or majority carriers which cannot propagate through the junction, see Table 1. This is mechanism (i) discussed above. At larger thicknesses (i.e.
$w\gtrsim 50$ Å) a loss of synchronization between different modes due to the dispersion of $v_{z\nu}(\mathbf{k}_{\parallel})$ becomes important. This second mechanism (ii) has been previously discussed in the literature [13, 7] under the assumptions of a single spherical Fermi surface and a small uniform exchange splitting $V_{\mathrm{ex}}$ in the magnetic region. Within these assumptions, one finds that the critical current should decay algebraically with the thickness $w$ of the magnetic layer [13]. However, as we show below, most of these assumptions do not apply to realistic SFS junctions involving transition metals. Thus, in order to understand the CPR in realistic MJJs, one needs to use accurate ab initio methods, which capture the physical effects described above. Figure 4: (a) Electronic band structure of bulk Ni (solid lines) calculated from first principles including many-body effects, see Ref. 46. It is the highest-fidelity band structure available and is very close to ARPES data (diamonds) in both majority (red) and minority (green) spin bands, with exchange splitting $V_{\mathrm{ex}}=0.3$ eV. (b) Majority (solid line) and minority (dashed line) Fermi surfaces of bulk Ni in the $\mathbf{k}_{\parallel}$ plane (with $k_{z}=0$) corresponding to the (111) plane direction used for the stacking of the Nb/Ni/Nb junctions, as discussed in the text. Axes correspond to two $\Gamma$-L lines: the Fermi surface has a three-fold symmetry in the entire plane. Majority band $6^{\uparrow}$ is depicted as a solid red-blue line with the color interpolating between red and blue depending on the Fermi velocity $\hbar^{-1}\partial E/\partial k$, which ranges between $3\times 10^{5}$ m/s (red) and $6\times 10^{5}$ m/s (blue). Minority bands are shown by dashed lines: $3^{\downarrow}$ (blue), $4^{\downarrow}$ (cyan), $5^{\downarrow}$ (green), and $6^{\downarrow}$ (red). To make a connection between the decay seen in Fig. 3 and the mechanisms responsible for it, Fig. 4 presents the energy band structure of bulk Ni.
This band structure is probably the highest-fidelity one available: it very closely reproduces ARPES data in both majority and minority spin bands, with exchange splitting $V_{\mathrm{ex}}=0.3$ eV [46], and should be an excellent predictor of the real Fermi surface and velocities in Ni.

| Band | $v_{\mathrm{\scriptscriptstyle F}}^{\mathrm{min}\uparrow}$ | $v_{\mathrm{\scriptscriptstyle F}}^{\mathrm{max}\uparrow}$ | $\langle v_{\mathrm{\scriptscriptstyle F}}^{\uparrow}\rangle$ | $\sqrt{\langle v_{\mathrm{\scriptscriptstyle F}}^{\uparrow 2}\rangle}$ | $\rho^{\uparrow}(E_{\mathrm{\scriptscriptstyle F}})$ | $\Delta{E}$ | $m^{*}\!/m_{\mathrm{e}}$ | $\kappa$ | $v_{\mathrm{\scriptscriptstyle F}}^{\mathrm{min}\downarrow}$ | $v_{\mathrm{\scriptscriptstyle F}}^{\mathrm{max}\downarrow}$ | $\langle v_{\mathrm{\scriptscriptstyle F}}^{\downarrow}\rangle$ | $\sqrt{\langle v_{\mathrm{\scriptscriptstyle F}}^{\downarrow 2}\rangle}$ | $\rho^{\downarrow}(E_{\mathrm{\scriptscriptstyle F}})$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|  | [$10^{5}$ m/s] | [$10^{5}$ m/s] | [$10^{5}$ m/s] | [$10^{5}$ m/s] | [eV$^{-1}$Å$^{-3}$] | [eV] |  | [Å] | [$10^{5}$ m/s] | [$10^{5}$ m/s] | [$10^{5}$ m/s] | [$10^{5}$ m/s] | [eV$^{-1}$Å$^{-3}$] |
| 2 |  |  |  |  |  | $-1.59$ | 0.68 | 12 |  |  |  |  |  |
| 3 |  |  |  |  |  | $-0.22$ | 3.90 | 13 | 2.7 | 3.6 | 3.3 | 3.6 | 0.004 |
| 4 |  |  |  |  |  | $-0.11$ | 0.61 | 47 | 0.8 | 1.8 | 1.1 | 1.2 | 0.015 |
| 5 |  |  |  |  |  | $-0.10$ | 2.95 | 22 | 0.6 | 3.0 | 1.5 | 1.8 | 0.173 |
| 6 | 3.5 | 6.0 | 4.6 | 5.2 | 0.029 |  |  |  | 0.6 | 3.0 | 2.3 | 2.6 | 0.045 |

Table 1: Minimum, maximum, average, and root mean square of the Fermi velocities $v_{\mathrm{\scriptscriptstyle F}}$ for the majority ($\uparrow$) and minority ($\downarrow$) Fermi surfaces in bulk Ni.
$\rho(E_{\mathrm{\scriptscriptstyle F}})$ is the density of states at the Fermi level, $\Delta{E}$ is the energy of the valence band maximum relative to $E_{\mathrm{\scriptscriptstyle F}}$ for the majority bands not crossing $E_{\mathrm{\scriptscriptstyle F}}$, $m^{*}$ ($m_{\mathrm{e}}$) is the effective (bare) electron mass, and $\kappa$ estimates the decay exponent of the evanescent mode for a given band at $E_{\mathrm{\scriptscriptstyle F}}$. Here we use this potential to analyze the bulk Ni Fermi surface [Fig. 4] and Fermi velocities (Table 1). Counting from the bottom $s$-band, the bands of Ni $d$ character are bands 2 to 6. These bands are nearly full: only majority band $6^{\uparrow}$ and minority bands $3^{\downarrow}$–$6^{\downarrow}$ cross the Fermi level $E_{\mathrm{\scriptscriptstyle F}}$. Only band 6 has both majority and minority carriers at the Fermi surface. Bands $6^{\uparrow}$ and $6^{\downarrow}$ have roughly the same shape, and the energy splitting is approximately constant and equal to $V_{\mathrm{ex}}$ (see the right panel of Fig. 4 in Ref. 46). Beyond this, however, the Ni band structure deviates substantially from a simple parabolic band structure, in two ways that critically affect the analysis. First, the Fermi velocity, $v_{\mathrm{\scriptscriptstyle F}}={\hbar}^{-1}(\partial E/\partial k)|_{k=k_{\mathrm{\scriptscriptstyle F}}}$, is not fixed even for a single pocket: it varies in band $6^{\uparrow}$ by a factor of 2 [see Fig. 4 and Table 1]. Accordingly, the splitting $k^{\uparrow}_{\mathrm{\scriptscriptstyle F}}-k^{\downarrow}_{\mathrm{\scriptscriptstyle F}}$ between bands $6^{\uparrow}$ and $6^{\downarrow}$ varies by a factor of two, as expected from the twofold variation in $v_{\mathrm{\scriptscriptstyle F}}$. Second, bands $3^{\downarrow}$–$5^{\downarrow}$ have no majority counterpart at $E_{\mathrm{\scriptscriptstyle F}}$, indicating that the wave number of bands $3^{\uparrow}$–$5^{\uparrow}$ is complex.
This is the origin of the exponential decay in scenario (i) in Fig. 2 as noted above: a large portion of Andreev levels are carried by Cooper pairs made of single-particle wave functions with one or both of $k_{\mathrm{\scriptscriptstyle F}}^{\uparrow}$ and $k_{\mathrm{\scriptscriptstyle F}}^{\downarrow}$ having an imaginary component. The magnitude of $\mathrm{Im}\,k$ depends on the particular mode and $\mathbf{k}_{\parallel}$, leading to a distribution of decay exponents. The slowest decay in each of these evanescent modes can be estimated from the distance $\Delta{E}$ of the closest approach to $E_{\mathrm{\scriptscriptstyle F}}$ and the effective mass $m^{*}/m_{e}$, using $\hbar^{2}k_{\mathrm{min}}^{2}/2m^{*}=\Delta{E}$ and decay $\kappa=2\pi/\mathrm{Im}(k_{\mathrm{min}})$, see Table 1. ($m^{*}/m_{e}$ is found to be highly anisotropic, so only the effective transport mass $m^{*}=3\,[1/m_{1}^{*}+1/m_{2}^{*}+1/m_{3}^{*}]^{-1}$ is shown.) $\kappa$ is only a rough measure of the evanescent mode decay for a given band. We discuss the distribution of decay exponents and phase offsets in detail in Sec. 4. Let us now focus on mechanism (ii) for the supercurrent decay, i.e. the loss of synchronization between different transverse modes. This mechanism is well-known in diffusive systems, where the quasiparticle trajectory is random and thus the phase offset $\delta\varphi$ accumulated along such a trajectory also gets randomized. Thus, upon averaging Eq. (3) over different disorder realizations, one ends up with an exponentially decaying critical current [7]. Previously, such a suppression of the supercurrent with junction thickness, $w$, was often associated with impurity scattering in the ferromagnet. Here we demonstrate that this dephasing mechanism can also appear in clean systems (where the quasiparticle trajectory is well defined) due to the dispersion of the velocity $v_{z\nu}$ with the in-plane momentum $\mathbf{k}_{\parallel}$.
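Both mechanisms can be checked against the numbers quoted here. The sketch below (our own cross-check, not code from the paper) reproduces the $\kappa$ column of Table 1 from $\Delta E$ and $m^{*}$, and shows how averaging the first-harmonic phase factor over the $3$–$6\times 10^{5}$ m/s velocity spread of band $6^{\uparrow}$ suppresses the net amplitude as $w$ grows:

```python
import numpy as np

HBAR = 1.054571817e-34; M_E = 9.1093837015e-31; EV = 1.602176634e-19
ANG = 1e-10

def kappa_angstrom(delta_e_ev, m_star):
    """Evanescent-mode decay estimate of Table 1:
    hbar^2 (Im k_min)^2 / 2 m* = Delta_E and kappa = 2 pi / Im(k_min)."""
    im_k = np.sqrt(2.0 * m_star * M_E * delta_e_ev * EV) / HBAR
    return 2.0 * np.pi / im_k / ANG

# (|Delta_E| [eV], m*/m_e, kappa [angstrom]) for bands 2-5 of Table 1.
for d_e, m_s, kappa_table in [(1.59, 0.68, 12), (0.22, 3.90, 13),
                              (0.11, 0.61, 47), (0.10, 2.95, 22)]:
    assert abs(kappa_angstrom(d_e, m_s) - kappa_table) < 1.0

def dephasing_amplitude(w_ang, v_ex_ev=0.3, v_min=3e5, v_max=6e5, n=20000):
    """Mechanism (ii): |<exp(i dphi)>| with dphi = V_ex w / (hbar v_z),
    averaged over a uniform velocity spread (band 6 range of Table 1)."""
    v = np.linspace(v_min, v_max, n)
    dphi = v_ex_ev * EV * w_ang * ANG / (HBAR * v)
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Velocity dispersion alone suppresses the averaged harmonic with w.
assert dephasing_amplitude(50.0) < dephasing_amplitude(10.0) < 1.0
```

The uniform velocity distribution is a simplification of ours; the actual $\mathbf{k}_{\parallel}$-resolved weights are computed in Sec. 4.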
Specifically, we find that, in the Nb/Ni/Nb junctions, this mechanism becomes relevant for junctions thicker than $50$ Å, see Fig. 3 and Sec. 4. In SFS junctions the combination of disorder in the ferromagnet, interface scattering, as well as band-structure-induced dephasing ultimately determines the magnitude of the supercurrent. However, we believe that band structure effects provide an upper bound on the magnitude of the critical current as a function of $w$. In Sec. 4, we present numerical results which support the qualitative discussion presented above.

## 3 Method

We develop a numerical method to perform realistic simulations of MJJs using a combination of first-principles DFT and BdG calculations. The former is used to obtain the normal-state properties (e.g., band structure, Fermi velocities, magnetization) and to calculate the normal scattering matrices through the inhomogeneous 3D realistic junctions. As a next step we take superconductivity into account and calculate the supercurrent through the stack assuming the short-junction limit.

### 3.1 Normal transport: ab initio description

To calculate the normal scattering matrix we use the Questaal package for electronic structure calculations based on the linear muffin-tin orbital (LMTO) method [47]. It calculates the full non-linear, i.e. non-equilibrium, transport properties of an infinite system describing a central (C) region cladded by two semi-infinite left (L) and right (R) leads [48, 49], as represented below: $\overbrace{\ldots\,|\,\mathcal{L}\,|\,\mathcal{L}\,}^{\textstyle\mathrm{L}}\,|\,\overbrace{\,\mathrm{PL}_{0}\,|\,\mathrm{PL}_{1}\,|\,\ldots\,|\,\mathrm{PL}_{L-1}\,}^{\textstyle\mathrm{C}}\,|\,\overbrace{\,\mathcal{R}\,|\,\mathcal{R}\,|\,\ldots}^{\textstyle\mathrm{R}}$ The LCR system [Fig. 1] can be partitioned into an infinite stack of principal layers (PLs) which interact only with their nearest neighbors. This is possible because the screened LMTO structure constants are short-ranged [50].
In the present case the C region consists of the ferromagnet, plus two layers of Nb at the LC and CR interfaces, respectively. This is the range over which the perturbation from C significantly modifies the potential in the L or R region. To construct the Nb/Ni/Nb stack, coincident site lattices for Ni and Nb must be found (details of how this was accomplished are given in Appendix B). Planes of coincident site lattices are stacked to form the Nb/Ni/Nb structures. Figure 5 shows the Nb and Ni planes we used, which are denoted here as ‘surface supercells.’ The electronic current flows along the $z$ direction, perpendicular to the PLs lying in the $xy$-plane (transverse direction), see Fig. 1. Periodic boundary conditions are used within each PL. The corresponding reciprocal space defines the two-dimensional (2D) $\mathbf{k}_{\parallel}$ vectors, i.e. the transverse modes, used in the calculations. The $\mathbf{k}_{\parallel}$ mesh is discretized and integrals over $\mathbf{k}_{\parallel}$ are performed numerically. The electronic structure of the C region can be separated from the L and R regions through self-energies, $\Sigma_{\mathrm{\scriptscriptstyle L}}$ and $\Sigma_{\mathrm{\scriptscriptstyle R}}$, that modify the Hamiltonian of the C region. They are most easily calculated if the potential of each PL in the L or R region is identical all through the bulk region. This is the reason for folding a few Nb layers into the C region. Thus the periodically repeating unit cells in the L and R regions can be safely assumed to have the potential of the bulk crystal. To construct the self-energies, the potentials of the PLs in an infinite stack are needed. These potentials are functions only of the PL in their own region, and may be calculated in several ways.
$\Sigma_{\mathrm{\scriptscriptstyle L}}$ and $\Sigma_{\mathrm{\scriptscriptstyle R}}$ are obtained from ‘surface’ Green’s functions, computed for a fictitious system consisting of a semi-infinite stack of PLs, each with the same potential. Note that the potential of the C region is calculated self-consistently. This is important, as the local moments of Ni are small at the boundary layers and build up gradually, see Fig. 5. With the potential in hand, the normal-state transport can be calculated using the scattering formalism [49, 51]. For this, knowing the retarded Green’s function, $\mathcal{G}^{\mathrm{r}}$, of the junction is sufficient. However, this is not the case for the Josephson current: the individual eigenfunctions are required. Within a Green’s function framework, $\mathcal{G}^{\mathrm{r}}$ must be organized by normal modes, which correspond to the eigenstates of the L and R leads at a prescribed energy $E$. In the PL representation, the Hamiltonian has been discretized into linear combinations of the LMTO basis functions, and the normal modes are represented as eigenvectors in this basis. The Schrödinger equation becomes a difference equation in the PL basis functions [52]. The eigenvectors are calculated by solving a quadratic eigenvalue problem [52, 53], whose eigenvalues correspond to $\exp(\pm ik_{z,n}a)$, where $a$ is the thickness of the PL. The wave number $k_{z,n}$ of normal mode $n$ can be complex, but to correspond to a propagating mode $k_{z,n}$ must be real. By solving the equation as a function of the energy $E$, one obtains all the eigenvalues and eigenvectors, which provide the information needed to construct the self-energies $\Sigma_{\mathrm{\scriptscriptstyle L}}$ and $\Sigma_{\mathrm{\scriptscriptstyle R}}$ (for each $\mathbf{k}_{\parallel}$ and each spin $\sigma$).
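As a toy version of this step (our own 1D nearest-neighbour chain rather than the LMTO principal-layer problem), the quadratic eigenvalue problem and its propagating/evanescent classification look as follows:

```python
import numpy as np

def lead_modes(E, eps=0.0, t=1.0):
    """Mode eigenvalues for a 1D nearest-neighbour chain (hopping t,
    on-site energy eps).  The difference equation
        -t psi_{n+1} + eps psi_n - t psi_{n-1} = E psi_n
    with the ansatz psi_n = lam^n reduces to the quadratic
        t lam^2 + (E - eps) lam + t = 0,
    whose roots come in pairs lam = exp(+-i k_z a): |lam| = 1 for
    propagating modes, |lam| != 1 for evanescent ones."""
    return np.roots([t, E - eps, t])

# Inside the band (|E - eps| < 2t) both roots lie on the unit circle;
# outside the band the mode pair is evanescent (one decaying, one growing).
assert np.allclose(np.abs(lead_modes(0.0)), 1.0)
assert not np.any(np.isclose(np.abs(lead_modes(3.0)), 1.0))
```

In the full calculation the scalar quadratic becomes a matrix quadratic eigenvalue problem in the PL basis, solved at each $E$, $\mathbf{k}_{\parallel}$, and spin.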
Note that, in the mode basis, the imaginary part of the self-energies is proportional to the (band) velocity of the modes, and is diagonal for non-degenerate modes [53, 54]. The retarded Green’s function $\mathcal{G}^{\mathrm{r}}$ of the C region (connected to the L and R leads) can be written as a matrix in the normal mode basis, $\mathcal{G}^{\mathrm{r}}_{\sigma}(E,\mathbf{k}_{\parallel})=\Bigl{\\{}[\mathcal{G}^{\mathrm{r}}_{{\mathrm{\scriptscriptstyle C}},\sigma}(E,\mathbf{k}_{\parallel})]^{-1}\\\ -\Sigma_{{\mathrm{\scriptscriptstyle L}},\sigma}(E,\mathbf{k}_{\parallel})-\Sigma_{{\mathrm{\scriptscriptstyle R}},\sigma}(E,\mathbf{k}_{\parallel})\Bigr{\\}}^{-1},$ (5) where $\mathcal{G}^{\mathrm{r}}_{\mathrm{\scriptscriptstyle C}}$ is the Green’s function of the isolated C region. In this basis, $\mathcal{G}^{\mathrm{r}}$ is decomposed into four blocks, $\mathcal{G}^{\mathrm{r}}_{\sigma}(E,\mathbf{k}_{\parallel})=\left[\begin{array}[]{cc}\mathcal{G}^{\mathrm{r}}_{{\mathrm{\scriptscriptstyle LL}},\sigma}(E,\mathbf{k}_{\parallel})&\mathcal{G}^{\mathrm{r}}_{{\mathrm{\scriptscriptstyle LR}},\sigma}(E,\mathbf{k}_{\parallel})\\\ \mathcal{G}^{\mathrm{r}}_{{\mathrm{\scriptscriptstyle RL}},\sigma}(E,\mathbf{k}_{\parallel})&\mathcal{G}^{\mathrm{r}}_{{\mathrm{\scriptscriptstyle RR}},\sigma}(E,\mathbf{k}_{\parallel})\end{array}\right]\\!,$ (6) upon projecting onto the propagating modes of the L and R leads. These four quantities and the mode velocities completely determine the normal state transport properties of the junctions. The transmission matrices are defined by the off-diagonal parts of Eq. (6). 
More specifically, the transmission coefficients $[t_{{\mathrm{\scriptscriptstyle LR}},\sigma}]_{nm}$, connecting the L and R regions, are given by [51] $[t_{{\mathrm{\scriptscriptstyle LR}},\sigma}]_{nm}(E,\mathbf{k}_{\parallel})=i\sqrt{|[v_{{\mathrm{\scriptscriptstyle L}},\sigma}]_{n}(E,\mathbf{k}_{\parallel})|}\,[\mathcal{G}^{\mathrm{r}}_{{\mathrm{\scriptscriptstyle LR}},\sigma}]_{nm}(E,\mathbf{k}_{\parallel})\,\sqrt{|[v_{{\mathrm{\scriptscriptstyle R}},\sigma}]_{m}(E,\mathbf{k}_{\parallel})|},$ (7) where $[v_{{\mathrm{\scriptscriptstyle L}},\sigma}]_{n}$ and $[v_{{\mathrm{\scriptscriptstyle R}},\sigma}]_{m}$ are the velocity matrix elements for propagating modes $n$ and $m$ in the L and R leads, respectively. The transmission matrix $t_{\mathrm{\scriptscriptstyle RL}}$ can be obtained from Eq. (7) by replacing $\mathrm{L}\leftrightarrow\mathrm{R}$. The reflection coefficients are given by the diagonal blocks of Eq. (6). For instance, on the L side $[r_{{\mathrm{\scriptscriptstyle LL}},\sigma}]_{nn^{\prime}}(E,\mathbf{k}_{\parallel})=i\sqrt{|[v_{{\mathrm{\scriptscriptstyle L}},\sigma}]_{n}(E,\mathbf{k}_{\parallel})|}\,[\mathcal{G}^{\mathrm{r}}_{{\mathrm{\scriptscriptstyle LL}},\sigma}]_{nn^{\prime}}(E,\mathbf{k}_{\parallel})\,\sqrt{|[v_{{\mathrm{\scriptscriptstyle L}},\sigma}]_{n^{\prime}}(E,\mathbf{k}_{\parallel})|}-\delta_{nn^{\prime}}.$ (8) The reflection matrix $r_{\mathrm{\scriptscriptstyle RR}}$ on the R side is obtained from Eq. (8) by replacing $\mathrm{L}\leftrightarrow\mathrm{R}$.
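Equation (7), and the Landauer conductance built from it [cf. Eq. (10)], can be sketched as follows. The mode velocities and the Green’s-function block are placeholders, and a uniform $\mathbf{k}_{\parallel}$ mesh is assumed, so that each point carries Brillouin-zone weight $1/(AN_{k})$:

```python
import numpy as np

E2_OVER_H = 3.874045865e-5   # e^2/h in siemens

def transmission_matrix(G_LR, v_L, v_R):
    """Eq. (7): t_nm = i * sqrt(|v_L[n]|) * G_LR[n, m] * sqrt(|v_R[m]|),
    with v_L, v_R the band velocities of the propagating lead modes."""
    sL = np.sqrt(np.abs(v_L))
    sR = np.sqrt(np.abs(v_R))
    return 1j * sL[:, None] * G_LR * sR[None, :]

def conductance_per_area(t_list, cell_area):
    """Landauer conductance per unit area on a uniform k_par mesh:
    G/A = (e^2/h) * (1/(A*Nk)) * sum_k sum_nm |t_nm(k)|^2,
    using int_BZ dk/(2pi)^2 -> (1/(A*Nk)) * sum_k for a cell of area A."""
    total = sum(np.sum(np.abs(t) ** 2) for t in t_list)
    return E2_OVER_H * total / (cell_area * len(t_list))
```

For a single perfectly transmitting channel ($|t|=1$) at one $\mathbf{k}_{\parallel}$-point and unit cell area, this reduces to $G/A=e^{2}/h$ per spin, as it must.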
We define the normal scattering matrix as $S=\left[\begin{array}{cc|cc}r_{{\mathrm{\scriptscriptstyle LL}},\uparrow}&0&t_{{\mathrm{\scriptscriptstyle LR}},\uparrow}&0\\ 0&r_{{\mathrm{\scriptscriptstyle LL}},\downarrow}&0&t_{{\mathrm{\scriptscriptstyle LR}},\downarrow}\\ \hline t_{{\mathrm{\scriptscriptstyle RL}},\uparrow}&0&r_{{\mathrm{\scriptscriptstyle RR}},\uparrow}&0\\ 0&t_{{\mathrm{\scriptscriptstyle RL}},\downarrow}&0&r_{{\mathrm{\scriptscriptstyle RR}},\downarrow}\end{array}\right],$ (9) where we omit the $E$- and $\mathbf{k}_{\parallel}$-dependence for brevity. The linear-response normal conductance per spin, $G_{\sigma}$, is given by $\frac{G_{\sigma}}{A}=\frac{e^{2}}{h}\int\limits_{\mathrm{\scriptscriptstyle BZ}}\frac{d\mathbf{k}_{\parallel}}{(2\pi)^{2}}\,\sum_{n,m}\bigl|[t_{{\mathrm{\scriptscriptstyle LR}},\sigma}]_{nm}(E_{\mathrm{\scriptscriptstyle F}},\mathbf{k}_{\parallel})\bigr|^{2},$ (10) evaluated at the Fermi energy, $E=E_{\mathrm{\scriptscriptstyle F}}$. The total conductance is $G=G_{\uparrow}+G_{\downarrow}$. Figure 3 shows that $G$, $G_{\uparrow}$, and $G_{\downarrow}$ are almost independent of the junction thickness, $w$. ### 3.2 Superconducting transport: scattering matrix approach Equation (9) is the normal-state scattering matrix for a metal-ferromagnet-metal (NFN) structure, taking into account reflection at both NF interfaces. To account for superconductivity, we introduce a step-like superconducting pairing potential $\Delta=3.1$ meV and use the Andreev approximation to describe electron-hole scattering processes [3, 10]. This approach combines the details of the atomic structure of Nb/Ni/Nb with superconductivity treated within the mean-field approximation. The direct contact between S and F layers, as shown in Fig. 1, would lead to an interaction between them. The back-action of the ferromagnet on the superconductor (i.e.
the inverse proximity effect) results in a spatial dependence of the pairing potential near the SF interfaces [55, 56, 57]. For a clean SFS junction model, self-consistent BdG calculations have been discussed in Refs. 58, 59, 60, 61, 62. In typical experimental systems the superconductor is disordered, so the effect of disorder on the pairing potential also needs to be considered, see, e.g., Ref. 63. Furthermore, in recent experiments the S and F layers are separated by an intermediate spacer layer, which significantly reduces this inverse proximity effect. Thus, to describe the inverse proximity effect in realistic SFS devices one would need to include both of the above ingredients in the model, as well as the inhomogeneous magnetization in the ferromagnet (discussed in the next section); this is outside the scope of this paper. For the sake of clarity, we focus here on a realistic band structure in the ferromagnet and its effect on the supercurrent in SFS structures. Henceforth, we also neglect the orbital effects of the fringe magnetic field created by the ferromagnet. Since $\Delta\ll E_{\mathrm{\scriptscriptstyle F}}$, spin-resolved Andreev reflection at the SNL and NRS interfaces is described by $r_{\mathrm{\scriptscriptstyle A}}(\phi)=\left[\begin{array}{cc|cc}0&\mathbf{1}\,e^{i\phi/2}&0&0\\ \mathbf{1}\,e^{i\phi/2}&0&0&0\\ \hline 0&0&0&\mathbf{1}\,e^{-i\phi/2}\\ 0&0&\mathbf{1}\,e^{-i\phi/2}&0\end{array}\right],$ (11) where $\phi$ is the phase difference between the left and right superconducting leads and $\mathbf{1}$ is the identity matrix. In the short junction limit, the main contribution to the supercurrent comes from Andreev bound states localized in the junction, with energies $\varepsilon_{\nu}$.
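Anticipating the quantization condition derived next: in the short-junction, energy-independent-$S$ limit, the Andreev levels follow from the eigenphases of a unitary matrix built from the normal scattering matrix and the Andreev reflection matrix. The sketch below uses our own branch and basis conventions, which may differ from those of the actual reduction in Ref. 65; it only illustrates the structure of the calculation:

```python
import numpy as np

def andreev_levels(S, rA, Delta):
    """Illustrative short-junction Andreev spectrum.

    With alpha(eps) = sqrt(1 - eps^2/Delta^2) + i*eps/Delta
                    = exp(i*arcsin(eps/Delta)),
    the condition alpha(eps) * M * Psi = Psi, where
        M = [[0, conj(rA)], [rA, 0]] @ blockdiag(S, conj(S))
    is unitary for unitary S and rA, requires alpha * exp(i*theta) = 1
    for each eigenphase theta of M, i.e. eps = -Delta * sin(theta).
    Branch choices here are illustrative, not those of Ref. 65."""
    n = S.shape[0]
    Z = np.zeros((n, n), dtype=complex)
    A = np.block([[Z, rA.conj()], [rA, Z]])
    M = A @ np.block([[S, Z], [Z, S.conj()]])
    theta = np.angle(np.linalg.eigvals(M))
    return -Delta * np.sin(theta)
```

Because $M$ is unitary and block-antidiagonal, its eigenphases come in pairs $\theta$, $\theta+\pi$, so the levels appear in $\pm\varepsilon$ pairs with $|\varepsilon|\leq\Delta$, as required of a particle-hole-symmetric subgap spectrum.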
The energy spectrum of the Andreev states can be obtained from the following equation [64], $\alpha(\varepsilon)\left[\begin{array}{cc}0&r_{\mathrm{\scriptscriptstyle A}}^{*}(\phi)\\ r_{\mathrm{\scriptscriptstyle A}}(\phi)&0\end{array}\right]\left[\begin{array}{cc}S(E_{\mathrm{\scriptscriptstyle F}}+\varepsilon,\mathbf{k}_{\parallel})&0\\ 0&S^{*}(E_{\mathrm{\scriptscriptstyle F}}-\varepsilon,\mathbf{k}_{\parallel})\end{array}\right]\Psi^{\mathrm{in}}=\Psi^{\mathrm{in}},$ (12) where $\alpha(\varepsilon)=\sqrt{1-\varepsilon^{2}/\Delta^{2}}+i\varepsilon/\Delta$. The vector $\Psi^{\mathrm{in}}=[\psi_{\mathrm{e\uparrow}}^{{\mathrm{\scriptscriptstyle L}}\rightarrow},\psi_{\mathrm{e\downarrow}}^{{\mathrm{\scriptscriptstyle L}}\rightarrow},\psi_{\mathrm{e\uparrow}}^{{\mathrm{\scriptscriptstyle R}}\leftarrow},\psi_{\mathrm{e\downarrow}}^{{\mathrm{\scriptscriptstyle R}}\leftarrow},\psi_{\mathrm{h\uparrow}}^{{\mathrm{\scriptscriptstyle L}}\rightarrow},\psi_{\mathrm{h\downarrow}}^{{\mathrm{\scriptscriptstyle L}}\rightarrow},\psi_{\mathrm{h\uparrow}}^{{\mathrm{\scriptscriptstyle R}}\leftarrow},\psi_{\mathrm{h\downarrow}}^{{\mathrm{\scriptscriptstyle R}}\leftarrow}]^{\mathrm{T}}$ corresponds to the electron- and hole-like (e/h) waves in the NL and NR regions incident on the F region from the left ($\rightarrow$) and from the right ($\leftarrow$). Simulations show that $S$ is weakly dependent on $E$ in the range $[E_{\mathrm{\scriptscriptstyle F}}-\Delta,E_{\mathrm{\scriptscriptstyle F}}+\Delta]$. Therefore, we expand $S(E,\mathbf{k}_{\parallel})$ in $E-E_{\mathrm{\scriptscriptstyle F}}$ and keep only the leading term, i.e. $S(E,\mathbf{k}_{\parallel})\approx S(E_{\mathrm{\scriptscriptstyle F}},\mathbf{k}_{\parallel})=S(\mathbf{k}_{\parallel})$. Using this approximation, one can simplify the quantization condition (12) and reduce it to a matrix eigenvalue problem (see details in Ref.
65). This approach allows one to reliably calculate the Andreev bound state spectrum, $\varepsilon_{\nu}(\phi,\mathbf{k}_{\parallel})$. The zero-temperature supercurrent, $J$, through the junction is given by Eq. (3). Figures 3–3 show $J$ as a function of the phase difference, $\phi$. Figure 3 shows the first Fourier harmonic of the supercurrent as a function of the junction thickness, $w$. ## 4 Results It is illuminating to compare our numerical simulations of the supercurrent with experimental measurements on quasi-ballistic MJJs. As previously discussed, we believe that Nb/Ni/Nb junctions represent a good model system for which experimental data is readily available [16, 22, 23]. The best-performing stacks consist of Nb(110)/Cu/Ni(111)/Cu/Nb(110). Cu spacer layers seem to be essential for a strong supercurrent, likely because they prevent intermixing of the Ni and Nb. Our model junction simulates this geometry via supercells in the plane normal to the stack to account for the lattice mismatch (see Fig. 1 and Appendix B), though we do not include the Cu layers. We anticipate that Cu spacers will mainly affect the transmission matrix elements rather than the dependence of the supercurrent on ferromagnet thickness, which is the main focus of this work. Furthermore, as discussed before, the Cu spacers will suppress the direct interaction between the ferromagnet and the superconductor and reduce the inverse proximity effect, justifying the Andreev approximation for the boundary conditions, see Eq. (11). Therefore, we consider only the simplified Nb/Ni/Nb stack and vary the number of layers (atomic planes) of Ni. Additionally, we consider the effect of a different crystallographic orientation of the Ni planes and investigate Nb/Ni(110)/Nb junctions in Appendix C. To make a Nb/Ni superlattice, the unit cells of the Nb and Ni regions in the plane normal to the interface must be coincident.
This is complicated by the severe lattice constant mismatch, and also by the incompatibility of the (110) and (111) atomic planes. It is necessary to construct superlattices with the Nb(110) and Ni(111) planes both rotated to be normal to the $z$ axis, and with their in-plane lattice vectors coincident. A supercell with nearly coincident vectors was found (see Appendix B for details). By applying a small shear strain to the Ni, the lattice vectors are made exactly coincident. Figure 5 shows the Nb(110) surface supercell and the Ni(111) surface supercell with equal lattice vectors used to match the Nb/Ni interfaces. Each atomic plane of Ni(111) contains 14 atoms and each atomic plane of Nb(110) consists of 10 atoms. The atomic structure of Nb(110)/Ni(111)/Nb(110) for 5 layers (atomic planes) of Ni is shown in Fig. 1. Figure 5: (a) Top view of the Nb(110) and Ni(111) surface supercells used to build the Nb/Ni interfaces and the Nb(110)/Ni(111)/Nb(110) stacks shown in Fig. 1. The surface supercell is defined by two 2D vectors $\mathbf{a}_{1}=[10.8,0]$ Å and $\mathbf{a}_{2}=[6.03,7.11]$ Å with periodic boundary conditions in the 2D plane. The corresponding reciprocal space defines the 2D $\mathbf{k}_{\parallel}$ vectors used in the calculations. Each atomic plane of Ni (Nb) contains 14 (10) atoms of Ni (Nb). (b) Magnetic moment profile of Nb(110)/Ni(111)/Nb(110) junctions for different thicknesses of Ni (from 3 to 9 layers). The value of the moment is an average over the moments of the 14 Ni atoms in each atomic plane. Note the magnetic dead layer at the Nb/Ni interfaces and that all moments vanish for the shortest junction made of 3 layers of Ni. Next, we performed self-consistent DFT calculations within the local density approximation (LDA) in order to obtain the relaxed structure and the corresponding electronic structure. For the smallest structures we performed a constrained optimization.
Only the atoms in the planes closest to the Nb/Ni interfaces are allowed to relax, to facilitate stacking of arbitrarily large cells. The Nb/Ni interplanar spacing has also been optimized to minimize the total energy, see Appendix E for more details. Once the structure is determined, one can determine the normal-state thermodynamic and transport properties of the junction, e.g., calculate the magnetization profile and the spin-resolved conductance through the junction as a function of Ni thickness. For transport calculations, we use a layer transport technique [48] which employs the atomic spheres approximation (ASA). Careful checks were made of the ASA band structures of elemental Nb and Ni, and also of the superlattices, to confirm that they are very similar to the full-potential LMTO DFT-LDA ones. We find that the magnetic properties of the Ni atoms are sensitive to their local environment, indicative of itinerant ferromagnetism. As shown in Fig. 5, the magnetization profile is non-uniform in the junction, with the averaged magnetic moments per atom suppressed near the Nb interface. For thicknesses larger than 4 layers, one recovers the bulk value of $\sim 0.6\mu_{\mathrm{\scriptscriptstyle B}}$ in the middle layers, away from the Nb/Ni interfaces. The averaged moment drops towards the edges and is considerably reduced, down to $\sim 0.1\mu_{\mathrm{\scriptscriptstyle B}}$, at the interface with Nb. The strong reduction of magnetism is exemplified by the short junction with 3 layers of Ni, where the moments on the Ni atoms have completely vanished. Such a non-uniform magnetic moment profile in Nb/Ni/Nb junctions affects the superconducting properties of the SFS junctions in a non-trivial way. For example, Nb/Ni/Nb junctions thinner than 4 layers of Ni behave essentially as SNS junctions.
It is well known that the LDA tends to overestimate local moments $M$ in itinerant magnets [66], because spin fluctuations reduce the average moment [67], and to underestimate $M$ when local moments are very large [46]. For Ni, the LDA yields $M$ in good agreement with experiment, but this is likely an artifact of an accidental cancellation of errors. Most important for transport is the exchange splitting $V_{\mathrm{ex}}$, which the LDA predicts to be 0.6 eV, about twice the experimental value of 0.3 eV [68]. It is possible to reproduce both $M$ and $V_{\mathrm{ex}}$ at the same time, but a high-level theory, potentially including spin-orbit coupling, is needed to surmount both kinds of errors inherent in the LDA [46, 69]. The high cost and poor scaling of such a theory make it impractical for these junctions, so we elect to stay within the LDA and scale the self-consistently calculated $V_{\mathrm{ex}}$. This was the approach Karlsson and Aryasetiawan used to calculate the spin wave spectra of Ni [70]. Scaling of $V_{\mathrm{ex}}$ can be accomplished using different approaches, e.g., by adding an effective magnetic field to simulate the effect of spin fluctuations. Since Ni is a simple case with a nearly linear relation between $M$ and $V_{\mathrm{ex}}$, the band structure hardly depends on the details of how the LDA potential is modified. Here we first perform fully self-consistent calculations. Then, to construct the potential for transport properties, we rescale the spin component of the density by a constant factor, which we denote as $M/M_{0}$. This enables parametric studies of transport as a function of $V_{\mathrm{ex}}$. $M/M_{0}=0.5$ yields the observed $V_{\mathrm{ex}}=0.3$ eV, and we use this scaling unless stated otherwise. The conductance per unit area, $G/A$, is shown in Fig. 3. It is weakly dependent on the thickness, $w$, of the magnetic layer, as expected for a metallic system in the absence of disorder.
Figure 6: Comparison of the critical current density, $J_{\mathrm{c}}=\max_{\phi}|J(\phi)|$ (blue squares), the absolute value of the first Fourier component of the supercurrent, $|J_{1}|$ [see Eq. (13), red circles], and its fit $|J_{1}^{\mathrm{fit}}|$ [Eq. (14), black curve]. All these quantities are ‘normalized’ by the normal-state conductance, $G$. We now focus on superconducting properties. The dependence of the supercurrent on the phase difference $\phi$ for 5, 8, and 11 Ni layers is shown in Figs. 3–3. One can represent the current-phase relation $J(\phi)$ as a Fourier series, $J(\phi)=\sum\limits_{n\geqslant 1}J_{n}\sin(n\phi).$ (13) In the $0$-junction mode, the first term in the Fourier series dominates, with $J_{1}>0$ [see solid black line in Fig. 3]. In the $\pi$-junction case [Fig. 3], the supercurrent is also mostly defined by the first harmonic but with $J_{1}<0$. Close to the $0$-$\pi$ transition $J_{1}$ dies out, so that the behavior is governed by higher Fourier harmonics [71]; e.g., for 8 Ni layers the supercurrent is dominated by the second harmonic, $J_{2}$, shown by the dashed black line in Fig. 3. Figure 6 shows the critical current density, $J_{\mathrm{c}}=\max_{\phi}|J(\phi)|$, normalized by the normal-state conductance, $eJ_{\mathrm{c}}A/G\Delta$, as a function of $w$ (blue squares). Far from the $0$-$\pi$ transitions, the critical current coincides with the absolute value of the first Fourier harmonic, $e|J_{1}|A/G\Delta$ (red circles). Since $J_{1}$ carries the sign of the current and has better numerical stability than $J_{\mathrm{c}}$, we use this quantity for the analysis. We exclude very thin junctions (3 Ni layers or less) from the analysis, since the magnetic properties are suppressed there. (For the first data point in Fig. 6, corresponding to 3 layers of Ni, the magnetization is completely suppressed [see Fig. 5] and the ratio $eJ_{\mathrm{c}}A/G\Delta\approx 2.2$ is significantly higher than for thicker Ni regions with nonzero magnetic moments.)
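The harmonics $J_{n}$ of Eq. (13) can be extracted from $J(\phi)$ sampled on a uniform phase grid by a discrete sine projection:

```python
import numpy as np

def sine_harmonics(phi, J, n_max=3):
    """Fourier sine coefficients of Eq. (13).

    For J(phi) = sum_n J_n sin(n*phi) sampled on a uniform grid over
    [0, 2*pi), orthogonality gives J_n = (1/pi) * sum_k J_k sin(n*phi_k) * dphi,
    which is exact on a uniform grid for n below the Nyquist limit."""
    dphi = phi[1] - phi[0]
    return np.array([np.sum(J * np.sin(n * phi)) * dphi / np.pi
                     for n in range(1, n_max + 1)])
```

For instance, a current-phase relation $J(\phi)=2\sin\phi-0.5\sin 2\phi$ sampled on 64 points returns harmonics $(2,-0.5,0)$, distinguishing a first-harmonic-dominated junction from one near the $0$-$\pi$ transition.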
We compare this value with the result for a short disordered SNS junction: combining the analytical energy spectrum [64] with the Dorokhov distribution of channel transmissions [72, 73] leads to a ratio of 2.1 (horizontal dashed line in Fig. 6). We attribute this difference to the fact that there is no interfacial disorder in our model. The dependence of $J_{1}$ on $w$ can be fit by the following expression, $J_{1}^{\mathrm{fit}}(w)=\Theta_{J}\exp(-w/\xi_{J})\cos\bigl[\pi(w+\delta_{J})/\lambda_{J}\bigr],$ (14) where $\Theta_{J}=3.20\,\mathrm{A}/\mu\mathrm{m}^{2}$, $\xi_{J}=41.1$ Å, $\lambda_{J}=23.2$ Å, and $\delta_{J}=-2.75$ Å are fitting parameters. We interpret $\xi_{J}$ as a decay length, $\lambda_{J}$ as the ‘half-period’ of the oscillation of $J_{1}$ as a function of $w$, and $\delta_{J}$ as a measure of the suppressed magnetization in the Ni layers near the Ni/Nb boundaries. $J_{1}^{\mathrm{fit}}(w)$ accurately fits the discrete points $J_{1}$, as shown in Figs. 3, 6, 7, and 10 by the black solid lines and black circles, respectively. Figure 7: First Fourier harmonic, $J_{1}$, of the supercurrent as a function of junction thickness, $w$. Full black circles correspond to $J_{1}$ calculated with 290 $\mathbf{k}_{\parallel}$-points; empty circles (mostly superposed onto the full black circles) correspond to 4142 $\mathbf{k}_{\parallel}$-points. The solid black line is $J_{1}^{\mathrm{fit}}(w)$ [the $J_{1}$ fit given by Eq. (14)]. Green semitransparent dashes show the individual-$\mathbf{k}_{\parallel}$ contributions [Eq. (15b)]. The vertical ‘errorbars’ correspond to the standard deviation of $j_{1}(\mathbf{k}_{\parallel})$ with respect to $J_{1}$. The standard deviation is the same for both sets of $\mathbf{k}_{\parallel}$-points, indicating that these results are independent of the chosen discretization. Figure 8: Colorplot of $j_{1}(\mathbf{k}_{\parallel})$ with $\mathbf{k}_{\parallel}=(k_{x},k_{y})$.
Each panel corresponds to a local extremum of $J_{1}^{\mathrm{fit}}(w)$ shown in Fig. 7. For 4 and 13 layers, all the $\mathbf{k}_{\parallel}$ channels contribute with the same sign. For 48 layers and larger, different $\mathbf{k}_{\parallel}$ channels lose synchronization and contribute to the total supercurrent with different signs. In order to gain insight into the evolution of $J_{1}$ with $w$, let us resolve the contributions from different $\mathbf{k}_{\parallel}$. For this, we rewrite Eq. (3) as $J(\phi)=A\int\limits_{\mathrm{\scriptscriptstyle BZ}}\frac{d\mathbf{k}_{\parallel}}{(2\pi)^{2}}\,j(\phi,\mathbf{k}_{\parallel}),$ (15a) $j(\phi,\mathbf{k}_{\parallel})=-\frac{e}{\hbar}\,\frac{1}{A}\,\sum_{\nu>0}\frac{\partial\varepsilon_{\nu}(\phi,\mathbf{k}_{\parallel})}{\partial\phi}.$ (15b) Here $A=76.75$ Å$^{2}$ is the area of the surface supercell shown in Fig. 5. Similar to Eq. (13), we denote the first $\phi$-harmonic of $j(\phi,\mathbf{k}_{\parallel})$ as $j_{1}(\mathbf{k}_{\parallel})$. The evolution of the first Fourier harmonic of the supercurrent, $J_{1}$, and of $j_{1}(\mathbf{k}_{\parallel})$ as a function of $w$ is shown in Fig. 7. Calculations were performed for two different sets of $\mathbf{k}_{\parallel}$, with 290 discrete $\mathbf{k}_{\parallel}$-points (full black circles) and 4142 $\mathbf{k}_{\parallel}$-points (empty black circles). One can see that both sets give the same result for $J_{1}$, establishing that the $\mathbf{k}_{\parallel}$ integration is well converged. In Fig. 7, ‘errorbars’ denote the standard deviation of $j_{1}(\mathbf{k}_{\parallel})$ with respect to the $\mathbf{k}_{\parallel}$-averaged value, $J_{1}$. Individual $j_{1}(\mathbf{k}_{\parallel})$ are shown by semi-transparent horizontal dashes. The important observation is that while $J_{1}$ decays with $w$, the dispersion in $j_{1}(\mathbf{k}_{\parallel})$ does not change significantly.
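The fit of Eq. (14) is a standard nonlinear least-squares problem; a sketch using synthetic data generated from the fitted parameters quoted in the text (in practice one would fit the computed $J_{1}$ points instead):

```python
import numpy as np
from scipy.optimize import curve_fit

def j1_model(w, theta, xi, lam, delta):
    """Decaying oscillation of Eq. (14):
    J1(w) = theta * exp(-w/xi) * cos(pi*(w + delta)/lam)."""
    return theta * np.exp(-w / xi) * np.cos(np.pi * (w + delta) / lam)

# Synthetic J1(w) built from the parameters quoted after Eq. (14):
# Theta_J = 3.20 A/um^2, xi_J = 41.1 A, lambda_J = 23.2 A, delta_J = -2.75 A.
w = np.linspace(5.0, 120.0, 60)             # thicknesses in angstroms
true_params = (3.20, 41.1, 23.2, -2.75)
j1_data = j1_model(w, *true_params)

# Initial guesses must be reasonably close, since delta is only defined
# modulo 2*lambda and the least-squares landscape is multi-valleyed.
popt, _ = curve_fit(j1_model, w, j1_data, p0=(3.0, 40.0, 23.0, -3.0))
```

On noiseless data with a nearby initial guess the optimizer recovers the generating parameters, which is a useful sanity check before fitting the individual $j_{1}(\mathbf{k}_{\parallel})$ channels.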
Figure 8 shows colorplots of $j_{1}(\mathbf{k}_{\parallel})$ corresponding to the local extrema of $J_{1}^{\mathrm{fit}}(w)$ (4, 13, 25, 36, 48, and 60 layers, labeled in Fig. 7). For small $w$, most of the $\mathbf{k}_{\parallel}$ contributions to $J_{1}$ have the same sign, i.e. positive in the $0$-junction regime and negative in the $\pi$-junction regime. In this regime the decay is predominantly due to evanescent modes decaying into the junction. For larger $w$, the dephasing mechanism becomes important, since the spread of phase offsets grows with $w$. One can observe the appearance of contributions of the opposite sign for $w\gtrsim 50$ Å. This dephasing mechanism is mainly due to the variation of the Fermi velocity with $\mathbf{k}_{\parallel}$, and becomes more important with increasing $w$. In order to study the distribution of the phase offsets and decay exponents for different modes, we fit the individual $j_{1}(\mathbf{k}_{\parallel})$ using an expression analogous to Eq. (14). The set of resulting fitted curves $j_{1}^{\mathrm{fit}}(w,\mathbf{k}_{\parallel})$ for 4142 $\mathbf{k}_{\parallel}$-points is shown by the green semitransparent curves in Fig. 3. Here, to minimize the numerical ‘noise,’ the $j_{1}(w,\mathbf{k}_{\parallel})$ curves are smoothed over the 2D $\mathbf{k}_{\parallel}$ space using a Gaussian filter with $\sigma_{\mathbf{k}_{\parallel}}=0.01$ Å$^{-1}$, which is of the order of the Fermi wave vector in Nb. Thus, each data point $j_{1}(w,\mathbf{k}_{\parallel})$ approximately corresponds to a transverse conducting channel. This fitting procedure works reasonably well; e.g., the relationship in Eq. (15a) holds if one replaces $J(\phi)$ by $J_{1}^{\mathrm{fit}}(w)$ and $j(\phi,\mathbf{k}_{\parallel})$ by $j_{1}^{\mathrm{fit}}(w,\mathbf{k}_{\parallel})$ for $w\gtrsim 10$ Å. The distribution of the fitting parameters for $j_{1}^{\mathrm{fit}}(w,\mathbf{k}_{\parallel})$ is shown in Fig. 9.
Histograms of the decay lengths and half-periods reveal a complicated picture, describing different contributions to the supercurrent in real materials. First of all, in Fig. 9 one can see that the distribution of the half-periods, $\lambda_{j}$, is close to Gaussian, with a mean value $\langle\lambda_{j}\rangle=23.2$ Å and standard deviation $2.8$ Å. The mean value $\langle\lambda_{j}\rangle$ is very close to $\lambda_{J}$ [see text after Eq. (14)], while the spread in $\lambda_{j}$ leads to dephasing and is responsible for the exponential decay of $J_{1}$ at large $w$. Indeed, it is well known that the average of an oscillatory function with respect to a random fluctuating phase (described by a Gaussian distribution) results in an exponentially decaying function. In addition to the dephasing mechanism, the decay of the supercurrent originates from the evanescent modes. The histogram of the decay lengths, $\xi_{j}$, is shown in Fig. 9. Here small $\xi_{j}$ corresponds to fast-decaying $j_{1}^{\mathrm{fit}}(w,\mathbf{k}_{\parallel})$, while large $\xi_{j}$ corresponds to non-decaying modes (i.e. modes with decay exponents larger than the junction thickness). The right-skewed distribution of the decay exponents has a mean value of $\langle\xi_{j}\rangle=108$ Å, which is much larger than $\xi_{J}$ in Eq. (14). The shoulder at small $\xi_{j}$ presumably corresponds to the evanescent-mode decay comprising $d$ bands 2 and 3, see Table 1, whereas the tail at large $\xi_{j}$ originates predominantly from band 6. Overall, one can see that a fit with a single decay exponent, as in Eq. (14), is quite oversimplified for the Nb/Ni/Nb junction considered here. Figure 9: Analysis of the fitting parameters of $j_{1}^{\mathrm{fit}}(w)$ for individual $\mathbf{k}_{\parallel}$, shown in Fig. 3 by semitransparent green lines. Distributions of (a) the half-periods, $\lambda_{j}$, and (b) the decay lengths, $\xi_{j}$.
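The dephasing mechanism can be illustrated directly: averaging $\cos(\pi w/\lambda_{j})$ over a Gaussian spread of half-periods, with the mean and standard deviation quoted above, suppresses the net oscillation amplitude as $w$ grows, because the accumulated phase spread scales linearly with $w$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Half-periods drawn from the distribution quoted in the text:
# mean 23.2 angstrom, standard deviation 2.8 angstrom.
lam = rng.normal(23.2, 2.8, size=200_000)

def averaged_oscillation(w):
    """Channel-averaged oscillation <cos(pi*w/lam_j)> at thickness w."""
    return np.mean(np.cos(np.pi * w / lam))

# Envelope after one vs. five mean half-periods: the spread of
# accumulated phases grows with w, shrinking the averaged amplitude.
a1 = abs(averaged_oscillation(23.2))
a5 = abs(averaged_oscillation(5 * 23.2))
```

This toy average captures only the dephasing contribution; in the junction it acts on top of the evanescent-mode decay discussed above.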
We now turn to the discussion of the effect of the exchange splitting energy on the supercurrent in MJJs. So far we have used $M/M_{0}=0.5$, which yields the experimentally observed $V_{\mathrm{ex}}=0.3$ eV. It is interesting to investigate how a ferromagnet with a different $V_{\mathrm{ex}}$ (but otherwise the same band structure as Ni) would affect the $w$-dependence of $J_{1}$. In Fig. 10 we show the results for parametric variations of $M/M_{0}$. One can see in Fig. 10 that the half-period, $\lambda_{J}$, and the decay length, $\xi_{J}$, strongly depend on $M$. Here black points correspond to $M/M_{0}=0.5$, and the self-consistent calculations with no rescaling correspond to $M/M_{0}=1$. In order to understand how $\lambda_{J}$ and $\xi_{J}$ depend on $M$, we perform the fitting procedure of Eq. (14) for different magnetic moments and plot, in Fig. 10, the supercurrent density as a function of the rescaled thickness, $(w+\delta_{J})/2{\tilde{\lambda}}_{J}$ with ${\tilde{\lambda}}_{J}=(M_{0}/M)\,11.6$ Å. Remarkably, $J_{1}$ as a function of the rescaled thickness collapses onto the same universal curve. The inset demonstrates that the fitted half-period, $\lambda_{J}$, is proportional to $1/M$, showing that the oscillation period scales linearly with the inverse of $V_{\mathrm{ex}}$ in this parameter range. (Deviations from the linear regime become significant for $M/M_{0}\gtrsim 1.7$; for clarity, we do not show the results for $M/M_{0}>1$ in Fig. 10.) Figure 10: $J_{1}$ for different rescalings of the magnetic moments, $M/M_{0}$, as a function of (a) the thickness, $w$, and (b) the rescaled thickness, $(w+\delta_{J})/2{\tilde{\lambda}}_{J}$, where $\delta_{J}$ is the fitting parameter in Eq. (14) and ${\tilde{\lambda}}_{J}=(M_{0}/M)\,11.6$ Å. $J_{1}$ values are shown by circles; the corresponding fits $J_{1}^{\mathrm{fit}}(w)$ are shown by lines of the same color. (Inset) Half-period, $\lambda_{J}$, fitted using Eq. (14), versus ${\tilde{\lambda}}_{J}$.
We now discuss the effect of crystal orientation in Nb/Ni/Nb junctions. We have performed calculations for Nb/Ni/Nb junctions built by stacking the Ni atomic planes in the (110) orientation instead of the (111) orientation. The results are given in Appendix C. Qualitatively, the same physics holds for stacks built from both (111) and (110) Ni planes. However, our calculations show that the actual values of the oscillation period and of the current decay depend crucially on the details of the electronic structure of the junctions, such as the relative crystal orientation. Finally, we considered the effect of spin-orbit coupling (SOC) in SFS junctions. Spin-orbit coupling leads to mixing of the minority and majority channels and may change the current-phase relationship. The interplay between Zeeman splitting and spin-orbit coupling has been discussed in Ref. 45; the regime of interest here is the Zeeman-field-dominated regime considered there. Indeed, we find that the SOC in Ni is much smaller than the exchange splitting because of the low atomic number of Ni. As we show in Appendix D, the SOC in the Ni-based MJJs considered here does not change the qualitative picture described above, but rather leads to small quantitative changes in the Josephson current. ## 5 Conclusion In this paper we identified two generic mechanisms for the decay of the supercurrent with junction thickness: (i) exchange-splitting-induced gap opening for minority or majority carriers, and (ii) dephasing between different modes due to the significant quasiparticle velocity dispersion with the transverse momentum. It was previously believed that disorder in the ferromagnet is mainly responsible for the supercurrent decay in SFS junctions. In the present work we have shown that band structure effects also contribute to the critical current suppression, and thus provide an upper bound for the supercurrent in an ideal (i.e. disorder-free) structure.
We found that the Nb/Ni/Nb junction is a suitable system for comparison with the simulations because of the long mean free path in Ni relative to the junction thickness and the quasi-ballistic nature of quasiparticle propagation in the ferromagnet. We have found good agreement with published experimental data for the half-period of the critical current oscillations: $\lambda_{J}\approx 23$ Å [see Eq. (14) and text after it] versus $\approx 26$ Å in experiment, Ref. 22. We have also found that the critical current decays exponentially with the ferromagnet thickness $w$. This is to be contrasted with the algebraic decay previously assumed on the basis of results for clean SFS junctions with a simple parabolic-like band structure. We believe that in the measured Nb/Ni/Nb junctions with $w\lesssim 50$ Å, mechanism (i) is likely responsible for the supercurrent decay. This finding is crucial for material and geometry optimization of MJJs and superconducting magnetic spin valves. Understanding the interplay of band structure effects and disorder in MJJs is an interesting open problem. We believe that interfacial disorder due to, for example, surface roughness will mix different $\mathbf{k}_{\parallel}$ modes and lead to a larger spread of half-periods. This, in turn, will further enhance the dephasing mechanism (ii) of the supercurrent decay discussed here. Strong disorder in the bulk (i.e. a mean free path much smaller than the junction thickness $w$) would lead to diffusive motion of quasiparticles in the ferromagnet, which is a significant departure from the quasi-ballistic junction limit considered here. We think that bulk disorder would induce even more dephasing between different modes, because the phase offsets in this case will depend on the different random trajectories of minority and majority carriers. We therefore believe that bulk disorder will lead to an even stronger decay of the supercurrent with junction thickness, $w$. ###### Acknowledgements.
HN is much indebted to Dimitar Pashov for stimulating discussions about developing the Questaal package. The authors express their gratitude to Mason Thomas for organizational help and discussions at the early stages of the project. The authors acknowledge stimulating discussions with Norman Birge, Anna Herr, Tom Ambrose, Nick Rizzo, and Don Miller. This work is based on support by the U.S. Department of Energy, Office of Science through the Quantum Science Center (QSC), a National Quantum Information Science Research Center. HN and MvS acknowledge financial support from Microsoft Station Q via a sponsor agreement between KCL and Microsoft Quantum. In the late stages of this work MvS was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award # FWP ERW7906. ## Appendix A Mean free path estimate In this section we provide an estimate of the mean free path $l_{\mathrm{\scriptscriptstyle MFP}}$ in the Ni ferromagnet. Our approach is similar to that of Ref. 44. We use the Kubo formula for the conductivity, $\sigma_{xx}=e^{2}\sum_{n,\sigma}\tau_{n\sigma}\,\langle v^{2}_{n\sigma}\rangle\,\rho_{n}(E_{\mathrm{\scriptscriptstyle F}}),$ where the index $n$ labels the Ni bands, $\sigma$ is the spin projection, and $\tau_{n\sigma}$ is the scattering time in each band, assumed to be momentum independent. The mean square velocity, $\langle v^{2}_{n\sigma}\rangle$, and the density of states at the Fermi level, $\rho_{n}(E_{\mathrm{\scriptscriptstyle F}})$, are obtained from our ab initio model calculations and are provided in Table 1. We use the available experimental data [74, 75] for thick Ni samples, $w\gtrsim 50$ Å: the low-temperature resistivity $1/(\sigma_{xx}^{\uparrow}+\sigma_{xx}^{\downarrow})\approx 33\,\mathrm{n}\Omega{\cdot}\mathrm{m}$, and the bulk spin scattering asymmetry, $\beta_{\mathrm{\scriptscriptstyle F}}=(\sigma_{xx}^{\uparrow}-\sigma_{xx}^{\downarrow})/(\sigma_{xx}^{\uparrow}+\sigma_{xx}^{\downarrow})=0.14$.
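The spin decomposition of the measured conductivity, and the inversion of the single-band Kubo form for the scattering time and mean free path, can be sketched as follows. The band-structure inputs of Table 1 are not reproduced here, so `mean_free_path` takes them as arguments:

```python
# Spin-resolved conductivities from the experimental inputs quoted above:
# resistivity rho = 1/(sigma_up + sigma_dn) and asymmetry
# beta_F = (sigma_up - sigma_dn)/(sigma_up + sigma_dn).
rho = 33e-9                 # Ohm*m, low-temperature resistivity of thick Ni
beta_F = 0.14
sigma_tot = 1.0 / rho
sigma_up = 0.5 * sigma_tot * (1.0 + beta_F)
sigma_dn = 0.5 * sigma_tot * (1.0 - beta_F)

def mean_free_path(sigma, v2, dos, v):
    """Invert the single-band Kubo form sigma = e^2 * tau * <v^2> * rho(E_F):
    tau = sigma / (e^2 * <v^2> * rho(E_F)),  l_MFP = <v> * tau.
    Here v2 = <v^2> (m^2/s^2), dos = rho(E_F) (states/(J*m^3)), v = <v> (m/s)
    are band inputs (Table 1 in the paper), all in SI units."""
    e = 1.602176634e-19     # elementary charge, C
    tau = sigma / (e**2 * v2 * dos)
    return v * tau
```

With representative (hypothetical, not Table 1) band values this yields mean free paths of tens of angstroms, consistent with the estimates quoted below.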
Only a single $6^{\uparrow}$ band of the majority spin is occupied. Thus, one can readily estimate the mean free path for the majority carriers: $l_{\mathrm{\scriptscriptstyle MFP}}^{\uparrow}=\langle v_{\uparrow}\rangle\tau_{\uparrow}\approx 60$ Å. For the minority channel we assume that the conductance at large thicknesses is dominated by the most mobile band, i.e. $6^{\downarrow}$ [76]. This gives an estimate for the mean free path $l_{\mathrm{\scriptscriptstyle MFP}}^{\downarrow}\approx 61$ Å, similar to that of its exchange-split partner $6^{\uparrow}$. These estimates are consistent with the available ARPES [77] and computational data [44]. As mentioned in the main text, the mean free path for the $6^{\uparrow}$ and $6^{\downarrow}$ carriers exceeds the typical junction thicknesses measured experimentally.

## Appendix B Nb(110)/Ni(111)/Nb(110) junctions

Here we describe how coincident site lattices are constructed for the Nb(110)/Ni(111) interface. Supercells of Ni and Nb are constructed separately in the following way to make a coincident site lattice. The primitive unit cells (fcc in the Ni case, with a 0 K lattice constant of 3.515 Å, and bcc in the Nb case, with a 0 K lattice constant of 3.295 Å) are rotated. For Ni, the rotation is compactly described in terms of three Euler angles: rotation about $z$ by $\pi/4$, rotation about $x^{\prime}$ by $\arccos(\sqrt{1/3})$, and rotation about $z^{\prime\prime}$ by $\arccos(4/\sqrt{19})$. Axes $xyz$ are shown in Fig. 1; $x^{\prime}y^{\prime}z^{\prime}$ and $x^{\prime\prime}y^{\prime\prime}z^{\prime\prime}$ are the axes after the first and second rotation, respectively. After the first two rotations the $[1\bar{1}1]$ axis becomes the new $z^{\prime\prime}$ axis. The last rotation is needed to make it approximately coincident with Nb. The Nb is rotated about $(1,-1,0)$ by $-\pi/2$, about $z^{\prime}$ by $\pi/4$, and about $z^{\prime}$ again by $\arccos(5/\sqrt{43})$.
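The quoted Euler angles for Ni can be checked numerically. In the sketch below the sign convention for the rotations is our assumption (the text quotes only the magnitudes of the angles); it verifies that the composite rotation brings the Ni $[1\bar{1}1]$ axis onto $z$, and that the final rotation about $z^{\prime\prime}$ leaves this alignment intact.

```python
import numpy as np

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

alpha = np.pi / 4                        # rotation about z
beta = np.arccos(np.sqrt(1.0 / 3.0))     # rotation about x'
gamma = np.arccos(4.0 / np.sqrt(19.0))   # rotation about z''

# Intrinsic z-x'-z'' sequence realized as a product of fixed-axis matrices;
# the overall sign is our assumed convention (the crystal rotates, not the axes).
R = Rz(-gamma) @ Rx(-beta) @ Rz(-alpha)

v = np.array([1.0, -1.0, 1.0]) / np.sqrt(3.0)  # Ni [1,-1,1] direction
print(np.round(R @ v, 12))  # the [1,-1,1] axis lands on the z axis
```

The last rotation, about $z^{\prime\prime}$, cannot change the $z$ alignment, which is why only the first two angles matter for this check.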
After the rotations, both the Ni $(1\bar{1}1)$ and Nb $(\bar{1}\bar{1}0)$ planes are normal to $z$, which is the propagation direction. Next, superlattices must be constructed. They are generated by scaling the primitive lattice vectors by the following integer multiples, for Ni and Nb respectively:

$\left[\begin{array}{rrr}3&0&2\\ -1&0&4\\ 1&1&1\end{array}\right],\qquad\left[\begin{array}{rrr}-1&-4&2\\ 2&-2&0\\ 1&1&2\end{array}\right].$

Figure 11: (a) Top view of the Nb(110)/Ni(110) interface supercell. Ni (Nb) atoms are shown in light blue (grey). A $3{\times}3$ repetition of the surface supercell is shown. The corresponding Nb(110) and Ni(110) cells contain 2 atoms each. (b) Transverse view of the Nb(110)/Ni(110)/Nb(110) junction with 5 layers of Ni. (c) $J_{1}$ component for the Nb(110)/Ni(110)/Nb(110) and the Nb(110)/Ni(111)/Nb(110) junctions versus the thickness $w$. Both currents have a decaying oscillatory behavior, with a half-period of oscillation of $\lambda_{J}\approx 30$ Å for the Ni(110) case and of $\lambda_{J}\approx 23$ Å for the Ni(111) case.

The Ni supercell contains 14 atoms, all in a single plane perpendicular to $z$; the Nb supercell consists of 2 planes with 10 atoms per plane [Fig. 5]. The lattice vectors transverse to $z$ are nearly coincident, but not identically so. To render them coincident, we opt to shear the Ni by the following linear transformation:

$\left[\begin{array}{cc}0.997&0\\ 0.050&1.028\end{array}\right].$

This shear is a measure of the remaining mismatch of the undistorted Ni and Nb lattices. It is close enough to unity to have a minor effect on the Ni band structure. Along the stacking axis $z$, the Nb planes form an ABAB$\dots$ stacking pattern, while the Ni planes are stacked ABCABC$\dots$ Ni/Nb interfaces are formed by stacking varying numbers of Ni planes on a Nb substrate and relaxing the supercells to minimize the total energy.
This was done for a few small superlattices with 3, 4, and 5 Ni planes, restricting the relaxation to the two Ni and two Nb frontier planes. In this way we can build up structures with arbitrarily many Ni planes, sandwiching unrelaxed planes between frontier planes. Thus the entire set of structures has three families of interfaces: those with $3n$, $3n+1$, or $3n+2$ Ni planes. Figure 3 shows that the normal conductance, $G$, experiences $\sim 1.3\%$ deviations from its mean $\langle G\rangle$ following the pattern of these three families. Overall, both the normal conductance and the supercurrent (see Fig. 7) are reasonably smooth functions of the thickness $w$, which suggests that the discreteness of the lattice and the details of the lattice relaxation play a minor role.

## Appendix C Nb(110)/Ni(110)/Nb(110) junctions

In this section, we present results for Nb/Ni/Nb junctions built by stacking the Ni atomic planes with the (110) orientation instead of the (111) orientation. Although the (110) plane orientation is not the most stable configuration for the Nb/Ni interface and most experimental studies focus on the (111) orientation, it is interesting to ascertain whether the crystallographic orientation has an effect on the supercurrent. Indeed, the supercurrent decay and the half-period of the oscillations with $w$ do depend on the crystal orientation. We study the Nb(110)/Ni(110)/Nb(110) trilayer shown in Fig. 11. The inter-plane distance between the Nb and Ni planes at the interfaces is taken to be the average of the Nb and Ni inter-plane distances. For the sake of simplicity, we do not perform atomic relaxations for this system. The lattice parameter of Nb (3.295 Å) has been increased to match the lattice parameter of Ni (3.515 Å) to simplify the construction of the periodic supercell. This has a slight effect on the Nb band structure.
Calculations of the trilayers were performed self-consistently, and for transport we reconstruct the potential by rescaling the magnetic moments by 0.5, as in the Ni(111) case. The $0$-$\pi$ and $\pi$-$0$ transitions occur around $w\approx 11$ Å and 41 Å [shown in red in Fig. 11], yielding a half-period for the oscillations of $\lambda_{J}\approx 30$ Å. This period is significantly larger than $\lambda_{J}\approx 23$ Å for the Nb(110)/Ni(111)/Nb(110) case [shown in black in Fig. 11]. The difference of $\approx 7$ Å between the two half-periods represents roughly 5 inter-plane distances for Ni(110) [3 inter-plane distances for Ni(111)], and is not induced by the unrelaxed atomic structures at the Nb(110)/Ni(110) interfaces. It is noteworthy that the calculated period for the Ni(111) trilayers coincides better with the available experimental data [22] than that for the Ni(110) case.

## Appendix D Effect of spin-orbit interaction

Figure 12: GW Fermi surfaces of bulk Ni in the $\mathbf{k}_{\parallel}$ plane corresponding to Ni(111). (a) No SOC [same as Fig. 4]. Shown are the majority band $6^{\uparrow}$ (red) and the minority bands $6^{\downarrow}$ (blue), $3^{\downarrow}$ (dark blue), $4^{\downarrow}$ (cyan), and $5^{\downarrow}$ (green). (b) With SOC. SOC opens gaps around the small regions close to the band crossings (circled).

In this section, we discuss the effect of spin-orbit coupling (SOC) on the Josephson current in quasi-ballistic SFS junctions. The effect of the spin-orbit interaction is two-fold: (i) SOC changes the band structure and, therefore, may modify the dependence of the quasiparticle velocities on the transverse momenta; and (ii) SOC couples spin and orbital motion and leads to spin precession along the junction. The combination of spin precession and scattering on non-magnetic impurities may introduce random spin-flip processes which suppress the phase difference between minority and majority quasiparticles.
We analyze both mechanisms below and show that they represent weak perturbations to our main results and do not change the qualitative predictions for quasi-ballistic Ni-based MJJs. A detailed theoretical analysis of the bulk Ni band structure with SOC and different orientations of the magnetic moment has been given in Ref. 69. In Fig. 12, we compare the Fermi surface of Ni with and without SOC, using the same high-level theory [46] as in Fig. 4. The Questaal code can treat SOC effects; see, e.g., the tutorial ‘Spin and spin orbit coupling.’ The SOC implementation is conventional for the energy-dependent local basis set and band structure calculations (see Sec. 2.8.2 in Ref. 47), and has also been developed for the Green’s functions (see Supplementary Materials in Ref. 78 and Sec. 2.16 in Ref. 47). One may notice that the spin-orbit coupling only weakly modifies the Fermi surface shown in Fig. 12. Indeed, apart from several pockets (close to the degeneracy points) across the Brillouin zone, the SOC only weakly perturbs the band structure. In fcc crystals, the $d$-orbitals are split by symmetry into degenerate $t_{\mathrm{2g}}$ and $e_{\mathrm{g}}$ levels at the $\Gamma$ point. The SOC further splits the degenerate $t_{\mathrm{2g}}$ levels. From these level shifts (taken at the $\Gamma$ $\mathbf{k}_{\parallel}$-point), we estimate the local SOC coupling to be of the order of $E_{\mathrm{\scriptscriptstyle SO}}\sim 10\,$meV, which is much smaller than the exchange splitting in Ni. In Ref. 69, the corresponding SOC strength was found to be 68 meV, which further corroborates our conclusions. One can now evaluate the effect of SOC on quasiparticle propagation through the junction. Given that the bulk band structure of Ni is centrosymmetric, Dresselhaus spin-orbit coupling is forbidden by symmetry. Rashba SOC may appear due to inversion-symmetry breaking along the direction of the junction.
However, the rapid screening of the interface potential in the ferromagnet would limit Rashba SOC to a few atomic layers in the junction. Finally, we believe that the effect of local spin-flip processes due to scattering on impurities is weak in the regime of interest. Indeed, here we are considering quasi-ballistic magnetic Josephson junctions (i.e., the thickness of the junction is smaller than the mean free path $l_{\mathrm{\scriptscriptstyle MFP}}$ in Ni). Thus, we expect the impurity-induced spin-relaxation rate to be suppressed as well.

## Appendix E Normal scattering matrix calculation

The Questaal code is based on the LMTO technique, which uses a set of electron wave-functions $\phi_{\mathrm{R}L}$ and their energy derivatives $\dot{\phi}_{\mathrm{R}L}$ as a basis set. The wave-functions $\phi_{\mathrm{R}L}$ are solutions of the spherical Schrödinger equation in a sphere around a given atom at position $R$ with angular quantum numbers $L=(l,m)$. The LMTO technique is an all-electron approach, which does not rely on the use of pseudo-potentials. The basis set contains core electrons as well as ‘valence’ (non-core) electrons. The electron ground state of the system is obtained within the DFT-LDA-ASA framework [47]. The (energy-dependent) basis set consists of partial waves of s, p, d character on each of the atomic sites. Core levels are integrated separately from the valence partial waves to obtain the all-electron charge density. However, they are not included in the secular matrix. The calculations are converged when the variations in the electron density and the total energy between the last iterations are below $3\times 10^{-5}$ and $10^{-5}$, respectively. To obtain the self-consistent charge density, calculations were performed with a $\mathbf{k}_{\parallel}$ mesh of ($8\times 4$).
As is well known, for purposes of determining the density a finer $\mathbf{k}_{\parallel}$ mesh is not needed, since the output is a $\mathbf{k}_{\parallel}$-independent potential for a subsequent transport calculation. Atomic relaxations are performed on relatively small cells with a full-potential method [47], with periodic boundary conditions. Once the ground state is reached, the atomic positions of the frontier atoms at the Nb/Ni and Ni/Nb interfaces are allowed to relax to minimize the total energy. Convergence is achieved when all the forces on the relaxed atoms are below $\sim 25$ mRy/bohr. Atomic relaxations are constrained to the first and second frontier layers; these shifts are then added to the ideal geometry of the (larger) stacked cells in the subsequent ASA calculations of transport. More details of the calculations can be found in the Questaal tutorial ‘Nb/Ni superlattice.’ For transport, a denser $\mathbf{k}_{\parallel}$ mesh is needed, as the transmission (reflection) probability is very sensitive to the $\mathbf{k}_{\parallel}$-point sampling. As shown in Fig. 3 of the main text, we used meshes of ($24\times 24$) and ($90\times 92$) points, giving rise to 290 and 4142 irreducible $\mathbf{k}_{\parallel}$ points, respectively. This establishes that our transport calculations are well converged in $\mathbf{k}_{\parallel}$. Note that the principal layer technique (used for the transport) does not rely on periodic boundary conditions in the (transport) direction perpendicular to the layers (in our case the $z$-axis). Hence there is no $k_{z}$-point sampling; the wave number $k_{z}(E)$ should be understood as a continuous complex function of $E$. Further details on the construction of the $3n$, $3n+1$, and $3n+2$ families of Ni planes of the Nb(110)/Ni(111)/Nb(110) junctions can be found in the Questaal tutorial ‘Nb(110)/Ni/Nb(110) metallic trilayers.’

## References

* Josephson [1962] B.
Josephson, Possible new effects in superconductive tunnelling, Phys. Lett. 1, 251 (1962). * Warburton [2011] P. A. Warburton, The Josephson effect: 50 years of science and technology, Phys. Educ. 46, 669 (2011). * Beenakker [1992] C. W. J. Beenakker, Three “universal” mesoscopic Josephson effects, in _Transport phenomena in mesoscopic systems_, Vol. 109, edited by H. Fukuyama and T. Ando (Springer, Berlin, Heidelberg, 1992) pp. 235–253. * Golubov _et al._ [2004] A. A. Golubov, M. Y. Kupriyanov, and E. Il’ichev, The current-phase relation in Josephson junctions, Rev. Mod. Phys. 76, 411 (2004). * Bardeen _et al._ [1957] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Microscopic theory of superconductivity, Phys. Rev. 106, 162 (1957). * Dayton _et al._ [2018] I. M. Dayton, T. Sage, E. C. Gingrich, M. G. Loving, T. F. Ambrose, N. P. Siwak, S. Keebaugh, C. Kirby, D. L. Miller, A. Y. Herr, Q. P. Herr, and O. Naaman, Experimental demonstration of a Josephson magnetic memory cell with a programmable $\pi$-junction, IEEE Magn. Lett. 9, 1 (2018). * Buzdin [2005] A. I. Buzdin, Proximity effects in superconductor-ferromagnet heterostructures, Rev. Mod. Phys. 77, 935 (2005). * Bergeret _et al._ [2005] F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Odd triplet superconductivity and related phenomena in superconductor-ferromagnet structures, Rev. Mod. Phys. 77, 1321 (2005). * Blamire and Robinson [2014] M. G. Blamire and J. W. A. Robinson, The interface between superconductivity and magnetism: understanding and device prospects, J. Condens. Matter Phys. 26, 453201 (2014). * Eschrig [2015] M. Eschrig, Spin-polarized supercurrents for spintronics: a review of current progress, Rep. Prog. Phys 78, 104501 (2015). * Fulde and Ferrell [1964] P. Fulde and R. A. Ferrell, Superconductivity in a strong spin-exchange field, Phys. Rev. 135, A550 (1964). * Larkin and Ovchinnikov [1965] A. I. Larkin and Y. N. Ovchinnikov, Inhomogeneous state of superconductors, Sov. Phys.
JETP 20, 762 (1965). * Buzdin _et al._ [1982] A. I. Buzdin, L. N. Bulaevskii, and S. V. Panyukov, Critical-current oscillations as a function of the exchange field and thickness of the ferromagnetic metal (F) in an S-F-S Josephson junction, JETP Lett. 35, 178 (1982). * Ryazanov _et al._ [2001a] V. V. Ryazanov, V. A. Oboznov, A. Y. Rusanov, A. V. Veretennikov, A. A. Golubov, and J. Aarts, Coupling of two superconductors through a ferromagnet: Evidence for a $\pi$ junction, Phys. Rev. Lett. 86, 2427 (2001a). * Bell _et al._ [2004] C. Bell, G. Burnell, C. W. Leung, E. J. Tarte, D.-J. Kang, and M. G. Blamire, Controllable Josephson current through a pseudospin-valve structure, Appl. Phys. Lett 84, 1153 (2004). * Gingrich _et al._ [2016] E. C. Gingrich, B. M. Niedzielski, J. A. Glick, Y. Wang, D. L. Miller, R. Loloee, W. P. Pratt Jr., and N. O. Birge, Controllable 0-$\pi$ Josephson junctions containing a ferromagnetic spin valve, Nat. Phys. 12, 564 (2016). * Ryazanov _et al._ [2001b] V. V. Ryazanov, V. A. Oboznov, A. V. Veretennikov, and A. Y. Rusanov, Intrinsically frustrated superconducting array of superconductor-ferromagnet-superconductor $\pi$ junctions, Phys. Rev. B 65, 020501 (2001b). * Kontos _et al._ [2002] T. Kontos, M. Aprili, J. Lesueur, F. Genêt, B. Stephanidis, and R. Boursier, Josephson junction through a thin ferromagnetic layer: Negative coupling, Phys. Rev. Lett. 89, 137007 (2002). * Sellier _et al._ [2003] H. Sellier, C. Baraduc, F. Lefloch, and R. Calemczuk, Temperature-induced crossover between $0$ and $\pi$ states in S/F/S junctions, Phys. Rev. B 68, 054531 (2003). * Robinson _et al._ [2006] J. W. A. Robinson, S. Piano, G. Burnell, C. Bell, and M. G. Blamire, Critical current oscillations in strong ferromagnetic $\pi$ junctions, Phys. Rev. Lett. 97, 177003 (2006). * Khaire _et al._ [2009] T. S. Khaire, W. P. Pratt, and N. O. Birge, Critical current behavior in Josephson junctions with the weak ferromagnet PdNi, Phys. Rev. B 79, 094523 (2009).
* Baek _et al._ [2017] B. Baek, M. L. Schneider, M. R. Pufall, and W. H. Rippard, Phase offsets in the critical-current oscillations of Josephson junctions based on Ni and Ni-(Ni81Fe19)xNby barriers, Phys. Rev. Applied 7, 064013 (2017). * Baek _et al._ [2018] B. Baek, M. L. Schneider, M. R. Pufall, and W. H. Rippard, Anomalous supercurrent modulation in Josephson junctions with Ni-based barriers, IEEE Trans. Appl. Supercond 28, 1 (2018). * Aguilar _et al._ [2020] V. Aguilar, D. Korucu, J. A. Glick, R. Loloee, W. P. Pratt, and N. O. Birge, Spin-polarized triplet supercurrent in Josephson junctions with perpendicular ferromagnetic layers, Phys. Rev. B 102, 024518 (2020). * Mishra _et al._ [2021] S. S. Mishra, R. Loloee, and N. O. Birge, Supercurrent transmission through Ni/Ru/Ni synthetic antiferromagnets, Appl. Phys. Lett. 119, 172603 (2021). * McMillan [1968] W. L. McMillan, Theory of superconductor—normal-metal interfaces, Phys. Rev. 175, 559 (1968). * Wolfram [1968] T. Wolfram, Tomasch oscillations in the density of states of superconducting films, Phys. Rev. 170, 481 (1968). * Kulik [1970] I. O. Kulik, Macroscopic quantization and the proximity effect in S-N-S junctions, J. Exp. Theor. Phys. 30, 944 (1970). * Demers and Griffin [1971] J. Demers and A. Griffin, Scattering and tunneling of electronic excitations in the intermediate state of superconductors, Can. J. Phys. 49, 285 (1971). * Griffin and Demers [1971] A. Griffin and J. Demers, Tunneling in the normal-metal-insulator-superconductor geometry using the Bogoliubov equations of motion, Phys. Rev. B 4, 2202 (1971). * Entin-Wohlman [1977] O. Entin-Wohlman, Effect of a barrier at the superconducting-normal metal interface, J. Low Temp. Phys 27, 777 (1977). * Blonder _et al._ [1982] G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion, Phys. Rev. B 25, 4515 (1982). 
* Furusaki and Tsukada [1991] A. Furusaki and M. Tsukada, DC Josephson effect and Andreev reflection, Solid State Commun 78, 299 (1991). * Furusaki _et al._ [1992] A. Furusaki, H. Takayanagi, and M. Tsukada, Josephson effect of the superconducting quantum point contact, Phys. Rev. B 45, 10563 (1992). * Furusaki [1994] A. Furusaki, DC Josephson effect in dirty SNS junctions: Numerical study, Physica B 203, 214 (1994). * de Jong and Beenakker [1995] M. J. M. de Jong and C. W. J. Beenakker, Andreev reflection in ferromagnet-superconductor junctions, Phys. Rev. Lett. 74, 1657 (1995). * Tanaka and Kashiwaya [1997] Y. Tanaka and S. Kashiwaya, Theory of Josephson effect in superconductor-ferromagnetic-insulator-superconductor junction, Physica C 274, 357 (1997). * Žutić and Valls [1999] I. Žutić and O. T. Valls, Spin-polarized tunneling in ferromagnet/unconventional superconductor junctions, Phys. Rev. B 60, 6320 (1999). * Radović _et al._ [2003] Z. Radović, N. Lazarides, and N. Flytzanis, Josephson effect in double-barrier superconductor-ferromagnet junctions, Phys. Rev. B 68, 014501 (2003). * Cayssol and Montambaux [2005] J. Cayssol and G. Montambaux, Incomplete Andreev reflection in a clean superconductor-ferromagnet-superconductor junction, Phys. Rev. B 71, 012507 (2005). * Konschelle _et al._ [2008] F. Konschelle, J. Cayssol, and A. I. Buzdin, Nonsinusoidal current-phase relation in strongly ferromagnetic and moderately disordered SFS junctions, Phys. Rev. B 78, 134505 (2008). * Tzortzakakis and Flytzanis [2019] A. F. Tzortzakakis and N. Flytzanis, _Josephson junctions with spin-orbit and spin-flip interactions_ , Master’s thesis, University of Crete (2019). * Demler _et al._ [1997] E. A. Demler, G. B. Arnold, and M. R. Beasley, Superconducting proximity effects in magnetic metals, Phys. Rev. B 55, 15174 (1997). * Gall [2016] D. Gall, Electron mean free path in elemental metals, J. Appl. Phys 119, 085101 (2016). * Cheng and Lutchyn [2012] M. Cheng and R. M.
Lutchyn, Josephson current through a superconductor/semiconductor-nanowire/superconductor junction: Effects of strong spin-orbit coupling and Zeeman splitting, Phys. Rev. B 86, 134522 (2012). * Sponza _et al._ [2017] L. Sponza, P. Pisanti, A. Vishina, D. Pashov, C. Weber, M. van Schilfgaarde, S. Acharya, J. Vidal, and G. Kotliar, Self-energies in itinerant magnets: A focus on Fe and Ni, Phys. Rev. B 95, 041112 (2017). * Pashov _et al._ [2020] D. Pashov, S. Acharya, W. R. Lambrecht, J. Jackson, K. D. Belashchenko, A. Chantis, F. Jamet, and M. van Schilfgaarde, Questaal: A package of electronic structure methods based on the linear muffin-tin orbital technique, Comput. Phys. Commun 249, 107065 (2020). * Faleev _et al._ [2005] S. V. Faleev, F. Léonard, D. A. Stewart, and M. van Schilfgaarde, Ab initio tight-binding LMTO method for nonequilibrium electron transport in nanosystems, Phys. Rev. B 71, 195422 (2005). * Meir and Wingreen [1992] Y. Meir and N. S. Wingreen, Landauer formula for the current through an interacting electron region, Phys. Rev. Lett. 68, 2512 (1992). * Andersen and Jepsen [1984] O. K. Andersen and O. Jepsen, Explicit, first-principles tight-binding theory, Phys. Rev. Lett. 53, 2571 (1984). * Fisher and Lee [1981] D. S. Fisher and P. A. Lee, Relation between conductivity and transmission matrix, Phys. Rev. B 23, 6851 (1981). * Chen _et al._ [1989] A.-B. Chen, Y.-M. Lai-Hsu, and W. Chen, Difference-equation approach to the electronic structures of surfaces, interfaces, and superlattices, Phys. Rev. B 39, 923 (1989). * Fujimoto and Hirose [2003] Y. Fujimoto and K. Hirose, First-principles treatments of electron transport properties for nanoscale junctions, Phys. Rev. B 67, 195315 (2003). * Wimmer [2008] M. Wimmer, _Quantum transport in nanostructures: From computational concepts to spintronics in graphene and magnetic tunnel junctions_ , Ph.D. thesis, Universität Regensburg (2008). * Halterman and Valls [2001] K. Halterman and O. T. 
Valls, Proximity effects at ferromagnet-superconductor interfaces, Phys. Rev. B 65, 014509 (2001). * Halterman and Valls [2002] K. Halterman and O. T. Valls, Proximity effects and characteristic lengths in ferromagnet-superconductor structures, Phys. Rev. B 66, 224516 (2002). * Csire _et al._ [2018] G. Csire, A. Deák, B. Nyári, H. Ebert, J. F. Annett, and B. Újfalussy, Relativistic spin-polarized KKR theory for superconducting heterostructures: Oscillating order parameter in the Au layer of Nb/Au/Fe trilayers, Phys. Rev. B 97, 024514 (2018). * Halterman and Valls [2004] K. Halterman and O. T. Valls, Layered ferromagnet-superconductor structures: The $\pi$ state and proximity effects, Phys. Rev. B 69, 014517 (2004). * Halterman _et al._ [2007] K. Halterman, P. H. Barsic, and O. T. Valls, Odd triplet pairing in clean superconductor/ferromagnet heterostructures, Phys. Rev. Lett. 99, 127002 (2007). * Halterman _et al._ [2015] K. Halterman, O. T. Valls, and C.-T. Wu, Charge and spin currents in ferromagnetic Josephson junctions, Phys. Rev. B 92, 174516 (2015). * Halterman and Alidoust [2016] K. Halterman and M. Alidoust, Josephson currents and spin-transfer torques in ballistic SFSFS nanojunctions, Supercond. Sci. Technol. 29, 055007 (2016). * Alidoust and Halterman [2020] M. Alidoust and K. Halterman, Supergap and subgap enhanced currents in asymmetric ${\mathrm{s}}_{1}{\mathrm{fs}}_{2}$ Josephson junctions, Phys. Rev. B 102, 224504 (2020). * Yagovtsev _et al._ [2021] V. O. Yagovtsev, N. G. Pugach, and M. Eschrig, The inverse proximity effect in strong ferromagnet–superconductor structures, Supercond. Sci. Technol. 34, 025003 (2021). * Beenakker [1991] C. W. J. Beenakker, Universal limit of critical-current fluctuations in mesoscopic Josephson junctions, Phys. Rev. Lett. 67, 3836 (1991). * van Heck _et al._ [2014] B. van Heck, S. Mi, and A. R. Akhmerov, Single fermion manipulation via superconducting phase differences in multiterminal Josephson junctions, Phys. Rev.
B 90, 155450 (2014). * Aguayo _et al._ [2004] A. Aguayo, I. I. Mazin, and D. J. Singh, Why Ni3Al is an itinerant ferromagnet but Ni3Ga is not, Phys. Rev. Lett. 92, 147201 (2004). * Moriya [1985] T. Moriya, _Spin fluctuations in itinerant electron magnetism_ (Springer-Verlag, Berlin, 1985). * Himpsel _et al._ [1979] F. J. Himpsel, J. A. Knapp, and D. E. Eastman, Experimental energy-band dispersions and exchange splitting for Ni, Phys. Rev. B 19, 2919 (1979). * Bünemann _et al._ [2008] J. Bünemann, F. Gebhard, T. Ohm, S. Weiser, and W. Weber, Spin-orbit coupling in ferromagnetic nickel, Phys. Rev. Lett. 101, 236404 (2008). * Karlsson and Aryasetiawan [2000] K. Karlsson and F. Aryasetiawan, A many-body approach to spin-wave excitations in itinerant magnetic systems, J. Phys. Condens. Matter 12, 7617 (2000). * Stoutimore _et al._ [2018] M. J. A. Stoutimore, A. N. Rossolenko, V. V. Bolginov, V. A. Oboznov, A. Y. Rusanov, D. S. Baranov, N. Pugach, S. M. Frolov, V. V. Ryazanov, and D. J. Van Harlingen, Second-harmonic current-phase relation in Josephson junctions with ferromagnetic barriers, Phys. Rev. Lett. 121, 177702 (2018). * Dorokhov [1984] O. N. Dorokhov, On the coexistence of localized and extended electronic states in the metallic phase, Solid State Commun 51, 381 (1984). * Mello _et al._ [1988] P. A. Mello, P. Pereyra, and N. Kumar, Macroscopic approach to multichannel disordered conductors, Ann Phys 181, 290 (1988). * Moreau _et al._ [2007] C. E. Moreau, I. C. Moraru, N. O. Birge, and W. P. Pratt, Measurement of spin diffusion length in sputtered Ni films using a special exchange-biased spin valve geometry, Appl. Phys. Lett 90, 012101 (2007). * Bass [2011] J. Bass, Giant magnetoresistance: Experiment, in _Handbook of spin transport and magnetism_, edited by I. Zutic and E. Y. Tsymbal (CRC Press, 2011) pp. 69–94. * Tsymbal _et al._ [2011] E. Y. Tsymbal, D. Pettifor, and S.
Maekawa, Giant magnetoresistance: Theory, in _Handbook of spin transport and magnetism_, edited by I. Zutic and E. Y. Tsymbal (CRC Press, 2011) pp. 95–114. * Petrovykh _et al._ [1998] D. Y. Petrovykh, K. N. Altmann, H. Höchst, M. Laubscher, S. Maat, G. J. Mankey, and F. J. Himpsel, Spin-dependent band structure, Fermi surface, and carrier lifetime of permalloy, Appl. Phys. Lett 73, 3459 (1998). * Belashchenko _et al._ [2015] K. D. Belashchenko, L. Ke, M. Däne, L. X. Benedict, T. N. Lamichhane, V. Taufour, A. Jesche, S. L. Bud’ko, P. C. Canfield, and V. P. Antropov, Origin of the spin reorientation transitions in (Fe1-xCox)2B alloys, Appl. Phys. Lett. 106, 062408 (2015).
# Patterns for Representing Knowledge Graphs to Communicate Situational Knowledge of Service Robots

Shengchen Zhang, Zixuan Wang, Chaoran Chen, Yi Dai, Lyumanshan Ye, and Xiaohua Sun
College of Design and Innovation, Tongji University, 281 Fuxin Rd, Shanghai, China (2021)

###### Abstract.

Service robots are envisioned to be adaptive to their working environment based on situational knowledge. Recent research has focused on designing visual representations of knowledge graphs for expert users. However, how to generate an understandable interface for non-expert users remains to be explored. In this paper, we use knowledge graphs (KGs) as a common ground for knowledge exchange and develop a pattern library for designing KG interfaces for non-expert users. After identifying the types of robotic situational knowledge from the literature, we present a formative study in which participants used cards to communicate the knowledge for given scenarios. We iteratively coded the results and identified patterns for representing various types of situational knowledge. To derive design recommendations for applying the patterns, we prototyped a lab service robot and conducted Wizard-of-Oz testing. The patterns and recommendations could provide useful guidance in designing knowledge-exchange interfaces for robots.
Keywords: Design Patterns, Interface Design, Knowledge Graph, Human-Robot Interaction

CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. Copyright ACM, 2021. doi: 10.1145/3411764.3445767. isbn: 978-1-4503-8096-6/21/05. Price: 15.00. Submission ID: 7721. CCS concepts: Human-centered computing (Graphical user interfaces; User interface design; User studies); Computer systems organization (Robotics); Computing methodologies (Cognitive robotics).

Figure 1. In this work, we use observations and analysis of human presentation of situational knowledge to develop a pattern library for interface design. We further derive design recommendations by prototyping and testing the interface of a service robot.

## 1. Introduction

Service robots are increasingly pervasive in our daily life and are anticipated to execute complex tasks based on high-level goals and to interact with users in an easily understandable way (Kattepur, 2019). This requires robots to effectively organize and represent situational knowledge, i.e., the in-situ information about humans, objects, places, and events in the robot’s working environment (Javia and Cimiano, 2016). However, since situational knowledge is often highly related to the users and objects on the spot, robots need to interact with humans to mediate the mismatch between perception and comprehension of situational knowledge (Fang et al., 2015).
Knowledge graphs(Hogan et al., 2020) are a common method used in both artificial intelligence (AI) to incorporate human knowledge into AI models, and in robotics to build knowledge-enabled robots. Previous research have been exploring designing interfaces to help domain experts to directly view and manipulate the knowledge graph in a robot in order to understand and operate it(Lemaignan et al., 2017a). However, for non-expert users who have little technical knowledge in ontological models like KGs, it can be hard to understand and interact with such interfaces. Existing works are developing a set of human-friendly vocabulary to build robot ontology(Diprose et al., 2012b), but research on KG interface design for non-expert users is still relatively few. Meanwhile, previous works showed that design patterns benefit the field of HCI and HRI by providing reusable solutions for recurring problems(Kahn et al., 2008a). For KG interfaces, the study of interface design patterns is especially useful. A set of patterns can lower the requirement on the understanding of technical details for the designers, while also accelerating the development of such interfaces. This paper describes the process (Figure 1) of developing a library of patterns for visually presenting situational, robotic knowledge graphs. The goal of the patterns is to help designing effective interfaces to communicate the situational knowledge of service robots. Our contribution is three-fold: * • We developed a pattern library of high-level and low-level patterns of situational knowledge communication that can be used to design robot interfaces. * • We described in detail our process of constructing the library from how non- expert users present KG elements on a canvas. * • We further derived design recommendations for using the pattern library through prototyping and a Wizard-of-Oz field study. ## 2\. 
Related Work

Our work builds on prior research on situational knowledge exchange between humans and robots, the challenges of knowledge graphs in cognitive robots, and patterns of human-robot interaction.

### 2.1. Situational knowledge exchange in humans and robots

Situational knowledge exchange has played an increasingly critical role in human-robot interaction, for both robotic perception and comprehension (Javia and Cimiano, 2016). Many studies have focused on leveraging symbol grounding techniques (connecting numeric and symbolic representations of real-world objects and the environment) to make situational knowledge exchange more accurate and easier to understand (Bastianelli et al., 2013). These techniques have been used to support situational knowledge representation for robots (Topp, 2017; Pronobis and Jensfelt, 2012; Nieto-Granda et al., 2010), situation assessment in human-robot interaction (Riley et al., 2010; Sisbot et al., 2011), and context-based human-robot collaboration (Zachary et al., 2013, 2015). Most relevant to our work is research on grounding and representing situational knowledge for robots in a human-understandable way. For example, Bastianelli et al. (Bastianelli et al., 2013) proposed using metric maps and a multi-modal interface that allowed users to guide robots to ground symbolic information in the operational environment. Carpenter et al. (Carpenter and Zachary, 2017) presented CARIL, a computational architecture that used declarative situational representation to adapt robots’ behavior to humans during collaboration. While several aspects of exchanging situational knowledge have been studied, such as creating robot task plans (Paxton et al., 2018; Kattepur, 2019) and programming with the Robot Operating System (ROS) (Tiddi et al., 2017; Leonardi et al., 2019; Diprose et al., 2012b), few studies have explored a unified form of visual representation for situational knowledge in robots.
Knowledge graphs can represent heterogeneous and interconnected knowledge (Jiang, 2020), and are therefore often used as a unified format for knowledge representation in robots. Our work aims to identify patterns for representing situational knowledge based on the knowledge graph in a service robot.

### 2.2. Challenges of knowledge graphs in cognitive robots

Although knowledge graphs have been applied to facilitate human-robot interaction, their use is hindered by three main challenges: static systems, monotonous interaction modalities, and intricate interfaces. Firstly, robots based on static systems struggle to adapt to user preferences and to interact with users intuitively (Jokinen et al., 2019; Javia and Cimiano, 2016). To address this problem and explore human-friendly systems, some recent research enhanced the accuracy of human-robot knowledge communication by acquiring multi-modal human behavior data or through AR interfaces (Kennington and Shukla, 2017; Liu et al., 2018; Tsiakas et al., 2017). Secondly, previous research has indicated that uni-modal interaction (typically voice interaction for robots) can lead to misunderstanding the users’ requests (Kennington and Shukla, 2017). Liu et al. proposed a graphical interface that adds synchronous visual feedback of the user’s decision sequence to a dialogue interaction system, which successfully reduced misunderstandings (Liu et al., 2010). This demonstrates the effectiveness of GUI-based interaction in human-robot knowledge communication. Thirdly, relatively little research has focused on enhancing non-expert users’ understanding of ontological models and developing interactive systems for this purpose. Some relevant research explored human-readable ways of describing robot behavior (Diprose et al., 2012b), and designed an interface using ontological abstraction to help non-expert users program robots with a low learning cost (Tiddi et al., 2018).
These works inspired us to explore how to generate a systematic and understandable interface for exchanging knowledge between non-expert users and service robots.

### 2.3. Patterns of human-robot interaction

Pattern language was first proposed by Alexander in 1977 (Alexander, 1977), and then thrived in the fields of human-computer interaction (HCI) (Pea, 1987; Seffah, 2010) and human-robot interaction (HRI) (Kahn et al., 2008b; Mioch et al., 2014; Sauppé and Mutlu, 2014). Kahn et al. argued that design patterns could benefit HRI by providing designers with the necessary knowledge and saving them time when reusing these patterns to solve recurring problems (Kahn et al., 2008a). Patterns also serve as scaffolding to help future research explore relevant directions. Prior research has focused on extracting patterns to guide the design of HRI. For example, Oliveira et al. derived social interaction patterns by observing human and robot players’ behaviors in card games (Oliveira et al., 2018). Oguz et al. developed an ontological framework from human interaction demonstrations, which could be transferred to HRI scenarios (Oguz et al., 2019). Most relevant to our research are patterns for presenting robot knowledge on a graphical interface. Robots have shown the ability to use interfaces to effectively communicate various types of information and promote task efficiency in teleoperation (Barba et al., 2020; Chen et al., 2007), manufacturing (Marvel et al., 2020), and elderly care (Klakegg et al., 2017). Although prior research has shown interest in exploring the design of such interfaces, few works focused on extracting patterns of knowledge presentation on the robot’s interface. Our work aims to facilitate the design of robot interfaces for exchanging situational knowledge.

## 3\.
Formative Study

Alexander points out in his foundational work (Alexander, 1979) that patterns “cannot be made, but only be generated, indirectly, by the ordinary actions of the people.” Regarding the application of design patterns to HRI, Freier et al. further argued that design patterns are in nature “patterns of human interaction with the physical and social world (Kahn et al., 2008b).” In line with this reasoning, we grounded our formative study in observations and analysis of the methods non-expert users employ to visually communicate situational knowledge. Participants were tasked with presenting the knowledge needed in an HRI scenario using cards on a canvas. We then iteratively coded the results and identified patterns for presenting various types of robotic situational knowledge. The following sections describe the process of discovering and formalizing these patterns.

### 3.1. Scenarios

To create a comprehensive and believable setting to ground our observation, we first identified three HRI scenarios that span three categories of situational knowledge exchange: semantic, procedural, and episodic. As knowledge communication involves both robot and human parties, we referenced research on both human knowledge (Alexander and Judy, 1988; de Jong and Ferguson-Hessler, 1996) and robotic knowledge (Laird et al., 2017). We adopted the categorization in (Laird et al., 2017) because its knowledge types are applicable to both humans and robots and are more robot-specific.

##### Communicating semantic knowledge

Semantic knowledge is “semantically abstract facts (Laird et al., 2017).” In semantic scenarios, the participants were asked to present information about users, object ownership, or environment locations on the canvas.
##### Communicating procedural knowledge

Procedural knowledge is “knowledge about actions, whether internal or external (Laird et al., 2017).” In procedural scenarios, the participants were asked to present information about robot tasks or actions in a planned procedure.

##### Communicating episodic knowledge

Episodic knowledge is “contextualized experiential knowledge” (Laird et al., 2017). In episodic scenarios, the participants presented knowledge such as a perceived sequence of events or information related to a certain period of time.

To ensure our results’ applicability to a wide variety of working environments, we expanded each scenario into three types of contexts ranging from private to public: home, office, and elderly care center. This resulted in the nine scenarios used in the formative study. We used comic strips to present each scenario to the participants to reduce the influence of narration on the results. An example of the scenario comic is shown in Figure 2. The complete collection of comics can be found in the supplementary materials.

Figure 2. An example of the scenario comic. The comic consists of four panels: the first three depict a scene where a user asks the robot what it has learned on its first day at home, and the fourth instructs the participant to play the role of the robot and arrange cards on the canvas to display the information.

Figure 3. The design of the knowledge cards. On the right is an example of a card in the group “User”, with ontology term “User profile”. Each card is hexagonal with shaded outlines: the ontology group the card belongs to is written in the shaded areas near the top and bottom, the name of the KG node and its ontology term appear in the center, a small text box shows the scenario ID, and the back of the card lists related card names.
### 3.2. Knowledge Cards

To achieve a more comprehensive coverage of the types of situational knowledge in service robots, we conducted a literature survey of works on ontologies for service robots in the Scopus database (https://www.scopus.com). Initially, we used “situational” and its variations (such as “situated” and “in situ”) in a pilot search. To the best of our efforts, we found no existing works categorizing robot situational knowledge types. Therefore, we searched for works categorizing general robot knowledge types, using the query shown below:

{robot AND knowledge graph} OR {robot AND ontology} OR {robot AND semantic network} OR {robot AND knowledge model} AND NOT industr* AND NOT agricultur* AND NOT farming

The search returned 228 papers. We further excluded works that focused on a specific type of knowledge and those that focused on specific applications of robot ontologies. In the end, 11 papers were selected. We reviewed each paper for ontology terms and grouped similar terms to form an initial list of robot knowledge types. We finally removed non-situational knowledge about robotic components and capabilities to arrive at Table 1.

Table 1. List of situational knowledge used in our formative study

Group | Ontology | Description | Examples
---|---|---|---
Objects | Object | A physical entity in the environment, excluding agents such as users and robots. | (Olivares-Alarcos et al., 2019) (Lim et al., 2011) (Il Hong Suh et al., 2007) (Chang et al., 2020) (Jeon et al., [n.d.]) (Tenorth and Beetz, 2013) (Azevedo et al., 2019) (Lemaignan et al., 2017b) (Kruijff et al., 2007)
| Affordance | The possibility of action (Olivares-Alarcos et al., 2019) on an object, as perceived by the robot. | (Olivares-Alarcos et al., 2019) (Jeon et al., [n.d.]) (Tenorth and Beetz, 2013)
Environment | Environment Map | A spatial representation of the robot’s working environment (Olivares-Alarcos et al., 2019). | (Olivares-Alarcos et al., 2019) (Lim et al., 2011) (Il Hong Suh et al., 2007) (Chang et al., 2020) (Jeon et al., [n.d.]) (Tenorth and Beetz, 2013) (Lemaignan et al., 2017b) (Kruijff et al., 2007)
Users | User profile | The basic information of the users (Chang et al., 2020), such as name and ID in the face recognition module. | (Lemaignan et al., 2017b) (Chang et al., 2020) (Jeon et al., [n.d.]) (Tenorth and Beetz, 2013) (Azevedo et al., 2019) (Mahieu et al., 2019) (Bruno et al., 2019)
| Social concept | Social knowledge of the user obtained while interacting with a human (Chang et al., 2020). | (Chang et al., 2020) (Mahieu et al., 2019) (Bruno et al., 2019)
| Emotion/Intention | The emotional state and intention of the user, as detected by the robot. | (Chang et al., 2020) (Azevedo et al., 2019) (Mahieu et al., 2019)
Action | Action & Task | A task represents “a piece of work that has to be done by the robot (Olivares-Alarcos et al., 2019)”. An action refers to “a way to execute a task (Olivares-Alarcos et al., 2019)”, which we use to denote a step that the robot takes to complete a task. | (Olivares-Alarcos et al., 2019) (Lim et al., 2011) (Il Hong Suh et al., 2007) (Tenorth and Beetz, 2013) (Mahieu et al., 2019) (Lemaignan et al., 2017b)
| Activity & Behavior | The behavior that the robot adopts when carrying out a task. Example behaviors may include scripted interactions, open dialogue, or simply command-and-response. | (Olivares-Alarcos et al., 2019) (Lim et al., 2011) (Il Hong Suh et al., 2007) (Chang et al., 2020) (Jeon et al., [n.d.]) (Tenorth and Beetz, 2013)
| Plan & Method | A sequence of actions that the robot would take in order to fulfill a task. This can be either programmed or generated. | (Olivares-Alarcos et al., 2019) (Chang et al., 2020) (Tenorth and Beetz, 2013)
| Interaction & Communication | A group of pre-defined actions that involve interacting or communicating with the user. This type of knowledge is often associated with one or more types of behavior. | (Olivares-Alarcos et al., 2019) (Chang et al., 2020) (Tenorth and Beetz, 2013) (Kruijff et al., 2007)
Context | Spatial & temporal context | (Lim et al., 2011) used these terms to refer to the spatial and temporal relationships between objects. We expand this definition to include non-object concepts such as events and interactions. | (Lim et al., 2011) (Lemaignan et al., 2017b)
| Situation | (Lim et al., 2011) used this term to refer to the detected spatial status of objects (such as “crowded”). We expand this definition to general social situations. | (Lim et al., 2011) (Lemaignan et al., 2017b)
| Event | An event is a notable happening detected by the robot. | (Tenorth and Beetz, 2013)

We then manually compiled knowledge graph datasets for the nine scenarios, with each dataset spanning all of the knowledge types listed. The total number of nodes was kept roughly the same for each dataset to maintain similar task difficulty across the scenarios. We used hexagonal cards to represent each node in the knowledge graph, inspired by Padilla et al. (Padilla et al., 2017), as hexagons allow for versatile arrangement and efficient use of space (the source files we used to print the cards can be found at https://github.com/tongji-cdi/robot-knowledge-canvases/tree/master/Cards%20PDF). An example of the card design is shown in Figure 3.

### 3.3. Set up

The study was carried out in a controlled environment. As shown in Figure 4, the set-up consisted of a table with a sheet of erasable canvas and hexagonal cards sorted by knowledge types. We provided markers and erasers for annotation. There were also two empty boxes for participants to put unused and unclear cards.

Figure 4.
A photo of our study set-up. A researcher plays the role of the participant to protect the participants’ identity. The photo shows a tabletop: in front of the participant is a large canvas with six stacks of cards on it, from which the participant is taking cards and arranging them on the canvas. On the left are two boxes for unused and unclear cards, along with a scenario comic strip. On the right is a rectangular frame, along with markers and other tools. Farther from the participant, a microphone and a video camera on a tripod (out of the picture) point down at the canvas.

### 3.4. Procedure/Task

After signing a consent form, the participant could browse the material and ask questions. We then asked the participant to arrange the knowledge cards on the canvas in a way that they thought best communicated the information required in the scenario comic. The researcher left the working area during the process and observed the participant through a live camera feed. The participant could use the “question” and “finished” cards to signal the researcher when needed. There was no time limit for completing the task. After the creation, the participant was asked to communicate the knowledge on the display to the researcher using a frame; Figure 5 shows an example of the narration. Defining the task in this manner allowed us to identify the grouping and sequence of the cards presented. Finally, the researcher interviewed the participant about the arrangement logic and other interesting phenomena noticed during the process.

Figure 5. An example of the narration process of one of the participants (P2), for scenario C2.
The participant moves a rectangular frame across the canvas, stopping here and there to point to and narrate the cards inside the frame.

In total, each participant went through three similar sessions with scenarios communicating different knowledge types (semantic, procedural, episodic), which were randomly paired with the three working environments (home, office, elderly care center). After that, we asked participants to compare the knowledge representations they had created, with the three finished canvases shown on a screen.

### 3.5. Data Collection

A total of 12 university students aged from 21 to 27 (M = 23.83, SD = 1.51) from diverse fields took part in this study. We excluded participants knowledgeable in KGs or graph visualization to avoid interference from their prior knowledge. During the study, a camera was used to record the behavior of the participants. We recorded only the canvas area to maintain as much anonymity as possible. A photo of the canvas was taken after each session to record the result (the canvases created by the participants can be found at https://github.com/tongji-cdi/robot-knowledge-canvases). All interviews were recorded.

### 3.6. Data Analysis

Grounded theory is a common method for qualitative pattern generation in content and thematic analysis. However, it is mostly used to analyze interview text (Martin and Turner, 1986). Methods for analyzing diagram images and generating patterns from them are less explored. A relevant approach is visual grounded theory (Konecki, 2011), which analyzes the results of visual narrations and iteratively codes data into various categories. Although it is often used to code images from realistic scenes, the work of Chapman et al. (Chapman et al., 2017) inspired us. We developed a process that consists of five steps: positional coding, coding verification, abstraction, relational coding, and pattern generation. An overview of our methodology can be seen in Figure 6.

Figure 6. The process of our analysis.
A diagram showing the process of our analysis; the steps are the same as described in Section 3.6.

Figure 7. An illustration of the analysis process. The diagram shows four stages: the first panel shows small patches selected from the canvas image and coded with different names; the second shows abstract representations of the small patterns using circles and arrows; the third shows the content of the canvas image substituted by the abstract diagrams; and the last shows similar diagrams merged to become the final patterns.

##### Positional coding:

Two researchers separately open-coded any interesting phenomena that emerged from the canvas photos. These phenomena were coded as nodes in NVivo (the NVivo source file is released at https://github.com/tongji-cdi/robot-knowledge-canvases/releases/download/1.1/Coding_result.nvp). As a result of this step, each researcher generated a set of codes with descriptions and examples.

##### Coding verification:

The two researchers exchanged their sets of codes and reviewed them together. The review process consisted of merging similar codes and standardizing naming conventions, resulting in a codebook of code names and criteria for inclusion. After that, the two researchers separately re-coded the images according to the codebook. Comparison of the two researchers’ coding results shows reliable coding agreement (99% agreement, Cohen’s $\kappa$ = .79).

##### Abstraction:

The researchers first established a standard graphical library of low-level codes and used it to transcribe the canvas photos into abstract diagrams. An example of this process can be seen in Figure 7. By doing so, we abstracted the relationships between the codes away from the cards’ positions and names.

##### Relational coding:

One researcher open-coded the diagrams generated from the previous step.
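As a side note on the agreement statistics reported under coding verification above, percent agreement and Cohen's kappa can be reproduced with a short sketch. This is not the study's actual tooling (the authors coded in NVivo), and the label sequences below are hypothetical, not the study's real codes.

```python
# Hypothetical sketch: computing inter-rater statistics of the kind
# reported in the coding-verification step (percent agreement and
# Cohen's kappa). The labels below are illustrative only.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Fraction of items on which the two coders assigned the same code."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Observed agreement corrected for chance agreement, where chance
    is estimated from each coder's marginal label frequencies."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["map", "map", "flow", "flow", "tree", "map"]
b = ["map", "map", "flow", "tree", "tree", "map"]
print(round(percent_agreement(a, b), 2))  # 0.83
print(round(cohens_kappa(a, b), 2))       # 0.74
```

Note that kappa is lower than raw agreement because it discounts the agreement expected from the coders' label distributions alone, which is why the paper reports both figures.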
During coding, the researcher focused on discovering patterns in the overall composition of low-level codes, paying particular attention to what constitutes the main storyline of the diagram. This coding process resulted in a set of high-level codes. A second researcher reviewed the coding results and discussed them with the first until a consensus was reached. The two researchers then compiled the final codebook for high-level patterns.

##### Pattern generation:

The two researchers worked together to review the diagrams of low-level and high-level codes. They identified diagrams that share similar structures and merged the corresponding codes to arrive at the final patterns. In the end, we identified eight low-level patterns and four high-level patterns. The relations between these patterns were analyzed to form a pattern library.

Figure 8. An excerpt of our pattern library. There are three layers: element, component, and module. The component and module layers correspond to the low-level and high-level patterns we discuss below, and the elements correspond to the ontology groups defined in Table 1. The syntax described in the legends is used throughout this paper.

## 4\. Pattern Library

The pattern library we propose consists of three parts: elements, components, and modules. The elements include all types of the robotic situational knowledge analyzed in Table 1. The components (low-level patterns) are typical combinations and representations of certain elements. The modules (high-level patterns) are further built upon the components and can be easily applied to an interface. The structure of our pattern library can be seen in Figure 8, which shows an example of Module d (Location as the Main Story Line).

### 4.1.
Low-level Patterns

Low-level patterns are the components that make up the whole canvas, each mostly communicating a single message. These patterns can be employed when designing interface components that display specific information. An overview of the low-level patterns can be found in Figure 9, and the following sections provide a detailed description of each.

Figure 9. An overview of low-level patterns in our pattern library. “Any” in a diagram means a node of any type. “Entity” denotes something that physically exists, such as a user, an object, or a location. Eight patterns are present, each corresponding to a section below.

##### 1) Action Flowchart:

This pattern occurs when the participants want to show the logical flow of a series of actions. The inner structure connecting the actions in this pattern can be sequential, branching, or looping, similar to a flowchart.

##### 2) Action and Required Information:

This pattern is used to show that the robot used pre-existing knowledge in order to undertake an action. It takes the shape of a central action card connected to its required information. The knowledge type of the required information can be any of the semantic types, including environment, object, and user.

##### 3) Action and Its Results:

This pattern is often similar in shape and content to Action and Required Information. However, it is used to show that an action of the robot resulted in the addition of certain new knowledge. A typical example is when the robot recognizes a new agent in its working place: it may then add the knowledge automatically or by asking the user about it.

##### 4) Event Sequence:

This pattern is used to show a series of events that happened in chronological order. However, not all events are necessarily shown; participants often skip events that they deem irrelevant.
##### 5) Map:

This pattern is used to showcase a collection of entities (e.g., environments, users, and objects) in their corresponding locations, similar to a real-world map. The pattern occurs when the knowledge communicated is highly related to locations and there is no apparent ordering of the knowledge (e.g., temporal or alphabetical order).

##### 6) Entity and Relevant Information:

This pattern is employed to show detailed information about a central entity, which can be any semantic knowledge. A typical example is when a user card is placed and the participant surrounds it with relevant information such as age and room location.

##### 7) Classification Tree:

This pattern is typically a tree-like structure depicting the hierarchical classification of a collection of entities of the same type. An example is when a participant, trying to show all the people in an office, used department information to sort the people into two groups.

##### 8) Assorted Entities:

This pattern describes a cluster of entities of the same type, without any specific order.

### 4.2. High Level Patterns

Figure 10. An overview of high-level patterns in our pattern library. Shaded parts indicate the “main story line” of each high-level pattern. Four patterns are present, each corresponding to a section below.

High-level patterns communicate a more complex message by combining and re-arranging low-level patterns. These patterns can be employed when designing interface modules for different knowledge types and scenarios. Figure 10 presents an overview of the high-level patterns. The paragraphs below detail each pattern with a description, usage scenario, and design considerations.

#### 4.2.1.
Action as the Main Story Line

##### Pattern description:

An Action Flowchart is the most prominent feature of this pattern, often used in combination with Action and Required Information or Action and Its Results to show information relevant to each action in the flowchart.

##### When to use:

This pattern is often employed to explain the logical flow of robotic actions or to narrate the process of gathering specific information. Therefore, it mostly occurs in scenarios where the communicated knowledge is procedural or semantic.

##### Design considerations:

The action flowchart can be naturally represented on-screen as a flowchart, with relevant information linked to the actions without affecting the flowchart structure. Participants used various low-level patterns to organize the related semantic information, such as Assorted Entities and Classification Tree. The link between an action and the other cards is depicted in various ways, including arrows, lines, or simply proximity. This suggests that interface designers have a wide range of options when employing this pattern. In scenarios where mainly procedural knowledge is communicated, some of the participants neglected the semantic cards altogether and chose to present only the actions. Therefore, we suggest that in a procedural scenario the relevant information be collapsed into the action element and remain hidden until the user specifically asks for details of the action.

#### 4.2.2. Time as the Main Story Line

##### Pattern description:

This pattern features an Event Sequence as its main component. Other low-level patterns, especially Entity and Relevant Information, can be used to provide context.

##### When to use:

This pattern is most typically found in episodic scenarios, where the robot needs to show knowledge related to a specific time in the past.

##### Design considerations:

The Event Sequence can be naturally presented as a timeline.
Both horizontal and vertical timelines were observed during the study. However, no participant tried to show the exact time interval between two events on the timeline, even when such information was given. This suggests that when using this pattern, the ordering of events is essential while their exact timing is not. Many participants found the naming of the event elements hard to understand, as the names are from the robot’s perspective (e.g., “faces detected”, “start interacting with user”). Some participants omitted events they deemed irrelevant, especially those without relevant semantic information. This suggests that designers using this pattern should set rules to filter out events irrelevant to the user and translate the event names into user-friendly language.

#### 4.2.3. Relationship as the Main Story Line

##### Pattern description:

This pattern features a large collection of semantic information, which is typically sorted into groups using Assorted Entities or displayed as a Classification Tree. Sometimes a mix of the two is used, where part of the knowledge is organized in a tree-shaped structure and the rest is categorized into groups.

##### When to use:

This pattern is often seen in semantic scenarios when the goal is to quickly convey a large amount of inter-related semantic knowledge to the user. A typical example is when the user wants to check knowledge of a specific type; an interface that sorts knowledge by type can easily guide the user towards the goal.

##### Design considerations:

The Classification Tree can be shown as a collapsible tree. However, the designer may also choose to use hyperlinks between multiple screens to reduce clutter. Meanwhile, Assorted Entities can be represented most naturally as a list or group of elements on-screen. However, care must be taken when using Assorted Entities: a large number of elements of the same type can easily overwhelm the user.
Many participants preferred the Classification Tree and considered it a natural way to organize information. Some participants used symbols to represent frequently linked-to cards, drawing these symbols whenever a connection needed to be shown. This method can prove useful in the face of a large number of relations.

#### 4.2.4. Location as the Main Story Line

##### Pattern description:

This pattern uses a Map as its core, linking other related knowledge to locations in the Map using the Entity and Relevant Information pattern.

##### When to use:

This pattern is typical in semantic scenarios and occurs when the knowledge to be communicated is highly related to location. Such knowledge includes objects, people, and named locations in space. However, participants also used this pattern when the robot’s action involved traveling to multiple locations.

##### Design considerations:

A map of the working environment would be a suitable background when using this pattern. The objects, users, or named locations can be placed on the map accordingly. When using this pattern to describe the robot’s actions, some participants used an abstract representation of the working environment, placing the locations in a sequence to show the robot’s route.

## 5\. Prototyping

To ensure our pattern library’s applicability, we further investigated how the patterns may be applied by prototyping a service robot. We conducted Wizard-of-Oz testing using our prototype and identified several design challenges and recommendations by analyzing the questionnaire and interview data.

### 5.1. Design

We prototyped a service robot that took participants on a guided tour of the lab. The rationale behind choosing this scenario is two-fold. First, it is natural for the robot to communicate a large amount of knowledge of all three types (semantic, procedural, and episodic) during a tour. Second, it allowed us to apply real-world data to test the applicability of our patterns.
Three researchers who were not involved in the pattern generation process were provided with the pattern library. They were given limited time (4 hours) to use the patterns to design a series of screens for displaying the necessary knowledge in a specific scenario. We did not expect highly complete interfaces, since the focus of the design session was to gain first-hand experience in applying the patterns.

### 5.2. Implementation

Figure 11. The Wizard-of-Oz testing system implemented in our study.

We implemented the Wizard-of-Oz testing system on the Temi robot (https://www.robotemi.com). As shown in Figure 11, our system consists of a graph database using Neo4j (https://neo4j.com) for querying the KG, a remote control environment using Node-RED (http://nodered.org), and an Android application running on Temi that receives control commands from Node-RED, queries information from the KG database, and displays the UI (available at https://github.com/tongji-cdi/temi-woz-frontend for the dataset and UI, and https://github.com/tongji-cdi/temi-woz-android for the Node-RED remote control and Temi Android application). We then compiled a knowledge graph dataset according to the types of situational knowledge (Table 1). To test the applicability of our patterns to real-world problems, we curated data about the users, projects, objects, and locations from lab administration documents. We took care to anonymize the data by replacing person and object names. During development, our patterns demonstrated the ability to translate quickly into knowledge graph queries. An example is given in Figure 12. The designer specified a task flowchart consisting of task steps and relevant KG nodes.
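The pattern-to-query translation described here can be sketched in a few lines. The helper below is a minimal, hypothetical illustration, not the authors' actual implementation: the labels `Object` and `Location` and the relation `locatedIn` are assumed schema names chosen for the example.

```python
# Hypothetical sketch: build a Cypher query from an "Entity and Relevant
# Information" pattern spec. Label and relation names are illustrative only.
def pattern_to_cypher(entity_label: str, relation: str, info_label: str) -> str:
    """Return a Cypher query fetching entities plus their related info nodes."""
    return (
        f"MATCH (e:{entity_label})-[:{relation}]->(i:{info_label}) "
        f"RETURN e.name AS entity, collect(i.name) AS related_info"
    )

query = pattern_to_cypher("Object", "locatedIn", "Location")
print(query)
# MATCH (e:Object)-[:locatedIn]->(i:Location) RETURN e.name AS entity, collect(i.name) AS related_info
```

A query string like this could then be passed to a Neo4j driver, with the result rows used to populate the interface template.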
The patterns were then directly translated into the Neo4j query language Cypher (https://neo4j.com/developer/cypher) to pull the knowledge from the database.

Figure 12. Turning patterns into interface design and a Cypher query. The designer used the "Action as the Main Story Line" pattern, which consists of an action flowchart connected to the relevant information of the actions. The specified patterns correspond directly to structures in the robot knowledge graph, which in turn translate into graph database queries that match data in the robot KG; the match results are then used to fill information into the user interface.

### 5.3. Testing

Ten participants without expert knowledge of KGs took part in a simulated scenario with six tasks to complete. Participants assumed the role of a new research assistant and were guided by the robot on a tour of the lab. Along the tour, the accompanying researcher gave six tasks to the participant and recorded their completion status. Each task required the participant to look for some information using the interface on the robot's screen. The tasks cover the three scenarios (communicating semantic, procedural, and episodic knowledge) outlined in Section 3.1. Each scenario corresponds to two tasks: one requiring the participant to look for information on the current robot screen, and one requiring browsing the KG to search for the information. We designed a customized questionnaire based on the User Experience Questionnaire (UEQ) (Schrepp and Thomaschewski, 2019), as few standard questionnaires target robotic interfaces aimed at communicating knowledge. The customized questionnaire contains four dimensions: usefulness, comprehensibility, perspicuity, and clarity. The questionnaire can be found in the supplementary materials. Interviews were conducted after participants filled out the questionnaires.
We discussed the overall impression of the interface, the effects of the design, obstacles encountered during the tasks, and their opinions on using screen-based interaction for situational knowledge exchange with the robot. The recordings of the interviews were transcribed for further analysis.

### 5.4. Results

Figure 13. Questionnaire results from our Wizard-of-Oz testing. Scores range from -3 to 3, with positive numbers indicating a positive rating.

As seen in Figure 13, participants gave an overall positive rating for all four aspects of the interface. This indicates that the patterns were able to support us in designing effective interfaces to communicate situational knowledge. Participants gave high ratings for the clarity of our interface (questions 13-16, mean = 1.58, sd = 1.69). Many considered the interface useful (question 1, mean = 1.90, sd = .88) and felt it provides help when using the robot (question 2, mean = 2.00, sd = .94). Participants rated our interface as easy to learn (question 6, mean = 2.10, sd = .99). However, scores for comprehensibility and perspicuity were lower. Of note, some participants gave lower scores for helpfulness in understanding the robot (question 7, mean = .70, sd = 1.77). Some also thought that the information presented can be confusing (question 12, mean = .80, sd = 1.81), and for some participants it was hard to find the information they wanted (question 11, mean = .80, sd = 1.23). The reasoning behind these scores was discussed in the interviews that followed. In total, we recorded 122 minutes of interview audio. We analyzed the interview results while focusing on the successful and unsuccessful usage of the patterns.
We further discuss design recommendations generated from the analysis, as well as our work's limitations, in the following sections.

### 5.5. Design Recommendations

Below, we highlight themes found in the interview data and derive design recommendations for the patterns in our library.

##### 1) Use user-friendly language when presenting KG data.
All participants agreed that the classification of robotic knowledge into objects, users, and environment is intuitive. On the other hand, P10 felt uncertain about what type of information to look for, and P3 proposed that the naming of knowledge types could be customized. Therefore, we recommend that when using the Classification Tree pattern, groups be named according to users' habits, and that user-customizable names be considered. Moreover, it is advisable to translate robot ontology terms into more user-friendly names before display.

##### 2) Avoid loops in the knowledge graph.
Repetitive information can confuse the user. P3 indicated that it felt confusing to enter a loop of often-repeated information, and further commented on how it interfered with information retrieval. P8 found some page jumps cyclic and repetitive. This is due to the inherent cyclic structure present in the KG. We recommend removing cycles when applying the Map pattern and organizing information hierarchically using Classification Tree.

##### 3) Make it easier to look for information through relationships.
Four of the ten participants said the interface hierarchy was too deep. Three of them went back to the ontology list and looked through all nodes of a specific ontology type without following the relationships in the KG. Participants who found the information by following relationships felt it was easier to complete the task than those who did not. Thus we recommend using Entity and Relevant Information and enabling users to preview the information.

##### 4) Use Classification Tree to reduce information overload.
Users tend to reduce the complexity of information. P3 thought that the interface contained a large amount of information and required more time to explore; P2 suggested presenting less information at the beginning; and P5 suggested that the information presented by the interface should be gradually increased according to the user's needs. We have two design recommendations. First, the Assorted Entities pattern is not applicable when a large amount of information sits at the same level; we recommend combining Assorted Entities by common attributes or concepts to simplify the information. Second, we also recommend using Classification Tree to present information from coarse to fine granularity.

## 6. Discussion and future work

In this work, we described our process of developing a pattern library for designing interfaces to communicate situational knowledge in service robots. Results from prototyping and analysis focused on three main findings: the use of design patterns, the challenges of presenting a knowledge graph to non-expert users, and human-robot interaction with a KG interface. These results offer many future directions for improving the pattern library and designing better interactions for human-robot knowledge exchange.

### 6.1. Design patterns

During prototyping and testing, the pattern library effectively aided our design and development process. The patterns can be applied to real-world data, and interfaces designed using the patterns can easily use knowledge graph queries to populate their content. Our design session demonstrated that the patterns enable designers to produce interface designs without knowing the content of the robotic knowledge graph. Participants also responded favorably to the interface, highlighting its clarity in presenting robotic knowledge. Interviews with them provided us with insights on how to apply the patterns better.

### 6.2.
Challenges of designing a KG interface

Our testing also highlighted challenges that surface when designing a KG interface for non-expert users. We identified several challenges that originate from the knowledge graph's structure, content, and size. Participants often experienced confusion when loops occurred in the KG structure. The patterns we discovered typically use categorization and trees, which contain no loops, to organize semantic information. This warrants further investigation into how to support efficient exploration of the KG while maintaining a hierarchical abstraction. The naming of knowledge in the KG presents another challenge when displayed to non-expert users. The ontology of a KG is typically defined to facilitate effective programming, without taking understandability into account. Work is being done to define an ontology that is understandable for non-expert users (Diprose et al., 2012a). We believe that this is a necessary foundation for better human-robot knowledge exchange. The large amount of knowledge to be communicated poses another challenge: we have discussed how to apply hierarchical patterns to reduce clutter, but designers must also consider the resulting increase in interface depth.

### 6.3. Designing better interactions to communicate situational knowledge

Participants suggested various ways to incorporate multimodal interaction when using our interface. Some noted that an interface works well when the robot is showing a large amount of information, while many also noted that simple voice commands are more natural for querying a smaller amount of knowledge. This points toward a future research direction of combining multimodal interaction (e.g., voice, gesture, and gaze) with a KG interface to enable better communication of situational knowledge.

### 6.4.
Limitations and Future Work

The pattern library we developed relies on our observations and analysis of twelve participants' knowledge presentation behavior across nine human-robot interaction scenarios. While we carefully mapped out these scenarios to cover as many potential cases as possible and recruited participants from diverse backgrounds, additional scenarios and testing may lead to the discovery of more patterns. Moreover, the datasets we used were compiled specifically for the scenarios and may overlook many complexities of real-world robot knowledge graphs. Future work may include building a public, real-world KG dataset. This would enable comparing the performance of patterns and interfaces, as well as promote further research into human-understandable robot knowledge ontologies. In the prototyping session, researchers with limited practical experience in user interface design were tasked with designing the interfaces. While the participants gave positive feedback, results might have differed had professional robot interaction and interface designers conducted the experiment. However, we hypothesize that professional designers would, if anything, improve participant feedback, which would not change our conclusions. Finally, the pattern library we developed demonstrated its applicability to various knowledge communication scenarios. This warrants further research into developing authoring environments that help designers design and prototype robot knowledge interfaces quickly.

## 7. Conclusion

This paper proposes a pattern library that shows how end-users envision service robots organizing and visually representing situational knowledge. We then prototyped a service robot based on the patterns and used Wizard-of-Oz testing to generate a series of design recommendations for knowledge-based interface design in robots. Future work includes using the patterns to create more robot interfaces and testing the designs in more scenarios.
We hope our work can inspire more researchers and interface designers to explore diverse approaches to situational knowledge exchange between humans and robots.

## References

* Alexander (1977) Christopher Alexander. 1977. _A pattern language: towns, buildings, construction_. Oxford University Press.
* Alexander (1979) Christopher Alexander. 1979. _The Timeless Way of Building_. Oxford University Press, New York, NY.
* Alexander and Judy (1988) Patricia A. Alexander and Judith E. Judy. 1988. The Interaction of Domain-Specific and Strategic Knowledge in Academic Performance. _Review of Educational Research_ 58, 4 (1988), 375–404. https://doi.org/10.3102/00346543058004375
* Azevedo et al. (2019) Helio Azevedo, José Pedro R. Belo, and Roseli A. F. Romero. 2019. Using Ontology as a Strategy for Modeling the Interface Between the Cognitive and Robotic Systems. _Journal of Intelligent & Robotic Systems_ (Aug. 2019). https://doi.org/10.1007/s10846-019-01076-0
* Barba et al. (2020) Evan Barba, Anthony Lioon, Christopher Miller, and Yasir Majeed Khan. 2020. Tele-robotic Interface Design in Context: A Case for Recursive Design. In _Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–8.
* Bastianelli et al. (2013) Emanuele Bastianelli, Domenico Bloisi, Roberto Capobianco, Guglielmo Gemignani, Luca Iocchi, and Daniele Nardi. 2013. Knowledge representation for robots through human-robot interaction. _arXiv preprint arXiv:1307.7351_ (2013).
* Bruno et al. (2019) Barbara Bruno, Carmine Tommaso Recchiuto, Irena Papadopoulos, Alessandro Saffiotti, Christina Koulouglioti, Roberto Menicatti, Fulvio Mastrogiovanni, Renato Zaccaria, and Antonio Sgorbissa. 2019. Knowledge Representation for Culturally Competent Personal Robots: Requirements, Design Principles, Implementation, and Assessment. _International Journal of Social Robotics_ 11, 3 (June 2019), 515–538.
https://doi.org/10.1007/s12369-019-00519-w
* Carpenter and Zachary (2017) Taylor J Carpenter and Wayne W Zachary. 2017. Using context and robot-human communication to resolve unexpected situational conflicts. In _2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)_. IEEE, 1–7.
* Chang et al. (2020) Doo Soo Chang, Gun Hee Cho, and Yong Suk Choi. 2020. Ontology-based knowledge model for human-robot interactive services. In _Proceedings of the 35th Annual ACM Symposium on Applied Computing_. 2029–2038.
* Chapman et al. (2017) Mimi V Chapman, Shiyou Wu, and Meihua Zhu. 2017. What is a picture worth? A primer for coding and interpreting photographic data. _Qualitative Social Work_ 16, 6 (2017), 810–824.
* Chen et al. (2007) Jessie YC Chen, Ellen C Haas, and Michael J Barnes. 2007. Human performance issues and user interface design for teleoperated robots. _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_ 37, 6 (2007), 1231–1245.
* de Jong and Ferguson-Hessler (1996) Ton de Jong and Monica G.M. Ferguson-Hessler. 1996. Types and qualities of knowledge. _Educational Psychologist_ 31, 2 (1996), 105–113. https://doi.org/10.1207/s15326985ep3102_2
* Diprose et al. (2012b) James P Diprose, Beryl Plimmer, Bruce A MacDonald, John Hosking, et al. 2012b. How people naturally describe robot behaviour. (2012).
* Diprose et al. (2012a) J P Diprose, B Plimmer, B A MacDonald, and J G Hosking. 2012a. How People Naturally Describe Robot Behaviour. _New Zealand_ (2012), 9.
* Fang et al. (2015) Rui Fang, Malcolm Doering, and Joyce Y Chai. 2015. Embodied collaborative referring expression generation in situated human-robot interaction. In _Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction_. 271–278.
* Hogan et al.
(2020) Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d’Amato, Gerard de Melo, Claudio Gutierrez, José Emilio Labra Gayo, Sabrina Kirrane, Sebastian Neumaier, Axel Polleres, et al. 2020\. Knowledge graphs. _arXiv preprint arXiv:2003.02320_ (2020). * Il Hong Suh et al. (2007) Il Hong Suh, Gi Hyun Lim, Wonil Hwang, Hyowon Suh, Jung-Hwa Choi, and Young-Tack Park. 2007. Ontology-based multi-layered robot knowledge framework (OMRKF) for robot intelligence. In _2007 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, San Diego, CA, USA, 429–436. https://doi.org/10.1109/IROS.2007.4399082 * Javia and Cimiano (2016) Binal Javia and Philipp Cimiano. 2016. A knowledge-based architecture supporting declarative action representation for manipulation of everyday objects. In _Proceedings of the 3rd Workshop on Model-Driven Robot Software Engineering_. 40–46. * Jeon et al. ([n.d.]) Hwawoo Jeon, Kyon-Mo Yang, Sungkee Park, Jongsuk Choi, and Yoonseob Lim. [n.d.]. An Ontology-based Home Care Service Robot for Persons with Dementia. ([n. d.]), 6. * Jiang (2020) Meng Jiang. 2020\. Improving situational awareness with collective artificial intelligence over knowledge graphs. In _Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II_ , Vol. 11413. International Society for Optics and Photonics, 114130J. * Jokinen et al. (2019) Kristiina Jokinen, Satoshi Nishimura, Kentaro Watanabe, and Takuichi Nishimura. 2019. Human-robot dialogues for explaining activities. In _9th International Workshop on Spoken Dialogue System Technology_. Springer, 239–251. * Kahn et al. (2008a) Peter H Kahn, Nathan G Freier, Takayuki Kanda, Hiroshi Ishiguro, Jolina H Ruckert, Rachel L Severson, and Shaun K Kane. 2008a. Design patterns for sociality in human-robot interaction. In _Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction_. 97–104. * Kahn et al. (2008b) Peter H. Kahn, Nathan G. 
Freier, Takayuki Kanda, Hiroshi Ishiguro, Jolina H. Ruckert, Rachel L. Severson, and Shaun K. Kane. 2008b. Design patterns for sociality in human-robot interaction. In _Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction_ _(HRI ’08)_. Association for Computing Machinery, New York, NY, USA, 97–104. https://doi.org/10.1145/1349822.1349836 * Kattepur (2019) Ajay Kattepur. 2019\. RoboPlanner: autonomous robotic action planning via knowledge graph queries. In _Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing_. 953–956. * Kennington and Shukla (2017) Casey Kennington and Aprajita Shukla. 2017. A Graphical Digital Personal Assistant that Grounds and Learns Autonomously. In _Proceedings of the 5th International Conference on Human Agent Interaction_. 353–357. * Klakegg et al. (2017) Simon Klakegg, Niels van Berkel, Aku Visuri, Hanna-Leena Huttunen, Simo Hosio, Chu Luo, Jorge Goncalves, and Denzil Ferreira. 2017\. Designing a context-aware assistive infrastructure for elderly care. In _Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers_. 563–568. * Konecki (2011) Krzysztof Tomasz Konecki. 2011\. Visual grounded theory: A methodological outline and examples from empirical work. _Revija za sociologiju_ 41, 2 (2011), 131–160. * Kruijff et al. (2007) Geert-Jan M. Kruijff, Hendrik Zender, Patric Jensfelt, and Henrik I. Christensen. 2007. Situated Dialogue and Spatial Organization: What, Where… and Why? _International Journal of Advanced Robotic Systems_ 4, 1 (March 2007), 16. https://doi.org/10.5772/5701 * Laird et al. (2017) John E. Laird, Christian Lebiere, and Paul S. Rosenbloom. 2017\. A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. _AI Magazine_ 38, 4 (Dec. 2017), 13–26. 
https://doi.org/10.1609/aimag.v38i4.2744 * Lemaignan et al. (2017a) Séverin Lemaignan, Mathieu Warnier, E Akin Sisbot, Aurélie Clodic, and Rachid Alami. 2017a. Artificial cognition for social human–robot interaction: An implementation. _Artificial Intelligence_ 247 (2017), 45–69. * Lemaignan et al. (2017b) Séverin Lemaignan, Mathieu Warnier, E. Akin Sisbot, Aurélie Clodic, and Rachid Alami. 2017b. Artificial cognition for social human–robot interaction: An implementation. _Artificial Intelligence_ 247 (June 2017), 45–69. https://doi.org/10.1016/j.artint.2016.07.002 * Leonardi et al. (2019) Nicola Leonardi, Marco Manca, Fabio Paternò, and Carmen Santoro. 2019. Trigger-action programming for personalising humanoid robot behaviour. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_. 1–13. * Lim et al. (2011) Gi Hyun Lim, Il Hong Suh, and Hyowon Suh. 2011. Ontology-Based Unified Robot Knowledge for Service Robots in Indoor Environments. _IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans_ 41, 3 (May 2011), 492–509. https://doi.org/10.1109/TSMCA.2010.2076404 * Liu et al. (2010) Changsong Liu, Jacob Walker, and Joyce Y Chai. 2010\. Ambiguities in spatial language understanding in situated human robot dialogue. In _2010 AAAI Fall Symposium Series_. * Liu et al. (2018) Hangxin Liu, Yaofang Zhang, Wenwen Si, Xu Xie, Yixin Zhu, and Song-Chun Zhu. 2018\. Interactive robot knowledge patching using augmented reality. In _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 1947–1954. * Mahieu et al. (2019) Christof Mahieu, Femke Ongenae, Femke De Backere, Pieter Bonte, Filip De Turck, and Pieter Simoens. 2019\. Semantics-based platform for context-aware and personalized robot interaction in the internet of robotic things. _Journal of Systems and Software_ 149 (March 2019), 138–157. https://doi.org/10.1016/j.jss.2018.11.022 * Martin and Turner (1986) Patricia Yancey Martin and Barry A Turner. 
1986. Grounded theory and organizational research. _The journal of applied behavioral science_ 22, 2 (1986), 141–157. * Marvel et al. (2020) Jeremy A Marvel, Shelly Bagchi, Megan Zimmerman, and Brian Antonishek. 2020. Towards Effective Interface Designs for Collaborative HRI in Manufacturing: Metrics and Measures. _ACM Transactions on Human-Robot Interaction (THRI)_ 9, 4 (2020), 1–55. * Mioch et al. (2014) Tina Mioch, Wietse Ledegang, Rosie Paulissen, Mark A Neerincx, and Jurriaan van Diggelen. 2014\. Interaction design patterns for coherent and re-usable shape specifications of human-robot collaboration. In _Proceedings of the 2014 ACM SIGCHI symposium on Engineering interactive computing systems_. 75–83. * Nieto-Granda et al. (2010) Carlos Nieto-Granda, John G Rogers, Alexander JB Trevor, and Henrik I Christensen. 2010. Semantic map partitioning in indoor environments using regional analysis. In _2010 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 1451–1456. * Oguz et al. (2019) Ozgur S Oguz, Wolfgang Rampeltshammer, Sebastian Paillan, and Dirk Wollherr. 2019. An Ontology for Human-Human Interactions and Learning Interaction Behavior Policies. _ACM Transactions on Human-Robot Interaction (THRI)_ 8, 3 (2019), 1–26. * Olivares-Alarcos et al. (2019) Alberto Olivares-Alarcos, Daniel Beßler, Alaa Khamis, Paulo Goncalves, Maki K. Habib, Julita Bermejo-Alonso, Marcos Barreto, Mohammed Diab, Jan Rosell, João Quintas, Joanna Olszewska, Hirenkumar Nakawala, Edison Pignaton, Amelie Gyrard, Stefano Borgo, Guillem Alenyà, Michael Beetz, and Howard Li. 2019. A review and comparison of ontology-based approaches to robot autonomy. _The Knowledge Engineering Review_ 34 (2019), e29. https://doi.org/10.1017/S0269888919000237 * Oliveira et al. (2018) Raquel Oliveira, Patrícia Arriaga, Patrícia Alves-Oliveira, Filipa Correia, Sofia Petisca, and Ana Paiva. 2018\. Friends or foes? 
Socioemotional support and gaze behaviors in mixed groups of humans and robots. In _Proceedings of the 2018 ACM/IEEE international conference on human-robot interaction_. 279–288. * Padilla et al. (2017) Stefano Padilla, Thomas S. Methven, David A. Robb, and Mike J. Chantler. 2017. Understanding Concept Maps: A Closer Look at How People Organise Ideas. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_. ACM, Denver Colorado USA, 815–827. https://doi.org/10.1145/3025453.3025977 * Paxton et al. (2018) Chris Paxton, Felix Jonathan, Andrew Hundt, Bilge Mutlu, and Gregory D Hager. 2018. Evaluating methods for end-user creation of robot task plans. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 6086–6092. * Pea (1987) Roy D Pea. 1987\. User centered system design: new perspectives on human-computer interaction. (1987). * Pronobis and Jensfelt (2012) Andrzej Pronobis and Patric Jensfelt. 2012. Large-scale semantic mapping and reasoning with heterogeneous modalities. In _2012 IEEE international conference on robotics and automation_. IEEE, 3515–3522. * Riley et al. (2010) Jennifer M Riley, Laura D Strater, Sheryl L Chappell, Erik S Connors, and Mica R Endsley. 2010\. Situation awareness in human-robot interaction: Challenges and user interface requirements. _Human-Robot Interactions in Future Military Operations_ (2010), 171–192. * Sauppé and Mutlu (2014) Allison Sauppé and Bilge Mutlu. 2014. Design patterns for exploring and prototyping human-robot interactions. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_. 1439–1448. * Schrepp and Thomaschewski (2019) Martin Schrepp and Jörg Thomaschewski. 2019. Eine modulare Erweiterung des User Experience Questionnaire. https://doi.org/10.18420/muc2019-up-0108 * Seffah (2010) Ahmed Seffah. 2010\. The evolution of design patterns in HCI: from pattern languages to pattern-oriented design. 
In _Proceedings of the 1st International Workshop on Pattern-Driven Engineering of Interactive Computing Systems_. 4–9. * Sisbot et al. (2011) E Akin Sisbot, Raquel Ros, and Rachid Alami. 2011. Situation assessment for human-robot interactive object manipulation. In _2011 RO-MAN_. IEEE, 15–20. * Tenorth and Beetz (2013) Moritz Tenorth and Michael Beetz. 2013. KnowRob: A knowledge processing infrastructure for cognition-enabled robots. _The International Journal of Robotics Research_ 32, 5 (April 2013), 566–590. https://doi.org/10.1177/0278364913481635 * Tiddi et al. (2017) Ilaria Tiddi, Emanuele Bastianelli, Gianluca Bardaro, Mathieu d’Aquin, and Enrico Motta. 2017\. An ontology-based approach to improve the accessibility of ROS-based robotic systems. In _Proceedings of the Knowledge Capture Conference_. 1–8. * Tiddi et al. (2018) Ilaria Tiddi, Emanuele Bastianelli, Gianluca Bardaro, and Enrico Motta. 2018. A User-friendly Interface to Control ROS Robotic Platforms.. In _International Semantic Web Conference (P &D/Industry/BlueSky)_. * Topp (2017) Elin Anna Topp. 2017\. Interaction patterns in human augmented mapping. _Advanced Robotics_ 31, 5 (2017), 258–267. * Tsiakas et al. (2017) Konstantinos Tsiakas, Michalis Papakostas, Michail Theofanidis, Morris Bell, Rada Mihalcea, Shouyi Wang, Mihai Burzo, and Fillia Makedon. 2017. An interactive multisensing framework for personalized human robot collaboration and assistive training using reinforcement learning. In _Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments_. 423–427. * Zachary et al. (2015) Wayne Zachary, Matthew Johnson, R Hoffman, Travis Thomas, Andrew Rosoff, and Thomas Santarelli. 2015\. A context-based approach to robot-human interaction. _Procedia Manufacturing_ 3 (2015), 1052–1059. * Zachary et al. (2013) W Zachary, A Rosoff, L Miller, S Read, K Laskey, I Emmons, and P Costa. 2013\. Context as cognitive process. In _Proc. of_.
# System size dependence of baryon-strangeness correlations in relativistic heavy ion collisions from a multiphase transport model

Dong-Fang Wang(王东方) Song Zhang(张松) Yu-Gang Ma(马余刚)

Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai 200433, China

###### Abstract

The system size dependence of the baryon-strangeness (BS) correlation ($C_{BS}$) is investigated with a multiphase transport (AMPT) model for various collision systems, from $\mathrm{{}^{10}B+^{10}B}$, $\mathrm{{}^{12}C+^{12}C}$, $\mathrm{{}^{16}O+^{16}O}$, $\mathrm{{}^{20}Ne+^{20}Ne}$, and $\mathrm{{}^{40}Ca+^{40}Ca}$ to $\mathrm{{}^{96}Zr+^{96}Zr}$ and $\mathrm{{}^{197}Au+^{197}Au}$, at RHIC energies $\sqrt{s_{NN}}$ of 200, 39, 27, 20, and 7.7 GeV. Both hadron rescattering and the combination of different hadron species play a leading role in the baryon-strangeness correlation. When the kinetic window is limited to absolute rapidity $|y|>3$, the correlation tends to be constant after the final-state interaction, whichever subset of hadrons is chosen within the AMPT framework. The correlation is found to increase smoothly with baryon chemical potential $\mu_{B}$, as the collision system or energy moves from the quark-gluon-plasma-like phase to the hadron-gas-like phase. In addition, the influence of the initial nuclear geometry of $\alpha$-clustered nuclei in $\mathrm{{}^{12}C+^{12}C}$ and $\mathrm{{}^{16}O+^{16}O}$ collisions is discussed, but the effect is found to be negligible. The present model study provides baselines for searching for signals of the Quantum Chromodynamics (QCD) phase transition and critical point in heavy-ion collisions through the BS correlation.
## I Introduction

Relativistic heavy-ion collisions create nuclear matter with an energy density high enough that one expects a quark-gluon plasma (QGP) to form Adams _et al._ (2005); Back _et al._ (2005); Adcox _et al._ (2005); Arsene _et al._ (2005). The fundamental challenge remains how to identify this hot and dense quark matter and fully understand the phase diagram of QGP matter. Lattice QCD calculations have indicated that the transition from the hadronic phase to the QGP phase is a crossover at zero baryon chemical potential $(\mu_{B}=0)$, with a transition temperature $T_{c}\approx 166$ MeV Aoki _et al._ (2009); Bazavov _et al._ (2012a). For a finite-size system, the transition temperature $T_{c}$ could shift to a value higher than that in an unconstrained space Han _et al._ (2020). To address these considerations, researchers performed a beam-energy scan at the BNL Relativistic Heavy Ion Collider (RHIC) from 2010 to 2017 Ackermann _et al._ (2003); Aggarwal et al. (2010); Adamczyk _et al._ (2017). One of the promising approaches to probing the QGP phase transition involves fluctuations Koch (2008); Jeon and Koch (2003). Theoretical calculations predicted that fluctuations and correlations of conserved charges are distinctly different in the hadronic and QGP phases Luo and Xu (2017), and that they are experimentally accessible for distinguishing between these two phases Adare _et al._ (2016). Experimental analyses of event-by-event fluctuations of net-conserved charges such as baryon number ($B$), electric charge ($Q$), and strangeness ($S$), in particular their higher-order cumulants, have been reported at RHIC Chatterjee (2019); Adamczyk _et al._ (2014) and the LHC Friman _et al._ (2011); Rustamov (2017).
One event-by-event fluctuation observable was proposed by Koch et al. Koch _et al._ (2005), namely the baryon-strangeness correlation coefficient, $\displaystyle\begin{aligned} C_{BS}=-3\frac{\left\langle BS\right\rangle-\left\langle B\right\rangle\left\langle S\right\rangle}{\left\langle S^{2}\right\rangle-\left\langle S\right\rangle^{2}},\end{aligned}$ (1) where $B$ and $S$ are the net-baryon number and net-strangeness in one event, respectively, and $\left\langle\cdot\right\rangle$ denotes the average over a suitable ensemble of events. The $BS$ correlation is considered a useful tool to characterize whether the highly compressed and heated matter created in heavy-ion collisions passes through an ideal QGP phase, a strongly coupled QGP phase, or a hadronic phase. In previous analyses, several specific models were applied, such as the $(2+1)$ Polyakov quark-meson model Chatterjee and Mohan (2012), the hadron resonance gas model Bazavov _et al._ (2012b); Bhattacharyya _et al._ (2014), the UrQMD model Chatterjee _et al._ (2016); Haussler _et al._ (2007); Yang _et al._ (2017), as well as the AMPT model Jin _et al._ (2008), to study the fluctuations and compare them with lattice QCD results Gavai and Gupta (2006); Bazavov _et al._ (2014). Research on small systems has been performed for several years, both experimentally and theoretically Loizides (2016); Koop _et al._ (2016), and several proposals for a system scan (e.g. $\mathrm{O}+\mathrm{O}$) Sievert and Noronha Hostler (2019) have been made to study the possible signals of QGP matter in small systems as well as to investigate the initial-state effects on final-state observables Huang _et al._ (2020); Sievert and Noronha Hostler (2019); Zhang _et al._ (2020). We also note that the ALICE Collaboration reported enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions Adam et al. (ALICE Collaboration).
In this context, we expect that the baryon-strangeness correlation, which is related to the QGP phase transition, may also be sensitive to the evolution from small to large systems in heavy-ion collisions.
Table 1: AMPT input parameters and $\left\langle\mathrm{N_{part}}\right\rangle$ values of different collision systems.
System | $b_{\text{max}}$ [fm] | $\left\langle\mathrm{N_{part}}\right\rangle$ (200 GeV) | Events (200 GeV) | $\left\langle\mathrm{N_{part}}\right\rangle$ (20 GeV) | Events (20 GeV) | $\left\langle\mathrm{N_{part}}\right\rangle$ (7.7 GeV) | Events (7.7 GeV)
---|---|---|---|---|---|---|---
$\mathrm{{}^{10}B+^{10}B}$ | 1.15619 | 14.8 | 7$\times 10^{4}$ | 13.2 | 12$\times 10^{4}$ | 13.1 | 16$\times 10^{4}$
$\mathrm{{}^{12}C+^{12}C}$ | 1.22864 | 18.7 | 10$\times 10^{4}$ | 16.8 | 6$\times 10^{4}$ | 16.7 | 10$\times 10^{4}$
$\mathrm{{}^{16}O+^{16}O}$ | 1.35229 | 25.5 | 10$\times 10^{4}$ | 23.1 | 4$\times 10^{4}$ | 23.0 | 10$\times 10^{4}$
$\mathrm{{}^{20}Ne+^{20}Ne}$ | 1.45671 | 32.8 | 2$\times 10^{4}$ | 30.0 | 4$\times 10^{4}$ | 29.8 | 2$\times 10^{4}$
$\mathrm{{}^{40}Ca+^{40}Ca}$ | 1.83534 | 69.3 | 2$\times 10^{4}$ | 65.0 | 1$\times 10^{4}$ | 64.9 | 1$\times 10^{4}$
$\mathrm{{}^{96}Zr+^{96}Zr}$ | 2.45727 | 174.2 | 2$\times 10^{4}$ | 167.3 | 2$\times 10^{4}$ | 166.9 | 3$\times 10^{4}$
$\mathrm{{}^{197}Au+^{197}Au}$ | 3.1226 | 364.1 | 1$\times 10^{4}$ | 354 | 3$\times 10^{4}$ | 353.8 | 3$\times 10^{4}$
In this work, we adopt a multiphase transport model to study the influence of collision system size on the baryon-strangeness correlation $C_{BS}$. By tuning the collision energies of the two nuclei, we also investigate the energy dependence of $C_{BS}$.
The maximum rapidity acceptance $y_{\text{max}}$ dependence and the influence of the initial nuclear geometric structure are also discussed. The paper is organized as follows. First, in Sec. II, a short introduction to the multiphase transport model and its input parameters is presented, and the physical picture of baryon-strangeness correlations is briefly described. Next, based on the AMPT model, the dependence of the baryon-strangeness correlation on system size, center-of-mass energy, and $y_{\text{max}}$ is discussed in Sec. III. Finally, a brief summary is presented in Sec. IV. ## II Model and methodology ### II.1 Brief introduction to the AMPT model A multiphase transport (AMPT) model, which is a hybrid dynamical model, is employed to simulate the different collision systems. The AMPT model can describe the $p_{T}$ distribution of charged particles Xu and Ko (2011); Pal and Bleicher (2012); Ye _et al._ (2017); Jin _et al._ (2018) and their elliptic flow in Pb+Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV, as measured by the ALICE Collaboration at the LHC. The model includes four main components to describe the relativistic heavy-ion collision process: the initial condition, simulated with the Heavy Ion Jet Interaction Generator (HIJING) model Wang and Gyulassy (1991); Gyulassy and Wang (1994); the partonic interaction, described by Zhang's Parton Cascade (ZPC) model Zhang (1998); the hadronization process, which proceeds via Lund string fragmentation or a coalescence model; and the hadron rescattering process, treated by A Relativistic Transport (ART) model Li and Ko (1995). There are two versions of AMPT: 1) the string melting version, in which a partonic phase is generated from the excited strings of the HIJING model and a simple quark coalescence model is used to combine the partons into hadrons; and 2) the default version, which only undergoes a pure hadron gas phase.
The AMPT model succeeds in describing extensive physics topics for relativistic heavy-ion collisions at RHIC Lin _et al._ (2005) as well as LHC Ma and Lin (2016) energies, e.g. hadron HBT correlations Lin _et al._ (2002), di-hadron azimuthal correlations Ma _et al._ (2006); Wang _et al._ (2019), collective flows Abelev et al. (2008); Bzdak and Ma (2014), strangeness production Jin _et al._ (2018, 2019), as well as chiral magnetic effects Zhao _et al._ (2019); Liu and Huang (2020); Wang and Zhao (2018); Xu _et al._ (2018). The details of AMPT can be found in Ref. Lin _et al._ (2005). In the AMPT model, the impact parameter $b$, defined as the transverse distance between the centers of the two colliding nuclei, determines the collision centrality. The number of participants $\mathrm{N_{part}}$ is also related to the centrality or impact parameter. We adopt the AMPT parameters as suggested in Ref. Ye _et al._ (2017). The simulated collision systems and energies, their corresponding maximum impact parameters, $\mathrm{N_{part}}$, and the numbers of events are listed in Table 1. ### II.2 Baryon-strangeness correlations Finding a suitable probe to distinguish QGP matter is the key to understanding the QGP phase transition in relativistic heavy-ion collisions. The correlation coefficient $C_{BS}$, calculated from conserved quantities that are less affected by uncertainties from hadronization, has an advantage over other probes. Under the ideal QGP assumption, where the basic degrees of freedom are weakly interacting quarks and gluons at high temperature, $\left\langle S\right\rangle$ remains $0$ and $C_{BS}$ can be written as $C_{BS}=-3\frac{\left\langle BS\right\rangle}{\left\langle S^{2}\right\rangle}=1$, noting that strangeness is carried only by $s$ quarks Koch _et al._ (2005). This differs from the hadron gas phase, where the coefficient depends strongly on the hadronic environment.
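The ideal-QGP limit quoted above can be checked with a toy Monte Carlo (a sketch for illustration, not part of the AMPT analysis): for independent, Poisson-distributed quark and antiquark multiplicities, applying the estimator of Eq. (1) at the quark level should return $C_{BS}=1$.

```python
import numpy as np

# Toy check (not an AMPT calculation): for uncorrelated quark and
# antiquark multiplicities, C_BS evaluated at the quark level equals 1.
rng = np.random.default_rng(0)
n_events = 200_000
counts = {f: rng.poisson(5.0, n_events)
          for f in ("u", "ubar", "d", "dbar", "s", "sbar")}

# Baryon number: +1/3 per quark, -1/3 per antiquark.
B = (counts["u"] + counts["d"] + counts["s"]
     - counts["ubar"] - counts["dbar"] - counts["sbar"]) / 3.0
# Strangeness: -1 per s quark, +1 per sbar antiquark.
S = counts["sbar"] - counts["s"]

cov_BS = np.mean(B * S) - np.mean(B) * np.mean(S)
var_S = np.mean(S**2) - np.mean(S)**2
C_BS = -3.0 * cov_BS / var_S
print(f"C_BS = {C_BS:.3f}")  # close to 1 within statistics
```

Only the $s$ and $\bar{s}$ terms contribute to both $B$ and $S$, so the covariance equals $-\mathrm{var}(S)/3$ and the factor $-3$ restores unity, as in the analytic argument above.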
Based on an assumption of uncorrelated multiplicities, $C_{BS}$ can be written as Koch _et al._ (2005) $C_{BS}\approx 3\frac{\langle\Lambda\rangle+\langle\bar{\Lambda}\rangle+\cdots+3\left\langle\Omega^{-}\right\rangle+3\left\langle\bar{\Omega}^{+}\right\rangle}{\left\langle K^{0}\right\rangle+\left\langle\bar{K}^{0}\right\rangle+\cdots+9\left\langle\Omega^{-}\right\rangle+9\left\langle\bar{\Omega}^{+}\right\rangle}.$ (2) In the actual calculation Koch _et al._ (2005), $C_{BS}$ is expressed as $C_{BS}=-3\frac{\sum_{n}B^{(n)}S^{(n)}-\frac{1}{N}\left(\sum_{n}B^{(n)}\right)\left(\sum_{n}S^{(n)}\right)}{\sum_{n}\left(S^{(n)}\right)^{2}-\frac{1}{N}\left(\sum_{n}S^{(n)}\right)^{2}},$ (3) where $B^{(n)}$ and $S^{(n)}$ denote the net-baryon number and net-strangeness observed in event $n$, respectively, and $N$ is the total number of events. Furthermore, attention should also be paid to the statistical errors, as suggested in Refs. Yang _et al._ (2017); Xiao (2012) (see the Appendix). ## III Results and discussion The combination of hadrons plays an important role in the measurement of the baryon-strangeness correlation $C_{BS}$. To investigate this effect, the distribution of net-baryon $B$ versus net-strangeness $S$ is presented in Fig. 1. We chose two combinations of hadrons for the baryon-strangeness correlation calculation in this figure: Case (I), $p$+$n$+$\Lambda$+$\Sigma^{\pm}$+$\Xi^{\pm}$+$\Omega^{-}$+$K$, and Case (II), $p$+$n$+$K$, where both particles and antiparticles are included within the kinematic windows $0.1<p_{T}<3.0$ GeV/c and $|y|<0.2$. In this figure, we observe that the baryon-strangeness distribution is more concentrated around the center when hadron rescattering is switched off, which corresponds to stronger correlations. When more strange baryons (and anti-baryons) are counted, the distribution is stretched into an elliptical shape, leading to a finite negative correlation.
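As a minimal sketch of how Eq. (3) is evaluated in practice: assign each hadron its baryon number and strangeness (standard quantum-number values), sum them per event, and form the ratio of moments. The event lists below are hypothetical stand-ins for model output.

```python
import numpy as np

# Baryon number and strangeness per hadron species (standard values);
# an antiparticle flips the sign of both B and S.
QN = {  # species: (B, S)
    "p": (1, 0), "n": (1, 0), "Lambda": (1, -1), "Sigma+": (1, -1),
    "Xi-": (1, -2), "Omega-": (1, -3),
    "K+": (0, 1), "K0": (0, 1),
}

def net_BS(event):
    """Net baryon number and net strangeness of one event.

    `event` is a list of (species, sign) pairs, sign = +1 for the
    particle and -1 for its antiparticle.
    """
    B = S = 0
    for species, sign in event:
        b, s = QN[species]
        B += sign * b
        S += sign * s
    return B, S

def c_bs(events):
    """Estimator of Eq. (3): C_BS = -3 cov(B, S) / var(S)."""
    B, S = np.array([net_BS(ev) for ev in events]).T
    cov = np.mean(B * S) - np.mean(B) * np.mean(S)
    var = np.mean(S**2) - np.mean(S)**2
    return -3.0 * cov / var

events = [  # hypothetical toy events
    [("p", +1), ("Lambda", +1), ("K+", +1)],
    [("n", +1), ("K0", +1), ("K+", -1)],
    [("p", +1), ("p", -1), ("Lambda", +1), ("K+", +1), ("K0", +1)],
]
print(c_bs(events))  # 1.5 (up to floating point)
```

The same function applies to either hadron combination: restricting `events` to $p$, $n$, and $K$ entries reproduces the Case (II) selection.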
This correlation can be quantified by the Pearson coefficient: $\rho_{B,S}=\frac{\operatorname{cov}(B,S)}{\sigma_{B}\sigma_{S}}=\frac{\langle(B-\langle B\rangle)(S-\langle S\rangle)\rangle}{\sqrt{\left\langle B^{2}\right\rangle-\langle B\rangle^{2}}\sqrt{\left\langle S^{2}\right\rangle-\langle S\rangle^{2}}}.$ (4) The more strange baryons are included, the more negative the correlation between $B$ and $S$ becomes. Figure 1: The correlation between net-baryon $B$ and net-strangeness $S$ for two different subsets of hadrons in the most central (0$-$5%) $\mathrm{{}^{197}Au+^{197}Au}$ collisions at $\sqrt{s_{NN}}=200$ GeV with the string melting AMPT framework. Panels (a) and (b) correspond to Case (II) with and without hadron rescattering, while (c) and (d) correspond to Case (I) with and without hadron rescattering. We focus on the Case (II) hadron combination for calculating correlations, and also present Case (I) results to compare the effects of different hadron combinations. Figure 2 shows the system size dependence of $C_{BS}$ for the Case (II) hadrons, with and without hadron rescattering, in the most central (0$-$5%) $\mathrm{{}^{10}B+^{10}B}$, $\mathrm{{}^{12}C+^{12}C}$, $\mathrm{{}^{16}O+^{16}O}$, $\mathrm{{}^{20}Ne+^{20}Ne}$, $\mathrm{{}^{40}Ca+^{40}Ca}$, $\mathrm{{}^{96}Zr+^{96}Zr}$, and $\mathrm{{}^{197}Au+^{197}Au}$ collisions at $\sqrt{s_{NN}}$ = 200 (a), 20 (b), and 7.7 (c) GeV from the AMPT model. In the case without hadron rescattering, where the hadronized system has just experienced a partonic phase, the baryon-strangeness correlation $C_{BS}$ remains constant at 200 and 20 GeV as the collision system size increases. At $\sqrt{s_{NN}}$ = 7.7 GeV, $C_{BS}$ is nearly constant but displays a slightly decreasing trend with system size. As the collision energy increases, $C_{BS}$ approaches the value expected under the ideal QGP assumption ($C_{BS}$ = 1).
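Eq. (4) is the standard Pearson coefficient and can be evaluated directly; the per-event values below are purely hypothetical. Note the design contrast with Eq. (1): $\rho_{B,S}$ is bounded in $[-1,1]$, whereas $C_{BS}$ is normalized by the strangeness variance alone.

```python
import numpy as np

# Pearson coefficient of Eq. (4) for hypothetical per-event (B, S) values.
B = np.array([2, 1, 1, 3, 0, 2], dtype=float)
S = np.array([0, 0, 1, -2, 1, -1], dtype=float)

cov = np.mean((B - B.mean()) * (S - S.mean()))
rho = cov / (B.std() * S.std())       # population std matches the mean-based cov
# np.corrcoef gives the same number (its ddof convention cancels in the ratio).
assert np.isclose(rho, np.corrcoef(B, S)[0, 1])
print(f"rho_BS = {rho:.3f}")
```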
When hadron rescattering is taken into account, the baryon-strangeness correlation $C_{BS}$ exhibits similar behavior at 200 and 20 GeV, while it is not completely flat at 7.7 GeV. The rescattering process erases the signal of partonic matter, which is consistent with an earlier AMPT study Jin _et al._ (2008). This dependence is also related to the rapidity distribution; thus the baryon and strangeness rapidity densities $dN/dy$ are also presented for $\mathrm{{}^{197}Au+^{197}Au}$ collisions at RHIC energies $\sqrt{s_{NN}}$ = 200, 20, and 7.7 GeV based on the string melting AMPT model, as shown in Fig. 3. We observe that the rapidity distribution of net-baryon $B$ becomes more concentrated at mid-rapidity as the collision energy decreases, as presented in Fig. 3(a), (b), and (c). At $\sqrt{s_{NN}}$ = 20 and 7.7 GeV, the lower collision energy makes the positive-baryon yield $B^{+}$ much larger than the negative-baryon yield $B^{-}$ (contributed almost entirely by antiprotons). After the hadron rescattering process, the net-baryon $B$ is slightly higher than without rescattering, as a result of decay contributions. The non-Gaussian shape of the $B$ rapidity distribution at 200 GeV has also been found in Ref. Lin _et al._ (2017). Figures 3(e) and (f) display the rapidity distribution of net-strangeness $S$ at $\sqrt{s_{NN}}$ = 20 and 7.7 GeV, respectively, where $S$ is always negative. However, $S$ turns positive at mid-rapidity at 200 GeV, as shown in Fig. 3(d). As the energy increases, the rapidity density of $B$ decreases in the chosen region and the baryon-strangeness correlation $C_{BS}$ moves closer to 1, indicating that the system is closer to the QGP state. The more net baryons the collision system has, the closer its state is to the hadron gas phase, with a larger value of $C_{BS}$. As the energy decreases, the rapidity densities of net-baryon $B$ and net-strangeness $S$ develop a sharper peak, as plotted in Fig. 3.
Therefore, the final value of $C_{BS}$ is strongly affected by even a slight change of the kinematic window size. Consequently, we conclude that $C_{BS}$ is also sensitive to the kinematic window because of this non-flat rapidity distribution. Figure 2: The baryon-strangeness correlation $C_{BS}$ versus $\left\langle\mathrm{N_{part}}\right\rangle$ in the most central (0$-$5%) collisions of $\mathrm{{}^{10}B+^{10}B}$, $\mathrm{{}^{12}C+^{12}C}$, $\mathrm{{}^{16}O+^{16}O}$, $\mathrm{{}^{20}Ne+^{20}Ne}$, $\mathrm{{}^{40}Ca+^{40}Ca}$, $\mathrm{{}^{96}Zr+^{96}Zr}$, and $\mathrm{{}^{197}Au+^{197}Au}$ systems at RHIC energies $\sqrt{s_{NN}}$ = 200 (a), 20 (b), and 7.7 GeV (c) in the string melting AMPT framework. Figure 3: The AMPT results of positive baryon (strangeness) $B^{+}$ ($S^{+}$), negative baryon (strangeness) $B^{-}$ ($S^{-}$), and net-baryon (net-strangeness) $B$ ($S$) $dN/dy$ for identified particles, namely $p$, $n$, $\Lambda$, $\Sigma^{\pm}$, $\Xi^{\pm}$, $\Omega^{-}$, and $K$, in $\mathrm{{}^{197}Au+^{197}Au}$ collisions at RHIC energies $\sqrt{s_{NN}}$ = 200 (a,d), 20 (b,e), and 7.7 (c,f) GeV based on the string melting AMPT framework. The meanings of the different symbols are given in the insets of (b) and (e). The kinematic window is $|y|<0.2$. Figure 4: The maximum rapidity acceptance ($|y|<y_{\text{max}}$) dependence of the correlation coefficient $C_{BS}$ in $\mathrm{{}^{197}Au+^{197}Au}$ collisions at $\sqrt{s_{NN}}=200$ GeV in the string melting AMPT framework. Two different subsets of hadrons are adopted to show the different dependencies. The two dashed lines indicate the theoretical estimates for a simple QGP ($C_{BS}$ = 1) and a hadron gas ($C_{BS}$ = 0.66) at the chemical freeze-out condition $T$ = 170 MeV and $\mu_{B}$ = 0, respectively.
Figure 5: The maximum rapidity acceptance ($|y|<y_{\text{max}}$) dependence of (a) the numerator ($C_{11}^{BS}$) and (b) the denominator ($C_{2}^{S}$) of $C_{BS}$ in $\mathrm{{}^{197}Au+^{197}Au}$ collisions at $\sqrt{s_{NN}}=200$ GeV with the string melting AMPT framework. Two different subsets of hadrons are adopted to show the different dependencies. Figure 6: (a) The correlation coefficient $C_{BS}$ in the most central (0$-$5%) $\mathrm{{}^{197}Au+^{197}Au}$ collisions as a function of $\sqrt{s_{NN}}$; (b) the correlation coefficient $C_{BS}$ for a hadron gas at freeze-out as a function of the baryon chemical potential $\mu_{B}$ in the most central (0$-$5%) collisions for different collision systems and energies. For the $C_{BS}$ system scan, we chose $\mathrm{{}^{10}B+^{10}B}$, $\mathrm{{}^{12}C+^{12}C}$, $\mathrm{{}^{16}O+^{16}O}$, $\mathrm{{}^{20}Ne+^{20}Ne}$, $\mathrm{{}^{40}Ca+^{40}Ca}$, $\mathrm{{}^{96}Zr+^{96}Zr}$, and $\mathrm{{}^{197}Au+^{197}Au}$ collisions at $\sqrt{s_{NN}}$ = 200 GeV. For the $C_{BS}$ energy scan, the chosen energies are $\sqrt{s_{NN}}$ = 11.5, 20, 27, 39, and 200 GeV. We also present $C_{BS}$ as a function of the rapidity acceptance range $y_{\text{max}}$ in Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV from the AMPT model. As shown in Fig. 4, we observe two different $y_{\text{max}}$ dependencies for the two different combinations of hadrons. The correlation coefficient tends to increase with $y_{\text{max}}$ for the Case (II) hadron combination, whereas for the Case (I) combination it tends to decrease. Regardless of the hadron combination, $C_{BS}$ asymptotically approaches a constant for $y_{\text{max}}>3$. Additionally, at the hadron rescattering stage the choice of hadron combination has no effect on the result at large $y_{\text{max}}$, as a consequence of baryon number and strangeness conservation.
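The mechanics of such a rapidity-window scan can be sketched as follows. The toy events (nucleon-, $\Lambda$- and kaon-like hadrons with Gaussian rapidities) are hypothetical stand-ins for AMPT output, so only the effect of the $|y|<y_{\text{max}}$ cut on the cumulants is illustrated, not the AMPT numbers themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

def event():
    """Hypothetical event: each hadron gets a rapidity and (B, S)."""
    n = rng.poisson(60)
    y = rng.normal(0.0, 2.0, n)        # hadron rapidities
    kind = rng.choice(3, n)            # 0: nucleon, 1: Lambda, 2: kaon
    sign = rng.choice([-1, 1], n)      # particle / antiparticle
    b = sign * (kind != 2)             # baryon number per hadron
    s = sign * np.where(kind == 1, -1, 0) + sign * (kind == 2)
    return y, b, s

events = [event() for _ in range(4000)]

for y_max in (0.5, 1.0, 3.0, 5.0):
    # Sum quantum numbers only over hadrons inside |y| < y_max.
    B = np.array([b[np.abs(y) < y_max].sum() for y, b, s in events])
    S = np.array([s[np.abs(y) < y_max].sum() for y, b, s in events])
    c11 = np.mean(B * S) - B.mean() * S.mean()   # numerator cumulant C11^BS
    c2 = np.mean(S**2) - S.mean()**2             # denominator cumulant C2^S
    print(f"y_max={y_max}: -3*C11={-3 * c11:.2f}  C2={c2:.2f}  "
          f"C_BS={-3 * c11 / c2:.3f}")
```

Because each $\Lambda$-like hadron carries $(B,S)=(+1,-1)$, widening the window accumulates a negative $C_{11}^{BS}$ and a growing $C_{2}^{S}$, so both saturate once the window covers essentially the full distribution.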
In a previous study Koch _et al._ (2005), the correlation coefficient $C_{BS}$ first increases with $y_{\text{max}}$ and reaches a maximum at a certain $y_{\text{max}}$ before dropping to 0. To understand this behavior, Figs. 5(a) and 5(b) display the maximum rapidity acceptance ($|y|<y_{\text{max}}$) dependence of the numerator ($C_{11}^{BS}=\left\langle BS\right\rangle-\left\langle B\right\rangle\left\langle S\right\rangle$) and the denominator ($C_{2}^{S}=\left\langle S^{2}\right\rangle-\left\langle S\right\rangle^{2}$) of $C_{BS}$ in $\mathrm{{}^{197}Au+^{197}Au}$ collisions at $\sqrt{s_{NN}}=200$ GeV with the string melting AMPT model, respectively. For the Case (II) hadron combination, both $-3C_{11}^{BS}$ and $C_{2}^{S}$ gradually tend to constant values as $y_{\text{max}}$ increases, as shown in Figs. 5(a) and 5(b), respectively. However, for the Case (I) hadron combination, $-3C_{11}^{BS}$ increases with $y_{\text{max}}$ and then drops to a constant value when the hadron rescattering process is included. Thus, the value of $C_{11}^{BS}$ is the dominant factor affecting the correlation coefficient $C_{BS}$. The $C_{BS}$ calculated in this model is plotted in Fig. 6(a) at $\sqrt{s_{NN}}$ = 11.5, 20, 27, 39, and 200 GeV in central $\mathrm{{}^{197}Au+^{197}Au}$ collisions and presents a strong energy dependence. As the energy increases, $C_{BS}$ decreases to about 0.8 at the top RHIC energy, which is also below the value expected for an ideal QGP phase, as mentioned in Ref. Haussler _et al._ (2007). Figure 6(b) shows $C_{BS}$ as a function of the baryon chemical potential $\mu_{B}$ at chemical freeze-out for collision systems at $\sqrt{s_{NN}}$ = 200 GeV, where $\mu_{B}$ was extracted with the thermal model as described in our previous paper Wang _et al._ (2020). At a given collision energy, $\mu_{B}$ increases with system size, and a similar trend also appears in Fig. 2(a).
The correlation coefficient $C_{BS}$ with the hadron rescattering process increases slightly with $\mu_{B}$, which is consistent with the previous conclusion Koch _et al._ (2005). Meanwhile, the collision energy dependence of $C_{BS}$ is also displayed in Fig. 6(b). For a given system, there are more net baryons in the collision zone as the energy decreases, leading to an enhancement of $C_{BS}$. The correlation coefficient presents a smooth baryon chemical potential dependence when the collision systems and collision energies are characterized by this potential. Finally, we investigated the possible effect of the $\alpha$-clustering structure of light nuclei on the correlation coefficient $C_{BS}$. Previous works proposed that signatures of the $\alpha$-clustering structure in light nuclei could be observed via heavy-ion collisions at ultra-relativistic energies Broniowski and Ruiz Arriola (2014); He _et al._ (2014); Bożek _et al._ (2014); Zhang _et al._ (2017, 2018); Cheng _et al._ (2019); Lim _et al._ (2019); Huang _et al._ (2020); He _et al._ (2020); Ma _et al._ (2020). In this context, we examined the influence of the fluctuation of the initial nuclear geometric structure on the correlation coefficient $C_{BS}$. To this end, the nucleon distribution is taken as a triangular three-$\alpha$ structure for ${}^{12}\mathrm{C}$ and a tetrahedral four-$\alpha$ structure for ${}^{16}\mathrm{O}$ in the present study. Figure 7 shows the $\alpha$-clustering effects on $C_{BS}$ for $\mathrm{{}^{12}C+^{12}C}$ as well as $\mathrm{{}^{16}O+^{16}O}$ collisions at $\sqrt{s_{NN}}$ = 10, 200, and 6370 GeV, respectively. The collision systems with different $\alpha$-clustering structures and energies are labeled by $\left\langle\mathrm{N_{part}}\right\rangle$. The results show essentially no visible difference between the Woods-Saxon nucleon distribution and the $\alpha$-clustering structures in the $C_{BS}$ coefficient.
This implies that the baryon-strangeness correlation is insensitive to the initial nucleon distribution, which could in turn help to isolate other ingredients affecting $C_{BS}$, such as the hadron rescattering effect discussed in this work. Figure 7: The correlation coefficient $C_{BS}$ as a function of the number of participants $\langle\mathrm{N_{part}}\rangle$ obtained at center-of-mass energies $\sqrt{s_{NN}}$ = 10, 200, and 6370 GeV in the most central collisions (impact parameter $b=0$) of $\mathrm{{}^{12}C+^{12}C}$ and $\mathrm{{}^{16}O+^{16}O}$ systems. ## IV Summary In summary, we studied the system size and energy dependence of the baryon-strangeness correlation coefficient in the framework of the AMPT model. The hadron rescattering process partly weakens the baryon-strangeness correlation, as expected; weak-decay contributions to the strangeness or to the baryon counting might also affect the final results, which needs to be investigated further. The combination of different hadrons additionally affects the results significantly. Besides, it is found that when the maximum rapidity acceptance $y_{\text{max}}>3$, the baryon-strangeness coefficient becomes independent of the combination of hadrons in the final state within the AMPT model. The correlation coefficients fall on a smooth common trend when the collision systems and collision energies are characterized by the baryon chemical potential. In addition, we investigated the effect of the initial nucleon distribution of light nuclei, specifically the Woods-Saxon distribution versus the $\alpha$-clustering structure for ${}^{12}\mathrm{C}$ and ${}^{16}\mathrm{O}$ nuclei, on the final baryon-strangeness correlation results, but found a negligible effect. These AMPT model studies provide baselines for searching for the signals of the QCD phase transition and critical point in heavy-ion collisions through the BS correlation. ###### Acknowledgements.
This work was supported in part by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, the Strategic Priority Research Program of CAS under Grant No. XDB34000000, the National Natural Science Foundation of China under contract Nos. 11875066, 11890710, 11890714, 11925502, and 11961141003, the National Key R&D Program of China under Grants No. 2016YFE0100900 and No. 2018YFE0104600, and the Key Research Program of Frontier Sciences of the CAS under Grant No. QYZDJ-SSW-SLH002. ## Appendix A Observable The joint cumulant of several random variables $X_{1},...,X_{n}$ is defined via the cumulant generating function $K\left(t_{1},t_{2},\ldots,t_{n}\right)=\log E\left(\mathrm{e}^{\sum_{j=1}^{n}t_{j}X_{j}}\right).$ (5) A consequence is that $\kappa\left(X_{1},\ldots,X_{n}\right)=\sum_{\pi}(|\pi|-1)!(-1)^{|\pi|-1}\prod_{B\in\pi}E\left(\prod_{i\in B}X_{i}\right)$ (6) where $\pi$ runs over all partitions of $\\{1,...,n\\}$, $B$ runs over all blocks of the partition $\pi$, and $|\pi|$ is the number of blocks in the partition. In this analysis, we use $B$ and $S$ to represent the net-baryon number and net-strangeness in one event, respectively. The deviations of $B$ and $S$ from their mean values are expressed as $\delta B=B-\left\langle B\right\rangle$ and $\delta S=S-\left\langle S\right\rangle$, respectively. As mentioned above, we use $\left\langle\cdot\right\rangle$ to represent the expected value.
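Eq. (6) can be implemented directly by enumerating the set partitions; a minimal sketch, with sample means standing in for the expectation values:

```python
import math
import numpy as np

def partitions(items):
    """Generate all partitions of the list `items` into blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        yield [[first]] + part                 # `first` as its own block
        for i in range(len(part)):             # or merged into a block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def joint_cumulant(samples):
    """Eq. (6): kappa(X_1, ..., X_n) estimated from sample moments.

    `samples` is a list of n equal-length arrays, one per variable;
    each E(...) is replaced by the corresponding sample mean.
    """
    idx = list(range(len(samples)))
    total = 0.0
    for part in partitions(idx):
        prod = 1.0
        for block in part:
            prod *= np.mean(np.prod([samples[i] for i in block], axis=0))
        total += math.factorial(len(part) - 1) * (-1) ** (len(part) - 1) * prod
    return total

X = np.array([1.0, 2.0, 3.0])
Y = np.array([4.0, 5.0, 6.0])
# For n = 2, Eq. (6) reduces to the covariance <XY> - <X><Y> (= 2/3 here).
print(joint_cumulant([X, Y]))
```

For two variables the sum contains only the partitions $\{\{1,2\}\}$ and $\{\{1\},\{2\}\}$, giving $\langle XY\rangle-\langle X\rangle\langle Y\rangle$, i.e. exactly the covariances $C(\delta B,\delta S)$ and $C(\delta S,\delta S)$ used below.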
From the joint cumulant definition above, for two variables, $\displaystyle C(\delta B,\delta S)$ $\displaystyle=\left\langle\delta B\delta S\right\rangle=\left\langle BS\right\rangle-\left\langle B\right\rangle\left\langle S\right\rangle$ (7) $\displaystyle C(\delta S,\delta S)$ $\displaystyle=\left\langle\delta S\delta S\right\rangle=\left\langle S^{2}\right\rangle-\left\langle S\right\rangle^{2}$ Thus, the baryon-strangeness correlation coefficient reads $\displaystyle C_{BS}=-3\frac{C(\delta B,\delta S)}{C(\delta S,\delta S)}=-3\frac{\left\langle BS\right\rangle-\left\langle B\right\rangle\left\langle S\right\rangle}{\left\langle S^{2}\right\rangle-\left\langle S\right\rangle^{2}}$ (8) ## Appendix B The statistical error of $C_{BS}$ In the appendix of Ref. Yang _et al._ (2017), the authors showed in detail how to calculate the statistical uncertainty by way of the covariance of the multivariate moments. According to the definition of the covariance, $\operatorname{cov}\left(f_{i,j},f_{k,h}\right)=\frac{1}{N}\left(f_{i+k,j+h}-f_{i,j}f_{k,h}\right),$ (9) higher-order moments must be introduced to calculate the covariance, so all the quantities required for the error calculation are listed in the table below. From the equation, the error is proportional to $1/\sqrt{N}$; since the event statistics used here are relatively limited, the statistical errors of the results can be sizable. Table 2: This table lists all the variables needed to calculate the results and statistical errors in Fig. 2(a) at $\sqrt{s_{NN}}=200$ GeV with the hadron rescattering process in the AMPT framework.
System | Events | $\left\langle B\right\rangle$ | $\left\langle S\right\rangle$ | $\left\langle BS\right\rangle$ | $\left\langle S^{2}\right\rangle$ | $\left\langle S^{3}\right\rangle$ | $\left\langle S^{4}\right\rangle$ | $\left\langle BS^{2}\right\rangle$ | $\left\langle BS^{3}\right\rangle$ | $\left\langle B^{2}\right\rangle$ | $\left\langle B^{2}S\right\rangle$ | $\left\langle B^{2}S^{2}\right\rangle$ | error
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\mathrm{{}^{10}B+^{10}B}$ | 69800 | 0.0736533 | 0.0896132 | -0.359728 | 1.49643 | 0.513395 | 9.36554 | 0.00531519 | -2.32606 | 1.09706 | 0.0775501 | 2.47291 | 0.0233956
$\mathrm{{}^{12}C+^{12}C}$ | 99800 | 0.0812625 | 0.109599 | -0.493968 | 1.95445 | 0.898196 | 14.9372 | -0.0448697 | -3.89148 | 1.45303 | 0.119419 | 4.09048 | 0.0178936
$\mathrm{{}^{16}O+^{16}O}$ | 100000 | 0.12687 | 0.16194 | -0.66834 | 2.72044 | 1.69218 | 27.0776 | -0.00802 | -6.71508 | 2.02597 | 0.2679 | 7.38878 | 0.0153406
$\mathrm{{}^{20}Ne+^{20}Ne}$ | 20000 | 0.21585 | 0.20645 | -0.90485 | 3.62225 | 2.55395 | 45.4021 | 0.18435 | -11.515 | 2.81145 | 0.28175 | 13.1649 | 0.0283403
$\mathrm{{}^{40}Ca+^{40}Ca}$ | 20000 | 0.57985 | 0.49615 | -1.86935 | 8.62085 | 12.5274 | 234.946 | 2.55565 | -50.8957 | 6.67815 | 1.27765 | 67.8139 | 0.00965114
$\mathrm{{}^{96}Zr+^{96}Zr}$ | 19900 | 2.08171 | 1.28734 | -3.71834 | 24.8124 | 93.0328 | 1867.91 | 34.6337 | -295.664 | 22.621 | 3.92558 | 588.743 | 0.0124063
$\mathrm{{}^{197}Au+^{197}Au}$ | 10000 | 5.8683 | 2.5061 | -0.6767 | 58.6817 | 434.255 | 10679.6 | 258.236 | -324.446 | 77.1819 | 14.6115 | 4089.6 | 0.496962
## References * Adams _et al._ (2005) J. Adams _et al._ , Nucl. Phys. A 757, 102 (2005). * Back _et al._ (2005) B. B. Back _et al._ , Nucl. Phys. A 757, 28 (2005).
* Adcox _et al._ (2005) K. Adcox _et al._ , Nucl. Phys. A 757, 184 (2005). * Arsene _et al._ (2005) I. Arsene _et al._ , Nucl. Phys. A 757, 1 (2005). * Aoki _et al._ (2009) Y. Aoki, S. Borsányi, S. Dürr, Z. Fodor, S. D. Katz, S. Krieg, and K. Szabó, Journal of High Energy Physics 2009, 088 (2009). * Bazavov _et al._ (2012a) A. Bazavov, T. Bhattacharya, M. Cheng, C. DeTar, H. T. Ding, S. Gottlieb, R. Gupta, P. Hegde, U. M. Heller, F. Karsch, _et al._ (Hot QCD Collaboration), Phys. Rev. D 85, 054503 (2012a). * Han _et al._ (2020) Z. Han, B. Chen, and Y. Liu, Chin. Phys. Lett. 37, 112501 (2020). * Ackermann _et al._ (2003) K. H. Ackermann _et al._ , NIM A 499, 624 (2003). * Aggarwal et al. (2010) M. Aggarwal et al. (STAR Collaboration), (2010), arXiv:1007.2613 [nucl-ex] . * Adamczyk _et al._ (2017) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. C 96, 044904 (2017). * Koch (2008) V. Koch, “Hadronic fluctuations and correlations,” (2008), arXiv:0810.2520 [nucl-th] . * Jeon and Koch (2003) S. Jeon and V. Koch, “Event-by-event fluctuations,” (2003), arXiv:hep-ph/0304012 [hep-ph] . * Luo and Xu (2017) X. F. Luo and N. Xu, Nucl. Sci. Tech. 28, 112 (2017). * Adare _et al._ (2016) A. Adare _et al._ (PHENIX Collaboration), Phys. Rev. C 93, 011901 (2016). * Chatterjee (2019) A. Chatterjee, PoS CORFU2018, 164 (2019). * Adamczyk _et al._ (2014) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. Lett. 113, 092301 (2014). * Friman _et al._ (2011) B. Friman, F. Karsch, K. Redlich, and V. Skokov, Euro. Phys. J. C 71, 1694 (2011). * Rustamov (2017) A. Rustamov, Nucl. Phys. A 967, 453 (2017). * Koch _et al._ (2005) V. Koch, A. Majumder, and J. Randrup, Phys. Rev. Lett. 95, 182301 (2005). * Chatterjee and Mohan (2012) S. Chatterjee and K. A. Mohan, Phys. Rev. D 86, 114021 (2012). * Bazavov _et al._ (2012b) A. Bazavov, T. Bhattacharya, C. E. DeTar, H.-T. Ding, S. Gottlieb, R. Gupta, P. Hegde, U. M. Heller, F. Karsch, E. Laermann, _et al._ (HotQCD Collaboration), Phys. Rev.
D 86, 034509 (2012b). * Bhattacharyya _et al._ (2014) A. Bhattacharyya, S. Das, S. K. Ghosh, R. Ray, and S. Samanta, Phys. Rev. C 90, 034909 (2014). * Chatterjee _et al._ (2016) A. Chatterjee, S. Chatterjee, T. K. Nayak, and N. R. Sahoo, J. Phys. G 43, 125103 (2016). * Haussler _et al._ (2007) S. Haussler, S. Scherer, and M. Bleicher, AIP Conference Proceedings 892, 372 (2007). * Yang _et al._ (2017) Z. Z. Yang, X. F. Luo, and B. Mohanty, Phys. Rev. C 95, 014914 (2017). * Jin _et al._ (2008) F. Jin, Y. G. Ma, G. L. Ma, J. H. Chen, S. Zhang, X. Z. Cai, H. Z. Huang, J. Tian, C. Zhong, and J. X. Zuo, J. Phys. G 35, 044070 (2008). * Gavai and Gupta (2006) R. V. Gavai and S. Gupta, Phys. Rev. D 73, 014004 (2006). * Bazavov _et al._ (2014) A. Bazavov, H. T. Ding, P. Hegde, O. Kaczmarek, F. Karsch, E. Laermann, Y. Maezawa, S. Mukherjee, H. Ohno, P. Petreczky, C. Schmidt, S. Sharma, W. Soeldner, and M. Wagner, Phys. Rev. Lett. 113, 072001 (2014). * Loizides (2016) C. Loizides, Nucl. Phys. A 956, 200 (2016). * Koop _et al._ (2016) J. D. O. Koop, R. Belmont, P. Yin, and J. L. Nagle, Phys. Rev. C 93, 044910 (2016). * Sievert and Noronha Hostler (2019) M. Sievert and J. Noronha Hostler, Phys. Rev. C 100, 024904 (2019). * Huang _et al._ (2020) S. L. Huang, Z. Y. Chen, W. Li, and J. Y. Jia, Phys. Rev. C 101, 021901 (2020). * Zhang _et al._ (2020) S. Zhang, Y. G. Ma, G. L. Ma, J. H. Chen, Q. Y. Shou, W. B. He, and C. Zhong, Phys. Lett. B 804, 135366 (2020). * Adam et al. (ALICE Collaboration) J. Adam et al. (ALICE Collaboration), Nature Physics 13, 535 (2017). * Xu and Ko (2011) J. Xu and C. M. Ko, Phys. Rev. C 83, 034904 (2011). * Pal and Bleicher (2012) S. Pal and M. Bleicher, Phys. Lett. B 709, 82 (2012). * Ye _et al._ (2017) Y. J. Ye, J. H. Chen, Y. G. Ma, S. Zhang, and C. Zhong, Chin. Phys. C 41, 084101 (2017). * Jin _et al._ (2018) X. H. Jin, J. H. Chen, Y. G. Ma, S. Zhang, C. J. Zhang, and C. Zhong, Nucl. Sci. Tech. 29, 54 (2018). * Wang and Gyulassy (1991) X. N. 
Wang and M. Gyulassy, Phys. Rev. D 44, 3501 (1991). * Gyulassy and Wang (1994) M. Gyulassy and X. N. Wang, Comp. Phys. Commun. 83, 307 (1994). * Zhang (1998) B. Zhang, Comp. Phys. Commun. 109, 193 (1998). * Li and Ko (1995) B. A. Li and C. M. Ko, Phys. Rev. C 52, 2037 (1995). * Lin _et al._ (2005) Z. W. Lin, C. M. Ko, B. A. Li, B. Zhang, and S. Pal, Phys. Rev. C 72, 064901 (2005). * Ma and Lin (2016) G. L. Ma and Z. W. Lin, Phys. Rev. C 93, 054911 (2016). * Lin _et al._ (2002) Z. W. Lin, C. M. Ko, and S. Pal, Phys. Rev. Lett. 89, 152301 (2002). * Ma _et al._ (2006) G. L. Ma, S. Zhang, and Y. G. Ma et al., Phys. Lett. B 641, 362 (2006). * Wang _et al._ (2019) H. Wang, J. H. Chen, Y. G. Ma, and S. Zhang, Nucl. Sci. Tech. 30, 185 (2019). * Abelev et al. (2008) B. I. Abelev et al. (STAR Collaboration), Phys. Rev. Lett. 101, 252301 (2008). * Bzdak and Ma (2014) A. Bzdak and G. L. Ma, Phys. Rev. Lett. 113, 252301 (2014). * Jin _et al._ (2019) X. H. Jin, J. H. Chen, Z. W. Lin, G. L. Ma, Y. G. Ma, and S. Zhang, Sci. China Phys. Mech. Astron. 62, 11012 (2019). * Zhao _et al._ (2019) X. L. Zhao, G. L. Ma, and Y. G. Ma, Phys. Lett. B 792, 413 (2019). * Liu and Huang (2020) Y. C. Liu and X. G. Huang, Nucl. Sci. Tech. 31, 56 (2020). * Wang and Zhao (2018) F. Q. Wang and J. Zhao, Nucl. Sci. Tech. 29, 179 (2018). * Xu _et al._ (2018) Z. W. Xu, S. Zhang, Y. G. Ma, J. H. Chen, and C. Zhong, Nucl. Sci. Tech. 29, 186 (2018). * Xiao (2012) F. L. Xiao, J. Phys. G 39, 025008 (2012). * Lin _et al._ (2017) Y. F. Lin, L. Z. Chen, and Z. M. Li, Phys. Rev. C 96, 044906 (2017). * Wang _et al._ (2020) D. F. Wang, S. Zhang, and Y. G. Ma, Phys. Rev. C 101, 034906 (2020). * Broniowski and Ruiz Arriola (2014) W. Broniowski and E. Ruiz Arriola, Phys. Rev. Lett. 112, 112501 (2014). * He _et al._ (2014) W. B. He, Y. G. Ma, X. G. Cao, X. Z. Cai, and G. Q. Zhang, Phys. Rev. Lett. 113, 032506 (2014). * Bożek _et al._ (2014) P. Bożek, W. Broniowski, E. R. Arriola, and M. Rybczyński, Phys. Rev. 
C 90, 064902 (2014). * Zhang _et al._ (2017) S. Zhang, Y. G. Ma, J. H. Chen, W. B. He, and C. Zhong, Phys. Rev. C 95, 064904 (2017). * Zhang _et al._ (2018) S. Zhang, Y. G. Ma, J. H. Chen, W. B. He, and C. Zhong, Eur. Phys. J. A 54, 161 (2018). * Cheng _et al._ (2019) Y. L. Cheng, S. Zhang, Y. G. Ma, J. H. Chen, and C. Zhong, Phys. Rev. C 99, 054906 (2019). * Lim _et al._ (2019) S. H. Lim, J. Carlson, C. Loizides, D. Lonardoni, J. E. Lynn, J. L. Nagle, J. D. Orjuela Koop, and J. Ouellette, Phys. Rev. C 99, 044904 (2019). * He _et al._ (2020) J. J. He, S. Zhang, Y. G. Ma, J. H. Chen, and C. Zhong, Eur. Phys. J. A 56, 52 (2020). * Ma _et al._ (2020) L. Ma, Y. G. Ma, and S. Zhang, Phys. Rev. C 102, 014910 (2020).
# Moderate-temperature near-field thermophotovoltaic systems with thin-film InSb cells Rongqian Wang<EMAIL_ADDRESS>School of physical science and technology & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006, China. Jincheng Lu School of physical science and technology & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006, China. Center for Phononics and Thermal Energy Science, China-EU Joint Center for Nanophononics, Shanghai Key Laboratory of Special Artificial Microstructure Materials and Technology, School of Physics Science and Engineering, Tongji University, Shanghai 200092, China Jian-Hua Jiang<EMAIL_ADDRESS>School of physical science and technology & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006, China. ###### Abstract Near-field thermophotovoltaic systems operating at 400$\sim$900 K based on graphene-hexagonal-boron-nitride heterostructures and thin-film InSb $p$-$n$ junctions are investigated theoretically. The performances of two near-field systems with different emitters are examined carefully. One near-field system uses a graphene-hexagonal-boron-nitride-graphene sandwich structure as the emitter, while the other has an emitter made of a double graphene-hexagonal-boron-nitride heterostructure. It is shown that both systems exhibit higher output power density and energy efficiency than the near-field system based on a single graphene-hexagonal-boron-nitride heterostructure. The optimal output power density of the former device can reach $1.3\times 10^{5}~{}\rm{W\cdot m^{-2}}$, while the optimal energy efficiency can be as large as $42\%$ of the Carnot efficiency. We analyze the underlying physical mechanisms that lead to the excellent performances of the proposed near-field thermophotovoltaic systems.
Our results are a valuable step toward high-performance moderate-temperature thermophotovoltaic systems as appealing thermal-to-electric energy conversion (waste-heat harvesting) devices. ## I Introduction Thermophotovoltaic (TPV) systems are solid-state energy converters with immense potential in a wide range of applications, including solar energy harvesting and waste heat recovery Shockley and Queisser (1961); Martín and Algora (2004); Nagashima _et al._ (2007); Fraas and Ferguson (2000); Sulima and Bett (2001); Wu _et al._ (2012); Chan _et al._ (2013); Liao _et al._ (2016); Zhao _et al._ (2017a); Tervo _et al._ (2018). In a TPV system, a photovoltaic (PV) cell is placed in proximity to a thermal emitter and converts the thermal radiation from the emitter into electricity via infrared photoelectric conversion. However, the frequency mismatch between a moderate-temperature thermal emitter and the PV cell leads to significantly reduced performance of TPV systems at moderate temperatures (i.e., 400$\sim$900 K, the range covering the majority of industrial waste heat). To overcome this obstacle, materials which support surface polaritons have been used to introduce a resonant near-field energy exchange between the emitter and the absorber Wu _et al._ (2012); Svetovoy _et al._ (2012); Ilic _et al._ (2012); Svetovoy and Palasantzas (2014); Basu _et al._ (2015); Zhao _et al._ (2017a). As a consequence, near-field TPV (NTPV) systems have been proposed to achieve appealing energy efficiency and output power Narayanaswamy and Chen (2003); Laroche _et al._ (2006); Park _et al._ (2008); Ilic _et al._ (2012); Bright _et al._ (2014); Molesky and Jacob (2015); St-Gelais _et al._ (2017); Jiang and Imry (2018); Papadakis _et al._ (2020).
Near-field systems based on graphene, hexagonal boron nitride (h-BN) and their heterostructures have been shown to exhibit excellent near-field coupling due to surface plasmon polaritons (SPPs), surface phonon polaritons (SPhPs) and their hybridizations [i.e., surface plasmon-phonon polaritons (SPPPs)] Svetovoy _et al._ (2012); Messina and Ben-Abdallah (2013); Svetovoy and Palasantzas (2014); Woessner _et al._ (2015); Zhao and Zhang (2015); Zhao _et al._ (2017b); Shi _et al._ (2017). It was demonstrated that a heterostructure consisting of graphene-h-BN multilayers performs better than the monocell structure, with a heat flux three times larger than that of the monocell structure Zhao and Zhang (2015); Zhao _et al._ (2017b); Shi _et al._ (2017). Here, we consider the graphene-h-BN-graphene sandwich structure and the graphene-h-BN-graphene-h-BN four-layer structure as near-field thermal emitters to provide enhanced radiative heat transfer. In order to convert the infrared radiation into electricity, the energy bandgap ($E_{\rm gap}$) of the NTPV cell (i.e., the $p$-$n$ junction) must match the radiative spectrum generated by the emitter. III-V compound semiconductors such as gallium arsenide (GaAs), gallium antimonide (GaSb), indium antimonide (InSb) and indium arsenide (InAs) have been used due to their small bandgap energies, high electron mobilities and low electron effective masses Ilic _et al._ (2012); Laroche _et al._ (2006); Messina and Ben-Abdallah (2013); Zhao _et al._ (2017a). Recently, semiconductor thin films have been explored in NTPV systems. In Refs. Zhao _et al._ (2017a) and Papadakis _et al._ (2020), an NTPV system based on an InAs thin-film cell exhibits appealing performance when operating at high temperatures. However, the system suffers from low energy efficiency (below $10\%$) when operating at moderate temperatures due to the parasitic heat transfer induced by the phonon polaritons of InAs.
Here, we use InSb as the near-field absorber since the bandgap energy of InSb is lower than that of InAs and its photon-phonon interaction is much weaker. At the InSb cell temperature $T_{\rm cell}=320~{}{\rm K}$, which was shown to be the optimal cell temperature in our previous work Wang _et al._ (2019), the gap energy is $E_{\rm gap}=0.17$ eV and the corresponding angular frequency is $\omega_{\rm gap}=2.5\times 10^{14}$ rad/s. In this work, we examine the performances of two NTPV devices: the graphene-$h$-BN-graphene-InSb cell (denoted as the G-FBN-G-InSb cell, with the graphene-$h$-BN-graphene sandwich structure being the emitter and the InSb thin film being the cell) and the graphene-$h$-BN-graphene-$h$-BN-InSb cell (denoted as the G-FBN-G-FBN-InSb cell, with the double graphene-$h$-BN heterostructure being the emitter and the InSb thin film being the cell). We study and compare the performances of these two systems and reveal their underlying physical mechanisms. We further optimize the performance of the near-field TPV systems over various parameters. In this work, we address two issues: first, we optimize the design of the h-BN-graphene heterostructure emitter to improve the performance of the NTPV system; second, we discuss the effect of the finite thickness of the InSb cell on the performance of the NTPV system. This work is structured as follows. In Sec. II, we describe our NTPV system and derive the radiative heat flux exchanged between the emitter and the cell. In Sec. III, we study and compare the performances of the two NTPV systems and analyze the spectral distributions of the photo-induced current density and incident heat flux. We also study the photon tunneling coefficient to further elucidate the physical mechanisms. In Sec. IV, we examine the performances of the two NTPV systems for various InSb thin-film thicknesses and emitter temperatures to optimize the performances of the NTPV systems.
Finally, we summarize and conclude in Sec. V. Figure 1: Schematic illustration of the NTPV systems. (a) A representative TPV device. A thermal emitter of temperature $T_{\rm emit}$ is located at a distance $d$ from a thermophotovoltaic cell of temperature $T_{\rm cell}$. Two compositions of the thermal emitter with (b) graphene-h-BN-graphene structure and (c) graphene-h-BN-graphene-h-BN structure. (d) The thermophotovoltaic cell with an InSb thin film of thickness $h_{\rm InSb}$ supported by a semi-infinite substrate. ## II System and Theory In Fig. 1, we consider the graphene-h-BN-InSb NTPV systems. Fig. 1(a) is a schematic presentation of a typical NTPV system, which consists of a thermal emitter and a thermophotovoltaic cell. The emitter and the thermophotovoltaic cell are separated by a vacuum gap of thickness $d$. The temperatures of the emitter and cell are kept at $T_{\rm emit}$ and $T_{\rm cell}$, respectively. The thermal radiation from the emitter is absorbed by the cell and then converted into electricity via photoelectric conversion. Figs. 1(b) and 1(c) present the two compositions of the emitter. Fig. 1(b) is a graphene-h-BN-graphene sandwich structure and Fig. 1(c) is made of two graphene-h-BN heterostructures. The thickness of the h-BN thin film is $h_{\rm BN}$ and the graphene monolayer is modeled as a layer of thickness $h_{\rm g}$. Fig. 1(d) is a thin-film InSb $p$-$n$ junction supported by a substrate. The thickness of the InSb thin film is $h_{\rm InSb}$ and the substrate is set to be semi-infinite intrinsic InSb. When the InSb cell is located at a distance $d$ which is on the order of or smaller than the thermal wavelength $\lambda_{\rm th}=2\pi\hbar c/k_{\rm B}T_{\rm emit}$ from the emitter, the thermal radiation can be significantly enhanced due to energy transfer via evanescent waves Ilic _et al._ (2012). The radiative heat exchange between the emitter and the cell is given by Polder and Van Hove
(1971); Pendry (1999) $\displaystyle Q_{\rm rad}=Q_{\omega<\omega_{\rm gap}}+Q_{\omega\geq\omega_{\rm gap}}$ (1) where $Q_{\omega<\omega_{\rm gap}}$ and $Q_{\omega\geq\omega_{\rm gap}}$ are the heat exchanges below and above the band gap of the cell, respectively. The below-gap heat exchange $Q_{\omega<\omega_{\rm gap}}$ and the above-gap heat exchange $Q_{\omega\geq\omega_{\rm gap}}$ are respectively given by $\displaystyle Q_{\omega<\omega_{\rm gap}}=\int_{0}^{\omega_{\rm gap}}\frac{d\omega}{4\pi^{2}}\left[\Theta_{1}\left(T_{\rm emit},\omega\right)-\Theta_{2}\left(T_{\rm cell},\omega\right)\right]\sum_{j}\int kdk\zeta_{j}(\omega,k),$ (2) and $\displaystyle Q_{\omega\geq\omega_{\rm gap}}=\int_{\omega_{\rm gap}}^{\infty}\frac{d\omega}{4\pi^{2}}\left[\Theta_{1}\left(T_{\rm emit},\omega\right)-\Theta_{2}\left(T_{\rm cell},\omega,\Delta\mu\right)\right]\sum_{j}\int kdk\zeta_{j}(\omega,k),$ (3) where $\Theta_{1}\left(T_{\rm emit},\omega\right)={\hbar\omega}/[\exp{\left(\frac{\hbar\omega}{k_{\rm B}T_{\rm emit}}\right)}-1]$ and $\Theta_{2}\left(T_{\rm cell},\omega,\Delta\mu\right)={\hbar\omega}/{[\exp{\left(\frac{\hbar\omega-\Delta\mu}{k_{\rm B}T_{\rm cell}}\right)}-1]}$ are the Planck mean oscillator energies of a blackbody at temperatures $T_{\rm emit}$ and $T_{\rm cell}$, respectively. $\Delta\mu$ is the electrochemical potential difference across the $p$-$n$ junction, which describes the effects of charge injection or depletion on the carrier recombination processes and hence modifies the number of photons through detailed balance. $k$ is the magnitude of the in-plane wavevector of the thermal radiation waves.
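As a numerical illustration of Eqs. (1)-(3), the Planck mean oscillator energies and the below/above-gap split of the heat-exchange integral can be sketched in Python. This is a minimal sketch, not code from the paper: it assumes uniform $\omega$ and $k$ grids and a precomputed transmission-coefficient array `zeta` (the sum over polarizations $j$ is taken to be folded into `zeta`).

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant (J s)
KB = 1.380649e-23       # Boltzmann constant (J/K)

def theta(temp, omega, delta_mu=0.0):
    """Planck mean oscillator energy; a nonzero delta_mu (in J) is the
    photon chemical potential entering the above-gap exchange, Eq. (3)."""
    return HBAR * omega / (np.exp((HBAR * omega - delta_mu) / (KB * temp)) - 1.0)

def heat_flux(omega, k, zeta, t_emit, t_cell, omega_gap, delta_mu):
    """Eqs. (1)-(3): integrate [Theta_emit - Theta_cell] * zeta * k over
    uniform omega and k grids, applying delta_mu only above the gap."""
    occ_cell = np.where(omega[:, None] >= omega_gap,
                        theta(t_cell, omega, delta_mu)[:, None],
                        theta(t_cell, omega)[:, None])
    integrand = (theta(t_emit, omega)[:, None] - occ_cell) * zeta * k[None, :]
    d_omega, d_k = omega[1] - omega[0], k[1] - k[0]
    return integrand.sum() * d_omega * d_k / (4.0 * np.pi**2)
```

In the classical limit $\hbar\omega\ll k_{\rm B}T$, `theta` reduces to $k_{\rm B}T$, which provides a quick sanity check of the implementation.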
$\zeta_{j}(\omega,k)$ is the photon transmission coefficient for the $j$-th polarization $(j=s,p)$, which consists of the contributions from both the propagating and the evanescent waves Mulet _et al._ (2002) $\zeta_{j}(\omega,k)=\begin{cases}\frac{\left(1-\left|r_{\rm emit}^{j}\right|^{2}\right)\left(1-\left|r_{\rm cell}^{j}\right|^{2}\right)}{\left|1-r_{\rm emit}^{j}r_{\rm cell}^{j}\exp(2ik_{z}^{0}d)\right|^{2}},&k<\frac{\omega}{c}\\\ \frac{4{\rm Im}\left(r_{\rm emit}^{j}\right){\rm Im}\left(r_{\rm cell}^{j}\right)\exp(2ik_{z}^{0}d)}{\left|1-r_{\rm emit}^{j}r_{\rm cell}^{j}\exp(2ik_{z}^{0}d)\right|^{2}},&k\geq\frac{\omega}{c}\end{cases}$ (4) where $k_{z}^{0}=\sqrt{\omega^{2}/c^{2}-k^{2}}$ is the perpendicular-to-plane component of the wavevector in vacuum. $r_{\rm emit}^{j}$ ($r_{\rm cell}^{j}$) with $j=s,p$ is the complex reflection coefficient at the interface between the emitter (cell) and the air. Here, the reflection coefficients of the emitter and cell are calculated by the scattering matrix approach Whittaker and Culshaw (1999); Zhang (2007). Via the infrared photoelectric conversion, the above-gap radiative heat exchange is then converted into electricity. Based on the detailed balance analysis, the electric current density of an NTPV cell is given by Shockley and Queisser (1961); Ashcroft and Mermin (1976) $\displaystyle I=I_{\rm ph}-I_{0}[\exp(V/V_{\rm cell})-1],$ (5) where $V=\Delta\mu/e$ is the voltage bias across the NTPV cell and $V_{\rm cell}=k_{\rm B}T_{\rm cell}/e$ is the thermal voltage of the cell Shockley and Queisser (1961). $I_{\rm ph}$ and $I_{0}$ are the photo-generation current density and reverse saturation current density, respectively. In Eq. (5), an ideal rectifier condition has been used to simplify the nonradiative recombination Shockley and Queisser (1961). The actual nonradiative mechanisms include Shockley-Read-Hall (SRH) and Auger nonradiative processes.
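The piecewise transmission coefficient of Eq. (4) can be sketched as follows. This is a minimal illustration for a single polarization: the complex reflection coefficients `r_emit` and `r_cell` are assumed to be supplied externally, e.g. from a scattering-matrix calculation as in the text.

```python
import numpy as np

C = 2.99792458e8  # speed of light in vacuum (m/s)

def zeta_j(omega, k, r_emit, r_cell, d):
    """Photon transmission coefficient of Eq. (4) for one polarization j.
    r_emit, r_cell: complex reflection coefficients of emitter and cell;
    d: vacuum-gap width. k < omega/c selects the propagating branch,
    k >= omega/c the evanescent one."""
    kz0 = np.sqrt(complex(omega**2 / C**2 - k**2))  # imaginary if evanescent
    phase = np.exp(2j * kz0 * d)
    denom = abs(1.0 - r_emit * r_cell * phase) ** 2
    if k < omega / C:  # propagating waves
        num = (1.0 - abs(r_emit) ** 2) * (1.0 - abs(r_cell) ** 2)
    else:              # evanescent waves: exp(2i kz0 d) is a real decay factor
        num = 4.0 * r_emit.imag * r_cell.imag * phase.real
    return num / denom
```

For an empty gap ($r=0$ on both sides) the propagating branch gives unit transmission and the evanescent branch vanishes, as expected physically.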
Here, for the sake of simplicity, we follow the Shockley-Queisser analysis Shockley and Queisser (1961). The reverse saturation current density is determined by the diffusion of minority carriers in the InSb $p$-$n$ junction, which is given by $\displaystyle I_{0}=en_{\rm i}^{2}\left(\frac{1}{N_{\rm A}}\sqrt{\frac{D_{\rm e}}{\tau_{\rm e}}}+\frac{1}{N_{\rm D}}\sqrt{\frac{D_{\rm h}}{\tau_{\rm h}}}\right),$ (6) where $n_{\rm i}$ is the intrinsic carrier concentration, and $D_{\rm e}$ and $D_{\rm h}$ are the diffusion coefficients of the electrons and holes, respectively. $N_{\rm A}$ and $N_{\rm D}$ are the $p$-region and $n$-region impurity concentrations, respectively Shur (1996). $\tau_{\rm e}$ and $\tau_{\rm h}$ are the relaxation times of the electron-hole pairs in the $n$-region and $p$-region, respectively. Numerical values of these parameters are taken from Refs. Shur (1996) and Lim _et al._ (2015). The photo-generation current density arises from the above-gap radiative heat exchange Laroche _et al._ (2006); Messina and Ben-Abdallah (2013) $\displaystyle I_{\rm ph}=\frac{e}{4\pi^{2}}\int_{\omega_{\rm gap}}^{\infty}\frac{d\omega}{\hbar\omega}\left[\Theta_{1}\left(T_{\rm emit},\omega\right)-\Theta_{2}\left(T_{\rm cell},\omega,\Delta\mu\right)\right]$ (7) $\displaystyle\hskip 28.45274pt\times\sum_{j}\int kdk\zeta_{j}(\omega,k)\left(1-\exp{\left[-2\rm Imag(\it k_{z}^{\rm InSb})\it h_{\rm InSb}\right]}\right),$ where $\it k_{z}^{\rm InSb}=\sqrt{\varepsilon_{\rm InSb}\omega^{2}/c^{2}-k^{2}}$ is the perpendicular-to-plane component of the wavevector in the InSb $p$-$n$ junction. $\varepsilon_{\rm InSb}$ is the dielectric function of the InSb cell, which is given by $\varepsilon_{\rm InSb}=\left(n+\frac{i\rm c\it\alpha(\omega)}{2\omega}\right)^{2}$. $n=4.12$ is the refractive index of InSb and $c$ is the speed of light in vacuum.
$\alpha(\omega)$ is a step-like function describing the photonic absorption, which is given by Messina and Ben-Abdallah (2013) $\alpha(\omega)=0$ for $\omega<\omega_{\rm gap}$ and $\alpha(\omega)=\alpha_{0}\sqrt{\omega/\omega_{\rm gap}-1}$ for $\omega>\omega_{\rm gap}$ with $\alpha_{0}$ = 0.7 $\mu\rm m^{-1}$ Messina and Ben-Abdallah (2013). Since the NTPV cell is a thin film, the exponential decay characteristic of the electromagnetic wave propagating in the InSb thin-film must be considered. The term $\left(1-\exp{\left[-2\rm Imag(\it k_{z}^{\rm InSb})\it h_{\rm InSb}\right]}\right)$ is the absorption probability of the incident radiation in the InSb film of thickness $h_{\rm InSb}$, which measures the actual availability of the above-gap photons in the photon- carrier generation process. The dielectric function of h-BN is described by a Drude-Lorentz model, which is given by Kumar _et al._ (2015) $\varepsilon_{m}=\varepsilon_{\infty,m}\left(1+\frac{\omega^{2}_{\rm LO,\it m}-\omega^{2}_{\rm TO,\it m}}{\omega^{2}_{\rm TO,\it m}-i\gamma_{m}\omega-\omega^{2}}\right),$ (8) where $m={\rm\parallel,\perp}$ denotes the out-of-plane and the in-plane directions, respectively. $\varepsilon_{\infty,m}$ is the high-frequency relative permittivity, $\omega_{\rm TO}$ and $\omega_{\rm LO}$ are the transverse and longitudinal optical phonon frequencies, respectively. $\gamma_{m}$ is the damping constant of the optical phonon modes. The values of these parameters are chosen as those determined by experiments Geick _et al._ (1966); Caldwell _et al._ (2014). The effective dielectric function of the graphene monolayer is modeled as Vakil and Engheta (2011) $\varepsilon_{\rm g}=1+i\frac{\sigma_{\rm g}}{\varepsilon_{0}\omega h_{\rm g}},$ (9) where $\sigma_{\rm g}$ is the optical conductivity Falkovsky (2008). 
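The three material response functions above (the step-like InSb absorption coefficient, the Drude-Lorentz h-BN permittivity of Eq. (8), and the effective graphene permittivity of Eq. (9)) can be sketched as below. This is a minimal sketch: the Drude-Lorentz parameters and the graphene conductivity are left as inputs, since their values come from the experimental fits cited in the text, and the graphene model thickness $h_{\rm g}=0.3$ nm is an illustrative default.

```python
import numpy as np

OMEGA_GAP = 2.5e14  # rad/s, from E_gap = 0.17 eV at T_cell = 320 K

def alpha_insb(omega, alpha0=0.7e6):
    """Step-like InSb absorption coefficient (1/m): zero below the gap,
    alpha0 * sqrt(omega/omega_gap - 1) above it."""
    omega = np.asarray(omega, dtype=float)
    return np.where(omega > OMEGA_GAP,
                    alpha0 * np.sqrt(np.maximum(omega / OMEGA_GAP - 1.0, 0.0)),
                    0.0)

def eps_hbn(omega, eps_inf, omega_to, omega_lo, gamma):
    """Drude-Lorentz permittivity of h-BN, Eq. (8), for one axis (parallel
    or perpendicular); parameter values come from the cited experiments."""
    return eps_inf * (1.0 + (omega_lo**2 - omega_to**2)
                      / (omega_to**2 - 1j * gamma * omega - omega**2))

def eps_graphene(omega, sigma_g, h_g=0.3e-9, eps0=8.8541878128e-12):
    """Effective permittivity of a graphene sheet of model thickness h_g,
    Eq. (9); sigma_g is the complex optical conductivity (illustrative)."""
    return 1.0 + 1j * sigma_g / (eps0 * omega * h_g)
```

A useful consistency check: in the static, lossless limit Eq. (8) reduces to $\varepsilon_{\infty}(\omega_{\rm LO}/\omega_{\rm TO})^{2}$, i.e., the Lyddane-Sachs-Teller relation.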
The output electric power density $P_{\rm e}$ of the NTPV system is defined as the product of the net electric current density and the voltage bias, $\displaystyle P_{\rm e}=-IV,$ (10) and the energy efficiency $\eta$ is given by the ratio between the output electric power density $P_{\rm e}$ and the incident radiative heat flux $Q_{\rm inc}$, $\displaystyle\eta=\frac{P_{\rm e}}{Q_{\rm inc}},$ (11) where the incident radiative heat flux is given by the radiative heat exchange defined in Eq. (1). The second law of thermodynamics imposes an upper bound on the energy efficiency, which is the Carnot efficiency, $\displaystyle\eta_{c}=1-\frac{T_{\rm cell}}{T_{\rm emit}}.$ (12) When studying the energy efficiency of various NTPV systems with different temperatures, we use the Carnot efficiency to quantify and compare their energy efficiencies. ## III Performances of graphene-h-BN-InSb near-field systems Figure 2: Performances of two NTPV cells. (a) Electrical power density and (b) energy-conversion efficiency in units of the Carnot efficiency ($\eta_{\rm c}$). The temperatures of the emitter and the cell are set at $T_{\rm emit}=450$ K and $T_{\rm cell}=320$ K, respectively. The vacuum gap distance is $d=20$ nm, the h-BN film thickness is $h_{\rm BN}=20$ nm and the thickness of the InSb thin film is $h_{\rm InSb}=1000$ nm. The chemical potential of graphene is $\mu_{\rm g}=1.0$ eV. The Carnot efficiency is given by $\eta_{\rm c}=1-{T_{\rm cell}}/{T_{\rm emit}}$. We first examine the performances of the two types of NTPV cells. Fig. 2 shows the output power density and energy efficiency as a function of voltage bias for the two setups, respectively denoted as the G-FBN-G-InSb cell (graphene-h-BN-graphene heterostructure as the emitter and thin-film InSb $p$-$n$ junction as the receiver) and the G-FBN-G-FBN-InSb cell (graphene-h-BN-graphene-h-BN heterostructure as the emitter and thin-film InSb $p$-$n$ junction as the receiver).
The output power density and energy efficiency are optimized over various physical parameters, including the temperature of the cell, the chemical potential of graphene and the h-BN film thickness Wang _et al._ (2019). The analysis in Ref. [Wang _et al._ , 2019] shows that setting $\mu_{\rm g}=1.0$ eV, $h_{\rm BN}=20$ nm and $T_{\rm cell}=320$ K provides roughly optimal performance, both in terms of power and efficiency, for the NTPV systems considered in this work. Therefore, these parameters are kept fixed in the main text. The thickness of the InSb thin film is set as $h_{\rm InSb}=1000$ nm, which is close to the thickness of experimental samples Yang _et al._ (2006). As shown in Fig. 2, the maximum output power density of the G-FBN-G-InSb cell is about $4.1\times 10^{2}~{}\rm{W\cdot m^{-2}}$, nearly 1.1 times the maximum output power density of the G-FBN-G-FBN-InSb cell (about $3.9\times 10^{2}~{}\rm{W\cdot m^{-2}}$). For the energy-conversion efficiency, the maximum value for the G-FBN-G-InSb cell is about $15\%\eta_{c}$, slightly higher than the maximum efficiency of the G-FBN-G-FBN-InSb cell (about $14.9\%\eta_{c}$). In order to analyze the physical mechanisms responsible for the performances of these two setups, we show the spectral distributions of the photo-induced current and incident radiative heat flux at the maximum electric power density for the G-FBN-G-InSb cell and the G-FBN-G-FBN-InSb cell in Fig. 3. The two shaded areas in Figs. 3(a) and (b) are the two reststrahlen bands of h-BN Jacob (2014). As shown in Fig. 3(a), the photo-induced current spectra of the two systems are enhanced at and near the reststrahlen bands due to the reststrahlen effect. The reststrahlen effect originates from the hyperbolic optical properties of the h-BN film due to the interactions between photons and optical phonons. In the reststrahlen band presented in Fig.
3(a), the in-plane dielectric function of h-BN is negative, leading to strong reflection and suppressed transmission of incident photons Jacob (2014). The near-field radiation effects are essentially caused by the evanescent propagation of the incident photons, which dramatically enhances the radiative heat transfer between two closely spaced objects Wang _et al._ (2019); Brar _et al._ (2014); Kumar _et al._ (2015); Woessner _et al._ (2015); Zhao and Zhang (2015); Zhao _et al._ (2017b); Shi _et al._ (2017). Such enhancement of the radiative heat transfer eventually leads to a significant increase in the input heat flux, output electric power and energy efficiency Wang _et al._ (2019). It is noticed that the overall photo-induced current spectrum of the G-FBN-G-InSb cell is higher than that of the G-FBN-G-FBN-InSb cell except in the frequency range from $3.1\times 10^{14}~{}\rm{rad\cdot s^{-1}}$ to $4.1\times 10^{14}~{}\rm{rad\cdot s^{-1}}$. Integrated over $\omega$, this higher photo-induced current spectrum gives rise to the improved output power of the G-FBN-G-InSb cell. Figure 3: (a) Photo-induced current spectra $I_{\rm ph}(\omega)$ and (b) incident radiative heat spectra $Q_{\rm inc}(\omega)$ at the maximum electric power density for the two configurations. The temperatures of the emitter and the cell are kept at $T_{\rm emit}=450$ K and $T_{\rm cell}=320$ K, respectively. The vacuum gap is $d=20$ nm and the chemical potential of graphene is set as $\mu_{\rm g}=1.0$ eV. The h-BN film thickness is $h_{\rm BN}=20$ nm. The chemical potential difference across the InSb $p$-$n$ junction $\Delta\mu$ is optimized independently for maximum output power for each configuration. Contrary to the photo-induced current spectra, the overall incident radiative heat flux of the G-FBN-G-FBN-InSb cell is higher than that of the G-FBN-G-InSb cell.
Since the energy-conversion efficiency is defined as the ratio of the output electric power density to the incident radiative heat flux, the higher output power density and lower incident radiative heat flux give a higher energy efficiency for the G-FBN-G-InSb cell. To further elucidate the mechanism for the enhanced performance of the near-field systems, we examine the photon tunneling coefficient $\zeta_{j}(\omega,k)$ [given by Eq. (4)]. As shown in Fig. 4, the bright color indicates a high transmission coefficient. The green dashed lines are the light lines of vacuum and InSb, respectively. Note that the maximum transmission coefficient is 2 due to the contribution of both $s$ and $p$ polarizations. In the above-gap range, both the G-FBN-G-InSb cell and the G-FBN-G-FBN-InSb cell exhibit enhanced photon transmission, thanks to the SPPPs supported by the graphene-h-BN heterostructures. Fig. 4 shows that the photon transmission spectra of the two near-field NTPV systems do not differ much. Therefore, the performances of the two NTPV systems are comparable. Figure 4: Photon transmission coefficient $\zeta(\omega,k)$ for (a) the G-FBN-G-InSb cell and (b) the G-FBN-G-FBN-InSb cell. The temperatures of the emitter and the cell are kept at $T_{\rm emit}=450$ K and $T_{\rm cell}=320$ K, respectively. The vacuum gap is $d=20$ nm and the chemical potential of graphene is set as $\mu_{\rm g}=1.0$ eV. The h-BN film thickness is $h_{\rm BN}=20$ nm and the InSb thin-film thickness is $h_{\rm InSb}=1000$ nm. Figure 5: Performances of two near-field TPV cells for various InSb thin-film thicknesses. (a) Electrical power density and (b) energy-conversion efficiency in units of the Carnot efficiency ($\eta_{\rm c}$). The temperatures of the emitter and the cell are set at $T_{\rm emit}=450$ K and $T_{\rm cell}=320$ K, respectively. The vacuum gap distance is $d=20$ nm, the h-BN film thickness is $h_{\rm BN}=20$ nm and the chemical potential of graphene is $\mu_{\rm g}=1.0$ eV.
Figure 6: Performances of two near-field TPV cells for various emitter temperatures. (a) Electrical power density and (b) energy-conversion efficiency in units of the Carnot efficiency ($\eta_{\rm c}$). The temperature of the cell is set at $T_{\rm cell}=320$ K. The vacuum gap distance is $d=20$ nm, the h-BN film thickness is $h_{\rm BN}=20$ nm and the thickness of the InSb thin film is chosen as an optimal value of $h_{\rm InSb}=10000$ nm. The chemical potential of graphene is $\mu_{\rm g}=1.0$ eV. ## IV Optimization of graphene-h-BN-InSb near-field systems Figure 7: Optimal performances of two NTPV cells. (a) Maximum electrical power density and (b) energy-conversion efficiency at maximum electric power in units of the Carnot efficiency ($\eta_{\rm c}$). The temperatures of the emitter and the cell are set at $T_{\rm emit}=800$ K and $T_{\rm cell}=320$ K, respectively. The h-BN film thickness is $h_{\rm BN}=20$ nm and the thickness of the InSb thin film is $h_{\rm InSb}=10000$ nm. The chemical potential of graphene is $\mu_{\rm g}=1.0$ eV. The Carnot efficiency is given by $\eta_{\rm c}=1-{T_{\rm cell}}/{T_{\rm emit}}$. We now study the performances of the two near-field systems for various InSb thin-film thicknesses and emitter temperatures in Figs. 5 and 6. As presented in Fig. 5, both the output power density and the energy-conversion efficiency of the two setups improve with increasing InSb thin-film thickness. In particular, for $h_{\rm InSb}=10000$ nm, the maximum power densities of the G-FBN-G-InSb cell and the G-FBN-G-FBN-InSb cell are about $4.5\times 10^{2}~{}\rm{W/m^{2}}$ and $4.0\times 10^{2}~{}\rm{W/m^{2}}$, respectively. The maximum values of the corresponding efficiency are close to $16\%\eta_{\rm c}$ and $15\%\eta_{\rm c}$, respectively. This enhancement can be attributed to the increased absorption factor $\left(1-\exp{\left[-2\rm Imag(\it k_{z}^{\rm InSb})\it h_{\rm InSb}\right]}\right)$, which increases with $h_{\rm InSb}$.
However, when the InSb thin-film thickness is increased from 4000 nm to 10000 nm, the output power density and the energy-conversion efficiency of both systems soon saturate. This is essentially due to the near-field nature of the heat transfer: the photon transfer from the emitter to the InSb cell is dominated by evanescent electromagnetic waves, which are highly localized at the surface of the emitter. Further increasing the thickness of the InSb cell does not change the number of photons received by the InSb cell. Fig. 6 shows that increasing the emitter temperature improves the performances of both setups by orders of magnitude. Here, the InSb thin-film thickness has been chosen at its optimal value, $h_{\rm InSb}=10000$ nm. For the G-FBN-G-InSb cell, the maximum output power density and energy efficiency at $T_{\rm emit}=800$ K are increased to $3.3\times 10^{4}~{}\rm{W/m^{2}}$ and $33\%\eta_{c}$, respectively. For the G-FBN-G-FBN-InSb cell, the maximum output power density and energy efficiency at $T_{\rm emit}=800$ K are also improved, to $3.1\times 10^{4}~{}\rm{W/m^{2}}$ and $32\%\eta_{c}$, respectively. We now examine the maximum output electric power density and the efficiency at maximum power. We consider specifically the situation with $h_{\rm InSb}=10000$ nm and $T_{\rm emit}=800$ K. As shown in Fig. 7, both the maximum electric power density and the efficiency at maximum power of the two near-field NTPV systems increase dramatically as the vacuum gap $d$ decreases. The best performances are achieved when the vacuum gap is $d=10$ nm for both systems. Specifically, the maximum electric power density reaches $1.3\times 10^{5}~{}\rm{W/m^{2}}$, while the efficiency at maximum power goes up to $42\%$ of the Carnot efficiency. The two near-field NTPV systems perform nearly equally well.
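The bias optimization underlying the maximum-power results can be sketched with the diode model of Eq. (5). This is a minimal sketch, not the paper's code: `i_ph` and `i_0` are assumed given, the test values are purely illustrative, and the common photovoltaic sign convention is used so that the output power of Eq. (10) appears as the positive product $I(V)\,V$.

```python
import numpy as np

KB = 1.380649e-23           # Boltzmann constant (J/K)
E_CHARGE = 1.602176634e-19  # elementary charge (C)

def max_power_point(i_ph, i_0, t_cell=320.0, n_grid=20001):
    """Scan the bias V of the diode model, Eq. (5):
    I(V) = i_ph - i_0 * (exp(V / V_cell) - 1), P(V) = I(V) * V,
    from short circuit to the open-circuit voltage; return (V_opt, P_max)."""
    v_cell = KB * t_cell / E_CHARGE            # thermal voltage of the cell
    v_oc = v_cell * np.log(1.0 + i_ph / i_0)   # open-circuit voltage, I(V_oc) = 0
    v = np.linspace(0.0, v_oc, n_grid)
    p = (i_ph - i_0 * (np.exp(v / v_cell) - 1.0)) * v
    i_best = int(np.argmax(p))
    return v[i_best], p[i_best]
```

Since the power vanishes at both short circuit ($V=0$) and open circuit ($V=V_{\rm oc}$), the maximum-power bias always lies strictly between the two, which is the quantity plotted against in Figs. 2, 5 and 6.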
## V Summary and conclusions In conclusion, we investigated two NTPV devices, denoted as the G-FBN-G-InSb cell and the G-FBN-G-FBN-InSb cell, which have different emitters. The purpose of the investigation is to find a high-performance emitter design with a relatively simple structure. Indeed, we find that both systems perform better than the near-field NTPV system based on a single graphene-$h$-BN heterostructure, i.e., the graphene-$h$-BN-InSb cell Wang _et al._ (2019). The optimal output power density of the G-FBN-G-InSb cell can reach $1.3\times 10^{5}~{}\rm{W\cdot m^{-2}}$, nearly twice the optimal power density of the graphene-$h$-BN-InSb cell. The optimal energy efficiency can be as large as $42\%$ of the Carnot efficiency, which is $24\%$ higher than the optimal efficiency of the monocell system. We analyzed the underlying physical mechanisms that lead to the excellent performances of the G-FBN-G-InSb cell. We further showed that the performance of the proposed NTPV system is affected negligibly by the finite thickness of the InSb cell, which is due to the near-field nature of the heat transfer: the absorbed photons are highly localized at the surface of the InSb cell. Our study shows that NTPV systems are promising for high-performance moderate-temperature thermal-to-electric energy conversion. ## VI Acknowledgements We acknowledge support from the National Natural Science Foundation of China (NSFC Grants No. 11675116 and No. 12074281), the Jiangsu distinguished professor funding and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD). J.L. thanks the China Postdoctoral Science Foundation funded project (No. 2020M681376). ## References * Shockley and Queisser (1961) W. Shockley and H. J. Queisser, “Detailed balance limit of efficiency of p-n junction solar cells,” J. Appl. Phys. 32, 510–519 (1961). * Martín and Algora (2004) D. Martín and C.
Algora, “Temperature-dependent GaSb material parameters for reliable thermophotovoltaic cell modelling,” Semicond. Sci. Tech. 19, 1040 (2004). * Nagashima _et al._ (2007) T. Nagashima, K. Okumura, and M. Yamaguchi, “A germanium back contact type thermophotovoltaic cell,” AIP Conf. Proc. 890, 174–181 (2007). * Fraas and Ferguson (2000) L. M. Fraas and L. G. Ferguson, “Three-layer solid infrared emitter with spectral output matched to low bandgap thermophotovoltaic cells,” US Patent 6,091,018 (2000). * Sulima and Bett (2001) O. V. Sulima and A. W. Bett, “Fabrication and simulation of $\rm{GaSb}$ thermophotovoltaic cells,” Sol. Energ. Mat. Sol. C. 66, 533–540 (2001). * Wu _et al._ (2012) C. Wu, B. Neuner III, J. John, A. Milder, B. Zollars, S. Savoy, and G. Shvets, “Metamaterial-based integrated plasmonic absorber/emitter for solar thermo-photovoltaic systems,” J. Optics 14, 024005 (2012). * Chan _et al._ (2013) W. R. Chan, P. Bermel, R. C. N. Pilawa-Podgurski, C. H. Marton, K. F. Jensen, J. J. Senkevich, J. D. Joannopoulos, M. Soljačić, and I. Celanovic, “Toward high-energy-density, high-efficiency, and moderate-temperature chip-scale thermophotovoltaics,” Proc. Nat. Acad. Sci. 110, 5309–5314 (2013). * Liao _et al._ (2016) T. Liao, L. Cai, Y. Zhao, and J. Chen, “Efficiently exploiting the waste heat in solid oxide fuel cell by means of thermophotovoltaic cell,” J. Power Sources 306, 666–673 (2016). * Zhao _et al._ (2017a) B. Zhao, K. Chen, S. Buddhiraju, G. Bhatt, M. Lipson, and S. Fan, “High-performance near-field thermophotovoltaics for waste heat recovery,” Nano Energy 41, 344–350 (2017a). * Tervo _et al._ (2018) E. Tervo, E. Bagherisereshki, and Z. M. Zhang, “Near-field radiative thermoelectric energy converters: a review,” Front. Energy 12, 5–21 (2018). * Svetovoy _et al._ (2012) V. B. Svetovoy, P. J. Van Zwol, and J. Chevrier, “Plasmon enhanced near-field radiative heat transfer for graphene covered dielectrics,” Phys. Rev. B 85, 155418 (2012). 
* Ilic _et al._ (2012) O. Ilic, M. Jablan, J. D. Joannopoulos, I. Celanovic, and M. Soljačić, “Overcoming the black body limit in plasmonic and graphene near-field thermophotovoltaic systems,” Opt. Express 20, A366–A384 (2012). * Svetovoy and Palasantzas (2014) V. B. Svetovoy and G. Palasantzas, “Graphene-on-silicon near-field thermophotovoltaic cell,” Phys. Rev. Appl. 2, 034006 (2014). * Basu _et al._ (2015) S. Basu, Y. Yang, and L. Wang, “Near-field radiative heat transfer between metamaterials coated with silicon carbide thin films,” Appl. Phys. Lett. 106, 033106 (2015). * Narayanaswamy and Chen (2003) A. Narayanaswamy and G. Chen, “Surface modes for near field thermophotovoltaics,” Appl. Phys. Lett. 82, 3544–3546 (2003). * Laroche _et al._ (2006) M. Laroche, R. Carminati, and J. J. Greffet, “Near-field thermophotovoltaic energy conversion,” J. Appl. Phys. 100, 063704 (2006). * Park _et al._ (2008) K. Park, S. Basu, W. P. King, and Z. M. Zhang, “Performance analysis of near-field thermophotovoltaic devices considering absorption distribution,” J. Quant. Spectros. Radiat. Transfer 109, 305–316 (2008). * Bright _et al._ (2014) T. J. Bright, L. P. Wang, and Z. M. Zhang, “Performance of near-field thermophotovoltaic cells enhanced with a backside reflector,” J. Heat Transfer 136, 062701 (2014). * Molesky and Jacob (2015) S. Molesky and Z. Jacob, “Ideal near-field thermophotovoltaic cells,” Phys. Rev. B 91, 205435 (2015). * St-Gelais _et al._ (2017) R. St-Gelais, G. R. Bhatt, L. Zhu, S. Fan, and M. Lipson, “Hot carrier-based near-field thermophotovoltaic energy conversion,” ACS nano 11, 3001–3009 (2017). * Jiang and Imry (2018) J.-H. Jiang and Yoseph Imry, “Near-field three-terminal thermoelectric heat engine,” Phys. Rev. B 97 (2018). * Papadakis _et al._ (2020) G. T. Papadakis, S. Buddhiraju, Z. Zhao, B. Zhao, and S. Fan, “Broadening near-field emission for performance enhancement in thermophotovoltaics,” Nano Lett. 20, 1654–1661 (2020). 
* Messina and Ben-Abdallah (2013) R. Messina and P. Ben-Abdallah, “Graphene-based photovoltaic cells for near-field thermal energy conversion,” Sci. Rep. 3, 1383 (2013). * Woessner _et al._ (2015) A. Woessner, M. B. Lundeberg, Y. Gao, A. Principi, P. Alonso-González, M. Carrega, K. Watanabe, T. Taniguchi, G. Vignale, M. Polini, J. Hone, R. Hillenbrand, and F. H. L. Koppens, “Highly confined low-loss plasmons in graphene-boron nitride heterostructures,” Nat. Materials 14, 421 (2015). * Zhao and Zhang (2015) B. Zhao and Z. M. Zhang, “Enhanced photon tunneling by surface plasmon-phonon polaritons in graphene/$\rm{hBN}$ heterostructures,” J. Heat Transfer 139, 022701 (2015). * Zhao _et al._ (2017b) B. Zhao, B. Guizal, Z. M. Zhang, S. Fan, and M. Antezza, “Near-field heat transfer between graphene/$\rm{hBN}$ multilayers,” Phys. Rev. B 95, 245437 (2017b). * Shi _et al._ (2017) K. Shi, F. Bao, and S. He, “Enhanced near-field thermal radiation based on multilayer graphene-hBN heterostructures,” ACS Photonics 4, 971–978 (2017). * Wang _et al._ (2019) R. Wang, J. Lu, and J.-H. Jiang, “Enhancing thermophotovoltaic performance using graphene-BN-$\mathrm{InSb}$ near-field heterostructures,” Phys. Rev. Applied 12, 044038 (2019). * Polder and Van Hove (1971) D. Polder and M. Van Hove, “Theory of radiative heat transfer between closely spaced bodies,” Phys. Rev. B 4, 3303 (1971). * Pendry (1999) J. B. Pendry, “Radiative exchange of heat between nanostructures,” J. Phys.: Condens. Matter 11, 6621 (1999). * Mulet _et al._ (2002) J. P. Mulet, K. Joulain, R. Carminati, and J. J. Greffet, “Enhanced radiative heat transfer at nanometric distances,” Microscale Thermophys. Eng. 6, 209–222 (2002). * Whittaker and Culshaw (1999) D. M. Whittaker and I. S. Culshaw, “Scattering-matrix treatment of patterned multilayer photonic structures,” Phys. Rev. B 60, 2610–2618 (1999). * Zhang (2007) Z. M. Zhang, _Nano/microscale heat transfer_ (McGraw-Hill, 2007). 
* Ashcroft and Mermin (1976) N. W. Ashcroft and N. D. Mermin, _Solid state physics_ (Cengage Learning, 1976). * Shur (1996) M. S. Shur, _Handbook series on semiconductor parameters_ , Vol. 1 (World Scientific, 1996). * Lim _et al._ (2015) M. Lim, S. Jin, S. S. Lee, and B. J. Lee, “Graphene-assisted Si-InSb thermophotovoltaic system for low temperature applications,” Opt. Express 23, A240–A253 (2015). * Kumar _et al._ (2015) A. Kumar, T. Low, K. H. Fung, P. Avouris, and N. X. Fang, “Tunable light-matter interaction and the role of hyperbolicity in graphene-$\rm{hBN}$ system,” Nano Lett. 15, 3172–3180 (2015). * Geick _et al._ (1966) R. Geick, C. H. Perry, and G. Rupprecht, “Normal modes in hexagonal boron nitride,” Phys. Rev. 146, 543 (1966). * Caldwell _et al._ (2014) J. D. Caldwell, A. V. Kretinin, Y. Chen, V. Giannini, M. M. Fogler, Y. Francescato, C. T. Ellis, J. G. Tischler, C. R. Woods, A. J. Giles, M. Hong, K. Watanabe, T. Taniguchi, S. A. Maier, and K. S. Novoselov, “Sub-diffractional volume-confined polaritons in the natural hyperbolic material hexagonal boron nitride,” Nature Commun. 5, 5221 (2014). * Vakil and Engheta (2011) A. Vakil and N. Engheta, “Transformation optics using graphene,” Science 332, 1291–1294 (2011). * Falkovsky (2008) L. A. Falkovsky, “Optical properties of graphene,” J. Phys.: Conference Series 129, 012004 (2008). * Yang _et al._ (2006) T.-R. Yang, Y. Cheng, J.-B. Wang, and Z. C. Feng, “Optical and transport properties of InSb thin films grown on GaAs by metalorganic chemical vapor deposition,” Thin Solid Films 498, 158–162 (2006). * Jacob (2014) Z. Jacob, “Nanophotonics: hyperbolic phonon-polaritons,” Nat. Mater. 13, 1081 (2014). * Brar _et al._ (2014) V. W. Brar, M. S. Jang, M. Sherrott, S. Kim, J. J. Lopez, L. B. Kim, M. Choi, and H. Atwater, “Hybrid surface-phonon-plasmon polariton modes in graphene/monolayer h-BN heterostructures,” Nano Lett. 14, 3876–3880 (2014).
# The study of ${\eta}_{c}(1S)$ ${\to}$ $PP^{\prime}$ decays Yueling Yang Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China Xule Zhao Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China Shuangshi Fang Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China Jinshu Huang School of Physics and Electronic Engineering, Nanyang Normal University, Nanyang 473061, China Junfeng Sun Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China ###### Abstract The ${\eta}_{c}(1S)$ ${\to}$ $PP^{\prime}$ decays are parity-violating modes. These decays can be induced by the weak interactions within the standard model, and have been searched for based on the available experimental data. To meet the needs of experimental investigation, the ${\eta}_{c}(1S)$ ${\to}$ $PP^{\prime}$ decays are studied with the perturbative QCD approach. It is found that the branching ratios are of the order of $10^{-15}$ or less, which offers a ready reference for future analyses. Charmonium is a system containing a charmed quark and antiquark, $c\bar{c}$. Recently, the study of charmonium has attracted renewed interest due to the many new discoveries from the massive dedicated investigations by BES-II, CLEO-c, BES-III, BaBar, Belle, Belle-II and LHCb pdg2020 . The ${\eta}_{c}(1S)$ meson is commonly referred to as ${\eta}_{c}$. Both the total spin and the orbital angular momentum of the $c$ and $\bar{c}$ quarks in the ${\eta}_{c}$ are zero. The ${\eta}_{c}$ particle is the paracharmonium state with the well-established quantum numbers $J^{PC}$ $=$ $0^{-+}$ pdg2020 . Since its $J^{PC}$ differs from that of the photon, the ${\eta}_{c}$ cannot be directly produced in $e^{+}e^{-}$ collisions. 
However, the ${\eta}_{c}$ can be produced via the magnetic dipole transition process $J/{\psi}$ ${\to}$ ${\gamma}{\eta}_{c}$, with the branching ratio ${\cal B}r(J/{\psi}{\to}{\gamma}{\eta}_{c})$ $=$ $(1.7{\pm}0.4)\%$ pdg2020 . Up to now, over $10^{10}$ $J/{\psi}$ events have been collected with the BESIII detector dataweb , the largest available statistics, corresponding to more than $10^{8}$ ${\eta}_{c}$. Given the large $J/{\psi}$ production cross section, ${\sigma}$ ${\sim}$ $3400$ nb nimpra614.345 , it is expected that more than $10^{13}$ $J/{\psi}$, corresponding to more than $10^{11}$ ${\eta}_{c}$, could be accumulated at the Super Tau Charm Facility (STCF) with a $3\,{\rm ab}^{-1}$ on-resonance dataset in the future. This provides a good opportunity for studying the properties of the ${\eta}_{c}$ particle. Although there is a large amount of data, the experimental study of ${\eta}_{c}$ decays is comparatively limited. So far, only 33 exclusive ${\eta}_{c}$ decay modes have been reported with concrete numerical values. The sum of the 33 branching ratios is about 63%, and most of the measurements have very large uncertainties pdg2020 . The mass of the ${\eta}_{c}$ particle, $m_{{\eta}_{c}}$ $=$ $2983.9{\pm}0.5$ MeV pdg2020 , is the smallest among the charmonia, and lies below the open-charm threshold. So ${\eta}_{c}$ decays into hadronic states through the strong interactions are severely hindered by the phenomenological Okubo-Zweig-Iizuka (OZI) rule ozi-o ; ozi-z ; ozi-i . The $c\bar{c}$ quark pair in the ${\eta}_{c}$ state can annihilate into two gluons or into two photons, the latter with branching ratio ${\cal B}r({\eta}_{c}{\to}{\gamma}{\gamma})$ $=$ $(1.58{\pm}0.11){\times}10^{-4}$ pdg2020 . Among the nonleptonic ${\eta}_{c}$ decays, the simplest hadronic final states are two pseudoscalar mesons. 
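The yield estimates quoted above follow from simple arithmetic; a quick cross-check using only the numbers given in the text (cross section, luminosity, and the radiative branching ratio):

```python
# Cross-check of the eta_c yield estimates quoted in the text.
sigma_jpsi_b = 3400e-9    # J/psi production cross section: 3400 nb, in barns
lumi_inv_b = 3e18         # 3 ab^-1 in inverse barns (1 ab^-1 = 1e18 b^-1)
br_gamma_etac = 0.017     # Br(J/psi -> gamma eta_c) = (1.7 +- 0.4)%

n_jpsi = sigma_jpsi_b * lumi_inv_b   # expected J/psi yield at the STCF
n_etac = n_jpsi * br_gamma_etac      # corresponding eta_c yield

# BESIII analogue: >1e10 J/psi already collected implies >1e8 eta_c.
print(f"STCF: {n_jpsi:.2e} J/psi, {n_etac:.2e} eta_c")
```

The result reproduces the quoted orders of magnitude: more than $10^{13}$ $J/{\psi}$ and more than $10^{11}$ ${\eta}_{c}$.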
However, it should be pointed out that the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays are parity-violating modes, so they must be induced by the weak interactions rather than the strong and electromagnetic ones. The ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays were experimentally studied at BES-II and BES-III, but no significant signals have been observed, and only upper limits on the branching ratios have been obtained so far pdg2020 ; prd84.032006 ; epjc45.337 . As far as we know, there is no theoretical investigation of the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays yet. In this paper, in view of the future experimental prospects, we study the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays within the standard model (SM) of elementary particles, in order to offer a ready reference for future analyses. Within the SM, the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays are induced by the $W^{\pm}$ exchange interaction. At the quark level, based on the operator product expansion and the renormalization group (RG) method, the effective Hamiltonian governing the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays is written as rmp68.1125 , ${\cal H}_{\rm eff}\ =\ \frac{G_{F}}{\sqrt{2}}\,\sum\limits_{q_{1},q_{2}}\,V_{cq_{1}}\,V_{cq_{2}}^{\ast}\,\big{\\{}C_{1}({\mu})\,O_{1}({\mu})+C_{2}({\mu})\,O_{2}({\mu})\big{\\}}+{\rm h.c.},$ (1) where $G_{F}$ ${\simeq}$ $1.166{\times}10^{-5}\,{\rm GeV}^{-2}$ pdg2020 is the Fermi coupling constant, and $q_{1,2}$ $=$ $d$ and $s$. The averaged values of the Cabibbo-Kobayashi-Maskawa (CKM) elements are ${|}V_{cs}{|}$ $=$ $0.987(11)$ and ${|}V_{cd}{|}$ $=$ $0.221(4)$ pdg2020 . The parameter ${\mu}$ is a factorization scale, which divides the physical contributions into two parts, the short- and long-distance contributions. The Wilson coefficients $C_{1,2}$ summarize the short-distance physical contributions above the scale ${\mu}$. 
They are computable with RG-improved perturbation theory at the scale of the $W$ gauge-boson mass, $m_{W}$, and are then evolved down to a characteristic scale ${\mu}$ for $c$-quark decay, $\vec{C}({\mu})\,=\,U_{4}({\mu},m_{b})\,U_{5}(m_{b},m_{W})\,\vec{C}(m_{W}),$ (2) where the explicit expression of $U_{f}({\mu}_{f},{\mu}_{i})$ can be found in Ref. rmp68.1125 . The operators describing the local interactions among four quarks are defined as, $\displaystyle O_{1}$ $\displaystyle=$ $\displaystyle\big{[}\bar{c}_{\alpha}\,{\gamma}_{\mu}\,(1-{\gamma}_{5})\,q_{1,{\alpha}}\big{]}\,\big{[}\bar{q}_{2,{\beta}}\,{\gamma}^{\mu}\,(1-{\gamma}_{5})\,c_{\beta}\big{]},$ (3) $\displaystyle O_{2}$ $\displaystyle=$ $\displaystyle\big{[}\bar{c}_{\alpha}\,{\gamma}_{\mu}\,(1-{\gamma}_{5})\,q_{1,{\beta}}\big{]}\,\big{[}\bar{q}_{2,{\beta}}\,{\gamma}^{\mu}\,(1-{\gamma}_{5})\,c_{\alpha}\big{]},$ (4) where ${\alpha}$ and ${\beta}$ are color indices. The contributions of penguin operators are neglected because of the strong suppression from the CKM factors, $(V_{uq_{1}}\,V_{uq_{2}}^{\ast}+V_{cq_{1}}\,V_{cq_{2}}^{\ast})/(V_{cq_{1}}\,V_{cq_{2}}^{\ast})$ $=$ $-(V_{tq_{1}}\,V_{tq_{2}}^{\ast})/(V_{cq_{1}}\,V_{cq_{2}}^{\ast})$ $=$ ${\cal O}({\lambda}^{4})$ with the Wolfenstein parameter ${\lambda}$ ${\approx}$ $0.2$. The decay amplitudes can be written as, ${\cal A}({\eta}_{c}{\to}PP^{\prime})\,=\,{\langle}PP^{\prime}{|}{\cal H}_{\rm eff}{|}{\eta}_{c}{\rangle}\,=\,\frac{G_{F}}{\sqrt{2}}\,\sum\limits_{q_{1},q_{2}}\,V_{cq_{1}}\,V_{cq_{2}}^{\ast}\,\sum\limits_{i=1}^{2}\,C_{i}({\mu})\,{\langle}PP^{\prime}{|}O_{i}({\mu})\,{|}{\eta}_{c}{\rangle}.$ (5) In Eq. (5), the Fermi constant $G_{F}$ and the CKM elements ${|}V_{cs}{|}$ and ${|}V_{cd}{|}$ have been well determined from data, and the Wilson coefficients $C_{1,2}$ can be reliably computed. 
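For orientation, the evolution of Eq. (2) can be sketched at leading-log order, where the combinations $C_{\pm}$ evolve multiplicatively with exponents $d_{\pm}=-6/(33-2n_{f})$ and $+12/(33-2n_{f})$. The sketch below uses one-loop running with an assumed input ${\alpha}_{s}(m_{W})=0.121$; it is not the NLO evolution with the matrices $U_{f}$ of Ref. rmp68.1125 used in the actual analysis, and the assignment of $(C_{+}{\pm}C_{-})/2$ to $C_{1,2}$ depends on the operator-labeling convention.

```python
import math

def alpha_s(mu, nf, alpha_ref, mu_ref):
    """One-loop running strong coupling with nf active flavors (a sketch)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * b0 / (2.0 * math.pi) * math.log(mu / mu_ref))

m_W, m_b, mu_c = 80.38, 4.18, 1.67    # GeV; mu_c taken as m_c from Table 1
a_mW = 0.121                          # ASSUMED input value of alpha_s(m_W)
a_mb = alpha_s(m_b, 5, a_mW, m_W)     # run m_W -> m_b with nf = 5
a_mc = alpha_s(mu_c, 4, a_mb, m_b)    # run m_b -> mu_c with nf = 4

# Leading-log exponents d_+/- for nf active flavors.
d = lambda nf: (-6.0 / (33 - 2 * nf), 12.0 / (33 - 2 * nf))
dp5, dm5 = d(5)
dp4, dm4 = d(4)

# C_+/-(mu_c), taking C_+/-(m_W) = 1 at leading-log order.
C_plus = (a_mc / a_mb) ** dp4 * (a_mb / a_mW) ** dp5
C_minus = (a_mc / a_mb) ** dm4 * (a_mb / a_mW) ** dm5
print(f"C+ ~ {C_plus:.2f} (suppressed), C- ~ {C_minus:.2f} (enhanced)")
```

The qualitative pattern, $C_{-}$ enhanced above unity and $C_{+}$ suppressed below unity at the charm scale, is the standard leading-log result.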
So the remaining theoretical work is the evaluation of the hadronic matrix elements (HMEs) ${\langle}O_{i}{\rangle}$ $=$ ${\langle}PP^{\prime}{|}O_{i}({\mu})\,{|}{\eta}_{c}{\rangle}$. HMEs describe the complex transformation between quarks and hadrons, and contain both perturbative and nonperturbative contributions. Recently, QCD-inspired phenomenological models, such as the QCD factorization (QCDF) approach prl83.1914 ; npb591.313 ; npb606.245 ; plb488.46 ; plb509.263 ; prd64.014036 and the perturbative QCD (pQCD) approach prl74.4388 ; plb348.597 ; prd52.3958 ; prd63.074006 ; prd63.054008 ; prd63.074009 ; plb555.197 , have been proposed and widely applied to evaluate HMEs. With these phenomenological models, HMEs are generally written as the convolution of scattering amplitudes and hadronic wave functions (WFs). The scattering amplitudes and WFs reflect the contributions at the quark and hadron levels, respectively. The scattering amplitudes arising from hard gluon exchanges among quarks are calculable with perturbation theory. WFs, which encode the momentum distributions of the hadrons' constituents, are universal, and can be obtained by nonperturbative methods or from data. In practical calculations, the pQCD approach introduces transverse momenta and Sudakov factors to provide an effective cutoff for the endpoint singularities arising from the collinear approximation. In this paper, we investigate the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays with the pQCD approach, where the decay amplitudes are expressed as the convolution integral of three parts: the Wilson coefficients $C_{i}$, scattering amplitudes ${\cal H}$ and hadronic WFs ${\Phi}$. 
$\displaystyle{\cal A}_{i}$ $\displaystyle=$ $\displaystyle{\int}dx_{1}\,dx_{2}\,dx_{3}\,db_{1}\,db_{2}\,db_{3}\,C_{i}(t_{i})\,{\cal H}_{i}(x_{1},x_{2},x_{3},b_{1},b_{2},b_{3})$ (6) $\displaystyle\quad{\Phi}_{{\eta}_{c}}(x_{1},b_{1})\,e^{-S_{{\eta}_{c}}}\,{\Phi}_{P}(x_{2},b_{2})\,e^{-S_{P}}\,{\Phi}_{P^{\prime}}(x_{3},b_{3})\,e^{-S_{P^{\prime}}},$ where $x_{i}$ is the longitudinal momentum fraction of the valence quark, $b_{i}$ is the conjugate variable of the transverse momentum, and $e^{-S_{i}}$ is the Sudakov factor. In the calculation, it is convenient to use light-cone vectors to define the kinematic variables. In the rest frame of the ${\eta}_{c}$ meson, one has $p_{{\eta}_{c}}\,=\,p_{1}\,=\,\frac{m_{{\eta}_{c}}}{\sqrt{2}}(1,1,0),$ (7) $p_{P}\,=\,p_{2}\,=\,\frac{m_{{\eta}_{c}}}{\sqrt{2}}(1,0,0),$ (8) $p_{P^{\prime}}\,=\,p_{3}\,=\,\frac{m_{{\eta}_{c}}}{\sqrt{2}}(0,1,0),$ (9) $k_{1}\,=\,\frac{m_{{\eta}_{c}}}{\sqrt{2}}(x_{1},x_{1},\vec{k}_{1T}),$ (10) $k_{2}\,=\,\frac{m_{{\eta}_{c}}}{\sqrt{2}}(x_{2},0,\vec{k}_{2T}),$ (11) $k_{3}\,=\,\frac{m_{{\eta}_{c}}}{\sqrt{2}}(0,x_{3},\vec{k}_{3T}),$ (12) where $k_{i}$, $x_{i}$ and $\vec{k}_{iT}$ are respectively the momentum, longitudinal momentum fraction and transverse momentum, as shown in Fig. 1 (a). Figure 1: The Feynman diagrams for the ${\eta}_{c}$ ${\to}$ $K^{-}K^{+}$ decay with the pQCD approach, where (a,b) are factorizable diagrams, and (c,d) are nonfactorizable diagrams. The dots denote appropriate interactions, and the dashed circles denote scattering amplitudes. With the convention of Refs. epjc73.2437 ; jhep0605.004 ; prd65.014007 ; 2012.10581 , the WFs and distribution amplitudes (DAs) are defined as follows. 
${\langle}\,0\,{|}\,\bar{c}_{\alpha}(0)c_{\beta}(z)\,{|}{\eta}_{c}(p_{1})\,{\rangle}\,=\,-\frac{i}{4}\,f_{{\eta}_{c}}\,{\int}_{0}^{1}\,dx_{1}\,e^{-i\,k_{1}{\cdot}z}\,\big{\\{}\big{[}\\!\not{p}_{1}\,{\phi}_{{\eta}_{c}}^{a}+m_{{\eta}_{c}}\,{\phi}_{{\eta}_{c}}^{p}\big{]}\,{\gamma}_{5}\big{\\}}_{{\beta}{\alpha}},$ (13) $\displaystyle{\langle}\,P(p_{2})\,{|}\,\bar{q}_{\alpha}(0)\,q_{1{\beta}}(z)\,{|}\,0\,{\rangle}$ (14) $\displaystyle=$ $\displaystyle-\frac{i\,f_{P}}{4}\,{\int}_{0}^{1}\,dx_{2}\,e^{+i\,k_{2}{\cdot}z}\,\big{\\{}{\gamma}_{5}\,\big{[}\\!\not{p}_{2}\,{\phi}_{P}^{a}+{\mu}_{P}\,{\phi}_{P}^{p}-{\mu}_{P}\,\big{(}\\!\not{n}_{+}\\!\not{n}_{-}-1\big{)}\,{\phi}_{P}^{t}\big{]}\big{\\}}_{{\beta}{\alpha}},$ $\displaystyle{\langle}\,P^{\prime}(p_{3})\,{|}\,\bar{q}_{2{\alpha}}(0)\,q_{\beta}(z)\,{|}\,0\,{\rangle}$ (15) $\displaystyle=$ $\displaystyle-\frac{i\,f_{P^{\prime}}}{4}\,{\int}_{0}^{1}\,dx_{3}\,e^{+i\,k_{3}{\cdot}z}\,\big{\\{}{\gamma}_{5}\,\big{[}\\!\not{p}_{3}\,{\phi}_{P^{\prime}}^{a}+{\mu}_{P^{\prime}}\,{\phi}_{P^{\prime}}^{p}-{\mu}_{P^{\prime}}\,\big{(}\\!\not{n}_{-}\\!\not{n}_{+}-1\big{)}\,{\phi}_{P^{\prime}}^{t}\big{]}\big{\\}}_{{\beta}{\alpha}},$ where $f_{{\eta}_{c}}$ and $f_{P,P^{\prime}}$ are decay constants, and ${\mu}_{P,P^{\prime}}$ $=$ $1.6{\pm}0.2$ GeV jhep0605.004 are the chiral masses. $n_{+}$ $=$ $(1,0,0)$ and $n_{-}$ $=$ $(0,1,0)$ are the null vectors. The explicit expressions of ${\phi}_{{\eta}_{c}}^{a,p}$ and ${\phi}_{P}^{a,p,t}$ can be found in Ref. epjc73.2437 and Refs. 2012.10581 ; jhep0605.004 , respectively. We collect and display these WFs and DAs as follows. 
${\phi}_{{\eta}_{c}}^{a}(x,b)\,=\,N^{a}\,x\,\bar{x}\,{\exp}\Big{\\{}-\frac{m_{c}}{\omega}\,x\,\bar{x}\,\Big{[}\Big{(}\frac{x-\bar{x}}{2\,x\,\bar{x}}\Big{)}^{2}+{\omega}^{2}\,b^{2}\Big{]}\Big{\\}},$ (16) ${\phi}_{{\eta}_{c}}^{p}(x,b)\,=\,N^{p}\,{\exp}\Big{\\{}-\frac{m_{c}}{\omega}\,x\,\bar{x}\,\Big{[}\Big{(}\frac{x-\bar{x}}{2\,x\,\bar{x}}\Big{)}^{2}+{\omega}^{2}\,b^{2}\Big{]}\Big{\\}},$ (17) ${\phi}_{P}^{a}(x)\,=\,6\,x\,\bar{x}\,\big{\\{}1+a_{1}^{P}\,C_{1}^{3/2}({\xi})+a_{2}^{P}\,C_{2}^{3/2}({\xi})\big{\\}},$ (18) $\displaystyle{\phi}_{P}^{p}(x)$ $\displaystyle=$ $\displaystyle 1+3\,{\rho}_{+}^{P}-9\,{\rho}_{-}^{P}\,a_{1}^{P}+18\,{\rho}_{+}^{P}\,a_{2}^{P}$ (19) $\displaystyle+$ $\displaystyle\frac{3}{2}\,({\rho}_{+}^{P}+{\rho}_{-}^{P})\,(1-3\,a_{1}^{P}+6\,a_{2}^{P})\,{\ln}(x)$ $\displaystyle+$ $\displaystyle\frac{3}{2}\,({\rho}_{+}^{P}-{\rho}_{-}^{P})\,(1+3\,a_{1}^{P}+6\,a_{2}^{P})\,{\ln}(\bar{x})$ $\displaystyle-$ $\displaystyle(\frac{3}{2}\,{\rho}_{-}^{P}-\frac{27}{2}\,{\rho}_{+}^{P}\,a_{1}^{P}+27\,{\rho}_{-}^{P}\,a_{2}^{P})\,C_{1}^{1/2}(\xi)$ $\displaystyle+$ $\displaystyle(30\,{\eta}_{P}-3\,{\rho}_{-}^{P}\,a_{1}^{P}+15\,{\rho}_{+}^{P}\,a_{2}^{P})\,C_{2}^{1/2}(\xi),$ $\displaystyle{\phi}_{P}^{t}(x)$ $\displaystyle=$ $\displaystyle\frac{3}{2}\,({\rho}_{-}^{P}-3\,{\rho}_{+}^{P}\,a_{1}^{P}+6\,{\rho}_{-}^{P}\,a_{2}^{P})$ (20) $\displaystyle-$ $\displaystyle C_{1}^{1/2}(\xi)\big{\\{}1+3\,{\rho}_{+}^{P}-12\,{\rho}_{-}^{P}\,a_{1}^{P}+24\,{\rho}_{+}^{P}\,a_{2}^{P}$ $\displaystyle\quad+\frac{3}{2}\,({\rho}_{+}^{P}+{\rho}_{-}^{P})\,(1-3\,a_{1}^{P}+6\,a_{2}^{P})\,{\ln}(x)$ $\displaystyle\quad+\frac{3}{2}\,({\rho}_{+}^{P}-{\rho}_{-}^{P})\,(1+3\,a_{1}^{P}+6\,a_{2}^{P})\,{\ln}(\bar{x})\big{\\}}$ $\displaystyle-$ $\displaystyle 3\,(3\,{\rho}_{+}^{P}\,a_{1}^{P}-\frac{15}{2}\,{\rho}_{-}^{P}\,a_{2}^{P})\,C_{2}^{1/2}(\xi),$ where $\bar{x}$ $=$ $1$ $-$ $x$ and ${\xi}$ $=$ $x$ $-$ $\bar{x}$ $=$ $2\,x$ $-$ $1$. ${\omega}$ $=$ $m_{c}\,{\alpha}_{s}(m_{c})$ is the shape parameter. 
The parameters $N^{a,p}$ are determined by the normalization conditions, ${\int}{\phi}_{{\eta}_{c}}^{a,p}(x,0)\,dx\,=\,1.$ (21) The meanings and definitions of the other parameters can be found in Refs. 2012.10581 ; jhep0605.004 . From Fig. 1, it can be clearly seen that there are only annihilation amplitudes for the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays in the SM, because the valence quarks of the final states are entirely different from those of the initial state. The annihilation contributions are necessary and important in nonleptonic two-body $B$ meson decays prd65.074001 ; prd65.094025 ; prd68.054003 ; npb675.333 ; npb774.64 ; prd90.054019 ; prd91.074026 ; plb740.56 ; plb743.444 ; plb504.6 ; prd70.034009 ; prd85.094003 ; prd76.074018 ; prd88.014043 . The ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays offer additional processes with which to investigate the annihilation contributions within the factorization approaches, besides the $B_{d}$ ${\to}$ $K^{+}K^{-}$ and $B_{s}$ ${\to}$ ${\pi}{\pi}$ decays. The decay amplitudes are written as follows. 
${\cal A}({\eta}_{c}{\to}K^{+}K^{-})\,=\,\frac{G_{F}}{\sqrt{2}}\,V_{cs}\,V_{cs}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}(\overline{K},K)+C_{1}\,{\cal A}_{cd}(\overline{K},K)\big{\\}},$ (22) $\displaystyle{\cal A}({\eta}_{c}{\to}K^{0}\overline{K}^{0})$ $\displaystyle=$ $\displaystyle\frac{G_{F}}{\sqrt{2}}\,\big{\\{}V_{cs}\,V_{cs}^{\ast}\,\big{[}a_{2}\,{\cal A}_{ab}(\overline{K},K)+C_{1}\,{\cal A}_{cd}(\overline{K},K)\big{]}$ (23) $\displaystyle\hskip 17.34189pt+V_{cd}\,V_{cd}^{\ast}\,\big{[}a_{2}\,{\cal A}_{ab}(K,\overline{K})+C_{1}\,{\cal A}_{cd}(K,\overline{K})\big{]}\big{\\}},$ ${\cal A}({\eta}_{c}{\to}{\pi}^{+}K^{-})\,=\,\frac{G_{F}}{\sqrt{2}}\,V_{cd}\,V_{cs}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}(\overline{K},{\pi})+C_{1}\,{\cal A}_{cd}(\overline{K},{\pi})\big{\\}},$ (24) ${\cal A}({\eta}_{c}{\to}{\pi}^{0}\overline{K}^{0})\,=\,-\frac{G_{F}}{2}\,V_{cd}\,V_{cs}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}(\overline{K},{\pi})+C_{1}\,{\cal A}_{cd}(\overline{K},{\pi})\big{\\}},$ (25) ${\cal A}({\eta}_{c}{\to}{\pi}^{+}{\pi}^{-})\,=\,\frac{G_{F}}{\sqrt{2}}\,V_{cd}\,V_{cd}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}({\pi},{\pi})+C_{1}\,{\cal A}_{cd}({\pi},{\pi})\big{\\}},$ (26) $\sqrt{2}\,{\cal A}({\eta}_{c}{\to}{\pi}^{0}{\pi}^{0})\,=\,\frac{G_{F}}{\sqrt{2}}\,V_{cd}\,V_{cd}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}({\pi},{\pi})+C_{1}\,{\cal A}_{cd}({\pi},{\pi})\big{\\}},$ (27) ${\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta}_{s})\,=\,\frac{G_{F}}{\sqrt{2}}\,V_{cd}\,V_{cs}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}({\eta}_{s},\overline{K})+C_{1}\,{\cal A}_{cd}({\eta}_{s},\overline{K})\big{\\}},$ (28) ${\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta}_{q})\,=\,\frac{G_{F}}{2}\,V_{cd}\,V_{cs}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}(\overline{K},{\eta}_{q})+C_{1}\,{\cal A}_{cd}(\overline{K},{\eta}_{q})\big{\\}},$ (29) ${\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta})\,=\,{\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta}_{q})\,{\cos}{\phi}-{\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta}_{s})\,{\sin}{\phi},$ 
(30) ${\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta}^{\prime})\,=\,{\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta}_{q})\,{\sin}{\phi}+{\cal A}({\eta}_{c}{\to}\overline{K}^{0}{\eta}_{s})\,{\cos}{\phi},$ (31) $\displaystyle{\cal A}({\eta}_{c}{\to}{\pi}^{0}{\eta}_{q})$ $\displaystyle=$ $\displaystyle-\frac{1}{2}\,\frac{G_{F}}{\sqrt{2}}\,V_{cd}\,V_{cd}^{\ast}\,\big{\\{}a_{2}\,\big{[}{\cal A}_{ab}({\pi},{\eta}_{q})+{\cal A}_{ab}({\eta}_{q},{\pi})\big{]}$ (32) $\displaystyle\qquad\qquad\qquad+C_{1}\,\big{[}{\cal A}_{cd}({\pi},{\eta}_{q})+{\cal A}_{cd}({\eta}_{q},{\pi})\big{]}\big{\\}},$ ${\cal A}({\eta}_{c}{\to}{\pi}^{0}{\eta})\,=\,{\cal A}({\eta}_{c}{\to}{\pi}^{0}{\eta}_{q})\,{\cos}{\phi},$ (33) ${\cal A}({\eta}_{c}{\to}{\pi}^{0}{\eta}^{\prime})\,=\,{\cal A}({\eta}_{c}{\to}{\pi}^{0}{\eta}_{q})\,{\sin}{\phi},$ (34) ${\cal A}({\eta}_{c}{\to}{\eta}_{s}{\eta}_{s})\,=\,\sqrt{2}\,G_{F}\,V_{cs}\,V_{cs}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}({\eta}_{s},{\eta}_{s})+C_{1}\,{\cal A}_{cd}({\eta}_{s},{\eta}_{s})\big{\\}},$ (35) ${\cal A}({\eta}_{c}{\to}{\eta}_{q}{\eta}_{q})\,=\,\frac{G_{F}}{\sqrt{2}}\,V_{cd}\,V_{cd}^{\ast}\,\big{\\{}a_{2}\,{\cal A}_{ab}({\eta}_{q},{\eta}_{q})+C_{1}\,{\cal A}_{cd}({\eta}_{q},{\eta}_{q})\big{\\}},$ (36) $\sqrt{2}\,{\cal A}({\eta}_{c}{\to}{\eta}{\eta})\,=\,{\cal A}({\eta}_{c}{\to}{\eta}_{q}{\eta}_{q})\,{\cos}^{2}{\phi}+{\cal A}({\eta}_{c}{\to}{\eta}_{s}{\eta}_{s})\,{\sin}^{2}{\phi},$ (37) ${\cal A}({\eta}_{c}{\to}{\eta}{\eta}^{\prime})\,=\,\big{\\{}{\cal A}({\eta}_{c}{\to}{\eta}_{q}{\eta}_{q})-{\cal A}({\eta}_{c}{\to}{\eta}_{s}{\eta}_{s})\big{\\}}\,{\sin}{\phi}\,{\cos}{\phi},$ (38) $\sqrt{2}\,{\cal A}({\eta}_{c}{\to}{\eta}^{\prime}{\eta}^{\prime})\,=\,{\cal A}({\eta}_{c}{\to}{\eta}_{q}{\eta}_{q})\,{\sin}^{2}{\phi}+{\cal A}({\eta}_{c}{\to}{\eta}_{s}{\eta}_{s})\,{\cos}^{2}{\phi},$ (39) where the amplitude building blocks ${\cal A}_{ij}$ are listed in Appendix A. As for the above decay amplitudes, there are two comments. 
(1) The isoscalar states ${\eta}$ and ${\eta}^{\prime}$, with the same $J^{PC}$, are mixtures of the $SU(3)$ octet and singlet states. In our calculation, we adopt the quark-flavor basis description proposed in Ref. prd58.114006 , i.e., $\left(\begin{array}[]{c}{\eta}\\\ {\eta}^{\prime}\end{array}\right)\,=\,\left(\begin{array}[]{cc}{\cos}{\phi}&-{\sin}{\phi}\\\ {\sin}{\phi}&{\cos}{\phi}\end{array}\right)\,\left(\begin{array}[]{c}{\eta}_{q}\\\ {\eta}_{s}\end{array}\right),$ (40) where the mixing angle is ${\phi}$ $=$ $39.3(1.0)^{\circ}$, and the flavor bases are ${\eta}_{q}$ $=$ $(u\bar{u}+d\bar{d})/{\sqrt{2}}$ and ${\eta}_{s}$ $=$ $s\bar{s}$ prd58.114006 . Here, it is assumed that (a) the glueball, charmonium and bottomonium components are negligible, and (b) the DAs of ${\eta}_{q}$ and ${\eta}_{s}$ are the same as those of the ${\pi}$ meson, but with different decay constants and masses prd58.114006 ; prd76.074018 ; prd89.114019 , $f_{q}\,=\,1.07(2)\,f_{\pi},$ (41) $f_{s}\,=\,1.34(6)\,f_{\pi},$ (42) $m_{{\eta}_{q}}^{2}\,=\,m_{\eta}^{2}\,{\cos}^{2}{\phi}+m_{{\eta}^{\prime}}^{2}\,{\sin}^{2}{\phi}-\frac{\sqrt{2}\,f_{s}}{f_{q}}(m_{{\eta}^{\prime}}^{2}-m_{\eta}^{2})\,{\cos}{\phi}\,{\sin}{\phi},$ (43) $m_{{\eta}_{s}}^{2}\,=\,m_{\eta}^{2}\,{\sin}^{2}{\phi}+m_{{\eta}^{\prime}}^{2}\,{\cos}^{2}{\phi}-\frac{f_{q}}{\sqrt{2}\,f_{s}}(m_{{\eta}^{\prime}}^{2}-m_{\eta}^{2})\,{\cos}{\phi}\,{\sin}{\phi}.$ (44) (2) One distinctive feature is that the amplitudes for the ${\eta}_{c}$ ${\to}$ $K^{+}K^{-}$ and ${\pi}^{+}{\pi}^{-}$ decays are respectively proportional to the modulus squared of the CKM elements $V_{cs}$ and $V_{cd}$. It is well known that the magnitudes of $V_{cs}$ and $V_{cd}$ are extracted from leptonic and semileptonic charm decays. If these nonleptonic decay modes are accurately measured in the future, they could offer alternative determinations of, or constraints on, ${|}V_{cs}{|}$ and ${|}V_{cd}{|}$. 
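For concreteness, the mixing relations Eqs. (41)-(44) can be evaluated numerically with ${\phi}=39.3^{\circ}$, $f_{\pi}=130.2$ MeV, and the PDG ${\eta}$ and ${\eta}^{\prime}$ masses from Table 1; a minimal sketch:

```python
import math

# Direct numerical evaluation of the quark-flavor mixing relations,
# Eqs. (41)-(44), using the inputs quoted in the text (central values).
phi = math.radians(39.3)
f_pi = 0.1302                      # GeV
m_eta, m_etap = 0.54786, 0.95778   # GeV

f_q = 1.07 * f_pi                  # Eq. (41)
f_s = 1.34 * f_pi                  # Eq. (42)
c, s = math.cos(phi), math.sin(phi)
dm2 = m_etap**2 - m_eta**2

# Eqs. (43) and (44): masses squared of the flavor-basis states.
m2_q = m_eta**2 * c**2 + m_etap**2 * s**2 - math.sqrt(2.0) * f_s / f_q * dm2 * c * s
m2_s = m_eta**2 * s**2 + m_etap**2 * c**2 - f_q / (math.sqrt(2.0) * f_s) * dm2 * c * s

print(f"m_eta_q ~ {math.sqrt(m2_q)*1e3:.0f} MeV, m_eta_s ~ {math.sqrt(m2_s)*1e3:.0f} MeV")
```

With these central values, $m_{{\eta}_{q}}$ comes out close to the pion mass scale (roughly 0.11 GeV) and $m_{{\eta}_{s}}$ near 0.7 GeV, as expected in the quark-flavor scheme.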
Table 1: The values of the input parameters, where their central values will be regarded as the default inputs unless otherwise specified. The numbers in parentheses are errors. mass, width and decay constants of the particles pdg2020
$m_{{\pi}^{0}}$ $=$ $134.98$ MeV | $m_{K^{0}}$ $=$ $497.61$ MeV | $f_{{\pi}}$ $=$ $130.2(1.2)$ MeV
$m_{{\pi}^{\pm}}$ $=$ $139.57$ MeV | $m_{K^{\pm}}$ $=$ $493.68$ MeV | $f_{K}$ $=$ $155.7(3)$ MeV
$m_{\eta}$ $=$ $547.86$ MeV | $m_{{\eta}^{\prime}}$ $=$ $957.78$ MeV | $f_{{\eta}_{c}}$ $=$ $398.1(1.0)$ MeV prd102.054511
$m_{c}$ $=$ $1.67(7)$ GeV | $m_{{\eta}_{c}}$ $=$ $2983.9(5)$ MeV | ${\Gamma}_{{\eta}_{c}}$ $=$ $32.1(8)$ MeV
Gegenbauer moments at the scale ${\mu}$ $=$ 1 GeV jhep0605.004
$a_{1}^{\pi}$ $=$ $0$ | $a_{2}^{\pi}$ $=$ $0.25(15)$ | $a_{1}^{K}$ $=$ $0.06(3)$ | $a_{2}^{K}$ $=$ $0.25(15)$
Table 2: Branching ratios for the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays, where the uncertainties come from $m_{c}$, ${\mu}_{P}$ and $a_{2}^{P}$, respectively. 
modes | branching ratio | modes | branching ratio
---|---|---|---
${\eta}_{c}$ ${\to}$ $K^{+}K^{-}$ | $(1.47^{+0.14+0.63+0.22}_{-0.13-0.48-0.19}){\times}10^{-15}$ | ${\eta}_{c}$ ${\to}$ $\overline{K}^{0}{\eta}$ | $(4.36^{+0.12+1.07+14.18}_{-0.12-0.95-~{}1.60}){\times}10^{-17}$
${\eta}_{c}$ ${\to}$ $K^{0}\overline{K}^{0}$ | $(1.55^{+0.15+0.67+0.22}_{-0.13-0.51-0.20}){\times}10^{-15}$ | ${\eta}_{c}$ ${\to}$ $\overline{K}^{0}{\eta}^{\prime}$ | $(1.84^{+0.08+0.57+1.89}_{-0.08-0.46-1.24}){\times}10^{-16}$
${\eta}_{c}$ ${\to}$ ${\pi}^{+}K^{-}$ | $(5.15^{+0.41+1.61+17.00}_{-0.38-1.22-~{}4.32}){\times}10^{-17}$ | ${\eta}_{c}$ ${\to}$ ${\pi}^{0}{\eta}$ | $(9.18^{+0.85+4.15+1.35}_{-0.77-3.07-1.18}){\times}10^{-19}$
${\eta}_{c}$ ${\to}$ ${\pi}^{0}\overline{K}^{0}$ | $(2.65^{+0.21+0.81+8.55}_{-0.19-0.62-2.22}){\times}10^{-17}$ | ${\eta}_{c}$ ${\to}$ ${\pi}^{0}{\eta}^{\prime}$ | $(5.71^{+0.53+2.58+0.84}_{-0.48-1.91-0.73}){\times}10^{-19}$
${\eta}_{c}$ ${\to}$ ${\pi}^{+}{\pi}^{-}$ | $(1.38^{+0.13+0.62+0.20}_{-0.12-0.46-0.18}){\times}10^{-18}$ | ${\eta}_{c}$ ${\to}$ ${\eta}{\eta}$ | $(5.08^{+0.47+2.18+1.02}_{-0.42-1.54-0.92}){\times}10^{-16}$
${\eta}_{c}$ ${\to}$ ${\pi}^{0}{\pi}^{0}$ | $(6.91^{+0.64+3.12+1.02}_{-0.58-2.32-0.89}){\times}10^{-19}$ | ${\eta}_{c}$ ${\to}$ ${\eta}{\eta}^{\prime}$ | $(1.30^{+0.12+0.55+0.26}_{-0.11-0.39-0.24}){\times}10^{-15}$
 | | ${\eta}_{c}$ ${\to}$ ${\eta}^{\prime}{\eta}^{\prime}$ | $(9.12^{+0.84+3.89+1.84}_{-0.76-2.75-1.66}){\times}10^{-16}$
The branching ratio is defined as follows. ${\cal B}r\,=\,\frac{p_{\rm cm}}{8\,{\pi}\,m_{{\eta}_{c}}^{2}\,{\Gamma}_{{\eta}_{c}}}\,{|}{\cal A}({\eta}_{c}{\to}PP^{\prime}){|}^{2},$ (45) where $p_{\rm cm}$ is the center-of-mass momentum of the final states in the rest frame of the ${\eta}_{c}$ meson. The numerical results for the branching ratios, obtained with the input parameters in Table 1, are listed in Table 2. Our comments are listed as follows. 
(1) Almost all of the decay width of the ${\eta}_{c}$ meson should come from the strong interactions. The parity-violating ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays can only be induced by the weak interactions. For the ${\eta}_{c}$ meson, compared with the strong decays, the probability of a weak decay is tiny, only about $1/{\tau}_{D}\,{\Gamma}_{{\eta}_{c}}$ ${\sim}$ ${\cal O}(10^{-11})$. By analogy with nonleptonic $B$ meson decays, the pure annihilation decay modes are dynamically suppressed by helicity. Branching ratios for the pure annihilation $B_{s}$ ${\to}$ ${\pi}{\pi}$ decays are about $4$ orders of magnitude smaller than that of the $B_{s}$ ${\to}$ $D_{s}{\pi}$ decay pdg2020 . So it is not difficult to imagine that the pure annihilation ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays should have very small branching ratios, about $10^{-15}$ or less. (2) It turns out that branching ratios for the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays within the SM are of the order of $10^{-15}$ ${\sim}$ $10^{-19}$, far below the measurable precision of BES-III and the future STCF. However, these branching ratios are not as small as the order of $10^{-27}$ expected in Ref. prd84.032006 . In particular, the branching ratios for the ${\eta}_{c}$ ${\to}$ $K\overline{K}$ decays can reach the order of $10^{-15}$ even without a considerable additional contribution from new physics (NP) beyond the SM. The observation of these decays at any level in the next few decades would be a signal of parity violation from new sources and a hint of NP. (3) The ${\eta}_{c}$ ${\to}$ $K\overline{K}$ decays are Cabibbo-favored. The ${\eta}_{c}$ ${\to}$ ${\pi}\overline{K}$ decays are singly Cabibbo-suppressed. And the ${\eta}_{c}$ ${\to}$ ${\pi}{\pi}$ decays are doubly Cabibbo-suppressed. In addition, the decay constants satisfy $f_{K}$ $>$ $f_{\pi}$.
Hence, there is a clear hierarchical pattern among the branching ratios, ${\cal B}r({\eta}_{c}{\to}K\overline{K})\,>\,{\cal B}r({\eta}_{c}{\to}{\pi}\overline{K})\,>\,{\cal B}r({\eta}_{c}{\to}{\pi}{\pi}).$ (46) If there are enough experimental data to study the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays in the future, the Cabibbo-favored ${\eta}_{c}$ ${\to}$ $K\overline{K}$ decays are the most likely to be measured first. (4) The study of the pure annihilation ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays further confirms that when the two final states are a particle-antiparticle pair, such as $K\overline{K}$ and ${\pi}{\pi}$, the factorizable annihilation contributions from Fig. 1 (a) and (b) exactly cancel each other because of isospin symmetry, as analyzed in Refs. npb606.245 ; prd70.034009 ; prd85.094003 . In addition, the interferences between the nonfactorizable annihilation amplitudes for Fig. 1 (c) and (d) are destructive for ${\eta}_{c}$ decays because of the opposite signs of the momenta of the charm quark propagators. These factors also lead to the small branching ratios for the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays. In summary, the parity-violating ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays can be investigated with the available BES-II and BES-III data, but a corresponding theoretical study has long been lacking. In this paper, considering the experimental needs and the high enthusiasm for searching for NP at the intensity frontier, the ${\eta}_{c}$ ${\to}$ $PP^{\prime}$ decays are studied with the pQCD approach within the SM. It is found that branching ratios for the concerned processes are of the order of $10^{-15}$ or less, beyond the current detection capability. This study offers a ready reference for future analyses. ## Acknowledgments The work is supported by the National Natural Science Foundation of China (Grant Nos. 11705047, U1632109 and 11547014).
## Appendix A Building blocks of decay amplitudes For the sake of convenience in writing, some shorthands are used. ${\phi}_{{\eta}_{c}}^{a,p}\,=\,{\phi}_{{\eta}_{c}}^{a,p}(x_{1})\,e^{-S_{{\eta}_{c}}},$ (47) ${\phi}_{P,P^{\prime}}^{a}\,=\,{\phi}_{P,P^{\prime}}^{a}(x_{i})\,e^{-S_{P,P^{\prime}}},$ (48) ${\phi}_{P,P^{\prime}}^{p,t}\,=\,\frac{{\mu}_{P}}{m_{{\eta}_{c}}}\,{\phi}_{P,P^{\prime}}^{p,t}(x_{i})\,e^{-S_{P,P^{\prime}}},$ (49) $C_{i}\,{\cal A}_{jk}(P^{\prime},P)\,=\,i\,m_{{\eta}_{c}}^{4}\,f_{{\eta}_{c}}\,f_{P}\,f_{P^{\prime}}\,\frac{{\pi}\,C_{F}}{N_{c}}\,\big{\\{}{\cal A}_{j}(P^{\prime},P,C_{i})+{\cal A}_{k}(P^{\prime},P,C_{i})\big{\\}}.$ (50) The subscript $i$ of building block ${\cal A}_{i}$ corresponds to the indices of Fig. 1. The expressions of $\mathcal{A}_{i}$ are written as follows. $\displaystyle{\cal A}_{a}$ $\displaystyle=$ $\displaystyle{\int}_{0}^{1}dx_{2}\,dx_{3}{\int}_{0}^{\infty}db_{2}\,db_{3}\,{\alpha}_{s}(t_{a})\,H_{a}({\alpha}_{g},{\beta}_{a},b_{2},b_{3})\,C_{i}(t_{a})$ (51) $\displaystyle\quad\ S_{t}(\bar{x}_{2})\,\big{\\{}{\phi}_{P}^{a}\,{\phi}_{P^{\prime}}^{a}\,\bar{x}_{2}+2\,{\phi}_{P^{\prime}}^{p}\,\big{[}{\phi}_{P}^{p}\,(1+\bar{x}_{2})+{\phi}_{P}^{t}\,x_{2}\big{]}\big{\\}},$ $\displaystyle{\cal A}_{b}$ $\displaystyle=$ $\displaystyle-{\int}_{0}^{1}dx_{2}\,dx_{3}{\int}_{0}^{\infty}db_{2}\,db_{3}\,{\alpha}_{s}(t_{b})\,H_{b}({\alpha}_{g},{\beta}_{b},b_{2},b_{3})\,C_{i}(t_{b})$ (52) $\displaystyle\qquad S_{t}(x_{3})\,\big{\\{}{\phi}_{P}^{a}\,{\phi}_{P^{\prime}}^{a}\,x_{3}+2\,{\phi}_{P}^{p}\,\big{[}{\phi}_{P^{\prime}}^{p}\,(1+x_{3})-{\phi}_{P^{\prime}}^{t}\,\bar{x}_{3}\big{]}\big{\\}},$ $\displaystyle{\cal A}_{c}$ $\displaystyle=$ $\displaystyle\frac{1}{N_{c}}\,{\int}_{0}^{1}dx_{1}\,dx_{2}\,dx_{3}{\int}_{0}^{\infty}db_{1}\,db_{2}\,{\alpha}_{s}(t_{c})\,H_{c}({\alpha}_{g},{\beta}_{c},b_{1},b_{2})\,C_{i}(t_{c})$ (53) $\displaystyle\quad\ 
\big{\\{}{\phi}_{{\eta}_{c}}^{a}\,\big{[}{\phi}_{P}^{a}\,{\phi}_{P^{\prime}}^{a}\,(x_{3}-x_{1})+\big{(}{\phi}_{P}^{p}\,{\phi}_{P^{\prime}}^{t}-{\phi}_{P}^{t}\,{\phi}_{P^{\prime}}^{p}\big{)}\,(x_{3}-\bar{x}_{2})$ $\displaystyle\quad\qquad+\big{(}{\phi}_{P}^{p}\,{\phi}_{P^{\prime}}^{p}-{\phi}_{P}^{t}\,{\phi}_{P^{\prime}}^{t}\big{)}\,(x_{3}+\bar{x}_{2}-2\,x_{1})\big{]}$ $\displaystyle\quad\ +{\phi}_{{\eta}_{c}}^{p}\big{[}\frac{1}{2}\,{\phi}_{P}^{a}\,{\phi}_{P^{\prime}}^{a}\,+2\,{\phi}_{P}^{p}\,{\phi}_{P^{\prime}}^{p}\big{]}\big{\\}}_{b_{2}=b_{3}},$ $\displaystyle{\cal A}_{d}$ $\displaystyle=$ $\displaystyle\frac{1}{N_{c}}\,{\int}_{0}^{1}dx_{1}\,dx_{2}\,dx_{3}{\int}_{0}^{\infty}db_{1}\,db_{2}\,{\alpha}_{s}(t_{d})\,H_{d}({\alpha}_{g},{\beta}_{d},b_{1},b_{2})\,C_{i}(t_{d})$ (54) $\displaystyle\quad\ \big{\\{}{\phi}_{{\eta}_{c}}^{a}\,\big{[}{\phi}_{P}^{a}\,{\phi}_{P^{\prime}}^{a}\,(x_{2}-x_{1})+\big{(}{\phi}_{P}^{p}\,{\phi}_{P^{\prime}}^{t}-{\phi}_{P}^{t}\,{\phi}_{P^{\prime}}^{p}\big{)}\,(x_{3}-\bar{x}_{2})$ $\displaystyle\quad\qquad+\big{(}{\phi}_{P}^{p}\,{\phi}_{P^{\prime}}^{p}-{\phi}_{P}^{t}\,{\phi}_{P^{\prime}}^{t}\big{)}\,(2\,\bar{x}_{1}-\bar{x}_{2}-x_{3})\,$ $\displaystyle\quad\ -{\phi}_{{\eta}_{c}}^{t}\big{[}\frac{1}{2}\,{\phi}_{P}^{a}\,{\phi}_{P^{\prime}}^{a}\,+2\,{\phi}_{P}^{p}\,{\phi}_{P^{\prime}}^{p}\big{]}\big{\\}}_{b_{2}=b_{3}},$ $S_{{\eta}_{c}}\,=\,s(x_{1},p_{1}^{+},1/b_{1})+2\,{\int}_{1/b_{1}}^{t}\frac{d{\mu}}{{\mu}}{\gamma}_{q},$ (55) $S_{P}\,=\,s(x_{2},p_{2}^{+},1/b_{2})+s(\bar{x}_{2},p_{2}^{+},1/b_{2})+2\,{\int}_{1/b_{2}}^{t}\frac{d{\mu}}{{\mu}}{\gamma}_{q},$ (56) $S_{P^{\prime}}\,=\,s(x_{3},p_{3}^{-},1/b_{3})+s(\bar{x}_{3},p_{3}^{-},1/b_{3})+2\,{\int}_{1/b_{3}}^{t}\frac{d{\mu}}{{\mu}}{\gamma}_{q},$ (57) ${\alpha}_{g}\,=\,m_{{\eta}_{c}}^{2}\,\bar{x}_{2}\,x_{3},$ (58) ${\beta}_{a}\,=\,m_{{\eta}_{c}}^{2}\,\bar{x}_{2},$ (59) ${\beta}_{b}\,=\,m_{{\eta}_{c}}^{2}\,x_{3},$ (60) ${\beta}_{c}\,=\,{\alpha}_{g}-m_{{\eta}_{c}}^{2}\,x_{1}\,(\bar{x}_{2}+x_{3}),$ (61) 
${\beta}_{d}\,=\,{\alpha}_{g}-m_{{\eta}_{c}}^{2}\,\bar{x}_{1}\,(\bar{x}_{2}+x_{3}),$ (62) and other definitions can be found in Ref. 2101.00549 . ## References * (1) P. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020). * (2) http://english.ihep.cas.cn/bes/doc/2250.html. * (3) M. Ablikim et al. (BESIII Collaboration), Nucl. Instr. Meth. Phys. Res. A 614, 345 (2010). * (4) S. Okubo, Phys. Lett. 5, 165 (1963). * (5) G. Zweig, CERN-TH-401, 402, 412 (1964). * (6) J. Iizuka, Prog. Theor. Phys. Suppl. 37-38, 21 (1966). * (7) M. Ablikim et al. (BESIII Collaboration), Phys. Rev. D 84, 032006 (2011). * (8) M. Ablikim et al. (BESIII Collaboration), Eur. Phys. J. C 45, 337 (2006). * (9) G. Buchalla, A. Buras, M. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996). * (10) M. Beneke, G. Buchalla, M. Neubert, C. Sachrajda, Phys. Rev. Lett. 83, 1914 (1999). * (11) M. Beneke, G. Buchalla, M. Neubert, C. Sachrajda, Nucl. Phys. B 591, 313 (2000). * (12) M. Beneke, G. Buchalla, M. Neubert, C. Sachrajda, Nucl. Phys. B 606, 245 (2001). * (13) D. Du, D. Yang, G. Zhu, Phys. Lett. B 488, 46 (2000). * (14) D. Du, D. Yang, G. Zhu, Phys. Lett. B 509, 263 (2001). * (15) D. Du, D. Yang, G. Zhu, Phys. Rev. D 64, 014036 (2001). * (16) H. Li, H. Yu, Phys. Rev. Lett. 74, 4388 (1995). * (17) H. Li, Phys. Lett. B 348, 597 (1995). * (18) H. Li, Phys. Rev. D 52, 3958 (1995). * (19) Y. Keum, H. Li, Phys. Rev. D 63, 074006 (2001). * (20) Y. Keum, H. Li, A. Sanda, Phys. Rev. D 63, 054008 (2001). * (21) C. Lü, K. Ukai, M. Yang, Phys. Rev. D 63, 074009 (2001). * (22) H. Li, K. Ukai, Phys. Lett. B 555, 197 (2003). * (23) J. Sun, Z. Xiong, Y. Yang, G. Lu, Eur. Phys. J. C 73, 2437 (2013). * (24) Y. Yang, L. Lang, X. Zhao, J. Huang, J. Sun, arXiv:2012.10581. * (25) P. Ball, V. Braun, A. Lenz, JHEP 0605, 004 (2006). * (26) T. Kurimoto, H. Li, A. Sanda, Phys. Rev. D 65, 014007 (2001). * (27) D. Du, H. Gong, J. Sun, D. Yang, G. Zhu, Phys. Rev. D 65, 074001 (2002). * (28) D. Du, H. Gong, J.
Sun, D. Yang, G. Zhu, Phys. Rev. D 65, 094025 (2002). Erratum, Phys. Rev. D 66, 079904 (2002). * (29) J. Sun, G. Zhu, D. Du, Phys. Rev. D 68, 054003 (2003). * (30) M. Beneke, M. Neubert, Nucl. Phys. B 675, 333 (2003). * (31) M. Beneke, J. Rohrer, D. Yang, Nucl. Phys. B 774, 64 (2007). * (32) Q. Chang, J. Sun, Y. Yang, X. Li, Phys. Rev. D 90, 054019 (2014). * (33) Q. Chang, X. Hu, J. Sun, Y. Yang, Phys. Rev. D 91, 074026 (2015). * (34) Q. Chang, J. Sun, Y. Yang, X. Li, Phys. Lett. B 740, 56 (2015). * (35) J. Sun, Q. Chang, X. Hu, Y. Yang, Phys. Lett. B 743, 444 (2015). * (36) Y. Keum, H. Li, A. Sanda, Phys. Lett. B 504, 6 (2001). * (37) Y. Li, C. Lü, Z. Xiao, X. Yu, Phys. Rev. D 70, 034009 (2004). * (38) Z. Xiao, W. Wang, Y. Fan, Phys. Rev. D 85, 094003 (2012). * (39) A. Ali, G. Kramer, Y. Li et al., Phys. Rev. D 76, 074018 (2007). * (40) K. Wang, G. Zhu, Phys. Rev. D 88, 014043 (2013). * (41) Th. Feldmann, P. Kroll, B. Stech, Phys. Rev. D 58, 114006 (1998). * (42) J. Sun, Y. Yang, Q. Chang, G. Lu, Phys. Rev. D 89, 114019 (2014). * (43) D. Hatton, C. Davies, B. Galloway et al., Phys. Rev. D 102, 054511 (2020). * (44) Y. Yang, M. Duan, J. Lu, J. Huang, J. Sun, arXiv:2101.00549
# Determining the temperature in heavy-ion collisions with multiplicity distribution Yi-Dan Song Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai $200433$, China Rui Wang <EMAIL_ADDRESS> Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai $201800$, China Yu-Gang Ma <EMAIL_ADDRESS> Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai $200433$, China Xian-Gai Deng Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai $200433$, China Huan-Ling Liu Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai $201800$, China ###### Abstract By relating the charge multiplicity distribution and the temperature of a de-exciting nucleus through a deep neural network, we propose that the charge multiplicity distribution can be used as a thermometer of heavy-ion collisions. Based on an isospin-dependent quantum molecular dynamics model, we study the caloric curve of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ with the apparent temperature determined through the charge multiplicity distribution. The caloric curve shows a characteristic signature of the nuclear liquid-gas phase transition around the apparent temperature $T_{\rm ap}$ $=$ $6.4~{}\rm MeV$, which is consistent with that obtained through a traditional heavy-ion collision thermometer, and indicates the viability of determining the temperature in heavy-ion collisions with the multiplicity distribution. heavy-ion collision, temperature, machine learning, multiplicity ###### pacs: 24.10.Ai, 25.70.Gh, 25.70.Mn, 27.60.+j ## I Introduction Understanding the properties of nuclear matter is one of the major objectives in nuclear physics.
At zero temperature, the properties of nuclear matter have been studied extensively, and its equation of state (EOS), including its isospin dependence, i.e., the symmetry energy, has been determined relatively well Lattimer and Prakash (2007); Baldo and Burgio (2016); Roca-Maza and Paar (2018), while its properties at finite temperature remain much less explored. Among these properties, two noticeable examples are the nuclear liquid-gas phase transition Finn et al. (1982); Siemens (1983); Panagiotou et al. (1984); Pochodzalla et al. (1995); Ma et al. (1997); Ma (1999); Chomaz et al. (2000); Richert and Wagner (2001); Natowitz et al. (2002a, b); Ma et al. (2005); Ma and Ma (2018); Borderie and Frankland (2019) and the temperature dependence of the ratio of shear viscosity to entropy density ($\eta/s$) Auerbach and Shlomo (2009); Dang (2011); Fang et al. (2014); Deng et al. (2016); Mondal et al. (2017); Guo et al. (2017). The latter is also connected to the nuclear giant dipole resonance at finite temperature Bracco et al. (1989); Bortignon et al. (1991); Wieland et al. (2006), since both are related to the two-body dissipation of nucleons. The difficulties of studying the finite-temperature properties of nuclear matter mainly come from the preparation of a finite-temperature nuclear system, as well as the determination of its temperature. Heavy-ion collisions (HICs) at intermediate-to-low energies provide a possible avenue for investigating the finite-temperature properties of nuclear matter Wada et al. (1989). During the reaction, a transient excited system is formed, and it can commonly be regarded as a (near-)equilibrium state d’Enterria et al. (2001); Borderie et al. (1996), since the time scale of its constituent nucleons' motion is sufficiently short compared with the global evolution. Its temperature can be accessed by, e.g., energy spectra through moving-source fitting Wada et al. (1989), excited-state populations Chitwood et al. (1986); Schwarz et al.
(1993), (double)-isotope ratios Tsang et al. (1997); Serfling et al. (1998); Albergo et al. (1985), or quadrupole momentum fluctuations Wuenschel et al. (2010), etc. For a reliable thermometer of HICs, we require that it be insensitive to both collective effects and the secondary decay of unstable nuclei after the system disintegrates, which is commonly hard to achieve. Moreover, because it is difficult to examine the accuracy of the apparent temperature obtained through these thermometers, it is a valuable but non-trivial task to propose different ways of determining the apparent temperature, and thus provide more opportunities for cross-checks. Machine-learning techniques LeCun et al. (2015); Jordan and Mitchell (2015), which have been applied extensively in physics Carleo et al. (2019) due to their ability to recognize and characterize complex sets of data, provide an alternative and distinctive way of determining the apparent temperature of HICs at intermediate-to-low energies. Besides common uses like particle identification and tagging in experiments, machine-learning techniques have various novel applications in physics, e.g., solving the quantum many-body problem Carleo and Troyer (2017); Zhang et al. (2020), analyzing strong gravitational lenses Hezaveh et al. (2017), classifying phases of matter Carrasquilla and Melko (2017); van Nieuwenburg et al. (2017); Rodriguez-Nieva and Scheurer (2019); Rao (2020), extrapolating the cross sections of nuclear reactions Ma et al. (2020), instructing single-crystal growth Yao et al. (2019), and optimizing experimental control Wu et al. (2020). In a recent work studying the nuclear liquid-gas phase transition with machine-learning techniques Wang et al. (2020), it has been shown that machine-learning techniques can capture the essential features of HICs directly from the experimental final-state charge multiplicity distribution.
In the present letter, we propose that, with the help of machine-learning techniques, the charge multiplicity distribution can be applied to determine the apparent temperature of HICs at intermediate-to-low energies. Our basic methodology is as follows. We first obtain the multiplicities of fragments of an excited nuclear source with a given temperature based on a theoretical model, e.g., transport models Bertsch and Das Gupta (1988); Aichelin (1991), statistical models Bondorf et al. (1995); Charity et al. (1988) or their hybrids Gaitanos et al. (2009); Zhang et al. (2018); Ono (2019). We then relate the final-state charge multiplicity distribution to the temperature of the source via a deep neural network (DNN). This relation can be employed to determine the apparent temperature of a certain transient state during HICs at intermediate-to-low energies through the final-state charge multiplicity distribution. We use the above method to determine the apparent temperature of the fragmentation reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$. We further test the viability of the present method by comparing it with the momentum fluctuation thermometer Wuenschel et al. (2010), and by studying the characteristic signature of the caloric curve with the apparent temperature determined through the above method. Adapting the above method to realistic HIC analyses with experimental charge multiplicity distributions relies on the precise determination of the fragment multiplicities from the theoretical model. Nevertheless, as a viability test, in the present work, we employ an isospin-dependent quantum molecular dynamics (IQMD) model Ma et al. (2006) to simulate the de-excitation of the finite-temperature nuclear source, and do not require it to describe the experimental fragment multiplicities precisely.
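As a concrete illustration of the input representation used throughout, the final-state charge multiplicity distribution $M_{\rm c}(Z_{\rm cf})$ of one event can be built as a fixed-length histogram of fragment charges. This is a minimal sketch; the charge range $1$-$50$ is an assumption based on the total charge of the $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ system ($46+4$), and averaging over the events of a run would give the distribution fed to the network.

```python
import numpy as np

def charge_multiplicity(fragment_charges, z_max=50):
    """Histogram of fragment charge numbers Z_cf in [1, z_max] for one event."""
    m_c = np.zeros(z_max)
    for z in fragment_charges:
        if 1 <= z <= z_max:
            m_c[z - 1] += 1
    return m_c

def run_distribution(events, z_max=50):
    """Event-averaged M_c(Z_cf) over a run (e.g., 2,000 events)."""
    return np.mean([charge_multiplicity(ev, z_max) for ev in events], axis=0)

# e.g., one event with fragments of charge Z = 1, 1, 2, 6, 40
print(charge_multiplicity([1, 1, 2, 6, 40])[:6])  # counts in the bins Z = 1..6
```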
More accurate descriptions of the experimental fragment multiplicities can be achieved by, e.g., combining a certain transport model with a statistical model of multi-fragmentation Ono (2019), and this is beyond the scope of the present work. ## II Methodology In the present work, we focus on fragmentation reactions Sümmerer and Blank (2000); Song et al. (2018), i.e., central $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ collisions with incident energies ranging from 20 to 400$A$ MeV, and determine their apparent temperature through the final-state charge multiplicity distribution $M_{\rm c}(Z_{\rm cf})$, where $Z_{\rm cf}$ represents the charge number of the final charged fragments, ranging from $1$ to the charge number of the reaction system. In intermediate-to-low energy HICs, especially in fragmentation reactions, the two incident nuclei collide and then form a compound-like system. This compound-like system is regarded as a (near-)equilibrium system, and we can approximately mimic this transient state of the reaction by a nuclear source, i.e., a finite nucleus at a given temperature, with mass number $A$ and proton number $Z$ the same as those of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ Charity (2010). Such a finite-temperature nuclear source has been employed to study the finite-size scaling phenomenon Liu et al. (2019). We first simulate the evolution of the nuclear source $\isotope[112]{Sn}$ ($A=112$ and $Z=50$) at different initial temperatures $T^{\rm model}$, and obtain their $M_{\rm c}(Z_{\rm cf})$ based on the IQMD model. These simulations provide us with $M_{\rm c}(Z_{\rm cf})$ from a nuclear source with a given temperature, which is difficult to obtain directly from HICs. We then establish a relation between the source temperature $T^{\rm model}$ and $M_{\rm c}(Z_{\rm cf})$ through a DNN, which recasts the complex relation into a non-linear map through its neurons.
Based on this relation, the apparent temperature of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ can be determined through its final-state $M_{\rm c}(Z_{\rm cf})$. In Fig. 1, we show the basic procedure of the proposed method for determining the apparent temperature of the fragmentation reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ through the nuclear source $\isotope[112]{Sn}$. Figure 1: The basic procedure of determining the $T_{\rm ap}$ of $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ through its final-state $M_{\rm c}(Z_{\rm cf})$ with the DNN. ### II.1 IQMD model The IQMD model, a many-body theory describing the dynamics of HICs, can be derived from a time-dependent Hartree theory with Gaussian single-particle wave functions $\phi_{i}(\vec{r},t)$, $\phi_{i}(\vec{r},t)=\frac{1}{(2{\pi}L)^{3/4}}{\rm exp}\Big{[}-\frac{[\vec{r}-\vec{r}_{i}(t)]^{2}}{4L}+\frac{i\vec{p}_{i}(t)\cdot\vec{r}}{\hbar}\Big{]},$ (1) with the spatial center $\vec{r}_{i}(t)$ and momentum center $\vec{p}_{i}(t)$ as variational parameters. Other quantities can be obtained subsequently through $\vec{r}_{i}(t)$ and $\vec{p}_{i}(t)$. In the above equation, $L$ is the square of the width of the Gaussian wave packet and is set to 2.18 $\rm fm^{2}$. Through a product of the single-particle wave functions $\phi_{i}(\vec{r},t)$, we can obtain the system wave function, $\psi(\vec{r}_{1},...,\vec{r}_{A},t)=\prod_{i=1}^{A}\phi_{i}(\vec{r}_{i},t),$ (2) where $A$ is the mass number of the system. The potential energy $U$ in the IQMD model includes Skyrme, Yukawa, symmetry, momentum-dependent and Coulomb terms, $U=U_{\text{Sky}}+U_{\text{Yuk}}+U_{\text{sym}}+U_{\text{MDI}}+U_{\text{Coul}}.$ (3) Detailed descriptions of the IQMD model, including the potential and the equations of motion of $\vec{r}_{i}(t)$ and $\vec{p}_{i}(t)$, can be found in Refs. Hartnack et al. (1998); Ma et al. (2006). Numerous applications of IQMD have been made for different observables, and some recent ones can be found in Refs. Li et al. (2018); Feng (2018); Yan et al.
(2019); Yan and Li (2019); Yu et al. (2020). In the IQMD model, the initial nucleons are generated through the local density approximation. For a ground-state nucleus, we generate $A$ Gaussian wave packets, with their spatial centers $\vec{r}_{i}$ $(t=0)$ sampled randomly within a sphere of radius $r_{0}A^{1/3}$ with $r_{0}$ $=$ $1.12~{}\rm fm$, and their momentum centers $\vec{p}_{i}$ $(t=0)$ sampled following the zero-temperature Fermi-Dirac distribution. A finite nucleus at a given temperature $T$ can be generated by sampling $\vec{p}_{i}$ $(t=0)$ according to the finite-temperature Fermi-Dirac distribution Fang et al. (2014), which is given by, $\begin{split}f(\epsilon)=\frac{1}{\exp(\frac{\epsilon-\mu}{T})+1},\end{split}$ (4) where $\epsilon$ $=$ $\sqrt{p^{2}+m^{2}}$ $+$ $U$ is the single-particle energy, and $\mu$ is the chemical potential. Here $p$, $m$ and $U$ are the momentum, mass and potential energy, respectively. For simplicity, we have omitted the contribution from the momentum-dependent part in the above equation. ### II.2 Deep neural network Figure 2: (a) A single artificial neuron, with $n$ inputs labelled $x_{1}$ through $x_{n}$ and an output $y$. The output of the neuron is computed by applying the activation function $f(z)$ to the weighted input plus bias, i.e., z = $\textbf{W}\cdot\textbf{x}$ $+$ b. (b) The feed-forward neural network used in the present work, consisting of an input layer, an output layer and four hidden layers. The input of the network is the charge-weighted charge multiplicity distribution $ZM_{c}(Z_{cf})$. In the present work, we adopt a feed-forward DNN to establish the relation between the temperature $T^{\rm model}$ of the de-exciting nuclear source generated in IQMD and its final-state charge multiplicity distribution $M_{\rm c}(Z_{\rm cf})$. We treat $M_{\rm c}(Z_{\rm cf})$ as the input image, and its corresponding temperature as the label.
The DNN contains successively one input layer, several hidden layers, and one output layer. Each layer generates its output z through a matrix multiplication of its input x, i.e., z = $\textbf{W}\cdot\textbf{x}$ $+$ b. The elements of the matrix W are known as weights and those of the vector b as biases. In a normal fully-connected neural network these parameters are scalars. Each neuron is then followed by an activation function $f(\textbf{z})$, which turns the linear transform into a non-linear one. Commonly used activation functions are _sigmoid_ , _tanh_ , and _ReLU_ (rectified linear unit). $f(\textbf{z})$ is then used as the input of the next layer. In the present work, the neural network can be treated as a functional $T_{\rm ap}^{\rm DNN}=g\big{[}M_{\rm c}(Z_{\rm cf});\textbf{W},\textbf{b}\big{]},$ (5) which non-linearly relates a given input $M_{\rm c}(Z_{\rm cf})$, a vector with $50$ elements in our case, to an output predicted apparent temperature $T_{\rm ap}^{\rm DNN}$. We employ four hidden layers, each consisting of 32 artificial neurons. The input layer and the first three hidden layers are followed by _ReLU_ , while the last hidden layer and the output predicted $T_{\rm ap}^{\rm DNN}$ are connected directly by a matrix multiplication. The sketch of the DNN used in the present work can be seen in Fig. 2. We train the network on a data set $\big{\\{}M_{\rm c}(Z_{\rm cf}),T^{\rm model}\big{\\}}$ to minimize the cost function, i.e., the squared difference between the given temperature and the DNN prediction, $(T^{\rm model}-T_{\rm ap}^{\rm DNN})^{2}$, by adjusting the parameters W and b. The optimization is performed with the $Adam$ optimizer Kingma and Ba (2017) in _Tensorflow_. During training, we use an exponentially decreasing learning rate $\alpha$ $=$ $10^{-6}$ $+$ $(10^{-3}-10^{-6})\exp(-i/10000)$, with $i$ the training epoch; the network is trained for 10,000 epochs.
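The setup just described can be sketched end-to-end. The snippet below is a minimal numpy stand-in for the TensorFlow implementation: four hidden layers of 32 ReLU neurons, a linear output for $T_{\rm ap}^{\rm DNN}$, the squared-error cost and the exponential learning-rate schedule from the text. Plain gradient descent replaces Adam, and the regularization term is omitted for brevity, so this is an illustrative approximation rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_dnn(n_in=50, hidden=(32, 32, 32, 32)):
    """He-initialized weights and zero biases for the 50 -> 4x32 -> 1 network."""
    sizes = (n_in,) + hidden + (1,)
    return [(rng.normal(0.0, np.sqrt(2.0 / a), (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU on the hidden layers; the linear output is the predicted T_ap."""
    acts = [x]
    for W, b in params[:-1]:
        acts.append(np.maximum(acts[-1] @ W + b, 0.0))
    W, b = params[-1]
    return acts[-1] @ W + b, acts

def train(params, X, T, epochs=500):
    """Minimize mean (T^model - T_ap^DNN)^2 with the schedule from the text."""
    for i in range(epochs):
        lr = 1e-6 + (1e-3 - 1e-6) * np.exp(-i / 10000.0)
        pred, acts = forward(params, X)
        grad = 2.0 * (pred - T[:, None]) / len(X)   # d(cost)/d(output)
        for k in range(len(params) - 1, -1, -1):
            W, b = params[k]
            gW, gb = acts[k].T @ grad, grad.sum(axis=0)
            if k > 0:                               # backprop through ReLU
                grad = (grad @ W.T) * (acts[k] > 0)
            params[k] = (W - lr * gW, b - lr * gb)
    return params
```

On a toy regression task this reduces the squared error within a few hundred epochs; in the paper, the input is the (charge-weighted) multiplicity distribution and the label is $T^{\rm model}$.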
To prevent the network from overfitting the data set, we include a standard $l_{2}$ regularization term, i.e., a term proportional to the norm of the weights W and biases b, $l_{2}(\|\textbf{W}\|^{2}/2+\|\textbf{b}\|^{2}/2)$, with $l_{2}$ a positive number, in the cost function of the neural network. The $l_{2}$ regularization prevents the weights and biases from growing to arbitrarily large values during the optimization. ## III Results and discussion ### III.1 Apparent temperature Figure 3: The charge distribution of fragments from the excited nuclear source $\isotope[112]{Sn}$ at several $T^{\rm model}$. For each $T^{\rm model}$, we show the simulated data of one run consisting of 2,000 events from the IQMD model. Based on the IQMD model, we simulate the de-excitation process of a $\isotope[112]{Sn}$ nucleus at different $T^{\rm model}$. The $\isotope[112]{Sn}$ nucleus has the same mass and charge numbers as the $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ reaction system, and it can be regarded as a nuclear source mimicking the transient excited state of the reaction. We perform $50$ runs for each $T^{\rm model}$, with each run consisting of $2,000$ events. $T^{\rm model}$ ranges from $0$ to $20~{}\rm MeV$ in $1~{}\rm MeV$ steps. Fig. 3 displays the charge distributions of fragments from the hot nuclear source $\isotope[112]{Sn}$ at several $T^{\rm model}$, with the data sample of one run ($2,000$ events) at each temperature. These fragment charge distributions exhibit the typical evolution of the disassembly mechanism of hot nuclei with temperature Ma et al. (2005); Ma (1999), i.e., from the evaporation mechanism at lower temperatures (e.g. $T^{\rm model}$ = 2 MeV), to multifragmentation at medium temperatures (e.g. $T^{\rm model}$ = 8 MeV), to vaporization at higher temperatures (e.g. $T^{\rm model}$ = 17 MeV). According to the shapes of these charge distributions versus temperature, a nuclear liquid-gas phase transition should begin to occur at a certain moderate temperature for this system Ma et al.
(2005); Liu et al. (2019); Wang et al. (2020); Wada et al. (2019). We can train the DNN once we obtain the above-mentioned charge multiplicity distributions $M_{\rm c}(Z_{\rm cf})$. In the present work, the total $50\times 21$ charge multiplicity distributions $M_{\rm c}(Z_{\rm cf})$ and their corresponding $T^{\rm model}$ are treated as the images and labels, respectively, and they form the data set $\big{\\{}M_{\rm c}(Z_{\rm cf}),T^{\rm model}\big{\\}}$ of the DNN. The data set is further divided into a training set and a testing set, each containing half of the total data set. After training the DNN with the training set, i.e., determining the parameters W and b in Eq. (5) that best reproduce the given $T^{\rm model}$, one can predict the apparent temperature for a given $M_{\rm c}(Z_{\rm cf})$ through Eq. (5). We show in Fig. 4 the histogram of the DNN's prediction error $\sigma_{\rm T}$, i.e., the difference between the original $T^{\rm model}$ and its DNN prediction $T_{\rm ap}^{\rm DNN}$, for the testing set. The standard error of the DNN prediction is about $0.62~{}\rm MeV$, which is small enough for further analyses based on the apparent temperature obtained in this way. Figure 4: The histogram of errors $\sigma_{\rm T}$ between the predicted apparent temperature from the DNN, $T_{\rm ap}^{\rm DNN}$, and the original $T^{\rm model}$ given in the IQMD simulations for the excited nuclear source (finite-temperature $\isotope[112]{Sn}$). The red line is a Gaussian fit to the histogram. The inset shows the testing accuracy $\langle\sigma_{\rm T}^{2}\rangle$ for different charge multiplicity distributions. $X$ represents different ranges of the charge multiplicity distribution. In the inset of Fig. 4, we show the dependence of the testing accuracy on the range of the charge multiplicity distribution, i.e., only the charge multiplicity distributions within the given range (represented by $X$ in the inset) are used in training and testing the DNN.
We notice from the inset that the light fragments, i.e., $Z\in[1,5]$, play the major role in determining the apparent temperature, while including heavier fragments does help to increase the accuracy. From another perspective, this feature actually indicates the superiority of the present method over the traditional isotopic-ratio thermometer, since the latter takes only the information of light fragments into consideration. At present, we only use the charge multiplicity distribution to predict the apparent temperature with the DNN. In principle, information in momentum space can be included in the present method for better accuracy. After establishing the relation between the apparent temperature and $M_{\rm c}(Z_{\rm cf})$ by training the DNN, we turn to determining the apparent temperature of $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ fragmentation reactions. We first examine the reaction dynamics of $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ within the IQMD model. We simulate the reactions $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ with incident energies $E_{\rm lab}$ ranging from 20$A$ MeV to 400$A$ MeV, and for each incident energy we employ $2,000$ events. As the incident energy increases, the reaction becomes more violent, and the apparent temperature of the reaction increases. In a fragmentation reaction, the projectile and target nuclei initially form a compound-like system after colliding with each other. In the early stage of the reaction, only a small number of nucleons evaporate or are ejected, so the mass of the compound-like system approximately equals the sum of the projectile and target masses. We exhibit in Fig. 5 the time evolution of the central density of the heaviest fragment formed in the $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ reaction at different incident energies from the IQMD model.
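As a consistency check of the density extracted from the IQMD wave packets, note that each Gaussian packet of Eq. (1) contributes a normalized single-particle density $(2{\pi}L)^{-3/2}\exp[-(\vec{r}-\vec{r}_{i})^{2}/(2L)]$ with $L$ = 2.18 fm$^{2}$. The sketch below sums these on a grid and verifies that every packet integrates to one nucleon; the packet centers here are illustrative, not IQMD output.

```python
import numpy as np

L = 2.18  # squared width of the Gaussian wave packet (fm^2)

def density(r_grid, centers):
    """rho(r) = sum_i (2 pi L)^{-3/2} exp(-(r - r_i)^2 / (2L)); r_grid: (N, 3)."""
    rho = np.zeros(len(r_grid))
    for r_i in centers:
        d2 = ((r_grid - r_i) ** 2).sum(axis=1)
        rho += (2.0 * np.pi * L) ** -1.5 * np.exp(-d2 / (2.0 * L))
    return rho

# Cartesian grid (fm) and two illustrative packet centers
ax = np.arange(-10.0, 10.0 + 1e-9, 0.5)
grid = np.array(np.meshgrid(ax, ax, ax)).reshape(3, -1).T
centers = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
total = density(grid, centers).sum() * 0.5 ** 3
print(total)  # ~2.0: each wave packet carries one nucleon
```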
The density in the IQMD model is obtained as the sum of single-particle densities, $\rho(\vec{r})=\sum^{A}_{i=1}\rho_{i}=\sum^{A}_{i=1}\frac{1}{(2{\pi}L)^{3/2}}\exp\left\{-\frac{\left[\vec{r}-\vec{r}_{i}(t)\right]^{2}}{2L}\right\}.$ (6) We notice from the figure that the central density of the heaviest fragment exhibits some oscillations at the beginning. This reflects the breathing mode caused by the initial compression of the system, since before the compound-like system disassembles, the heaviest fragment is the compound-like system itself. The black dashed line represents $\rho$ $=$ $0.156~{}\rm fm^{-3}$, which is the initial central density of the finite-temperature nuclear source ([112]Sn) generated for training the DNN. Therefore, the initial compound-like system, with a density around the nuclear saturation density, can be mimicked reasonably by an excited nuclear source at a given temperature. Figure 5: Time evolution of the central density of the heaviest fragment in the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ from the IQMD simulation. Different lines denote the results for different incident energies from 20$A$ MeV to 400$A$ MeV. The black dashed line represents the initial central density of the nuclear source [112]Sn. With the final-state charge multiplicity distribution $M_{\rm c}(Z_{\rm cf})$ of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ simulated with the IQMD, we obtain the apparent temperature through the trained DNN in Eq. (5). We plot the obtained apparent temperature $T_{\rm ap}$ of the reaction [103]Pd $+$ [9]Be at different incident energies $E_{\rm lab}$ in Fig. 6. Since the finite-temperature nuclear source generated to train the DNN is initialized at around the nuclear saturation density, $T_{\rm ap}$ obtained through the present method actually reflects the apparent temperature of the early stage of the compound system in the reaction [103]Pd $+$ [9]Be. In Fig.
6, the black symbols are the $T_{\rm ap}$ of the compound system at the early stage of the reaction predicted by the DNN. We further test the effects of imperfect acceptance and efficiency on the obtained $T_{\rm ap}$ by applying an acceptance cut and an efficiency cut, respectively, in the IQMD simulations. It turns out that these effects are negligible, which indicates the robustness of the present method against imperfect experimental acceptance and efficiency. Figure 6: The apparent temperature $T_{\rm ap}$ of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ at different $E_{\rm lab}$, predicted by the DNN (black dots) and the MFT (red dots). The solid lines are fits to guide the eye. In order to provide a cross-check of the present method, we employ a momentum fluctuation thermometer (MFT) Wuenschel et al. (2010) to determine the apparent temperature of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ simulated by the IQMD. In the MFT, the momentum distribution of a certain species of light fragments (deuterons in the present work) is assumed to be Maxwellian, and the temperature of the system is related to the variance $\sigma^{2}$ of its quadrupole moment through $\sigma^{2}=4A^{2}m^{2}_{0}T^{2},$ (7) where $m_{0}$ is the mass of a nucleon and $A$ is the mass number of the fragment. We add in Fig. 6 the apparent temperature predicted by the MFT as red dots. We notice that the $T_{\rm ap}$ values of the two methods are very close, which increases the reliability of the extracted apparent temperature. ### III.2 Caloric curve Figure 7: Caloric curve of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$. Black open squares represent the result based on the IQMD model, with $T_{\rm ap}$ determined by the DNN using $M_{\rm c}(Z_{\rm cf})$. The blue dashed line is its polynomial fit. The inset shows the specific heat capacity $\tilde{c}$ derived from the fitted formula.
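Inverting Eq. (7) gives $T=\sigma/(2Am_{0})$: for a Maxwellian source each momentum component of a fragment of mass $m=Am_{0}$ is Gaussian with variance $mT$, so the quadrupole moment $Q=p_{x}^{2}-p_{y}^{2}$ has variance $4A^{2}m_{0}^{2}T^{2}$. A minimal numerical check under that Maxwellian assumption (the sampled momenta and the source temperature below are illustrative, not IQMD output):

```python
import numpy as np

rng = np.random.default_rng(1)

m0 = 938.0           # nucleon mass (MeV/c^2)
A = 2                # deuteron, as used in the text
T_true = 6.0         # illustrative source temperature (MeV)

m = A * m0
n = 200_000
# Maxwellian (nonrelativistic) momentum components: Gaussian, variance m*T.
px = rng.normal(0.0, np.sqrt(m * T_true), n)
py = rng.normal(0.0, np.sqrt(m * T_true), n)

Q = px**2 - py**2                     # quadrupole moment per fragment
T_ap = np.sqrt(Q.var()) / (2.0 * m)   # inverted Eq. (7): T = sigma/(2 A m0)
print(f"extracted T_ap = {T_ap:.2f} MeV")
```

With $2\times 10^{5}$ samples the extracted temperature reproduces the input one to within about a percent, confirming the variance relation of Eq. (7).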
The caloric curve, i.e., the apparent temperature as a function of the excitation energy per nucleon $E^{*}/A$ of HICs, has been considered an important probe of the existence of the nuclear liquid-gas phase transition Pochodzalla et al. (1995); Natowitz et al. (2002a, b); Wada et al. (2019). In order to examine the validity of $T_{\rm ap}$ determined by the DNN using the charge multiplicity distribution $M_{\rm c}(Z_{\rm cf})$, we study the caloric curve of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$. In the IQMD model, the excitation energy of the compound system at the early stage of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ can be obtained by $E^{*}=U+E_{\text{k}}-E_{\text{0}},$ (8) where $U$, $E_{\text{k}}$ and $E_{\text{0}}$ are the potential energy, kinetic energy and experimental binding energy Wang et al. (2017), respectively. To properly account for the energy deposited in the system, the kinetic energy of emitted or evaporated nucleons should be counted when calculating the excitation energy. We exhibit in Fig. 7 the caloric curve of the $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ reaction from the IQMD simulation, with the apparent temperature determined by the DNN using $M_{\rm c}(Z_{\rm cf})$ of the reaction. As shown in the figure, the increase of $T_{\rm ap}$ slows down when $E^{*}/A$ reaches about $8~{}\rm MeV$. Traditionally, this characteristic behavior of the caloric curve is explained as follows: as the excitation energy increases, the system is driven into a spinodal region, in which part of the excitation energy begins to transform into latent heat. To characterize this feature of the caloric curve quantitatively, the specific heat capacity of the system Chomaz et al. (2000) is defined as $\tilde{c}\equiv\frac{d(E^{*}/A)}{dT_{\rm ap}}.$ (9) Note that it is different from $c_{p}$ or $c_{v}$ because it is not practicable to maintain constant pressure or volume during the reaction.
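The extraction of $\tilde{c}$ from a fitted caloric curve can be sketched as follows. The parametrized curve below is an illustrative stand-in for the IQMD points in Fig. 7, constructed so that its slope is smallest near $E^{*}/A=8$ MeV; the coefficients are assumptions, not fitted values.

```python
import numpy as np

# Illustrative caloric curve T_ap(E*/A): a cubic whose slope
# dT/dE = 0.2 + 0.01*(E - 8)^2 is minimal at E*/A = 8 MeV.
E = np.linspace(1.0, 14.0, 40)                      # E*/A grid (MeV)
T = 2.0 + 0.2 * E + 0.01 * ((E - 8.0) ** 3 + 512.0) / 3.0

# Polynomial fit of T_ap(E*/A), as done for the blue dashed line in Fig. 7,
# then c~ = d(E*/A)/dT_ap = 1 / (dT_ap / d(E*/A))  (Eq. (9)).
coeff = np.polyfit(E, T, 5)
dT_dE = np.polyval(np.polyder(coeff), E)
c_tilde = 1.0 / dT_dE

# The maximum of c~ defines the limiting temperature T_lim.
i_max = int(np.argmax(c_tilde))
T_lim = np.polyval(coeff, E[i_max])
print(f"T_lim = {T_lim:.2f} MeV at E*/A = {E[i_max]:.1f} MeV")
```

For this toy curve the maximum of $\tilde{c}$ sits at $E^{*}/A=8$ MeV by construction; applied to the actual IQMD caloric curve, the same procedure yields $T_{\rm lim}\approx 6.4$ MeV as quoted below.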
The apparent temperature corresponding to the maximum of $\tilde{c}$ is called the limiting temperature $T_{\rm lim}$, which can be used to deduce the critical temperature of nuclear matter Natowitz et al. (2002b). We further obtain $\tilde{c}$ through a polynomial fit (blue dashed line in Fig. 7) of the obtained caloric curve, as shown in the inset of Fig. 7. Based on $T_{\rm ap}$ obtained by the DNN from $M_{\rm c}(Z_{\rm cf})$, the obtained $T_{\rm lim}$ of the reaction $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ is about $6.4~{}\rm MeV$. This value of $T_{\rm lim}$ follows the general trend of Natowitz's limiting-temperature dependence on system size Natowitz et al. (2002a), and thus indicates the validity of determining the apparent temperature through the charge multiplicity distribution presented in this article. ## IV Summary and outlook In the present work, we have examined the possibility of determining the apparent temperature $T_{\rm ap}$ of HICs at intermediate-to-low energies through their final-state charge multiplicity distribution $M_{\rm c}(Z_{\rm cf})$. Based on the IQMD simulations of de-exciting nuclear sources ([112]Sn) at given temperatures, we have established a relation between the final-state $M_{\rm c}(Z_{\rm cf})$ of a nuclear source and its corresponding temperature by training a DNN. The trained DNN can predict the apparent temperature within an error of $0.62~{}\rm MeV$, which is small enough for applying it to the analysis of the reaction dynamics. We have then employed the above method to obtain the apparent temperature of the $\isotope[103]{Pd}$ $+$ $\isotope[9]{Be}$ reactions at different incident energies simulated by the IQMD, and subsequently the caloric curve. The caloric curve shows a characteristic behavior at $T_{\rm lim}$ $=$ $6.4~{}\rm MeV$, which follows the general trend of the limiting temperature's dependence on system size and indicates the nuclear liquid-gas phase transition of the given system.
The present method provides an alternative way to determine the apparent temperature of HICs at intermediate-to-low energies, and it can be used as a supplement to the traditional nuclear thermometers. Applying the present method to the analysis of experimental data relies on an accurate description of the fragment multiplicities by the dynamical model. One possible way to achieve this goal is to use a transport model combined with a statistical multifragmentation process. Studies along this line are in progress. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ###### Acknowledgements. This work is partially supported by the National Natural Science Foundation of China under Contracts No. $11890710$, No. $11890714$, and No. $11625521$, the Key Research Program of Frontier Sciences of the CAS under Grant No. QYZDJ-SSW-SLH$002$, the Strategic Priority Research Program of the CAS under Grant No. XDB34000000, the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, and the Postdoctoral Innovative Talent Program of China under Grant No. BX$20200098$. ## References * Lattimer and Prakash (2007) J. Lattimer and M. Prakash, Phys. Rep. 442, 109 (2007). * Baldo and Burgio (2016) M. Baldo and G. Burgio, Prog. Part. Nucl. Phys. 91, 203 (2016). * Roca-Maza and Paar (2018) X. Roca-Maza and N. Paar, Prog. Part. Nucl. Phys. 101, 96 (2018). * Finn et al. (1982) J. E. Finn, S. Agarwal, A. Bujak, J. Chuang, L. J. Gutay, A. S. Hirsch, R. W. Minich, N. T. Porile, R. P. Scharenberg, B. C. Stringfellow, et al., Phys. Rev. Lett. 49, 1321 (1982). * Siemens (1983) P. J. Siemens, Nature 305, 410 (1983). * Panagiotou et al. (1984) A. D. Panagiotou, M. W. Curtin, H. Toki, D. K. Scott, and P. J. Siemens, Phys. Rev. Lett. 52, 496 (1984). * Pochodzalla et al. (1995) J. Pochodzalla, T. Möhlenkamp, T. Rubehn, A.
Schüttauf, A. Wörner, E. Zude, M. Begemann-Blaich, T. Blaich, H. Emling, A. Ferrero, et al., Phys. Rev. Lett. 75, 1040 (1995). * Ma et al. (1997) Y. G. Ma, A. Siwek, J. Péter, F. Gulminelli, R. Dayras, L. Nalpas, B. Tamain, E. Vient, G. Auger, C. O. Bacri, et al., Phys. Lett. B 390, 41 (1997). * Ma (1999) Y. G. Ma, Phys. Rev. Lett. 83, 3617 (1999). * Chomaz et al. (2000) P. Chomaz, V. Duflot, and F. Gulminelli, Phys. Rev. Lett. 85, 3587 (2000). * Richert and Wagner (2001) J. Richert and P. Wagner, Phys. Rep. 350, 1 (2001). * Natowitz et al. (2002a) J. B. Natowitz, R. Wada, K. Hagel, T. Keutgen, M. Murray, A. Makeev, L. Qin, P. Smith, and C. Hamilton, Phys. Rev. C 65, 034618 (2002a). * Natowitz et al. (2002b) J. B. Natowitz, K. Hagel, Y. Ma, M. Murray, L. Qin, R. Wada, and J. Wang, Phys. Rev. Lett. 89, 212701 (2002b). * Ma et al. (2005) Y. G. Ma, J. B. Natowitz, R. Wada, K. Hagel, J. Wang, T. Keutgen, Z. Majka, M. Murray, L. Qin, P. Smith, et al., Phys. Rev. C 71, 054606 (2005). * Ma and Ma (2018) C.-W. Ma and Y.-G. Ma, Prog. Part. Nucl. Phys. 99, 120 (2018). * Borderie and Frankland (2019) B. Borderie and J. Frankland, Prog. Part. Nucl. Phys. 105, 82 (2019). * Auerbach and Shlomo (2009) N. Auerbach and S. Shlomo, Phys. Rev. Lett. 103, 172501 (2009). * Dang (2011) N. D. Dang, Phys. Rev. C 84, 034309 (2011). * Fang et al. (2014) D. Q. Fang, Y. G. Ma, and C. L. Zhou, Phys. Rev. C 89, 047601 (2014). * Deng et al. (2016) X. G. Deng, Y. G. Ma, and M. Veselský, Phys. Rev. C 94, 044622 (2016). * Mondal et al. (2017) D. Mondal, D. Pandit, S. Mukhopadhyay, S. Pal, B. Dey, S. Bhattacharya, A. De, S. Bhattacharya, S. Bhattacharyya, P. Roy, et al., Phys. Rev. Lett. 118, 192501 (2017). * Guo et al. (2017) C. Q. Guo, Y. G. Ma, W. B. He, X. G. Cao, D. Q. Fang, X. G. Deng, and C. L. Zhou, Phys. Rev. C 95, 054622 (2017). * Bracco et al. (1989) A. Bracco, J. J. Gaardhøje, A. M. Bruce, J. D. Garrett, B. Herskind, M. Pignanelli, D. Barnéoud, H. Nifenecker, J. A. Pinston, C. 
Ristori, et al., Phys. Rev. Lett. 62, 2080 (1989). * Bortignon et al. (1991) P. F. Bortignon, A. Bracco, D. Brink, and R. A. Broglia, Phys. Rev. Lett. 67, 3360 (1991). * Wieland et al. (2006) O. Wieland, A. Bracco, F. Camera, G. Benzoni, N. Blasi, S. Brambilla, F. Crespi, A. Giussani, S. Leoni, P. Mason, et al., Phys. Rev. Lett. 97, 012501 (2006). * Wada et al. (1989) R. Wada, D. Fabris, K. Hagel, G. Nebbia, Y. Lou, M. Gonin, J. B. Natowitz, R. Billerey, B. Cheynis, A. Demeyer, et al., Phys. Rev. C 39, 497 (1989). * d’Enterria et al. (2001) D. G. d’Enterria, L. Aphecetche, A. Chbihi, H. Delagrange, J. Díaz, M. J. van Goethem, M. Hoefman, A. Kugler, H. Löhner, G. Martínez, et al., Phys. Rev. Lett. 87, 022701 (2001). * Borderie et al. (1996) B. Borderie, D. Durand, F. Gulminelli, M. Parlog, M. F. Rivet, L. Tassan-Got, G. Auger, C. O. Bacri, J. Benlliure, E. Bisquer, et al., Phys. Lett. B 388, 224 (1996). * Chitwood et al. (1986) C. B. Chitwood, C. K. Gelbke, J. Pochodzalla, Z. Chen, D. J. Fields, W. G. Lynch, R. Morse, M. B. Tsang, D. H. Boal, and J. C. Shillcock, Phys. Lett. B 172, 27 (1986). * Schwarz et al. (1993) C. Schwarz, W. G. Gong, N. Carlin, C. K. Gelbke, Y. D. Kim, W. G. Lynch, T. Murakami, G. Poggi, R. T. de Souza, M. B. Tsang, et al., Phys. Rev. C 48, 676 (1993). * Tsang et al. (1997) M. B. Tsang, W. G. Lynch, H. Xi, and W. A. Friedman, Phys. Rev. Lett. 78, 3836 (1997). * Serfling et al. (1998) V. Serfling, C. Schwarz, R. Bassini, M. Begemann-Blaich, S. Fritz, S. J. Gaff, C. Groß, G. Immé, I. Iori, U. Kleinevoß, et al., Phys. Rev. Lett. 80, 3928 (1998). * Albergo et al. (1985) S. Albergo, S. Costa, E. Costanzo, and A. Rubbino, IL Nuov. Cim. A 89, 1 (1985). * Wuenschel et al. (2010) S. Wuenschel, A. Bonasera, L. W. May, G. A. Souliotis, R. Tripathi, S. Galanopoulos, Z. Kohley, K. Hagel, D. V. Shetty, K. Huseman, et al., Nucl. Phys. A 843, 1 (2010). * LeCun et al. (2015) Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015). 
* Jordan and Mitchell (2015) M. I. Jordan and T. M. Mitchell, Science 349, 255 (2015). * Carleo et al. (2019) G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Rev. Mod. Phys. 91, 045002 (2019). * Carleo and Troyer (2017) G. Carleo and M. Troyer, Science 355, 602 (2017). * Zhang et al. (2020) Z.-W. Zhang, S. Yang, Y.-H. Wu, C.-X. Liu, Y.-M. Han, C.-H. Lee, Z. Sun, G.-J. Li, and X. Zhang, Chin. Phys. Lett. 37, 018401 (2020). * Hezaveh et al. (2017) Y. D. Hezaveh, L. P. Levasseur, and P. J. Marshall, Nature 548, 555 (2017). * Carrasquilla and Melko (2017) J. Carrasquilla and R. G. Melko, Nat. Phys. 13, 431 (2017). * van Nieuwenburg et al. (2017) E. P. L. van Nieuwenburg, Y.-H. Liu, and S. D. Huber, Nat. Phys. 13, 435 (2017). * Rodriguez-Nieva and Scheurer (2019) J. F. Rodriguez-Nieva and M. S. Scheurer, Nat. Phys. 15, 790 (2019). * Rao (2020) W.-J. Rao, Chin. Phys. Lett. 37, 080501 (2020). * Ma et al. (2020) C.-W. Ma, D. Peng, H.-L. Wei, Z.-M. Niu, Y.-T. Wang, and R. Wada, Chin. Phys. C 44, 014104 (2020). * Yao et al. (2019) T.-S. Yao, C.-Y. Tang, M. Yang, K.-J. Zhu, D.-Y. Yan, C.-J. Yi, Z.-L. Feng, H.-C. Lei, C.-H. Li, L. Wang, et al., Chin. Phys. Lett. 36, 068101 (2019). * Wu et al. (2020) Y. Wu, Z. Meng, K. Wen, C. Mi, J. Zhang, and H. Zhai, Chin. Phys. Lett. 37, 103201 (2020). * Wang et al. (2020) R. Wang, Y.-G. Ma, R. Wada, L.-W. Chen, W.-B. He, H.-L. Liu, and K.-J. Sun, Phys. Rev. Research 2, 043202 (2020). * Bertsch and Das Gupta (1988) G. Bertsch and S. Das Gupta, Phys. Rep. 160, 189 (1988). * Aichelin (1991) J. Aichelin, Phys. Rep. 202, 233 (1991). * Bondorf et al. (1995) J. P. Bondorf, A. Botvina, A. Iljinov, I. Mishustin, and K. Sneppen, Phys. Rep. 257, 133 (1995). * Charity et al. (1988) R. Charity, M. McMahan, G. Wozniak, R. McDonald, L. Moretto, D. Sarantites, L. Sobotka, G. Guarino, A. Pantaleo, L. Fiore, et al., Nucl. Phys. A 483, 371 (1988). * Gaitanos et al. (2009) T. Gaitanos, H. Lenske, and U. 
Mosel, Prog. Part. Nucl. Phys. 62, 439 (2009). * Zhang et al. (2018) Z.-F. Zhang, D.-Q. Fang, and Y.-G. Ma, Nucl. Sci. Tech. 29, 78 (2018). * Ono (2019) A. Ono, Prog. Part. Nucl. Phys. 105, 139 (2019). * Ma et al. (2006) Y. G. Ma, Y. B. Wei, W. Q. Shen, X. Z. Cai, J. G. Chen, J. H. Chen, D. Q. Fang, W. Guo, C. W. Ma, G. L. Ma, et al., Phys. Rev. C 73, 014604 (2006). * Sümmerer and Blank (2000) K. Sümmerer and B. Blank, Phys. Rev. C 61, 034607 (2000). * Song et al. (2018) Y.-D. Song, H.-L. Wei, C.-W. Ma, and J.-H. Chen, Nucl. Sci. Tech. 29, 96 (2018). * Charity (2010) R. J. Charity, Phys. Rev. C 82, 014610 (2010). * Liu et al. (2019) H. L. Liu, Y. G. Ma, and D. Q. Fang, Phys. Rev. C 99, 054614 (2019). * Hartnack et al. (1998) C. Hartnack, R. K. Puri, J. Aichelin, J. Konopka, S. Bass, H. Stöcker, and W. Greiner, Eur. Phys. J. A 1, 151 (1998). * Li et al. (2018) P.-C. Li, Y.-J. Wang, Q.-F. Li, and H.-F. Zhang, Nucl. Sci. Tech. 29, 177 (2018). * Feng (2018) Z.-Q. Feng, Nucl. Sci. Tech. 29, 40 (2018). * Yan et al. (2019) T.-Z. Yan, S. Li, Y.-N. Wang, F. Xie, and T.-F. Yan, Nucl. Sci. Tech. 30, 15 (2019). * Yan and Li (2019) T.-Z. Yan and S. Li, Nucl. Sci. Tech. 30, 43 (2019). * Yu et al. (2020) H. Yu, D.-Q. Fang, and Y.-G. Ma, Nucl. Sci. Tech. 31, 61 (2020). * Kingma and Ba (2017) D. P. Kingma and J. Ba, ArXiv14126980 Cs (2017), eprint 1412.6980. * Wada et al. (2019) R. Wada, W. Lin, P. Ren, H. Zheng, X. Liu, M. Huang, K. Yang, and K. Hagel, Phys. Rev. C 99, 024616 (2019). * Wang et al. (2017) M. Wang, G. Audi, F. G. Kondev, W. J. Huang, S. Naimi, and X. Xu, Chin. Phys. C 41, 030003 (2017).
# Strong decays of the newly observed $P_{cs}(4459)$ as a strange hidden-charm $\Xi_{c}\bar{D}^{*}$ molecule Rui Chen1,2<EMAIL_ADDRESS>1Center of High Energy Physics, Peking University, Beijing 100871, China 2 School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China ###### Abstract In our former work [arXiv:2011.07214], the $P_{cs}(4459)$ observed by the LHCb Collaboration was explained as a coupled strange hidden-charm $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. Here, we further discuss the two-body strong decay behaviors of the $P_{cs}(4459)$ in the meson-baryon molecular scenario by inputting the previously obtained bound-state solutions. Our results support the $P_{cs}(4459)$ as a strange hidden-charm $\Xi_{c}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. The relative decay ratio between the $\Lambda_{c}D_{s}^{*}$ and $J/\psi\Lambda$ channels is around 10, where the partial decay width for the $\Lambda_{c}D_{s}^{*}$ channel is around 0.6 to 2.0 MeV. ###### pacs: 12.39.Pn, 14.40.Lb, 14.20.Pt, 13.30.Eg ## I Introduction In 2019, the LHCb Collaboration discovered three narrow hidden-charm pentaquarks, namely $P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$, using the combined data set collected in Run 1 plus Run 2 Aaij:2019vzc . Since these three $P_{c}$ states lie just below the $\Sigma_{c}\bar{D}^{(*)}$ continuum thresholds, they are very likely to be $\Sigma_{c}\bar{D}^{(*)}$ hidden-charm molecular pentaquarks. Several phenomenological models have been adopted to calculate the mass spectrum of meson-baryon hidden-charm molecules, such as QCD sum rules, meson-exchange models, quark delocalization models, and so on (see the review papers Chen:2016qju ; Liu:2019zoy ; Brambilla:2019esw ; Guo:2017jvc ; Esposito:2016noz ; Hosaka:2016pey for more details).
In particular, by adopting the one-boson-exchange (OBE) model and considering the coupled-channel effect, we have demonstrated that the $P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$ correspond to the loosely bound $\Sigma_{c}\bar{D}$ state with $I(J^{P})=1/2(1/2^{-})$, the $\Sigma_{c}\bar{D}^{*}$ state with $I(J^{P})=1/2(1/2^{-})$, and the $\Sigma_{c}\bar{D}^{*}$ state with $I(J^{P})=1/2(3/2^{-})$, respectively Chen:2019asm . The coupled-channel effect plays an important role in generating hidden-charm molecular pentaquarks. The hadronic molecule is an important type of exotic state. Experimental and theoretical studies of hadronic molecules can deepen our understanding of the nonperturbative behavior of quantum chromodynamics (QCD). In particular, not only the mass spectra but also the decay behaviors of the $P_{c}$ states can help us test the binding mechanism of pentaquark states. So far, many groups have discussed the strong decay behaviors of the $P_{c}$ states in the meson-baryon hadronic molecular picture. For example, the decay branching fractions of the $P_{c}\to\eta_{c}p$ and $P_{c}\to J/\psi p$ processes were predicted by using heavy quark symmetry Wang:2019spc ; Voloshin:2019aut ; Sakai:2019qph ; Chen:2020pac ; Xu:2019zme ; Wang:2019hyc ; Gutsche:2019mkg . The effective Lagrangian method was adopted to study the partial widths of all the allowed decay channels for these $P_{c}$ states at the hadronic level Lin:2019qiv . As can be seen, all these results are model dependent, and the coupled-channel effect is not well taken into consideration in the strong decays of the $P_{c}$ states. Recently, the LHCb Collaboration reported evidence of a possible strange hidden-charm pentaquark $P_{cs}(4459)$ decaying to the $J/\psi\Lambda$ channel in the $\Xi_{b}^{-}\to J/\psi\Lambda K^{-}$ process 1837464 . The smallest significance is 3.1 $\sigma$.
Its resonance parameters are $M=4458.8\pm 2.9_{-1.2}^{+4.7}~{}\text{MeV}$ and $\Gamma=17.3\pm 6.5_{-5.7}^{+8.0}~{}\text{MeV}$, respectively. The spin-parity $J^{P}$ of the $P_{cs}(4459)$ remains undetermined due to the lack of experimental data. After the observation of the $P_{cs}(4459)$, it was interpreted as a $\Xi_{c}\bar{D}^{*}$ strange hidden-charm molecular pentaquark with $J^{P}=1/2^{-}(3/2^{-})$ or as a compact pentaquark state with $J^{P}=1/2^{-}$ Chen:2020kco ; Chen:2020uif ; Peng:2020hql ; Wang:2020eep . In particular, we can reproduce the mass of the $P_{cs}(4459)$ in the coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ system with $I(J^{P})=0(3/2^{-})$ by adopting the one-boson-exchange model Chen:2020kco , where the $\Xi_{c}\bar{D}^{*}$ and $\Xi_{c}^{*}\bar{D}$ channels are dominant. In fact, hidden-charm pentaquarks with strangeness, $P_{cs}$, were predicted in Refs. Wang:2019nvm ; Chen:2016ryt ; Anisovich:2015zqa ; Wang:2015wsa ; Feijoo:2015kts ; Lu:2016roh ; Xiao:2019gjd ; Zhang:2020cdi ; Shen:2020gpw ; Ferretti:2020ewe , and it was suggested to search for them in the $\Lambda_{b}(\Xi_{b})\to J/\psi\Lambda K(\eta)$ processes Lu:2016roh ; Feijoo:2015kts ; Chen:2015sxa . In this work, we study the two-body strong decay properties of the $P_{cs}(4459)$ as a strange hidden-charm molecule. In our calculation, we consider the coupled-channel effect and input the bound-state wave functions obtained in our former work Chen:2020kco . In fact, Zou et al. have already predicted the two-body strong decay behaviors of the possible $\Lambda_{c\bar{c}}$ states in single hadronic-molecule pictures Shen:2019evi . The obtained total widths and decay patterns can be valuable for identifying the molecular assignments and spin-parities of the strange hidden-charm molecular pentaquarks. This paper is organized as follows.
After the introduction, we present the two-body strong decay amplitudes for the $P_{cs}(4459)$ as a strange hidden-charm $\Xi_{c}\bar{D}^{*}$ molecule in Sec. II. The corresponding numerical results for the decay widths are given in Sec. III. The paper ends with a summary. ## II Two-body strong decay For the decay process $P_{cs}\to f_{1}+f_{2}$, its decay width reads $\displaystyle d\Gamma$ $\displaystyle=$ $\displaystyle\frac{1}{2J+1}\frac{|\bm{p}|}{32\pi^{2}m_{P_{cs}}^{2}}|\mathcal{M}(P_{cs}\to f_{1}+f_{2})|^{2}d\Omega,$ (2.1) which is expressed in the rest frame of the $P_{cs}$ state. Here $m_{P_{cs}}$, $J$, and $\bm{p}$ stand for the mass and spin of the initial $P_{cs}$ state and the momentum of the final states $(f_{1},~{}f_{2})$, respectively, with $\displaystyle|\bm{p}|$ $\displaystyle=$ $\displaystyle\sqrt{\left(m_{P_{cs}}^{2}-(m_{f_{1}}+m_{f_{2}})^{2}\right)\left(m_{P_{cs}}^{2}-(m_{f_{1}}-m_{f_{2}})^{2}\right)}/(2m_{P_{cs}}).$ As discussed in Ref. Chen:2020kco , the $P_{cs}(4459)$ can be explained as the $\Xi_{c}\bar{D}^{*}$ molecular state with $I(J^{P})=0(3/2^{-})$. When the binding energy is taken as $-19.28$ MeV, the probabilities for the $\Xi_{c}\bar{D}^{*}$, $\Xi_{c}^{*}\bar{D}$, $\Xi_{c}^{\prime}\bar{D}^{*}$, and $\Xi_{c}^{*}\bar{D}^{*}$ channels are 38.95%, 34.58%, 6.61%, and 18.86%, respectively.
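Integrating Eq. (2.1) over the solid angle for an angle-independent, spin-summed $|\mathcal{M}|^{2}$ gives $\Gamma=\frac{1}{2J+1}\frac{|\bm{p}|}{8\pi m_{P_{cs}}^{2}}|\mathcal{M}|^{2}$. A small numerical sketch of this kinematics follows; the constant $|\mathcal{M}|^{2}$ is a placeholder, not a model prediction, and the $J/\psi$ and $\Lambda$ masses are PDG values.

```python
import numpy as np

def p_cm(M, m1, m2):
    """Final-state momentum |p| in the rest frame of M -> m1 + m2 (MeV),
    i.e. the Kallen (triangle-function) expression below Eq. (2.1)."""
    return np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)

def width(M, m1, m2, J, M2):
    """Eq. (2.1) integrated over the solid angle for a constant,
    spin-summed |M|^2: Gamma = |p| |M|^2 / ((2J+1) 8 pi M^2)."""
    return p_cm(M, m1, m2) * M2 / ((2 * J + 1) * 8.0 * np.pi * M**2)

m_Pcs = 4458.8                        # LHCb central mass (MeV)
p = p_cm(m_Pcs, 3096.9, 1115.7)       # J/psi + Lambda final state
print(f"|p|(J/psi Lambda) = {p:.0f} MeV")
```

The $J/\psi\Lambda$ channel is thus well open, with about 650 MeV of relative momentum available in the $P_{cs}$ rest frame.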
After introducing the coupled channel effect, the interaction for the $P_{cs}(4459)\to f_{1}+f_{2}$ process can be expressed as $\displaystyle\langle f_{1}+f_{2}|P_{cs}\rangle$ $\displaystyle=$ $\displaystyle\langle f_{1}+f_{2}|\left(|\Xi_{c}\bar{D}^{*}\rangle\langle\Xi_{c}\bar{D}^{*}|+|\Xi_{c}^{*}\bar{D}\rangle\langle\Xi_{c}^{*}\bar{D}|\right.$ $\displaystyle\left.+|\Xi_{c}^{\prime}\bar{D}^{*}\rangle\langle\Xi_{c}^{\prime}\bar{D}^{*}|+|\Xi_{c}^{*}\bar{D}^{*}\rangle\langle\Xi_{c}^{*}\bar{D}^{*}|\right)P_{cs}\rangle$ $\displaystyle=$ $\displaystyle\int\frac{d^{3}k}{(2\pi)^{3}}d^{3}re^{-i\bm{k}\cdot\bm{r}}\psi_{\Xi_{c}\bar{D}^{*}}(\bm{r})\langle f_{1}+f_{2}|\Xi_{c}\bar{D}^{*}\rangle$ $\displaystyle+\int\frac{d^{3}k}{(2\pi)^{3}}d^{3}re^{-i\bm{k}\cdot\bm{r}}\psi_{\Xi_{c}^{*}\bar{D}}(\bm{r})\langle f_{1}+f_{2}|\Xi_{c}^{*}\bar{D}\rangle$ $\displaystyle+\int\frac{d^{3}k}{(2\pi)^{3}}d^{3}re^{-i\bm{k}\cdot\bm{r}}\psi_{\Xi_{c}^{\prime}\bar{D}^{*}}(\bm{r})\langle f_{1}+f_{2}|\Xi_{c}^{\prime}\bar{D}^{*}\rangle$ $\displaystyle+\int\frac{d^{3}k}{(2\pi)^{3}}d^{3}re^{-i\bm{k}\cdot\bm{r}}\psi_{\Xi_{c}^{*}\bar{D}^{*}}(\bm{r})\langle f_{1}+f_{2}|\Xi_{c}^{*}\bar{D}^{*}\rangle.$ Here, $\psi_{\Xi_{c}\bar{D}^{*}}(\bm{r})$, $\psi_{\Xi_{c}^{*}\bar{D}}(\bm{r})$, $\psi_{\Xi_{c}^{\prime}\bar{D}^{*}}(\bm{r})$, and $\psi_{\Xi_{c}^{*}\bar{D}^{*}}(\bm{r})$ are the wave functions of the $\Xi_{c}\bar{D}^{*}$, $\Xi_{c}^{*}\bar{D}$, $\Xi_{c}^{\prime}\bar{D}^{*}$, and $\Xi_{c}^{*}\bar{D}^{*}$ channels in the $r-$coordinate space, respectively.
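For an $S$-wave channel, the $e^{-i\bm{k}\cdot\bm{r}}$ integral above reduces to a spherical Bessel transform, $\tilde{\psi}(k)=4\pi\int j_{0}(kr)\,\psi(r)\,r^{2}\,dr$. A numerical sketch with an illustrative bound-state wave function (the $e^{-\gamma r}/r$ form and $\gamma\simeq 1~\rm fm^{-1}$, suggested by the $\sim$19 MeV binding energy, are assumptions, not the coupled-channel solutions of Ref. Chen:2020kco):

```python
import numpy as np

gamma = 1.0                      # fm^-1, illustrative binding scale
r = np.linspace(1e-4, 60.0, 200_000)
dr = r[1] - r[0]

# Normalized S-wave bound state: psi(r) = sqrt(gamma/2pi) exp(-gamma r)/r.
psi_r = np.sqrt(gamma / (2.0 * np.pi)) * np.exp(-gamma * r) / r
norm = 4.0 * np.pi * np.sum(psi_r**2 * r**2) * dr   # should be ~ 1

def psi_k(k):
    """Spherical Bessel (j0) transform of psi(r) at momentum k (fm^-1)."""
    j0 = np.sin(k * r) / (k * r)
    return 4.0 * np.pi * np.sum(j0 * psi_r * r**2) * dr

k = 0.5
num = psi_k(k)
# Analytic transform of this wave function: sqrt(8 pi gamma)/(gamma^2 + k^2).
ana = np.sqrt(8.0 * np.pi * gamma) / (gamma**2 + k**2)
print(num, ana)
```

The numerical transform reproduces the analytic one at the per-mille level, which is the accuracy needed when the amplitudes are convolved with $\tilde{\psi}(k)$.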
And we define $\displaystyle\langle f_{1}+f_{2}|P_{cs}\rangle=-\frac{\mathcal{M}(P_{cs}\to f_{1}+f_{2})}{(2\pi)^{3/2}\sqrt{2E_{P_{cs}}}\sqrt{2E_{f_{1}}}\sqrt{2E_{f_{2}}}},$ (2.3) $\displaystyle\langle f_{1}+f_{2}|\Xi_{c}^{{}^{\prime},*}\bar{D}^{(*)}\rangle=$ $\displaystyle\quad\quad\quad\,-\frac{\mathcal{M}\left(\Xi_{c}^{{}^{\prime},*}(\bm{k})+\bar{D}^{(*)}(-\bm{k})\to f_{1}(\bm{p})+f_{2}(-\bm{p})\right)}{(2\pi)^{3/2}\sqrt{2E_{\Xi_{c}^{{}^{\prime},*}}}\sqrt{2E_{\bar{D}^{(*)}}}\sqrt{2E_{f_{1}}}\sqrt{2E_{f_{2}}}}.\quad\,\,$ (2.4) There are three kinds of two-body strong decay processes: the hidden-charm modes, the open-charm modes, and the $c\bar{c}-$annihilation modes. In Table 1, we collect the possible two-body strong decay channels. Table 1: Two-body strong decay final states for the $P_{cs}(4459)$ as a $\Xi_{c}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. Each entry lists the final state, its threshold mass (in units of MeV), and its decay mode, where $S$ and $D$ stand for the $S-$wave and $D-$wave decay modes, respectively: $\eta_{c}\Lambda$ (4100, $D$), $J/\psi\Lambda$ (4212, $S$), $\Lambda_{c}\bar{D}_{s}$ (4254, $D$), $\Lambda_{c}\bar{D}_{s}^{*}$ (4398, $S$), $\Sigma_{c}\bar{D}_{s}$ (4423, $D$), $\Xi_{c}\bar{D}$ (4337, $D$), $\Xi_{c}^{\prime}\bar{D}$ (4447, $D$), $\phi\Lambda$ (2135, $S$), $\eta\Lambda$ (1662, $D$), $\omega\Lambda$ (2007, $S$), $\rho\Sigma$ (1971, $S$), $\pi\Sigma$ (1326, $D$), $\bar{K}N$ (1432, $D$), $\bar{K}^{*}N$ (1830, $S$), $K\Xi$ (1810, $D$), $K^{*}\Xi$ (2210, $S$). Because the $D-$wave interactions are strongly suppressed in comparison with the $S-$wave interactions, in the following we only focus on the $J/\psi\Lambda$, $\Lambda_{c}\bar{D}_{s}^{*}$, $\phi\Lambda$, $\omega\Lambda$, $\rho\Sigma$, $\bar{K}^{*}N$, and $K^{*}\Xi$ decay channels. Figure 1 shows the corresponding decay processes.
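A quick kinematic survey of the channels collected in Table 1 (threshold masses copied from the table; the LHCb central mass is used) shows that all of them are open, with the $\Xi_{c}^{\prime}\bar{D}$ channel having the least phase space:

```python
# A channel is kinematically open when the P_cs(4459) mass lies above its
# two-body threshold. Thresholds (MeV) are those quoted in Table 1.
m_Pcs = 4458.8
thresholds = {
    "eta_c Lambda": 4100, "J/psi Lambda": 4212, "Lambda_c D_s": 4254,
    "Lambda_c D_s*": 4398, "Sigma_c D_s": 4423, "Xi_c Dbar": 4337,
    "Xi_c' Dbar": 4447, "phi Lambda": 2135, "eta Lambda": 1662,
    "omega Lambda": 2007, "rho Sigma": 1971, "pi Sigma": 1326,
    "Kbar N": 1432, "Kbar* N": 1830, "K Xi": 1810, "K* Xi": 2210,
}

# Q-value (energy release) of each channel.
Q = {ch: m_Pcs - m for ch, m in thresholds.items()}
smallest = min(Q, key=Q.get)   # channel with the least phase space
print(f"all open: {all(q > 0 for q in Q.values())}")
print(f"smallest phase space: {smallest} (Q = {Q[smallest]:.1f} MeV)")
```

With only about 12 MeV of phase space, the $\Xi_{c}^{\prime}\bar{D}$ mode is kinematically marginal even before its $D$-wave suppression is taken into account.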
For the isoscalar $P_{cs}$ state as the coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{{}^{\prime}}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule, we can obtain $\displaystyle\mathcal{M}_{P_{cs}\to J/\psi\Lambda}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}\left(\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to J/\psi\Lambda}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to J/\psi\Lambda}\right),$ $\displaystyle\mathcal{M}_{P_{cs}\to\Lambda_{c}\bar{D}_{s}^{*}}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}\left(\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to\Lambda_{c}^{+}D_{s}^{*-}}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to\Lambda_{c}^{+}D_{s}^{*-}}\right),$ $\displaystyle\mathcal{M}_{P_{cs}\to\phi\Lambda}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}\left(\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to\phi\Lambda^{0}}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to\phi\Lambda^{0}}\right),$ $\displaystyle\mathcal{M}_{P_{cs}\to\omega\Lambda}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}\left(\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to\omega\Lambda^{0}}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to\omega\Lambda^{0}}\right),$ $\displaystyle\mathcal{M}_{P_{cs}\to\rho\Sigma}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{6}}\left(\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to\rho^{+}\Sigma^{-}}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to\rho^{+}\Sigma^{-}}\right.$ $\displaystyle\left.-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to\rho^{0}\Sigma^{0}}+\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to\rho^{0}\Sigma^{0}}\right.$ $\displaystyle\left.+\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to\rho^{-}\Sigma^{+}}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to\rho^{-}\Sigma^{+}}\right),$ $\displaystyle\mathcal{M}_{P_{cs}\to\bar{K}^{*}N}$ $\displaystyle=$ 
$\displaystyle\frac{1}{2}\left(\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to\bar{K}^{*0}n}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to\bar{K}^{*0}n}\right.$ $\displaystyle\left.+\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to K^{*-}p}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to K^{*-}p}\right),$ $\displaystyle\mathcal{M}_{P_{cs}\to{K}^{*}\Xi}$ $\displaystyle=$ $\displaystyle\frac{1}{2}\left(\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to{K}^{*+}\Xi^{-}}-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to{K}^{*+}\Xi^{-}}\right.$ $\displaystyle\left.-\mathcal{M}_{\Xi_{c}^{(^{\prime},*)+}{D}^{(*)-}\to K^{*0}\Xi}+\mathcal{M}_{\Xi_{c}^{(^{\prime},*)0}\bar{D}^{(*)0}\to K^{*0}\Xi}\right).$ Here, we need to mention that the sum of the decay amplitudes for $\Xi_{c}^{(^{\prime},*)}\bar{D}^{(*)}\to\bar{K}^{*}N$ via $\Lambda_{c}/\Sigma_{c}$ exchange vanishes when isospin is conserved. Figure 1: Two-body strong decay diagrams for the $P_{cs}(4459)$ as a coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$ at the hadron level.
The interaction Lagrangians related to the discussed decay processes are given as Lin:1999ad ; Nagahiro:2008mn ; Liu:2001ce ; MuellerGroeling:1990cw $\displaystyle\mathcal{L}_{PPV}$ $\displaystyle=$ $\displaystyle\frac{iG}{2\sqrt{2}}\left\langle\partial^{\mu}{P}\left({P}{V}_{\mu}-{V}_{\mu}{P}\right)\right\rangle,$ (2.5) $\displaystyle\mathcal{L}_{VVP}$ $\displaystyle=$ $\displaystyle\frac{G^{\prime}}{\sqrt{2}}\epsilon^{\mu\nu\alpha\beta}\left\langle\partial_{\mu}{V}_{\nu}\partial_{\alpha}{V}_{\beta}{P}\right\rangle,$ (2.6) $\displaystyle\mathcal{L}_{VVV}$ $\displaystyle=$ $\displaystyle\frac{iG}{2\sqrt{2}}\left\langle\partial^{\mu}{V}^{\nu}\left({V}_{\mu}{V}_{\nu}-{V}_{\nu}{V}_{\mu}\right)\right\rangle,$ (2.7) $\displaystyle\mathcal{L}_{BBP}$ $\displaystyle=$ $\displaystyle g_{p}\left\langle\bar{{B}}i\gamma_{5}{P}{B}\right\rangle,$ (2.8) $\displaystyle\mathcal{L}_{BBV}$ $\displaystyle=$ $\displaystyle g_{v}\langle\bar{{B}}\gamma^{\mu}{V}_{\mu}{B}\rangle+\frac{f_{v}}{2m}\left\langle\bar{{B}}\sigma^{\mu\nu}\partial_{\mu}{V}_{\nu}{B}\right\rangle,$ (2.9) $\displaystyle\mathcal{L}_{BDP}$ $\displaystyle=$ $\displaystyle\frac{g_{BDP}}{m_{{P}}}\left(\bar{{D}}^{\mu}{B}+\bar{{B}}{{D}}^{\mu}\right)\partial_{\mu}{P},$ (2.10) $\displaystyle\mathcal{L}_{BDV}$ $\displaystyle=$ $\displaystyle-i\frac{g_{BDV}}{m_{{V}}}\left(\bar{{D}}^{\mu}\gamma^{5}\gamma^{\nu}{B}+\bar{{B}}\gamma^{5}\gamma^{\nu}{{D}}^{\mu}\right)\left(\partial_{\mu}{V}_{\nu}-\partial_{\nu}{V}_{\mu}\right).$ Here, ${P}$, ${V}$, ${B}$, and ${D}$ stand for the pseudoscalar and vector mesons, octet, and decuplet baryons. 
For example, in the $SU(4)$ quark model, the pseudoscalar and vector mesons are expressed as $\displaystyle{P}$ $\displaystyle=$ $\displaystyle\left(\begin{array}[]{cccc}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}+\frac{\eta_{c}}{\sqrt{12}}&\pi^{+}&K^{+}&\bar{D}^{0}\\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}+\frac{\eta_{c}}{\sqrt{12}}&K^{0}&D^{-}\\\ K^{-}&\bar{K}^{0}&-\frac{2\eta}{\sqrt{6}}+\frac{\eta_{c}}{\sqrt{12}}&D_{s}^{-}\\\ D^{0}&D^{+}&D_{s}^{+}&-\frac{3\eta_{c}}{\sqrt{12}}\end{array}\right),$ $\displaystyle{V}$ $\displaystyle=$ $\displaystyle\left(\begin{array}[]{cccc}\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega_{8}}{\sqrt{6}}+\frac{J/\psi}{\sqrt{12}}&\rho^{+}&K^{*+}&\bar{D}^{*0}\\\ \rho^{-}&-\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega_{8}}{\sqrt{6}}+\frac{J/\psi}{\sqrt{12}}&K^{*0}&D^{*-}\\\ K^{*-}&\bar{K}^{*0}&-\frac{2\omega_{8}}{\sqrt{6}}+\frac{J/\psi}{\sqrt{12}}&D_{s}^{*-}\\\ D^{*0}&D^{*+}&D_{s}^{*+}&-\frac{3J/\psi}{\sqrt{12}}\end{array}\right),$ with $\omega_{8}=\omega\text{cos}\theta+\phi\text{sin}\theta$ and $\text{sin}\theta=-0.761$ particle . Coupling constants adopted in the following calculations are estimated from the $\rho-\pi-\pi$, $N-N-\pi$, $N-N-\rho(\omega)$, $N-\Delta-\pi$, and $N-\Delta-\rho$ interactions. 
For example, explicitly expanding the $SU(4)$-invariant interaction Lagrangians between baryons and pseudoscalar mesons, one can obtain $\displaystyle\mathcal{L}_{BBP}$ $\displaystyle=$ $\displaystyle\frac{5b-4a}{4\sqrt{2}}G_{p}(\bar{n}i\gamma^{5}\pi^{0}n-\bar{p}i\gamma^{5}\pi^{0}p$ (2.14) $\displaystyle+\bar{p}i\gamma^{5}\pi^{+}n+\bar{n}i\gamma^{5}\pi^{-}p)$ $\displaystyle+\frac{\sqrt{3}}{8}(a+b)G_{p}(\bar{\Xi}_{c}^{0}i\gamma^{5}D^{0}\Lambda^{0}-\bar{\Xi}_{c}^{+}i\gamma^{5}D^{+}\Lambda^{0})$ $\displaystyle+\frac{3}{8}(a-b)G_{p}(\bar{\Xi}_{c}^{{}^{\prime}0}i\gamma^{5}D^{0}\Lambda^{0}-\bar{\Xi}_{c}^{{}^{\prime}+}i\gamma^{5}D^{+}\Lambda^{0})$ $\displaystyle-\frac{a+b}{4\sqrt{2}}G_{p}(\bar{\Xi}_{c}^{+}i\gamma^{5}D_{s}^{+}\Xi^{0}+\bar{\Xi}_{c}^{0}i\gamma^{5}D_{s}^{+}\Xi^{-})$ $\displaystyle-\frac{3}{4}\sqrt{\frac{3}{2}}(a-b)G_{p}(\bar{\Xi}_{c}^{{}^{\prime}+}i\gamma^{5}D_{s}^{+}\Xi^{0}+\bar{\Xi}_{c}^{{}^{\prime}0}i\gamma^{5}D_{s}^{+}\Xi^{-})$ $\displaystyle+\frac{\sqrt{3}}{8}(a-2b)G_{p}(\bar{\Lambda}_{c}^{+}i\gamma^{5}K^{0}\Xi_{c}^{+}-\bar{\Lambda}_{c}^{+}i\gamma^{5}K^{+}\Xi_{c}^{0})$ $\displaystyle-\frac{3}{8}aG_{p}(\bar{\Lambda}_{c}^{+}i\gamma^{5}K^{0}\Xi_{c}^{{}^{\prime}+}-\bar{\Lambda}_{c}^{+}i\gamma^{5}K^{+}\Xi_{c}^{{}^{\prime}0})+\ldots.$ In Refs. Liu:2001ce ; Adelseck:1990ch ; Lin:1999ad ; Nagahiro:2008mn ; Ronchen:2012eg ; Janssen:1996kx , $b/a=5.3$, $g_{\pi NN}=13.5$, $g_{\rho NN}=3.25$, $f_{\rho NN}=6.1$, $g_{BDP}=2.127$, $g_{BDV}=16.03$, $G=12.00$, and $G^{\prime}=\frac{3G^{2}}{\left(32\sqrt{2}\right)\left(\pi^{2}f_{\pi}^{2}\right)}$ with $f_{\pi}=0.132$ GeV. All the coupling constants are determined by comparing the corresponding coefficients in Eq. (2.14). 
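As a quick numerical cross-check of the relation just quoted, $G^{\prime}=3G^{2}/\big(32\sqrt{2}\,\pi^{2}f_{\pi}^{2}\big)$ can be evaluated directly with the quoted inputs. The snippet below is our own evaluation of that formula; the numerical value of $G^{\prime}$ is not stated in the text:

```python
import math

def g_prime(G: float, f_pi: float) -> float:
    """G' = 3 G^2 / (32 * sqrt(2) * pi^2 * f_pi^2), in GeV^-2 for f_pi in GeV."""
    return 3.0 * G**2 / (32.0 * math.sqrt(2.0) * math.pi**2 * f_pi**2)

print(g_prime(12.00, 0.132))  # approximately 55.5 GeV^-2
```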
The scattering amplitudes for all the discussed decay processes can be expressed as $\displaystyle\mathcal{M}_{{BP\to BV}}^{{P}}$ $\displaystyle=$ $\displaystyle g_{p}\bar{u}_{3}\gamma_{5}u_{1}\frac{1}{q^{2}-m_{\mathbb{P}}^{2}}{g_{PPV}\epsilon_{4}^{\mu{\dagger}}(-p_{2\mu}+q_{\mu})},$ (2.15) $\displaystyle\mathcal{M}_{{BP\to BV}}^{{V}}$ $\displaystyle=$ $\displaystyle\left\\{g_{v}\bar{u}_{3}\gamma^{\mu}u_{1}+\frac{f_{v}}{4m^{*}}\bar{u}_{3}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})q_{\nu}u_{1}\right\\}$ (2.16) $\displaystyle\times\frac{g_{\mu\beta}-q_{\mu}q_{\beta}/m_{\mathbb{V}}^{2}}{q^{2}-m_{\mathbb{V}}^{2}}g_{VVP}\varepsilon^{\lambda\nu\alpha\beta}p_{4\nu}\epsilon_{4\lambda}^{{\dagger}}q_{\alpha},$ $\displaystyle\mathcal{M}_{{BP\to VB}}^{{B}}$ $\displaystyle=$ $\displaystyle g_{p}\bar{u}_{4}\gamma_{5}\frac{1}{\hbox to0.0pt{/\hss}{q}-m_{\mathbb{B}}}\left\\{g_{v}\epsilon_{3\mu}^{{\dagger}}\gamma^{\mu}u_{1}\right.$ (2.17) $\displaystyle\left.-\frac{f_{v}}{4m^{*}}p_{3\mu}\epsilon_{3\nu}^{{\dagger}}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})q_{\nu}u_{1}\right\\},$ $\displaystyle\mathcal{M}_{{BV\to BV}}^{{P}}$ $\displaystyle=$ $\displaystyle g_{p}\bar{u}_{3}\gamma_{5}u_{1}\frac{-1}{q^{2}-m_{\mathbb{P}}^{2}}g_{VVP}\varepsilon^{\lambda\sigma\alpha\beta}p_{4\lambda}\epsilon_{4\sigma}^{{\dagger}}p_{2\alpha}\epsilon_{2\beta},$ $\displaystyle\mathcal{M}_{{BV\to BV}}^{{V}}$ $\displaystyle=$ $\displaystyle\left\\{g_{v}\bar{u}_{3}\gamma^{\mu}u_{1}+\frac{f_{v}}{4m^{*}}\bar{u}_{3}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})q_{\nu}u_{1}\right\\}$ (2.19) $\displaystyle\times\frac{g_{\mu\beta}-q_{\mu}q_{\beta}/m_{\mathbb{V}}^{2}}{q^{2}-m_{\mathbb{V}}^{2}}g_{VVV}\left[\epsilon_{4}^{\alpha{\dagger}}\epsilon_{2}^{\beta}(p_{2\alpha}-q_{\alpha})\right.$ $\displaystyle\left.-\epsilon_{2\alpha}\epsilon_{4}^{\alpha{\dagger}}(p_{2}^{\beta}+p_{4}^{\beta})+\epsilon_{2\alpha}(\epsilon_{4}^{\beta{\dagger}}q^{\alpha}+p_{4}^{\alpha}\epsilon_{4}^{\beta{\dagger}})\right],$ 
$\displaystyle\mathcal{M}_{{BV\to VB}}^{{B}}$ $\displaystyle=$ $\displaystyle\left\\{g_{v}\bar{u}_{4}\gamma^{\mu}\epsilon_{2\mu}+\frac{f_{v}}{4m^{*}}\bar{u}_{4}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})p_{2\mu}\epsilon_{2\nu}\right\\}$ (2.20) $\displaystyle\times\frac{1}{\hbox to0.0pt{/\hss}{q}-m_{\mathbb{B}}}\left\\{g_{v}^{\prime}\gamma^{\alpha}\epsilon_{3\alpha}^{{\dagger}}u_{1}\right.$ $\displaystyle\left.+\frac{f_{v}^{\prime}}{4m^{\prime}}(\gamma^{\alpha}\gamma^{\beta}-\gamma^{\beta}\gamma^{\alpha})p_{3\alpha}\epsilon_{3\beta}^{{\dagger}}u_{1}\right\\},$ $\displaystyle\mathcal{M}_{{DP\to BV}}^{{P}}$ $\displaystyle=$ $\displaystyle\frac{g_{BDP}}{m_{P}}\bar{u}_{3}q_{\mu}u_{1}^{\mu}\frac{1}{q^{2}-m_{\mathbb{P}}^{2}}ig_{PPV}\epsilon_{4}^{\nu{\dagger}}(q_{\nu}-p_{4\nu}),$ (2.21) $\displaystyle\mathcal{M}_{{DP\to BV}}^{{V}}$ $\displaystyle=$ $\displaystyle-\frac{g_{BDV}}{m_{V}}\bar{u}_{3}\gamma^{5}(\gamma^{\nu}u_{1}^{\mu}-\gamma^{\mu}u_{1}^{\nu})q_{\mu}$ (2.22) $\displaystyle\times\frac{g_{\nu\beta}-q_{\nu}q_{\beta}/m_{\mathbb{V}}^{2}}{q^{2}-m_{\mathbb{V}}^{2}}ig_{VVP}\varepsilon^{\lambda\beta\alpha\delta}q_{\lambda}p_{4\alpha}\epsilon_{4\delta}^{{\dagger}},$ $\displaystyle\mathcal{M}_{{DP\to VB}}^{{B}}$ $\displaystyle=$ $\displaystyle ig_{BBP}\bar{u}_{4}\gamma^{5}\frac{1}{\hbox to0.0pt{/\hss}{q}-m_{\mathbb{B}}}\frac{g_{BDV}}{m_{V}}\gamma^{5}(\gamma^{\nu}u_{1}^{\mu}-\gamma^{\mu}u_{1}^{\nu})q_{\mu}\epsilon_{3\nu}^{{\dagger}},$ $\displaystyle\mathcal{M}_{{DV\to BV}}^{{P}}$ $\displaystyle=$ $\displaystyle\frac{g_{BDP}}{m_{P}}\bar{u}_{3}q_{\mu}u_{1}^{\mu}\frac{i}{q^{2}-m_{\mathbb{P}}^{2}}g_{VVP}\varepsilon^{\lambda\delta\alpha\beta}p_{2\lambda}\epsilon_{2\delta}p_{4\alpha}\epsilon_{4\beta}^{{\dagger}},$ $\displaystyle\mathcal{M}_{{DV\to BV}}^{{V}}$ $\displaystyle=$ $\displaystyle\frac{g_{BDV}}{m_{V}}\bar{u}_{3}\gamma^{5}(\gamma^{\nu}u_{1}^{\mu}-\gamma^{\mu}u_{1}^{\nu})q_{\mu}\frac{g_{\nu\beta}-q_{\nu}q_{\beta}/m_{\mathbb{V}}^{2}}{q^{2}-m_{\mathbb{V}}^{2}}$ 
(2.25) $\displaystyle\times g_{VVV}\left[\epsilon_{4}^{\alpha{\dagger}}\epsilon_{2}^{\beta}(p_{2\alpha}-q_{\alpha})\right.$ $\displaystyle\left.-\epsilon_{2\alpha}\epsilon_{4}^{\alpha{\dagger}}(p_{2}^{\beta}+p_{4}^{\beta})+\epsilon_{2\alpha}(\epsilon_{4}^{\beta{\dagger}}q^{\alpha}+p_{4}^{\alpha}\epsilon_{4}^{\beta{\dagger}})\right],$ $\displaystyle\mathcal{M}_{{DV\to VB}}^{{B}}$ $\displaystyle=$ $\displaystyle\left\\{g_{BBV}\bar{u}_{4}\gamma^{\alpha}\epsilon_{2\alpha}+\frac{f_{BBV}}{4m^{*}}\bar{u}_{4}(\gamma^{\alpha}\gamma^{\beta}-\gamma^{\beta}\gamma^{\alpha})p_{2\alpha}\epsilon_{2\beta}\right\\}$ (2.26) $\displaystyle\times\frac{1}{\hbox to0.0pt{/\hss}{q}-m_{\mathbb{B}}}\frac{g_{BDV}}{m_{V}}\gamma^{5}(\gamma^{\nu}u_{1}^{\mu}-\gamma^{\mu}u_{1}^{\nu})p_{3\mu}\epsilon_{3\nu}^{{\dagger}}.$ Here, $\mathcal{M}_{{i_{1}i_{2}\to f_{1}f_{2}}}^{E}$ corresponds to the scattering amplitude for the $i_{1}+i_{2}\to f_{1}+f_{2}$ process by exchanging the hadron $E$. The above scattering amplitudes have the form $\displaystyle\mathcal{M}$ $\displaystyle\sim$ $\displaystyle\frac{c_{0}+c_{1}\bm{k}^{2}+c_{2}\bm{p}^{2}+c_{3}\bm{k}\cdot\bm{p}+c_{i}(\bm{k}^{4},\bm{p}^{4},\ldots)}{\bm{k}^{2}+\bm{p}^{2}+2\bm{k}\cdot\bm{p}+M^{2}}.$ For a heavy, loosely bound state, the higher-order terms such as $c_{i}(\bm{k}^{4},\bm{p}^{4},\ldots)$ contribute very little, and we neglect them in our calculations. According to the relation in Eq. (II), the convergence of the amplitude $\mathcal{M}(P_{cs}\to f_{1}+f_{2})$ depends only on the wave functions of the $P_{cs}$ state, as shown in Figure 2. For simplicity, we impose an upper integration limit $k_{\text{Max}}$ on the amplitude $\mathcal{M}(P_{cs}\to f_{1}+f_{2})$, fixed by the wave-function normalization $\int_{0}^{k_{\text{Max}}}d^{3}k\,\psi(\bm{k})^{2}=1$. 
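The cutoff prescription above can be illustrated with a toy wave function. The Gaussian profile and the 0.99 saturation threshold below are illustrative assumptions of ours; the paper fixes $k_{\text{Max}}$ from the exact normalization of the bound-state wave function obtained in the OBE model:

```python
import numpy as np

# Toy momentum-space profile: a Gaussian of width beta (an illustrative
# assumption; the paper uses the bound-state wave function from the OBE model).
beta = 0.3                               # GeV
k = np.linspace(0.0, 3.0, 3001)          # GeV
psi = np.exp(-k**2 / (2.0 * beta**2))

integrand = 4.0 * np.pi * k**2 * psi**2  # d^3k = 4*pi*k^2 dk for an S wave
dk = k[1] - k[0]
increments = 0.5 * (integrand[:-1] + integrand[1:]) * dk   # trapezoid rule
cum = np.cumsum(increments) / increments.sum()             # normalized to 1

k_max = k[1:][np.searchsorted(cum, 0.99)]  # momentum carrying 99% of the norm
print(k_max)
```

For a loosely bound state the wave function is concentrated at small momenta, so the resulting $k_{\text{Max}}$ is modest and the neglected higher-order terms are suppressed.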
Figure 2: The radial wave functions for the $P_{cs}(4459)$ as a coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$ in momentum space, obtained by performing the Fourier transformation $\psi(k)=\int{d^{3}\bm{r}}e^{i\bm{k}\cdot\bm{r}}\psi(\bm{r})$. Here, the binding energy is $E=-19.28$ MeV. ## III numerical results Before calculating the decay widths, let us briefly introduce the bound-state properties of the $P_{cs}(4459)$ as a strange hidden-charm meson-baryon molecular pentaquark. In Figure 3, we present the probabilities for the different channels of the $P_{cs}(4459)$ as a coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. The coupled channel effect plays an essential role in forming this bound state. The $\Xi_{c}\bar{D}^{*}$ and $\Xi_{c}^{*}\bar{D}$ channels are the most important, followed by the $\Xi_{c}^{*}\bar{D}^{*}$ and $\Xi_{c}^{\prime}\bar{D}^{*}$ channels. Figure 3: Probabilities for different channels of the $P_{cs}(4459)$ as a coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. The shaded area marks the position of the $P_{cs}(4459)$ including the experimental uncertainty. With the above preparations, we can now compute the two-body strong decay widths for the coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. In Figure 4, we present the corresponding decay widths for the $P_{cs}(4459)$, taking the binding energy from $-0.75$ MeV to $-30$ MeV. We see that * • The total two-body strong decay width $\Gamma_{\text{tot}}$ ranges from 10 MeV to 25 MeV over the mass range of the $P_{cs}(4459)$, consistent with the experimental value $\Gamma=17.3\pm 6.5_{-5.7}^{+8.0}~{}\text{MeV}$. 
* • In general, as a hadronic molecular state becomes more deeply bound, the overlap of the wave functions of its components becomes larger and quark exchange within the molecule becomes easier. As shown in Figure 4(a), the total decay width grows as the mass of the $P_{cs}(4459)$ decreases. * • For the $c\bar{c}$-annihilation decay modes, the $K^{*}\Xi$ and $\omega\Lambda$ channels are the most important among all the discussed decay channels, as shown in Figure 4(b). The partial widths for these two final states range from several MeV to more than ten MeV, and the corresponding branching fraction $(\Gamma_{K^{*}\Xi}+\Gamma_{\omega\Lambda})/\Gamma_{\text{tot}}$ is around 80%. For the remaining $\phi\Lambda$ and $\rho\Sigma$ channels, the partial decay widths are around a few tenths and a few hundredths of an MeV, respectively. * • Compared to the light-hadron final states, the hidden-charm decay width is much smaller owing to the narrower phase space. The partial decay width for the $P_{cs}(4459)\to J/\psi\Lambda$ process is only a few hundredths of an MeV. * • For the open-charm decay modes, the partial decay width for the $\Lambda_{c}D_{s}^{*-}$ channel is in the range of 0.6 MeV to 2.0 MeV. The ratio $\mathcal{R}=\Gamma_{\Lambda_{c}D_{s}^{*-}}/\Gamma_{J/\psi\Lambda}$ is around ten. Thus, the open-charm mode should be an essential decay channel in searches for the $P_{cs}$ state as a strange hidden-charm molecular pentaquark in our model. Figure 4: The total (a) and partial (b) decay widths for the $P_{cs}(4459)$ as a coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. The shaded area marks the position of the $P_{cs}(4459)$ including the experimental uncertainty. 
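For orientation, the partial widths discussed above follow from the standard two-body decay formula $\Gamma=\frac{|\bm{p}|}{8\pi M^{2}}\overline{|\mathcal{M}|^{2}}$, with the final-state momentum fixed by the Källén function. A minimal kinematics helper (textbook relations, not code from this work; the masses used in the example are approximate PDG values):

```python
import math

def kallen(x: float, y: float, z: float) -> float:
    """Kallen triangle function lambda(x, y, z)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def two_body_momentum(M: float, m1: float, m2: float) -> float:
    """Center-of-mass momentum |p| for M -> m1 + m2 (all masses in GeV)."""
    return math.sqrt(kallen(M * M, m1 * m1, m2 * m2)) / (2.0 * M)

def two_body_width(M: float, m1: float, m2: float, amp2: float) -> float:
    """Gamma = |p| * |M|^2_avg / (8 * pi * M^2), for a given spin-averaged amp2."""
    return two_body_momentum(M, m1, m2) * amp2 / (8.0 * math.pi * M * M)

# Momentum released in P_cs(4459) -> J/psi + Lambda: about 0.65 GeV.
print(two_body_momentum(4.459, 3.097, 1.116))
```

The small momentum release in the hidden-charm channel, compared with the light-hadron channels, is the phase-space suppression referred to in the bullets above.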
To summarize, our results for the two-body strong decay widths support the interpretation of the $P_{cs}(4459)$ as the coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. ## IV summary In 2019, the LHCb Collaboration reported three narrow hidden-charm pentaquarks ($P_{c}(4312)$, $P_{c}(4440)$, and $P_{c}(4457)$) in the $\Lambda_{b}\to J/\psi pK$ process Aaij:2019vzc . They are likely to be charmed baryon and anti-charmed meson molecular states, and the coupled channel effect plays a very important role both in forming the bound states and in their strong decays Wang:2019spc ; Chen:2019asm . Very recently, the LHCb Collaboration reported evidence of a hidden-charm pentaquark with strangeness $|S|=1$. After adopting the OBE model and considering the coupled channel effect, we find that the newly reported $P_{cs}(4459)$ can be regarded as the coupled $\Xi_{c}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}/\Xi_{c}^{\prime}\bar{D}^{*}/\Xi_{c}^{*}\bar{D}^{*}$ molecule with $I(J^{P})=0(3/2^{-})$. The dominant channels are the $S$-wave $\Xi_{c}\bar{D}^{*}$ and $\Xi_{c}^{*}\bar{D}$ channels. Using the obtained bound-state wave functions, we study the two-body strong decay behavior of the $P_{cs}(4459)$ in the molecular picture. Our results show that the total decay width is well around the experimental value reported by the LHCb Collaboration 1837464 . The $c\bar{c}$-annihilation decay modes are very important. In particular, the partial decay widths for $P_{cs}(4459)\to K^{*}\Xi(\omega\Lambda)$ are several MeV or more, and their combined branching fraction is nearly 80%. The partial decay width for the $\Lambda_{c}D_{s}^{*-}$ mode is around 1 MeV. The ratio $\mathcal{R}=\Gamma_{\Lambda_{c}D_{s}^{*-}}/\Gamma_{J/\psi\Lambda}$ is around 10. The inner structure and spin-parity of the $P_{cs}(4459)$ remain a mystery; more theoretical and experimental studies are needed. 
Although our phenomenological study remains model dependent, the strong decay information provided here can serve as a crucial test of the hadronic molecular state assignment for the $P_{cs}$ state. Experimental searches for the possible hidden-charm molecular pentaquark will be helpful to check and develop the adopted phenomenological models. ## ACKNOWLEDGMENTS Rui Chen is very grateful to Xiang Liu and Shi-Lin Zhu for helpful discussions and constructive suggestions. This project is supported by the National Postdoctoral Program for Innovative Talent, the National Natural Science Foundation of China under Grants No. 11705069 and No. 11575008, and the National Key Basic Research Program of China (2015CB856700). ## References * (1) R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett. 122, no. 22, 222001 (2019) * (2) H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Phys. Rept. 639, 1 (2016) * (3) Y. R. Liu, H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Prog. Part. Nucl. Phys. 107, 237-320 (2019) * (4) N. Brambilla, S. Eidelman, C. Hanhart, A. Nefediev, C. P. Shen, C. E. Thomas, A. Vairo and C. Z. Yuan, Phys. Rept. 873, 1-154 (2020) * (5) F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou, Rev. Mod. Phys. 90, no.1, 015004 (2018) * (6) A. Esposito, A. Pilloni and A. D. Polosa, Phys. Rept. 668, 1-97 (2017) * (7) A. Hosaka, T. Iijima, K. Miyabayashi, Y. Sakai and S. Yasui, PTEP 2016, no.6, 062C01 (2016) * (8) R. Chen, Z. F. Sun, X. Liu and S. L. Zhu, Phys. Rev. D 100, no. 1, 011502 (2019) * (9) G. J. Wang, L. Y. Xiao, R. Chen, X. H. Liu, X. Liu and S. L. Zhu, Phys. Rev. D 102, no.3, 036012 (2020) doi:10.1103/PhysRevD.102.036012 [arXiv:1911.09613 [hep-ph]]. * (10) S. Sakai, H. J. Jing and F. K. Guo, Phys. Rev. D 100, no. 7, 074007 (2019) [arXiv:1907.03414 [hep-ph]]. * (11) M. B. Voloshin, Phys. Rev. D 100, no. 3, 034020 (2019) [arXiv:1907.01476 [hep-ph]]. * (12) H. X. Chen, Eur. Phys. J. 
C 80, no.10, 945 (2020) doi:10.1140/epjc/s10052-020-08519-1 [arXiv:2001.09563 [hep-ph]]. * (13) Y. J. Xu, C. Y. Cui, Y. L. Liu and M. Q. Huang, Phys. Rev. D 102, no.3, 034028 (2020) doi:10.1103/PhysRevD.102.034028 [arXiv:1907.05097 [hep-ph]]. * (14) T. Gutsche and V. E. Lyubovitskij, Phys. Rev. D 100, no.9, 094031 (2019) [arXiv:1910.03984 [hep-ph]]. * (15) Z. G. Wang and X. Wang, arXiv:1907.04582 [hep-ph]. * (16) Y. H. Lin and B. S. Zou, Phys. Rev. D 100, no.5, 056005 (2019) doi:10.1103/PhysRevD.100.056005 [arXiv:1908.05309 [hep-ph]]. * (17) R. Aaij et al. [LHCb], [arXiv:2012.10380 [hep-ex]]. * (18) R. Chen, [arXiv:2011.07214 [hep-ph]]. * (19) H. X. Chen, W. Chen, X. Liu and X. H. Liu, [arXiv:2011.01079 [hep-ph]]. * (20) F. Z. Peng, M. J. Yan, M. Sánchez Sánchez and M. P. Valderrama, [arXiv:2011.01915 [hep-ph]]. * (21) Z. G. Wang, [arXiv:2011.05102 [hep-ph]]. * (22) R. Chen, J. He and X. Liu, Chin. Phys. C 41, no.10, 103105 (2017) * (23) B. Wang, L. Meng and S. L. Zhu, Phys. Rev. D 101, no.3, 034018 (2020) * (24) V. V. Anisovich, M. A. Matveev, J. Nyiri, A. V. Sarantsev and A. N. Semenova, Int. J. Mod. Phys. A 30, no.32, 1550190 (2015) * (25) Z. G. Wang, Eur. Phys. J. C 76, no.3, 142 (2016) * (26) A. Feijoo, V. K. Magas, A. Ramos and E. Oset, Eur. Phys. J. C 76, no.8, 446 (2016) * (27) J. X. Lu, E. Wang, J. J. Xie, L. S. Geng and E. Oset, Phys. Rev. D 93, 094009 (2016) * (28) C. W. Xiao, J. Nieves and E. Oset, Phys. Lett. B 799, 135051 (2019) * (29) Q. Zhang, B. R. He and J. L. Ping, [arXiv:2006.01042 [hep-ph]]. * (30) C. W. Shen, H. J. Jing, F. K. Guo and J. J. Wu, Symmetry 12, no.10, 1611 (2020) * (31) J. Ferretti and E. Santopinto, JHEP 04, 119 (2020) * (32) H. X. Chen, L. S. Geng, W. H. Liang, E. Oset, E. Wang and J. J. Xie, Phys. Rev. C 93, no.6, 065203 (2016) * (33) C. W. Shen, J. J. Wu and B. S. Zou, Phys. Rev. D 100, no.5, 056006 (2019) doi:10.1103/PhysRevD.100.056006 [arXiv:1906.03896 [hep-ph]]. * (34) Z. -w. Lin and C. M. Ko, Phys. Rev. 
C 62, 034903 (2000) [arXiv:nucl-th/9912046]. * (35) H. Nagahiro, L. Roca and E. Oset, Eur. Phys. J. A 36, 73 (2008), [arXiv:0802.0455 [hep-ph]]. * (36) W. Liu, C. M. Ko and Z. W. Lin, Phys. Rev. C 65, 015203 (2002) doi:10.1103/PhysRevC.65.015203 * (37) A. Mueller-Groeling, K. Holinde and J. Speth, Nucl. Phys. A 513, 557-583 (1990) doi:10.1016/0375-9474(90)90398-6 * (38) T.-P. Cheng, L.-F. Li, Gauge Theory of Elementary Particle Physics (Oxford Univ. Press, New York, 1984) * (39) R. A. Adelseck and B. Saghai, Phys. Rev. C 42, 108-127 (1990) doi:10.1103/PhysRevC.42.108 * (40) D. Ronchen, M. Doring, F. Huang, H. Haberzettl, J. Haidenbauer, C. Hanhart, S. Krewald, U. G. Meissner and K. Nakayama, Eur. Phys. J. A 49, 44 (2013) doi:10.1140/epja/i2013-13044-5 [arXiv:1211.6998 [nucl-th]]. * (41) G. Janssen, K. Holinde and J. Speth, Phys. Rev. C 54, 2218-2234 (1996) doi:10.1103/PhysRevC.54.2218
# Identification of brain states, transitions, and communities using functional MRI

Lingbin Bian, School of Mathematics, Monash University, Australia; Turner Institute for Brain and Mental Health, School of Psychological Sciences and Monash Biomedical Imaging, Monash University, Australia (corresponding author)
Tiangang Cui, School of Mathematics, Monash University, Australia
B.T. Thomas Yeo, Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Alex Fornito, Turner Institute for Brain and Mental Health, School of Psychological Sciences and Monash Biomedical Imaging, Monash University, Australia
Adeel Razi, Turner Institute for Brain and Mental Health, School of Psychological Sciences and Monash Biomedical Imaging, Monash University, Australia; Wellcome Centre for Human Neuroimaging, University College London, United Kingdom (joint senior author, corresponding author)
Jonathan Keith, School of Mathematics, Monash University, Australia (joint senior author)

###### Abstract

Brain function relies on a precisely coordinated and dynamic balance between the functional integration and segregation of distinct neural systems. Characterizing the way in which neural systems reconfigure their interactions to give rise to distinct but hidden brain states remains an open challenge. In this paper, we propose a Bayesian model-based characterization of latent brain states and showcase a novel method based on posterior predictive discrepancy using the latent block model to detect transitions between latent brain states in blood oxygen level-dependent (BOLD) time series. The set of estimated parameters in the model includes a latent label vector that assigns network nodes to communities, and also block model parameters that reflect the weighted connectivity within and between communities. 
Besides extensive in-silico model evaluation, we also provide empirical validation (and replication) using the Human Connectome Project (HCP) dataset of 100 healthy adults. Our results, obtained through an analysis of task-fMRI data during working memory performance, show appropriate lags between external task demands and change-points between brain states, with distinctive community patterns distinguishing fixation, low-demand, and high-demand task conditions. ## Introduction Identifying changes in brain connectivity over time can provide insight into fundamental properties of human brain dynamics. However, there is as yet no commonly agreed definition of discrete brain states or method for identifying them [1]. Experiments targeting unconstrained spontaneous ‘resting-state’ neural dynamics [2, 3, 4, 5, 6, 7, 8, 9, 10] have limited ability to infer latent brain states or determine how the brain segues from one state to another, because it is not clear whether changes in brain connectivity are induced by variations in neural activity (for example, induced by cognitive or vigilance states) or by fluctuations in non-neuronal noise [11, 12, 13]. A recent study with naturalistic movie stimuli used a hidden Markov model to explore dynamic jumps between discrete brain states and found that variations in the sensory and narrative properties of the movie can evoke discrete brain processes [14]. Task-fMRI studies with more restrictive constraints on stimuli have demonstrated that functional connectivity exhibits variation during motor learning [15] and anxiety-inducing speech preparation [16]. 
Although task-based fMRI experiments can, to some extent, delineate the external stimuli (for example, the onset and duration of stimuli in block-designed experiments), which constitute reference points against which to identify changes in the observed signal, this information does not precisely determine the timing and duration of the latent brain state relative to psychological processes or neural activity. For example, an emotional stimulus may trigger a neural response which is delayed relative to stimulus onset and which persists for some time after stimulus offset. Moreover, the dynamics of brain states and functional networks are induced not only by external stimuli, but also by unknown intrinsic latent mental processes [17]. Therefore, the development of noninvasive methods for identifying transitions of latent brain states during both task performance and task-free conditions is necessary for characterizing the spatiotemporal dynamics of brain networks. Change-point detection in multivariate time series is a statistical problem with clear relevance to identifying transitions in brain states, particularly in the absence of knowledge regarding the experimental design. Several change-point detection methods based on spectral clustering [18, 19] and dynamic connectivity regression (DCR) [16] have previously been developed and applied to the study of fMRI time series, and these have enhanced our understanding of brain dynamics. 
However, change-point detection with spectral clustering only evaluates changes to the component eigenstructures of the networks and neglects the weighted connectivity between nodes, while the DCR method focuses only on the sparse graph and ignores the modular structure of brain networks. Other change-point detection strategies include a frequency-specific method [20], which applies a multivariate cumulative sum procedure to detect change-points in EEG data, and methods which focus on large-scale network estimation in fMRI time series [21, 22, 23, 24]. Many fMRI studies use sliding window methods for characterizing time-varying functional connectivity in time series analysis [25, 26, 8, 27, 28, 29, 2]. Methods based on hidden Markov models (HMM) are also widely used to analyze transient brain states [30, 31, 32]. A community is defined as a collection of nodes that are densely connected in a network. The problem of community detection is a topical area of network science [33, 34]. How communities change, or how the nodes in a network are assigned to specific communities, is an important problem in the characterization of networks. Although many community detection problems in network neuroscience are based on modularity [15, 35, 36], recently a hidden Markov stochastic block model combined with a non-overlapping sliding window was applied to infer dynamic functional connectivity for networks, where edge weights were only binary and the candidate time points evaluated were not consecutive [37, 38]. More general weighted stochastic block models [39] have been used to infer structural connectivity for human lifespan analysis [40] and to infer functional connectivity in the mesoscale architecture of drosophila, mouse, rat, macaque, and human connectomes [41]. However, these studies using the weighted stochastic block model only explore the brain network over the whole time course of the experiment and neglect the dynamic properties of networks. 
Weighted stochastic block models [39] are described in terms of exponential families (parameterized probability distributions), with the estimation of parameters performed using variational inference [42, 43]. Another relevant statistical approach introduces a fully Bayesian latent block model [2, 3], which includes both a binary latent block model and a Gaussian latent block model as special cases. The Gaussian latent block model is similar to the weighted stochastic block model, but different methods have been used for parameter estimation, including Markov chain Monte Carlo (MCMC) sampling. Although there is a broad literature exploring change-point detection, and also many papers that discuss community detection, relatively few papers combine these approaches, particularly from a Bayesian perspective. In this paper, we develop Bayesian model-based methods which unify change-point detection and community detection to explore when and how the community structure of discrete brain state changes under different external task demands at different time points using functional MRI. There are several advantages of our approach compared to existing change-point detection methods. Compared to data-driven methods like spectral clustering [18, 19] and DCR [16], which either ignore characterizing the weighted connectivity or the community patterns, the fully Bayesian framework and Markov chain Monte Carlo method provide flexible and powerful strategies that have been under-used for characterizing the latent properties of brain networks, including the dynamics of both the community memberships and weighted connectivity properties of the nodal community structures. Existing change-point detection methods based on the stochastic block model all use non-overlapping sliding windows and were applied only to binary brain networks [37, 38]. 
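To make the Gaussian latent block model concrete: each edge weight is modeled as Gaussian, with mean and variance set by the block indexed by the community labels of the edge's endpoints. A minimal log-likelihood sketch (our own illustration; the paper's exact parameterization and priors may differ):

```python
import numpy as np

def block_loglik(x, z, mu, sigma2):
    """Log-likelihood of observed matrix x (N x N) under a Gaussian latent
    block model with labels z (N,) in 0..K-1 and block parameters mu, sigma2
    (both K x K), summed over upper-triangle edges."""
    iu, ju = np.triu_indices(x.shape[0], k=1)
    m = mu[z[iu], z[ju]]                  # per-edge block mean
    s2 = sigma2[z[iu], z[ju]]             # per-edge block variance
    resid = x[iu, ju] - m
    return float(np.sum(-0.5 * np.log(2.0 * np.pi * s2) - 0.5 * resid**2 / s2))

# Toy check: two communities, observation sitting exactly at the block means.
z = np.array([0, 0, 0, 1, 1, 1])
mu = np.array([[0.6, 0.1], [0.1, 0.6]])
sigma2 = np.full((2, 2), 0.05)
x = mu[z][:, z]                           # entries x[i, j] = mu[z_i, z_j]
print(block_loglik(x, z, mu, sigma2))     # maximal here: zero residual per edge
```

In a full Bayesian treatment, MCMC samples both the labels z and the block parameters from the posterior implied by this likelihood and the chosen priors.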
In contrast to the stochastic block model used in time-varying functional connectivity studies, the Gaussian latent block model used in our work considers the correlation matrix as an observation without imposing any arbitrary thresholds, so that all the information contained in the time series is preserved, resulting in more accurate detection of change-points. Moreover, unlike methods based on fixed community memberships over the time course [38], our methods consider both the community memberships and the parameters related to the weighted connectivity to be time varying, which results in more flexible estimation of both community structure and connectivity patterns. Furthermore, our Bayesian change-point detection method uses overlapping sliding windows that assess all of the potential candidate change-points over the time course, which increases the resolution of the detected change-points compared to methods using non-overlapping windows [37, 38]. Finally, the proposed Bayesian change-point detection method is computationally efficient, scaling to whole-brain networks potentially covering hundreds of nodes within a reasonable time frame on the order of tens of minutes. Fig. 1: The framework for identifying brain states, transitions and communities: a Schematic of the proposed Bayesian change-point detection method. Three different background colors represent three brain states of individual subjects with different community architectures. The colors of the nodes represent community memberships. A sliding window of width $W$ centered at $t$ is applied to the time series. The different colored time series correspond to BOLD time series for each node. The sample correlation matrix $\textbf{x}_{t}$ (i.e., an observation for our Bayesian model) is calculated from the sample data $\textbf{Y}_{t}$ within the sliding window. 
We use the Gaussian latent block model to fit the observations and evaluate model fits to the observations to obtain the posterior predictive discrepancy index (PPDI). We then calculate the cumulative discrepancy energy (CDE) from the PPDI and use the CDE as a scoring criterion to estimate the change-points of the community architectures. b Dynamic community memberships of networks with $N=16$ nodes. A latent label vector z contains the labels ($k$) of specific communities for the nodes. Nodes of the same color are located in the same community. The dashed lines represent the (weighted) connectivity between communities and the solid lines represent the (weighted) connectivity within the communities. c Model fitness assessment. The observation is the realized adjacency matrix; different colors in the latent block model represent different blocks with the diagonal blocks representing the connectivity within a community and the off-diagonal blocks representing the connectivity between communities. To demonstrate distinct blocks of the latent block model, in this schematic we group the nodes in the same community adjacently and the communities are sorted. In reality, the labels of the nodes are mixed with respect to an adjacency matrix. The term $\bm{\pi}_{kl}$ represents the model parameters in block $kl$. 
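The windowing step in the framework, a sample correlation matrix $\textbf{x}_{t}$ computed from the BOLD data $\textbf{Y}_{t}$ inside a width-$W$ window, can be sketched as follows (a standard construction, not the authors' code; the toy dimensions are arbitrary):

```python
import numpy as np

def windowed_correlations(Y: np.ndarray, W: int) -> np.ndarray:
    """Y: (T, N) BOLD time series for N nodes. Returns (T - W + 1, N, N)
    sample correlation matrices, one per length-W sliding window."""
    T, N = Y.shape
    out = np.empty((T - W + 1, N, N))
    for i in range(T - W + 1):
        out[i] = np.corrcoef(Y[i:i + W].T)   # N x N correlation of this window
    return out

rng = np.random.default_rng(0)
Y = rng.standard_normal((180, 16))           # T=180 time points, N=16 nodes
X = windowed_correlations(Y, W=20)
print(X.shape)                                # (161, 16, 16)
```

Each slice of `X` serves as one observation for the block model; consecutive overlapping windows make every candidate time point assessable.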
Our paper presents four main contributions, namely: (i) we quantitatively characterize discrete brain states with weighted connectivity and time-dependent community memberships, using the latent block model within a temporal interval between two consecutive change-points; (ii) we propose a new Bayesian change-point detection method called posterior predictive discrepancy (PPD) to estimate transition locations between brain states, using a Bayesian model fitness assessment; (iii) in addition to the locations of change-points, we also infer the community architecture of discrete brain states, which we show are distinctive of 2-back, 0-back, and fixation conditions in a working-memory task-based fMRI experiment; and (iv) we further empirically find that the estimated change-points between brain states show appropriate lags relative to the external working memory task conditions. Fig. 2: Results of method validation using synthetic data: a CDE of the multivariate Gaussian data with SNR=5dB using different models ($K$=6, 5, 4, and 3). The sliding window size for converting from the time series to the correlation matrix sequence is $W=20$, whereas (for smoothing) the sliding window size for converting from PPDI to CDE is $W_{s}=10$. The vertical dashed lines are the locations of the true change-points ($t$=20, 50, 80, 100, 130, and 160). The colored scatterplots in the figures are the CDEs of individual (virtual) subjects and the black curve is the group CDE (averaged over 100 subjects). The red points are the local maxima and the blue points are the local minima. b Local fitting with different models (from $K$=3 to 18) for synthetic data (SNR=5dB). Different colors represent the PPDI values of different states with the true number of communities $K^{true}$. c The estimation of community constituents for SNR=5dB at each discrete state ($t$=36, 66, 91, 116, 146) for brain states 1 to 5, respectively. 
The estimations of the latent label vectors (Estimation) and the label vectors (True) that determine the covariance matrix in the generative model are shown as bar graphs. The strength and variation of the connectivity within and between communities are represented by the block mean and variance matrices within each panel.

## Results

Our proposed method is capable of identifying transitions between discrete brain states and inferring the patterns of connectivity between brain regions that underlie those brain states, by modelling time-varying dynamics in the BOLD signal under different stimuli. In this section, we validate our proposed methodology by applying Bayesian change-point detection and network estimation to both synthetic data and real fMRI data. The Bayesian change-point detection method is described in Fig. 1, and the mathematical formulation and detailed descriptions are in the Methods section (also see Supplementary information). We first use synthetic multivariate Gaussian data for extensive validation and critically evaluate the performance of our change-point detection and sampling algorithms. For real data analysis, we use working memory task fMRI (WM-tfMRI) data from the Human Connectome Project (HCP) [46]. We extracted the time series of 35 nodes whose MNI coordinates were determined by significant activations obtained via clusterwise inference using FSL [47].

### Method validation using synthetic data

To validate the Bayesian change-point detection algorithm, we first use synthetic data with a signal-to-noise ratio (SNR) of 5dB. The simulated states of segments between two consecutive true change-points in the synthetic data can be repeated or all different, depending on the settings of the parameters in the generative model. A detailed description of the generative model and parameter settings for simulating the synthetic data is provided in Supplementary Section 1.
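The core idea of the generative model — multivariate Gaussian time series whose covariance has block structure determined by community labels — can be sketched as follows. This is a minimal illustration, not the exact model of Supplementary Section 1; the within/between correlation values and segment lengths are assumed for the example.

```python
import numpy as np

def community_covariance(labels, within=0.8, between=0.05):
    """Covariance with block structure: off-diagonal entries are `within`
    for node pairs sharing a community label, `between` otherwise
    (illustrative values chosen to keep the matrix positive definite)."""
    same = labels[:, None] == labels[None, :]
    cov = np.where(same, within, between)
    np.fill_diagonal(cov, 1.0)
    return cov

def simulate_segments(label_vectors, seg_len=30, seed=0):
    """Concatenate Gaussian segments, one community structure per state;
    boundaries between segments are the true change-points."""
    rng = np.random.default_rng(seed)
    segs = [rng.multivariate_normal(np.zeros(len(z)),
                                    community_covariance(z), size=seg_len)
            for z in label_vectors]
    return np.vstack(segs)  # shape (T, N)

# e.g. 16 nodes, two states with different community memberships
z1 = np.repeat([0, 1, 2, 3], 4)   # four communities of four nodes
z2 = np.repeat([0, 1], 8)         # two communities of eight nodes
x = simulate_segments([z1, z2], seg_len=30)
```

Scaling the noise variance relative to the block correlations is how different SNR levels (10dB, 5dB, 0dB, -5dB) would be realized in such a scheme.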
Further simulation results with different levels of SNR (SNR=10dB, SNR=0dB, and SNR=-5dB) are provided in Supplementary Figures 1, 2, and 3. The resulting group-level cumulative discrepancy energy (CDE) scores (see Methods section for how we define CDE) using models with different values of $K$, where $K$ is the number of communities, are shown in Fig. 2a. We use a latent block model to fit the adjacency matrix at consecutive time points for change-point detection, which we call global fitting. The local maxima (red dots) of the group CDE indicate locations of change-points and the local minima (blue dots) correspond to distinct states that differ in their community architecture. We find that the local maxima (red dots) are located very close to the true change-points in all of the graphs in Fig. 2a, which indicates that the global fitting performs well. Here we clarify that global fitting is used to estimate the locations of the change-points or transitions of brain states, whereas local fitting is used to select a latent block model to estimate the community structures of discrete brain states (see Methods section for a detailed explanation of global and local fitting). Next, using the global fitting results with $K=6$ and $W=20$, where $W$ is the width of the sliding (rectangular) window, we find the locations of the local minima (blue dots) to be $t=\\{36,66,91,116,146\\}$, where each location corresponds to a discrete state. Next, we use local fitting to select a model (i.e. $K$ for local inference) to infer the community membership and model parameters relating to the connectivity of the discrete states. For local inference, the group-averaged adjacency matrix is considered as the observation. We assess the goodness of fit between the observation and a latent block model with various values of $K$ ($K=3,\dots,18$) using posterior predictive discrepancy for each local minimum, as shown in Fig. 2b.
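The criterion above — local maxima of the group CDE taken as change-points, local minima taken as discrete states — can be sketched with a standard extrema finder. This is an illustrative helper on a toy CDE trace, assuming a simple strict-comparison neighborhood; the paper's post-processing of the extrema is more involved.

```python
import numpy as np
from scipy.signal import argrelextrema

def cde_extrema(group_cde, order=5):
    """Candidate change-points (local maxima) and discrete-state
    locations (local minima) of a group CDE trace, where each point
    must be a strict extremum over `order` neighbors on each side."""
    maxima = argrelextrema(group_cde, np.greater, order=order)[0]
    minima = argrelextrema(group_cde, np.less, order=order)[0]
    return maxima, minima

# Toy CDE trace with peaks near t=20 and t=50 and a trough between them
t = np.arange(80)
cde = (np.exp(-0.5 * ((t - 20) / 4) ** 2)
       + np.exp(-0.5 * ((t - 50) / 4) ** 2))
mx, mn = cde_extrema(cde)
```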
We selected the value of $K$ at which the curve starts to flatten as the preferred model. We find that the model assessment curves for states 1, 2, 4, and 5 flatten at $K=4$, whereas the model assessment curve for state 3 is flat over the entire range (from $K=3$ and up). Therefore the selected models are $K=\\{4,4,3,4,4\\}$ for states 1 to 5, respectively. To validate the MCMC sampling of the density $p(\textbf{z}|\textbf{x},K)$, we compare the estimate of the latent label vector to the ground truth of the node memberships. Fig. 2c shows the inferred community architectures of the discrete states including the estimated latent label vectors and the model parameters of block mean and variance. The true label vectors that determine the covariance matrix in the generative models are also included in this figure. We use the most frequent latent label vectors in the Markov chain after the burn-in steps as the estimate. Note that label-switching occurs in the MCMC sampling, which is a well-known problem in Bayesian estimation of mixture models [1]. In the results presented here, the node memberships have been relabelled to correct for label switching. The algorithm used for this purpose is described in Supplementary Section 2. We find that the estimated latent label vectors are (largely) consistent with the ground truth of labels that determined the covariance matrix. The discrepant ‘True’ and ‘Estimation’ patterns with respect to states 2 and 4 are due to the bias induced by the selected model ($K=5$ for the ground truth and $K=4$ for the selected model). Although the colors of the labels in the ‘True’ and ‘Estimation’ patterns are discrepant, we can see that the values of the labels are largely consistent, with some labels of $k=5$ missing in the ‘Estimation’ pattern compared to the ‘True’ pattern. 
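A standard way to correct label switching, as described here, is to permute the sampled labels so that they best match a reference labeling. The sketch below uses the Hungarian algorithm on a label co-occurrence matrix; it is an illustrative approach, not necessarily the exact algorithm of Supplementary Section 2.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(z, z_ref, K):
    """Permute community labels in `z` to best match `z_ref` by
    maximizing label co-occurrence; a common post-processing fix for
    label switching in mixture-model MCMC."""
    cost = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            # negative overlap: minimizing cost maximizes agreement
            cost[k, l] = -np.sum((z == k) & (z_ref == l))
    _, perm = linear_sum_assignment(cost)
    return perm[z]

z_ref = np.array([0, 0, 1, 1, 2, 2])
z_mcmc = np.array([2, 2, 0, 0, 1, 1])   # same partition, switched labels
z_fixed = relabel(z_mcmc, z_ref, K=3)   # recovers z_ref's labeling
```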
Given the estimated latent label vector, we then draw samples of the block mean and variance from the posterior $p(\bm{\pi}|\textbf{x},\textbf{z})$ conditional on the estimated latent label vector z. However, there is no ground truth for the block mean and variance when we generate the synthetic data. The validation of sampling model parameters is illustrated in Supplementary Figure 4.

### Method validation using working memory (WM) task-fMRI data

| Node number | Z-MAX | MNI x | MNI y | MNI z | Voxel x | Voxel y | Voxel z | Region name |
|---|---|---|---|---|---|---|---|---|
| 1 | 4.97 | 48 | -58 | 22 | 21 | 34 | 47 | Angular Gyrus |
| 2 | 9.61 | 36 | 8 | 12 | 27 | 67 | 42 | Central Opercular Cortex |
| 3 | 8.27 | -36 | 4 | 12 | 63 | 65 | 42 | Central Opercular Cortex |
| 4 | 6.48 | 40 | 34 | -14 | 25 | 80 | 29 | Frontal Orbital Cortex |
| 5 | 7.83 | -12 | 46 | 46 | 51 | 86 | 59 | Frontal Pole |
| 6 | 4.84 | 54 | 32 | -4 | 18 | 79 | 34 | Inferior Frontal Gyrus |
| 7 | 6 | 52 | 38 | 10 | 19 | 82 | 41 | Inferior Frontal Gyrus |
| 8 | 4.38 | -52 | 40 | 6 | 71 | 83 | 39 | Inferior Frontal Gyrus |
| 9 | 6.05 | 52 | -70 | 36 | 19 | 28 | 54 | Inferior Parietal Lobule PGp R |
| 10 | 7.26 | -48 | -68 | 34 | 69 | 29 | 53 | Inferior Parietal Lobule PGp L |
| 11 | 6.18 | 44 | -24 | -20 | 23 | 51 | 26 | Inferior Temporal Gyrus |
| 12 | 9.54 | 36 | -86 | 16 | 27 | 20 | 44 | Lateral Occipital Cortex |
| 13 | 8.04 | -30 | -80 | -34 | 60 | 23 | 19 | Left Crus I |
| 14 | 7.6 | -8 | -58 | -52 | 49 | 34 | 10 | Left IX |
| 15 | 6.9 | -22 | -48 | -52 | 56 | 39 | 10 | Left VIIIb |
| 16 | 14.5 | 6 | -90 | -10 | 42 | 18 | 31 | Lingual Gyrus |
| 17 | 10.3 | 30 | 10 | 58 | 30 | 68 | 65 | Middle Frontal Gyrus |
| 18 | 6.61 | 66 | -30 | -12 | 12 | 48 | 30 | Middle Temporal Gyrus |
| 19 | 4.53 | -68 | -34 | -4 | 79 | 46 | 34 | Middle Temporal Gyrus |
| 20 | 14.5 | 18 | -88 | -8 | 36 | 19 | 32 | Occipital Fusiform Gyrus |
| 21 | 5.06 | -12 | -92 | -2 | 51 | 17 | 35 | Occipital Pole |
| 22 | 9.87 | 6 | 40 | -6 | 42 | 83 | 33 | Paracingulate Gyrus |
| 23 | 12 | 42 | -16 | -2 | 24 | 55 | 35 | Planum Polare |
| 24 | 11.3 | -40 | -22 | 0 | 65 | 52 | 36 | Planum Polare |
| 25 | 9.03 | 38 | -26 | 66 | 26 | 50 | 69 | Postcentral Gyrus |
| 26 | 8.31 | -10 | -60 | 14 | 50 | 33 | 43 | Precuneus Cortex |
| 27 | 5.7 | 46 | -60 | -42 | 22 | 33 | 15 | Right Crus I |
| 28 | 8.34 | 32 | -80 | -34 | 29 | 23 | 19 | Right Crus I |
| 29 | 10.9 | 32 | -58 | -34 | 29 | 34 | 19 | Right Crus I |
| 30 | 6.41 | 10 | -8 | -14 | 40 | 59 | 29 | Right Hippocampus |
| 31 | 6.19 | 32 | -52 | 2 | 29 | 37 | 37 | Right Lateral Ventricle |
| 32 | 7.69 | 24 | -46 | 16 | 33 | 40 | 44 | Right Lateral Ventricle |
| 33 | 6.13 | 0 | 10 | -14 | 45 | 68 | 29 | Subcallosal Cortex |
| 34 | 10.7 | 48 | -44 | 46 | 21 | 41 | 59 | Supramarginal Gyrus |
| 35 | 4.23 | -50 | -46 | 10 | 70 | 40 | 41 | Supramarginal Gyrus |

Table 1: Significant activations of clusterwise inference (cluster-corrected Z>3.1, P<0.05). Activations are described in terms of the local maximum Z (Z-MAX) statistic within each cluster, including the activations of all contrast maps among 2-back, 0-back, and fixation.

In this analysis, we used preprocessed working memory (WM)-tfMRI data obtained from 100 unrelated healthy adult subjects under a block-designed paradigm, available from the Human Connectome Project (HCP) [46]. We mainly focused on the working memory load contrasts of 2-back vs fixation, 0-back vs fixation, or 2-back vs 0-back, and determined the brain regions of interest from the GLM analysis. After group-level GLM analysis, we obtained cluster activations with locally maximum Z statistics for different contrasts. The results in the form of thresholded local maximum Z statistic (Z-MAX) maps are shown in Supplementary Figure 5. The light box views of the thresholded local maximum Z statistic with different contrasts are provided in Supplementary Figure 6. Significant activations obtained by clusterwise inference and the corresponding MNI coordinates with region names are shown in Table 1.
We finally extracted the time series of the 35 brain regions corresponding to the MNI coordinates. Refer to the Methods section for the details of experimental design, GLM analysis and time series extraction. Fig. 3: The results of Bayesian change-point detection for working memory tfMRI data (session 1, LR): Cumulative discrepancy energy (CDE) with different sliding window sizes ($W$=22, 26, 30, and 34; a-d under the model $K=3$) and different models ($K$=3, 4, and 5; c, e and f using a sliding window of $W=30$). $W_{s}$ is the width of the sliding window used for converting from PPDI to CDE. The vertical dashed lines are the times of onset of the stimuli, which are provided in the EV.txt files in the released data. The multi-color scatterplots in the figures represent the CDEs of individual subjects and the black curve is the group CDE (averaged CDE over 100 subjects). The red dots are the local maxima, which are taken to be the locations of change-points, and the blue dots are the local minima, which are used for local inference of the discrete brain states. Fig. 4: Detected change-points and the locations of brain states matching the task blocks for working memory tfMRI data (session 1, LR) with $K=3$ and $W=30$. The numbers in the small rectangular frame are the boundaries of the external task demands, the background colors in the large rectangle indicate the different task conditions, and the blue and red bars with specified numbers are the estimated locations of discrete brain states and change-points.

#### Change-point detection for tfMRI time series

In the main text, we illustrate the results using the HCP working memory data of session 1, i.e., with left-to-right (LR) polarity. Replication results obtained using session 2 (RL) are demonstrated in Supplementary Figures 10 to 15 and Supplementary Table 1. We compare the brain states of different working memory loads for a specific kind of picture (tool, body, face, and place) involved in the experiments.
As there is no repetition of task conditions in a single session, the estimated patterns of brain states do not recur in the LR session. One can compare the LR and RL sessions for the recurrence of a specific task condition. To detect change-points in the extracted time series, we first converted each time series into a sequence of correlation matrices for each subject. We then modeled this sequence of correlation matrices for each subject using the latent block model and evaluated posterior predictive discrepancy (PPD) to assess the model fitness. Next, we converted the resulting PPD index (PPDI) to a CDE score for each subject. For group-level analysis, we averaged the resulting individual CDE scores over 100 subjects to obtain a group CDE sequence, as shown in Fig. 3, with different window sizes $W$=22, 26, 30, 34 (Fig. 3a-d). We chose the window size for converting from PPDI to CDE to be a constant $W_{s}=10$ for all of the assessments. The multi-colored scatterplots in the figures are the individual CDE scores. Although there are some false positives in terms of both local maxima and local minima (here false positives are defined as multiple points of local minima or local maxima that should be discarded in a single task block), we note that the onsets of the stimuli precede the inferred local maxima, and the local minima also show appropriate lags (for example, about 10 frames, or 7 seconds, as shown in Fig. 3c) compared to the mid-points of the working memory blocks. For fixation blocks, the local maxima show lags compared to the mid-points of the blocks. These lags are likely due to the delay in the haemodynamic response of brain activation. With the same number of communities $K=3$, we found there are more false positives with window size $W=22$ compared to $W=26$, $W=30$ and $W=34$. This is because a smaller sliding window contains fewer samples. We also tried different models with $K=4$ and $K=5$.
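The first stage of the per-subject pipeline described here — converting a time series into overlapping windowed correlation matrices, with CDE traces later averaged over subjects — can be sketched as follows. This is a minimal illustration: the window width and node count mirror the text, while the data are random placeholders.

```python
import numpy as np

def sliding_correlations(ts, W):
    """Convert a (T, N) time series into a sequence of N x N correlation
    matrices using an overlapping rectangular window of width W."""
    T = ts.shape[0]
    return np.stack([np.corrcoef(ts[t:t + W].T) for t in range(T - W + 1)])

def group_average(cde_traces):
    """Group CDE: average the individual CDE traces over subjects."""
    return np.mean(np.stack(cde_traces), axis=0)

rng = np.random.default_rng(0)
ts = rng.standard_normal((100, 35))     # one subject, 35 nodes (placeholder)
corrs = sliding_correlations(ts, W=20)  # one correlation matrix per window
```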
We found that there are more false positives with larger values of $K$. Larger values of $K$ imply more blocks in the model, which gives rise to relatively better model fitness. In this situation, there will be less distinction between relatively static brain states and transition states with change-points in the window. The false positives among the local minima and local maxima are also influenced by the window size $W_{s}$ used for transforming from PPDI to CDE. A larger window size (for example $W_{s}=30$) reduces the accuracy of the estimates and results in false negatives. Too small a value of $W_{s}$ increases the false positive rate. We found that $W_{s}=10$ works well for all of the real data analyses. The time spent to run the posterior predictive assessment on each subject ($T$=405 frames, posterior predictive replication number $S$=50, $K$=3, and the window size $W$=30) by using a 2.6 GHz Intel Core i7 processor unit was about 10 minutes. Fig. 5: Local fitting between averaged adjacency matrix and models from $K$=3 to 18. Different colors represent the PPDI values of different brain states. Fig. 6: Community structure of the discrete brain states: The figures with blue frames represent brain states corresponding to working memory tasks (2-back tool at $t=41$; 0-back body at $t=76$; 2-back face at $t=140$; 0-back tool at $t=175$; 2-back body at $t=239$; 2-back place at $t=278$; 0-back face at $t=334$; and 0-back place at $t=375$ in a-k) and those with red frames represent brain states corresponding to fixation (fixation at $t$=107, 206, and 306 in c, f, and i). Each brain state shows connectivity at a sparsity level of 10%. The different colors of the labels represent community memberships. The strength of the connectivity is represented by the colors shown in the bar at the right of each frame. In Circos maps, nodes in the same community are adjacent and have the same color. 
Node numbers and abbreviations of the corresponding brain regions are shown around the circles. In each frame, different colors represent different community numbers. The connectivity above the sparsity level is represented by arcs. The blue links represent connectivity within communities and the red links represent connectivity between communities. ‘Local inference’ refers to estimating the discrete brain state corresponding to each task condition via Bayesian modelling. The group-averaged dynamic functional networks were analyzed by performing ‘local inference’ as follows. In this experiment, we used results obtained for $K=3$ and $W=30$ (see Fig. 3c). We first listed all of the local maxima and minima, grouping time points separated by distances smaller than 8 into vectors. Maxima and minima deemed to be false positives were discarded. The time points corresponding to the local minimum value of group CDE are (41, 46, 48, 50, 54) and (140, 147, 152). These were reduced to single time points corresponding to discrete brain states, specifically 41 and 140 respectively, with all the other elements in the vectors presumed to be false positives and discarded. Time points with CDE value differences smaller than 0.002 were also discarded (points (191, 192) and (290, 292)). The resulting estimated change-point locations (maxima) are at $\\{68,107,165,206,265,306,356\\}$, and the estimated time points of the discrete brain states (minima) are $\\{41,76,140,175,239,278,334,375\\}$. A comparison of the detected change-points to the task blocks for working memory tfMRI data is shown in Fig. 4.

#### Local inference for discrete brain states

For ‘local inference’, we first calculated the group-averaged adjacency matrix with a window of $W_{g}=20$, for all brain states. The center of the window is located at the time point of the local minimum value. We evaluated the goodness of fit for models with different values of $K$ (Fig. 5).
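The extrema post-processing described above — grouping candidate time points separated by fewer than 8 frames and keeping one representative per group — can be sketched as a small helper. This is an illustrative reconstruction; the additional filter on CDE value differences (< 0.002) is applied separately and is omitted here.

```python
def group_extrema(points, max_gap=8):
    """Group candidate extrema lying within `max_gap` frames of each
    other; nearby detections are assumed to belong to the same event."""
    points = sorted(points)
    groups, current = [], [points[0]]
    for p in points[1:]:
        if p - current[-1] < max_gap:
            current.append(p)
        else:
            groups.append(current)
            current = [p]
    groups.append(current)
    return groups

def pick_minima(groups, cde):
    """Keep one representative per group: the time point with the
    lowest CDE value."""
    return [min(g, key=lambda t: cde[t]) for g in groups]

# Toy CDE values at candidate minima (illustrative numbers)
cde = {41: 0.10, 46: 0.12, 48: 0.13, 140: 0.09, 147: 0.11}
groups = group_extrema(list(cde))
reps = pick_minima(groups, cde)
```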
The results demonstrate that the goodness of fit tends to flatten at $K=6$. To avoid empty communities, $K=6$ is then selected as the number of communities in local inference. Note that the value of $K$ is unchanged in Markov chain Monte Carlo estimation, but an empty community containing no labels may arise. In the remainder of this section, we used the model with $K=6$ for all brain states. The times spent to run the estimation of the latent label vector and model parameters for a single discrete brain state (MCMC sampling number $S_{s}$=200, $K$=6, and window size $W$=20) by using a 2.6 GHz Intel Core i7 processor unit were about 1.85 and 1.25 seconds, respectively. The inferred community structures are visualized using BrainNet Viewer [49] and Circos maps [50] as shown in Fig. 6. Estimated latent label vectors are visualized using different colors to represent different communities. The nodes are connected by weighted links at a sparsity level of 10% (we also visualized the brain states with sparsity levels of 20% and 30%: Supplementary Figures 7 and 8). The density and variation of connectivity within and between communities are characterized by the estimated block mean matrix and block variance matrix in Supplementary Figures 9 and 15. We first describe the working memory tasks involving the 2-back tool (Fig. 6a), 0-back tool (Fig. 6e), and fixation (Fig. 6c, f, i). The locations of fixation states are considered as the locations of the change-points at 107, 206, and 306 (we consider the fixation state as a transition buffer between two working memory blocks). We found that the connectivity between the inferior parietal lobule (node 9) and middle frontal gyrus (node 17), and the connectivity between the inferior parietal lobule (node 9) and supramarginal gyrus (node 34), are increased significantly in both 2-back and 0-back working memory compared to fixation. For 2-back face (Fig. 6d) and 0-back face (Fig.
6j), the connectivity between the inferior parietal lobule (node 9) and supramarginal gyrus (node 34) and the connectivity between the angular gyrus (node 1) and supramarginal gyrus (node 34) are increased in 2-back compared to 0-back and fixation. There is reduced connectivity between the lateral occipital cortex (node 12), occipital fusiform gyrus (node 20), and occipital pole (node 21) in 2-back and 0-back compared to fixation. There is reduced connectivity between the frontal pole (node 5) and inferior parietal lobule (node 10) only in 2-back. For task blocks with pictures of body parts (Fig. 6g and Fig. 6b), we found that the connectivity between the inferior parietal lobule (node 9) and middle frontal gyrus (node 17), and the connectivity between the inferior parietal lobule (node 9) and supramarginal gyrus (node 34), are increased significantly in both 2-back and 0-back working memory compared to fixation. The connectivity between the angular gyrus (node 1) and supramarginal gyrus (node 34) is increased in 2-back compared to 0-back and fixation. There is reduced connectivity between the lateral occipital cortex (node 12), occipital fusiform gyrus (node 20), and occipital pole (node 21) in 2-back and 0-back compared to fixation. Finally, we compare 2-back place (Fig. 6h), 0-back place (Fig. 6k), and fixation. We found that the connectivity between the lateral occipital cortex (node 12) and occipital pole (node 21), and the connectivity between the occipital fusiform gyrus (node 20) and occipital pole (node 21), are reduced in 2-back compared to 0-back and fixation. It is clear from Fig. 6 that nodes are clustered into communities with different connectivity densities within and between communities. The mean and variance of the connectivity within and between communities are reported as block mean and variance matrices in Fig. 6. We find that there are strong connections in communities $k$=3, 4, and 6 and weak connections in communities $k$=1, 2, and 5 for a majority of the states.
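The 10% sparsity level used for these visualizations — keeping only the strongest fraction of weighted links — can be sketched as a simple thresholding helper. This is an assumed implementation (absolute-value ranking of off-diagonal weights); the visualization tools may apply their own conventions.

```python
import numpy as np

def threshold_sparsity(A, sparsity=0.10):
    """Zero all but the strongest `sparsity` fraction of off-diagonal
    weights (by absolute value) in a symmetric adjacency matrix."""
    B = A.copy()
    np.fill_diagonal(B, 0.0)
    iu = np.triu_indices_from(B, k=1)
    w = np.abs(B[iu])
    k = max(1, int(round(sparsity * w.size)))
    cutoff = np.sort(w)[-k]            # weight of the k-th strongest edge
    B[np.abs(B) < cutoff] = 0.0
    return B

# Toy 4-node symmetric matrix: only the strongest edge (0.9) survives at 10%
A = np.array([[0.0, 0.9, 0.1, 0.2],
              [0.9, 0.0, 0.3, 0.05],
              [0.1, 0.3, 0.0, 0.4],
              [0.2, 0.05, 0.4, 0.0]])
B_thr = threshold_sparsity(A)
```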
The Circos map provides a different perspective on the community pattern of each brain state. We summarise the common community patterns for each working memory load and fixation in Table 2.

| Community | 2-back nodes | 0-back nodes | Fixation nodes |
|---|---|---|---|
| k=1 | | | |
| k=2 | 15, 30 | | 15, 31, 32 |
| k=3 | 16, 20 | 16, 20, 21 | 12, 16, 20, 21 |
| k=4 | 1, 9, 17, 34 | | 1, 9, 25 |
| k=5 | 11, 14 | | 3, 11, 14 |
| k=6 | 8, 19, 35 | 19 | 5, 8, 10, 19 |

Table 2: This table summarises the nodes commonly located in a specific community $k$ for all of the picture types in the working memory tasks.

## Discussion

We proposed a model-based method for identifying transitions and characterising brain states between two consecutive transitions. The transitions between brain states identified by the Bayesian change-point detection method exhibit appropriate lags compared to the external task demands. This indicates a significant difference between the temporal boundaries of external task demands and the transitions of latent brain states. We also estimated the community membership of brain regions that interact with each other to give rise to the brain states. Furthermore, we showed that the estimated patterns of community architectures show distinct networks for 2-back and 0-back working memory load and fixation. We first focus on the results of the brain states inferred from the WM-tfMRI data and discuss the estimated patterns of connectivity for different blocks of working memory tasks after local inference. We find that there are distinct connectivity differences between 2-back, 0-back, and fixation.
We first compare the working memory and the fixation conditions, with particular reference to the middle frontal gyrus (node 17) and inferior parietal lobule (node 9), which includes the angular gyrus (node 1) and supramarginal gyrus (node 34). The middle frontal gyrus is related to manipulation, distractor resistance, refreshing, selection for action and monitoring, and the inferior parietal lobule is related to focus recognition and long-term recollection [51]. In our results, we find that the connectivity between the middle frontal gyrus and inferior parietal lobule is increased in the working memory tasks compared to the fixation state. The connectivity between the lateral occipital cortex (node 12) and occipital fusiform gyrus (node 20) is strong and stable in fixation compared to the working memory tasks, and a higher working memory load may increase the instability of this connectivity. Regarding the difference between 2-back and 0-back working memory tasks, we focus on the angular gyrus and supramarginal gyrus. In our experimental results, we find that there is increased connectivity between the angular gyrus (node 1) and supramarginal gyrus (node 34) in 2-back compared to 0-back working memory task blocks. The angular gyrus is located in the posterior part of the inferior parietal lobule. The inferior parietal cortex, including the supramarginal gyrus and the angular gyrus, is part of a “bottom-up” attentional subsystem that mediates the automatic allocation of attention to task-relevant information [52]. Previous work has shown that activation of the inferior parietal lobe is involved in the shifting of attention towards particular stimuli [53]. The right inferior parietal lobule, including the angular gyrus, is related to maintaining attention and encoding salient events in the environment [54]. These findings are consistent with and support our results. Next, we focus on the methodology.
We introduced posterior predictive discrepancy (PPD), a novel method based on model fitness assessment combined with sliding window analysis, to detect change-points in various functional brain networks and to infer the dynamics when the brain changes state. Posterior predictive assessment is a method based on Bayesian model comparison. Other Bayesian model comparison methods, including Bayes factors [55, 56], the Bayesian information criterion (BIC) [57], and Kullback–Leibler divergence [58], are also widely used in mixture modelling. One advantage of posterior predictive assessment is that the computation for the assessment is a straightforward byproduct of the posterior sampling required in conventional Bayesian estimation. We defined a new criterion named cumulative discrepancy energy (CDE) to estimate the locations of these change-points or transitions. The main idea underlying this novel strategy is to recognize that the goodness of fit between the model and observation is reduced if there is a change-point located within the current sliding window (the sample data in the window can be considered as being generated from two latent brain network architectures in this case), resulting in a significant increase in CDE. We use overlapping, rectangular, sliding windows so that all of the time points are included. The dynamics of the brain states are not only induced by external stimuli, but also by latent mental processes, such as motivation, alertness, fatigue, and momentary lapses [17]. Crucially, directly using the temporal boundaries (onsets and durations) associated with predefined task conditions to infer the functional networks may not be sufficiently rigorous and accurate. The boundaries of the task demands do not coincide with the timing and duration of the latent brain states. The estimated change-points in our experiments are consistent with the working memory task demands but show a delay relative to the onsets of the task blocks or the mid-points of fixation blocks.
These results reflect the delay due to the haemodynamic response, as well as the delay between signal emission and reception when recording data with the fMRI scanner. The results of the task fMRI data analysis show that the change-point detection algorithm is sensitive to the choice of model. We found that a less complex model (with smaller $K$) for global fitting gave fewer false positives, so it had better change-point detection performance than models with larger $K$. Selecting a suitable window size $W$ is also very important for our method. Too small a window size results in too little information being extracted from the data within the window, causing the calculated CDE to fluctuate more, making it difficult to discriminate local maxima and local minima in the CDE score time series. Too large a window size (larger than the task block length) reduces the resolution at which the change-points can be distinguished. In the working memory task fMRI data set, the length of the task block is around 34 frames and the fixation is about 20 frames. Therefore, we made the window size at most 34 frames to ensure all potential change-points can be distinguished, and at least 20 frames to ensure the effectiveness of the posterior predictive assessment. In our experiments, we used window sizes of 22, 26, 30, and 34, which were all larger than the length of the fixation block. This means it was not possible to detect the two change-points at both ends of fixation blocks, so we consider the whole fixation block as a single change-point (i.e., a buffer between two task blocks). The latent block model provides a flexible approach to modeling and estimating the dynamical assignment of nodes to a community. Note that the latent block model was fitted to the adjacency matrix of each individual subject in global fitting, and was fitted to the group-averaged adjacency matrix in local fitting.
Different choices of $\bm{\pi}$ can generate different connection patterns in the adjacency matrix. The likelihood is Gaussian and the connectivity is weighted, both of which facilitate treating the correlation matrix as an observation, without losing much relevant information from the time series. We treat both the latent label vector and block model parameters as quantities to be estimated. Changes in community memberships of the nodes are reflected in changes in the latent labels, and changes in the densities and variations in functional connectivity are reflected in changes in the model parameters. Empirical fMRI datasets have no ground truth regarding the locations of latent transitions of the brain states and network architectures. Although the task data experiments include the timings of stimuli, the exact change-points between discrete latent brain states are uncertain. Here, we used the multivariate Gaussian model to generate synthetic data (ground truth) to validate our proposed algorithms by comparing ground truth to the estimated change-points and latent labels. With extensive experiments using synthetic data, we demonstrated the very high accuracy of our method. The multivariate Gaussian generative model can characterize the community patterns via determining the memberships of the elements in the covariance matrix, but it is still an unrealistic benchmark. In the future, we will integrate the clustering method into the dynamic causal modelling [59, 60] to simulate more biologically realistic synthetic data to validate the algorithm. There are still some limitations of the MCMC allocation sampler [2, 3] which we use to infer the latent label vectors. When Markov chains are generated by the MCMC algorithm, the latent label vectors typically get stuck in local modes. This is in part because the Gibbs moves in the allocation sampler only update one element of the latent label vector at a time. 
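The limitation just described — a Gibbs move resamples only one node's label at a time, conditioned on all others — can be made concrete with a small sketch. This is a generic single-site Gibbs update, assuming an unnormalized log posterior `log_post(z)` is available; the allocation sampler of [2, 3] has additional move types (e.g., M3) not shown here.

```python
import numpy as np

def gibbs_move(z, node, log_post, K, rng):
    """One Gibbs move of an allocation sampler: resample the label of a
    single node from its full conditional, holding all other labels
    fixed. Because only one coordinate changes per move, the chain can
    get stuck in local modes of the label posterior."""
    logp = np.empty(K)
    for k in range(K):
        z_prop = z.copy()
        z_prop[node] = k
        logp[k] = log_post(z_prop)      # unnormalized log posterior
    p = np.exp(logp - logp.max())       # stabilized softmax
    p /= p.sum()
    z[node] = rng.choice(K, p=p)
    return z

# Toy posterior strongly favoring label 1 for node 0
rng = np.random.default_rng(0)
z = np.zeros(4, dtype=int)
log_post = lambda zz: 100.0 if zz[0] == 1 else 0.0
z = gibbs_move(z, 0, log_post, K=3, rng=rng)
```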
Although the M3 move (see Supplementary Section 7 for details on the M3 move) updates multiple elements of the latent label vector, the update is conditional on the probability ratio of a single reassignment, which results in similar problems to the Gibbs move. Improving the MCMC allocation sampler so that it can jump between different local modes, without changing the value of $K$, is a topic worth exploring. Currently, we use an MCMC sampler with a Gibbs move and an M3 move for local inference as well, keeping $K$ constant. In future work, we will extend the sampler using an absorption/ejection move, which is capable of sampling $K$ along with latent labels directly from the posterior distribution. The label switching phenomenon (see Supplementary Section 2) does not happen frequently if the chain is stuck in a local mode. However, the estimated labels in the latent label vector do switch in some experiments. To correct for label switching, we permute the labels in a post-processing stage. In this paper, we treat the group-averaged adjacency matrix as an observation in local inference, which neglects variation between subjects [61]. In the future, we propose to use hierarchical Bayesian modeling to estimate the community architecture at the group-level. In the local inference, we will model the individual adjacency matrix using the latent block model, and infer the number of communities along with the latent label vectors via an absorption/ejection strategy. At the group-level, we will model the estimated number of communities of the subjects using a Poisson-Gamma conjugate pair and model the estimated latent label vectors using a Categorical-Dirichlet pair. The posterior distribution of the number of communities will be modeled using a Gamma distribution and the posterior distribution of the latent label vector will be modelled using a Dirichlet distribution. 
The estimated rate of the Poisson posterior distribution and the estimated label assignment probability matrix of the Dirichlet posterior distribution will characterize the brain networks at the group-level. The change-point detection method described in this paper can be applied to locate the relatively static brain states occurring in block-designed task fMRI data. In future work, we aim to apply the method to explore the dynamic characteristics of event-related task fMRI, where applying a sliding window approach may be difficult, as state changes will appear as brief pulses. We are also interested in applying a change-point detection algorithm to resting-state fMRI data, which is also challenging given that no stimulus timing is available and the switching between brain states is less distinct. ## Methods ### Working memory task fMRI data processing #### WM-tfMRI paradigms The original experiment involved a version of an N-back task, used to assess working memory/cognitive control. In the working memory task, each task block consisted of trials with pictures of faces, places, tools and body parts. A specific stimulus type was presented in each block within each run. In 2-back blocks, the subjects judged whether the current stimulus was the same as the stimulus presented two trials earlier (“two back”). In 0-back blocks, the subjects were given a target cue at the beginning of each task block, and judged whether any stimulus during that block was the same as the target cue. There were 405 frames (repetition time TR = 0.72 s) in the time course, with four blocks of 2-back working memory tasks (25 s each), four blocks of 0-back working memory tasks (25 s each) and four fixation blocks (15 s each). 
#### tfMRI data acquisition The whole brain echo-planar imaging (EPI) was acquired with a 32-channel head coil on a modified 3T Siemens Skyra with TR = 0.72 s, TE = 33.1 ms, flip angle = 52 degrees, BW = 2290 Hz/Px, in-plane FOV = $208\times 180$ mm, and 72 slices with isotropic voxels of 2 mm, using a multi-band acceleration factor of 8. Two runs of the tfMRI were acquired (one right-to-left, the other left-to-right). #### tfMRI data preprocessing The tfMRI data in HCP are minimally preprocessed, including gradient unwarping, motion correction, fieldmap-based EPI distortion correction, brain-boundary-based registration of EPI to the structural T1-weighted scan, non-linear (FNIRT) registration into MNI152 space, and grand-mean intensity normalization. The data analysis pipeline is based on FSL (FMRIB’s Software Library) [47]. Further smoothing is performed in both volume-based and grayordinate-based analyses, as detailed in the corresponding sections of [46]. #### GLM analysis The general linear model (GLM) analysis in this work comprises 1st-level (individual scan run), 2nd-level (combining multiple scan runs for an individual participant) and 3rd-level (group analysis across multiple participants) analyses [7, 8]. At 1st-level, fixed effects analyses are conducted to estimate the average effect size of runs within sessions, where the variation comprises only within-subject variance. At 2nd-level, we also use fixed effects analysis, averaging the two sessions within each individual. At 3rd-level, mixed effects analyses are conducted, with the subject effect size treated as random. The mean effect size is estimated across the population, and the between-subject variance is captured at the group level of the GLM. We can set up different contrasts to compare the activation with respect to memory load or stimulus type. 
#### Time series extraction We created spheres of binary masks with radius 6 mm (the center of each sphere corresponded to the coordinates of locally maximum z statistics, and the voxel locations of the centers were transferred from MNI coordinates in fsleyes) and extracted the eigen time series of 35 regions of interest from the 4-D functional images. We obtained 100 sets of time series from 100 unrelated subjects using the same masks. ### The framework of Bayesian change-point detection An overview of the Bayesian change-point detection framework is shown in Fig. 1a. We consider a collection of $N$ nodes $\\{v_{1},\cdots,v_{N}\\}$ representing brain regions for a single subject, and suppose that we observe a collection of $N$ time series $\textbf{Y}\in\Re^{N\times T}$ where $\textbf{Y}=(\textbf{y}_{1},\textbf{y}_{2},\cdots,\textbf{y}_{T})$, and $T$ is the number of time points. Different background colors represent different latent network community architectures. The nodes in the networks are assumed to be clustered into communities and the different colors of the nodes represent the different community memberships. A more detailed example of changes in network architectures with 16 nodes is shown in Fig. 1b, where the community memberships are defined as a latent label vector z and $K$ is the number of communities. A transition or change-point is defined as a time point at which the community structure changes. Correlations between time series suggest interactions between the corresponding brain regions; we therefore first process the time series to construct a sequence of graphs in which temporal correlations between time series are represented by an edge connecting the corresponding nodes. We apply a sliding window of width $W$ (even numbered) to the time series as shown in Fig. 1a. The sliding windows overlap and the centers of the windows are located at consecutive time points. 
Change-points may occur only at times $t\in\\{\frac{W}{2}+1,\cdots,T-\frac{W}{2}\\}$ where $\frac{W}{2}$ is a margin size used to avoid computational and statistical complications. The advantage of using overlapping windows is that we can potentially detect transitions in network architecture at any time during the time course (except within the margins). For each time point $t\in\\{\frac{W}{2}+1,\cdots,T-\frac{W}{2}\\}$, we define $\textbf{Y}_{t}=\\{\textbf{y}_{t-\frac{W}{2}},\cdots,\textbf{y}_{t},\cdots,\textbf{y}_{t+\frac{W}{2}-1}\\}$ as the data in the sliding window at time $t$ and calculate a sample correlation matrix $\textbf{x}_{t}$ within this window. We interpret this correlation matrix as a weighted adjacency matrix. Thus, for each $t$, we obtain a sample adjacency matrix $\textbf{x}_{t}$. Subsequently, instead of the time series Y, we use the sample adjacency matrix $\textbf{x}_{t}$ as the realized observation at time $t$. Fig. 1c provides a schematic illustrating the posterior predictive model fitness assessment. Specifically, we propose to use the Gaussian latent block model [3] to quantify the likelihood of a network, and the MCMC allocation sampler (with the Gibbs move and the M3 move) [2, 3] to infer a latent label vector z from a collapsed posterior distribution $p(\textbf{z}|\textbf{x},K)$ derived from this model. The model parameters $\bm{\pi}$ for each block are sampled from a posterior distribution $p(\bm{\pi}|\textbf{x},\textbf{z})$, conditional on the sampled latent label vector z. The proposed model fitness procedure draws parameters (both latent label vectors and model parameters) from posterior distributions and uses them to generate a replicated adjacency matrix $\textbf{x}^{rep}$. It then calculates a disagreement index to quantify the difference between the replicated adjacency matrix $\textbf{x}^{rep}$ and the realized adjacency matrix x. 
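The windowing step just described can be sketched in a few lines of plain Python. This is our own illustrative sketch, not the authors' code; the function names are ours, and we use 1-based window centres $t\in\\{\frac{W}{2}+1,\cdots,T-\frac{W}{2}\\}$ to match the notation above.

```python
import math

def correlation_matrix(window):
    """window: N lists of length W. Returns the N x N sample correlation matrix."""
    N = len(window)
    means = [sum(row) / len(row) for row in window]
    sds = [math.sqrt(sum((v - m) ** 2 for v in row))
           for row, m in zip(window, means)]
    corr = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            cov = sum((a - means[i]) * (b - means[j])
                      for a, b in zip(window[i], window[j]))
            corr[i][j] = cov / (sds[i] * sds[j]) if sds[i] * sds[j] > 0 else 0.0
    return corr

def sliding_adjacency_matrices(Y, W):
    """Y: N x T multivariate series; W: even window width.
    Yields (t, x_t) for each valid 1-based centre t in {W/2+1, ..., T-W/2},
    where x_t uses samples y_{t-W/2}, ..., y_{t+W/2-1}."""
    N, T = len(Y), len(Y[0])
    half = W // 2
    for t in range(half + 1, T - half + 1):
        window = [row[t - half - 1 : t + half - 1] for row in Y]
        yield t, correlation_matrix(window)
```

Each yielded `x_t` is the weighted adjacency matrix treated as the realized observation at time `t`.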
To evaluate model fitness, we use the parameter-dependent statistic PPDI, obtained by averaging the disagreement index. ### The latent block model The latent block model (LBM) [3] is a random process generating networks on a fixed number of nodes $N$. The model has an integer parameter $K$, representing the number of communities. Identifying a suitable value of $K$ is a model fitting problem that will be discussed in a later section; here we assume $K$ is given. A schematic of a latent block model is shown in the brown box on the right side of Fig. 1c. A defining feature of the model is that nodes are partitioned into $K$ communities, with interactions between nodes in the same community having a different (usually higher) probability than interactions between nodes in different communities. The latent block model first assigns the $N$ nodes to the $K$ communities, resulting in $K^{2}$ blocks, which are symmetric, and then generates edges with a probability determined by the community memberships. The diagonal blocks represent the connectivity within the communities and the off-diagonal blocks represent the connectivity between different communities. In this paper, we consider the edges between nodes to be weighted, so the model parameter matrix $\bm{\pi}$ consists of the means and variances that determine the connectivity in the blocks. We treat the correlation matrix as an observation, thus preserving more information from the BOLD time series than using binary edges. Given a sampled z, we can draw $\bm{\pi}$ from the posterior directly. For a mathematical formulation of the latent block model, see Supplementary Sections 3.1 and 3.2. Methods for sampling the latent vector z will be discussed in later sections. ### Sampling from the posterior The posterior predictive method we outline below involves sampling parameters from the posterior distribution. The sampled parameters are the latent label vector z and the model parameter matrix $\bm{\pi}$. 
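As an illustration of the Gaussian latent block model's generative step (a minimal sketch of our own, not the authors' implementation), each off-diagonal edge weight is drawn from the Gaussian of its block, determined by the community labels of its endpoints; here we fix the diagonal at 1 to mirror a correlation-style adjacency matrix:

```python
import random

def draw_adjacency(z, mean, var, seed=0):
    """z: list of N community labels in {0, ..., K-1};
    mean, var: K x K block parameter matrices (the entries of pi).
    Returns a symmetric N x N weighted adjacency matrix."""
    rng = random.Random(seed)
    N = len(z)
    x = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i, N):
            if i == j:
                x[i][i] = 1.0  # simplification: unit self-correlation
            else:
                k, l = z[i], z[j]
                w = rng.gauss(mean[k][l], var[k][l] ** 0.5)
                x[i][j] = x[j][i] = w
    return x
```

The same step, with (z, pi) drawn from the posterior, is what later produces a replicated adjacency matrix $\textbf{x}^{rep}$.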
There are several methods in the literature for estimating the latent labels and model parameters of a latent block model. One method evaluated the model parameters by point estimation but treated the latent labels in z as having a distribution [64], making this approach similar to an EM algorithm. Another method used point estimation for both the model parameters and the latent labels [65]. We sample the latent label vector z from the collapsed posterior $p(\textbf{z}|\textbf{x},K)$ (see the detailed derivation of $p(\textbf{z}|\textbf{x},K)$ in Supplementary Section 3.3). We use the Markov chain Monte Carlo (MCMC) [6] method to sample the latent label vector from the posterior, using Gibbs moves and M3 moves [2] to update z. The details of the MCMC allocation sampler and its computational complexity are given in Supplementary Section 3.4. After sampling the latent label vector z, we then separately sample $\bm{\pi}$ from the density $p(\bm{\pi}|\textbf{x},\textbf{z})$ (see Supplementary Section 3.2 for details). ### Model fitting #### Global fitting Global fitting uses a model with a constant number of communities $K$ to fit consecutive individual adjacency matrices within a sliding window, for all time frames. For global fitting, we consider $K$ in our latent block model to be fixed over the time course. We detect the change-points based on Bayesian model comparison using posterior predictive discrepancy, which does not determine whether the model is ‘true’ or not, but rather quantifies the preference for the model given the data. One can imagine the model as a moving ruler under the sliding window, and the observation at each time step as the object to be measured. The discrepancy increases significantly if there is a change-point located within the window. Although $K$ is constant in global fitting, different values of $K$ can be used if we select different models. The evaluation of $K$ can be considered as a Bayesian model comparison problem. 
We repeat the inference with different values of $K$ and compare the change-point detection performance to identify an appropriate value for $K$. #### Local fitting Local fitting involves first selecting a model (i.e., choosing a value of $K$) that best fits the group averaged adjacency matrix for a discrete brain state. Subsequently, the data between change-points is used to estimate the community membership that constitutes that discrete brain state. We treat $K$ as constant for this local inference. The number of communities $K$ can potentially be inferred using the absorption/ejection move [2] in the allocation sampler, an innovation that will be explored in future research. ### Posterior predictive discrepancy Given inferred values of z and $\bm{\pi}$ under the model $K$, one can draw a replicated adjacency matrix $\textbf{x}^{rep}$ from the predictive distribution $P(\textbf{x}^{rep}|\textbf{z},\bm{\pi},K)$ as shown in Fig. 1c. Note that the realized adjacency matrix (i.e., an observation) and the replicated adjacency matrix are conditionally independent, $P(\textbf{x},\textbf{x}^{rep}|\textbf{z},\bm{\pi},K)=P(\textbf{x}^{rep}|\textbf{z},\bm{\pi},K)P(\textbf{x}|\textbf{z},\bm{\pi},K).$ (0.1) Multiplying both sides of this equality by $P(\textbf{z},\bm{\pi}|\textbf{x},K)/P(\textbf{x}|\textbf{z},\bm{\pi},K)$ gives $P(\textbf{x}^{rep},\textbf{z},\bm{\pi}|\textbf{x},K)=P(\textbf{x}^{rep}|\textbf{z},\bm{\pi},K)P(\textbf{z},\bm{\pi}|\textbf{x},K).$ (0.2) Here we use a replicated adjacency matrix in the context of posterior predictive assessment [67] to evaluate the fitness of a posited latent block model to a realized adjacency matrix. We generate a replicated adjacency matrix by first drawing samples (z, $\bm{\pi}$) from the joint posterior $P(\textbf{z},\bm{\pi}|\textbf{x},K)$. 
Specifically, we sample the latent label vector z from $p(\textbf{z}|\textbf{x},K)$ and the model parameter $\bm{\pi}$ from $p(\bm{\pi}|\textbf{x},\textbf{z})$, and then draw a replicated adjacency matrix from $P(\textbf{x}^{rep}|\textbf{z},\bm{\pi},K)$. We compute a discrepancy function to assess the averaged difference between the replicated adjacency matrix $\textbf{x}^{rep}$ and the realized adjacency matrix x, as a measure of model fitness. In [67], the $\chi^{2}$ function is used as the discrepancy measure, where the observation is considered as a vector. However, in the latent block model, the observation is a weighted adjacency matrix and the sizes of the sub-matrices can vary. In this paper, we propose a new discrepancy index to compare the adjacency matrices $\textbf{x}^{rep}$ and x. We define a disagreement index to evaluate the difference between the realized adjacency matrix and the replicated adjacency matrix. This disagreement index is denoted $\gamma(\textbf{x}^{rep};\textbf{x})$ and can be considered as a parameter-dependent statistic. In mathematical notation, the disagreement index $\gamma$ is defined as $\gamma(\textbf{x}^{rep};\textbf{x})=\frac{\sum_{i=1,j=1}^{N}{|\textbf{x}_{ij}-\textbf{x}_{ij}^{rep}|}}{N^{2}}.$ (0.3) For the evaluation of model fitness, we generate $S$ replicated adjacency matrices and define the posterior predictive discrepancy index (PPDI) $\overline{\gamma}$ as follows: $\overline{\gamma}=\frac{\sum_{i=1}^{S}\gamma(\textbf{x}^{rep^{i}};\textbf{x})}{S}.$ (0.4) The computational cost of the posterior predictive discrepancy procedure in our method depends mainly on two aspects. The first is the iterated Gibbs and M3 moves used to update the latent label vectors. The computational cost of these moves has been discussed in previous sections. The second aspect is the number of replications $S$ needed for the predictive process. 
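Equations (0.3) and (0.4) translate directly into code. The following sketch (function names ours) assumes adjacency matrices are stored as nested lists:

```python
def disagreement(x_rep, x):
    """Eq. (0.3): mean absolute difference between the replicated and
    realized N x N adjacency matrices."""
    N = len(x)
    total = sum(abs(x[i][j] - x_rep[i][j])
                for i in range(N) for j in range(N))
    return total / N ** 2

def ppdi(x_reps, x):
    """Eq. (0.4): posterior predictive discrepancy index, i.e. the
    disagreement index averaged over S replicated matrices."""
    return sum(disagreement(xr, x) for xr in x_reps) / len(x_reps)
```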
Posterior predictive assessment is not sensitive to the replication number $S$, but $S$ linearly impacts the computational cost; that is, the computational complexity of the model fitness assessment is $O(S)$. There is thus a natural trade-off between increasing the replication number and limiting the computational cost. ### Cumulative discrepancy energy Our proposed strategy to detect network community change-points is to assess the fitness of a latent block model by computing the posterior predictive discrepancy index (PPDI) $\overline{\gamma}_{t}$ for each $t\in\\{\frac{W}{2}+1,\cdots,T-\frac{W}{2}\\}$. The key insight here is that the fitness of the model is relatively worse when there is a change-point within the window used to compute $\textbf{x}_{t}$. If there is a change-point within the window, the data observed in the left and right segments are generated by different network architectures, resulting in poor model fit and a correspondingly high posterior predictive discrepancy index. In practice, we find that the PPDI fluctuates considerably. To identify the most plausible position of a change-point, we use another window, of size $W_{s}$, to accumulate the PPDI time series. We obtain the cumulative discrepancy energy (CDE) $E_{t}$, given by $E_{t}=\sum_{i=t-\frac{W_{s}}{2}}^{t+\frac{W_{s}}{2}-1}\overline{\gamma}_{i}.$ (0.5) We take the locations of change-points to be the local maxima of the cumulative discrepancy energy, where those maxima rise sufficiently high above the surrounding sequence. The change-point detection algorithm is summarized in Supplementary Section 8. Note that the posterior predictive discrepancy index and cumulative discrepancy energy for change-point detection are calculated under the conditions of global fitting. For group analysis, we average CDEs across subjects to obtain the Group CDE. After discarding false positives, the change-points are taken to be the local maxima, and the discrete states are inferred at the local minima. 
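Equation (0.5) is a moving sum of the PPDI series. A minimal sketch (our own, with 0-based indexing, and omitting the prominence check used to discard maxima that do not rise sufficiently above the surrounding sequence):

```python
def cumulative_discrepancy_energy(ppdi_series, Ws):
    """Eq. (0.5): moving sum of the PPDI over a window of even width Ws.
    ppdi_series is indexed 0-based; returns E_t for the valid centres,
    each summing Ws consecutive PPDI values."""
    half = Ws // 2
    T = len(ppdi_series)
    return [sum(ppdi_series[t - half : t + half])
            for t in range(half, T - half + 1)]

def local_maxima(E):
    """Candidate change-points: strict interior local maxima of the CDE.
    (The prominence thresholding described in the text is omitted here.)"""
    return [t for t in range(1, len(E) - 1) if E[t - 1] < E[t] > E[t + 1]]
```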
### Local inference We estimate the community structure of brain states via local inference. For local inference, we first calculate the group-averaged adjacency matrix of the 100 subjects using the data between two estimated change-points and treat this as an observation of each discrete brain state; we then use local fitting to select a value of $K$ and apply the latent block model for Bayesian estimation of the community structure of each brain state. ### Code availability The code for GLM analysis (Shell script), Bayesian change-point detection (MATLAB), and brain network visualization (MATLAB, Perl) is available at: https://github.com/LingbinBian/BCPD1.0. ## References * [1] Morten L. Kringelbach and Gustavo Deco. Brain states and transitions: Insights from computational neuroscience. Cell Reports, 32(10):108128, 2020. * [2] Daniel J. Lurie, Daniel Kessler, Danielle S. Bassett, Richard F. Betzel, Michael Breakspear, Shella Keilholz, Aaron Kucyi, Raphaël Liégeois, Martin A. Lindquist, Anthony Randal McIntosh, Russell A. Poldrack, James M. Shine, William Hedley Thompson, Natalia Z. Bielczyk, Linda Douw, Dominik Kraft, Robyn L. Miller, Muthuraman Muthuraman, Lorenzo Pasquini, Adeel Razi, Diego Vidaurre, Hua Xie, and Vince D. Calhoun. Questions and controversies in the study of time-varying functional connectivity in resting fMRI. Network Neuroscience, 4(1):30–69, 2020. * [3] Adeel Razi and Karl J. Friston. The connected brain: Causality, models, and intrinsic dynamics. IEEE Signal Processing Magazine, 33(3):14–35, 2016. * [4] Adeel Razi, Mohamed L. Seghier, Yuan Zhou, Peter McColgan, Peter Zeidman, Hae-Jeong Park, Olaf Sporns, Geraint Rees, and Karl J. Friston. Large-scale DCMs for resting-state fMRI. Network Neuroscience, 1(3):222–241, 2017. * [5] Adeel Razi, Joshua Kahan, Geraint Rees, and Karl J. Friston. Construct validation of a DCM for resting state fMRI. NeuroImage, 106:1–14, 2015. * [6] Karl J. Friston, Joshua Kahan, Bharat Biswal, and Adeel Razi. 
A DCM for resting state fMRI. NeuroImage, 94:396–407, 2014. * [7] Karl J. Friston, Erik D. Fagerholm, Tahereh S. Zarghami, Thomas Parr, Inês Hipólito, Loïc Magrou, and Adeel Razi. Parcels and particles: Markov blankets in the brain. Network Neuroscience, 0(ja):1–76, 2021. * [8] Elena A. Allen, Eswar Damaraju, Sergey M. Plis, Erik B. Erhardt, Tom Eichele, and Vince D. Calhoun. Tracking whole-brain connectivity dynamics in the resting state. Cerebral Cortex, 24:663–676, 2014. * [9] R. Matthew Hutchison, Thilo Womelsdorf, Elena A. Allen, Peter A. Bandettini, Vince D. Calhoun, Maurizio Corbetta, Stefania Della Penna, Jeff H. Duyn, Gary H. Glover, Javier Gonzalez-Castillo, Daniel A. Handwerker, Shella Keilholz, Vesa Kiviniemi, David A. Leopold, Francesco De Pasquale, Olaf Sporns, Martin Walter, and Catie Chang. Dynamic functional connectivity: Promise, issues, and interpretations. NeuroImage, 80:360–378, 2013. * [10] Vince D. Calhoun, Robyn Miller, Godfrey Pearlson, and Tulay Adali. The chronnectome: Time-varying connectivity networks as the next frontier in fMRI data discovery. Neuron, 84(2):262–274, 2014. * [11] Jonathan D. Power, Mark Plitt, Timothy O. Laumann, and Alex Martin. Sources and implications of whole-brain fMRI signals in humans. NeuroImage, 146:609–625, 2017. * [12] Linden Parkes, Ben Fulcher, Murat Yücel, and Alex Fornito. An evaluation of the efficacy, reliability, and sensitivity of motion correction strategies for resting-state functional MRI. NeuroImage, 171:415–436, 2018. * [13] Kevin M. Aquino, Ben D. Fulcher, Linden Parkes, Kristina Sabaroedin, and Alex Fornito. Identifying and removing widespread signal deflections from fMRI data: Rethinking the global signal regression problem. NeuroImage, 212:116614, 2020. * [14] Johan N. van der Meer, Michael Breakspear, Luke J. Chang, Saurabh Sonkusare, and Luca Cocchi. Movie viewing elicits rich and reliable brain state dynamics. 
Nature Communications, 11(1):1–14, 2020. * [15] Danielle S. Bassett, Nicholas F. Wymbs, Mason A. Porter, Peter J. Mucha, Jean M. Carlson, and Scott T. Grafton. Dynamic reconfiguration of human brain networks during learning. PNAS, 108(18):7641–7646, 2011. * [16] Ivor Cribben, Ragnheidur Haraldsdottir, Lauren Y. Atlas, Tor D. Wager, and Martin A. Lindquist. Dynamic connectivity regression: determining state-related changes in brain connectivity. NeuroImage, 61:907–920, 2012. * [17] Jalil Taghia, Weidong Cai, Srikanth Ryali, John Kochalka, Jonathan Nicholas, Tianwen Chen, and Vinod Menon. Uncovering hidden brain state dynamics that regulate performance and decision-making during cognition. Nature Communications, 9(1), 2018. * [18] Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395–416, 2007. * [19] Ivor Cribben and Yi Yu. Estimating whole-brain dynamics by using spectral clustering. Journal of the Royal Statistical Society. Series C (Applied Statistics), 66:607–627, 2017. * [20] Anna Louise Schröder and Hernando Ombao. FreSpeD: frequency-specific change-point detection in epileptic seizure multi-channel EEG data. Journal of the American Statistical Association, 114(525):115–128, 2019. * [21] Klaus Frick, Axel Munk, and Hannes Sieling. Multiscale change point inference (with discussion). Journal of the Royal Statistical Society. Series B (Statistical Methodology), 76:495–580, 2014. * [22] Haeran Cho and Piotr Fryzlewicz. Multiple-change-point detection for high dimensional time series via sparsified binary segmentation. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 77:475–507, 2015. * [23] Tengyao Wang and Richard J. Samworth. High-dimensional change point estimation via sparse projection. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 80(1):57–83, 2017. * [24] Hae-Jeong Park, Karl J. Friston, Chongwon Pae, Bumhee Park, and Adeel Razi. 
Dynamic effective connectivity in resting state fMRI. NeuroImage, 180:594–608, 2018. * [25] Catie Chang and Gary H. Glover. Time-frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage, 50:81–98, 2010. * [26] Daniel A. Handwerker, Vinai Roopchansingh, Javier Gonzalez-Castillo, and Peter A. Bandettini. Periodic changes in fMRI connectivity. NeuroImage, 63:1712–1719, 2012. * [27] Andrew Zalesky, Alex Fornito, Luca Cocchi, Leonardo L. Gollo, and Michael Breakspear. Time-resolved resting-state brain networks. PNAS, 111(28):10341–10346, 2014. * [28] Ricardo Pio Monti, Peter Hellyer, David Sharp, Robert Leech, Christoforos Anagnostopoulos, and Giovanni Montana. Estimating time-varying brain connectivity networks from functional MRI time series. NeuroImage, 103:427–443, 2014. * [29] Seok-Oh Jeong, Chongwon Pae, and Hae-Jeong Park. Connectivity-based change point detection for large-size functional networks. NeuroImage, 143:353–363, 2016. * [30] Diego Vidaurre, Andrew J. Quinn, Adam P. Baker, David Dupret, Alvaro Tejero-Cantero, and Mark W. Woolrich. Spectrally resolved fast transient brain states in electrophysiological data. NeuroImage, 126:81–95, 2016. * [31] Diego Vidaurre, Stephen M. Smith, and Mark W. Woolrich. Brain network dynamics are hierarchically organized in time. PNAS, 114(48):12827–12832, 2017. * [32] Diego Vidaurre, Romesh Abeysuriya, Robert Becker, Andrew J. Quinn, Fidel Alfaro-Almagro, Stephen M. Smith, and Mark W. Woolrich. Discovering dynamic brain networks from big data in rest and task. NeuroImage, 180:646–656, 2018. * [33] Y. X. Rachel Wang and Peter J. Bickel. Likelihood-based model selection for stochastic block models. The Annals of Statistics, 45(2):500–528, 2017. * [34] Jiashun Jin. Fast community detection by SCORE. The Annals of Statistics, 43(1):57–89, 2015. * [35] M. E. J. Newman. Modularity and community structure in networks. PNAS, 103(23):8577–8582, 2006. * [36] Danielle S. 
Bassett, Mason A. Porter, Nicholas F. Wymbs, Scott T. Grafton, Jean M. Carlson, and Peter J. Mucha. Robust detection of dynamic community structure in networks. CHAOS, 23:13142, 2013. * [37] Lucy F. Robinson, Lauren Y. Atlas, and Tor D. Wager. Dynamic functional connectivity using state-based dynamic community structure: Method and application to opioid analgesia. NeuroImage, 108:274–291, 2015. * [38] Chee-Ming Ting, S. Balqis Samdin, Meini Tang, and Hernando Ombao. Detecting dynamic community structure in functional brain networks across individuals: A multilayer approach. 2020. * [39] Christopher Aicher, Abigail Z. Jacobs, and Aaron Clauset. Learning latent block structure in weighted networks. Journal of Complex Networks, 3(2):221–248, 2015. * [40] Joshua Faskowitz, Xiaoran Yan, Xi-Nian Zuo, and Olaf Sporns. Weighted stochastic block models of the human connectome across the life span. Scientific Reports, 8:1–16, 2018. * [41] Richard F. Betzel, John D. Medaglia, and Danielle S. Bassett. Diversity of meso-scale architecture in human and non-human connectomes. Nature Communications, 9(1), 2018. * [42] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347, 2013. * [43] David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017. * [44] Agostino Nobile and Alastair T. Fearnside. Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing, 17:147–162, 2007. * [45] Jason Wyse and Nial Friel. Block clustering with collapsed latent block models. Statistics and Computing, 22:415–428, 2012. * [46] Deanna M. Barch, Gregory C. Burgess, Michael P. Harms, Steven E. Petersen, Bradley L. Schlaggar, Maurizio Corbetta, Matthew F. 
Glasser, Sandra Curtiss, Sachin Dixit, Cindy Feldt, Dan Nolan, Edward Bryant, Tucker Hartley, Owen Footer, James M. Bjork, Russ Poldrack, Steve Smith, Heidi Johansen-Berg, Abraham Z. Snyder, David C. Van Essen, and for the WU-Minn HCP Consortium. Function in the human connectome: Task-fMRI and individual differences in behavior. NeuroImage, 80:169–189, 2013. * [47] Stephen M. Smith, Mark Jenkinson, Mark W. Woolrich, Christian F. Beckmann, Timothy E.J. Behrens, Heidi Johansen-Berg, Peter R. Bannister, Marilena De Luca, Ivana Drobnjak, David E. Flitney, Rami K. Niazy, James Saunders, John Vickers, Yongyue Zhang, Nicola De Stefano, J. Michael Brady, and Paul M. Matthews. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage, 23:S208–S219, 2004. * [48] Matthew Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 62(4):795–809, 2000. * [49] Mingrui Xia, Jinhui Wang, and Yong He. BrainNet Viewer: A network visualization tool for human brain connectomics. PLoS ONE, 8(7), 2013. * [50] Martin Krzywinski, Jacqueline Schein, İnanç Birol, Joseph Connors, Randy Gascoyne, Doug Horsman, Steven J. Jones, and Marco A. Marra. Circos: An information aesthetic for comparative genomics. Genome Research, 19:1639–1645, 2009. * [51] Derek Evan Nee, Joshua W. Brown, Mary K. Askren, Marc G. Berman, Emre Demiralp, Adam Krawitz, and John Jonides. A meta-analysis of executive components of working memory. Cerebral Cortex, 23(2):264–282, 2013. * [52] Mohamed L. Seghier. The angular gyrus: Multiple functions and multiple subdivisions. Neuroscientist, 19(1):43–61, 2013. * [53] Jacqueline Gottlieb. From thought to action: The parietal cortex as a bridge between perception, action, and cognition. Neuron, 53(1):9–16, 2007. * [54] Victoria Singh-Curry and Masud Husain. The functional role of the inferior parietal lobe in the dorsal and ventral stream dichotomy. 
Neuropsychologia, 47(6):1434–1448, 2009. * [55] Robert E. Kass and Adrian E. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773–795, 1995. * [56] Mike West. Bayesian Model Monitoring. Journal of the Royal Statistical Society. Series B (Methodological), 48(1):70–78, 1986. * [57] Andrew A. Neath and Joseph E. Cavanaugh. The Bayesian information criterion: Background, derivation, and applications. Wiley Interdisciplinary Reviews: Computational Statistics, 4(2):199–203, 2012. * [58] Solomon Kullback and Richard Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, 1951. * [59] K.J. Friston, L. Harrison, and W. Penny. Dynamic Causal Modelling. Human Brain Function: Second Edition, 0:1063–1090, 2003. * [60] Karl J. Friston, Katrin H. Preller, Chris Mathys, Hayriye Cagnan, Jakob Heinzle, Adeel Razi, and Peter Zeidman. Dynamic causal modelling revisited. NeuroImage, 199:730 – 744, 2019. * [61] Karl J. Friston, Vladimir Litvak, Ashwini Oswal, Adeel Razi, Klaas E. Stephan, Bernadette C.M. van Wijk, Gabriel Ziegler, and Peter Zeidman. Bayesian model reduction and empirical bayes for group (dcm) studies. NeuroImage, 128:413 – 431, 2016. * [62] Mark W. Woolrich, Brian D. Ripley, Michael Brady, and Stephen M. Smith. Temporal autocorrelation in univariate linear modeling of FMRI data. NeuroImage, 14(6):1370–1386, 2001. * [63] Mark W. Woolrich, Timothy E.J. Behrens, Christian F. Beckmann, Mark Jenkinson, and Stephen M. Smith. Multilevel linear modelling for FMRI group analysis using Bayesian inference. NeuroImage, 21(4):1732–1747, 2004. * [64] J.-J. Daudin, F. Picard, and S. Robin. A mixture model for random graphs. Statistics and Computing, 18:173–183, 2008. * [65] Hugo Zanghi, Christophe Ambroise, and Vincent Miele. Fast online graph clustering via Erdös-Rényi mixture. Pattern Recognition, 41:3592–3599, 2008. * [66] W.K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. 
Biometrika, 57(1):97–109, 1970. * [67] Andrew Gelman, Xiao-Li Meng, and Hal Stern. Posterior predictive assessment of model fitness via realized discrepancies. Statistica Sinica, 6(4):733–760, 1996. Supplementary information ## 1 Generative model, synthetic data and parameter settings To validate our Bayesian change-point detection algorithm, we use the multivariate Gaussian generative model to simulate synthetic data. Specifically, we generate $D$ segments of Gaussian time series from $D$ different network architectures. The synthetic data contain the ground truth of $D-1$ change-points over the time course. The positions of the true change-points are denoted as a row vector $\textbf{p}=[p_{1},\cdots,p_{D-1}]$. Within each of the $D$ segments, we suppose nodes are assigned to $K^{true}$ communities, the value of which differs between segments. The true numbers of communities in the segments can be denoted as a vector $\textbf{K}^{true}=[K_{1}^{true},\cdots,K_{D}^{true}]$. We set the label vectors that determine the form of the covariance matrices in the generative model to be $\\{\textbf{z}_{1},\textbf{z}_{2},\cdots,\textbf{z}_{D}\\}$. These label vectors are generated using the Dirichlet-Categorical conjugate pair. The component weights $\\{\textbf{r}_{1},\textbf{r}_{2},\cdots,\textbf{r}_{D}\\}$ are first drawn from a uniform distribution on the $\textbf{K}^{true}$ simplex and then nodes are assigned to the communities by drawing from the corresponding Categorical distributions. 
Time series data in $\Re^{N}$ are then simulated from $Y=f(\textbf{z},a,b)+\bm{\epsilon}$ (1.1) for $t=1,\cdots,T$ by drawing $f(\textbf{z},a,b)\sim\mathcal{N}(\textbf{0},\bm{\Sigma}(\textbf{z},a,b))$, with $\Sigma_{ij}=\begin{cases}1,&\mbox{if}\ i=j\\ a,&\mbox{if}\ i\neq j\ \mbox{and}\ z_{i}=z_{j}\\ b,&\mbox{if}\ i\neq j\ \mbox{and}\ z_{i}\neq z_{j}\end{cases}$ (1.2) where $a\sim U(0.8,1)$ and $b\sim U(0,0.2)$ are uniformly distributed, and $\bm{\epsilon}\sim\mathcal{N}(\textbf{0},\sigma^{2}\textbf{I})$ is additive Gaussian noise. The resulting covariance matrices for the $D$ segments are denoted as $\\{\bm{\Sigma}_{1},\bm{\Sigma}_{2},\cdots,\bm{\Sigma}_{D}\\}$. The simulated data $\textbf{Y}\in\Re^{N\times T}$ can be separated into $D$ segments $\\{\textbf{Y}_{1},\textbf{Y}_{2},\cdots,\textbf{Y}_{D}\\}$. For validation, we first generate 100 instances (as virtual subjects) of synthetic multivariate time series for a network with $N=35$ nodes and $T=180$ time points to mimic the scenario of the real data. We set the true change-points at $\\{20,50,80,100,130,160\\}$ and the numbers of communities in the segments to be $\\{3,4,5,3,5,4,3\\}$. Here we define the signal-to-noise ratio (SNR) as $\frac{\Sigma_{ii}}{\sigma^{2}}$, and set different values of $\sigma$ to control the SNR ($\sigma=0.3162$ for SNR = 10dB, $\sigma=0.5623$ for SNR = 5dB, $\sigma=1$ for SNR = 0dB, and $\sigma=1.7783$ for SNR = -5dB). For global fitting, the number of posterior predictive replications is set to $S=50$ for all of our experiments. For local inference, we draw $S_{s}=200$ samples from the posterior densities for both the latent label vectors and the model parameters. We set the prior to be $\mbox{NIG}(\xi,\kappa^{2}\sigma_{kl}^{2},\nu/2,\rho/2)$ with $\xi=0$, $\kappa^{2}=1$, $\nu=3$ and $\rho=0.02$, which is non-informative.
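The segment simulation can be sketched as follows. The factor construction used here (a shared factor weighted by $\sqrt{b}$, a per-community factor weighted by $\sqrt{a-b}$, and an idiosyncratic term weighted by $\sqrt{1-a}$) is our own exact sampler for $\bm{\Sigma}(\textbf{z},a,b)$ when $0\leq b\leq a\leq 1$, which the ranges $a\in(0.8,1)$ and $b\in(0,0.2)$ guarantee; the function name is hypothetical:

```python
import math
import random

def simulate_segment(z, T, a, b, sigma, rng=random):
    """Simulate T time points of N-dimensional Gaussian data whose covariance
    follows Eq. (1.2): unit variances, within-community correlation a,
    between-community correlation b, plus N(0, sigma^2) additive noise.
    Each coordinate is sqrt(b)*g0 + sqrt(a-b)*g_k + sqrt(1-a)*e_i, which
    reproduces Sigma(z, a, b) exactly for 0 <= b <= a <= 1."""
    communities = sorted(set(z))
    Y = []
    for _ in range(T):
        g0 = rng.gauss(0.0, 1.0)  # shared (between-community) factor
        gk = {k: rng.gauss(0.0, 1.0) for k in communities}  # community factors
        col = [math.sqrt(b) * g0
               + math.sqrt(a - b) * gk[zi]
               + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)  # idiosyncratic part
               + rng.gauss(0.0, sigma)                     # noise epsilon
               for zi in z]
        Y.append(col)
    return Y  # T x N; transpose for the paper's N x T convention
```
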
## 2 Label switching

For the latent block model, we set $\alpha_{k}=1$ for $k=1,\cdots,K$, and constant values of $\xi$, $\kappa^{2}$, $\nu$ and $\rho$ for all of the blocks $kl$, so the prior is symmetric with respect to permutations of the community labels. Permutations of the community labels do not change the likelihood, which means the distributions with respect to blocks are not identifiable, and consequently the posterior is also invariant to permutations of the community labels. In the Markov chain, the labels of the latent label vector therefore switch occasionally: this effect is known as the label switching phenomenon [1, 2, 3]. For global fitting, label switching does not affect the results of posterior predictive discrepancy. For local inference, however, we need to assign the labels to the communities unequivocally in order to estimate the memberships of the nodes. We define a distance measuring the coordinate-wise disagreement between two latent label vectors z and $\textbf{z}^{\prime}$, $D(\textbf{z},\textbf{z}^{\prime})=\sum_{i=1}^{N}I(z_{i}\neq z^{\prime}_{i}),$ (2.1) where $I$ is the indicator function. We define $\bm{\sigma}=\\{\sigma(1),\cdots,\sigma(k),\cdots,\sigma(K)\\}$ (2.2) as a permutation of a labelling $\\{1,\cdots,k,\cdots,K\\}$. Let $\textbf{Q}=\\{\textbf{z}^{j}(\bm{\sigma}^{j}),j=1,\cdots,J\\}$ be a collection of latent label vectors under a sequence of permutations $\\{\bm{\sigma}^{j},j=1,\cdots,J\\}$. We want to minimize the sum of all pairwise distances between the vectors, $\sum_{j=1}^{J-1}\sum_{l=j+1}^{J}D(\textbf{z}^{j}(\bm{\sigma}^{j}),\textbf{z}^{l}(\bm{\sigma}^{l})).$ (2.3) This minimization can be treated as a sequence of square assignment problems.
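For intuition, the sequential relabelling can be sketched as below, with a brute-force search over the $K!$ permutations standing in for the square assignment algorithm (the helper is illustrative and only practical for small $K$):

```python
from itertools import permutations

def relabel_sequence(Z, K):
    """Sequentially relabel a list of label vectors so that community labels
    are consistent across MCMC samples. For each vector, search all K!
    permutations for the one minimizing the total Hamming distance D to the
    vectors already processed."""
    def dist(u, v):
        return sum(ui != vi for ui, vi in zip(u, v))

    aligned = [list(Z[0])]
    for zj in Z[1:]:
        best, best_cost = None, None
        for perm in permutations(range(1, K + 1)):
            mapping = {k: perm[k - 1] for k in range(1, K + 1)}
            cand = [mapping[c] for c in zj]
            cost = sum(dist(cand, zt) for zt in aligned)
            if best_cost is None or cost < best_cost:
                best, best_cost = cand, cost
        aligned.append(best)
    return aligned
```

For example, the vector (2, 2, 3, 3, 1) is a relabelling of (1, 1, 2, 2, 3) and is mapped back onto it at zero cost.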
For each vector $\textbf{z}^{j}$, given the vectors $\\{\textbf{z}^{t},t=1,\cdots,j-1\\}$ that have already been processed (relabelled), we define the elements of a cost matrix $C(k_{1},k_{2})=\sum_{t=1}^{j-1}\sum_{i=1}^{N}I(z_{i}^{t}\neq k_{1},z_{i}^{j}=k_{2}).$ (2.4) We use the square assignment algorithm [4], which returns a permutation $\bm{\sigma}^{j}$ minimizing the total cost $\sum_{k=1}^{K}C(k,\sigma(k))$ for each $\textbf{z}^{j}$. Finally, we permute the labels in the vector $\textbf{z}^{j}$ according to $\bm{\sigma}^{j}$.

## 3 Bayesian Modelling for functional connectivity

### 3.1 Clustering with latent block model

Mathematically, we denote the community memberships (also called the latent labels) of the nodes as a vector $\textbf{z}=(z_{1},\ldots,z_{N})$ such that $z_{i}\in\\{1,\cdots,K\\}$ denotes the community containing node $i$. Each $z_{i}$ independently follows the categorical (one-trial multinomial) distribution: $z_{i}\sim\mbox{ Categorical}(1;\textbf{r}=\\{r_{1},\cdots,r_{K}\\}),$ (3.1) where $r_{k}$ is the probability of a node being assigned to community $k$ and $\sum_{k=1}^{K}r_{k}=1$. The categorical probability can be expressed using the indicator function $I_{k}(z_{i})$ as $p(z_{i}|\textbf{r},K)=\prod_{k=1}^{K}r_{k}^{I_{k}(z_{i})},\ \mbox{where}\ I_{k}(z_{i})=\begin{cases}1,\ \mbox{if}\ z_{i}=k\\ 0,\ \mbox{if}\ z_{i}\neq k\end{cases}.$ (3.2) This implies that the $N$-dimensional vector z is generated with probability $p(\textbf{z}|\textbf{r},K)=\prod_{k=1}^{K}r_{k}^{m_{k}(\textbf{z})},$ (3.3) where $m_{k}(\textbf{z})=\sum_{i=1}^{N}I_{k}(z_{i})$. The latent allocation parameter vector $\textbf{r}=(r_{1},\cdots,r_{K})$ is assumed to have a $K$-dimensional Dirichlet prior with density $p(\textbf{r}|K)=N(\bm{\alpha})\prod_{k=1}^{K}r_{k}^{\alpha_{k}-1},$ (3.4) where the normalization factor is $N(\bm{\alpha})=\frac{\Gamma(\sum_{k=1}^{K}\alpha_{k})}{\prod_{k=1}^{K}\Gamma(\alpha_{k})}$.
In this work we suppose $\alpha_{k}=1$ for $k=1,\ldots,K$, so that the prior for r is uniform on the $K$-simplex. Edges between nodes are represented using an adjacency matrix $\textbf{x}\in\Re^{N\times N}$. We define a block $\textbf{x}_{kl}$ comprising the weighted edges connecting the nodes in community $k$ to the nodes in community $l$. The likelihood of the latent block model can be expressed as $p(\textbf{x}|\bm{\pi},\textbf{z},K)=\prod_{k,l}p(\textbf{x}_{kl}|\pi_{kl},\textbf{z},K),$ (3.5) and the likelihood in specific blocks can be expanded as $p(\textbf{x}_{kl}|\pi_{kl},\textbf{z},K)=\prod_{\\{i|z_{i}=k\\}}\prod_{\\{j|z_{j}=l\\}}p(x_{ij}|\pi_{kl},\textbf{z},K),$ (3.6) where $\bm{\pi}=\\{\pi_{kl}\\}$ is a $K\times K$ model parameter matrix.

### 3.2 The latent block model with weighted edges

The block model parameter in block $kl$ is $\pi_{kl}=(\mu_{kl},\sigma_{kl}^{2})$ and each $x_{ij}$ in the block $kl$ follows a Gaussian distribution conditional on z under the model $K$, that is, $x_{ij}|\pi_{kl},\textbf{z},K\sim\mathcal{N}(\mu_{kl},\sigma_{kl}^{2})$. The parameter vectors $\pi_{kl}=(\mu_{kl},\sigma_{kl}^{2})$ are assumed to independently follow the conjugate Normal-Inverse-Gamma (NIG) prior $\pi_{kl}\sim\mbox{NIG}(\xi,\kappa^{2}\sigma_{kl}^{2},\nu/2,\rho/2)$. That is, $\mu_{kl}\sim\mathcal{N}(\xi,\kappa^{2}\sigma_{kl}^{2})$ and $\sigma_{kl}^{2}\sim\mbox{IG}(\nu/2,\rho/2)$. The density of the Inverse-Gamma distribution $\mbox{IG}(\alpha,\beta)$ has the general form $p(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{-(\alpha+1)}e^{-\beta/x}$, where $\alpha$ and $\beta$ are hyper-parameters.
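Sampling from this NIG prior needs only stdlib Python: an Inverse-Gamma$(\alpha,\beta)$ variate is the reciprocal of a Gamma variate with shape $\alpha$ and scale $1/\beta$ (the helper name is ours):

```python
import math
import random

def sample_nig(xi, kappa2, nu, rho, rng=random):
    """Draw (mu, sigma2) from the NIG(xi, kappa2*sigma2, nu/2, rho/2) prior:
    sigma2 ~ IG(nu/2, rho/2), then mu | sigma2 ~ N(xi, kappa2*sigma2)."""
    alpha, beta = nu / 2.0, rho / 2.0
    # If X ~ Gamma(alpha, scale=1/beta), then 1/X ~ InverseGamma(alpha, beta).
    sigma2 = 1.0 / rng.gammavariate(alpha, 1.0 / beta)
    mu = rng.gauss(xi, math.sqrt(kappa2 * sigma2))
    return mu, sigma2
```
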
We define $s_{kl}(\textbf{x})$ to be the sum of the edge weights in the block $kl$ and $q_{kl}(\textbf{x})$ to be the corresponding sum of squares: $s_{kl}(\textbf{x})=\sum_{i:z_{i}=k}\sum_{j:z_{j}=l}x_{ij},$ (3.7) and $q_{kl}(\textbf{x})=\sum_{i:z_{i}=k}\sum_{j:z_{j}=l}x_{ij}^{2}.$ (3.8) We also define $w_{kl}(\textbf{z})=m_{k}(\textbf{z})m_{l}(\textbf{z})$ to be the number of elements in the block, where $m_{k}$ and $m_{l}$ are the numbers of nodes in communities $k$ and $l$ respectively. The prior and likelihood above form a NIG-Gaussian conjugate pair. With this conjugate pair, we can calculate the posterior distribution for each model block, which is also a Normal-Inverse-Gamma distribution, $\mu_{kl}\sim\mathcal{N}(\xi_{n},\kappa_{n}^{2}\sigma_{kl}^{2})$ and $\sigma_{kl}^{2}\sim\mbox{IG}(\nu_{n}/2,\rho_{n}/2)$, where $\nu_{n}=\nu+w_{kl},$ (3.9) $\kappa_{n}^{2}=\frac{\kappa^{2}}{1+w_{kl}\kappa^{2}},$ (3.10) $\xi_{n}=\frac{\xi+s_{kl}\kappa^{2}}{1+w_{kl}\kappa^{2}},$ (3.11) $\rho_{n}=\frac{\xi^{2}}{\kappa^{2}}+q_{kl}+\rho-\frac{(\xi+s_{kl}\kappa^{2})^{2}}{1/\kappa^{2}+w_{kl}}.$ (3.12) Details of the derivation of this $\mbox{NIG}(\xi_{n},\kappa_{n}^{2}\sigma_{kl}^{2},\nu_{n}/2,\rho_{n}/2)$ distribution are provided in Supplementary Section 4. The posterior density of the whole model is the product of such terms over all blocks, $p(\bm{\pi}|\textbf{x},\textbf{z})=\prod_{k,l}p(\pi_{kl}|\textbf{x}_{kl},\textbf{z}).$ (3.13) Given a sampled z we can draw $\bm{\pi}$ from the above posterior directly. Methods for sampling the latent vector z are discussed below.

### 3.3 The collapsed posterior of latent label vector

In this model, a change-point corresponds to a change in community architecture, i.e., a change in the latent label vector z and the parameter matrix $\bm{\pi}$. For the sake of computational efficiency, it is convenient to construct the collapsed posterior distribution $p(\textbf{z}|\textbf{x},K)$.
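The conjugate update (3.9)-(3.12) reduces to a few arithmetic operations per block; a minimal sketch (hypothetical helper, taking the block statistics $w_{kl}$, $s_{kl}$, $q_{kl}$ as inputs):

```python
def nig_posterior(xi, kappa2, nu, rho, w, s, q):
    """Conjugate update for one block: given the prior
    NIG(xi, kappa2*sigma2, nu/2, rho/2) and block statistics w_kl (count),
    s_kl (sum of weights), q_kl (sum of squares), return the posterior
    parameters (xi_n, kappa2_n, nu_n, rho_n) of Eqs. (3.9)-(3.12)."""
    nu_n = nu + w
    kappa2_n = kappa2 / (1.0 + w * kappa2)
    xi_n = (xi + s * kappa2) / (1.0 + w * kappa2)
    rho_n = xi**2 / kappa2 + q + rho - (xi + s * kappa2)**2 / (1.0 / kappa2 + w)
    return xi_n, kappa2_n, nu_n, rho_n
```

For instance, with the non-informative prior ($\xi=0$, $\kappa^{2}=1$, $\nu=3$, $\rho=0.02$) and block statistics $w=4$, $s=2$, $q=3$, this returns $\xi_{n}=0.4$, $\kappa_{n}^{2}=0.2$, $\nu_{n}=7$ and $\rho_{n}=2.22$.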
We can obtain the collapsed posterior by integrating out the nuisance parameters [5, 3]. In this section, we discuss the details of collapsing the latent block model when the edge weights are continuously valued. Given $K$, the joint density of x, $\bm{\pi}$, z, and r is $p(\textbf{x},\bm{\pi},\textbf{z},\textbf{r}|K)=p(\textbf{z},\textbf{r}|K)p(\textbf{x},\bm{\pi}|\textbf{z}).$ (3.14) The parameters r and $\bm{\pi}$ can be integrated out (collapsed) to obtain the marginal density $p(\textbf{x},\textbf{z}|K)$, $p(\textbf{z},\textbf{x}|K)=\int p(\textbf{z},\textbf{r}|K)d\textbf{r}\int p(\textbf{x},\bm{\pi}|\textbf{z})d\bm{\pi},$ (3.15) so that the posterior for the block-wise model can be expressed as $p(\textbf{z}|\textbf{x},K)\propto p(\textbf{z},\textbf{x}|K)=\int p(\textbf{z},\textbf{r}|K)d\textbf{r}\prod_{k,l}\int p(\textbf{x}_{kl},\pi_{kl}|\textbf{z})d\pi_{kl}.$ (3.16) The first integral, $p(\textbf{z}|K)=\int p(\textbf{z},\textbf{r}|K)d\textbf{r}$, where the integral is over the $K$-simplex, can be evaluated as $\int p(\textbf{z},\textbf{r}|K)d\textbf{r}=\frac{\Gamma(\sum_{k=1}^{K}\alpha_{k})}{\Gamma(\sum_{k=1}^{K}(\alpha_{k}+m_{k}(\textbf{z})))}\prod_{k=1}^{K}\frac{\Gamma(\alpha_{k}+m_{k}(\textbf{z}))}{\Gamma(\alpha_{k})}.$ (3.18) The details of this derivation are in Supplementary Section 5 below. The integral of the form $\int p(\textbf{x}_{kl},\pi_{kl}|\textbf{z})d\pi_{kl}$ can be evaluated as $\int p(\textbf{x}_{kl},\pi_{kl}|\textbf{z})d\pi_{kl}=\frac{\rho^{\nu/2}\Gamma\\{(w_{kl}+\nu)/2\\}}{\pi^{w_{kl}/2}\Gamma(\nu/2)(w_{kl}\kappa^{2}+1)^{1/2}}\left(-\frac{\kappa^{2}(s_{kl}+\xi/\kappa^{2})^{2}}{w_{kl}\kappa^{2}+1}+\frac{\xi^{2}}{\kappa^{2}}+q_{kl}+\rho\right)^{-(w_{kl}+\nu)/2}.$ (3.21) The derivation is in Supplementary Section 6.
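Combining (3.18) and (3.21), the unnormalized collapsed log posterior $\log p(\textbf{z},\textbf{x}|K)$ can be evaluated directly; the sketch below (our own helper, pure stdlib) is what a Gibbs move would call $K$ times to score the reassignments of a single entry:

```python
from math import lgamma, log, pi

def log_collapsed_posterior(x, z, K, alpha=1.0, xi=0.0, kappa2=1.0,
                            nu=3.0, rho=0.02):
    """Unnormalized log p(z | x, K) = log p(z, x | K): the collapsed
    Dirichlet term (3.18) plus one NIG-Gaussian block marginal (3.21) per
    block (k, l). x is an N x N weighted adjacency matrix (list of lists)."""
    N = len(z)
    m = [sum(1 for zi in z if zi == k) for k in range(1, K + 1)]
    # Dirichlet-Categorical term, Eq. (3.18), with symmetric alpha.
    lp = lgamma(K * alpha) - lgamma(K * alpha + N) \
        + sum(lgamma(alpha + mk) - lgamma(alpha) for mk in m)
    # One NIG-Gaussian marginal per block, Eq. (3.21).
    for k in range(1, K + 1):
        idx_k = [i for i in range(N) if z[i] == k]
        for l in range(1, K + 1):
            idx_l = [j for j in range(N) if z[j] == l]
            w = len(idx_k) * len(idx_l)
            s = sum(x[i][j] for i in idx_k for j in idx_l)
            q = sum(x[i][j] ** 2 for i in idx_k for j in idx_l)
            inner = (-kappa2 * (s + xi / kappa2) ** 2 / (w * kappa2 + 1)
                     + xi ** 2 / kappa2 + q + rho)
            lp += (nu / 2) * log(rho) + lgamma((w + nu) / 2) \
                - (w / 2) * log(pi) - lgamma(nu / 2) \
                - 0.5 * log(w * kappa2 + 1) - ((w + nu) / 2) * log(inner)
    return lp
```

Because the prior is symmetric, the value is invariant under permutations of the community labels, which is exactly the label switching phenomenon of Supplementary Section 2.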
### 3.4 Sampling from the collapsed posterior

We use a Markov chain Monte Carlo (MCMC) method to sample the latent label vector from the posterior, with proposal moves $p(\textbf{z}\rightarrow\textbf{z}^{\ast})$ similar to those of the allocation sampler [2] to update z. In the Metropolis-Hastings algorithm [6], a candidate latent label vector $\textbf{z}^{\ast}$ is accepted with probability $\min\\{1,r\\}$, where $r=\frac{p(K,\textbf{z}^{\ast},\textbf{x})p(\textbf{z}^{\ast}\rightarrow\textbf{z})}{p(K,\textbf{z},\textbf{x})p(\textbf{z}\rightarrow\textbf{z}^{\ast})}.$ (3.22) In each iteration of the sampler, we perform either a Gibbs move or an M3 move, with equal probability (0.5) of each. Each Gibbs move updates the latent label vector z by drawing from the collapsed posterior $p(\textbf{z}|\textbf{x},K)$: one entry $z_{i}$ is randomly selected and updated by drawing from $p(z_{i}^{\ast}|z_{-i},\textbf{x},K)=\frac{1}{C}p(z_{1},\cdots,z_{i-1},z_{i}^{\ast}=k,z_{i+1},\cdots,z_{N}|\textbf{x},K),$ (3.23) where $k\in\\{1,\cdots,K\\}$, $z_{-i}$ represents the elements of z apart from $z_{i}$, and the normalization term is $C=p(z_{-i}|\textbf{x},K)=\sum_{k=1}^{K}p(z_{1},\cdots,z_{i-1},z_{i}^{\ast}=k,z_{i+1},\cdots,z_{N}|\textbf{x},K).$ (3.24) For a Gibbs move within a Metropolis-Hastings sampler, the ratio $r$ always equals one. The computational complexity of a Gibbs move depends on the cost of calculating the probability of the reassignment of a specific entry. Each probability takes $O(K^{2}+N^{2})$ time to calculate, and there are $K$ possible reassignments, so each Gibbs move takes $O(K^{3}+KN^{2})$ time. The details of the M3 move are provided in Supplementary Section 7. The computational complexity of the M3 move depends on the cost of calculating the ratio of posterior density and proposal density.
The time cost of calculating this ratio is $O(K^{2}+N^{2})$, and calculating the proposal ratio takes $O(N+L^{2})$ time, so the M3 move takes $O(K^{2}+N^{2}+L^{2})$ time. ## 4 The likelihood and posterior of the latent block model with weighted edges Likelihood: The likelihood of the block $kl$ with weighted edges is $\displaystyle p(\textbf{x}_{kl}|\pi_{kl},\textbf{z},K)$ $\displaystyle=$ $\displaystyle\prod_{\\{i|z_{i}=k\\}}\prod_{\\{j|z_{j}=l\\}}p(x_{ij}|\mu_{kl},\sigma_{kl}^{2},\textbf{z},K)$ (4.1) $\displaystyle=$ $\displaystyle(2\pi\sigma_{kl}^{2})^{-w_{kl}/2}\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}\sum_{i:z_{i}=k}\sum_{j:z_{j}=l}(x_{ij}-\mu_{kl})^{2}\\}$ $\displaystyle=$ $\displaystyle(2\pi\sigma_{kl}^{2})^{-w_{kl}/2}$ $\displaystyle\times\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}(\sum_{i:z_{i}=k}\sum_{j:z_{j}=l}x_{ij}^{2}-2\sum_{i:z_{i}=k}\sum_{j:z_{j}=l}x_{ij}\mu_{kl}$ $\displaystyle+\sum_{i:z_{i}=k}\sum_{j:z_{j}=l}\mu_{kl}^{2})\\}$ $\displaystyle=$ $\displaystyle(2\pi\sigma_{kl}^{2})^{-w_{kl}/2}\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}(q_{kl}-2\mu_{kl}s_{kl}+w_{kl}\mu_{kl}^{2})\\},$ where $w_{kl}$ is the number of elements in block $kl$, $s_{kl}$ is the sum of the weights and $q_{kl}$ is the sum of squares of the weights in the block $kl$. Posterior: We derive the posterior of the model parameter $\pi_{kl}$ with prior $\mu_{kl}\sim\mathcal{N}(\xi,\kappa^{2}\sigma_{kl}^{2})$ and $\sigma_{kl}^{2}\sim\mbox{IG}(\nu/2,\rho/2)$ as follows. 
$\displaystyle p(\pi_{kl}|\textbf{x}_{kl},\textbf{z},K)$ $\displaystyle\propto$ $\displaystyle p(\pi_{kl})p(\textbf{x}_{kl}|\pi_{kl},\textbf{z},K)$ (4.2) $\displaystyle=$ $\displaystyle p(\mu_{kl})p(\sigma_{kl}^{2})\prod_{\\{i|z_{i}=k\\}}\prod_{\\{j|z_{j}=l\\}}p(x_{ij}|\mu_{kl},\sigma_{kl}^{2},\textbf{z},K)$ $\displaystyle=$ $\displaystyle(2\pi\kappa^{2}\sigma_{kl}^{2})^{-1/2}\mbox{exp}\\{-\frac{1}{2\kappa^{2}\sigma_{kl}^{2}}(\mu_{kl}-\xi)^{2}\\}$ $\displaystyle\times\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}\sigma_{kl}^{-2(\nu/2+1)}\mbox{exp}\\{-\rho/2\sigma_{kl}^{2}\\}$ $\displaystyle\times(2\pi\sigma_{kl}^{2})^{-w_{kl}/2}\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}(q_{kl}-2\mu_{kl}s_{kl}+w_{kl}\mu_{kl}^{2})\\}$ $\displaystyle=$ $\displaystyle\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}(2\pi\kappa^{2})^{-1/2}(2\pi)^{-w_{kl}/2}\sigma_{kl}^{-1}\sigma_{kl}^{-\nu-2-w_{kl}}$ $\displaystyle\times\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}[(\frac{1}{\kappa^{2}}+w_{kl})\mu_{kl}^{2}-2(\frac{1}{\kappa^{2}}\xi+s_{kl})\mu_{kl}$ $\displaystyle+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho]\\}$ The posterior of the Gaussian model is also a Normal-Inverse-Gamma distribution which can be denoted as $\mu_{kl}\sim\mathcal{N}(\xi_{n},\kappa_{n}^{2}\sigma_{kl}^{2})$ and $\sigma_{kl}^{2}\sim\mbox{IG}(\nu_{n}/2,\rho_{n}/2)$. 
The posterior density can be expressed as $\displaystyle p(\pi_{kl}|\textbf{x}_{kl},\textbf{z},K)$ $\displaystyle=$ $\displaystyle(2\pi\kappa_{n}^{2}\sigma_{kl}^{2})^{-1/2}\mbox{exp}\\{-\frac{1}{2\kappa_{n}^{2}\sigma_{kl}^{2}}(\mu_{kl}-\xi_{n})^{2}\\}$ (4.3) $\displaystyle\times\frac{(\rho_{n}/2)^{\nu_{n}/2}}{\Gamma(\nu_{n}/2)}\sigma_{kl}^{-2(\nu_{n}/2+1)}\mbox{exp}\\{-\rho_{n}/2\sigma_{kl}^{2}\\}$ $\displaystyle=$ $\displaystyle\frac{(\rho_{n}/2)^{\nu_{n}/2}}{\Gamma(\nu_{n}/2)}(2\pi\kappa_{n}^{2})^{-1/2}\sigma_{kl}^{-1}\sigma_{kl}^{-\nu_{n}-2}$ $\displaystyle\times\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}(\frac{1}{\kappa_{n}^{2}}\mu_{kl}^{2}-\frac{2\xi_{n}}{\kappa_{n}^{2}}\mu_{kl}+\frac{\xi_{n}^{2}}{\kappa_{n}^{2}}+\rho_{n})\\}.$ Comparing the terms and coefficients with respect to $\mu_{kl}^{2}$, $\mu_{kl}$ and $\sigma_{kl}^{2}$, $-\nu_{n}-2=-\nu-2-w_{kl},$ (4.4) $\frac{1}{\kappa_{n}^{2}}=\frac{1}{\kappa^{2}}+w_{kl},$ (4.5) $\frac{2\xi_{n}}{\kappa_{n}^{2}}=2(\frac{1}{\kappa^{2}}\xi+s_{kl}),$ (4.6) $\frac{\xi_{n}^{2}}{\kappa_{n}^{2}}+\rho_{n}=\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho.$ (4.7) In summary, the parameters of the posterior density are given by $\nu_{n}=\nu+w_{kl},$ (4.8) $\kappa_{n}^{2}=\frac{\kappa^{2}}{1+w_{kl}\kappa^{2}},$ (4.9) $\xi_{n}=\frac{\xi+s_{kl}\kappa^{2}}{1+w_{kl}\kappa^{2}},$ (4.10) $\rho_{n}=\frac{\xi^{2}}{\kappa^{2}}+q_{kl}+\rho-\frac{(\xi+s_{kl}\kappa^{2})^{2}}{1/\kappa^{2}+w_{kl}}.$ (4.11) We can directly sample $\pi_{kl}$ from $\mbox{NIG}(\xi_{n},\kappa_{n}^{2}\sigma_{kl}^{2},\nu_{n}/2,\rho_{n}/2)$. ## 5 Collapse r in latent block model We show the calculation of $p(\textbf{z}|K)=\int p(\textbf{z},\textbf{r}|K)d\textbf{r}$. 
Given the $K$-dimensional Dirichlet prior with density $p(\textbf{r}|K)=N(\bm{\alpha})\prod_{k=1}^{K}r_{k}^{\alpha_{k}-1}$, where $\bm{\alpha}=\\{\alpha_{1},\cdots,\alpha_{K}\\}$ and $N(\bm{\alpha})=\frac{\Gamma(\sum_{k=1}^{K}\alpha_{k})}{\prod_{k=1}^{K}\Gamma(\alpha_{k})}$, and the likelihood $p(\textbf{z}|\textbf{r},K)=\prod_{k=1}^{K}r_{k}^{m_{k}(\textbf{z})}$, we can collapse r as follows: $\displaystyle p(\textbf{z}|K)$ $\displaystyle=$ $\displaystyle\int p(\textbf{z},\textbf{r}|K)d\textbf{r}$ (5.1) $\displaystyle=$ $\displaystyle\int p(\textbf{r}|K)p(\textbf{z}|\textbf{r},K)d\textbf{r}$ $\displaystyle=$ $\displaystyle\int\frac{\Gamma(\sum_{k=1}^{K}\alpha_{k})}{\prod_{k=1}^{K}\Gamma(\alpha_{k})}\prod_{k=1}^{K}r_{k}^{\alpha_{k}-1}\prod_{k=1}^{K}r_{k}^{m_{k}}d\textbf{r}$ $\displaystyle=$ $\displaystyle\frac{\Gamma(\sum_{k=1}^{K}\alpha_{k})}{\prod_{k=1}^{K}\Gamma(\alpha_{k})}\frac{\prod_{k=1}^{K}\Gamma(\alpha_{k}+m_{k})}{\Gamma(\sum_{k=1}^{K}(\alpha_{k}+m_{k}))}$ $\displaystyle\times\int\frac{\Gamma(\sum_{k=1}^{K}(\alpha_{k}+m_{k}))}{\prod_{k=1}^{K}\Gamma(\alpha_{k}+m_{k})}\prod_{k=1}^{K}r_{k}^{\alpha_{k}+m_{k}-1}d\textbf{r}$ $\displaystyle=$ $\displaystyle\frac{\Gamma(\sum_{k=1}^{K}\alpha_{k})}{\Gamma(\sum_{k=1}^{K}(\alpha_{k}+m_{k}))}\prod_{k=1}^{K}\frac{\Gamma(\alpha_{k}+m_{k})}{\Gamma(\alpha_{k})}.$

## 6 Collapse $\pi_{kl}$ in latent block model with weighted edges

The collapsed posterior of the latent block model is described by [3], but the details of the collapsing procedure are not given there. Here we elaborate the collapsing procedure for the Gaussian latent block model, collapsing $\mu_{kl}$ and then $\sigma_{kl}^{2}$ to evaluate the integral.
$\displaystyle\int p(\textbf{x}_{kl},\pi_{kl}|\textbf{z})d\pi_{kl}$ $\displaystyle=$ $\displaystyle\int\int p(\textbf{x}_{kl},\mu_{kl},\sigma_{kl}^{2}|\textbf{z})d\mu_{kl}d\sigma_{kl}^{2}$ (6.1) $\displaystyle=$ $\displaystyle\int\int p(\mu_{kl})p(\sigma_{kl}^{2})p(\textbf{x}_{kl}|\mu_{kl},\sigma_{kl}^{2},\textbf{z})d\mu_{kl}d\sigma_{kl}^{2}$ To facilitate integration with respect to $\mu_{kl}$, we denote $I_{\mu_{kl}}=\int p(\textbf{x}_{kl},\mu_{kl},\sigma_{kl}^{2}|\textbf{z})d\mu_{kl},$ (6.2) then $\displaystyle I_{\mu_{kl}}$ $\displaystyle=$ $\displaystyle\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}(2\pi\kappa^{2})^{-1/2}(2\pi)^{-w_{kl}/2}\sigma_{kl}^{-1}\sigma_{kl}^{-\nu-2-w_{kl}}$ (6.3) $\displaystyle\times\int\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}[(\frac{1}{\kappa^{2}}+w_{kl})\mu_{kl}^{2}-2(\frac{1}{\kappa^{2}}\xi+s_{kl})\mu_{kl}$ $\displaystyle+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho]\\}d\mu_{kl}.$ Let $M=\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}(2\pi\kappa^{2})^{-1/2}(2\pi)^{-w_{kl}/2}\sigma_{kl}^{-1}\sigma_{kl}^{-\nu-2-w_{kl}},$ (6.4) so that $\displaystyle I_{\mu_{kl}}=M\times\int\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}[\lambda(\mu_{kl}-m)^{2}-\lambda m^{2}+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho]\\}d\mu_{kl},$ (6.5) where $\lambda=\frac{1}{\kappa^{2}}+w_{kl},$ (6.6) and $m=\frac{\frac{1}{\kappa^{2}}\xi+s_{kl}}{\frac{1}{\kappa^{2}}+w_{kl}}.$ (6.7) Then $\displaystyle I_{\mu_{kl}}$ $\displaystyle=$ $\displaystyle M\times(2\pi\frac{\sigma_{kl}^{2}}{\lambda})^{1/2}\int(2\pi\frac{\sigma_{kl}^{2}}{\lambda})^{-1/2}\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}\lambda(\mu_{kl}-m)^{2}\\}$ (6.8) $\displaystyle\times\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}(-\lambda m^{2}+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho)\\}d\mu_{kl}$ $\displaystyle=$ $\displaystyle M\times(2\pi\frac{\sigma_{kl}^{2}}{\lambda})^{1/2}\times\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}(-\lambda m^{2}+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho)\\}$ $\displaystyle=$
$\displaystyle(2\pi)^{-w_{kl}/2}\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}\sigma_{kl}^{-\nu- w_{kl}-2}(w_{kl}\kappa^{2}+1)^{-1/2}$ $\displaystyle\times\mbox{exp}\\{-\frac{1}{2\sigma_{kl}^{2}}[-\frac{(\frac{1}{\kappa^{2}}\xi+s_{kl})^{2}}{\frac{1}{\kappa^{2}}+w_{kl}}+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho]\\}.$ To facilitate integration with respect to $\sigma_{kl}^{2}$, we first rewrite $I_{\mu_{kl}}$ as follows $I_{\mu_{kl}}=(2\pi)^{-w_{kl}/2}\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}(w_{kl}\kappa^{2}+1)^{-1/2}\frac{\Gamma(\alpha)}{\beta^{\alpha}}\frac{\beta^{\alpha}}{\Gamma(\alpha)}(\sigma_{kl}^{2})^{-(\alpha+1)}e^{(\frac{-\beta}{\sigma_{kl}^{2}})},$ (6.9) where $\alpha=\frac{1}{2}\nu+\frac{1}{2}w_{kl},$ (6.10) and $\beta=\frac{1}{2}[-\frac{(\frac{1}{\kappa^{2}}\xi+s_{kl})^{2}}{\frac{1}{\kappa^{2}}+w_{kl}}+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho].$ (6.11) This can be integrated as follows $\displaystyle\int I_{\mu_{kl}}d\sigma_{kl}^{2}$ $\displaystyle=$ $\displaystyle(2\pi)^{-w_{kl}/2}\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}(w_{kl}\kappa^{2}+1)^{-1/2}\frac{\Gamma(\alpha)}{\beta^{\alpha}}$ (6.12) $\displaystyle=$ $\displaystyle(2\pi)^{-w_{kl}/2}\frac{(\rho/2)^{\nu/2}}{\Gamma(\nu/2)}(w_{kl}\kappa^{2}+1)^{-1/2}$ $\displaystyle\times\frac{\Gamma(\frac{1}{2}\nu+\frac{1}{2}w_{kl})}{(\frac{1}{2}[-\frac{(\frac{1}{\kappa^{2}}\xi+s_{kl})^{2}}{\frac{1}{\kappa^{2}}+w_{kl}}+\frac{1}{\kappa^{2}}\xi^{2}+q_{kl}+\rho])^{(\frac{1}{2}\nu+\frac{1}{2}w_{kl})}}$ $\displaystyle=$ $\displaystyle\frac{\rho^{\nu/2}\Gamma\\{(w_{kl}+\nu)/2\\}}{\pi^{w_{kl}/2}\Gamma(\nu/2)(w_{kl}\kappa^{2}+1)^{1/2}}$ $\displaystyle\times(-\frac{\kappa^{2}(s_{kl}+\xi/\kappa^{2})^{2}}{w_{kl}\kappa^{2}+1}+\frac{\xi^{2}}{\kappa^{2}}+q_{kl}+\rho)^{-(w_{kl}+\nu)/2}.$ In summary, $\displaystyle\int p(\textbf{x}_{kl},\pi_{kl}|\textbf{z})d\pi_{kl}$ $\displaystyle=$ $\displaystyle\int\int p(\textbf{x}_{kl},\mu_{kl},\sigma_{kl}^{2}|\textbf{z})d\mu_{kl}d\sigma_{kl}^{2}$ (6.13) $\displaystyle=$ 
$\displaystyle\frac{\rho^{\nu/2}\Gamma\\{(w_{kl}+\nu)/2\\}}{\pi^{w_{kl}/2}\Gamma(\nu/2)(w_{kl}\kappa^{2}+1)^{1/2}}$ $\displaystyle\times(-\frac{\kappa^{2}(s_{kl}+\xi/\kappa^{2})^{2}}{w_{kl}\kappa^{2}+1}+\frac{\xi^{2}}{\kappa^{2}}+q_{kl}+\rho)^{-(w_{kl}+\nu)/2}.$ ## 7 The M3 move In a Gibbs move, only one entry in z is updated at each iteration. An alternative is the M3 move [2], which updates multiple entries of z simultaneously. In M3, two communities in z are randomly selected and denoted as $k_{1}$ and $k_{2}$. Each element $z_{i}$ in the selected communities is reassigned to $k_{1}$ or $k_{2}$ with probability $P_{k_{1}}^{i}$ and $P_{k_{2}}^{i}$ respectively, to form the updated $\textbf{z}^{\ast}$. The collection of elements of z with labels $k_{1}$ or $k_{2}$ may be indexed by the set $I=\\{i:z_{i}=k_{1}\mbox{\ or\ }z_{i}=k_{2}\\}$. Let the number of such elements be $L$. The remaining elements of z are collected into a subvector denoted $\widetilde{\textbf{z}}$. For the update, one element $z_{i}$ with $i\in I$ is randomly selected and updated to $z_{i}^{\ast}$ according to a reassignment probability. The updated element is added to $\widetilde{\textbf{z}}$. The size of $I$ thus becomes $L-1$. This procedure is repeated until all the elements of $I$ are processed (the length of $I$ becomes 0) and the resulting vector $\widetilde{\textbf{z}}$ becomes the proposed move $\textbf{z}^{\ast}$. We define a sub-adjacency matrix $\widetilde{\textbf{x}}$ as the observations corresponding to $\widetilde{\textbf{z}}$ and the observations $\textbf{x}^{{\ast}i}$ corresponding to the updated $z_{i}^{\ast}$. 
The probabilities of the reassignment satisfy $P_{k_{1}}^{i}+P_{k_{2}}^{i}=1$ and the ratio $\displaystyle\frac{P_{k_{1}}^{i}}{P_{k_{2}}^{i}}$ $\displaystyle=$ $\displaystyle\frac{p(z_{i}^{\ast}=k_{1}|\widetilde{\textbf{z}},\widetilde{\textbf{x}},\textbf{x}^{\ast i},K)}{p(z_{i}^{\ast}=k_{2}|\widetilde{\textbf{z}},\widetilde{\textbf{x}},\textbf{x}^{\ast i},K)}$ (7.1) $\displaystyle=$ $\displaystyle\frac{p(z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}},\widetilde{\textbf{x}},\textbf{x}^{\ast i}|K)}{p(z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}},\widetilde{\textbf{x}},\textbf{x}^{\ast i}|K)}$ $\displaystyle=$ $\displaystyle\frac{p(z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}}|K)}{p(z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}}|K)}\frac{p(\widetilde{\textbf{x}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}},K)}{p(\widetilde{\textbf{x}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}},K)}.$ The first term of this ratio is given by $\displaystyle\frac{p(z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}}|K)}{p(z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}}|K)}$ $\displaystyle=$ $\displaystyle\frac{\Gamma(\alpha_{k_{1}}+\widetilde{m}_{k_{1}}(\widetilde{\textbf{z}})+1)}{\Gamma(\alpha_{k_{1}}+\widetilde{m}_{k_{1}}(\widetilde{\textbf{z}}))}\frac{\Gamma(\alpha_{k_{2}}+\widetilde{m}_{k_{2}}(\widetilde{\textbf{z}}))}{\Gamma(\alpha_{k_{2}}+\widetilde{m}_{k_{2}}(\widetilde{\textbf{z}})+1)}$ (7.2) $\displaystyle=$ $\displaystyle\frac{\alpha_{k_{1}}+\widetilde{m}_{k_{1}}(\widetilde{\textbf{z}})}{\alpha_{k_{2}}+\widetilde{m}_{k_{2}}(\widetilde{\textbf{z}})},$ where $\widetilde{m}_{k_{1}}(\widetilde{\textbf{z}})$ and $\widetilde{m}_{k_{2}}(\widetilde{\textbf{z}})$ are the numbers of nodes in community $k_{1}$ and $k_{2}$ in $\widetilde{\textbf{z}}$. 
The second term of the ratio is given by $\displaystyle\frac{p(\widetilde{\textbf{x}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}},K)}{p(\widetilde{\textbf{x}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}},K)}$ $\displaystyle=$ $\displaystyle\frac{p(\textbf{x}^{\ast i}|\widetilde{\textbf{x}},z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}},K)}{p(\textbf{x}^{\ast i}|\widetilde{\textbf{x}},z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}},K)}$ (7.3) $\displaystyle=$ $\displaystyle\frac{p(\textbf{x}^{\ast i}|\widetilde{\textbf{x}}^{k_{1}},z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}},K)}{p(\textbf{x}^{\ast i}|\widetilde{\textbf{x}}^{k_{2}},z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}},K)}$ $\displaystyle=$ $\displaystyle\frac{p(\widetilde{\textbf{x}}^{k_{1}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}})}{p(\widetilde{\textbf{x}}^{k_{1}}|z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}})}\frac{p(\widetilde{\textbf{x}}^{k_{2}}|z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}})}{p(\widetilde{\textbf{x}}^{k_{2}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}})}.$ Finally, the reassignment probability is given by $\displaystyle\frac{P_{k_{1}}^{i}}{1-P_{k_{1}}^{i}}$ $\displaystyle=$ $\displaystyle\frac{\alpha_{k_{1}}+\widetilde{m}_{k_{1}}(\widetilde{\textbf{z}})}{\alpha_{k_{2}}+\widetilde{m}_{k_{2}}(\widetilde{\textbf{z}})}\frac{p(\widetilde{\textbf{x}}^{k_{1}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}})}{p(\widetilde{\textbf{x}}^{k_{1}}|z_{i}^{\ast}=k_{1},\widetilde{\textbf{z}})}\frac{p(\widetilde{\textbf{x}}^{k_{2}}|z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}})}{p(\widetilde{\textbf{x}}^{k_{2}},\textbf{x}^{\ast i}|z_{i}^{\ast}=k_{2},\widetilde{\textbf{z}})},$ (7.4) and the proposal ratio is given by $\displaystyle\frac{p(\textbf{z}^{\ast}\rightarrow\textbf{z})}{p(\textbf{z}\rightarrow\textbf{z}^{\ast})}$ $\displaystyle=$ $\displaystyle\prod_{i\in I}\frac{P_{z_{i}}^{i}}{P_{z_{i}^{\ast}}^{i}}.$ (7.5)

## 8 Summary of the algorithm
Algorithm 1 Bayesian change-point detection by posterior predictive discrepancy

Input: Time series Y of one subject, length of time course $T$, window size $W$, number of communities $K$.

1: For $t=\frac{W}{2}+1,\cdots,T-\frac{W}{2}$
2: Calculate $\textbf{Y}_{t}\rightarrow\textbf{x}_{t}$.
3: Draw samples $\\{\textbf{z}^{i},\bm{\pi}^{i}\\}$ $(i=1,\cdots,S)$ from the posterior $P(\textbf{z},\bm{\pi}|\textbf{x},K)$.
4: Simulate the replicated adjacency matrix $\textbf{x}^{rep^{i}}$ from the predictive distribution $P(\textbf{x}^{rep}|\textbf{z},\bm{\pi},K)$.
5: Calculate the disagreement index $\gamma(\textbf{x}^{rep^{i}};\textbf{x})$.
6: Calculate the posterior predictive discrepancy index $\overline{\gamma}_{t}=\frac{\sum_{i=1}^{S}\gamma(\textbf{x}^{rep^{i}};\textbf{x})}{S}$.
7: End
8: For $t=\frac{W}{2}+\frac{W_{s}}{2}+1,\cdots,T-\frac{W}{2}-\frac{W_{s}}{2}$
9: Calculate the cumulative discrepancy energy $E_{t}=\sum_{I=t-\frac{W_{s}}{2}}^{t+\frac{W_{s}}{2}-1}\overline{\gamma}_{I}$.
10: End

## Supplementary Table 1

| Community | 2-back | 0-back | Fixation |
---|---|---|---
| k=1 | 29 | | |
| k=2 | 11, 31, 32 | 31, 32 | 11, 30, 31, 32 |
| k=3 | 21 | 16, 20, 27 | 12, 16, 20, 21 |
| k=4 | 1, 9, 17, 34 | 1, 9, 17, 34 | 1, 7, 9, 17, 34 |
| k=5 | 2 | 3 | 24 |
| k=6 | 35 | 5, 10 | 8 |

Table 1: This table summarises the nodes commonly located in a specific community $k$ for each of the picture types in the working memory tasks.

Fig. 1: Results of method validation using synthetic data with SNR = 10dB: a CDE of the multivariate Gaussian data with SNR = 10dB using different models ($K$=6, 5, 4, and 3).
The sliding window size for converting from the time series to the sequence of correlation matrices is $W=20$, whereas (for smoothing) the sliding window size for converting from PPDI to CDE is $W_{s}=10$. The vertical dashed lines mark the locations of the true change-points ($t$ = 20, 50, 80, 100, 130, and 160). The colored scatterplots in the figures are the CDEs of individual (virtual) subjects and the black curve is the group CDE (the CDE averaged over 100 subjects). The red points are the local maxima and the blue points are the local minima. b Local fitting with different models (from $K$=3 to 18) for synthetic data (SNR = 10dB). Different colors represent the PPDI values of different states with the true number of communities $K^{true}$. c The estimation of community constituents for SNR = 10dB at each discrete state ($t$ = 36, 67, 91, 116, and 147 for brain states 1 to 5, respectively). The estimates of the latent label vectors (Estimation) and the label vectors (True) that determine the covariance matrix in the generative model are shown as bar graphs. The strength and variation of the connectivity within and between communities are represented by the block mean and variance matrices within each panel. Fig. 2: Results of method validation using synthetic data with SNR = 0dB: This figure is in the same format as Supplementary Figure 1 above, except that it is for SNR = 0dB. b Local fitting with different models (from $K$=3 to 18) for synthetic data (SNR = 0dB). c The estimation of community constituents for SNR = 0dB at each discrete state ($t$ = 36, 66, 92, 116, and 146 for brain states 1 to 5, respectively). Fig. 3: CDE of the multivariate Gaussian data with SNR = -5dB using different models ($K$=6, 5, 4, and 3 in a to d). Change-point detection did not work in this case, hence the brain states cannot be identified here. Fig.
4: Validation of sampling the model parameters: a the histograms of the sampled block mean, b the histograms of the sampled block variance for the case $K=3$. We denote the block $kl$ sequentially (for example, the block for $k=2,l=3$ is denoted as block $6$; the block for $k=3,l=3$ is denoted as block $9$). The green dashed lines are the true values and the black dashed lines are the estimates. In order to validate the algorithm for sampling the model parameters, we simulate a synthetic adjacency matrix from a mixture of Gaussian distributions with ground truth of $K=3$, the true latent label vector (3, 2, 1, 1, 2, 3, 3, 3, 2, 2, 1, 3, 1, 2, 2, 2, 1, 3, 3, 3, 3), the true block mean matrix (0.22, 0.05, -0.06; 0.05, 0.30, 0.02; -0.06, 0.02, 0.18) and the true block variance matrix (0.1, 0.02, 0.01; 0.002, 0.15, 0.03; 0.01, 0.03, 0.09). Given this generated adjacency matrix as an observation, we draw samples of the block mean and variance from the posterior $p(\bm{\pi}|\textbf{x},\textbf{z})$ conditional on $\textbf{z}$. The histogram of the mean is consistent in shape with a Normal distribution and the histogram of the variance with an Inverse-Gamma distribution. The figure shows that the estimates of the block mean and variance closely match the ground truth values. Fig. 5: Task activation maps (thresholded Z-MAX maps) for group analysis: Contrasts of 2-back vs fixation, 0-back vs fixation and 2-back vs 0-back for MNI coordinates (x = -50, y = -46, z = 10). For running 1st-level GLM-based FEAT [7] in FSL, we added the confound predictor files released by HCP to the design matrix of the model for each individual. We then set up a 2nd-level design matrix for the contrasts of 2-back, 0-back, and fixation. 
For the 3rd-level (the group-level GLM analysis [8]), we applied cluster-wise inference and set the cluster-defining threshold (CDT) to $Z=3.1$ ($P=0.001$) to avoid the cluster failure problems described in [9], with a family-wise error-corrected threshold of $P=0.05$. Maps are viewed by looking upward from the feet of the subject and the coordinate directions are denoted as Anterior (A), Posterior (P), Superior (S), Inferior (I), Left (L), and Right (R). Fig. 6: Thresholded activation maps Fig. 7: Community structure of the discrete brain states with sparsity level of 20% (session 1: LR): The figures with blue frames represent brain states corresponding to working memory tasks (2-back tool at $t=41$; 0-back body at $t=76$; 2-back face at $t=140$; 0-back tool at $t=175$; 2-back body at $t=239$; 2-back place at $t=278$; 0-back face at $t=334$; and 0-back place at $t=375$ in a-k) and those with red frames represent brain states corresponding to fixation (fixation at $t=107,206,$ and $306$ in c, f, and i). Each brain state shows connectivity at a sparsity level of 20%. The different colors of the labels represent community memberships. The strength of the connectivity is represented by the colors shown in the bar at the right of each frame. In Circos maps, nodes in the same community are adjacent and have the same color. Node numbers and abbreviations of the corresponding brain regions are shown around the circles. In each frame, different colors represent different community numbers. The connectivity above the sparsity level is represented by arcs. The blue links represent connectivity within communities and the red links represent connectivity between communities. Fig. 8: Community structure of the discrete brain states with sparsity level of 30% (session 1: LR): This figure is in the same format as Supplementary Figure 7 above, except that it is for a sparsity level of 30%. Fig. 
9: Estimated mean and variance matrices of the blocks for brain states (session 1: LR): The figures with blue frames represent brain states corresponding to working memory tasks (2-back tool at $t=41$; 0-back body at $t=76$; 2-back face at $t=140$; 0-back tool at $t=175$; 2-back body at $t=239$; 2-back place at $t=278$; 0-back face at $t=334$; and 0-back place at $t=375$ in a-k) and those with red frames represent brain states corresponding to fixation (fixation at $t=107,206,$ and $306$ in c, f, and i). The different colors of the labels represent community memberships. The density and variation of connectivity within and between communities are shown in the estimated block mean matrix and block variance matrix. Fig. 10: Results of Bayesian change-point detection for working memory tfMRI data (session 2 RL): Cumulative discrepancy energy (CDE) with different sliding window sizes (a $W=22$, b $W=26$, c $W=30$ and d $W=34$ under the model $K=3$) and different models (c $K=3$; e $K=4$; f $K=5$ using a sliding window of $W=30$). $W_{s}$ is the sliding window used for converting from PPDI to CDE. The vertical dashed lines are the times of onset of the stimuli, which are provided in the EV.txt files in the released data. The colored scatterplots in the figures represent the CDEs of individual subjects and the black curve is the group CDE (averaged CDE over 100 subjects). The red points are the local maxima, which are taken to be the locations of change-points, and the blue points are the local minima, which are used for local inference of the discrete brain states. Fig. 11: Local fitting (session 2 RL) between the averaged adjacency matrix and models from $K=3$ to $K=18$. Different colors represent the PPDI values of different brain states. Fig. 
12: Community structure of the discrete brain states with sparsity level of 10% (session 2: RL): The figures with blue frames represent brain states corresponding to working memory tasks (2-back tool at $t=41$; 0-back body at $t=77$; 2-back face at $t=139$; 0-back tool at $t=175$; 0-back body at $t=236$; 2-back place at $t=275$; 0-back face at $t=334$; and 2-back place at $t=376$ in a-k) and those with red frames represent brain states corresponding to fixation (fixation at $t$=99, 209, and 306 in c, f, and i). Fig. 13: Community structure of the discrete brain states with sparsity level of 20% (session 2: RL): This figure is in the same format as Supplementary Figure 12 above, except that it is for a sparsity level of 20%. Fig. 14: Community structure of the discrete brain states with sparsity level of 30% (session 2: RL): This figure is in the same format as Supplementary Figure 12 above, except that it is for a sparsity level of 30%. Fig. 15: Estimated mean and variance matrices of the blocks for brain states (session 2: RL): This figure is in the same format as Supplementary Figure 9. The figures with blue frames represent brain states corresponding to working memory tasks (2-back tool at $t=41$; 0-back body at $t=77$; 2-back face at $t=139$; 0-back tool at $t=175$; 0-back body at $t=236$; 2-back place at $t=275$; 0-back face at $t=334$; and 2-back place at $t=365$ in a-k) and those with red frames represent brain states corresponding to fixation (fixation at $t=99,209,$ and $306$ in c, f, and i). ## References * [1] Matthew Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(4):795–809, 2000. * [2] Agostino Nobile and Alastair T. Fearnside. Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing, 17:147–162, 2007. * [3] Jason Wyse and Nial Friel. Block clustering with collapsed latent block models. 
Statistics and Computing, 22:415–428, 2012. * [4] Giorgio Carpaneto and Paolo Toth. Algorithm 548: Solution of the assignment problem [H]. ACM Transactions on Mathematical Software (TOMS), 6(1):104–111, 1980. * [5] Aaron F. McDaid, Thomas Brendan Murphy, Nial Friel, and Neil J. Hurley. Improved Bayesian inference for the stochastic block model with application to large networks. Computational Statistics and Data Analysis, 60:12–31, 2012. * [6] W.K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970. * [7] Mark W. Woolrich, Brian D. Ripley, Michael Brady, and Stephen M. Smith. Temporal autocorrelation in univariate linear modeling of FMRI data. NeuroImage, 14(6):1370–1386, 2001. * [8] Mark W. Woolrich, Timothy E.J. Behrens, Christian F. Beckmann, Mark Jenkinson, and Stephen M. Smith. Multilevel linear modelling for FMRI group analysis using Bayesian inference. NeuroImage, 21(4):1732–1747, 2004. * [9] Anders Eklund, Thomas E. Nichols, and Hans Knutsson. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. PNAS, 113(28):7900–7905, 2016.
# Local shape of the vapor-liquid critical point on the thermodynamic surface and the van der Waals equation of state J. S. Yu School for Theoretical Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China X. Zhou School for Theoretical Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China J. F. Chen School for Theoretical Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China W. K. Du School for Theoretical Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China X. Wang School for Theoretical Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China Q. H. Liu <EMAIL_ADDRESS>School for Theoretical Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China ###### Abstract Differential geometry is a powerful tool for analyzing the vapor-liquid critical point on the surface of the thermodynamic equation of state. The usual condition for the critical point, $\left(\partial p/\partial V\right)_{T}=0$, presupposes an isothermal process, but the universality of the critical point lies in its independence of whatever process is taken, and so we can further assume $\left(\partial p/\partial T\right)_{V}=0$. The distinction between the critical point and other points on the surface leads us to assume, in addition, that the critical point is geometrically represented by zero Gaussian curvature. A slight extension of the van der Waals equation of state lets its two parameters $a$ and $b$ vary with temperature; the extended equation then satisfies both assumptions and reproduces the usual form when the temperature is approximately the critical one. 
critical point, van der Waals equation of state, Gaussian curvature, saddle point, response functions ## I Introduction In thermal physics, a critical point is the end point of a phase equilibrium curve, the pressure–temperature curve that designates conditions under which a liquid phase and a vapor phase can coexist. The critical point ($T_{C},V_{C},p_{C}$) in the $pV$ diagram is determined by $\left(\frac{\partial p}{\partial V}\right)_{T}=0,\left(\frac{\partial^{2}p}{\partial V^{2}}\right)_{T}=0,$ (1) together with the thermodynamic equation of state (EoS), where the symbols ($T,V,p$) have their usual meaning in ordinary textbooks text1 ; text2 ; text3 ; text4 ; text5 . The phase transition exhibits critical slowing down, universality and scaling, etc., which reflect the fact that the details of the system play an insignificant role lpk ; wilson . How to characterize the essence of the critical point has always been an attractive topic. We note two seemingly independent developments. One is that critical slowing down is path-independent csd1 ; csd2 ; csd3 ; csd4 ; csd5 , which means that, starting from any thermodynamic state in the vicinity of a critical point and approaching it, the system has inherently slow timescales whatever thermodynamic processes are chosen. The second is that a geometrical description of a local point on a curved surface is independent of both the parameters chosen to label the point of the surface and the paths selected to approach it. The strong resemblance of these two facts suggests that a geometrical description of the critical point is advantageous. Based on this observation, we propose that the critical point is geometrically represented by zero Gaussian curvature on the thermodynamic EoS surface, together with some physical assumptions. We hope to use this proposal to resolve a long-standing problem associated with the van der Waals (vdW) EoS. 
The most prominent aspect of the vdW EoS is that it captures many of the qualitative features of the liquid–vapor phase transition, with the possible help of Maxwell’s equal area rule. The vdW EoS was essentially presented history1 (but explicitly given later history2 ) by van der Waals in his 1873 Ph.D. thesis, and for this he was awarded the 1910 Nobel Prize in Physics history1 ; history2 ; history3 ; history4 ; review . The vdW EoS is well-known, $p=\frac{nRT}{V-nb}-\frac{n^{2}a}{V^{2}},$ (2) where the two parameters $a$ and $b$ can be estimated from the critical point and are considered constants specific to each substance, and the other symbols ($n,R$) also have their usual meaning in ordinary textbooks text1 ; text2 ; text3 ; text4 ; text5 . For one mole of fluid, $n=1$, and the values of $T_{C},V_{C},p_{C}$ are given in terms of the $a$ and $b$ parameters text1 ; text2 ; text3 ; text4 ; text5 , $T_{C}=\frac{8a}{27Rb},p_{C}=\frac{a}{27b^{2}},V_{C}=3b.$ (3) With these values, the vdW EoS can be transformed into the following dimensionless form, $p^{\ast}=\frac{8}{3}\frac{t^{\ast}}{v^{\ast}-1/3}-\frac{3}{v^{\ast 2}},$ (4) where, $t^{\ast}\equiv\frac{T}{T_{C}},\text{ }v^{\ast}\equiv\frac{V}{V_{C}},p^{\ast}\equiv\frac{p}{p_{C}}.$ (5) Equation (4) is referred to as the Law of Corresponding States, which holds for all kinds of fluid substances and also originated with the work of van der Waals in about 1873 history1 , when he used the critical temperature and critical pressure to characterize a fluid. However, whether and how the vdW parameters $a$ and $b$ depend on the temperature $T$, or even on the volume $V$, has been a long-standing problem. van der Waals himself was well aware of it history2 , and remarked in his Nobel prize speech: ”I have never been able to consider that the last word had been said about the equation of state and I have continually returned to it during other studies. 
As early as 1873 I recognized the possibility that $a$ and $b$ might vary with temperature, and it is well-known that Clausius even assumed the value of $a$ to be inversely proportional to the absolute temperature.” history1 In fact, although more than a century has passed since the discovery of the vdW EoS, we have neither strong experimental evidence nor a compelling theoretical argument to indicate how the $a$ and $b$ parameters might depend on the temperature and/or volume. There are some theoretical results in statistical mechanics revealing some temperature dependence of $a$ and $b$, for instance in the hard-sphere model text1 ; text2 ; text3 ; text4 ; text5 , but these results are frequently obtained for dilute fluids far from the critical point, and more importantly, they rely heavily on a specific model and lack the universality that is inherent to thermodynamics. The present paper thus addresses two problems. One is why we assume $\left(\partial p/\partial T\right)_{V}=0$, complementary to the first equation of (1), and why we propose that the critical point is geometrically represented by zero Gaussian curvature on the thermodynamic EoS surface. The other is to use the above assumptions to address this long-standing problem within thermodynamics. The paper is organized as follows. In section II, we prove a theorem stating that the local shape of the vapor-liquid critical point on the thermodynamic surface can never be an elliptic point; and in order to completely characterize the local shape of the critical point, we need two more response functions, which are assumed to vanish at the critical point, as implied by the critical slowing down observed in both realistic experiments and computer simulations of the phase transition csd1 ; csd2 ; csd3 ; csd4 ; csd5 . The vanishing response functions lead to the zero Gaussian curvature. 
In section III, the vdW EoS is slightly extended so that the parameters $a$ and $b$ vary with the temperature $T$, which makes it capable of giving zero Gaussian curvature at the critical point, while the usual form of the vdW EoS fails to do so. In section IV, a brief summary of the present study is given. In the present paper, we concentrate on the (intrinsic) Gaussian curvature, which is sufficient to specify the local shape of the two-dimensional thermodynamic EoS surface, but we will also give the (extrinsic) mean curvature as a contrasting quantity. In geometry, ”the curvature” usually refers to the intrinsic one. ## II Local shape of the vapor-liquid critical point on the EoS surface and a proposal In differential geometry, the local shapes of a two-dimensional curved surface are completely classified into three types: elliptic, hyperbolic and parabolic, corresponding to Gaussian curvature greater than, less than, or equal to zero, respectively docarmo . For a thermodynamic EoS $p=p(T,V)$, which can be treated as a two-dimensional surface in the three-dimensional flat space with coordinates $p$, $T$ and $V$, we now show that the vapor-liquid critical point cannot be an elliptic point. In geometry, it is preferable to use the dimensionless equation of the EoS surface $p=p(T,V)$. 
Straightforward calculations give the mean curvature $H$ and the Gaussian curvature $K$, respectively, $\displaystyle H$ $\displaystyle=$ $\displaystyle\frac{\left(\frac{\partial^{2}p}{\partial V^{2}}\right)_{T}\left(\left(\frac{\partial p}{\partial T}\right)_{V}^{2}+1\right)+\left(\frac{\partial^{2}p}{\partial T^{2}}\right)_{V}\left(\left(\frac{\partial p}{\partial V}\right)_{T}^{2}+1\right)-2\left(\frac{\partial p}{\partial V}\right)_{T}\left(\frac{\partial p}{\partial T}\right)_{V}\left(\frac{\partial^{2}p}{\partial V\partial T}\right)}{2\left(\left(\frac{\partial p}{\partial V}\right)_{T}^{2}+\left(\frac{\partial p}{\partial T}\right)_{V}^{2}+1\right)^{3/2}},$ (6) $\displaystyle K$ $\displaystyle=$ $\displaystyle\frac{\left(\frac{\partial^{2}p}{\partial V^{2}}\right)_{T}\left(\frac{\partial^{2}p}{\partial T^{2}}\right)_{V}-\left(\frac{\partial^{2}p}{\partial V\partial T}\right)^{2}}{\left(\left(\frac{\partial p}{\partial V}\right)_{T}^{2}+\left(\frac{\partial p}{\partial T}\right)_{V}^{2}+1\right)^{2}}.$ (7) At the critical point, where the conditions (1) apply, we have the mean curvature $H_{C}$ and Gaussian curvature $K_{C}$, respectively, $H_{C}=\frac{\left(\frac{\partial^{2}p}{\partial T^{2}}\right)_{V}}{2\left(\left(\frac{\partial p}{\partial T}\right)_{V}^{2}+1\right)^{3/2}},K_{C}=-\frac{\left(\frac{\partial^{2}p}{\partial V\partial T}\right)^{2}}{\left(\left(\frac{\partial p}{\partial T}\right)_{V}^{2}+1\right)^{2}},$ (8) which shows that $K_{C}\leq 0$. Thus, we in fact prove a theorem that the local shape of the vapor-liquid critical point on the thermodynamic surface can never be an elliptic point. To illustrate the mean and Gaussian curvature of the surface of the thermodynamic EoS, let us first consider two simple systems. For the incompressible liquid EoS $V=const.$, which is a flat plane, both curvatures are zero. 
The ideal gas EoS surface $p=nRT/V$ can be rewritten in the dimensionless form $p^{\ast}v^{\ast}=t^{\ast}$ with a reference point ($p_{0}$, $V_{0}$, $T_{0}\left(=p_{0}V_{0}/\left(nR\right)\right)$), where $t^{\ast}\equiv T/T_{0},$ $v^{\ast}\equiv V/V_{0},p^{\ast}\equiv p/p_{0}$. The mean curvature $H$ and Gaussian curvature $K$ are, respectively, $H=\frac{t^{\ast}v^{\ast 3}}{\left(t^{{}^{\ast}2}+v^{\ast 2}+v^{\ast 4}\right)^{3/2}},K=-\frac{v^{\ast 4}}{\left(t^{{}^{\ast}2}+v^{{}^{\ast}2}+v^{\ast 4}\right)^{2}}.$ (9) Since the Gaussian curvature $K<0$ is negative definite, every point on the ideal gas EoS surface is a saddle point. Now we examine the vdW EoS surface (2); it is preferable to use the dimensionless form (4). The mean curvature $H$ and Gaussian curvature $K$ are, respectively, $H=9v^{\ast 5}(3v^{\ast}-1)^{3}\frac{F_{1}(t^{\ast},v^{\ast})}{\left(F_{2}(t^{\ast},v^{\ast})\right)^{3/2}},K=-\frac{576(3v^{\ast}-1)^{4}v^{\ast 12}}{\left(F_{2}(t^{\ast},v^{\ast})\right)^{2}},$ (10) where, $F_{1}(t^{\ast},v^{\ast})=8t^{\ast}v^{\ast 4}-27v^{\ast 3}+27v^{\ast 2}-73v^{\ast}+65,$ (11) $\displaystyle F_{2}(t^{\ast},v^{\ast})$ $\displaystyle=$ $\displaystyle 576t^{\ast 2}v^{\ast 6}-2592t^{\ast}v^{\ast 5}+1728t^{\ast}v^{\ast 4}-288t^{\ast}v^{\ast 3}$ (12) $\displaystyle+81v^{\ast 10}-108v^{\ast 9}+630v^{\ast 8}-396v^{\ast 7}+65v^{\ast 6}$ $\displaystyle+2916v^{\ast 4}-3888v^{\ast 3}+1944v^{\ast 2}-432v^{\ast}+36.$ At the critical point $(t^{\ast},v^{\ast})=(1,1)$, we have, respectively, $H_{C}=0,\text{ }K_{C}=-\frac{36}{289}\approx-0.125.$ (13) The negative Gaussian curvature $K_{C}\approx-0.125$ indicates that the critical point is a hyperbolic point, more precisely, a saddle point docarmo . A comparison of the Gaussian curvatures (9) for the ideal gas and (13) for the vdW EoS suggests that there is no qualitative difference between them. 
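The value $K_{C}=-36/289$ in (13) is easy to verify symbolically. The following sketch (assuming sympy is available) implements the curvature formulas (6) and (7) for the dimensionless vdW EoS (4) and evaluates them at the critical point:

```python
import sympy as sp

t, v = sp.symbols('t v', positive=True)
# Dimensionless vdW EoS (4): p* as a function of t* and v*.
p = sp.Rational(8, 3) * t / (v - sp.Rational(1, 3)) - 3 / v**2

pv, pt = sp.diff(p, v), sp.diff(p, t)
pvv, ptt, pvt = sp.diff(p, v, 2), sp.diff(p, t, 2), sp.diff(p, v, t)

# Mean curvature (6) and Gaussian curvature (7) of the surface p = p(t, v).
H = (pvv * (pt**2 + 1) + ptt * (pv**2 + 1) - 2 * pv * pt * pvt) \
    / (2 * (pv**2 + pt**2 + 1) ** sp.Rational(3, 2))
K = (pvv * ptt - pvt**2) / (pv**2 + pt**2 + 1)**2

Hc = sp.simplify(H.subs({t: 1, v: 1}))
Kc = sp.simplify(K.subs({t: 1, v: 1}))
assert Hc == 0 and Kc == sp.Rational(-36, 289)  # reproduces (13)
```

At $(t^{\ast},v^{\ast})=(1,1)$ the first and second $v^{\ast}$-derivatives vanish, so only the mixed term survives in the numerator of (7), giving $K_{C}=-6^{2}/(1+4^{2})^{2}=-36/289$.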
This is a little odd, for we believe that a realistic EoS differs from the ideal gas EoS in a qualitative sense, not merely a quantitative one. By the critical point on the $PV$ diagram, we mean a stationary inflection point in the constant-temperature line (the critical isotherm), whose location is determined by the two equations in (1). However, if one approaches this point via an isobaric process, an isovolumetric process, or a more complicated process, we do not know whether such a point exhibits the same singularity. Therefore we must seek a general condition for the critical point, independent of thermodynamic paths. At a local point of the thermodynamic EoS surface $p=p(T,V)$, the tangential plane is spanned by two independent vectors ($dT,dV$). At the critical point ($T_{C},V_{C}$), we have $\left(\partial p/\partial V\right)_{T}=0$ (1), which asserts the existence of a limit along the isotherm. Such a limit should exist regardless of whether we move along the isotherm ($T=const.$) or along the isovolumetric line ($V=const.$), implying that we can further impose $\left(\partial p/\partial T\right)_{V}=0$ at the critical point. Another condition is inspired by the Gaussian curvature, which is independent of the detailed structure of matter; the simplest assumption is $K_{C}=0$, implying $\partial^{2}p/\partial V\partial T=0$. In sum, we propose two additional conditions for the critical point on the EoS surface $p=p(T,V)$, $\left(\partial p/\partial T\right)_{V}=0\text{, }\partial^{2}p/\partial V\partial T=0.$ (14) It is worth mentioning that, in contrast to realistic experiments, in which these two response functions seem hard to measure near the critical point, computer simulations are more feasible csd1 ; csd2 ; csd3 ; csd4 and show that critical slowing down is really an overall phenomenon no matter what path is chosen to approach the critical point. 
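Before turning to the extension, the textbook values (3) can themselves be checked symbolically by applying the isothermal conditions (1) to the vdW EoS (2). This is a sketch assuming sympy is available, for one mole ($n=1$):

```python
import sympy as sp

T, V, a, b, R = sp.symbols('T V a b R', positive=True)
p = R * T / (V - b) - a / V**2  # vdW EoS (2) with n = 1

# Conditions (1): the critical isotherm has a stationary inflection point.
sols = sp.solve([sp.diff(p, V), sp.diff(p, V, 2)], [T, V], dict=True)
assert any(sp.simplify(s[V] - 3 * b) == 0 and
           sp.simplify(s[T] - 8 * a / (27 * R * b)) == 0 for s in sols)

# p_C follows by substituting T_C and V_C back into the EoS, as in (3).
crit = {T: 8 * a / (27 * R * b), V: 3 * b}
assert sp.simplify(p.subs(crit) - a / (27 * b**2)) == 0
```

Eliminating $RT$ between the two conditions gives $V=\frac{3}{2}(V-b)$, i.e. $V_{C}=3b$, from which $T_{C}$ and $p_{C}$ follow directly.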
## III The proposal and temperature dependence of vdW parameters $a$ and $b$ We are confident that the vdW EoS with constant parameters $a$ and $b$ is unsatisfactory in the following two senses. The first is that the Gaussian curvature at the critical point, $K_{C}\approx-0.125$ (13), is not qualitatively different from that at any other point, apart from limiting situations. The second is that this value $K_{C}\approx-0.125$ manifestly depends on a special thermodynamic path, i.e., the isotherm (1). Fortunately, the vdW EoS can be adapted to remove these weaknesses. The slightest extension of the vdW EoS is to let the two constants $a$ and $b$ in the vdW EoS (2) depend on the temperature as $a\rightarrow a(T)$ and $b\rightarrow b(T)$. The critical values of $T_{C},V_{C},p_{C}$ are entirely determined by $a(T_{C})$ and $b(T_{C})$, $T_{C}=\frac{8a(T_{C})}{27Rb(T_{C})},p_{C}=\frac{a(T_{C})}{27b(T_{C})^{2}},V_{C}=3b(T_{C}).$ (15) With the introduction of the dimensionless $\alpha\left(t^{\ast}\right)$ and $\beta\left(t^{\ast}\right)$ instead of $a(T)$ and $b(T)$ in the following, $\alpha\left(t^{\ast}\right)\equiv\frac{a(T)}{a(T_{C})}=\frac{a(t^{\ast}T_{C})}{a(T_{C})},\beta\left(t^{\ast}\right)\equiv\frac{b(T)}{b(T_{C})}=\frac{b(t^{\ast}T_{C})}{b(T_{C})},$ (16) the Law of Corresponding States no longer holds except in the special case $\alpha=const.$ and $\beta=const.$; we have instead the dimensionless extended vdW EoS, $p^{\ast}=\frac{8}{3}\frac{t^{\ast}}{v^{\ast}-\beta\left(t^{\ast}\right)/3}-\frac{3\alpha\left(t^{\ast}\right)}{v^{\ast 2}}.$ (17) Near the critical point, we assume that the $a(T)$ and $b(T)$ parameters take the following forms, $\displaystyle a\left(T\right)$ $\displaystyle\approx$ $\displaystyle a\left(T_{C}\right)+a^{\prime}\left(T_{C}\right)(T-T_{C})+\frac{1}{2}a^{\prime\prime}\left(T_{C}\right)(T-T_{C})^{2},$ (18a) $\displaystyle b\left(T\right)$ $\displaystyle\approx$ $\displaystyle 
b\left(T_{C}\right)+b^{\prime}\left(T_{C}\right)(T-T_{C})+\frac{1}{2}b^{\prime\prime}\left(T_{C}\right)(T-T_{C})^{2},$ (18b) where, $g^{\prime}=\frac{dg}{dT},g=a,b,a^{\prime},b^{\prime},....$ (19) The relations between $\left(\alpha^{\prime}(1),\beta^{\prime}(1)\right)$ and $\left(a^{\prime}\left(T_{C}\right),b^{\prime}\left(T_{C}\right)\right)$ are, $\alpha^{\prime}\left(t^{\ast}=1\right)=T_{C}\frac{a^{\prime}\left(T_{C}\right)}{a\left(T_{C}\right)},\beta^{\prime}\left(t^{\ast}=1\right)=T_{C}\frac{b^{\prime}\left(T_{C}\right)}{b\left(T_{C}\right)},$ (20) where $\alpha^{\prime}=d\alpha/dt^{\ast}$ and $\beta^{\prime}=d\beta/dt^{\ast}$, etc. The mean curvature $H$ and Gaussian curvature $K$ of the dimensionless extended vdW EoS surface are very long and complicated expressions. However, the expressions for both $H$ and $K$ at the critical point $\left(t^{\ast},v^{\ast},\alpha,\beta\right)=(1,1,1,1)$ are simply, $H_{C}=\frac{2\beta^{\prime}\left(\beta^{\prime}+2\right)-3\alpha^{\prime\prime}+2\beta^{\prime\prime}}{2\left(G(\alpha^{\prime},\beta^{\prime})\right)^{3/2}},K_{C}=-\frac{36(1-\alpha^{\prime}+\beta^{\prime})^{2}}{\left(G(\alpha^{\prime},\beta^{\prime})\right)^{2}},$ (21) where, $G(\alpha^{\prime},\beta^{\prime})=4\beta^{\prime 2}+9\alpha^{\prime 2}-12\alpha^{\prime}\beta^{\prime}+16\beta^{\prime}-24\alpha^{\prime}+17.$ (22) _The distinctive feature of the extended vdW EoS is that it contains two possible local shapes at the critical point: hyperbolic and parabolic_, for $K_{C}\leq 0$. 
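Expression (21) for $K_{C}$, and its two special cases, can be checked with the same symbolic machinery. In this sketch (assuming sympy), $\alpha$ and $\beta$ are expanded to second order around $t^{\ast}=1$ as in (18a)-(18b):

```python
import sympy as sp

t, v = sp.symbols('t v', positive=True)
a1, b1, a2, b2 = sp.symbols('a1 b1 a2 b2')  # alpha'(1), beta'(1), alpha''(1), beta''(1)

# Quadratic expansions of alpha(t*) and beta(t*) around the critical point.
alpha = 1 + a1 * (t - 1) + a2 * (t - 1)**2 / 2
beta = 1 + b1 * (t - 1) + b2 * (t - 1)**2 / 2

# Extended dimensionless vdW EoS (17).
p = sp.Rational(8, 3) * t / (v - beta / 3) - 3 * alpha / v**2

pv, pt = sp.diff(p, v), sp.diff(p, t)
pvv, ptt, pvt = sp.diff(p, v, 2), sp.diff(p, t, 2), sp.diff(p, v, t)
K = (pvv * ptt - pvt**2) / (pv**2 + pt**2 + 1)**2  # Gaussian curvature (7)

Kc = sp.simplify(K.subs({t: 1, v: 1}))
# Constant parameters (alpha' = beta' = 0) recover (13); alpha' = 2, beta' = 1 flattens the point.
assert Kc.subs({a1: 0, b1: 0}) == sp.Rational(-36, 289)
assert Kc.subs({a1: 2, b1: 1}) == 0
```

Since $\partial^{2}p^{\ast}/\partial v^{\ast 2}$ still vanishes at $(1,1)$, the second derivatives $\alpha^{\prime\prime}(1)$ and $\beta^{\prime\prime}(1)$ drop out of $K_{C}$, which depends only on $\alpha^{\prime}$ and $\beta^{\prime}$ as stated in (21).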
The case $K_{C}=0$ can be realized provided $1-\alpha^{\prime}+\beta^{\prime}=0.$ (23) Note that the two response functions $\left(\partial p/\partial T\right)_{V}$ and its partial derivative with respect to volume, $\left(\partial^{2}p/\partial V\partial T\right)$, take the following values at the critical point $\left(T_{C},V_{C},p_{C}\right)$, respectively, $\displaystyle\left(\frac{\partial p}{\partial T}\right)_{V}$ $\displaystyle=$ $\displaystyle\frac{9RT_{C}b^{\prime}(T_{C})-4a^{\prime}(T_{C})+18Rb(T_{C})}{36\left(b\left(T_{C}\right)\right)^{2}},$ (24a) $\displaystyle\frac{\partial^{2}p}{\partial V\partial T}$ $\displaystyle=$ $\displaystyle-\frac{27RT_{C}b^{\prime}(T_{C})-8a^{\prime}(T_{C})+27Rb(T_{C})}{108\left(b\left(T_{C}\right)\right)^{3}}.$ (24b) These two values are sufficient to completely fix the two derivatives $\left(a^{\prime}\left(T_{C}\right),b^{\prime}\left(T_{C}\right)\right)$, given that the parameter $b\left(T_{C}\right)$ in (24a)-(24b) is fixed by the magnitude of the molar critical volume via (15). Now let us examine the situation where both response functions in (14) vanish at the critical point. First, once the second response function vanishes at the critical point, $\left(\partial^{2}p/\partial V\partial T\right)=0$, i.e., $27RT_{C}b^{\prime}(T_{C})-8a^{\prime}(T_{C})+27Rb(T_{C})=0$ from (24b), we have from the relations (20), $27RT_{C}b\left(T_{C}\right)\beta^{\prime}-8\alpha^{\prime}a\left(T_{C}\right)+27RT_{C}b(T_{C})=0,$ (25) which reproduces $1-\alpha^{\prime}+\beta^{\prime}=0$ (23) with $27T_{C}Rb\left(T_{C}\right)=8a(T_{C})$ (15). Secondly, once the first response function in (14) vanishes at the critical point, $\left(\partial p/\partial T\right)_{V}=0$, i.e., $9RT_{C}b^{\prime}(T_{C})-4a^{\prime}(T_{C})+18Rb(T_{C})=0$ from (24a), we have $4-3\alpha^{\prime}+2\beta^{\prime}=0$. 
Combining the two equations $1-\alpha^{\prime}+\beta^{\prime}=0$ and $4-3\alpha^{\prime}+2\beta^{\prime}=0$ yields, $\alpha^{\prime}=2,\ \beta^{\prime}=1,\text{ i.e., }a^{\prime}\left(T_{C}\right)=2a\left(T_{C}\right)/T_{C},\ b^{\prime}\left(T_{C}\right)=b\left(T_{C}\right)/T_{C}.$ (26) With these values, not only is the critical point locally flat, but also, accurate up to first order in $(t^{\ast}-1)$, $a\left(T\right)$ and $b\left(T\right)$ are $\displaystyle a\left(T\right)$ $\displaystyle\approx$ $\displaystyle a\left(T_{C}\right)+2a\left(T_{C}\right)(t^{\ast}-1)=-a\left(T_{C}\right)+2a\left(T_{C}\right)t^{\ast},$ (27a) $\displaystyle b\left(T\right)$ $\displaystyle\approx$ $\displaystyle b\left(T_{C}\right)+b\left(T_{C}\right)(t^{\ast}-1)=b\left(T_{C}\right)t^{\ast}.$ (27b) When $t^{\ast}\approx 1$, i.e., $T\approx T_{C}$, $a\left(T\right)\approx a\left(T_{C}\right)$ and $b\left(T\right)\approx$ $b\left(T_{C}\right)$, and the usual form of the vdW EoS is recovered. It is important to note that, from the two relations above, the usual vdW EoS is valid when the thermodynamic states are very close to the critical point, and $a\left(T\right)$ and $b\left(T\right)$ are also solely determined by $a\left(T_{C}\right)$ and $b\left(T_{C}\right)$. ## IV Conclusions Differential geometry is a powerful tool for revealing the intrinsic nature of a curved surface, and it is advantageous for analyzing the critical point on the EoS surface. On the tangential plane of the critical point, the existence of the limit $\left(\partial p/\partial V\right)_{T}=0$ presupposes the isothermal process. However, the essence of the critical point is its independence of whatever process is taken, and of the detailed structure of matter, etc. We can therefore assume $\left(\partial p/\partial T\right)_{V}=0$ and $K_{C}=0$ at the critical point on the EoS surface. The vdW EoS is the simplest one for understanding the liquid-gas transition. 
With constant vdW parameters $a$ and $b$, the Gaussian curvature is negative definite, and there is no qualitative distinction between the vdW EoS and the ideal gas EoS. According to our assumptions, the vdW EoS is slightly modified or extended so that the vdW parameters $a$ and $b$ vary with temperature, allowing for zero Gaussian curvature at the critical point. Our approach sheds light on the theoretical problem of how the vdW parameters depend on the temperature. ###### Acknowledgements. This work is financially supported by National Natural Science Foundation of China under Grant No. 11675051. ## References * (1) R. K. Pathria, P. D. Beale, Statistical Mechanics, 3rd ed., (Oxford: Butterworth-Heinemann, 2011). * (2) K. Huang, Statistical Mechanics, 2nd ed., (New York: Wiley, 1986). * (3) M. Toda, R. Kubo, N. Saito, Statistical Physics I: Equilibrium Statistical Mechanics, 2nd ed., (Berlin: Springer-Verlag, 2012). * (4) F. Reif, Fundamentals of Statistical and Thermal Physics, (New York: McGraw-Hill, 1965). * (5) L. D. Landau and E. M. Lifshitz, Statistical Physics (I), 3rd ed., (Oxford: Pergamon, 1980). * (6) L. P. Kadanoff, W. Gotze, D. Hamblen, R. Hecht, E. A. S. Lewis, V. V. Palciauskas, M. Rayl, J. Swift, D. Aspnes and J. W. Kane, Static Phenomena Near Critical Points: Theory and Experiment, Rev. Mod. Phys. 39(1967)395. * (7) K. G. Wilson, Renormalization Group and Critical Phenomena, Phys. Rev. B, 4(1971)3174-3183. * (8) P. C. Hohenberg, B. I. Halperin, Theory of dynamic critical phenomena. Rev. Mod. Phys. 49(1977)435. * (9) U. Wolff, Critical slowing down. Nuclear Physics B - Proceedings Supplements, 17(1990)93-102. * (10) M. Das, J. R. Green, Critical fluctuations and slowing down of chaos. Nat. Comm., 10(2019)2155. * (11) T. Brett, M. Ajelli, Q. H. Liu, M. G. Krauland, J. J. Grefenstette, W. G. van Panhuis, et al. Detecting critical slowing down in high-dimensional epidemiological systems. PLoS Comput Biol, 16(2020)e1007679. 
* (12) C. A. Kuehn, Mathematical framework for critical transitions: bifurcations, fast–slow systems and stochastic dynamics. Physica D, 240(2011)1020–1035. * (13) J. D. van der Waals, On the Continuity of the Gaseous and Liquid States, Leiden, 1873. * (14) J. D. van der Waals, The Equation of State for Gases and Liquids (December 12, 1910). In Nobel lectures including presentation speeches and laureates’ biographies. Physics, 1901-1921; (Amsterdam: Elsevier, 1967) pp 254-265. * (15) M. J. Klein, The Historical Origins of the Van Der Waals Equation. Physica, 73(1974)28-47. * (16) J. L. Lebowitz, E. M. Waisman, Statistical Mechanics of Simple Fluids: Beyond van Der Waals. Phys. Today, 33(1980)24-30. * (17) G. M. Kontogeorgis, R. Privat, and Jean-Noel Jaubert, Taking Another Look at the van der Waals Equation of State – Almost 150 Years Later. J. Chem. Eng. Data, 64(2019)4619-4637. * (18) James Clerk Maxwell’s thermodynamic surface, National Museum of Scotland, https://www.nms.ac.uk/explore-our-collections/stories/science-and-technology/james-clerk-maxwell-inventions/james-clerk-maxwell/thermodynamic-surface/ * (19) R. D. Kriz, Thermodynamic Case Study: Gibbs’ Thermodynamic Graphical Method, https://esm.rkriz.net/classes/ESM4714/methods/Gibbs.html. * (20) M. P. do Carmo, Differential Geometry of Curves and Surfaces (Prentice-Hall, New York, 1976) pp 146-147.
# Triangle relations for XYZ states Kai Zhu Institute of High Energy Physics, Beijing 100049, China (<EMAIL_ADDRESS>) ###### Abstract We propose novel triangle relations, distinct from the well-known triangle singularity, for better understanding of the exotic XYZ states. Nine XYZ resonances, $X(3872)$, $Y(4230)$, $Z_{c}(3900)$, $X(4012)$, $Y(4360/4390)$, $Z_{c}(4020)$, $X(4274)$, $Y(4660)$, and $Z_{cs}(3985)$, have been classified into triples to construct three triangles, based on the assumption that they are all tetraquark states. Here $X(4012)$ has not yet been observed experimentally, and we predict its mass and, roughly, its width. Based on this hypothesis we also suggest some channels that deserve high-priority searches, as well as predictions of a few production/decay rates in these channels. We hope further experimental studies of the XYZ states will benefit from our results. ###### keywords: charmonium-like states; XYZ states; triangle relations. Charmonium-like states, the so-called XYZ particles, have been hot topics in the high-energy physics community since $X(3872)$ was discovered in 2003 by Belle [1]. Their special characteristics make them good candidates for multiple-quark states [2] beyond the conventional mesons (quark-antiquark pairs) and baryons (three quarks). In addition to being a novel form of matter, they also provide ideal laboratories for studying non-perturbative QCD, the theory describing the strong interaction. Studying their production and decay rates individually, and comparing them to those of the charmonium states, should shed light on the internal structures and interaction mechanisms of these exotic states. This in turn will lead to a better understanding of the strong interaction, such as color confinement, in the few-$\mathrm{GeV}$ energy region. 
Various theoretical models have been proposed to interpret the experimental results and describe the nature of these XYZ states, including, for example, tetraquark, hybrid, conventional charmonium, and molecular models, and even cusp effects. For more details, readers may consult the fourth part of [2] and the references quoted therein. In short, however, a global and satisfactory framework that can describe all the XYZ states well is still absent. The situation of our knowledge of multiple-quark states is somewhat similar to that of glueballs: both are allowed by QCD but still lack conclusive experimental evidence. Moreover, the mixing of these exotic states with normal mesons or baryons makes the situation much more complex. Recently, three results reported by the BESIII collaboration have caught our attention. Two of them are the confirmation of the radiative transition from $Y(4230)$ to $X(3872)$ [3] and of the pion transition from $Y(4230)$ to $Z_{c}(3900)$ [4]. The third is the first observation of $Z_{cs}(3985)$ [5], based on data samples of $e^{+}e^{-}$ collisions at center-of-mass energies close to $Y(4660)$. Although the statistics are low, it appears that $Z_{cs}(3985)$ is produced by a kaon transition from $Y(4660)$. This new information, together with previous experimental results, inspires us to investigate the relations between these XYZ states and to suggest a general framework containing them. In this framework, XYZ states are classified into various sets, each containing one X, one Y, and one Z state with similar quark components. A single set can thus be illustrated by a triangle: each vertex of the triangle is a state, while each side is a hadronic or radiative transition. In this framework, nine XYZ states, $X(3872)$, $Y(4230)$, $Z_{c}(3900)$, $X(4012)$, $Y(4360/4390)$, $Z_{c}(4020)$, $X(4274)$, $Y(4660)$, and $Z_{cs}(3985)$, have been classified into three sets to construct three triangles. 
We believe such a framework will provide insights into these states and can guide future experimental searches and measurements. The construction of a triangle starts with a simple hypothesis: that the XYZ states are tetraquark states and can be described universally by similar quark components, i.e. $\ket{h}\ket{c\bar{c}}$. Here $\ket{h}$ is a superposition of light-quark components, which for the $X$ and $Y$ states takes the form $\ket{h}_{X/Y}=p_{h1}\ket{u\bar{u}}+p_{h2}\ket{d\bar{d}}+p_{h3}\ket{s\bar{s}}\;\;,$ (1) where $p_{h1}$, $p_{h2}$, and $p_{h3}$ are amplitude strengths that can be determined as parameters from experimental measurements. The form of $\ket{h}$ for the $Z$ states should be slightly different, however, since their isospin is not zero, and can be written as $\ket{h}_{Z}=p^{\prime}_{h1}\ket{u\bar{d}}+p^{\prime}_{h2}\ket{u\bar{s}}+p^{\prime}_{h3}\ket{d\bar{u}}+p^{\prime}_{h4}\ket{s\bar{u}}+p^{\prime}_{h5}\ket{d\bar{s}}+p^{\prime}_{h6}\ket{s\bar{d}}+p^{\prime}_{h7}\ket{u\bar{u}}-p^{\prime}_{h7}\ket{d\bar{d}}\;\;,$ (2) where the parameters $p^{\prime}_{h}$ take corresponding values for the various states. Similarly, $\ket{c\bar{c}}$ can be expanded into a superposition of hidden-charm states as $\ket{c\bar{c}}=p_{c1}\ket{\eta_{c}}+p_{c2}\ket{J/\psi}+p_{c3}\ket{\chi_{c0}}+p_{c4}\ket{\chi_{c1}}+p_{c5}\ket{\chi_{c2}}+p_{c6}\ket{h_{c}}+p_{c7}\ket{\eta^{\prime}_{c}}+p_{c8}\ket{\psi^{\prime}}+\cdots\;\;,$ (3) where only the charmonia below the open-charm threshold have been written out explicitly, since we assume the contribution from hidden-charm states above the threshold is negligible in the cases discussed here. Note that our hypothesis differs significantly from models in which these XYZ states are pure charmonia or fake resonances arising from cusp effects. From this hypothesis, a natural inference is that the mass of any XYZ state is mainly determined by its heavy-quark component and strongly affected by its couplings to open-charm channels. 
This inference is used to determine the mass weights of these triangles. Since there are many different open-charm combinations, we only consider here the thresholds of the following S-wave open-charm channels: $DD_{1}(2420)$, $D^{*}D_{1}(2420)$, $D_{s}D_{s1}(2460)$, $D^{*}_{s}D_{s1}(2460)$, $D^{*}_{s}D_{s1}(2536)$, $D^{*}D_{2}(2460)$, and $D^{*}_{s}D_{s2}(2573)$, which are believed to provide the largest contributions. From a simple sum of the masses of the D mesons, the thresholds are roughly at $4.29$ GeV, $4.43$ GeV, $4.43$ GeV, $4.57$ GeV, $4.65$ GeV, $4.47$ GeV, and $4.68$ GeV, respectively. Naively there would be seven triangles, with the two corresponding to the same $4.43$ GeV threshold considered degenerate. However, after considering the quark components, it is more likely that the two pairs $(D^{*}D_{1}\mathrm{,}D^{*}D_{2})$ and $(D^{*}_{s}D_{s1}\mathrm{,}D^{*}_{s}D_{s2})$ should be considered degenerate. The reduced four thresholds are then $4.29$ GeV, $4.43(4.47)$ GeV, $4.57$ GeV, and $4.65(4.68)$ GeV, and the shifts of the excited thresholds relative to the fundamental one are $140$ MeV, $280$ MeV, and $360$ MeV. Figure 1: Three triangles for nine XYZ states. Based on the open-charm thresholds and the masses of the well-known XYZ states $X(3872)$, $Y(4230)$, and $Z_{c}(3900)$ [6], we assume that these three states form the vertices of the fundamental triangle, with the radiative and hadronic transitions illustrated as its three sides, as shown in the bottom-left of Fig. 1. Starting from the fundamental triangle, i.e. the X(3872)-Y(4230)-Zc(3900) triangle, we can predict an excited triangle with the mass shift between $4.29$ $\mathrm{GeV}$ and $4.43$ $\mathrm{GeV}$, i.e. $140$ $\mathrm{MeV}$. As shown in Fig. 2, the Y and Z states belonging to the predicted triangle can clearly be associated with the experimental observations Y(4360/4390) and Zc(4020), respectively. 
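The threshold sums and shifts quoted above can be reproduced from approximate D-meson masses; the values below are rounded, illustrative inputs rather than precise PDG figures:

```python
# Approximate D-meson masses in MeV (rounded values, for illustration only).
m = {"D": 1867, "D*": 2009, "Ds": 1968, "Ds*": 2112,
     "D1(2420)": 2421, "D2(2460)": 2461,
     "Ds1(2460)": 2460, "Ds1(2536)": 2535, "Ds2(2573)": 2569}

# The seven S-wave open-charm channels considered in the text.
channels = [("D", "D1(2420)"), ("D*", "D1(2420)"), ("Ds", "Ds1(2460)"),
            ("Ds*", "Ds1(2460)"), ("Ds*", "Ds1(2536)"),
            ("D*", "D2(2460)"), ("Ds*", "Ds2(2573)")]

thresholds = {c: m[c[0]] + m[c[1]] for c in channels}
for (a, b), t in sorted(thresholds.items(), key=lambda kv: kv[1]):
    print(f"{a} {b}: {t / 1000:.2f} GeV")

# Merge the (D* D1, D* D2) and (Ds* Ds1(2460), Ds* Ds2) near-degeneracies and
# take shifts relative to the fundamental D D1(2420) threshold:
fundamental = thresholds[("D", "D1(2420)")]
shifts = [thresholds[("D*", "D1(2420)")] - fundamental,
          thresholds[("Ds*", "Ds1(2460)")] - fundamental,
          thresholds[("Ds*", "Ds1(2536)")] - fundamental]
print(shifts)  # close to the quoted 140, 280, 360 MeV
```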
Here we assume that Y(4360) and Y(4390) are actually one vector state, given their similar masses. However, there is no experimental evidence yet for an X state close to X(4012); it is, though, well matched with another theoretical prediction [7]. By a similar argument, another triangle can be obtained by shifting the masses up by $360$ MeV, the difference between the thresholds at $4.29$ $\mathrm{GeV}$ and $4.65$ $\mathrm{GeV}$, as shown in Fig. 3. Here the predicted X and Y states match the experimental results $X(4274)$ and $Y(4660)$ very well. The mass of the predicted Z state, however, is obviously larger than that of the recently observed $Z_{cs}(3985)$; this discrepancy may be caused by the replacement of the strange quark with an up/down quark, which changes the internal structure of the multiple-quark state and decreases the mass of the $Z$ state. Note that the shift is about $300$ MeV, which is just the difference between the kaon and pion masses. Another possibility is that $Z_{cs}(3985)$ is not the Z state that should fill this triangle, in which case another, heavier Z state deserves to be searched for in different channels. So far, $X(4274)$, discovered in the $B\to J/\psi\phi K$ process [Aaij:2016iza], has not been observed via $Y(4660)\to\gamma X(4274)$, $X(4274)\to\phi J/\psi$, so a search in this radiative transition channel is warranted. Additionally, because the mass estimation of $X(4274)$ is rather rough, other $X$ structures seen in $B\to J/\psi\phi K$, such as $X(4140)$, cannot be excluded, and so $X(4274)$ is only one of the possible candidates. Figure 2: The prediction and experimental observation of the first excited triangle of XYZ states. Figure 3: The prediction and experimental observation of the second excited triangle of XYZ states, which involves the strange quark heavily. The predictions of the two excited triangles above have implicitly applied part of the interaction mechanism proposed in this paper. 
The complete form of the interaction hypothesis can be summarized in the following two rules. * • Shifts between triangles: the radial position or orbital momentum between $\ket{h}$ and $\ket{c\bar{c}}$ for the excited triangles remains similar to that of the fundamental triangle, but $\ket{c\bar{c}}$ changes between the fundamental and excited triangles according to the corresponding open-charm thresholds. * • Transitions within a triangle: $\ket{c\bar{c}}$ remains the same; only the radial position or orbital momentum between $\ket{h}$ and $\ket{c\bar{c}}$ changes. Based on the second rule and the observed channels, there are more channels worth searching for. Those with higher priority are listed in Table 1. Here almost all the observed or searched results are from the PDG [6], while the others are from recently published results [4, 8, 9, 10, 11, 12, 13]. Table 1: The observed/searched and proposed channels for XYZ states. Here $X\eta_{c}$ represents $\eta\pi^{+}\pi^{-}\eta_{c}$, $3\pi\eta_{c}$, $\pi^{+}\pi^{-}\eta_{c}$, and $\gamma\pi^{0}\eta_{c}$; $\psi_{2}$ is $\psi_{2}(3823)$; $D_{1}$ is $D_{1}(2420)$. 
$X(3872)$. Observed/searched: $\rho J/\psi$, $\omega J/\psi$, $D^{0}\bar{D}^{*0}$, $\pi^{0}\chi_{c1}$, $\gamma J/\psi$, $\gamma\psi^{\prime}$, $\gamma D\bar{D}$, $\pi^{0}D^{0}\bar{D}^{0}$. Proposed: $\pi\pi\eta_{c}$, $\pi\pi\chi_{cJ}$.
$Y(4230)$. Observed/searched: $\omega\chi_{c0}$, $\pi\pi J/\psi$, $\pi\pi h_{c}$, $\pi\pi\psi^{\prime}$, $\pi^{+}D^{0}D^{*-}$, $\eta J/\psi$, $\eta^{\prime}J/\psi$, $\pi^{+}\pi^{-}\chi_{cJ}$, $X\eta_{c}$. Proposed: $\pi^{0}\psi^{\prime}$, $\gamma\chi_{cJ}$, $D^{*}\bar{D}^{*}$.
$Z_{c}(3900)$. Observed/searched: $\pi J/\psi$, $\rho\eta_{c}$, $D\bar{D}^{*}$, $\pi\eta_{c}$. Proposed: $\pi h_{c}$, $\pi\psi^{\prime}$, $\pi\pi\chi_{c0}$.
$X(4012)$. Observed/searched: none. Proposed: $\pi\pi\psi^{\prime}$, $\pi\pi\psi_{2}$, $\pi\pi h_{c}$, $\pi\pi D\bar{D}$.
$Y(4360/4390)$. Observed/searched: $\pi\pi\psi^{\prime}$, $\pi\pi\psi_{2}$, $D_{1}\bar{D}$, $\pi\pi h_{c}$, $\pi\pi\psi^{\prime\prime}$, $\eta J/\psi$. Proposed: $\pi D^{*}\bar{D}^{*}$.
$Z_{c}(4020)$. Observed/searched: $\pi h_{c}$, $D\bar{D}^{*}$. Proposed: $\pi\psi^{\prime}$, $\pi\psi_{2}$, $D\bar{D}^{*}$, $\pi D\bar{D}$.
$X(4274)$. Observed/searched: $\phi J/\psi$. Proposed: $D_{s}^{(*)}\bar{D}_{s}^{(*)}$, $\eta\chi_{cJ}$, $\eta^{\prime}\chi_{cJ}$.
$Y(4660)$. Observed/searched: $KKJ/\psi$, $KKh_{c}$, $KK\psi^{\prime}$, $KD^{(*)}D_{s}^{(*)}$. Proposed: $\phi\chi_{cJ}$, $\eta J/\psi$, $\eta^{\prime}J/\psi$, $\eta\psi^{\prime}$, $\eta^{\prime}\psi^{\prime}$.
$Z_{cs}(3985)$. Observed/searched: $DD_{s}^{*}$, $D^{*}D_{s}$. Proposed: $KJ/\psi$, $K\eta_{c}$, $Kh_{c}$, $K\psi^{\prime}$, $K\chi_{c0}$.
Furthermore, based on Eqs. 
(1), (2), and (3) and the following partial width formula $\Gamma(X/Y/Z\to f_{h}f_{c})\propto\left|\braket{f_{h}}{h}\right|^{2}\left|\braket{f_{c}}{c\bar{c}}\right|^{2}\;,$ (4) where $f_{h}$ and $f_{c}$ denote the light hadrons and charmonia in the final states, we derive formulae relating the partial widths of XYZ states decaying into light hadrons plus hidden-charm final states: $\frac{\Gamma(X\to f_{h1}f_{c1})}{\Gamma(X\to f_{h2}f_{c1})}=\frac{\Gamma(Y\to f_{h1}f_{c2})}{\Gamma(Y\to f_{h2}f_{c2})}$ (5) and $\frac{\Gamma(Z\to f_{h1}f_{c1})}{\Gamma(Z\to f_{h1}f_{c2})}=\frac{\Gamma(Y\to f_{h2}f_{c1})}{\Gamma(Y\to f_{h2}f_{c2})}\;.$ (6) Here the phase-space effect has been ignored, since the masses of the XYZ states in one triangle are similar to each other. For example, from Eq. (5) and the assumption that $X(3872)$ and $Y(4230)$ are assigned to the same triangle, we have $\frac{\Gamma(X(3872)\to\pi\pi J/\psi)}{\Gamma(X(3872)\to\omega J/\psi)}=\frac{\Gamma(Y(4230)\to\pi\pi\chi_{c0})}{\Gamma(Y(4230)\to\omega\chi_{c0})}\;,$ where the corresponding light-hadron and hidden-charm final states $\pi\pi$, $J/\psi$, $\omega$, and $\chi_{c0}$ have replaced $f_{h1}$, $f_{c1}$, $f_{h2}$, and $f_{c2}$, respectively. However, because the available experimental measurements are very limited, only a few predictions can be obtained. We do not use the results from [4], since it has a significance of only $4.2$ standard deviations and its results are consistent with the charged mode [14], which provides more precise measurements of $e^{+}e^{-}\to\pi\pi J/\psi$. We also do not use the results from [11, 12, 13], since unfortunately only upper limits on the Born cross sections are provided in these papers, without the corresponding coupling strengths that could be applied directly in our calculation. These predictions are listed in Table 2. Table 2: The predictions of production or decay rates for XYZ states. 
$B(Y(4230)\to\pi^{+}\pi^{-}\chi_{c0})\Gamma_{ee}^{Y(4230)}$ (in eV): $2.3\pm 0.9$, obtained with $Y(4230)\to\omega\chi_{c0}$.
$B(Y(4230)\to\pi^{+}\pi^{-}\chi_{c0})\Gamma_{ee}^{Y(4230)}$ (in eV): $<11.4$, obtained with $Y(4230)\to\gamma\chi_{c0}$.
$\Gamma(Z_{c}(3900)\to\pi h_{c})/\Gamma(Z_{c}(3900)\to\pi J/\psi)$: $0.4\sim 3.0$, not yet seen in experiment.
$\Gamma(Z_{c}(3900)\to\pi\psi^{\prime})/\Gamma(Z_{c}(3900)\to\pi J/\psi)$: $0.1\sim 1.0$, not yet seen in experiment.
All the obtained predictions are consistent with each other when different experimental inputs are used in the calculation. We note that both $\Gamma(Z_{c}(3900)\to\pi h_{c})/\Gamma(Z_{c}(3900)\to\pi J/\psi)$ and $\Gamma(Z_{c}(3900)\to\pi\psi^{\prime})/\Gamma(Z_{c}(3900)\to\pi J/\psi)$ carry large uncertainties. The reason is that the experimental result for $Y(4230)\to\pi\pi J/\psi$ is used, and this measurement admits multiple solutions [14]. Considering that no evidence has been observed in experiments so far, the destructive solution is favored over the constructive one. During our calculations we have also realized that negative results are worth reporting, since through these relations upper limits can sometimes be transformed into bounds or other constraints. In summary, we have proposed three triangle relations, based on the tetraquark component hypothesis $\ket{h}\ket{c\bar{c}}$ with the S-wave open-charm thresholds as mass weights, to classify the exotic XYZ states. Except for $X(4012)$, which has not yet been observed experimentally, all the other predicted states are largely consistent with presently observed states. Note that the triangle relations proposed here are not the well-known triangle singularity; they are driven purely by experimental results. Within this framework, we also propose some channels deserving searches and predict a few of their production/decay rates. 
These are the main results of this paper and are listed in Tables 1 and 2. We hope further experimental discoveries and measurements will benefit from our results. It should also be mentioned that the $Z_{cs}$ state originally predicted by us is obviously heavier than the recently discovered $Z_{cs}(3985)$. Further searches in other channels such as $KJ/\psi$ are warranted, even though this mass difference may be explained by the mass difference between the kaon and the pion. We have not discussed the widths of the XYZ states, nor the decay rates of the open-charm channels, since they should be significantly affected by the strong coupling to open-charm channels near threshold, and so far we do not have a convincing method to deal with this. However, the widths of the XYZ states in one triangle should be similar to each other, due to their similar internal structure, except when a state lies below the open-charm threshold, in which case its width should be obviously narrower. This means the width of $X(4012)$ should be dozens of MeV or narrower, since it seems to lie just below the $D^{*}D^{*}$ threshold. The number of S-wave open-charm thresholds is larger than the number of triangles proposed in this paper, owing to the limited experimental results, which implies there may be undiscovered XYZ states, and more searches are warranted. Moreover, some thresholds are close to each other, which should make the identification and classification of the XYZ states difficult. We have not discussed other possible excited states, which would have different quantum numbers and larger radial positions or higher orbital momenta between $\ket{h}$ and $\ket{c\bar{c}}$, since the experimental results on the XYZ states are still too limited. These unknown states may in principle expand the triangle relations into polygon relations. ## Acknowledgments The author thanks Yuping Guo, Aiqiang Guo, and Wolfgang Kühn for inspiring discussions. 
This work is supported in part by National Key Research and Development Program of China under Contract No. 2020YFA0406301. ## References * [1] S. K. Choi et al. [Belle], Phys. Rev. Lett. 91, 262001 (2003). * [2] N. Brambilla, S. Eidelman, C. Hanhart, A. Nefediev, C. P. Shen, C. E. Thomas, A. Vairo and C. Z. Yuan, Phys. Rept. 873, 1-154 (2020). * [3] M. Ablikim et al. [BESIII], Phys. Rev. Lett. 122, 232002 (2019). * [4] M. Ablikim [BESIII], Phys. Rev. D 102, 012009 (2020). * [5] M. Ablikim et al. [BESIII], [arXiv:2011.07855 [hep-ex]]. * [6] P. A. Zyla et al. [Particle Data Group], PTEP 2020, 083C01 (2020). * [7] J. Nieves and M. P. Valderrama, Phys. Rev. D 86, 056004 (2012). * [8] M. Ablikim et al. [BESIII], Phys. Rev. D 102, 031101 (2020). * [9] M. Ablikim et al. [BESIII], Phys. Rev. Lett. 124, 242001 (2020). * [10] M. Ablikim et al. [BESIII], Phys. Rev. D 101, 012008 (2020). * [11] M. Ablikim et al. [BESIII], [arXiv:2012.02682 [hep-ex]]. * [12] M. Ablikim et al. [BESIII], [arXiv:2011.13850 [hep-ex]]. * [13] M. Ablikim et al. [BESIII], [arXiv:2010.14415 [hep-ex]]. * [14] M. Ablikim et al. [BESIII], Phys. Rev. Lett. 118, 092001 (2017).
# Measuring the Distance and ZAMS Mass of Galactic Core-Collapse Supernovae Using Neutrinos Manne Segerlund Department of Engineering Sciences and Mathematics, Luleå University of Technology, SE-97187 Luleå, Sweden Erin O’Sullivan <EMAIL_ADDRESS> Department of Physics and Astronomy, Uppsala University, Box 516, SE-75120 Uppsala, Sweden Evan O’Connor The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-106 91 Stockholm, Sweden ###### Abstract Neutrinos from a Galactic core-collapse supernova will be measured by neutrino detectors minutes to days before an optical signal reaches Earth. We present a novel calculation showing the ability of current and near-future neutrino detectors to make fast predictions of the progenitor distance and to place constraints on the zero-age main sequence mass, in order to inform the observing strategy for electromagnetic follow-up. We show that for typical Galactic supernovae, the distance can be constrained with an uncertainty of $\sim$5% using IceCube or Hyper-K and, furthermore, that the zero-age main sequence mass can be constrained for extremal values of compactness. ## I Introduction The next Galactic core-collapse supernova (CCSN) will be one of the most important astrophysical events in our lifetime. A burst of neutrinos tens of seconds in duration with individual energies $O(10\,\mathrm{MeV})$ will be detected by neutrino experiments around the world. As neutrinos from a supernova arrive before the first light, an unprecedented multi-messenger search campaign to identify the supernova and observe the photon shock breakout will follow. However, due to potentially significant dust obscuration in the galaxy [1], the search strategy would benefit from any information about the progenitor system available from the neutrinos. Indeed, key information about the supernova is imprinted in the neutrino signal, including localization [2] and the type of remnant (black hole or neutron star) [3]. 
We present here a fast and novel method to determine the distance and progenitor star structure along with constraints on the zero-age main sequence (ZAMS) mass of a Galactic CCSN, which can help guide the observing strategy of electromagnetic telescopes, potentially hastening the identification of the host star as well as allowing for an estimate of the delay time between the neutrinos and photons. Our method builds from the procedure described in [4, 5], where it was shown that neutrinos could be used to place constraints on the presupernova structure of the progenitor star. Here, we extend and quantify the method and include predictions of intrinsic properties that are important for electromagnetic follow-up, such as the distance to the supernova and constraints, when possible, on the progenitor ZAMS mass. We improve on past work by [6], which examined how the neutronization peak imprinted in the neutrino signal can be used to determine supernova distance in a megaton water-Cherenkov detector. Our method obtains a similar sensitivity to distance as the method described in [6], but using smaller detector masses present in current and near future experiments and without relying on the separation between neutrinos and anti-neutrinos, which can take valuable time during a supernova event and adds potential sources of error. We demonstrate our method for the two most sensitive current neutrino experiments, as well as three near-future detectors. The two currently operational neutrino detectors considered are IceCube [7], a cubic-kilometer- scale neutrino detector embedded in the glacial ice at the South Pole, and Super-Kamiokande (Super-K) [8], a 32 kton inner-volume water-Cherenkov detector located in Japan. 
By 2027, we expect three other large facilities to contribute significantly to this measurement: Hyper-Kamiokande (Hyper-K) [9], the next generation of Super-K, which will have an inner volume of 220 ktons; DUNE, a 40 kton liquid argon detector in the US [10]; and JUNO, a 20 kton liquid-scintillator detector in China [11]. In order to capitalize on the early warning provided by neutrinos, most large-scale neutrino detectors are connected to the SuperNova Early Warning System (SNEWS) [12, 13]. The fast reporting strategy for distance and progenitor structure described here can be implemented in SNEWS to further enhance the information reported about the supernova event. Figure 1: Progenitor dependence of the early neutrino signal in the IceCube detector, assuming a normal mass ordering for the neutrinos. In the left panel we show the expected number of interactions detected by IceCube in the first 50 ms vs. distance for 149 different progenitor models. The color of each line denotes $f_{\Delta}$, a directly measurable intrinsic (although detector dependent) property of the core-collapse event. We show this distance-independent $f_{\Delta}$ vs. the number of counts in the first 50 ms for a supernova at 10 kpc (middle) and vs. the progenitor compactness (right). The one-to-one relationships between $f_{\Delta}$ and these quantities allow a distance and ZAMS mass estimate from a Galactic supernova event. The 1$\sigma$ error bars shown are based on the expected Gaussian counting statistics, background level, and systematic errors from the fit to the 149 progenitor models (only for the error bar on the compactness). ## II Methods ### II.1 Tools We base our analysis on the early CCSN neutrino signal generated from the evolution of 149 progenitor models from [14]. These models are single-star evolutions of solar-metallicity massive stars with ZAMS masses from $9.0\,M_{\odot}$ to $120\,M_{\odot}$. 
The presupernova structures of these models span the range expected for iron-core collapse and therefore make a complete set for this systematic study. For the core-collapse evolutions we use the FLASH [15, 16, 17] hydrodynamics package with an energy-dependent neutrino transport. We use the SFHo nuclear equation of state [18] and neutrino interactions from NuLib [19]. In order to capture important processes which impact the neutrino signal at early times [20], in addition to the standard neutrino rates used in [17], we utilize the microphysical electron capture rates from [21, 22], inelastic neutrino-electron scattering [23], and inelastic neutrino-nucleon scattering for heavy-lepton neutrinos based on [24]. Using the time evolution of the neutrino luminosity, mean energy, and mean squared energy from our simulations, we utilize SNOwGLoBES [25, 26] to generate expected count rates in current and near-future neutrino detectors. (We include all the neutrino data from our simulations, as well as the analysis scripts for the figures and data presented in this paper, in the supplemental information.) Where stated, we use a Galactic CCSN spatial distribution from [1] and a Salpeter initial mass function (IMF) [28], i.e. $N(m)dm\propto m^{-2.35}dm$, extending from $8.75\,M_{\odot}$ to $130\,M_{\odot}$. ### II.2 Parameter extraction methods The number of observed supernova neutrinos is related to distance via an inverse square law. If the early signal were progenitor-independent, then we could calculate the distance by comparing the number of observed events to the predicted signal at a known distance, a so-called standard candle approach. Indeed, this is the method utilized in [6] with electron neutrinos, which, during the first tens of ms after the protoneutron star (PNS) forms, do show this behavior. However, the bulk of the early neutrino signal in many detectors consists of electron antineutrino interactions. 
These neutrinos do not show this universal behavior; rather, the early (within the first $\sim$50 ms) interaction rates can vary by up to a factor of $\sim$2 across different progenitors. In the left panel of Figure 1 we show the expected number of events in the first 50 ms for our collection of 149 progenitors as a function of distance for the IceCube neutrino detector, assuming the normal mass ordering and only adiabatic neutrino oscillations. (Normal ordering is preferred over the inverted ordering by experiment [39]. Other detector configurations and neutrino mass orderings are available in the supplemental information.) At 10 kpc, the mean distance to a Galactic CCSN, the range of estimated counts observed in the IceCube detector in the first 50 ms, assuming normal neutrino mass ordering, is between 7000 and 14000. To extract the distance from the detected neutrino signal we use two methods. First, we use the observed number of events detected in the first 50 ms and compare it to an IMF-weighted average progenitor signal (the white dashed line in the left panel of Figure 1). This method is robust but, as mentioned above, suffers from considerable systematic uncertainty because different progenitors predict a different number of events in this time period. However, in the low-statistics regime, either at large distances or for smaller detectors, it provides the best estimate for distance. The second method relies on using the neutrino signal itself to first constrain the progenitor in a distance-independent way, then using the expected signal from that progenitor to enhance the sensitivity to distance. As a byproduct, we obtain information about the progenitor star. The line color of the 149 individual curves (representing the 149 progenitors) in the left panel of Fig. 1 is a parameterization of this progenitor dependence. By first constraining the progenitor, we have a more precise estimate of the expected signal. 
This key parameter of the early neutrino signal is $f_{\Delta}=\frac{N(100-150~{}\mathrm{ms})}{N(0-50~{}\mathrm{ms})}\,,$ (1) from Horiuchi et al. [5]. $f_{\Delta}$ is the ratio of the number of neutrino interactions occurring between 100 ms and 150 ms ($N(100-150~{}\mathrm{ms})$) to the number of interactions occurring in the first 50 ms ($N(0-50~{}\mathrm{ms})$). (In the supplemental information we show results for different definitions of $f_{\Delta}$.) In the middle panel of Fig. 1, we explicitly show the key relationship we are exploiting: $f_{\Delta}$, which is a distance-independent quantity, has a one-to-one mapping with the expected number of interactions in the first 50 ms. The error bars shown in this figure are the expected 1$\sigma$ error bars for a $d=10\,$kpc supernova based on Gaussian counting statistics, also taking into account the background noise in the IceCube detector. The combined distance estimate is achieved by averaging both of the above methods with weights corresponding to the statistical and systematic measurement errors. For the statistical errors, Gaussian counting statistics are assumed, with the addition for the IceCube detector of a background component [31]. The systematic error is detector specific and is based on the variance of the models about the fit (cf. the middle panel of Fig. 1 for the IceCube detector with normal mass ordering). Not only does $f_{\Delta}$ relay the expected number of events in the first 50 ms, it is also directly related to the compactness [32, 5], a measure of the progenitor structure of the star at the end of its life. The compactness is defined as $\xi_{M}=\frac{M/M_{\odot}}{R(M)/1000\,\mathrm{km}}\,,$ (2) where $M$ is some chosen mass scale (taken here to be $M=2.0\,M_{\odot}$ following [5]) and $R(M)$ is the radius that encloses that mass at the point of core collapse. This relationship between $f_{\Delta}$ and compactness is seen in the right panel of Fig. 1. 
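The two-step estimate (constrain the progenitor via $f_{\Delta}$, then apply the inverse-square law) can be illustrated with a toy sketch. The linear mapping below, with constants A and B, is a hypothetical stand-in for the fitted one-to-one relation in the middle panel of Fig. 1, which in reality is detector specific and derived from the 149 progenitor models:

```python
import math

def f_delta(n_0_50, n_100_150):
    # Eq. (1): ratio of counts in the 100-150 ms and 0-50 ms windows.
    # Both windows scale identically with distance, so the ratio is
    # distance independent.
    return n_100_150 / n_0_50

# Hypothetical linear stand-in for the fitted f_delta -> expected counts
# at 10 kpc relation (invented numbers, for illustration only).
A, B = 16000.0, -12000.0

def estimate_distance(n_0_50, n_100_150, d_ref=10.0):
    n_expected_ref = A + B * f_delta(n_0_50, n_100_150)
    # Inverse-square law: N scales as 1/d^2.
    return d_ref * math.sqrt(n_expected_ref / n_0_50)

# Self-consistency check: a mock event at 14 kpc with f_delta = 0.5.
f_true, d_true = 0.5, 14.0
n50 = (A + B * f_true) * (10.0 / d_true) ** 2
print(estimate_distance(n50, f_true * n50))  # recovers ~14.0 kpc
```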
This particular relationship is distance independent; however, we show the expected 1$\sigma$ error bars for a $d=10\,$kpc supernova detected in IceCube. A measurement of $f_{\Delta}$ allows a direct constraint on the compactness. The compactness can be related to the ZAMS mass of the progenitor star through stellar evolution models, although the mapping is non-monotonic and can change rapidly with changing ZAMS mass [33]. Given the non-monotonic relationship, for a measurement of a particular value of $f_{\Delta}$, along with an assumption of a progenitor model series, we can determine a probability distribution for the ZAMS mass of the exploding star. ## III Results ### III.1 Distance Following the method to determine the distance outlined above, we perform a large number of mock observations to determine the precision. For distances up to 25 kpc in increments of 1 kpc, for each of the five considered detectors, and for each neutrino mass ordering, we generate 80000 mock core-collapse events. For each event, we randomly choose a mass based on a Salpeter IMF. The mock observations are randomly drawn from a Gaussian distribution about the mean expected events in each window. As mentioned above, for IceCube a background noise component is added using a modified Gaussian distribution, taking the spread to be $1.3\sqrt{\mu}$, where $\mu$ is the average detector background rate equal to 286 Hz/DOM, and then subtracting the mean. The factor of 1.3 accounts for correlated hits from muons [31]. The distance is estimated for each realization and the resulting 1$\sigma$ value of the distribution of relative errors on distance ($|d-d_{\mathrm{estimate}}|/d$) is shown in the top panel of Fig. 2. For nearby distances (the threshold varies from detector to detector, but is generally $\lesssim$ a few kpc), the error is dominated by the systematic variation of the models from the fit shown in the middle panel of Fig. 1.
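The mock-observation procedure can be sketched as follows; the reference count, the IceCube-like background (286 Hz/DOM with 5160 DOMs assumed, spread $1.3\sqrt{\mu}$, mean subtracted), and the use of the counts-only estimator are our own simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_distance_errors(d_true, n_ref=10500.0, d_ref=10.0, n_trials=80_000):
    """Monte Carlo estimate of the 1-sigma relative distance error for the
    counts-only method.  Signal counts fluctuate with Gaussian counting
    statistics; an IceCube-like background term with spread 1.3*sqrt(mu_bg)
    (mean subtracted) is added, with mu_bg = 286 Hz/DOM * 5160 DOMs * 0.05 s.
    The reference count and DOM number are illustrative assumptions."""
    mu_sig = n_ref * (d_ref / d_true) ** 2
    mu_bg = 286.0 * 5160 * 0.05  # expected background hits in 50 ms
    n_obs = (rng.normal(mu_sig, np.sqrt(mu_sig), n_trials)
             + rng.normal(0.0, 1.3 * np.sqrt(mu_bg), n_trials))
    d_est = d_ref * np.sqrt(n_ref / np.clip(n_obs, 1.0, None))
    return np.quantile(np.abs(d_est - d_true) / d_true, 0.68)

# The relative error grows with distance (statistics dominated):
print(mock_distance_errors(5.0), mock_distance_errors(20.0))
```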
It is worth noting that this systematic error is smallest with DUNE or in the inverted mass ordering, highlighting the fact that electron neutrinos, and to some extent heavy-lepton neutrinos, are more progenitor-independent than electron antineutrinos, especially at early times. This was the original motivation for the work of [6]. As the distance increases, the errors become dominated by statistics and the relative error grows linearly with distance. For IceCube, the presence of the constant background noise floor causes the error to grow faster than linearly at large distances. Marginalizing over a Galactic distance distribution [1], we obtain 1$\sigma$ relative errors of 5.4%, 8.9%, 5.1%, 8.3%, and 8.7% for IceCube, Super-K, Hyper-K, DUNE, and JUNO, respectively, for the normal neutrino mass ordering (full cumulative distributions of the relative distance error for both mass orderings and for different choices of the definition of $f_{\Delta}$ are available in the supplemental information). Changes in the spatial distribution of CCSN events, for example using the neutron star distribution explored in [35], give similar (but $\lesssim$0.5% larger) population-averaged relative distance errors. Figure 2: Estimated 1$\sigma$ errors derived from trial observations on the distance (relative; top panel) and compactness (absolute; bottom panel) marginalized over the IMF as a function of distance for each detector and the normal (NO; solid line) and inverted (IO; dashed line) neutrino mass ordering. ### III.2 Compactness In addition to extracting the distance via the early neutrino signal, we can extract properties of the progenitor star itself. The compactness, defined in Equation 2, is a measure of the structure of the star at the point of collapse. The original proposal from Horiuchi et al. [5] was to determine the compactness of the presupernova star via an observation of neutrinos.
We reproduce that analysis here, extend it to IceCube, Hyper-K, and JUNO, and quantify our ability to constrain the compactness for a Galactic population. From the fit $f_{\Delta}=m_{\xi}\xi_{2.0}+b_{\xi}$ determined by Horiuchi et al. (see the right panel of Fig. 1 for IceCube in the normal mass ordering) and an observation of $f_{\Delta}$, we estimate the compactness via $\tilde{\xi}_{2.0}=(f_{\Delta}-b_{\xi})/m_{\xi}$, where $m_{\xi}$ and $b_{\xi}$ are the fitted slope and intercept (available in the supplemental information for each detector and neutrino mass ordering). In the bottom panel of Fig. 2 we show the expected 1$\sigma$ absolute error on a measurement of $\xi_{2.0}$ as a function of distance. We note the same characteristics as for the relative distance error: at small distances ($\lesssim 5\,$kpc for IceCube and Hyper-K and $\lesssim 1$ kpc for Super-K, DUNE, and JUNO) the error is dominated by systematics; it becomes linear (and statistics dominated) at larger distances. Marginalizing over a Galactic distance distribution [1], we obtain 1$\sigma$ absolute errors of 0.11, 0.2, 0.11, 0.17, and 0.20 for IceCube, Super-K, Hyper-K, DUNE, and JUNO, respectively, for the normal neutrino mass ordering (full cumulative distributions of the absolute compactness error for both neutrino mass orderings and for different choices of the definition of $f_{\Delta}$ are available in the supplemental information). ### III.3 Mass The strong correlation between $f_{\Delta}$ and compactness gives us an indirect measurement of the presupernova structure. Stellar evolution (to the extent that the current modeling of the advanced burning stages, convection, and overshoot can be trusted) complicates the mapping between the ZAMS properties of the stars and the final structure at the time of core collapse [33]. Furthermore, astrophysical factors such as binarity, rotation, and metallicity will all impact the ZAMS mass to compactness mapping.
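The compactness inversion $\tilde{\xi}_{2.0}=(f_{\Delta}-b_{\xi})/m_{\xi}$ translates directly into code; here we default to the IceCube normal-ordering fit values ($m_{\xi}=1.48$, $b_{\xi}=2.1$) from Table 1 of the supplemental information:

```python
def compactness_from_fdelta(f_delta, m_xi=1.48, b_xi=2.1):
    """Invert the linear fit f_Delta = m_xi * xi_2.0 + b_xi to get the
    compactness estimate; defaults are the IceCube normal-ordering fit
    values from Table 1 of the supplemental information."""
    return (f_delta - b_xi) / m_xi

print(compactness_from_fdelta(2.1))             # -> 0.0
print(round(compactness_from_fdelta(2.84), 2))  # -> 0.5
```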
With these caveats in mind, for the single-star, solar-metallicity, non-rotating model set we have chosen to use from Sukhbold et al. (2016) [14], we can invert the ZAMS mass-compactness relation in order to explore potential constraints on the ZAMS mass of the progenitor star from a neutrino observation. In Fig. 3, we show the probability distribution of measured $f_{\Delta}$ as a function of the progenitor ZAMS mass for a supernova observed with a reconstructed distance of 10 kpc, sampled over the IMF. We assume the IceCube detector and the normal neutrino mass ordering. There is some structure in the $f_{\Delta}-M_{\mathrm{ZAMS}}$ plane, suggesting that information on the ZAMS mass may be obtained, at least in some limiting cases. We show in the bottom panel cumulative distributions in ZAMS mass for assumed values of $f_{\Delta}$ = 2.0, 2.5, 3.0, and 3.25. Note that these cumulative distributions have the statistical uncertainty of the measurement of $f_{\Delta}$ built in and therefore depend on the detector and assumed distance. For an observed value of $f_{\Delta}$=2.0, which corresponds to progenitors of low compactness, this model set confidently places an upper ZAMS mass limit (95% of the time) of $\sim$11.6 $M_{\odot}$. A measurement of $f_{\Delta}$=2.5 would give a lower ZAMS mass limit (95% of the time) of $\sim$12.2 $M_{\odot}$. For this model set, a measurement of $f_{\Delta}>3.0$ confidently ($\gtrsim$98% of the time) places a lower limit on the ZAMS mass of 20 $M_{\odot}$ and isolates the potential masses to near either $\sim 23\,M_{\odot}$ or 35 $M_{\odot}$-50 $M_{\odot}$. Even with the caveats listed above, there is general confidence in the statement that low-ZAMS-mass stars ($M\lesssim 12\,M_{\odot}$) have the lowest compactness and therefore the lowest values of $f_{\Delta}$. It is therefore likely that for such supernovae, a constraint on the ZAMS mass is possible.
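The IMF marginalization used throughout relies on drawing ZAMS masses from a Salpeter IMF. A standard inverse-CDF sampler can be sketched as follows; the 9-120 $M_{\odot}$ range matches the progenitor model set, while the implementation itself is our own illustration:

```python
import numpy as np

def sample_salpeter(n, m_lo=9.0, m_hi=120.0, alpha=2.35, seed=1):
    """Draw ZAMS masses from a Salpeter IMF, dN/dM ~ M^-alpha, by
    inverse-CDF sampling on [m_lo, m_hi] M_sun (9-120 M_sun matches the
    range of the progenitor model set)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a = 1.0 - alpha  # exponent of the integrated IMF
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

masses = sample_salpeter(100_000)
# Low-mass stars dominate the draw; the median sits near the lower edge.
print(masses.min(), np.median(masses), masses.max())
```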
Figure 3: Probability distribution of measured $f_{\Delta}$ as a function of the ZAMS mass of the progenitor stars for a measured distance of 10 kpc for the IceCube detector, assuming the normal neutrino mass ordering. The color denotes the logarithm of the probability; dark blue values are $\sim$1000 times more likely than yellow. Based on the progenitor model set we use, a measurement of $f_{\Delta}$ relays some information on the ZAMS mass of the progenitor star, particularly if the measured $f_{\Delta}$ is small. In the bottom panel we show cumulative distributions for four choices of $f_{\Delta}$, marked by dashed lines in the top panel. The grey dashed lines denote cumulative probabilities of 5% and 95%. On the right is the marginalized distribution of observed $f_{\Delta}$ over the IMF. ## IV Discussion and Conclusions In this paper we present a compelling case that current and near-future neutrino detectors have the capabilities and statistics to measure the distance and compactness, and to constrain the ZAMS mass, of a future Galactic CCSN. However, both stellar evolution modelling and CCSN evolution are complex multiphysics problems. There are certainly systematic errors, both known and unknown, in both of these processes. We have tried to eliminate many of the potential model dependencies. We explore the full range of iron-core progenitors expected and show that the response we investigate is well behaved and linear over this model set. Also, by restricting our measurements to early times, in many cases we avoid the complex multidimensional explosion dynamics and potentially avoid the complex collective neutrino oscillations that we know are present at later times. However, we note that very early explosions (prior to $\sim$150 ms) may give smaller $f_{\Delta}$ than predicted here.
We have tested our methods against the parameterized explosion models of [37] and find no systematic differences as long as the same set of neutrino interaction rates is assumed. On this note, we have found, though not included here, that varying these neutrino interactions (such as including or excluding neutrino-nucleon scattering and microphysical electron-capture rates) can change the quantitative fits shown in Fig. 1; however, the qualitative results, such as the expected precision and the ability to measure or constrain compactness and mass, are robust against these changes. We expect similar results for variations in the nuclear equation of state [38]. We therefore advocate for efforts to quantify and ideally eliminate these systematic errors so that the supernova properties discussed here can be rapidly, reliably, and accurately determined during the next Galactic CCSN and allow for optimal multi-messenger followup. These efforts would include the development and implementation of precision neutrino interaction rates and refined nuclear equations of state. Unless the core contains a large amount of angular momentum, which is the case in only a small fraction of progenitors, we do not expect rotation to affect the neutrino signal enough to distort the trends found here. We have not included detailed detector responses, which may change the predicted number of events for a given model, nor taken into account in our error estimate any uncertainties in the neutrino cross sections in these detectors. Furthermore, we have not combined the results from the different experiments, which may improve the distance, compactness, and mass determinations shown here. Neutrinos from the next Galactic CCSN will provide a once-in-a-generation warning for the electromagnetic community to view the first light from shock breakout.
We present a simple method to determine supernova distance and constrain the progenitor mass using the neutrino signal from current and near-future experiments, with the intention that this information could be used to aid astronomers in localizing the progenitor star, as well as to inform the observing strategy. We hope this method can help to ensure we are prepared to fully maximize the data we can collect from this incredible event. ###### Acknowledgements. We thank Sean Couch and MacKenzie Warren for FLASH development and access to models from [37], as well as Olga Botner, Allan Hallgren, Carlos Perez de los Heros, Christian Glaser, Kate Scholberg, and Segev BenZvi for useful discussions. EOS and EOC would each like to acknowledge Vetenskapsrådet (the Swedish Research Council) for supporting this work under award numbers 2019-05447, 2018-04575, and 2020-00452. The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at PDC and NSC, partially funded by the Swedish Research Council through grant agreement No. 2016-07213. ## References * Adams _et al._ [2013] S. M. Adams, C. S. Kochanek, J. F. Beacom, M. R. Vagins, and K. Z. Stanek, Observing the Next Galactic Supernova, Astrophys. J. 778, 164 (2013), arXiv:1306.0559 [astro-ph.HE] . * Beacom and Vogel [1999] J. F. Beacom and P. Vogel, Can a supernova be located by its neutrinos?, Phys. Rev. D 60, 033007 (1999), arXiv:astro-ph/9811350 [astro-ph] . * Burrows [1988] A. Burrows, Supernova Neutrinos, Astrophys. J. 334, 891 (1988). * O’Connor and Ott [2013] E. O’Connor and C. D. Ott, The Progenitor Dependence of the Pre-explosion Neutrino Emission in Core-collapse Supernovae, Astrophys. J. 762, 126 (2013), arXiv:1207.1100 [astro-ph.HE] . * Horiuchi _et al._ [2017] S. Horiuchi, K. Nakamura, T. Takiwaki, and K.
Kotake, Estimating the core compactness of massive stars with Galactic supernova neutrinos, Journal of Physics G Nuclear Physics 44, 114001 (2017), arXiv:1708.08513 [astro-ph.HE] . * Kachelriess _et al._ [2005] M. Kachelriess, R. Tomas, R. Buras, H.-T. Janka, A. Marek, and M. Rampp, Exploiting the neutronization burst of a galactic supernova, Phys. Rev. D 71, 063003 (2005), arXiv:astro-ph/0412082 . * Aartsen _et al._ [2017] M. G. Aartsen et al. (IceCube), The IceCube Neutrino Observatory: Instrumentation and Online Systems, JINST 12 (03), P03012, arXiv:1612.05093 [astro-ph.IM] . * Fukuda _et al._ [2003] Y. Fukuda et al. (Super-Kamiokande), The Super-Kamiokande detector, Nucl. Instrum. Meth. A 501, 418 (2003). * Abe _et al._ [2018] K. Abe et al. (Hyper-Kamiokande), Hyper-Kamiokande Design Report, (2018), arXiv:1805.04163 [physics.ins-det] . * Abi _et al._ [2020] B. Abi et al. (DUNE), Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume I Introduction to DUNE, (2020), arXiv:2002.02967 [physics.ins-det] . * An _et al._ [2016] F. An, G. An, Q. An, V. Antonelli, E. Baussan, J. Beacom, L. Bezrukov, S. Blyth, R. Brugnera, M. Buizza Avanzini, et al., Neutrino physics with JUNO, Journal of Physics G Nuclear Physics 43, 030401 (2016), arXiv:1507.05613 [physics.ins-det] . * Antonioli _et al._ [2004] P. Antonioli et al., SNEWS: The Supernova Early Warning System, New J. Phys. 6, 114 (2004), arXiv:astro-ph/0406214 . * Kharusi _et al._ [2021] S. A. Kharusi, S. Y. BenZvi, J. S. Bobowski, V. Brdar, T. Brunner, E. Caden, M. Clark, A. Coleiro, M. Colomer-Molla, J. I. Crespo-Anadón, et al., Snews 2.0: A next-generation supernova early warning system for multi-messenger astronomy, New Journal of Physics (2021), arXiv:2011.00035 [astro-ph.HE] . * Sukhbold _et al._ [2016] T. Sukhbold, T. Ertl, S. E. Woosley, J. M. Brown, and H. T. Janka, Core-collapse Supernovae from 9 to 120 Solar Masses Based on Neutrino-powered Explosions, Astrophys. J. 
821, 38 (2016), arXiv:1510.04643 [astro-ph.HE] . * Fryxell _et al._ [2000] B. Fryxell, K. Olson, P. Ricker, F. X. Timmes, M. Zingale, D. Q. Lamb, P. MacNeice, R. Rosner, J. W. Truran, and H. Tufo, FLASH: An Adaptive Mesh Hydrodynamics Code for Modeling Astrophysical Thermonuclear Flashes, ”Astro. Phys. J. Suppl.” 131, 273 (2000). * Couch [2013] S. M. Couch, On the Impact of Three Dimensions in Simulations of Neutrino-driven Core-collapse Supernova Explosions, Astrophys. J. 775, 35 (2013), arXiv:1212.0010 [astro-ph.HE] . * O’Connor and Couch [2018] E. P. O’Connor and S. M. Couch, Two-dimensional Core-collapse Supernova Explosions Aided by General Relativity with Multidimensional Neutrino Transport, Astrophys. J. 854, 63 (2018), arXiv:1511.07443 [astro-ph.HE] . * Steiner _et al._ [2013] A. W. Steiner, M. Hempel, and T. Fischer, Core-collapse Supernova Equations of State Based on Neutron Star Observations, Astrophys. J. 774, 17 (2013), arXiv:1207.2184 [astro-ph.SR] . * O’Connor [2015] E. O’Connor, An Open-source Neutrino Radiation Hydrodynamics Code for Core-collapse Supernovae, ”Astro. Phys. J. Suppl.” 219, 24 (2015), arXiv:1411.7058 [astro-ph.HE] . * Lentz _et al._ [2012] E. J. Lentz, A. Mezzacappa, O. E. B. Messer, W. R. Hix, and S. W. Bruenn, Interplay of Neutrino Opacities in Core-collapse Supernova Simulations, Astrophys. J. 760, 94 (2012), arXiv:1206.1086 [astro-ph.SR] . * Sullivan _et al._ [2016] C. Sullivan, E. O’Connor, R. G. T. Zegers, T. Grubb, and S. M. Austin, The Sensitivity of Core-collapse Supernovae to Nuclear Electron Capture, Astrophys. J. 816, 44 (2016), arXiv:1508.07348 [astro-ph.HE] . * Langanke _et al._ [2003] K. Langanke, G. Martínez-Pinedo, J. M. Sampaio, D. J. Dean, W. R. Hix, O. E. B. Messer, A. Mezzacappa, M. Liebendörfer, H.-T. Janka, and M. Rampp, Electron capture rates on nuclei and implications for stellar core collapse, Phys. Rev. Lett. 90, 241102 (2003). * Bruenn [1985] S. W. 
Bruenn, Stellar core collapse - Numerical model and infall epoch, ”Astro. Phys. J. Suppl.” 58, 771 (1985). * Thompson _et al._ [2000] T. A. Thompson, A. Burrows, and J. E. Horvath, $\mu$ and $\tau$ neutrino thermalization and production in supernovae: Processes and time scales, Phys. Rev. C 62, 035802 (2000), arXiv:astro-ph/0003054 [astro-ph] . * Scholberg [2012] K. Scholberg, Supernova Neutrino Detection, Annual Review of Nuclear and Particle Science 62, 81 (2012), arXiv:1205.6003 [astro-ph.IM] . * Malmenbeck and O’Sullivan [2019] F. Malmenbeck and E. O’Sullivan, Using SNOwGLoBES to Calculate Supernova Neutrino Detection Rates in IceCube, in _36th International Cosmic Ray Conference (ICRC2019)_ , International Cosmic Ray Conference, Vol. 36 (2019) p. 975. * Note [1] We include all the neutrino data from our simulations as well as the analysis scripts for the figures and data presented in this paper in the supplemental information. * Salpeter [1955] E. E. Salpeter, The Luminosity Function and Stellar Evolution., Astrophys. J. 121, 161 (1955). * Note [2] Normal ordering is preferred over the inverted ordering by experiment [39]. Other detector configurations and neutrino mass orderings are available in the supplemental information. * Note [3] In the supplemental information we show results for different definitions of $f_{\Delta}$. * Abbasi _et al._ [2011] R. Abbasi, Y. Abdou, T. Abu-Zayyad, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. M. Allen, D. Altmann, and et al., Icecube sensitivity for low-energy neutrinos from nearby supernovae, ”Astronomy & Astrophysics” 535, A109 (2011). * O’Connor and Ott [2011] E. O’Connor and C. D. Ott, Black Hole Formation in Failing Core-Collapse Supernovae, Astrophys. J. 730, 70 (2011), arXiv:1010.5550 [astro-ph.HE] . * Sukhbold and Woosley [2014] T. Sukhbold and S. E. Woosley, The Compactness of Presupernova Stellar Cores, Astrophys. J. 783, 10 (2014), arXiv:1311.6546 [astro-ph.SR] . 
* Note [4] Full cumulative distributions of the relative distance error for both mass orderings and also for different choices for the definition of $f_{\Delta}$ are available in the supplemental information. * Mirizzi _et al._ [2006] A. Mirizzi, G. G. Raffelt, and P. D. Serpico, Earth matter effects in supernova neutrinos: optimal detector locations, ”J. of Cosmo. and Astropart. Phys.” 2006, 012 (2006), arXiv:astro-ph/0604300 [astro-ph] . * Note [5] Full cumulative distributions for the absolute compactness error for both neutrino mass orderings and also for different choices for the definition of $f_{\Delta}$ are available in the supplemental information. * Warren _et al._ [2020] M. L. Warren, S. M. Couch, E. P. O’Connor, and V. Morozova, Constraining properties of the next nearby core-collapse supernova with multimessenger signals, The Astrophysical Journal 898, 139 (2020). * Schneider _et al._ [2019] A. S. Schneider, L. F. Roberts, C. D. Ott, and E. O’Connor, Equation of state effects in the core collapse of a 20 -M⊙ star, Phys. Rev. C 100, 055802 (2019), arXiv:1906.02009 [astro-ph.HE] . * de Salas _et al._ [2020] P. F. de Salas, D. V. Forero, S. Gariazzo, P. Martínez-Miravé, O. Mena, C. A. Ternes, M. Tórtola, and J. W. F. Valle, 2020 Global reassessment of the neutrino oscillation picture, arXiv e-prints , arXiv:2006.11237 (2020), arXiv:2006.11237 [hep-ph] . Supplemental Information: Measuring the Distance and ZAMS Mass of Galactic Core-Collapse Supernovae Using Neutrinos ## Appendix A Progenitor dependence for different detectors In the main article we specifically demonstrate our method using the IceCube detector and assuming the normal mass ordering for neutrinos, with the aid of Figure 1. Here we present the equivalent figures for the inverted mass ordering and the IceCube detector (Fig. 4; for completeness we reproduce the normal mass ordering figure for IceCube as well), for both mass orderings for the Super-K detector (Fig. 
5), for the Hyper-K detector (Fig. 6), for the DUNE detector (Fig. 7), and for the JUNO detector (Fig. 8). Figure 4: Progenitor dependence of the early neutrino signal in the IceCube detector assuming the normal mass ordering (upper panel; repeat of Figure 1 in the main article for completeness) and the inverted mass ordering (lower panel) for the neutrinos. For details, see the Figure 1 caption in the main article. Figure 5: Progenitor dependence of the early neutrino signal in the Super-K detector assuming the normal mass ordering (upper panel) and the inverted mass ordering (bottom panel) for the neutrinos. For details, see the Figure 1 caption in the main article. Figure 6: Progenitor dependence of the early neutrino signal in the Hyper-K detector assuming the normal mass ordering (upper panel) and the inverted mass ordering (bottom panel) for the neutrinos. For details, see the Figure 1 caption in the main article. Figure 7: Progenitor dependence of the early neutrino signal in the DUNE detector assuming the normal mass ordering (upper panel) and the inverted mass ordering (bottom panel) for the neutrinos. For details, see the Figure 1 caption in the main article. Figure 8: Progenitor dependence of the early neutrino signal in the JUNO detector assuming the normal mass ordering (upper panel) and the inverted mass ordering (bottom panel) for the neutrinos. For details, see the Figure 1 caption in the main article. The fits shown in Figs. 4-8 are given in Table 1. We also include the variance of the models about the fit, which is used as an estimate of the systematic error in our analysis.
Detector, $\nu$ mass ordering | $m_{N}$ | $b_{N}$ | $\sigma^{\mathrm{sys}}_{N,\ b}$ | $m_{\xi}$ | $b_{\xi}$ | $\sigma^{\mathrm{sys}}_{\xi,\ b}$
---|---|---|---|---|---|---
IceCube, NO | 0.000182 | 0.779 | 0.11 | 1.48 | 2.1 | 0.119
IceCube, IO | 0.000125 | 0.342 | 0.0656 | 1.12 | 1.42 | 0.0695
Super-K, NO | 0.0105 | 0.894 | 0.0973 | 1.22 | 2.0 | 0.0996
Super-K, IO | 0.00815 | 0.439 | 0.0529 | 0.914 | 1.37 | 0.0583
Hyper-K, NO | 0.00152 | 0.894 | 0.0973 | 1.22 | 2.0 | 0.0996
Hyper-K, IO | 0.00119 | 0.439 | 0.0529 | 0.914 | 1.37 | 0.0583
DUNE, NO | 0.0158 | -0.0304 | 0.0641 | 1.18 | 1.23 | 0.0696
DUNE, IO | 0.00978 | -0.706 | 0.0411 | 0.893 | 0.66 | 0.05
JUNO, NO | 0.0109 | 0.746 | 0.0909 | 1.18 | 1.87 | 0.0952
JUNO, IO | 0.0088 | 0.319 | 0.0515 | 0.914 | 1.29 | 0.0569
Table 1: Fit parameters for the relationships between $f_{\Delta}$ and $N(0-50\,\mathrm{ms})$ (middle columns) and between $f_{\Delta}$ and compactness ($\xi_{2.0}$) (right columns) for the five detectors considered (IceCube, Hyper-K, Super-K, DUNE, and JUNO) and the two neutrino mass orderings (NO: normal mass ordering, IO: inverted mass ordering). The fits are of the form $f_{\Delta}=m_{X}X+b_{X}$. The variance of the 149 progenitor models about each fit is also given; this is taken as a proxy for the systematic error on the fit. ## Appendix B Cumulative distributions of the error on the distance and compactness determination In the main article we provide 1$\sigma$ error estimates for the relative error on the distance and the absolute error on the compactness extracted from our mock observations. For this we marginalized over a Galactic spatial distribution [1] and a Salpeter initial mass function [28]. We show, for each detector and neutrino mass ordering, the full cumulative distribution of the relative error on distance and the absolute error on compactness in Fig. 9. We show the results for three different definitions of $f_{\Delta}$ from [5].
These three definitions are the ratios of the number of counts in three time windows (50 ms-100 ms, 100 ms-150 ms, and 150 ms-200 ms) to the number of counts in the first 50 ms. For the determination of the distance, all three choices are comparable, although with a slight preference for the latest time window, 150 ms-200 ms. For the compactness, the latest time window is generally the best, although for IceCube and Hyper-K the 100 ms-150 ms time window is comparable or slightly better. We settle on $f_{\Delta}=\frac{N(100-150~{}\mathrm{ms})}{N(0-50~{}\mathrm{ms})}$ because the multidimensional dynamics (not included here) will impact the latest time window most significantly. Figure 9: Cumulative distributions of the relative distance error (orange) and the absolute compactness error (blue) marginalized over a Galactic population of core-collapse supernovae (i.e. both spatial position [1] and initial mass function [28]). From top to bottom we show the distributions for IceCube, Super-K, Hyper-K, DUNE, and JUNO, respectively. The left panels are for the normal neutrino mass ordering while the right panels are for the inverted mass ordering. The three lines for each detector and mass ordering represent three different definitions of $f_{\Delta}$; see the text. ## Appendix C Demonstration of distance extraction methods We utilize two methods to extract the distance from the early neutrino signal. One is purely based on the number of counts in the first 50 ms and compares this observation to the mean progenitor from our model set. The other method first constrains the progenitor model with $f_{\Delta}$ in order to more precisely determine the expected number of counts. At close distances (or with large detectors) we can constrain the progenitor well enough for the latter method to give better distance estimates.
However, at large distances, or for smaller detectors, the error introduced by the progenitor identification is larger than the one introduced by simply assuming the mean progenitor. We show this explicitly in Fig. 10 for IceCube and Super-K (both using the normal mass ordering). As a function of distance, the blue dashed and dashed-dotted lines are the 1$\sigma$ relative distance errors for the first (with the mean progenitor model) and the second (via the progenitor constraint with $f_{\Delta}$) distance-estimate method, respectively. As argued above, the $f_{\Delta}$ method performs better at close distances and for larger detectors. The crossover point is at $\sim$13 kpc for IceCube and $\sim$7.5 kpc for Super-K. The orange lines shown in the figures are the 1$\sigma$ values of the estimated errors, which are used to weight the distance estimates from the two methods for each mock observation. As can be seen, they track the actual errors quite well. The blue solid line is the 1$\sigma$ relative distance error achieved when we combine the two distance-estimate methods. Figure 10: 1$\sigma$ relative distance errors for the different distance extraction techniques (dashed and dashed-dotted lines) as well as their combination (solid lines). See the text for details. We show the results for two detectors: IceCube with the normal neutrino mass ordering (left) and Super-K with the normal neutrino mass ordering (right).
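The error-weighted combination of the two distance estimates can be sketched as a standard inverse-variance average (a simplification of the weighting described in the text; the numbers in the usage example are purely illustrative):

```python
def combine_estimates(d1, sigma1, d2, sigma2):
    """Inverse-variance weighted average of the two distance estimates
    (mean-progenitor method and f_Delta-constrained method): a simple
    sketch of the error-weighted combination described in the text."""
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    d = (w1 * d1 + w2 * d2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return d, sigma

# Two estimates of a ~10 kpc supernova with different precision:
d, s = combine_estimates(9.8, 0.8, 10.3, 0.4)
print(round(d, 2), round(s, 2))  # -> 10.2 0.36
```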
# Hyper-optimization with Gaussian Process and Differential Evolution Algorithm Jakub Klus<EMAIL_ADDRESS> Innovatrics s.r.o Pavel Grunt<EMAIL_ADDRESS> Innovatrics s.r.o Martin Dobrovolný<EMAIL_ADDRESS> Innovatrics s.r.o ###### Abstract Optimization of problems with high computational power demands is a challenging task. A probabilistic approach to such optimization, called Bayesian optimization, lowers performance demands by solving a mathematically simpler model of the problem. The selected approach, Gaussian Process, models the problem using a mixture of Gaussian functions. This paper presents specific modifications of Gaussian Process optimization components from available scientific libraries. The presented modifications were submitted to the BlackBox 2020 challenge, where they outperformed some conventionally available optimization libraries. ## 1 Introduction The recent popularity of Machine Learning (ML) methods broadens their applicability in many research fields. Selecting the best ML method and tuning its parameters is a common approach to achieve state-of-the-art results. But with rising data size and method complexity, parameter optimization (so-called hyper-optimization) quickly exhausts the available computational power. For example, the systematic optimization of deep learning (DL) training parameters [1, 2] can take several days to perform. The time required for optimization is proportional to the number of iterations we perform (the number of samples taken). The number of samples can be significantly lowered if we model the relation between individual parameters and outputs (the so-called objective function). A probabilistic framework for hyper-optimization is called Bayesian optimization [3]. Bayesian optimization defines a surrogate model of the objective function and a loss function, which quantifies how promising future queries are. Shahriari et al. [4] provide a comprehensive review of Bayesian optimization applications and available implementations.
Moreover, three basic methods of Bayesian optimization are described in [4], namely: Gaussian Process (GP), Tree of Parzen Estimators (TPE), and Random Forests (RF). A GP (also described in [5]) approximates the objective function with a mixture of multivariate Gaussian functions. The loss of future queries is then calculated from the predicted mean value and confidence. In contrast, the objective function in TPE (described thoroughly in [6]) is modelled by two hierarchical processes: one process describes parameters yielding better values, and the second describes parameters yielding worse values. The loss is then evaluated using the ratio of these two hierarchical processes. In RF regression, a surrogate model is constructed as an ensemble of simpler models built on random subsets of the problem. The GP approach was selected for this paper since it provides more space for experimental improvements, as described in the following sections. ### 1.1 Kernel function To estimate the Gaussian mixture coefficients efficiently, covariances between individual Gaussian components are calculated using the kernel trick [5]. Selection of the kernel is one of the crucial parts of GP; individual kernel functions and even their arithmetic are described in [7, 5]. Commonly used kernels are the squared exponential kernel and the Matérn 5/2 kernel [8]. Both aforementioned kernels implement automatic relevance determination (ARD). Briefly, ARD works by maximizing the marginal likelihood of the GP regression model with respect to the kernel length-scale parameters. If the length-scale corresponding to an objective function parameter is large, the model is almost independent of it. ### 1.2 Non-continuous parameters A GP needs to be extended to work with non-continuous parameters such as categories, integer numbers, or true/false values. Therefore, Garrido-Merchán and Hernández-Lobato [9] proposed an extension of GP for non-continuous domains.
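The ARD behaviour described in Section 1.1 can be demonstrated with scikit-learn's anisotropic Matérn 5/2 kernel; the two-parameter toy objective below is our own illustration, not part of the submitted solution:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy objective that depends only on the first of two parameters.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = np.sin(6.0 * X[:, 0])  # the second column is irrelevant

# Anisotropic Matern 5/2: one length-scale per input dimension (ARD).
kernel = Matern(length_scale=[1.0, 1.0], nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Maximizing the marginal likelihood drives the irrelevant dimension's
# length-scale up, making the model nearly independent of it.
print(gp.kernel_.length_scale)  # second length-scale >> first
```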
They discourage the naive approach of converting parameters to the real domain and rounding the outputs of the GP afterwards. To truly model non-continuous parameters, they suggest modifying the kernel function and applying rounding inside it. This ensures that interactions between individual parameters will not be extrapolated to undefined values (e.g. real values between integers). The difference between these approaches is depicted in Figure 1. The practical implementation is discussed in section 2.1. Figure 1: Comparison of the loss function for a) a kernel function without transformation, b) a kernel function with transformation (see section 2.1 for implementation details). ### 1.3 Parallelization Bayesian optimization belongs to the family of sequential model-based optimization (SMBO) methods. In SMBO, a new query is calculated whenever the model prediction is updated with data acquired from the previous step. A GP must therefore be modified to enable parallel evaluation of queries in batches. In [10], two basic methods of parallelization were proposed: constant liar and kriging believer. In both methods, a batch of queries is calculated sequentially by predicting the result of the first query and inserting it into the GP model. In the constant liar approach, the known points are expanded with a constant $L$ at the parameters of the previous query. In the kriging believer approach, the known points are expanded with the current GP model mean at the parameters of the previous query. The full potential of parallelization was exploited in [11], where the authors sampled multiple queries from the loss function directly. ### 1.4 Challenge details The algorithms described in this paper were submitted to the Black-Box Optimization for Machine Learning challenge (BlackBox 2020 challenge), a part of the NeurIPS 2020 Competition Track. In short, the optimization was performed in 16 batches of 8 queries, and optimizers executing longer than 640 seconds were stopped.
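The kriging believer batch construction from Section 1.3 can be sketched as follows; the toy objective, candidate grid, and EI helper are illustrative assumptions, not the submitted implementation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, f_min):
    mu, s = gp.predict(X_cand, return_std=True)
    s = np.maximum(s, 1e-12)
    z = (f_min - mu) / s
    return (f_min - mu) * norm.cdf(z) + s * norm.pdf(z)

def kriging_believer_batch(X, y, X_cand, batch_size=8):
    """Build a batch sequentially: after each pick, insert the GP's own
    mean prediction as a fake observation (the 'believed' value) and
    refit, so subsequent picks account for the pending query."""
    X, y = X.copy(), y.copy()
    batch = []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        ei = expected_improvement(gp, X_cand, y.min())
        x_next = X_cand[np.argmax(ei)]
        batch.append(x_next)
        X = np.vstack([X, x_next])
        y = np.append(y, gp.predict(x_next[None, :])[0])  # believed value
    return np.array(batch)

rng = np.random.default_rng(0)
X0 = rng.uniform(-2.0, 2.0, (10, 1))
y0 = X0[:, 0] ** 2  # toy objective to minimize
cand = np.linspace(-2.0, 2.0, 201)[:, None]
batch = kriging_believer_batch(X0, y0, cand, batch_size=4)
print(batch.ravel())
```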
Moreover, the BlackBox 2020 challenge organizers defined a test set of assignments and kept a preliminary leaderboard of submissions. The organizers also provided a compilation of example submissions utilizing selected optimization libraries, together with a means of comparison using a defined score.

## 2 Methods

The proposed solution is a combination of the methods described in the previous section, built upon the scikit-learn library [12]. Scikit-learn provides an implementation of GP regression and basic kernels, including kernel arithmetic. The selected loss function was Expected Improvement (EI) [3], defined as follows:

$EI=\left(f_{\textrm{min}}-\hat{y}\right)\Phi\left(\frac{f_{\textrm{min}}-\hat{y}}{s}\right)+s\phi\left(\frac{f_{\textrm{min}}-\hat{y}}{s}\right),$ (1)

where $f_{\textrm{min}}$ is the best observed value, $\hat{y}$ is the value predicted by the GP model, $s$ is the standard deviation of the model prediction, $\Phi\left(\cdot\right)$ is the standard normal cumulative distribution function, and $\phi\left(\cdot\right)$ is the standard normal density function. The parallelization was implemented according to the kriging believer approach, since the constant liar approach is overly simplistic and direct sampling of multiple queries is complicated to implement.

### 2.1 Input space transformations

The nature of the objective function parameters can affect the performance of the GP model. It is therefore advisable to transform these parameters into domains where their interactions are balanced. Such domain transforms can incorporate the changes proposed in [9]. Numerical values are transformed with respect to their "configuration space" (e.g. parameters with a "log" configuration space are transformed using the $\log_{10}$ function). Categorical values are transformed to the range $[0.0,1.0]$ using one-hot encoding, thus increasing the input dimension of the GP. Boolean true and false values are transformed to $1.0$ and $0.0$, respectively. The kernel function transformation is implemented using type coercion.
Real numbers corresponding to integer objective function parameters are coerced by rounding. Real numbers from a non-linear space are rounded to the nearest transformed integer (e.g. parameters from a "log" configuration space are coerced as $\log_{10}(\lfloor 10^{x}\rceil)$). Each real vector corresponding to a categorical value is modified by replacing its maximal component with $1.0$ and the remaining components with $0.0$. Real numbers corresponding to Boolean values are rounded and clipped to the range $[0.0,1.0]$.

### 2.2 Initialization

The GP has to be primed with at least two queries. With respect to the parallelization, the number of priming queries has to be a multiple of the batch size. As only the objective function parameter ranges and spaces are known, the simplest option for initializing the GP is a batch of randomly generated parameter values. A more advanced initialization, which mitigates the possibility of sampling nearly identical random parameters, is Latin hypercube (LH) initialization [13]. In LH initialization, we create a set of possible priming queries using a uniform grid over all parameters. The grid used by LH is constructed in the transformed input space. Afterwards, we randomly sample unseen points from the set of possible priming queries to create a batch.

### 2.3 Meta-optimizer

New queries from the GP correspond to minima of the loss function (1). We call the optimization inside Bayesian optimization meta-optimization, and the optimizer performing it the meta-optimizer. We utilized readily available implementations, namely the default optimizer of the scikit-learn GP, L-BFGS-B (a limited-memory, bound-constrained variant of the Broyden–Fletcher–Goldfarb–Shanno algorithm [14]), and an implementation of a non-gradient method, Differential Evolution [15]. The Differential Evolution algorithm was chosen based on an investigation of the transformed $EI$ loss (see the $EI$ loss calculated on an example problem from the test set in Figure 1b), because L-BFGS can have lower performance in non-continuous spaces.
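As an illustration of the meta-optimization step, the EI loss of eq. (1) can be built from the GP prediction and minimized with Differential Evolution; a minimal sketch using scipy's implementation (the data and function names are ours, not the challenge code):

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# Illustrative observed data for a 1-D minimization problem.
X = np.array([[0.1], [0.4], [0.6], [0.9]])
y = np.array([0.8, 0.3, 0.5, 0.9])
gp = GaussianProcessRegressor().fit(X, y)
f_min = y.min()

def neg_ei(x):
    """Negative Expected Improvement of eq. (1), to be minimized."""
    mu, s = gp.predict(np.atleast_2d(x), return_std=True)
    mu, s = mu[0], max(s[0], 1e-12)  # guard against zero predictive std
    z = (f_min - mu) / s
    return -((f_min - mu) * norm.cdf(z) + s * norm.pdf(z))

# Differential Evolution meta-optimizer over the (transformed) input box.
result = differential_evolution(neg_ei, bounds=[(0.0, 1.0)], seed=0, tol=1e-6)
next_query = result.x  # the next point to evaluate on the objective
```

Being population-based, Differential Evolution does not rely on gradients of the acquisition surface, which is why it is less affected by the discretization plateaus discussed above.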
The negative impact of discretization on the L-BFGS meta-optimizer is confirmed experimentally; see Table 1.

## 3 Results and Discussion

The proposed changes were evaluated on the test set provided by the organizers of the BlackBox 2020 challenge. Initially, the problem of domain discretization with respect to meta-optimizers primed by two random batches was investigated. The results in Table 1 show that the Differential Evolution meta-optimizer performs better with the complex kernel discretization. In contrast, the score of the L-BFGS-B meta-optimizer with the complex kernel discretization is significantly lower than its score with the naive transformation. In sum, the L-BFGS-B meta-optimizer outperforms Differential Evolution, but cannot beat the example submissions.

Table 1: Performance of different meta-optimizers with respect to the discretization regime. The proposed modifications of domain discretization are evaluated with two meta-optimizers and compared with example submissions (pySOT, turbo, and hyperOpt). The score is calculated on the BlackBox 2020 challenge test set.

Optimizer | Meta-optimizer | Discretization | Score
---|---|---|---
pySOT | | | 98.385
turbo | | | 97.660
Proposed | L-BFGS-B | naive | 97.316
Proposed | L-BFGS-B | complex | 96.556
Proposed | Differential Evolution | complex | 96.296
hyperOpt | | | 96.147
Proposed | Differential Evolution | naive | 95.957

We tried to exploit the potential benefits of the Differential Evolution meta-optimizer by improving its initialization. Apart from the initialization type (discussed in Section 2), the size of the initialization batch was considered. Increasing the initialization batch size is motivated by the increased probability of hitting a promising parameter configuration. The last examined modification was the initialization of the meta-optimizer, denoted meta-initialization. Random meta-initialization can suffer from the same drawbacks as random initialization.
Therefore, we proposed a quasi-random meta-initialization that samples objective function parameters from the vicinity of known promising points. The results in Table 2 show how initialization can improve the performance of the proposed optimizer. It can be stated that a larger number of priming batches helps in most cases where initialization was not done purely randomly. Also, non-random initialization improves the resulting score significantly. The combination of the Differential Evolution meta-optimizer, the complex discretization of the kernel, LH initialization, and quasi-random meta-initialization was chosen as the final submission for the BlackBox 2020 challenge.

Table 2: Score values for different initialization and meta-initialization options. The evaluated proposal combines the Differential Evolution meta-optimizer and the complex discretization. The quasi-random meta-initialization method samples the objective function parameters from the vicinity of known promising points.

Optimizer | Initialization | Initialization size [batches] | Meta-initialization | Score
---|---|---|---|---
pySOT | | | | 98.385
Chosen | LH | 5 | quasi-random | 97.871
Proposed | random | 5 | quasi-random | 97.464
Proposed | LH | 2 | quasi-random | 97.309
Proposed | random | 2 | quasi-random | 97.028
Proposed | LH | 2 | random | 95.957
Proposed | random | 5 | random | 96.671
Proposed | LH | 5 | random | 96.450
Proposed | random | 2 | random | 96.296

## 4 Conclusion

This paper presented several modifications to an available implementation of the Gaussian Process optimization procedure. The proposed modifications were evaluated on a test set provided by the BlackBox 2020 challenge organizers. The evaluation showed that at least one example submission outperformed the selected approach. This was also confirmed by the preliminary BlackBox 2020 challenge leaderboard, where the selected submission landed around 45th place.
However, in the final evaluation (disclosed to the BlackBox 2020 challenge participants), the selected submission achieved 11th place, greatly outperforming the example submissions. The best explanation we can give as authors is that the kernel discretization can play a major role, especially in the case of categorical input parameters. This assumption is further supported by the fact that there were no assignments with categorical inputs in the available test set.

## Broader Impact

The authors believe that this work will motivate contributors and maintainers of black-box optimization scientific packages to incorporate the proposed changes into their software. Our optimization framework is available at https://github.com/Brown-Box/brown-box.

## Acknowledgments and Disclosure of Funding

This work was not directly funded by any funding organization. However, the authors would like to thank selected open-source projects which made the submission possible, namely scikit-learn [12] and GNU parallel [16].

## References

* Smith [2018] Leslie N. Smith. A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay. _CoRR_, abs/1803.09820, 2018. URL http://arxiv.org/abs/1803.09820.
* Bergstra et al. [2013] J. Bergstra, D. Yamins, and D. D. Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In _Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28_, ICML'13, page I–115–I–123. JMLR.org, 2013.
* Jones et al. [1998] Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. _J. of Global Optimization_, 13(4):455–492, December 1998. ISSN 0925-5001. doi: 10.1023/A:1008306431147. URL https://doi.org/10.1023/A:1008306431147.
* Shahriari et al. [2016] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas.
Taking the human out of the loop: A review of Bayesian optimization. _Proceedings of the IEEE_, 104(1):148–175, 2016. doi: 10.1109/JPROC.2015.2494218.
* Rasmussen and Williams [2005] Carl Edward Rasmussen and Christopher K. I. Williams. _Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)_. The MIT Press, 2005. ISBN 026218253X. URL http://www.amazon.com/Gaussian-Processes-Learning-Adaptive-Computation/dp/026218253X.
* Bergstra et al. [2011] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In _Proceedings of the 24th International Conference on Neural Information Processing Systems_, NIPS'11, page 2546–2554, Red Hook, NY, USA, 2011. Curran Associates Inc. ISBN 9781618395993.
* Duvenaud [2014] David Duvenaud. _Automatic model construction with Gaussian processes_. PhD thesis, University of Cambridge Repository, 2014. URL https://www.repository.cam.ac.uk/handle/1810/247281.
* Snoek et al. [2012] Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In _Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2_, NIPS'12, page 2951–2959, Red Hook, NY, USA, 2012. Curran Associates Inc.
* Garrido-Merchán and Hernández-Lobato [2020] Eduardo C. Garrido-Merchán and Daniel Hernández-Lobato. Dealing with categorical and integer-valued variables in Bayesian optimization with Gaussian processes. _Neurocomputing_, 380:20–35, 2020. ISSN 0925-2312. doi: https://doi.org/10.1016/j.neucom.2019.11.004. URL http://www.sciencedirect.com/science/article/pii/S0925231219315619.
* Ginsbourger et al. [2010] David Ginsbourger, Rodolphe Le Riche, and Laurent Carraro. _Kriging Is Well-Suited to Parallelize Optimization_, pages 131–162. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010. ISBN 978-3-642-10701-6. doi: 10.1007/978-3-642-10701-6_6. URL https://doi.org/10.1007/978-3-642-10701-6_6.
* Wang et al. [2019] Jialei Wang, Scott C. Clark, Eric Liu, and Peter I. Frazier. Parallel Bayesian global optimization of expensive functions, 2019.
* Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_, 12:2825–2830, 2011.
* Ye [1998] Kenny Q. Ye. Orthogonal column Latin hypercubes and their application in computer experiments. _Journal of the American Statistical Association_, 93(444):1430–1439, 1998. doi: 10.1080/01621459.1998.10473803. URL https://www.tandfonline.com/doi/abs/10.1080/01621459.1998.10473803.
* Fletcher [1987] R. Fletcher. _Practical Methods of Optimization (2nd Ed.)_. Wiley-Interscience, USA, 1987. ISBN 0471915475.
* Storn and Price [1997] Rainer Storn and Kenneth Price. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. _J. of Global Optimization_, 11(4):341–359, December 1997. ISSN 0925-5001. doi: 10.1023/A:1008202821328. URL https://doi.org/10.1023/A:1008202821328.
* Tange [2020] Ole Tange. GNU Parallel 20200722 ('Privacy Shield'), July 2020. URL https://doi.org/10.5281/zenodo.3956817. GNU Parallel is a general parallelizer to run multiple serial command line programs in parallel without changing them.
# Independent Control and Path Planning of Microswimmers with a Uniform Magnetic Field

###### Abstract

Artificial bacterial flagella (ABFs) are magnetic helical micro-swimmers that can be remotely controlled via a uniform, rotating magnetic field. Previous studies have used the heterogeneous response of microswimmers to external magnetic fields to achieve independent control. Here we introduce analytical and reinforcement learning control strategies for path planning to a target by multiple swimmers using a uniform magnetic field. A comparison of the two algorithms shows the superiority of reinforcement learning in achieving minimal travel time to a target. The results demonstrate, for the first time, the effective independent navigation of realistic micro-swimmers with a uniform magnetic field in a viscous flow field.

###### keywords: micro-swimmers, reinforcement learning, magnetically driven

Lucas Amoudruz, Petros Koumoutsakos*

L. Amoudruz, Prof. P. Koumoutsakos
Computational Science and Engineering Laboratory, ETH Zürich, CH-8092, Switzerland. Email:<EMAIL_ADDRESS>

L. Amoudruz, Prof. P. Koumoutsakos
John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA.

## 1 Introduction

The magnetic control of micro-swimming devices [1, 2, 3, 4, 5] through micro-manipulation [6, 7], targeted drug delivery [8, 9] or convection-enhanced transport [10] has created new frontiers for bio-medicine. A particularly promising technology involves corkscrew-shaped magnetic micro-swimmers (artificial bacterial flagella) that propel themselves when subjected to a rotating magnetic field [11]. Rotating magnetic fields can form propulsive gradients, and they are arguably preferable to alternatives, such as electric fields, for in-vivo conditions [12, 13].
However, the independent, yet coordinated, control of individual ABFs is challenging, as it requires balancing the magnetic forces against the hydrodynamic interactions between the swimmers, while the employed magnetic fields are practically uniform over lengths of a few micrometers. We note that independent navigation of mm-sized micro-swimmers has been shown in [14] through experiments and simulations, while in [15] a reinforcement learning (RL) algorithm was applied to adjust the velocity of an idealized swimmer in simulations with one-way coupling to a complex flow field. Control of swimmers using two-way coupling and RL has been demonstrated with linked spheres at low Reynolds numbers [16] and for artificial fish at macro-scales [17]. Similarly, genetic algorithms have been used to navigate micro-swimmers towards high concentrations of chemicals [18]. The problem of navigating heterogeneous micro-robots via a uniform input has been studied in two dimensions on surfaces [19] and in a fluid at rest [20]. The steering of two micro-propellers along two distinct paths in three dimensions has been accomplished with the help of magnetic field gradients [21]. These advances exploited the heterogeneous response of micro-swimmers to a uniform input to achieve independent trajectories along a prescribed path. These control methods are based on short-horizon objectives (stay on the prescribed path) and do not provide the trajectory that minimizes the travel time to a target position, particularly in the presence of a background flow. In addition, strong background flows restrict the set of feasible paths for given micro-swimmers. To the best of our knowledge, the steering of multiple micron-sized swimmers towards a target in minimal time under a background flow and a uniform magnetic field has not been reported before. In this work, we present two methods to independently guide two micro-ABFs towards a single target in the presence of a uniform magnetic field.
The two methods rely on simulations of swimming ABFs using an ordinary differential equation (ODE) model. The model is calibrated with the method of dissipative particle dynamics (DPD) [22, 23], taking into account the particular geometry of the swimmers and their interactions with the viscous fluid. We first present a semi-analytical solution for the simple yet instructive setup of multiple, geometrically distinct ABFs in free space with zero background flow. This result provides an understanding of the design constraints on the ABFs necessary for independent control, and of how their geometric characteristics relate to their travel time. We then employ RL to control multiple ABF trajectories in a broad range of flow conditions, including a non-zero background flow.

## 2 Artificial bacterial flagella

The ABFs are modeled as microscopic rigid bodies of length $l$ with position $\mathbf{x}$ and orientation $q$ (represented by a quaternion), immersed in a viscous fluid and subjected to a rotating, uniform magnetic field. We estimate that the magnetic and hydrodynamic interactions between ABFs are orders of magnitude smaller than those due to the magnetic field for dilute systems (see supplementary material), and we ignore inertial effects due to the low Reynolds number ($\mathrm{Re}\approx 10^{-3}$). Following this approximation, the system is fully described by the position and orientation of the ABFs. Additionally, the linear and angular velocities of the ABF, $\mathbf{V}^{b}$ and $\bm{\Omega}^{b}$, are directly linked to the external force and torque, $\mathbf{F}^{b}$ and $\mathbf{T}^{b}$, via the mobility matrix [24],

$\begin{bmatrix}\mathbf{V}^{b}\\ \bm{\Omega}^{b}\end{bmatrix}=\begin{bmatrix}\Delta&Z\\ Z^{\text{T}}&\Gamma\end{bmatrix}\begin{bmatrix}\mathbf{F}^{b}\\ \mathbf{T}^{b}\end{bmatrix},$ (1)

where the superscript b indicates that the quantity is expressed in the ABF frame of reference, for which $\Delta$, $Z$ and $\Gamma$ are diagonal.
The matrices $\Gamma$ and $Z$ map the applied torque to the angular and linear velocity, respectively. The ABF is propelled by torque applied through a magnetic field, and we assume that it can swim only in the direction of its main axis, so that $Z$ has only one non-zero entry ($Z_{11}$). The coefficients of the mobility matrix are often estimated by empirical formulas for low Reynolds number flows [25]. Here we estimate the components of the mobility matrix for the specific ABF by conducting flow simulations using DPD [22, 23], which we validate against the experimental data of [8] (see supplementary material). We remark that the shape (pitch, diameter, length, thickness) of the ABF influences the elements of these matrices, and the present approach allows these geometries to be accounted for. The ABF with a magnetic moment $\mathbf{m}$ is subjected to a uniform magnetic field $\mathbf{B}$ and hence experiences a torque

$\mathbf{T}=\mathbf{m}\times\mathbf{B}.$ (2)

No other external force is applied to the ABF, hence $\mathbf{F}=\mathbf{0}$. Combining eq. 1 with the kinematic equations for a rigid body gives the following system of ODEs:

$\dot{\mathbf{x}}=\mathbf{V},$ (3a)
$\dot{q}=\frac{1}{2}q\otimes\hat{\bm{\Omega}},$ (3b)
$\mathbf{V}^{b}=Z\mathbf{T}^{b},$ (3c)
$\bm{\Omega}^{b}=\Gamma\mathbf{T}^{b},$ (3d)

where $\otimes$ denotes the quaternion product, and $\hat{\bm{\Omega}}$ the pure quaternion formed by the vector $\bm{\Omega}$.
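A minimal sketch of integrating eqs. (2)–(3) with a fixed-step fourth-order Runge-Kutta scheme follows (the mobility coefficients, magnetic moment, and field below are illustrative, not the calibrated values of the paper; the quaternion update uses the body-frame angular velocity convention):

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate(q, v):
    """Rotate vector v from the body frame to the lab frame: q (0,v) q*."""
    return q_mul(q_mul(q, np.concatenate([[0.0], v])), q_conj(q))[1:]

# Illustrative parameters: propulsion mobility Z11, rotational mobilities,
# body-frame magnetic moment, and a field rotating in the y-z plane.
Z11, Gamma = 0.1, np.array([1.0, 2.0, 2.0])
m_b = np.array([0.0, 1.0, 0.0])
B_func = lambda t: np.array([0.0, np.cos(t), np.sin(t)])

def rhs(t, state):
    x, q = state[:3], state[3:]
    q = q / np.linalg.norm(q)
    m = rotate(q, m_b)                        # magnetic moment in the lab frame
    T = np.cross(m, B_func(t))                # magnetic torque, eq. (2)
    T_b = rotate(q_conj(q), T)                # torque in the body frame
    V_b = np.array([Z11 * T_b[0], 0.0, 0.0])  # swims along its main axis only
    Om_b = Gamma * T_b                        # eq. (3d)
    V = rotate(q, V_b)                        # lab-frame velocity, eq. (3a)
    dq = 0.5 * q_mul(q, np.concatenate([[0.0], Om_b]))  # eq. (3b)
    return np.concatenate([V, dq])

def rk4_step(t, s, dt):
    k1 = rhs(t, s)
    k2 = rhs(t + dt / 2, s + dt / 2 * k1)
    k3 = rhs(t + dt / 2, s + dt / 2 * k2)
    k4 = rhs(t + dt, s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.concatenate([np.zeros(3), [1.0, 0.0, 0.0, 0.0]])  # origin, identity q
dt = 0.01
for step in range(1000):
    state = rk4_step(step * dt, state, dt)
    state[3:] /= np.linalg.norm(state[3:])  # keep the quaternion normalized
```

With the moment initially perpendicular to the helical axis and the field rotating in the y-z plane, the sketch reproduces the qualitative behavior described below: a torque about the x axis that propels the swimmer along its main axis.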
The transformations between the laboratory frame of reference and that of the ABF are given by:

$\mathbf{T}^{b}=R(q)\mathbf{T},$ (4a)
$\mathbf{m}=R(q^{\star})\mathbf{m}^{b},$ (4b)
$\mathbf{V}=R(q^{\star})\mathbf{V}^{b},$ (4c)
$\bm{\Omega}=R(q^{\star})\bm{\Omega}^{b},$ (4d)

where $q^{\star}$ is the conjugate of $q$ and $R(q)$ is the rotation matrix corresponding to the rotation by the quaternion $q$ [26]. The system of differential equations (2) and (3) is advanced in time with a fourth-order Runge-Kutta integrator. We note that when simulating multiple non-interacting ABFs in free space, we use the above ODE system for each swimmer, with the common magnetic field but different mobility coefficients and magnetic moments.

## 3 Forward velocity

ABFs were designed to swim under a rotating, uniform magnetic field [27, 11]. We first study this scenario by applying the field $\mathbf{B}(t)=B\left(0,\cos{\omega t},\sin{\omega t}\right)$ to ABFs initially aligned with the $x$ axis of the laboratory frame. Note that in later sections the magnetic field is able to rotate in any direction, so that the swimmers can navigate in three dimensions. We consider two ABFs with the same length but different pitches and magnetic moments, as shown in fig. 1. In both cases, the magnetic moment is perpendicular to the helical axis of the ABF. Under these conditions, by symmetry of the problem, the swimmers swim along the $x$ axis. The difference in pitch results in different mobility matrix coefficients, which, together with the different magnetic moments, lead to distinct propulsion velocities for the two ABFs. For each ABF velocity we distinguish a linear and a non-linear regime with respect to the frequency of the magnetic field.
First, the ABF rotates at the same frequency as the magnetic field, and its forward velocity increases linearly with the frequency of the magnetic field, consistent with the low Reynolds number approximation [28, 8, 29, 3]. In the non-linear regime, the magnetic torque is no longer able to sustain the same frequency of rotation as the magnetic field. The onset of non-linearity depends on the geometry and magnetic moment of the ABF as well as on the imposed magnetic field. Indeed, the magnitude of the magnetic torque is bounded, while that of the hydrodynamic torque increases linearly with the ABF angular velocity $\Omega$. The torque imbalance at high rotation frequencies causes the ABF to slip, resulting in an alternating forward and backward motion (see supplementary material). Increasing the frequency further increases the effective slip and accordingly decreases the forward velocity. The two regimes are separated by the step-out frequency $\omega_{c}$, corresponding to the maximum forward velocity of the ABF.

Figure 1: Left: Dimensionless time-averaged forward velocity of two ABFs, differing in shape and magnetic moment, against the field rotation frequency (in units of the step-out frequency of the first swimmer, $\omega_{c,1}$). Right: The ABF geometries. The arrows represent the magnetic moments of the ABFs.

The differences in propulsion velocities of the ABFs can be exploited to control their trajectories independently. The slope $V/\omega$ in the linear regime depends only on the shape of the ABF. The step-out frequency depends on both the shape and the magnetic moment (it can also be changed by varying the surface wettability of the ABF [30]). These two properties can be chosen such that the forward velocities of two ABFs react differently to the magnetic rotation frequency (fig. 1). By changing $\omega$, it is then possible to control the relative velocities of the two ABFs: one is faster than the other in one regime, while the opposite holds in another regime.
This simple observation constitutes the key idea for the independent control of several ABFs, even with a uniform magnetic field. We remark that, while this potential has been previously identified [28, 30, 31, 32, 33], the control of similar systems has been performed only in the simple case of free space, non-interacting propellers, and no background flow [21, 20]. To the best of our knowledge, this is the first time that such independent controlled navigation of multiple micro-swimmers is materialised in three dimensions with a complex background flow. In the following sections we propose two methods to tackle the problem of steering ABFs towards a target in a minimal amount of time.

## 4 Independent control I: semi-analytical solution

In the absence of an external flow field, we derive a semi-analytical strategy for the navigation of $N$ ABFs towards a particular target. Each ABF has a distinct magnetic moment and, without any loss of generality, we set the target position of all swimmers to the origin and define the initial position of the $i^{\text{th}}$ ABF as $\mathbf{x}^{(i)}$. We assume that the time required by an ABF to align with the rotation direction of the field is much smaller than $|\mathbf{x}^{(i)}|/v$, where $v$ is the typical forward velocity of the ABF. The proposed strategy consists of gathering all ABFs along one direction $\mathbf{n}_{k}$ at a time, such that $\mathbf{x}^{(i)}\cdot\mathbf{n}_{k}=0$, $i=1,2,\dots,N$, after phase $k$. We choose a sequence of orthogonal directions, $\mathbf{n}_{k}\cdot\mathbf{n}_{k^{\prime}}=\delta_{kk^{\prime}}$. The choice of the orientations of $\mathbf{n}_{k}$ is not restricted to the basis vectors of the laboratory frame and is described at the end of this section. In three dimensions, the strategy consists of three phases, $k=1,2,3$, applied until all ABFs have reached their target: they first gather on a plane, then on a line, and finally at the target.
All ABFs are gathered along a given direction $\mathbf{n}_{k}$ by exploiting their different forward responses as we alternate the rotation frequency of the magnetic field. More specifically, for $N$ ABFs, the field rotates in the direction $\mathbf{n}_{k}$ for $t_{j}$ time units at frequency $\omega_{c,j}$, $j=1,2,\dots,N$, where $\omega_{c,j}$ is the step-out frequency of the $j^{\text{th}}$ swimmer. We define the velocity matrix with elements $U_{ij}=V_{i}\left(\omega_{c,j}\right)$, denoting the velocity of swimmer $i$ when the field rotates at the step-out frequency of swimmer $j$. We can relate the above quantities to the (signed) distances $d_{i}$ covered by the ABFs as $d_{i}=\sum\limits_{j=1}^{N}s_{j}t_{j}U_{ij},$ where $s_{j}\in\{-1,1\}$ determines whether the field rotates clockwise or counterclockwise. Equivalently, the vector form of the above is $\mathbf{d}=U\mathbf{\beta}$, where $\beta_{j}=t_{j}s_{j}$. Setting $d_{i}=\mathbf{x}^{(i)}\cdot\mathbf{n}_{k}$, we can invert this linear system of equations for each phase $k$ and obtain the times spent at each step-out frequency, $\mathbf{\beta}=U^{-1}X\mathbf{n}_{k}$, where we have set $X_{ij}=x^{(i)}_{j}$. We emphasize that this result holds only if the velocity matrix is invertible, which restricts the designs of the ABFs for which independent control is achievable. The total time spent in phase $k$ is then given by $T(\mathbf{n}_{k})=\sum\limits_{i=1}^{N}t_{i}=\sum\limits_{i=1}^{N}|\beta_{i}|=\left\lVert U^{-1}X\mathbf{n}_{k}\right\rVert_{1}.$ The yet unknown directions $\mathbf{n}_{k}$, $k=1,2,3$, are chosen to minimize the total travel time. The directions are parameterized as $\mathbf{n}_{k}=R(\phi,\theta,\psi)\mathbf{e}_{k}$, $k=1,2,3$, where $R(\phi,\theta,\psi)$ is the rotation matrix given by the three Euler angles $\phi$, $\theta$ and $\psi$. Note that this choice of handedness of the three directions does not influence the final result.
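Given a velocity matrix $U$ and the initial positions, the durations spent at each step-out frequency follow directly from the linear system above; a minimal numpy sketch with illustrative values:

```python
import numpy as np

# Illustrative velocity matrix: U[i, j] is the velocity of swimmer i when the
# field rotates at the step-out frequency of swimmer j (must be invertible).
U = np.array([[1.0, 0.4],
              [0.7, 1.2]])
# Rows of X are the initial positions x^(i); the target is at the origin.
X = np.array([[3.0, -1.0, 2.0],
              [-2.0, 4.0, 0.5]])

def phase_times(U, X, n_k):
    """Signed durations beta and total time T(n_k) for gathering along n_k."""
    beta = np.linalg.solve(U, X @ n_k)  # beta_j = s_j * t_j
    return beta, np.abs(beta).sum()     # t_j = |beta_j|, T = ||U^-1 X n_k||_1

n1 = np.array([1.0, 0.0, 0.0])
beta, T = phase_times(U, X, n1)
# Sanity check: the resulting signed distances recover x^(i) . n_k.
assert np.allclose(U @ beta, X @ n1)
```

The sign of each $\beta_j$ gives the rotation sense $s_j$, and its absolute value the duration $t_j$ at the corresponding step-out frequency.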
The optimal angles satisfy $\phi^{\star},\theta^{\star},\psi^{\star}=\operatorname*{arg\,min}\limits_{\phi,\theta,\psi}{\sum\limits_{k=1}^{3}T\left(R(\phi,\theta,\psi)\mathbf{e}_{k}\right)}.$ We solve the above minimization problem numerically with the derandomised evolution strategy with covariance matrix adaptation (CMA-ES) [34] (see supplementary material for the configuration of the optimizer).

## 5 Independent control II: Reinforcement Learning

We now employ an RL approach to solve the problem introduced in section 4. Each of the $N$ ABFs is initially placed at a random position $\mathbf{x}_{i}\sim\mathcal{N}\left(\mathbf{x}_{i}^{0},\sigma\right)$, $i=1,2,\dots,N$. The RL agent controls the rotation frequency and direction of the magnetic field, with the goal of bringing all ABFs within a small distance (here two body lengths, $d=2l$) of the target at the origin. This small distance is justified by the assumption of non-interacting ABFs. The agent sets the direction and magnitude of the magnetic field frequency at every fixed time interval. An episode is terminated if either of two conditions occurs: (a) all ABFs have reached the target within the small distance $d$, or (b) the simulation time exceeds a maximum time $T_{\text{max}}$. The positions $\mathbf{x}_{i}$ and orientations $q_{i}$ of the ABFs describe the state $s$ of the environment in the RL framework. The action performed by the agent every time interval $\Delta t$ encodes the magnetic field rotation frequency and orientation for the next interval. The reward of the system is designed so that all ABFs reach the target and the travel time is minimized. Additionally, a shaping reward term [35] is added to improve the learning process. The training is performed using VRACER, the off-policy actor-critic RL method described in [36]. More details on the method can be found in the supplementary material.
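The episodic setup described above can be sketched as a small environment; the skeleton below is our own illustrative structure (not the authors' VRACER code), with a caller-supplied `step_fn` standing in for the ODE integration, and shows the state, action interface, termination conditions, and a shaped reward:

```python
import numpy as np

class ABFEnv:
    """Illustrative episodic environment for N ABFs steered by one field."""

    def __init__(self, x0, l, dt, t_max, step_fn):
        self.x0, self.l = np.asarray(x0, float), l
        self.dt, self.t_max = dt, t_max
        self.step_fn = step_fn  # advances positions under the chosen action
        self.reset()

    def reset(self, sigma=0.0, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        self.x = self.x0 + sigma * rng.standard_normal(self.x0.shape)
        self.t = 0.0
        return self.x.copy()  # (orientations omitted for brevity)

    def step(self, action):
        # `action` encodes the field rotation frequency and axis for the
        # next interval of length dt.
        prev = np.linalg.norm(self.x, axis=1)      # distances to the origin
        self.x = self.step_fn(self.x, action, self.dt)
        self.t += self.dt
        dist = np.linalg.norm(self.x, axis=1)
        success = bool(np.all(dist < 2 * self.l))  # all within d = 2l
        timeout = self.t >= self.t_max
        # Shaped reward: progress towards the target minus a time penalty,
        # with a terminal bonus on success (all values are illustrative).
        reward = float((prev - dist).sum()) - self.dt
        if success:
            reward += 10.0
        return self.x.copy(), reward, success or timeout
```

The progress term rewards any motion towards the target at every step, which is the kind of shaping [35] that densifies an otherwise sparse terminal reward.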
## 6 Reaching the targets

In this section, we demonstrate the effectiveness of the two methods introduced in sections 4 and 5. We first consider 2 ABFs in free space with zero background flow. Figure 2 shows the distance of the ABFs to their target over time, and the corresponding magnetic field rotation frequency, for both methods. In both cases, the ABFs successfully reach their target. Interestingly, the rotation frequencies chosen by the RL agent correspond to the step-out frequencies of the ABFs. Indeed, these frequencies allow the largest absolute velocity difference between the ABFs, so it is consistent that they are part of the fastest solution found by the RL method. Furthermore, the RL-trained swimmer was about $25\%$ faster than the semi-analytical swimmer. We remark that the RL solution amounts to first blocking the forward motion of one swimmer while the other continues swimming (see fig. 3). The blocked swimmer is first reoriented such that its magnetic moment is orthogonal to the plane of the magnetic field rotation, so that the resulting magnetic torque applied to this swimmer is zero. On the other hand, the method presented in section 4 makes both ABFs swim at all times, even if one of them must move further from its target. In such situations, the “blocking” method found by RL is advantageous over the other method.

Figure 2: Distance to target of the two controlled ABFs (in units of body length $l$) against dimensionless time, in free space with zero background flow, and the corresponding magnetic field rotation frequency, where $\omega_{c,1}$ is the step-out frequency of the first swimmer.

Figure 3: Trajectories of the ABFs from their initial positions (ABF representations) to the target area (sphere) obtained with the two methods in three dimensions: semi-analytical (dotted lines) and RL (solid lines). The arrows show the successive axes of rotation of the magnetic field.
The size of the ABFs has been scaled up by a factor of $7$ for visualization purposes. We now employ the RL method in the case of 2 ABFs swimming in a background flow with non-zero velocity. The assumptions required for deriving the semi-analytical approach are violated, and therefore we do not use that approach in this case. In the presence of a background flow $\mathbf{u}_{\infty}$, eqs. 4c and 4d become $\displaystyle\mathbf{V}$ $\displaystyle=R(q^{\star})\mathbf{V}^{b}+\mathbf{u}_{\infty}(\mathbf{x}),$ $\displaystyle\bm{\Omega}$ $\displaystyle=R(q^{\star})\bm{\Omega}^{b}+\frac{1}{2}\nabla\times\mathbf{u}_{\infty}(\mathbf{x})+\frac{\lambda^{2}-1}{\lambda^{2}+1}\mathbf{p}\times\left(E(\mathbf{x})\mathbf{p}\right),$ where we approximate the rotational component by the effect of the flow on an axisymmetric ellipsoid of aspect ratio $\lambda$ (Jeffery orbits). Here $E(\mathbf{x})=\left(\nabla\mathbf{u}_{\infty}(\mathbf{x})+\nabla\mathbf{u}_{\infty}^{T}(\mathbf{x})\right)/2$ is the deformation rate tensor of the background flow evaluated at the swimmer’s position and $\mathbf{p}=R(q^{\star})\mathbf{e}_{x}$ is the orientation of the ellipsoid. We used $\lambda=2$ in the subsequent simulations. The background flow is set to the initial conditions of the Taylor-Green vortex, $\mathbf{u}_{\infty}(\mathbf{r})=\begin{bmatrix}A\cos{ax}\sin{by}\sin{cz}\\\ B\sin{ax}\cos{by}\sin{cz}\\\ C\sin{ax}\sin{by}\cos{cz}\end{bmatrix},$ (5) with $A=B=C/2=V_{1}(\omega_{c,1})$ and $a=b=-c=2\pi/50l$. With these parameters, the maximum velocity of the background flow is larger than the maximum swimming speed of the ABFs. Figure 4: Left: Distance to target of the two controlled ABFs (in units of body length $l$) against dimensionless time in free space with the background flow described by eq. 5, and the corresponding magnetic field rotation frequency, where $\omega_{c,1}$ is the step-out frequency of the first swimmer.
Right: Trajectories of the ABFs in the non-zero background flow obtained with the RL method. The arrows represent the velocity field and the colors represent the magnitude of the vorticity field. The flow field is only shown for distances less than $4l$ from the trajectories, where $l$ is the length of the swimmers. The distances between the swimmers and the target over time are shown for the RL method in fig. 4. Despite the background flow perturbation, the RL method successfully navigates the ABFs to their target. The magnetic field exhibits behavior similar to that in the free-space case: its rotation frequency oscillates between the step-out frequencies of the two swimmers and never exceeds the higher of these frequencies, beyond which the swimming performance would degrade considerably. The trajectories of the ABFs seem to make use of the velocity field to achieve a lower travel time: fig. 4 shows that the trajectories tend to be parallel to the velocity field. The RL method not only found a solution, but also exploited its environment to reduce the travel time. ## 7 Robustness of the RL policy The robustness of the RL method is tested against two external perturbations, unseen during the training phase. In both cases, the robustness of the method is measured in terms of the success rate (the expected ratio between the number of successful trajectories and the number of attempts). First, a flow perturbation $\mathbf{\delta u}(\mathbf{r})=\varepsilon\mathbf{u}_{\infty}(\mathbf{r}/p)$ is added to the background flow described in the previous section, where $\varepsilon$ controls the strength of the perturbation and $p$ controls its wavelength relative to that of the original flow. Figure 5 shows the success rate of the RL approach against the perturbation strength $\varepsilon$ for different $p$.
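The background flow of eq. (5) is straightforward to reproduce. The sketch below uses illustrative values $V_{1}=l=1$ (only the ratios matter here) and verifies by finite differences that the coefficient choice $A=B=C/2$, $a=b=-c$ makes the field divergence-free, since $Aa+Bb+Cc=0$.

```python
import numpy as np

# Taylor-Green vortex of eq. (5); V1 and l are illustrative placeholders
# for the step-out speed of the first swimmer and its body length.
l, V1 = 1.0, 1.0
A = B = V1
C = 2.0 * V1                      # A = B = C/2 = V1
a = b = 2.0 * np.pi / (50.0 * l)
c = -a                            # a = b = -c

def u_inf(r):
    """Background velocity field of eq. (5)."""
    x, y, z = r
    return np.array([
        A * np.cos(a * x) * np.sin(b * y) * np.sin(c * z),
        B * np.sin(a * x) * np.cos(b * y) * np.sin(c * z),
        C * np.sin(a * x) * np.sin(b * y) * np.cos(c * z),
    ])

def divergence(r, h=1e-5):
    """Central finite-difference divergence, used as a consistency check."""
    return sum(
        (u_inf(r + h * e)[i] - u_inf(r - h * e)[i]) / (2.0 * h)
        for i, e in enumerate(np.eye(3))
    )
```

Analytically the divergence is $-(Aa+Bb+Cc)\sin ax\sin by\sin cz$, which vanishes identically for these parameters, so the flow is incompressible as a Taylor-Green vortex should be.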
For large wavelengths ($p=2$), the RL agent successfully steers the ABFs to their target in more than $90\%$ of the cases when the perturbation strength is less than $20\%$ of the original flow. In contrast, the success rate degrades more sharply for smaller wavelengths ($p=1/2$), suggesting that the method is less robust to short-wavelength perturbations. The RL policy seems more robust to perturbations with the same wavelength as the original flow ($p=1$) at large perturbation strengths: the success rate remains above $30\%$ even for large perturbations. Figure 5: Left: Success rate of the RL method in guiding the swimmers to their target against the flow perturbation strength $\varepsilon$, for different wave numbers $p=1/2$, $p=1$, and $p=2$. Right: Success rate of the RL method in steering the swimmers to their target against the thermal fluctuation strength $k_{B}T/k_{B}T_{0}$. At small length scales, micro-swimmers are subjected to thermal fluctuations. We investigate the robustness of the RL policy (trained with the background flow of eq. 5) for swimmers subjected to both thermal noise and this background flow. The thermal fluctuations are modeled as an additive stochastic term in the linear and angular velocities of each swimmer, following the Einstein relation with the mobility tensor given by eq. 1. Defining the generalized undisturbed velocity $\bar{\mathcal{V}}=(\mathbf{V},\bm{\Omega})$, the resulting stochastic generalized velocities satisfy $\displaystyle\mathcal{V}$ $\displaystyle=\bar{\mathcal{V}}+\delta\mathcal{V},$ $\displaystyle\langle\delta\mathcal{V}_{i}\rangle$ $\displaystyle=0,\;\;i=1,2,\dots,6,$ $\displaystyle\langle\delta\mathcal{V}_{i},\delta\mathcal{V}_{j}\rangle$ $\displaystyle=k_{B}TM_{ij},\;\;i,j=1,2,\dots,6,$ where $M$ is the mobility tensor and $k_{B}T$ is the temperature of the system, in energy units. The above property is achieved by adding scaled Gaussian random noise with zero mean to the velocities at every time step of the simulation.
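A minimal sketch of this noise term, with our own variable names and an illustrative diagonal mobility tensor: fluctuations $\delta\mathcal{V}$ with $\langle\delta\mathcal{V}\rangle=0$ and covariance $k_{B}T\,M$ can be drawn by scaling independent Gaussians with a Cholesky factor of $k_{B}T\,M$ (assumed symmetric positive definite). In a full simulation the covariance would also carry the time-step scaling.

```python
import numpy as np

kBT = 2.0
# Illustrative 6x6 mobility tensor (diagonal, arbitrary units); the real M
# comes from eq. (1) and is generally non-diagonal.
M = np.diag([1.0, 1.0, 1.0, 0.5, 0.5, 0.5])
L = np.linalg.cholesky(kBT * M)   # L @ L.T = kBT * M

rng = np.random.default_rng(42)

def thermal_kick():
    """One noise sample added to the generalized velocity (V, Omega)."""
    return L @ rng.standard_normal(6)

# Empirical check of the prescribed covariance over many samples.
samples = rng.standard_normal((50_000, 6)) @ L.T
cov = np.cov(samples.T)
```

The sample covariance `cov` converges to `kBT * M`, reproducing the moments stated above.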
The success rate of the policy is shown in fig. 5 for various temperatures $k_{B}T$, in units of the room temperature $k_{B}T_{0}$. As expected, large thermal noise causes the policy to fail at its task. Nevertheless, this failure only occurs at relatively high temperatures: the success rate falls below $50\%$ only for $k_{B}T>25k_{B}T_{0}$, which is well above the normal operating conditions of ABFs. At temperatures below $2k_{B}T_{0}$, the RL method sustains a success rate above $99\%$. We remark that this robustness is achieved even though the policy was trained with $k_{B}T=0$. ## 8 Conclusion We have presented two methods to guide multiple ABFs individually towards targets with a uniform magnetic field. The semi-analytical method allows us to understand the basic mechanisms that enable independent control, and from it we derive the necessary condition for the independent control of multiple ABFs: their velocity matrix must be invertible, a condition that can be accommodated by suitably choosing the geometries of the swimmers. This result may help to optimize the shapes of ABFs to further reduce the travel time. The RL approach can control multiple ABFs in quiescent flow as well as in the presence of a complex background flow. Additionally, this approach is resilient to small flow perturbations and to thermal noise. When the background flow vanishes, the RL method recovers behavior very similar to that of the first method: the rotation frequency alternates between the step-out frequencies of the swimmers. Furthermore, the RL method achieves a lower travel time than the first method by blocking one swimmer while the other is swimming. Steering an increased number of swimmers requires longer travel times according to the first method. We thus expect that applying the RL approach to more than two swimmers requires longer training times and might become prohibitively expensive as the number of swimmers increases.
Possible solutions to this problem might include pre-training the RL agent with the policy found by the semi-analytical method. The current work focused on the simplified case of non-interacting swimmers. In practice, the ABFs may interact hydrodynamically and magnetically with each other, encounter obstacles, evolve in confined geometries or experience time-varying flows. Nevertheless, we expect the RL method to be a good candidate to overcome these variants, in the same way as it naturally handled the addition of a background flow. Acknowledgments We acknowledge insightful discussions with Guido Novati (ETHZ) and his technical support for the usage of smarties. We acknowledge support by the European Research Council (ERC Advanced Grant 341117). Conflicts of Interest The authors declare no financial or commercial conflicts of interest. ## References * [1] Q. Cao, X. Han, L. Li, _Lab Chip_ 2014, _14_, 2762. * [2] L. Yang, L. Zhang, _Annual Review of Control, Robotics, and Autonomous Systems_ 2021, _4_, 1. * [3] P. Tierno, R. Golestanian, I. Pagonabarraga, F. Sagués, _The Journal of Physical Chemistry B_ 2008, _112_, 51, 16525. * [4] Y. Liu, D. Ge, J. Cong, H.-G. Piao, X. Huang, Y. Xu, G. Lu, L. Pan, M. Liu, _Small_ 2018, _14_, 17, 1704546. * [5] P. Liao, L. Xing, S. Zhang, D. Sun, _Small_ 2019, _15_, 36, 1901197. * [6] L. Zhang, K. E. Peyer, B. J. Nelson, _Lab on a Chip_ 2010, _10_, 17, 2203. * [7] Y. Yu, J. Guo, Y. Wang, C. Shao, Y. Wang, Y. Zhao, _ACS Applied Materials & Interfaces_ 2020, _12_, 14, 16097. * [8] R. Mhanna, F. Qiu, L. Zhang, Y. Ding, K. Sugihara, M. Zenobi-Wong, B. J. Nelson, _Small_ 2014, _10_, 10, 1953. * [9] P. Sharan, A. Nsamela, S. C. Lesher-Pérez, J. Simmchen, _Small_ 2021, 2007403. * [10] S. Schuerle, A. P. Soleimany, T. Yeh, G. M. Anand, M. Häberli, H. E. Fleming, N. Mirkhani, F. Qiu, S. Hauert, X. Wang, B. J. Nelson, S. N. Bhatia, _Science Advances_ 2019, _5_, 4, 1. * [11] L. Zhang, J. J. Abbott, L. Dong, B. E. Kratochvil, D. Bell, B. J. Nelson, _Applied Physics Letters_ 2009, _94_, 6, 064107. * [12] H. Gu, Q. Boehler, D. Ahmed, B. J. Nelson, _Science Robotics_ 2019, _4_, 35. * [13] K. Bente, A. Codutti, F. Bachmann, D. Faivre, _Small_ 2018, _14_, 29, 1704374. * [14] D. Wong, E. B. Steager, V. Kumar, _IEEE Robotics and Automation Letters_ 2016, _1_, 1, 554. * [15] S. Colabrese, K. Gustavsson, A. Celani, L. Biferale, _Physical Review Letters_ 2017, _118_, 15, 1. * [16] A. C. H. Tsang, P. W. Tong, S. Nallan, O. S. Pak, _Phys. Rev. Fluids_ 2020, _5_, 074101. * [17] S. Verma, G. Novati, P. Koumoutsakos, _Proceedings of the National Academy of Sciences of the United States of America_ 2018, _115_, 23, 5849. * [18] B. Hartl, M. Hübl, G. Kahl, A. Zöttl, _Proceedings of the National Academy of Sciences_ 2021, _118_, 19. * [19] S. Floyd, E. Diller, C. Pawashe, M. Sitti, _The International Journal of Robotics Research_ 2011, _30_, 13, 1553. * [20] P. J. Vach, S. Klumpp, D. Faivre, _Journal of Physics D: Applied Physics_ 2015, _49_, 6, 065003. * [21] E. Diller, J. Giltinan, M. Sitti, _The International Journal of Robotics Research_ 2013, _32_, 5, 614. * [22] P. Espanol, P. Warren, _EPL (Europhysics Letters)_ 1995, _30_, 4, 191. * [23] D. Alexeev, L. Amoudruz, S. Litvinov, P. Koumoutsakos, _Comput. Phys. Commun._ 2020, 107298. * [24] J. Happel, H. Brenner, _Low Reynolds number hydrodynamics: with special applications to particulate media_, volume 1, Springer Science & Business Media, 1981. * [25] S. Kim, S. J. Karrila, _Microhydrodynamics: principles and selected applications_, Courier Corporation, 2013. * [26] B. Graf, _arXiv preprint arXiv:0811.2889_ 2008. * [27] A. Ghosh, P. Fischer, _Nano Letters_ 2009, _9_, 6, 2243. * [28] K. E. Peyer, L. Zhang, B. J. Nelson, _Nanoscale_ 2013, _5_, 4, 1259. * [29] D. Li, M. Jeong, E. Oren, T. Yu, T. Qiu, _Robotics_ 2019, _8_, 4, 87. * [30] X. Wang, C. Hu, L. Schurz, C. De Marco, X. Chen, S. Pané, B. J. Nelson, _ACS Nano_ 2018, _12_, 6, 6210. * [31] F. Bachmann, K. Bente, A. Codutti, D. Faivre, _Physical Review Applied_ 2019, _11_, 3, 034039. * [32] I. S. Khalil, A. F. Tabak, Y. Hamed, M. E. Mitwally, M. Tawakol, A. Klingner, M. Sitti, _Advanced Science_ 2018, _5_, 2, 1700461. * [33] I. S. Khalil, A. F. Tabak, Y. Hamed, M. Tawakol, A. Klingner, N. El Gohary, B. Mizaikoff, M. Sitti, _IEEE Robotics and Automation Letters_ 2018, _3_, 3, 1703. * [34] N. Hansen, S. D. Müller, P. Koumoutsakos, _Evolutionary Computation_ 2003, _11_, 1, 1. * [35] A. Y. Ng, D. Harada, S. Russell, in _ICML_, volume 99, 1999, 278–287. * [36] G. Novati, P. Koumoutsakos, in _Proceedings of the 36th International Conference on Machine Learning_, 2019.
# Open Quantum Dynamics Theory for Non-Equilibrium Work: Hierarchical Equations of Motion Approach Souichi Sakamoto and Yoshitaka Tanimura∗ Department of Chemistry, Graduate School of Science, Kyoto University, Kyoto 606-8502, Japan ###### Abstract A system–bath (SB) model is considered to examine the Jarzynski equality in the fully quantum regime. In our previous paper [J. Chem. Phys. 153 (2020) 234107], we carried out “exact” numerical experiments using hierarchical equations of motion (HEOM) in which we demonstrated that the SB model describes behavior that is consistent with the first and second laws of thermodynamics and that the dynamics of the total system are time irreversible. The distinctive quantity in the Jarzynski equality is the “work characteristic function (WCF)”, $\langle\exp[-\beta W]\rangle$, where $W$ is the work performed on the system and $\beta$ is the inverse temperature. In the present investigation, we consider the definitions based on the partition function (PF) and on the path, and numerically evaluate the WCF using the HEOM to determine a method for extending the Jarzynski equality to the fully quantum regime. We show that using the PF-based definition of the WCF, we obtain a result that is entirely inconsistent with the Jarzynski equality, while if we use the path-based definition, we obtain a result that approximates the Jarzynski equality, but may not be consistent with it. In thermodynamics, work, $W$, and heat, $\Delta Q$, are thermodynamic process quantities, while the internal energy, $\Delta U$, is an extensive quantity that cannot be measured directly.
For a classical system beginning in an equilibrium state, it has been found that the work done under an arbitrary mechanical operation is related to the equilibrium free energy in accordance with the Jarzynski equality[1, 2, 3, 4, 5, 9, 6, 7, 8, 10, 13, 11, 14, 12]: $-\ln\left(\langle\exp[-\beta W(\tau)]\rangle\right)/\beta=\Delta F_{A}(\tau)$. Here, $\beta\equiv 1/k_{\mathrm{B}}T$ is the inverse temperature with the Boltzmann constant $k_{\mathrm{B}}$, $\Delta F_{A}(\tau)$ is the change in the free energy of the system, $W(\tau)$ is the non-equilibrium work, and $\langle\ldots\rangle$ is the ensemble average over all phase space trajectories under the time-dependent external perturbation from time $t=0$ to $\tau$. Although investigating this equality in the classical regime is straightforward in both theoretical and experimental contexts, doing so in the quantum regime remains challenging,[23, 22, 21, 20, 19, 18, 17, 16, 15] because the dynamics of a small quantum system itself are reversible in time and therefore the system cannot reach thermal equilibrium on its own without a system–bath interaction: we cannot assume a canonical distribution as the equilibrium state for either the system or the heat bath, due to the presence of the system–bath interaction. In the present paper, we numerically evaluate the work characteristic function (WCF), $\langle\exp[-\beta W(\tau)]\rangle$, and $\Delta F_{A}(\tau)$, with the goal of extending the Jarzynski equality to the fully quantum regime. A commonly employed model for this kind of investigation is described by a system–bath (SB) Hamiltonian, in which a small quantum system $A$ is coupled to a bath $B$ modeled by an infinite number of harmonic oscillators.
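The classical version of the equality is indeed straightforward to verify numerically. The toy sketch below (our own example, not from the paper) drags an overdamped Brownian particle in a harmonic trap $H(x,t)=k(x-vt)^{2}/2$; since the stiffness never changes, $\Delta F=0$ exactly, and the Jarzynski estimator recovers this even though the mean work $\langle W\rangle$ is strictly positive.

```python
import numpy as np

# Overdamped Langevin dynamics of a dragged harmonic trap (toy example;
# all parameter values are illustrative, in units with kBT = 1).
rng = np.random.default_rng(0)
k, gamma, kBT = 1.0, 1.0, 1.0
v, tau, dt = 0.1, 10.0, 0.01
n_traj = 5_000
n_steps = int(tau / dt)

x = rng.normal(0.0, np.sqrt(kBT / k), size=n_traj)  # equilibrium initial state
work = np.zeros(n_traj)
for step in range(n_steps):
    lam = v * step * dt                              # trap centre lambda(t)
    work += -k * (x - lam) * v * dt                  # dW = (dH/dt) dt
    x += -(k / gamma) * (x - lam) * dt \
         + np.sqrt(2.0 * kBT * dt / gamma) * rng.standard_normal(n_traj)

beta = 1.0 / kBT
dF_est = -np.log(np.mean(np.exp(-beta * work))) / beta  # Jarzynski estimator
mean_W = work.mean()
```

Here `dF_est` converges to $\Delta F=0$ up to sampling error, while `mean_W` is of order $\gamma v^{2}\tau>0$, reflecting the dissipated work; the paper's question is whether any quantum analogue of this estimator behaves the same way.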
We found that the behavior described by this model is consistent with the first and second laws of thermodynamics and provides an ideal platform to examine various fundamental propositions of thermodynamics in the fully quantum regime.[24, 25] In this model, the Hamiltonian of the total system is given by $\displaystyle{{\hat{H}}}(t)={{\hat{H}}}_{A}(t)+{{\hat{H}}}_{I}+{{\hat{H}}}_{B},$ (1) where ${{\hat{H}}}_{A}(t)$, ${{\hat{H}}}_{I}$ and ${{\hat{H}}}_{B}$ are the Hamiltonian of the system, the interaction and the bath, respectively. The system Hamiltonian is given by ${{\hat{H}}}_{A}(t)={{\hat{H}}}_{A}^{0}+{{\hat{H}}}_{E}(t)$, with ${{\hat{H}}}_{A}^{0}=\frac{1}{2}\hbar\omega_{0}(|e\rangle\langle e|-|g\rangle\langle g|)$ and ${{\hat{H}}}_{E}(t)=0$ for $t\leq 0$, where $|e\rangle$ and $|g\rangle$ are the excited and ground states of the system, and ${{\hat{H}}}_{E}(t)$ is the interaction Hamiltonian with an external field. The bath Hamiltonian ${{\hat{H}}}_{B}$ is expressed as $\displaystyle{\hat{H}}_{B}=\sum_{j}\left[\frac{\hat{p}_{j}^{2}}{2m_{j}}+\frac{1}{2}m_{j}\omega_{j}^{2}{\hat{x}_{j}}^{2}\right],$ (2) where $\hat{p}_{j}$, $\hat{x}_{j}$, $m_{j}$ and $\omega_{j}$ are the momentum, position, mass and frequency of the $j$th bath oscillator, respectively. The SB interaction ${{\hat{H}}}_{I}$ is given by ${{\hat{H}}}_{I}={\hat{V}}\sum_{j}g_{j}{\hat{x}}_{j}$, where $\hat{V}$ is the system part of the interaction, and $g_{j}$ is the coupling constant between the system and the $j$th bath oscillator. The effect of the bath is characterized by the noise correlation function, $C(t)\equiv\langle{\hat{X}}(t){\hat{X}}(0)\rangle_{B}$, where ${\hat{X}}\equiv\sum_{j}g_{j}{\hat{x}}_{j}$, and $\langle\ldots\rangle_{B}$ represents the average taken with respect to the canonical density operator of the bath. 
The noise correlation function is expressed as $\displaystyle C(t)=\hbar\int_{0}^{\infty}\frac{d\omega}{\pi}J(\omega)\left[\coth\left(\frac{1}{2}\beta\hbar\omega\right)\cos(\omega t)-i\sin(\omega t)\right],$ (3) where $J(\omega)\equiv\sum_{j}\left({\pi g_{j}^{2}}/{2m_{j}\omega_{j}}\right)\delta(\omega-\omega_{j})$ is the spectral density and $\beta$ is the inverse temperature of the bath. When we apply the SB model to problems of thermodynamics, because the main system is microscopic and because the quantum coherence between the system and bath characterizes the quantum nature of the system dynamics, the role of the SB interaction has to be examined carefully. For example, although the factorized thermal equilibrium state, $\hat{\rho}_{tot}^{eq}=\hat{\rho}_{A}^{eq}\otimes\hat{\rho}_{B}^{eq}$, where $\hat{\rho}_{A}^{eq}$ is the equilibrium state of the system without the SB interaction, is often employed as an initial state when investigating open quantum dynamics, in actual situations, the system and bath are quantum mechanically entangled (a phenomenon referred to as “bath entanglement”).[26, 27] In Refs. KATO2016 and Sakamoto2020JCP, we presented a scheme for calculating thermodynamic variables in the SB model on the basis of simulations including an external perturbation using the hierarchical equations of motion (HEOM).[26, 27, 28, 29, 30, 31, 32] The key quantity in this investigation is the change of the “quasi-static Helmholtz energy” at time $\tau$, which is defined as[25] $\displaystyle\Delta F_{A}(\tau)\equiv\int_{0}^{\tau}{\operatorname{tr}_{A}}\left\\{{\hat{\rho}}_{A}^{qeq}(t)\frac{\partial}{\partial t}{{\hat{H}}_{A}}(t)\right\\}dt,$ (4) where ${\hat{\rho}}_{A}^{qeq}(t)$ is the “quasi-static” reduced density operator. 
Here, note that, as we demonstrated numerically, when ${\hat{H}}_{A}(t)$ varies on a time scale much longer than the relaxation time of the system, $\hat{\rho}_{A}(t)$ can be evaluated within the HEOM approach as the quasi-thermal equilibrium state of the system, ${\hat{\rho}}_{A}^{qeq}(\tau)\approx{\operatorname{tr}_{B}}\\{e^{-\beta({{\hat{H}}_{A}}(\tau)+{\hat{H}}_{I}+{{\hat{H}}_{B}})}\\}/Z_{tot}(\tau)$, where $Z_{tot}(\tau)\equiv{\operatorname{tr}_{A+B}}\\{e^{-\beta({{\hat{H}}_{A}}(\tau)+{\hat{H}}_{I}+{{\hat{H}}_{B}})}\\}$ at time $t=\tau$. Then the change of the “quasi-static Boltzmann entropy”, $\Delta S_{A}(\tau)$, is given by $\displaystyle\Delta S_{A}(\tau)=k_{B}\beta^{2}\frac{\partial}{\partial\beta}\Delta F_{A}(\tau).$ (5) Although the increase of the internal energy and the Boltzmann entropy of the system arises not only from the change in the system Hamiltonian itself but also from the change in the system part of the SB interaction, the HEOM allows us to evaluate both variables accurately. Using the HEOM, we can also evaluate the change of the bath part of the SB interaction energy and the bath energy, while these energies themselves are treated as infinitely large, because the bath is regarded as possessing an infinitely large heat capacity. With this treatment, we found that the SB model describes behavior that is consistent with the first law of thermodynamics. Explicitly, we found that the relation $\langle W(\tau)\rangle=\Delta U_{A}(\tau)-\Delta Q(\tau)$ is satisfied, where $\Delta U_{A}(\tau)=\Delta F_{A}(\tau)+T\Delta S_{A}(\tau)$ is the internal energy of the system and $\Delta Q(\tau)$ is the heat released from the bath.[25] Moreover, we have numerically confirmed that the total entropy production is always positive.
With these results strongly supporting the validity of our approach, in this paper, we evaluate $\langle\exp[-\beta W(\tau)]\rangle$ using the HEOM formalism with the goal of extending the Jarzynski equality to the fully quantum regime characterized by a non- Markovian and non-perturbative SB interaction. In what follows, we examine two definitions of the WCF: (i) a definition based on the partition function (PF) (the PF-WCF)[33] and (ii) a definition based on trajectory (path) (the path-WCF). For an isolated quantum system, the PF-WCF has been defined as[34] $\displaystyle\langle\exp[-\beta W(\tau)]\rangle_{PF}$ $\displaystyle\equiv tr\left\\{{\rm e}^{-\beta{\hat{H}}^{(H)}(\tau)}{\rm e}^{\beta{\hat{H}}(0)}{\hat{\rho}}_{tot}(0)\right\\}$ $\displaystyle=\frac{Z_{tot}(\tau)}{Z_{tot}(0)},$ (6) where ${\hat{\rho}}_{tot}(0)\equiv{\hat{\rho}}_{tot}^{eq}$, and ${\hat{H}}^{(H)}(\tau)$ is the Heisenberg operator of ${\hat{H}}(\tau)$. We rewrite this expression in terms of the time-reversal Liouville operator as $\langle\exp[-\beta W(\tau)]\rangle_{PF}={\rm tr}\left\\{{\hat{P}}_{PF}(\tau)\right\\}$, where ${\hat{P}}_{PF}(\tau)=\exp_{+}\left[-i\int_{\tau}^{0}dt{\hat{H}}_{tot}^{\times}(t)/{\hbar}\right]{\hat{Z}}_{tot}(\tau)$, with ${\hat{Z}}_{tot}(\tau)\equiv\exp\left[-\beta{\hat{H}}_{tot}(\tau)\right]$, and we have introduced the time-ordered exponential ${\rm exp}_{\pm}$. Here and hereafter we use the hyperoperator notation ${\mathcal{\hat{O}}}^{\times}{\hat{f}}\equiv{\mathcal{\hat{O}}}{\hat{f}}-{\hat{f}}{\mathcal{\hat{O}}}$ and ${\mathcal{\hat{O}}}^{\circ}{\hat{f}}\equiv{\mathcal{\hat{O}}}{\hat{f}}+{\hat{f}}{\mathcal{\hat{O}}}$ for any operator ${\mathcal{\hat{O}}}$ and operand operator ${\hat{f}}$. Unfortunately, because the heat bath possesses infinitely many degrees of freedom, we cannot evaluate ${\hat{P}}(\tau)$. 
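The hyperoperator notation just introduced can be made concrete with a small numpy illustration on Pauli matrices (our own example): $\mathcal{\hat{O}}^{\times}$ is the commutator and $\mathcal{\hat{O}}^{\circ}$ the anticommutator.

```python
import numpy as np

def hyper_x(O, f):
    """O^x f = O f - f O (commutator hyperoperator)."""
    return O @ f - f @ O

def hyper_o(O, f):
    """O^o f = O f + f O (anticommutator hyperoperator)."""
    return O @ f + f @ O

# Pauli matrices as test operators.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
```

For instance, $\sigma_x^{\times}\sigma_z=[\sigma_x,\sigma_z]=-2i\sigma_y$, while $\sigma_x^{\circ}\sigma_z=\{\sigma_x,\sigma_z\}=0$, since distinct Pauli matrices anticommute.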
Instead, because $\langle\exp[-\beta W(\tau)]\rangle_{PF}=\left(Z_{A}(\tau)Z_{B}(\tau)\right)/\left(Z_{A}(0)Z_{B}(0)\right)$, where $Z_{A}(\tau)$ and $Z_{B}(\tau)$ are the system and bath parts of the partition functions, we can evaluate it indirectly using Eq. (4) as $\langle\exp[-\beta W(\tau)]\rangle_{PF}\approx\exp\left[-\beta(\Delta U_{A}(\tau)-\Delta Q(\tau))\right]$ with $\Delta U_{A}(\tau)=\Delta F_{A}(\tau)+T\Delta S_{A}(\tau)$, which, of course, is the first law of thermodynamics.[25] Alternatively, we can use the path-WCF on the basis of the non-equilibrium trajectories (paths) as $\langle\exp[-\beta W(\tau)]\rangle_{path}={\rm tr}\left\\{{\hat{K}}(\tau)\right\\}$, where ${\hat{K}}(\tau)\equiv{\hat{U}}(\tau,0){\hat{P}}_{path}(\tau){\hat{U}}^{\dagger}(\tau,0)$, with $\displaystyle{\hat{P}}_{path}(\tau)={\rm e}_{+}^{-\frac{\beta}{2}\int_{0}^{\tau}dt{\dot{\hat{H}}_{A}^{(H)}}(t)}{\hat{\rho}}_{tot}(0){\rm e}_{-}^{-\frac{\beta}{2}\int_{0}^{\tau}dt{\dot{\hat{H}}_{A}^{(H)}}(t)},$ (7) and ${\hat{U}}(\tau,0)\equiv\exp_{+}\left[-(i/\hbar)\int_{0}^{\tau}dt{\hat{H}}(t)\right]$. Here, instead of ${\hat{P}}(\tau)=\exp_{+}\left[-\beta\int_{0}^{\tau}dt{\dot{\hat{H}}_{A}^{(H)}}(t)\right]{\hat{\rho}}_{tot}(0)$,[34] we use the symmetric form in Eq. (7), because otherwise ${\hat{P}}(\tau)$ is not Hermitian and, as a consequence, the WCF may not be real valued.
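The Hermiticity argument behind the symmetric form can be checked numerically. In the toy sketch below (random $4\times 4$ matrices of our own choosing, with `X` standing in for a Hermitian work integral and `rho` for the initial density operator), the sandwich form $e^{-X/2}\rho\,e^{-X/2}$ remains Hermitian, while the one-sided form $e^{-X}\rho$ generally does not.

```python
import numpy as np

def expm_herm(X, s=1.0):
    """Matrix exponential exp(s X) for Hermitian X via eigendecomposition."""
    w, U = np.linalg.eigh(X)
    return (U * np.exp(s * w)) @ U.conj().T

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
X = (G + G.conj().T) / 2               # Hermitian stand-in for the work integral
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = H @ H.conj().T
rho /= np.trace(rho).real              # Hermitian, unit-trace "density matrix"

one_sided = expm_herm(X, -1.0) @ rho                         # e^{-X} rho
symmetric = expm_herm(X, -0.5) @ rho @ expm_herm(X, -0.5)    # e^{-X/2} rho e^{-X/2}

herm_err_one = np.abs(one_sided - one_sided.conj().T).max()
herm_err_sym = np.abs(symmetric - symmetric.conj().T).max()
```

Since $(e^{-X/2}\rho\,e^{-X/2})^{\dagger}=e^{-X/2}\rho\,e^{-X/2}$ whenever $X$ and $\rho$ are Hermitian, the symmetric operator is Hermitian by construction; the one-sided product is Hermitian only if $e^{-X}$ and $\rho$ commute, which generically they do not.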
The equation of motion for ${\hat{K}}(\tau)$ is given by $\displaystyle\frac{\partial}{\partial\tau}{\hat{K}}(\tau)=-\left[\frac{i}{\hbar}{\hat{H}}_{tot}^{\times}(\tau)+\frac{\beta}{2}{\dot{\hat{H}}}_{A}^{\circ}(\tau)\right]{\hat{K}}(\tau).$ (8) The path (or functional) integral form of ${\hat{K}}(\tau)$ is expressed as $\displaystyle K(\xi,\xi^{\prime},\tau)$ $\displaystyle=\int\int d\xi_{0}d\xi_{0}^{\prime}\int_{\xi(0)=\xi_{0}}^{\xi(\tau)=\xi}{\mathcal{D}}[\xi(t)]\int_{\xi^{\prime}(0)=\xi^{\prime}_{0}}^{\xi^{\prime}(\tau)=\xi^{\prime}}{\mathcal{D}}[\xi^{\prime}(t)]$ $\displaystyle\times{\exp}\left[{\frac{i}{\hbar}S_{tot}[\xi;\tau]-\frac{\beta}{2}W[\xi;\tau]}\right]\rho_{tot}(\xi_{0},\xi_{0}^{\prime},t_{0})$ $\displaystyle\times{\exp}\left[{-\frac{i}{\hbar}S_{tot}[\xi^{\prime};\tau]-\frac{\beta}{2}W[\xi^{\prime};\tau]}\right],$ (9) where $S_{tot}[\xi;\tau]$ and $W[\xi;\tau]\equiv\int_{0}^{\tau}dt{\dot{H}}_{A}(\xi;t)$ are the total action and the work as a functional of $\xi(t)=(\sigma(t),x_{1}(t),x_{2}(t),\ldots)$, i.e., the system coordinate appended to the bath coordinate. The above expression implies that $\langle\exp[-\beta W(\tau)]\rangle_{path}$ is obtained as an ensemble average of possible Liouville space pathways $\xi(t)$ and $\xi^{\prime}(t)$. In the framework of the classical Jarzynski equality, the paths $\xi(\tau)$ and $\xi^{\prime}(\tau)$ in the work operators $-\beta W[\xi;\tau]/2$ and $-\beta W[\xi^{\prime};\tau]/2$ are determined as the paths of minimal action for $-iS_{tot}[\xi^{\prime};\tau]/\hbar$ and $iS_{tot}[\xi;\tau]/\hbar$, respectively. In the present case, however, the paths $\xi(\tau)$ and $\xi^{\prime}(\tau)$ are altered by the presence of $-\beta W[\xi;\tau]/2$ and $-\beta W[\xi^{\prime};\tau]/2$ in Eq. (9): This violates the condition to satisfy the Jarzynski equality. Nevertheless, we use Eq. 
(7), because it is natural to assume that the measurement of the work cannot be carried out without disturbing the dynamics of the main system, because it is regarded as small. For an open quantum system, we can derive the HEOM for Eq. (9) using the same procedure as that used to obtain the HEOM for Eq. (1),[27, 28, 29, 30, 31, 32] because the only difference between the quantum Liouville equation and Eq.(8) is the presence of the work operator $-\beta{\dot{\hat{H}}_{A}^{\circ}}(\tau)/2$ in the latter. We assume that the spectral density is given by the Drude distribution, $J(\omega)=\eta\gamma^{2}\omega/(\omega^{2}+\gamma^{2})$, where $\eta$ is the SB coupling strength, and $\gamma$ is the inverse correlation time of the bath-induced noise. Then, the noise correlation function takes the form of a linear combination of exponential functions and a delta function: $C(t)=\sum_{k=0}^{L}(c^{\prime}_{k}+ic^{\prime\prime}_{k})\gamma_{k}e^{-\gamma_{k}t}+2\Delta_{L}\delta(t)$, where $c^{\prime}_{k}$, $c^{\prime\prime}_{k}$, $\gamma_{k}$, and $\Delta_{L}$ are constants. Then the HEOM for Eq.(8) is expressed as $\displaystyle\frac{\partial}{\partial t}$ $\displaystyle{\hat{K}}_{(n_{0},\ldots,n_{L})}(t)$ $\displaystyle=-\left[\frac{i}{\hbar}{\hat{H}}_{A}^{\times}(t)+\frac{\beta}{2}{\dot{\hat{H}}}_{A}^{\circ}(t)+\Delta_{L}{\hat{\Phi}}^{2}+\sum_{k=0}^{L}n_{k}\gamma_{k}\right]{\hat{K}}_{(n_{0},\ldots,n_{L})}(t)$ $\displaystyle+\sum_{k=0}^{L}n_{k}{\hat{\Theta}_{k}}{\hat{K}}_{(\ldots,n_{k}-e_{k},\ldots)}(t)+\sum_{k=0}^{L}{\hat{\Phi}}{\hat{K}}_{(\ldots,n_{k}+e_{k},\ldots)}(t),$ (10) where $\mathbf{e}_{k}$ is the unit vector along the $k$th direction. Each hierarchical density matrix is specified by the index ${\bf n}=(n_{0},\ldots,n_{L})$. The density matrix for ${\bf n}=0$ corresponds to the actual work distribution operator, ${\hat{K}}(\tau)$. The initial state is prepared by numerically solving Eq. 
(10) with the fixed Hamiltonian ${\hat{H}}_{A}(t=0)$ until all of the hierarchy elements reach a steady state; these elements are then used as the initial state. Because Eq. (10) is identical to the HEOM in the case that $\partial{\hat{H}}_{A}(t)/{\partial t}$ has no explicit time dependence, the steady-state solution of the first hierarchy element is identical to the correlated thermal equilibrium state defined by ${\hat{K}}_{({\bf n}=0)}^{eq}={\rm tr_{B}}\\{\exp(-\beta{\hat{H}}(0))\\}/{\rm tr_{A+B}}\\{\exp(-\beta{\hat{H}}(0))\\}$. Figure 1: (Color online) The quantity $-{\rm ln}\langle\exp[-\beta W(\tau)]\rangle/\beta$ evaluated as the path-WCF (solid curves) and the PF-WCF (dashed curves), and the change in the free energy, $\Delta F_{A}(\tau)$ (black dots), under the external perturbation ${\hat{H}}_{E}(t)=(\theta(t)\sin(\Omega t)/4)\hbar\omega_{0}(|g\rangle\langle e|+|e\rangle\langle g|)$ are plotted as functions of time for fixed $\beta\hbar\omega_{0}=1$ in (a) the weak ($\eta=0.1$), (b) intermediate ($\eta=1$), and (c) strong ($\eta=3$) SB coupling cases. The colored dashed and solid curves represent the results for different frequencies: $\Omega=0.1\omega_{0}$ (red curves), $\Omega=\omega_{0}$ (green curves) and $\Omega=5\omega_{0}$ (blue curves). We now report the results of our numerical computations of the PF-WCF, $\langle\exp[-\beta W(\tau)]\rangle_{PF}$, defined in Eq. (6), and the path-WCF, $\langle\exp[-\beta W(\tau)]\rangle_{path}$, defined in Eq. (7) (or Eq. (9)), under the periodic external force described by ${\hat{H}}_{E}(t)=\hbar\omega_{0}\theta(t)\sin(\Omega t)(|g\rangle\langle e|+|e\rangle\langle g|)/4$, where $\theta(t)$ is the step function and $\Omega$ is the frequency of the external field. The SB interaction is defined as $\hat{V}=|g\rangle\langle e|+|e\rangle\langle g|$. While $\langle\exp[-\beta W(\tau)]\rangle_{path}$ is evaluated using Eq.
(10), the procedures for computing $\Delta F_{A}(\tau)$ and $\langle W(\tau)\rangle$ ($=-\ln\left(\langle\exp[-\beta W(\tau)]\rangle_{PF}\right)/\beta$) on the basis of the HEOM are explained in Ref. Sakamoto2020JCP. The effect of the bath on thermodynamic properties in this SB model is characterized by the SB coupling strength, the bath temperature, and the noise correlation time.[35, 25, 24] Throughout this investigation, we fix $\beta\hbar\omega_{0}=1$ and $\gamma=\omega_{0}$. These values correspond to an intermediate temperature and moderately non-Markovian noise. In Fig. 1 (a), we display the time dependences of the PF-WCF, path-WCF, and $\Delta F_{A}(\tau)$ for several values of the excitation frequency, $\Omega$, in the weak ($\eta=0.1$) SB coupling case. Due to the production of heat, $\Delta Q(\tau)$, the PF-WCF, i.e., $-\ln\left(\langle\exp[-\beta W(\tau)]\rangle_{PF}\right)/\beta=\langle W(\tau)\rangle$, increases as a function of time in accordance with the first law of thermodynamics, $\langle W(\tau)\rangle=\Delta F_{A}(\tau)+T\Delta S_{A}(\tau)-\Delta Q(\tau)$. This increase is largest in the resonant case, $\Omega=\omega_{0}$, because the excitation of the system is most efficient there. We calculated $\Delta Q(\tau)$ separately and found that the cycle of the production of heat is similar to that of the WCF. Because the entropy production $\Sigma_{tot}(\tau)=\Delta S_{A}(\tau)-\Delta Q(\tau)/T$ exhibits a time lag after the external excitation, we observe a phase delay in the PF-WCF results due to this contribution. The delay is largest at the resonant excitation, $\Omega=\omega_{0}$, because the entropy production is largest there. The amplitudes of the oscillations are suppressed because $-\Delta Q(\tau)$ partially cancels out the contribution from the free energy. For the slowest modulation, $\Omega=0.1\omega_{0}$, in this weak coupling case, the free energy is almost canceled by the heat production.
In this case, the time evolution of the PF-WCF is dominated by $T\Delta S_{A}(\tau)$. While the time evolution of the PF-WCF differs significantly from $\Delta F_{A}(\tau)$, that of the path-WCF is quite similar. This similarity can be understood as follows. First, note that the ensemble average of $W[\xi;\tau]\equiv\int_{0}^{\tau}dt{\dot{H}}_{A}(\xi;t)$ in Eq. (9) is taken after the time integration of ${\dot{H}}_{A}(\xi;t)$ for a given Liouville path $\xi(t)$. Then, because the contribution of ${\dot{H}}_{A}(\xi;t)$ oscillates rapidly in time between positive and negative values, the heat production involved in the definition in Eq. (9) is suppressed. By contrast, the ensemble average of the PF-WCF is taken step by step in the time integration. Thus, $-\ln\left(\langle\exp[-\beta W(\tau)]\rangle_{PF}\right)/\beta=\langle W(\tau)\rangle$ becomes a thermodynamic process function, while $\langle\exp[-\beta W(\tau)]\rangle_{path}$ is not. In Figs. 1 (b) and (c), we present the results for the intermediate and strong SB coupling cases. As pointed out previously,[35, 36] the efficiency of the heat current is suppressed when the SB coupling becomes strong. Because the effective SB coupling depends on the characteristic time scale of the system, the increase of the PF-WCF is suppressed in the case $\Omega\geq\omega_{0}$, while it is enhanced in the case $\Omega<\omega_{0}$.[37] In addition, due to the effect of the moderately strong SB coupling, the system closely follows its instantaneous equilibrium state, and the entropy production, $\Sigma_{tot}(\tau)$, is suppressed. As a result, the amplitudes of the oscillations and the phase delay are small. In Figs. 1 (b) and (c), because the eigenenergies of the system are significantly altered by the strong system-bath coupling, the deviation of the time profile of the PF-WCF from $\Delta F_{A}(\tau)$ increases as $\Omega$ decreases.
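As a point of reference for the projection-based (PF) definition: in the bath-free limit it reduces to the standard two-point-measurement statement of the quantum Jarzynski equality, $\langle e^{-\beta W}\rangle=Z(\tau)/Z(0)=e^{-\beta\Delta F_{A}}$, which holds exactly for any unitary driving. A minimal numerical sketch of this limit (illustrative parameters and a two-level drive of the same $\sin(\Omega t)$ shape as ${\hat{H}}_{E}(t)$; this is NOT the paper's HEOM calculation):

```python
import numpy as np

# Illustrative sketch (not the paper's HEOM calculation): in the bath-free
# limit, the projection-based work statistics reduce to the two-point-
# measurement scheme, for which <exp(-beta W)> = Z(tau)/Z(0) holds exactly
# for any unitary driving. All parameter values below are assumptions.
beta, omega0, Omega, tau, steps = 1.0, 1.0, 1.0, 10.0, 400

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def H(t):
    # Driven two-level system with the same sin(Omega t) drive shape as H_E(t).
    return 0.5 * omega0 * sz + 0.25 * omega0 * np.sin(Omega * t) * sx

def step(t, dt):
    # Exact exponential of the Hamiltonian frozen at time t, over a step dt.
    E, V = np.linalg.eigh(H(t))
    return (V * np.exp(-1j * E * dt)) @ V.conj().T

dt = tau / steps
U = np.eye(2, dtype=complex)
for n in range(steps):
    U = step((n + 0.5) * dt, dt) @ U

E0, V0 = np.linalg.eigh(H(0.0))
Et, Vt = np.linalg.eigh(H(tau))
p0 = np.exp(-beta * E0) / np.exp(-beta * E0).sum()  # initial Gibbs occupations
P = np.abs(Vt.conj().T @ U @ V0) ** 2               # P[f, i] = |<f|U|i>|^2
lhs = sum(p0[i] * P[f, i] * np.exp(-beta * (Et[f] - E0[i]))
          for i in range(2) for f in range(2))      # <exp(-beta W)>
rhs = np.exp(-beta * Et).sum() / np.exp(-beta * E0).sum()  # Z(tau)/Z(0)
print(lhs, rhs)
```

Because $\sum_{f}|\langle f|U|i\rangle|^{2}=1$ for any unitary $U$, the two numbers agree to machine precision regardless of the drive or the time step; the deviations of the WCFs from $\Delta F_{A}(\tau)$ discussed here are therefore genuinely bath-induced effects.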
For the path-WCF results represented by the solid curves, the deviation becomes larger as the strength of the SB coupling increases. This is because the calculated $\Delta F_{A}(\tau)$ involves a contribution from the energy of the system part of the SB interaction,[25] whereas ${\dot{H}}_{A}(\xi;t)$ does not include the contribution from $H_{I}$, which is also time dependent in the reduced description, due to the non-Markovian nature of the noise that arises from the bath. The path-WCF approaches the free energy when the characteristic time scale of the system is shorter than $t_{c}$, because in this case the contribution from the SB interaction is insignificant. In the weak coupling case considered in Fig. 1 (a), the difference between the path-WCF results and the free energy results is large in the resonant excitation case, $\Omega=\omega_{0}$. This difference has a non-kinetic origin arising from the alteration of the Liouville path in Eq. (9) due to the presence of the work functional $-\beta W[\xi;\tau]\equiv-\beta\int_{0}^{\tau}dt{\dot{H}}_{A}(\xi;t)$. In the case $\Omega=\omega_{0}$, the contribution of the external perturbation, ${\dot{H}}_{A}(\xi;t)$, to the work operator is large, because the cyclic excitation at this frequency strongly perturbs the system dynamics. Thus the paths that should be determined by the total action are altered, and as a result, the calculated path-WCF exhibits time profiles that differ significantly from those in the other cases. For this reason, the path-WCF results also differ significantly from $\Delta F_{A}(\tau)$ in the low temperature case, because the contribution of $-\beta W[\xi;\tau]$ is larger there (results not shown). In this paper, we demonstrated a method for extending the Jarzynski equality to the fully quantum regime. We evaluated the WCF defined in two ways, the PF-WCF and the path-WCF, using the numerically rigorous HEOM formalism.
Although the path-WCF agrees with the free energy reasonably well, in particular in the weak SB coupling case and the fast excitation cases, while the PF-WCF exhibits very different time dependence due to the heat production, the agreement is not an equality but an approximation. This discrepancy arises from the contribution of the SB interaction, which should also play a role in the classical case if the SB coupling strength is comparable to the system energy. Indeed, if we employ quantum hierarchical Fokker–Planck equations (QHFPEs) for a system described by Wigner distribution functions, we can investigate not only the quantum case but also the classical case by taking the classical limit: We can easily identify purely quantum mechanical effects by comparing the classical and quantum results for the Wigner distribution.[37, 38] It should be mentioned that, although we introduced the path-WCF here, it is not a physical observable, as seen from Eq. (8). Moreover, we cannot determine the paths in the functional formalism of Eq. (9), due to the limitation imposed by the uncertainty principle. Thus, in order to evaluate the free energy in the fully quantum regime, the path-WCF is not practical. Instead, Eq. (4) should be used to evaluate the free energy. Although the present investigation is limited to spin-boson systems and the specific definitions of the WCF, the applicability of our approach based on the HEOM formalism is in fact more general. Indeed, the same approach can be applied to all of the systems to which the HEOM formalism has been previously applied.[27, 26] Different definitions of the WCF should also be examined. We leave such extensions to future studies to be carried out in the context of the fluctuation theorem. ## References * [1] C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997). * [2] C. Jarzynski, J. Stat. Mech. (2004) P09005. * [3] C. Jarzynski, Annu. Rev. Condens. Matter Phys. 2, 321 (2011). * [4] G. E. Crooks, Phys. Rev. E 60, 2721 (1999). * [5] U.
Seifert, Phys. Rev. Lett. 95, 040602 (2005). * [6] T. Mai and A. Dhar, Phys. Rev. E 75, 061101 (2007). * [7] T. Speck and U. Seifert, J. Stat. Mech. (2007) L09002. * [8] K. Saito and Y. Utsumi, Phys. Rev. B 78, 115429 (2008). * [9] S. R. Williams, D. J. Searles, and D. J. Evans, Phys. Rev. Lett. 100, 250601 (2008). * [10] J. Liphardt, S. Dumont, S. B. Smith, I. Tinoco, Jr., and C. Bustamante, Science 296, 1832 (2002). * [11] F. Douarche, S. Ciliberto, A. Petrosyan, and I. Rabbiosi, Europhys. Lett. 70, 593 (2005). * [12] D. Collin, F. Ritort, C. Jarzynski, S. B. Smith, I. Tinoco, Jr., and C. Bustamante, Nature 437, 231 (2005). * [13] V. Blickle, T. Speck, L. Helden, U. Seifert, and C. Bechinger, Phys. Rev. Lett. 96, 070603 (2006). * [14] A. Mossa, S. de Lorenzo, J. M. Huguet, and F. Ritort, J. Chem. Phys. 130, 234116 (2009). * [15] I. de Vega and D. Alonso, Rev. Mod. Phys. 89, 015001 (2017). * [16] J. Kurchan, e-print arXiv:cond-mat/0007360 (2000). * [17] H. Tasaki, e-print arXiv:cond-mat/0009244 (2000). * [18] S. Yukawa, J. Phys. Soc. Jpn. 69, 2367 (2000). * [19] G. E. Crooks, J. Stat. Mech. (2008) P10023. * [20] M. Campisi, P. Talkner, and P. Hänggi, Phys. Rev. Lett. 102, 210401 (2009). * [21] M. Campisi, P. Hänggi, and P. Talkner, Rev. Mod. Phys. 83, 771 (2011). * [22] M. Esposito, U. Harbola, and S. Mukamel, Rev. Mod. Phys. 81, 1665 (2009). * [23] P. Talkner and P. Hänggi, Rev. Mod. Phys. 92, 041002 (2020). * [24] A. Kato and Y. Tanimura, J. Chem. Phys. 145, 224105 (2016). * [25] S. Sakamoto and Y. Tanimura, J. Chem. Phys. 153, 234107 (2020). * [26] Y. Tanimura, J. Chem. Phys. 153, 020901 (2020). * [27] Y. Tanimura, J. Phys. Soc. Jpn. 75, 082001 (2006). * [28] Y. Tanimura and R. Kubo, J. Phys. Soc. Jpn. 58, 101 (1989). * [29] Y. Tanimura, Phys. Rev. A 41, 6676 (1990). * [30] A. Ishizaki and Y. Tanimura, J. Phys. Soc. Jpn. 74, 3131 (2005). * [31] Y. Tanimura, J. Chem. Phys. 141, 044114 (2014). * [32] Y. Tanimura, J. Chem. Phys. 142, 144110 (2015). * [33] B. P. Venkatesh, G. Watanabe, and P. Talkner, New J. Phys.
17, 075018 (2015). * [34] P. Talkner, E. Lutz, and P. Hänggi, Phys. Rev. E 75, 050102(R) (2007). * [35] A. Kato and Y. Tanimura, J. Chem. Phys. 143, 064107 (2015). * [36] K. Saito, Europhys. Lett. 83, 50006 (2008). * [37] Y. Tanimura and P. G. Wolynes, J. Chem. Phys. 96, 8485 (1992). * [38] A. Kato and Y. Tanimura, J. Phys. Chem. B 117, 13132 (2013).
OCU-PHYS 528 NITEP 86 Non-Gaussianity from $X,Y$ gauge bosons in Cosmological Collider Physics Nobuhito Maru$^{a,b}$ and Akira Okawa$^{a}$ $^{a}$Department of Mathematics and Physics, Osaka City University, Osaka 558-8585, Japan $^{b}$Nambu Yoichiro Institute of Theoretical and Experimental Physics (NITEP), Osaka City University, Osaka 558-8585, Japan Heavy fields of Hubble scale order present during inflation contribute to the non-Gaussian signature of the three-point function of the inflaton. Taking into account that the Hubble scale is around the scale of grand unified theory (GUT), this opens the possibility that GUT scale signatures, which are very hard to discover at colliders, might be detectable using information from precise observations of the cosmic microwave background. We discuss the detectability of the $X,Y$ gauge bosons present in any GUT in the framework of cosmological collider physics. Calculating one-loop contributions of the $X,Y$ gauge bosons to the inflaton three-point functions, we find the remarkable result that the one-loop diagram with interactions originating from the mass terms of the $X,Y$ gauge bosons provides an enhancement factor expressed by the ratio between the $X,Y$ gauge boson mass and the Hubble scale as $(m_{X}/H)^{4}$. In estimating the non-Gaussianity, this factor is crucial, and its impact on the detectability of the $X,Y$ gauge bosons is discussed. ## 1 Introduction The standard model (SM) of particle physics has successfully described many physical phenomena. However, some observations cannot be explained by the SM, such as neutrino oscillations and the dark matter problem. For this reason, several theories beyond the standard model have been studied, for example, a Grand Unified Theory (GUT) that unifies the electromagnetic, strong, and weak forces. Any GUT includes $X,Y$ gauge bosons that do not exist in the SM; these are the massive gauge bosons arising from the spontaneous breaking of the GUT gauge group to the SM gauge group.
A typical prediction of GUTs is proton decay mediated by these particles, but it has not yet been observed in the Super-Kamiokande experiment. According to the gauge coupling evolution via the renormalization group equations, the GUT scale is predicted to be around $10^{15}$ GeV, and it is very difficult to confirm the signature of such a high-energy theory at collider experiments. On the other hand, it has been pointed out that the development of precise cosmological observations in recent years has the potential to probe high-energy physics. Precise observation of the cosmic microwave background (CMB) by the Planck satellite has supported the inflation mechanism, which solves the flatness, horizon, and monopole problems. It has been estimated from the observations that inflation occurs at an energy scale (the inflation scale) of about $10^{14}$ GeV, driven by a scalar field called the inflaton; this is very close to the energy scale of the GUT. For this reason, an approach called cosmological collider physics has been actively studied in particle physics in recent years [1-56]. This approach uses the effective field theory of inflation, which integrates out heavy particles at the inflation scale other than the inflaton and graviton. Heavy particles of inflation-scale mass contribute to the three-point correlation functions describing the interactions of the inflaton and graviton. If these three-point functions exhibit non-Gaussianity, which is the deviation of the statistical distribution of the curvature fluctuation from a Gaussian distribution, information on the particles that existed during the inflation period can be obtained by comparing it with observational data such as those from the Planck satellite.
Therefore, this approach can regard the universe as a high-energy accelerator, and has the advantage that it can extract information on physics at energy scales which cannot be reached by terrestrial accelerators. The three-point function of the model with only the inflaton and graviton was calculated by Maldacena [57]. The effective field theory of inflation was proposed by Cheung et al. [58], and models with other particles also began to be considered. However, the contribution to the three-point function from the $X,Y$ gauge bosons, whose existence is predicted by a GUT with an energy scale very close to the inflation scale, has not yet been calculated. Therefore, we introduce $X,Y$ gauge bosons into our theory in addition to the inflaton and the graviton, and calculate three-point functions in the framework of quantum field theory. By comparing the non-Gaussianity of the three-point functions obtained in this paper with observational data such as those from the Planck satellite, we will explore evidence for the heavy particles present in the Grand Unified Theory and discuss their detectability. The organization of this paper is as follows. In the next section, our setup is introduced, which includes a brief review of the in-in formalism and a derivation of the propagators of the $X,Y$ gauge bosons and their interaction terms with the graviton. In section 3, the non-Gaussianity is estimated by calculating one-loop contributions from the $X,Y$ gauge bosons to the graviton three-point function and converting the results to those of the inflaton three-point function. Our conclusion is given in section 4. The detailed calculations of the one-loop Feynman diagrams relevant to the graviton three-point function are given in the Appendix.
## 2 Set up ### 2.1 Action We consider a fluctuating spacetime in ADM form in the Friedmann-Lemaître-Robertson-Walker (FLRW) metric with curvature $K=0$, $ds^{2}=-N^{2}dt^{2}+\tilde{h}_{ij}\left(dx^{i}+N^{i}dt\right)\left(dx^{j}+N^{j}dt\right),$ (1) where $\tilde{h}_{ij}$ denotes the spatial components of the metric, $N$ is the lapse function, and $N^{i}$ is the shift function. Quantities with a tilde are those in the comoving gauge. From the viewpoint of the effective field theory [58, 59], the Einstein-Hilbert action and the inflaton action can be written as $\displaystyle S_{\mathrm{grav}}=\int d^{4}x\sqrt{-g}$ $\displaystyle\left[\frac{1}{2}M_{\mathrm{Pl}}^{2}R-M_{\mathrm{Pl}}^{2}\left(3H^{2}(\tilde{t}+\pi)+\dot{H}(\tilde{t}+\pi)\right)\right.$ (2) $\displaystyle\left.+M_{\mathrm{Pl}}^{2}\dot{H}(\tilde{t}+\pi)\left(\partial_{\mu}(\tilde{t}+\pi)\partial_{\nu}(\tilde{t}+\pi)g^{\mu\nu}\right)\right.$ $\displaystyle+\frac{M_{2}(\tilde{t}+\pi)^{4}}{2!}\left(\partial_{\mu}(\tilde{t}+\pi)\partial_{\nu}(\tilde{t}+\pi)g^{\mu\nu}+1\right)^{2}$ $\displaystyle+\left.\frac{M_{3}(\tilde{t}+\pi)^{4}}{3!}\left(\partial_{\mu}(\tilde{t}+\pi)\partial_{\nu}(\tilde{t}+\pi)g^{\mu\nu}+1\right)^{3}+\ldots\right],$ where $g$ is the determinant of the metric $g_{\mu\nu}$, $M_{\mathrm{Pl}}$ is the Planck mass, $R$ is the Ricci curvature in four dimensions, $H$ is the Hubble parameter, $\pi$ is the inflaton as a Nambu-Goldstone boson of time translation, and $M_{2,3}$ are the coefficients of the higher-dimensional operators. The four-dimensional Ricci curvature $R$ can be decomposed into a three-dimensional Ricci curvature $R^{(3)}$ and an extrinsic curvature part $E_{ij}$.
That is, using the extrinsic curvature $E_{ij}\equiv\frac{1}{2}\left(\dot{\tilde{h}}_{ij}-\nabla_{i}N_{j}-\nabla_{j}N_{i}\right)$ (3) with the covariant derivative $\nabla_{\mu}A_{\nu}=\partial_{\mu}A_{\nu}-{\Gamma^{\alpha}}_{\mu\nu}A_{\alpha}$ (4) ($A_{\mu}$ is a covariant component of a four-dimensional vector), the four-dimensional Ricci curvature can be represented as $\sqrt{-g}R=\sqrt{\tilde{h}}\left[NR^{(3)}+N^{-1}\left(E_{ij}E^{ij}-E^{2}\right)\right],$ (5) where $\tilde{h}$ is the determinant of the spatial metric $\tilde{h}_{ij}$ and $E$ is defined to be the trace of $E_{ij}$, $E\equiv{E^{i}}_{i}$, taken with the spatial metric $\tilde{h}_{ij}$. The specific formulas for these quantities are [60] $\displaystyle R^{(3)}$ $\displaystyle=$ $\displaystyle-\frac{1}{4a^{2}(\tilde{t})}\partial_{l}\gamma_{ij}\partial_{l}\gamma_{ij}+\mathcal{O}\left(\gamma^{3}\right),$ (6) $\displaystyle E_{ij}E^{ij}-E^{2}$ $\displaystyle=$ $\displaystyle-6H^{2}+4H\partial_{i}N^{i}+\frac{1}{2}(\partial_{i}N^{j})\left(\partial_{i}N^{j}+\partial_{j}N^{i}\right)-\left(\partial_{i}N^{i}\right)^{2}$ (7) $\displaystyle-\frac{1}{2}\left(\partial_{i}N^{j}+\partial_{j}N^{i}\right)\dot{\gamma}_{ij}+\frac{1}{4}\dot{\gamma}_{ij}\dot{\gamma}_{ij}-\frac{1}{2}\dot{\gamma}_{ij}(\partial_{l}\gamma_{ij})N^{l}+(\partial_{l}\gamma_{ij})N^{l}\partial_{i}N^{j}$ $\displaystyle+\mathcal{O}\left(\gamma^{3}\right),$ where the spatial metric $\tilde{h}_{ij}$ has been expanded as $\tilde{h}_{ij}=\delta_{ij}+\tilde{\gamma}_{ij}+\frac{1}{2}\tilde{\gamma}_{ik}\tilde{\gamma}_{kj}.$ (8) Since the indices of the perturbations are raised and lowered with $\delta_{ij}$, we need not distinguish between upper and lower indices.
We also use the fact that the inverse metric is given by $\tilde{g}^{\mu\nu}=\frac{1}{N^{2}}\left(\begin{array}[]{cc}{-1}&{N^{i}}\\\ {N^{j}}&{N^{2}\tilde{h}^{ij}-N^{i}N^{j}}\end{array}\right).$ (9) Since we consider the energy scale after the GUT gauge symmetry is broken, the action of the $X,Y$ gauge bosons in the GUT (we implicitly consider an $SU(5)$ GUT, but our results are essentially unchanged even if a larger GUT gauge group is considered), collectively denoted by $X^{a}_{\mu}$, is given by $\displaystyle S_{\mathrm{gauge}}$ $\displaystyle=$ $\displaystyle\int d^{4}x\sqrt{-g}\left(-\frac{1}{4}g^{\mu\rho}g^{\nu\sigma}F^{a}_{\mu\nu}F^{a}_{\rho\sigma}-m^{2}_{X}g^{\mu\nu}X^{a}_{\mu}X^{*}_{a\nu}\right)$ (10) $\displaystyle\supset$ $\displaystyle\int d^{4}x\sqrt{-g}\left[-g^{\mu\rho}g^{\nu\sigma}\left\{(\partial_{\mu}X_{a\nu})(\partial_{\rho}X^{*}_{a\sigma})-(\partial_{\mu}X_{a\nu})(\partial_{\sigma}X^{*}_{a\rho})\right\}\right.$ $\displaystyle\left.-m^{2}_{X}g^{\mu\nu}X^{a}_{\mu}X^{*}_{a\nu}+\cdots\right],$ where the field strength of the $X,Y$ gauge bosons is $\displaystyle F_{\mu\nu}=\partial_{\mu}X_{\nu}-\partial_{\nu}X_{\mu}-ig_{\textrm{GUT}}[X_{\mu},X_{\nu}]$ (11) with a GUT gauge coupling constant $g_{\textrm{GUT}}$. $m_{X}$ is the mass of the $X,Y$ gauge bosons and $a$ is the index for the gauge group. $\cdots$ represents interaction terms from the commutator in the field strength $F_{\mu\nu}$, which are neglected here because they give sub-leading effects in our later analysis. Since the FLRW background spacetime is conformally flat, we introduce a gauge-fixing term $\displaystyle S_{\mathrm{GF}}$ $\displaystyle=$ $\displaystyle-\frac{1}{\xi}\int d^{4}x\sqrt{-g}\left(g^{\mu\nu}\nabla_{\mu}X^{a}_{\nu}+g^{\mu\nu}{\Gamma^{\alpha}}_{\mu\nu}X^{a}_{\alpha}\right)^{2}$ (12) $\displaystyle=$ $\displaystyle-\frac{1}{\xi}\int d^{4}x\sqrt{-g}\left(g^{\mu\nu}\partial_{\mu}X^{a}_{\nu}\right)^{2},$ which preserves conformal invariance.
$\xi$ is a gauge parameter, which will be fixed to $\xi=1$. From Eqs. (8) and (9), the quadratic action of the $X,Y$ gauge bosons eventually becomes $\displaystyle S_{G}$ $\displaystyle=$ $\displaystyle S_{\mathrm{gauge}}+S_{\mathrm{GF}}$ (13) $\displaystyle\supset$ $\displaystyle\int d^{4}x\sqrt{-g}\left[-g^{\mu\rho}g^{\nu\sigma}\left\{(\partial_{\mu}X_{a\nu})(\partial_{\rho}X^{*}_{a\sigma})-(\partial_{\mu}X_{a\nu})(\partial_{\sigma}X^{*}_{a\rho})\right\}-m^{2}_{X}g^{\mu\nu}X^{a}_{\mu}X^{*}_{a\nu}\right]$ $\displaystyle-\frac{1}{\xi}\int d^{4}x\sqrt{-g}\left(g^{\mu\nu}\partial_{\mu}X^{a}_{\nu}\right)\left(g^{\rho\sigma}\partial_{\rho}X^{a*}_{\sigma}\right).$ ### 2.2 Propagators of $X,Y$ gauge bosons In this subsection, we derive the propagators of the $X,Y$ gauge bosons in inflationary spacetime. In order to derive them, it is necessary to briefly introduce the in-in formalism [64]. For details, please refer to [61]. Starting from the Heisenberg picture, let us consider the path integral formalism for the expectation value $\langle Q(\eta)\rangle$ of the operator $Q(\eta)\equiv\varphi^{A_{1}}\left(\eta,\bm{x}_{1}\right)\cdots\varphi^{A_{N}}\left(\eta,\bm{x}_{N}\right).$ (14) Here $\eta$ denotes the conformal time, defined by $dt=a(t)\,d\eta$. In the Heisenberg picture, the initial state is independent of time, and the expectation value $\langle Q(\eta)\rangle$ is given by $\langle Q\rangle=\langle\Omega|Q(\eta)|\Omega\rangle.$ (15) The state $\ket{\Omega}$ is determined at some initial time slice $\eta=\eta_{0}$, and the vacuum state at $\eta=\eta_{0}$ is usually adopted. It is natural to adopt the state $\ket{\Omega}$ as the in-state, but since inflationary spacetime is out of equilibrium, it is nontrivial what the out-state should be.
To solve this problem, we choose a time slice $\Sigma_{f}$ at an arbitrary time $\eta_{f}\geq\eta$ and insert a complete set $1=\int\prod_{\bm{x}}dO_{\alpha}(\eta_{f},\bm{x})\left|O_{\alpha}(\eta_{f},\bm{x})\right\rangle\left\langle O_{\alpha}(\eta_{f},\bm{x})\right|$ (16) of local operators $O_{\alpha}\left(\eta,\bm{x}_{i}\right)$ consisting of field operators in the Lagrangian, $\langle Q\rangle=\int\prod_{\bm{x}}dO_{\alpha}(\eta_{f},\bm{x})\left\langle\Omega\left|O_{\alpha}(\eta_{f},\bm{x})\right\rangle\left\langle O_{\alpha}(\eta_{f},\bm{x})\right|\left.\varphi^{A_{1}}\left(\eta,\bm{x}_{1}\right)\cdots\varphi^{A_{N}}\left(\eta,\bm{x}_{N}\right)\right|\Omega\right\rangle.$ (17) Furthermore, we introduce infinitely many slices between the initial slice $\Sigma_{0}$ and the final slice $\Sigma_{f}$, and insert a complete set $1=\int\prod_{\bm{x}}d\varphi(\eta_{i},\bm{x})\left|\varphi(\eta_{i},\bm{x})\right\rangle\left\langle\varphi(\eta_{i},\bm{x})\right|$ (18) of field operators $\varphi^{A}$ and a complete set $1=\int\prod_{\bm{x}}d\pi(\eta_{i},\bm{x})\left|\pi(\eta_{i},\bm{x})\right\rangle\left\langle\pi(\eta_{i},\bm{x})\right|$ (19) of their conjugate operators $\pi_{A}$ in each slice $\Sigma_{i}$.
If we denote the fields in the time-ordered factor $\left\langle O_{\alpha}\left(\eta_{f}\right)|Q(\eta)|\Omega\left(\eta_{0}\right)\right\rangle$ with a subscript $+$, we obtain $\displaystyle\left\langle O_{\alpha}\left(\eta_{f}\right)|Q(\eta)|\Omega\left(\eta_{0}\right)\right\rangle$ $\displaystyle=$ $\displaystyle\int\mathcal{D}\varphi_{+}\mathcal{D}\pi_{+}\exp\left[i\int_{\eta_{0}}^{\eta_{f}}d\eta d^{3}x\,\left(\pi_{+A}\varphi_{+}^{\prime A}-\mathscr{H}\left[\pi_{+},\varphi_{+}\right]\right)\right]$ (20) $\displaystyle\times\varphi_{+}^{A_{1}}\left(\eta,\bm{x}_{1}\right)\cdots\varphi_{+}^{A_{N}}\left(\eta,\bm{x}_{N}\right)\left\langle O_{\alpha}(\eta_{f},\bm{x})|\varphi_{+}\left(\eta_{f}\right)\right\rangle\left\langle\varphi_{+}\left(\eta_{0}\right)|\Omega\right\rangle,$ where the prime in $\varphi_{+}^{\prime A}$ denotes a derivative with respect to $\eta$. $\mathscr{H}\left[\pi_{+},\varphi_{+}\right]$ is the Hamiltonian and the integral measures are collectively denoted by $\mathcal{D}\varphi\mathcal{D}\pi\equiv\lim_{N\rightarrow\infty}\prod_{n=0}^{N-1}\prod_{\bm{x}}\frac{d\varphi\left(\eta_{n},\bm{x}\right)d\pi\left(\eta_{n},\bm{x}\right)}{2\pi}=\prod_{\eta_{0}\leq\eta\leq\eta_{f}}\prod_{\bm{x}}\frac{d\varphi\left(\eta_{n},\bm{x}\right)d\pi\left(\eta_{n},\bm{x}\right)}{2\pi}.$ (21) Similarly, if we denote the fields in the anti-time-ordered factor $\left\langle\Omega\left(\eta_{0}\right)|O_{\alpha}(\eta_{f},\bm{x})\right\rangle$ with a subscript $-$, we obtain $\displaystyle\left\langle\Omega\left(\eta_{0}\right)|O_{\alpha}(\eta_{f},\bm{x})\right\rangle$ $\displaystyle=$ $\displaystyle\int\mathcal{D}\varphi_{-}\mathcal{D}\pi_{-}\exp\left[-i\int_{\eta_{0}}^{\eta_{f}}d\eta d^{3}x\,\left(\pi_{-A}\varphi_{-}^{\prime A}-\mathscr{H}\left[\pi_{-},\varphi_{-}\right]\right)\right]$ (22) $\displaystyle\vspace{3cm}\times\left\langle\varphi_{-}\left(\eta_{f}\right)|O_{\alpha}(\eta_{f},\bm{x})\right\rangle\left\langle\Omega|\varphi_{-}\left(\eta_{0}\right)\right\rangle.$
Putting Eqs. (20) and (22) together, we obtain the expectation value as $\displaystyle\langle Q\rangle=$ $\displaystyle\int\mathcal{D}\varphi_{+}\mathcal{D}\pi_{+}\mathcal{D}\varphi_{-}\mathcal{D}\pi_{-}\varphi_{+}^{A_{1}}\left(\eta,\bm{x}_{1}\right)\cdots\varphi_{+}^{A_{N}}\left(\eta,\bm{x}_{N}\right)$ $\displaystyle\times\exp\left[i\int_{\eta_{0}}^{\eta_{f}}d\eta d^{3}x\,\left(\pi_{+A}\varphi_{+}^{\prime A}-\mathscr{H}\left[\pi_{+},\varphi_{+}\right]\right)\right]$ $\displaystyle\times\exp\left[-i\int_{\eta_{0}}^{\eta_{f}}d\eta d^{3}x\,\left(\pi_{-A}\varphi_{-}^{\prime A}-\mathscr{H}\left[\pi_{-},\varphi_{-}\right]\right)\right]$ $\displaystyle\times\left\langle\Omega|\varphi_{-}\left(\eta_{0}\right)\right\rangle\left\langle\varphi_{+}\left(\eta_{0}\right)|\Omega\right\rangle\prod_{A,\bm{x}}\delta\left(\varphi_{+}^{A}\left(\eta_{f},\bm{x}\right)-\varphi_{-}^{A}\left(\eta_{f},\bm{x}\right)\right),$ (23) where the path integral is restricted to the interval between the two times $\eta=\eta_{0}$ and $\eta=\eta_{f}$, which means that it must be integrated over all possible states $\ket{\varphi_{-}(\eta_{0})}$ and $\bra{\varphi_{+}(\eta_{0})}$ appearing in the integrand. As a result, two copies of the path integral are obtained.
One is the path integral in which time moves forward, and the other is that in which time moves backward; the two are identified at the final time $\eta_{f}$ through the condition $\varphi_{+}^{A}(\eta_{f})=\varphi_{-}^{A}(\eta_{f}).$ (24) If the theory does not contain higher-order derivatives, then the momentum integrals can be performed as Gaussian integrals, and the result can be written using the Lagrangian $\mathscr{L}_{\mathrm{cl}}\left[\varphi_{\pm}\right]$ as $\displaystyle\langle Q\rangle=$ $\displaystyle\int\mathcal{D}\varphi_{+}\mathcal{D}\varphi_{-}\varphi_{+}^{A_{1}}\left(\eta,\bm{x}_{1}\right)\cdots\varphi_{+}^{A_{N}}\left(\eta,\bm{x}_{N}\right)\exp\left[i\int_{\eta_{0}}^{\eta_{f}}d\eta d^{3}x\,\left(\mathscr{L}_{\mathrm{cl}}\left[\varphi_{+}\right]-\mathscr{L}_{\mathrm{cl}}\left[\varphi_{-}\right]\right)\right]$ $\displaystyle\times\left\langle\Omega|\varphi_{-}\left(\eta_{0}\right)\right\rangle\left\langle\varphi_{+}\left(\eta_{0}\right)|\Omega\right\rangle\prod_{A,\bm{x}}\delta\left(\varphi_{+}^{A}\left(\eta_{f},\bm{x}\right)-\varphi_{-}^{A}\left(\eta_{f},\bm{x}\right)\right).$ (25) Compared to the path integral in flat spacetime, there is an extra term $-i\mathscr{L}_{\mathrm{cl}}\left[\varphi_{-}\right]$ in the exponent. Since this amounts to changing $i$ to $-i$ in the flat-spacetime path integral, it means that we only need to add the complex conjugate. The reason why such a complex conjugate of the original path integral is introduced is that the expectation value of operators in cosmology is evaluated at a fixed time, in contrast to particle physics, where the initial state is in the infinite past and the final state is in the infinite future.
In addition, the two inner products $\left\langle\Omega|\varphi_{-}\left(\eta_{0}\right)\right\rangle$ and $\left\langle\varphi_{+}\left(\eta_{0}\right)|\Omega\right\rangle$ play the role of providing the correct $i\epsilon$ prescription for the time integrals. This means that in the time-ordered part the time contour is deformed as $\eta\to(1-i\epsilon)\eta,$ (26) and in the anti-time-ordered part the time contour is deformed as $\eta\to(1+i\epsilon)\eta.$ (27) Such a formalism, which is used to calculate the expectation values of physical quantities in non-equilibrium systems, is called the in-in formalism (Schwinger-Keldysh formalism). Corresponding to the extension of the path integral described above, the generating functional is extended as follows. For simplicity, we consider the case of a real scalar field $\varphi$, but the generalization is straightforward. We write $\varphi_{\pm}(\eta,\bm{x})$ for the two copies of the scalar field and introduce their corresponding sources $J_{\pm}(\eta,\bm{x})$.
$Z\left[J_{+},J_{-}\right]=\int\mathcal{D}\varphi_{+}\mathcal{D}\varphi_{-}\exp\left[i\int_{\eta_{0}}^{\eta_{f}}\mathrm{d}\eta d^{3}x\left(\mathscr{L}_{\mathrm{cl}}\left[\varphi_{+}\right]-\mathscr{L}_{\mathrm{cl}}\left[\varphi_{-}\right]+J_{+}\varphi_{+}-J_{-}\varphi_{-}\right)\right].$ (28) We divide the Lagrangian into a free field part $\mathscr{L}_{0}$ and an interaction part $\mathscr{L}_{\mathrm{int}}$, $\mathscr{L}_{\mathrm{cl}}[\varphi]=\mathscr{L}_{0}[\varphi]+\mathscr{L}_{\mathrm{int}}[\varphi]$ (29) and rewrite the generating functional as $\displaystyle Z\left[J_{+},J_{-}\right]$ $\displaystyle=$ $\displaystyle\exp\left[i\int_{\eta_{0}}^{\eta_{f}}d\eta d^{3}x\left(\mathscr{L}_{\mathrm{int}}\left[\frac{\delta}{i\delta J_{+}}\right]-\mathscr{L}_{\mathrm{int}}\left[-\frac{\delta}{i\delta J_{-}}\right]\right)\right]Z_{0}\left[J_{+},J_{-}\right],$ (30) $\displaystyle Z_{0}\left[J_{+},J_{-}\right]$ $\displaystyle\equiv$ $\displaystyle\int\mathcal{D}\varphi_{+}\mathcal{D}\varphi_{-}\exp\left[i\int_{\eta_{0}}^{\eta_{f}}d\eta d^{3}x\left(\mathscr{L}_{0}\left[\varphi_{+}\right]-\mathscr{L}_{0}\left[\varphi_{-}\right]+J_{+}\varphi_{+}-J_{-}\varphi_{-}\right)\right].$ The path integral $Z_{0}[J_{+},J_{-}]$ is Gaussian and can be explicitly integrated. Therefore, there are four types of propagators, $-iG_{ab}\left(\eta_{1},\bm{x}_{1};\eta_{2},\bm{x}_{2}\right)\equiv\left.\frac{\delta}{ia\delta J_{a}\left(\eta_{1},\bm{x}_{1}\right)}\frac{\delta}{ib\delta J_{b}\left(\eta_{2},\bm{x}_{2}\right)}Z_{0}\left[J_{+},J_{-}\right]\right|_{J_{\pm}=0}.$ (32) Since the signs of derivatives of $+$ type fields are different from those of $-$ type fields, we have added $a,b=\pm$ to each derivative. 
For example, the propagator of type $(+,+)$ is $\displaystyle-iG_{++}\left(\eta_{1},\bm{x}_{1};\eta_{2},\bm{x}_{2}\right)$ $\displaystyle=$ $\displaystyle\left.\frac{\delta}{i\delta J_{+}\left(\eta_{1},\bm{x}_{1}\right)}\frac{\delta}{i\delta J_{+}\left(\eta_{2},\bm{x}_{2}\right)}Z_{0}\left[J_{+},J_{-}\right]\right|_{J_{\pm}=0}$ (33) $\displaystyle=$ $\displaystyle\int\mathcal{D}\varphi_{+}\mathcal{D}\varphi_{-}\varphi_{+}\left(\eta_{1},\bm{x}_{1}\right)\varphi_{+}\left(\eta_{2},\bm{x}_{2}\right)e^{i\int d\eta d^{3}x\left(\mathscr{L}_{0}\left[\varphi_{+}\right]-\mathscr{L}_{0}\left[\varphi_{-}\right]\right)}$ $\displaystyle=$ $\displaystyle\left\langle\Omega\left|\mathrm{T}\left\{\varphi\left(\eta_{1},\bm{x}_{1}\right)\varphi\left(\eta_{2},\bm{x}_{2}\right)\right\}\right|\Omega\right\rangle,$ where $\mathrm{T}$ denotes the usual time ordering. Thanks to the translational and rotational invariance on each time slice, we can move to three-dimensional momentum space. The field $\varphi$ is represented by a mode function $v(\eta,\bm{k})$ and creation and annihilation operators for a given three-dimensional momentum $\bm{k}$. Substituting this into the four propagators, we obtain the propagators in momentum space $\Delta_{ab}\left(\eta_{1},\eta_{2},k\right)=-i\int d^{3}xe^{-i\bm{k}\cdot\bm{x}}G_{ab}\left(\eta_{1},\bm{x};\eta_{2},\bm{0}\right).$ (34) Here, we attached $-i$ to $\Delta_{ab}$ in order to avoid extra factors in the Feynman rules in momentum space. Furthermore, the momentum dependence of $\Delta_{ab}$ can be written in terms of $k=|\bm{k}|$ alone: the propagator does not depend on the direction of the three-dimensional momentum, but only on its magnitude, thanks to rotational symmetry. Then, the tree-level propagators in three-dimensional momentum space can be easily obtained.
$\displaystyle\Delta_{++}\left(\eta_{1},\eta_{2},k\right)$ $\displaystyle=$ $\displaystyle\Delta_{>}\left(\eta_{1},\eta_{2},k\right)\theta\left(\eta_{1}-\eta_{2}\right)+\Delta_{<}\left(\eta_{1},\eta_{2},k\right)\theta\left(\eta_{2}-\eta_{1}\right),$ (35) $\displaystyle\Delta_{+-}\left(\eta_{1},\eta_{2},k\right)$ $\displaystyle=$ $\displaystyle\Delta_{<}\left(\eta_{1},\eta_{2},k\right),$ (36) $\displaystyle\Delta_{-+}\left(\eta_{1},\eta_{2},k\right)$ $\displaystyle=$ $\displaystyle\Delta_{>}\left(\eta_{1},\eta_{2},k\right),$ (37) $\displaystyle\Delta_{--}\left(\eta_{1},\eta_{2},k\right)$ $\displaystyle=$ $\displaystyle\Delta_{<}\left(\eta_{1},\eta_{2},k\right)\theta\left(\eta_{1}-\eta_{2}\right)+\Delta_{>}\left(\eta_{1},\eta_{2},k\right)\theta\left(\eta_{2}-\eta_{1}\right),$ (38) where $\Delta_{>}\left(\eta_{1},\eta_{2},k\right)$ and $\Delta_{<}\left(\eta_{1},\eta_{2},k\right)$ are defined by $\displaystyle\Delta_{>}\left(\eta_{1},\eta_{2},k\right)\equiv v\left(\eta_{1},k\right)v^{*}\left(\eta_{2},k\right),$ (39) $\displaystyle\Delta_{<}\left(\eta_{1},\eta_{2},k\right)\equiv v^{*}\left(\eta_{1},k\right)v\left(\eta_{2},k\right)$ (40) and $\theta(\eta)$ is the step function. For the graphical representation of the propagators, we assign black and white dots to $+$ and $-$, respectively. Hence, the four propagators are shown in Fig. 1. Figure 1: Graphical representation of an internal line in the in-in formalism. An external line connecting to the time slice at the final time $\eta=\eta_{f}$ (the boundary point) carries $\eta_{f}$ as its time argument. Since the boundary point does not distinguish $+$ from $-$, there are only two kinds of such propagators, which are called bulk-to-boundary propagators. We assign a square to the boundary point, as shown in Fig. 2. Figure 2: Graphical representation of an external line in the in-in formalism. With these preparations, let us derive the propagators of the $X,Y$ gauge bosons in the in-in formalism.
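The assignments in Eqs. (35)-(40) imply the consistency relation $\Delta_{++}+\Delta_{--}=\Delta_{+-}+\Delta_{-+}$ at non-coincident times, since $\theta(\eta_{1}-\eta_{2})+\theta(\eta_{2}-\eta_{1})=1$. A quick numerical sketch of this bookkeeping, using the massless de Sitter mode function $v=H(1+ik\eta)e^{-ik\eta}/\sqrt{2k^{3}}$ as an illustrative stand-in (not the gauge-field mode function of this paper):

```python
import numpy as np

# Toy check of the in-in propagator bookkeeping: with Delta_> = v1 v2* and
# Delta_< = v1* v2, the four propagators satisfy
# Delta_{++} + Delta_{--} = Delta_{+-} + Delta_{-+} away from coincidence.
# The massless de Sitter mode function below is an illustrative stand-in.
H = 1.0

def v(eta, k):
    return H / np.sqrt(2.0 * k**3) * (1.0 + 1j * k * eta) * np.exp(-1j * k * eta)

def prop(a, b, eta1, eta2, k):
    gt = v(eta1, k) * np.conj(v(eta2, k))  # Delta_>
    lt = np.conj(v(eta1, k)) * v(eta2, k)  # Delta_<
    if (a, b) == (+1, +1):                 # time-ordered
        return gt if eta1 > eta2 else lt
    if (a, b) == (-1, -1):                 # anti-time-ordered
        return lt if eta1 > eta2 else gt
    return lt if (a, b) == (+1, -1) else gt

eta1, eta2, k = -2.3, -0.7, 1.5            # illustrative conformal times
sum_diag = prop(+1, +1, eta1, eta2, k) + prop(-1, -1, eta1, eta2, k)
sum_off = prop(+1, -1, eta1, eta2, k) + prop(-1, +1, eta1, eta2, k)
print(np.isclose(sum_diag, sum_off))
```

The same structure carries over once $v$ is replaced by the Hankel-function mode derived next.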
The action of the $X,Y$ gauge fields (13) in the background space-time $ds^{2}=a^{2}(\eta)\left(-d\eta^{2}+dx^{2}+dy^{2}+dz^{2}\right),\quad a(\eta)=-\frac{1}{H\eta}$ (41) can be written as $S_{G}=\int d^{4}x\left[\eta^{\mu\rho}X^{a*}_{\sigma}\partial_{\mu}\partial_{\rho}X^{\sigma}_{a}-X^{\mu*}_{a}\partial_{\mu}\partial_{\sigma}X^{\sigma}_{a}-a^{2}m_{X}^{2}X^{a}_{\mu}X^{\mu*}_{a}+\frac{1}{\xi}X^{\rho*}_{a}\partial_{\mu}\partial_{\rho}X^{a\mu}\right].$ (42) Taking the Feynman gauge $\xi=1$, the second and fourth terms cancel each other, and we get the simple expression $S_{G}=\int d^{4}x\left[\eta^{\mu\rho}X^{a*}_{\sigma}\partial_{\mu}\partial_{\rho}X^{\sigma}_{a}-a^{2}m_{X}^{2}X^{a}_{\mu}X^{\mu*}_{a}\right].$ (43) Deriving the equation of motion from this action, we obtain the Klein-Gordon equation with mass term $a^{2}m_{X}^{2}$ $\left(\eta^{\nu\sigma}\partial_{\nu}\partial_{\sigma}-a^{2}m_{X}^{2}\right)X^{\mu}_{a}=0.$ (44) If we Fourier transform the gauge field $X^{\mu}_{a}$ into three-dimensional momentum space and write its mode function as $v^{\mu}_{a}(\eta,\bm{k})$, the Klein-Gordon equation (44) becomes $\frac{\partial^{2}v^{\mu}_{a}}{\partial\eta^{2}}(\eta,\bm{k})+\left(|\bm{k}|^{2}+\frac{m_{X}^{2}}{H^{2}\eta^{2}}\right)v^{\mu}_{a}(\eta,\bm{k})=0.$ (45) If we define $\nu_{A}\equiv\sqrt{\frac{1}{4}-\left(\frac{m_{X}}{H}\right)^{2}},$ (46) the solution to this Klein-Gordon equation is found to be $v^{(\lambda)\mu}_{a}(\eta,\bm{k})=-i\frac{\sqrt{\pi}}{2}e^{i\pi\left(\frac{\nu_{A}}{2}+\frac{1}{4}\right)}\left(-\eta\right)^{\frac{1}{2}}H^{a(1)}_{\nu_{A}}(-|\bm{k}|\eta)\varepsilon^{(\lambda)\mu}(\bm{k})\qquad(\lambda=1,2,3),$ (47) where $H^{a(1)}_{\nu_{A}}(-|\bm{k}|\eta)$ is a Hankel function of the first kind and the index $a$ labels the gauge field.
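As a numerical sanity check, one can verify that the radial profile $\sqrt{-\eta}\,H^{(1)}_{\nu_{A}}(-k\eta)$ of the mode function (47) indeed solves Eq. (45) with $\nu_{A}$ given by (46); a short sketch using scipy (the values of $m_{X}/H$ and $k$ are illustrative, with $m_{X}<H/2$ chosen so that $\nu_{A}$ is real):

```python
import numpy as np
from scipy.special import hankel1

H = 1.0
m_X = 0.3 * H                        # illustrative; m_X < H/2 keeps nu_A real
nu_A = np.sqrt(0.25 - (m_X / H)**2)  # Eq. (46)
k = 2.0

def v(eta):
    # scalar profile of the mode function (47), constant prefactors dropped
    return np.sqrt(-eta) * hankel1(nu_A, -k * eta)

eta, h = -1.5, 1e-3
d2v = (v(eta + h) - 2 * v(eta) + v(eta - h)) / h**2       # finite-difference v''
residual = d2v + (k**2 + (m_X / H)**2 / eta**2) * v(eta)  # LHS of Eq. (45)
assert abs(residual) < 1e-5 * abs(v(eta))
```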
$\varepsilon^{(\lambda)\mu}(\bm{k})$ is a polarization vector of $X_{\mu}$, which satisfies the normalization condition and the projection relation $\varepsilon^{(\lambda)}_{\mu}(\bm{k})\varepsilon^{(\lambda^{\prime})\mu}(\bm{k})=\delta^{\lambda\lambda^{\prime}},\qquad\sum_{\lambda}\varepsilon^{(\lambda)}_{\mu}(\bm{k})\varepsilon_{\nu}^{(\lambda)}(\bm{k})=g_{\mu\nu}+\frac{k_{\mu}k_{\nu}}{m_{X}^{2}},$ (48) where $\lambda$ denotes the polarization index. The normalization of the expression (47) was fixed as follows. First, the Bunch-Davies vacuum is chosen as the initial condition, and the gauge field $X^{\mu}_{a}(\eta,\bm{x})$ and its conjugate momentum $\Pi^{\mu}_{a}(\eta,\bm{x})$ are expanded in terms of the creation and annihilation operators. $\displaystyle X^{\mu}_{a}(\eta,\bm{x})$ $\displaystyle=$ $\displaystyle\int\frac{d^{3}k}{(2\pi)^{3}}\sum_{\lambda=1,2,3}\left[a^{(\lambda)}(\bm{k})v^{(\lambda)\mu}_{a}(\eta,\bm{k})+b^{(\lambda)\dagger}(-\bm{k})v^{(\lambda)\mu*}_{a}(\eta,-\bm{k})\right]e^{i\bm{k}\cdot\bm{x}},$ (49) $\displaystyle\Pi^{\mu}_{a}(\eta,\bm{x})$ $\displaystyle=$ $\displaystyle\frac{\partial}{\partial\left(\partial_{\eta}{X_{a\mu}}\right)}\left[\eta^{\mu\rho}X^{a*}_{\sigma}\partial_{\mu}\partial_{\rho}X^{\sigma}_{a}-a^{2}m_{X}^{2}X^{a}_{\mu}X^{\mu*}_{a}\right]$ $\displaystyle=$ $\displaystyle\int\frac{d^{3}k}{(2\pi)^{3}}\sum_{\lambda=1,2,3}\left[b^{(\lambda)}(-\bm{k})\partial_{\eta}v^{(\lambda)\mu}_{a}(\eta,-\bm{k})+a^{(\lambda)\dagger}(\bm{k})\partial_{\eta}v^{(\lambda)\mu*}_{a}(\eta,\bm{k})\right]e^{-i\bm{k}\cdot\bm{x}},$ where $a^{(\lambda)}(\bm{k}),a^{(\lambda)\dagger}(\bm{k}),b^{(\lambda)}(-\bm{k})$ and $b^{(\lambda)\dagger}(-\bm{k})$ are the creation and annihilation operators corresponding to the modes with polarization $\lambda$.
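Equivalently, this normalization fixes the Wronskian of the scalar part of the mode function: the prefactor $\sqrt{\pi}/2$ in (47) ensures $v\,\partial_{\eta}v^{*}-v^{*}\,\partial_{\eta}v=i$ per polarization mode, which is the standard canonical-quantization consistency check. A numerical sketch for a real order $\nu_{A}$ (parameter values illustrative):

```python
import numpy as np
from scipy.special import hankel1

nu_A, k = 0.4, 2.0                 # illustrative real order and momentum
# scalar part of Eq. (47), polarization vector stripped off
C = -1j * np.sqrt(np.pi) / 2 * np.exp(1j * np.pi * (nu_A / 2 + 0.25))

def v(eta):
    return C * np.sqrt(-eta) * hankel1(nu_A, -k * eta)

eta, h = -1.3, 1e-6
dv = (v(eta + h) - v(eta - h)) / (2 * h)              # finite-difference dv/deta
wronskian = v(eta) * np.conj(dv) - np.conj(v(eta)) * dv
assert np.allclose(wronskian, 1j, atol=1e-6)          # canonical normalization
```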
Then, the gauge field, its conjugate momentum, and the creation and annihilation operators are required to satisfy the following canonical commutation relations $\displaystyle\left[X^{\mu}_{a}(\eta,\bm{x}),\Pi^{\nu}_{a}(\eta,\bm{x}^{\prime})\right]$ $\displaystyle=$ $\displaystyle i\left(g^{\mu\nu}-\frac{\partial^{\mu}\partial^{\nu}}{m_{X}^{2}}\right)\delta(\bm{x}-\bm{x}^{\prime}),$ (51) $\displaystyle\left[a^{(\lambda)}(\bm{k}),a^{(\lambda^{\prime})\dagger}(\bm{k}^{\prime})\right]$ $\displaystyle=$ $\displaystyle(2\pi)^{3}\delta^{\lambda\lambda^{\prime}}\delta(\bm{k}-\bm{k}^{\prime}),$ (52) $\displaystyle\left[b^{(\lambda)}(\bm{k}),b^{(\lambda^{\prime})\dagger}(\bm{k}^{\prime})\right]$ $\displaystyle=$ $\displaystyle(2\pi)^{3}\delta^{\lambda\lambda^{\prime}}\delta(\bm{k}-\bm{k}^{\prime}),$ (53) others $\displaystyle=$ $\displaystyle 0.$ (54) Using (39), (40) and the property $H_{\nu_{A}}^{(1)*}(-k\eta)=H_{\nu_{A}}^{(2)}(-k\eta)$ (55) that the complex conjugate of a Hankel function of the first kind is the corresponding Hankel function of the second kind, we obtain the propagators of the $X,Y$ gauge bosons, $\displaystyle\Delta^{(\lambda)\mu\nu}_{>}\left(\eta_{1},\eta_{2},k\right)=-\frac{\pi}{4}e^{-\pi\mathrm{Im}(\nu_{A})}\left(\eta_{1}\eta_{2}\right)^{1/2}H_{\nu_{A}}^{(1)}(-k\eta_{1})H_{\nu_{A}}^{(2)}(-k\eta_{2})\varepsilon^{(\lambda)\mu}(k)\varepsilon^{(\lambda)\nu}(k),$ (56) $\displaystyle\Delta^{(\lambda)\mu\nu}_{<}\left(\eta_{1},\eta_{2},k\right)=-\frac{\pi}{4}e^{-\pi\mathrm{Im}(\nu_{A})}\left(\eta_{1}\eta_{2}\right)^{1/2}H_{\nu_{A}}^{(1)}(-k\eta_{2})H_{\nu_{A}}^{(2)}(-k\eta_{1})\varepsilon^{(\lambda)\nu}(k)\varepsilon^{(\lambda)\mu}(k).$ (57) ### 2.3 Interaction of $X,Y$ gauge bosons and gravitons For the calculation of the non-Gaussianity from $X,Y$ gauge bosons, let us derive the interactions between the $X,Y$ gauge bosons and gravitons.
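Incidentally, the conjugation property (55) used above holds for a real order and real positive argument, and can be confirmed directly with scipy (for $m_{X}>H/2$ the order $\nu_{A}$ becomes imaginary, in which case the conjugate instead satisfies $H^{(1)*}_{\nu_{A}}=H^{(2)}_{\nu_{A}^{*}}$; the numerical values below are illustrative):

```python
import numpy as np
from scipy.special import hankel1, hankel2

nu, x = 0.4, 2.7   # real order, real positive argument (illustrative)
# conj of a first-kind Hankel function equals the second-kind Hankel function
assert np.allclose(np.conj(hankel1(nu, x)), hankel2(nu, x))
```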
Although we would have to calculate the interactions between the $X,Y$ gauge bosons and the inflaton to predict the non-Gaussianity, they are very complicated. Therefore, we will instead calculate the three-point function of the graviton and evaluate the three-point function of the inflaton from it by utilizing information on the power spectrum. As we will see later, this strategy is sufficient for our analysis to estimate the order of the non-Gaussianity. We proceed with the calculation using the action of the gauge field (13) and the inverse matrix of the metric (9). The following three-point and four-point couplings including the graviton are derived, respectively. $\displaystyle\mathcal{L}_{3pt}$ $\displaystyle=$ $\displaystyle\gamma^{jl}\left[-a\left(\partial_{j}X_{0}\right)\partial_{l}X^{*}_{0}+a\left(\partial_{j}X_{0}\right)\partial_{0}X^{*}_{l}+a\left(\partial_{0}X_{j}\right)\partial_{l}X^{*}_{0}-a\left(\partial_{j}X_{l}\right)\partial_{0}X^{*}_{0}\right.$ (58) $\displaystyle\left.-a\left(\partial_{0}X_{0}\right)\partial_{l}X^{*}_{l}-a\left(\partial_{0}X_{j}\right)\partial_{0}X^{*}_{l}+\frac{1}{a}\left(\partial_{n}X_{j}\right)\partial_{n}X^{*}_{l}+\frac{1}{a}\left(\partial_{j}X_{n}\right)\partial_{l}X^{*}_{n}\right.$ $\displaystyle\left.-\frac{1}{a}\left(\partial_{n}X_{j}\right)\partial_{l}X^{*}_{n}-\frac{1}{a}\left(\partial_{j}X_{n}\right)\partial_{n}X^{*}_{l}+\frac{1}{a}\left(\partial_{n}X_{n}\right)\partial_{j}X^{*}_{l}+\frac{1}{a}\left(\partial_{j}X_{l}\right)\partial_{n}X^{*}_{n}\right.$ $\displaystyle-\left.am_{X}^{2}X_{j}X_{l}^{*}\right],$ $\displaystyle\mathcal{L}_{4pt}$ $\displaystyle=$ $\displaystyle\frac{1}{2}\gamma^{im}\gamma^{mk}\left[a\left(\partial_{i}X_{0}\right)\partial_{k}X^{*}_{0}-a\left(\partial_{i}X_{0}\right)\partial_{0}X^{*}_{k}-a\left(\partial_{0}X_{i}\right)\partial_{k}X^{*}_{0}+a\left(\partial_{i}X_{k}\right)\partial_{0}X^{*}_{0}\right.$ (59)
$\displaystyle\left.+a\left(\partial_{0}X_{0}\right)\partial_{i}X^{*}_{k}+a\left(\partial_{0}X_{i}\right)\partial_{0}X^{*}_{k}-\frac{1}{a}\left(\partial_{n}X_{i}\right)\partial_{n}X^{*}_{k}-\frac{1}{a}\left(\partial_{i}X_{n}\right)\partial_{k}X^{*}_{n}\right.$ $\displaystyle\left.+\frac{1}{a}\left(\partial_{n}X_{i}\right)\partial_{k}X^{*}_{n}+\frac{1}{a}\left(\partial_{i}X_{n}\right)\partial_{n}X^{*}_{k}-\frac{1}{a}\left(\partial_{n}X_{n}\right)\partial_{i}X^{*}_{k}-\frac{1}{a}\left(\partial_{i}X_{k}\right)\partial_{n}X^{*}_{n}\right.$ $\displaystyle+\left.am_{X}^{2}X_{i}X^{*}_{k}\right]+\frac{1}{a}\gamma^{im}\gamma^{kn}\left[-\left(\partial_{i}X_{k}\right)\partial_{m}X^{*}_{n}+\left(\partial_{i}X_{k}\right)\partial_{n}X^{*}_{m}-\left(\partial_{i}X_{m}\right)\partial_{k}X^{*}_{n}\right].$ For the sake of clarity, we omitted the subscript $a$ for the gauge field. ## 3 Calculation of non-Gaussianity ### 3.1 Contribution from $X,Y$ gauge bosons to the graviton 3-point function We are ready to calculate the one-loop contributions from $X,Y$ gauge bosons to the non-Gaussianity of the graviton three-point function. Note that the terms multiplied by $a^{-1}$ in the three-point and four-point couplings (58), (59) become small after inflation, because the scale factor $a\propto e^{Ht}\to\infty$ as $t\to\infty$. Therefore, it is a good approximation to neglect the terms proportional to $1/a$. The remaining interactions we should consider are classified into two types: one comes from the derivative terms of the $X,Y$ gauge bosons, the other from their mass terms.
We first consider interactions such as $-\gamma^{jl}a\left(\partial_{j}X_{0}\right)\partial_{l}X^{*}_{0}\times\frac{1}{2}\gamma^{im}\gamma^{mk}a\left(\partial_{i}X_{0}\right)\partial_{k}X^{*}_{0},$ (60) which are the first terms of the three-point and four-point couplings (58) and (59), respectively, and calculate the graviton three-point function using these interactions as a typical example. The Feynman diagram for this is shown in Figure 3. The external gravitons are represented by solid lines, and the internal $X,Y$ gauge bosons are represented by wavy lines. Figure 3: ($+,-$) type graph of the contribution from $X,Y$ gauge bosons to the graviton three-point function. Using the Feynman rules in the in-in formalism, the contribution of this one-loop diagram to the graviton three-point function can be written as $\displaystyle S\times N\times\int_{-(1+i\epsilon)\infty}^{0}a(\tau)\,d\tau\int_{-(1+i\epsilon)\infty}^{0}a(\tau^{\prime})\,d\tau^{\prime}\int\frac{d^{3}k}{(2\pi)^{3}}i\cdot(-i)\frac{1}{2}a(\tau)\left(-a(\tau^{\prime})\right)$ $\displaystyle\times G_{++}^{ab,im}(\tau,0,\bm{p}_{1})G_{++}^{bc,mk}(\tau,0,\bm{p}_{2})G_{++}^{ac,jl}(\tau^{\prime},0,\bm{p}_{3})$ $\displaystyle\times\frac{1}{3}\sum_{\lambda}\Delta_{00+-}^{(\lambda)}(\tau,\tau^{\prime},k)\cdot\left(-ik_{i}\right)\left(-ik_{j}\right)$ $\displaystyle\times\frac{1}{3}\sum_{\lambda^{\prime}}\Delta_{00+-}^{(\lambda^{\prime})}\left(\tau,\tau^{\prime},\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\right)\cdot\left(-i\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{k}\right)\left(-i\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{l}\right)$ $\displaystyle+\text{sym.}+\text{c.c.}$ (61) $\displaystyle\equiv$ $\displaystyle J+\text{sym.}+\text{c.c.},$ (62) where c.c. denotes the complex conjugate of the preceding terms. Here, we write the first term as $J$, and “sym.” represents the contributions from the diagrams with cyclically permuted momenta $\bm{p}_{1},\bm{p}_{2}$ and $\bm{p}_{3}$.
The momenta flowing into the black circles are denoted by $\bm{p}_{1}$ and $\bm{p}_{2}$, the momentum leaving the white circle is denoted by $\bm{p}_{3}$, and the internal line momenta are denoted by $\bm{k}$ and $\bm{k}-\bm{p}_{1}-\bm{p}_{2}$. $S$ is a symmetry factor, the number of ways to connect the legs of the vertices to the external lines, namely $2^{2}\times 2$ in the present case. $N$ is the number of $X,Y$ gauge boson species. In the $SU(5)$ GUT case, $N=3\times 2$ because the $X,Y$ gauge bosons belong to the $({\bf 3},{\bf 2})$ and $({\bf 3}^{*},{\bf 2})$ representations under the SM gauge groups $SU(3)\times SU(2)$. $G_{++}^{ab,im}(\tau,0,\bm{p}_{1})$ and so on are graviton propagators [61] $\left(G_{>}\right)_{ij,kl}(\tau_{1},\tau_{2},\bm{k})=\frac{H^{2}}{M_{\mathrm{Pl}}^{2}k^{3}}\left(1+ik\tau_{1}\right)\left(1-ik\tau_{2}\right)e^{-ik\left(\tau_{1}-\tau_{2}\right)}\sum_{\alpha}e^{\alpha}_{ij}(\bm{k})e^{\alpha*}_{kl}(\bm{k}).$ (63) Here we give only the result for Eq. (62), since the calculation is lengthy; for the detailed calculation, see Appendix A. $\displaystyle\text{Eq.}~{}(\ref{1-loop1})\simeq$ $\displaystyle~{}SN\frac{1}{18\cdot 4^{3}\pi}\frac{H^{2}}{M_{\mathrm{Pl}}^{6}(p_{1}p_{2}p_{3})^{3}}\Pi_{abim}(\bm{p}_{1})\Pi_{bcmk}(\bm{p}_{2})\Pi_{acjl}(\bm{p}_{3})$ (64) $\displaystyle\times\frac{1}{210\pi}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\left(2\Lambda-p_{1}-p_{2}\right)^{7}+\text{sym.}$ where $\Pi_{ijkl}(\bm{k})$ is defined as the sum over the graviton polarization tensors $\Pi_{ijkl}(\bm{k})\equiv\sum_{\alpha}e^{\alpha}_{ij}(\bm{k})e^{\alpha*}_{kl}(\bm{k}).$ (65) In the result (64), the large-momentum limit of the Hankel functions and the squeezed limit of the external momenta are taken. $\Lambda$ is the cutoff scale of the loop momentum, which is usually considered to be $m_{X}$ from the effective theory viewpoint.
If the cutoff scale is sufficiently larger than the graviton momenta, then Eq. (64) becomes $\displaystyle\text{Eq.}~{}(\ref{1-loop1result})\simeq$ $\displaystyle~{}SN\frac{1}{18\cdot 4^{3}\pi}\frac{H^{2}}{M_{\mathrm{Pl}}^{6}(p_{1}p_{2}p_{3})^{3}}\Pi_{abim}(\bm{p}_{1})\Pi_{bcmk}(\bm{p}_{2})\Pi_{acjl}(\bm{p}_{3})$ (66) $\displaystyle\times\frac{1}{210\pi}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\left(2\Lambda\right)^{7}\times 3.$ Similarly, calculating the graphs with propagators other than those of type $(+,-)$, we find that they are of the same order as the result (66). Also, the one-loop contributions to the graviton three-point function from the other terms proportional to $a$ in the three-point couplings (58) and four-point couplings (59) are easily found to be of the same order as the result (66). The last remaining contribution, due to the three-point and four-point couplings originating from the mass terms of the $X,Y$ gauge bosons, is expressed as $-\gamma^{jl}am_{X}^{2}X_{j}X_{l}^{*}\times\frac{1}{2}\gamma^{im}am_{X}^{2}X_{i}X^{*}_{k}\gamma^{mk}.$ (67) The corresponding one-loop diagram, analogous to Fig. 3, is calculated similarly (we denote it by $J_{m}$), $J_{m}\simeq\text{eq.}(\ref{1-loop1})\times 2^{4}\times 30\times\left(\frac{m_{X}}{H}\right)^{4}.$ (68) As is clear from the result, the contribution $J_{m}$ dominates over $J$ due to the enhancement factor $2^{4}\times 30\times\left(\frac{m_{X}}{H}\right)^{4}$. In particular, the factor $\left(\frac{m_{X}}{H}\right)^{4}$ is very large when the mass of the $X,Y$ gauge bosons is larger than the Hubble scale, and it is crucial in estimating the non-Gaussianity. ### 3.2 Non-Gaussianity To evaluate the non-Gaussianity, we consider the graviton version of Eq. (5.9) in [62].
$\left\langle\gamma\gamma\gamma\right\rangle\equiv S\left(p_{1},p_{2},p_{3}\right)\frac{1}{\left(p_{1}p_{2}p_{3}\right)^{2}}\tilde{P}_{\gamma}^{2}(2\pi)^{7}\delta^{3}\left(\sum_{i=1}^{3}\mathbf{p}_{i}\right),$ (69) where $\tilde{P}_{\gamma}$ is the reference power spectrum. Since $r\sim 0.1$ is the largest tensor-to-scalar ratio allowed by the observational data, using the inflaton power spectrum $\tilde{P}_{\zeta}=\tilde{P}_{\zeta}(k_{\mathrm{WMAP}})=6.1\times 10^{-9}$ at $k_{\mathrm{WMAP}}=0.027\,\mathrm{Mpc}^{-1}$, we obtain $\tilde{P}_{\gamma}^{2}=r^{2}\tilde{P}_{\zeta}^{2}\sim 3.7\times 10^{-19}.$ (70) Consider an equilateral triangle $p_{1}=p_{2}=p_{3}=p$ as the momentum configuration. Since the natural scale in the theory is the GUT scale $m_{X}$, we write this momentum as $p=\alpha m_{X}$ with an arbitrary dimensionless parameter $\alpha$. Using the loop momentum cutoff $\Lambda\simeq m_{X}=10^{15}$ GeV, the Hubble scale $H=10^{14}$ GeV, the Planck scale $M_{\mathrm{Pl}}=10^{19}$ GeV and Eq. (70), the function $S\left(p_{1},p_{2},p_{3}\right)$ can be evaluated as $\displaystyle S\left(p_{1},p_{2},p_{3}\right)$ $\displaystyle=$ $\displaystyle\frac{\left(p_{1}p_{2}p_{3}\right)^{2}}{(2\pi)^{7}\tilde{P}_{\gamma}^{2}}\times\frac{SN}{36\cdot 4^{3}\pi}\frac{H^{2}}{M_{\mathrm{Pl}}^{6}(p_{1}p_{2}p_{3})^{3}}\Pi_{abim}(\bm{p}_{1})\Pi_{bcmk}(\bm{p}_{2})\Pi_{acjl}(\bm{p}_{3})$ (71) $\displaystyle\times\frac{1}{210\pi}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\left(2\Lambda\right)^{7}\times 3\times 2^{4}\times 30\times\left(\frac{m_{X}}{H}\right)^{4}$ $\displaystyle\sim$ $\displaystyle\mathcal{O}(10^{-9})\times\frac{1}{\alpha^{3}}.$ In the limit $p_{1}=p_{2}=p_{3}$, the non-Gaussianity $f_{\mathrm{NL}}$ is of the same order as the function $S\left(p_{1},p_{2},p_{3}\right)$ [62], and the non-Gaussianity $f_{\mathrm{NL}}$ for the inflaton three-point function is obtained by multiplying $S\left(p_{1},p_{2},p_{3}\right)$
for the graviton three-point function by $r^{3/2}$: $f_{\mathrm{NL}}\sim\mathcal{O}(10^{-11})\times\frac{1}{\alpha^{3}}.$ (72) Considering a larger GUT group such as $E_{8}$, we can expect the number of $X,Y$ gauge bosons contributing to the diagram to increase, a sixfold enhancement compared to $SU(5)$ in the case of $E_{8}$, for instance. In addition, we used the non-SUSY GUT scale $\simeq 10^{15}$ GeV in the above calculation, but if we consider the SUSY GUT, then $m_{X}$ becomes typically one order of magnitude larger, $10^{16}$ GeV, and the factor $\left(m_{X}/H\right)^{4}$ makes $f_{\mathrm{NL}}$ four orders of magnitude larger.222It is assumed that the order estimate of our results is not significantly changed by the contributions from the superpartners. In such a case, the non-Gaussianity is $f_{\mathrm{NL}}\sim\mathcal{O}(10^{-7})/\alpha^{3}.$ (73) Alternatively, if we consider the string-inspired GUT, $m_{X}$ would typically be a string scale $10^{17}$ GeV, and the factor $\left(m_{X}/H\right)^{4}$ makes $f_{\mathrm{NL}}$ eight orders of magnitude larger333It is assumed that the order estimate of our results is not significantly changed by the contributions from the string modes and so on., which results in $f_{\mathrm{NL}}\sim\mathcal{O}(10^{-3})/\alpha^{3}.$ (74) In order to discuss the detectability of the non-Gaussianity from $X,Y$ gauge bosons, a schematic illustration of current and future constraints on the non-Gaussianity is given in Figure 4. Figure 4: Schematic illustration of current and future constraints on the non-Gaussianity (based on a figure in [63]). The results depend on the parameter $\alpha$, which normalizes the external momentum of the gravitons by $m_{X}$. From the effective theory viewpoint, the external momentum should be smaller than $m_{X}$, namely $\alpha\leq{\mathcal{O}}(1)$.
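The numerical estimates (70) and (72)–(74) follow from simple arithmetic, with the hierarchy between the three cases driven entirely by the $(m_{X}/H)^{4}$ enhancement factor at fixed $H$; a quick check of the quoted scalings:

```python
import math

# Eq. (70): reference tensor power spectrum squared
r, P_zeta = 0.1, 6.1e-9
P_gamma_sq = r**2 * P_zeta**2
assert math.isclose(P_gamma_sq, 3.7e-19, rel_tol=0.02)

# Eqs. (72)-(74): at alpha = 1 and fixed H, f_NL scales as (m_X / H)^4
f_nl_non_susy = 1e-11              # Eq. (72), non-SUSY GUT, m_X = 1e15 GeV
for m_X, expected in [(1e16, 1e-7), (1e17, 1e-3)]:  # SUSY / string-inspired
    f_nl = f_nl_non_susy * (m_X / 1e15)**4
    assert math.isclose(f_nl, expected)              # Eqs. (73), (74)
```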
For the non-SUSY GUT case (72), if the parameter $\alpha$ is in the range ${\mathcal{O}(10^{-(1\sim 2)})}<\alpha<{\mathcal{O}}(1)$, the non-Gaussianity from $X,Y$ gauge bosons may be verified by future CMB experiments, observations of the large-scale structure of the universe, and observations of the 21cm spectrum of neutral hydrogen atoms. For the SUSY GUT case (73), $\alpha\sim{\mathcal{O}}(1)$ implies that the non-Gaussianity from $X,Y$ gauge bosons may be verified by future CMB experiments. As for the string-inspired GUT case (74), our prediction is that it has already been ruled out by the Planck satellite experiments. ## 4 Conclusion Although the GUT is one of the attractive extensions of the SM, it is very hard to detect its signatures directly at colliders since the GUT scale is predicted to be very high. A typical prediction of GUTs is proton decay, but it has not been observed at Super-Kamiokande so far. Recent developments in the precision measurement of the CMB by the Planck satellite and others have opened an alternative avenue toward a possible detection of GUT signatures. If heavy fields of the GUT scale are present during inflation, which is considered to occur typically at an energy scale of order $10^{14}$ GeV, close to the GUT scale, the heavy fields will contribute to the three-point function of the inflaton. In order to evaluate such contributions, we consider an approach based on the effective field theory of inflation. In this paper, we discussed the detectability of the $X,Y$ gauge bosons, which are model-independently present in GUTs. We first calculated the one-loop contributions of the $X,Y$ gauge bosons to the graviton three-point function, based on the in-in formalism. Then, the obtained results were converted to the inflaton three-point function by using information on the power spectrum.
The remarkable feature found in this paper is that the one-loop diagram with the cubic and quartic interactions between the $X,Y$ gauge bosons and the gravitons that originate from the mass terms of the $X,Y$ gauge bosons provides an enhancement factor of order $(m_{X}/H)^{4}$. Due to this enhancement factor, it might be possible to detect the non-Gaussianity from $X,Y$ gauge bosons, depending on the magnitude of the external momentum of the graviton. For the non-SUSY GUT case, if the parameter $\alpha$ is in the range ${\mathcal{O}(10^{-(1\sim 2)})}<\alpha<{\mathcal{O}}(1)$, the non-Gaussianity from $X,Y$ gauge bosons may be verified by future CMB experiments, observations of the large-scale structure of the universe, and observations of the 21cm spectrum of neutral hydrogen atoms. For the SUSY GUT case, $\alpha\sim{\mathcal{O}}(1)$ implies that the non-Gaussianity from $X,Y$ gauge bosons may be verified by future CMB experiments. As for the string-inspired GUT case, our prediction is that it has already been ruled out by the Planck satellite experiments. ## Acknowledgments This work is supported in part by JSPS KAKENHI Grant Number JP17K05420 (N.M.). ## Appendix A Details of calculation In this appendix, we give a detailed calculation of the one-loop diagrams for the graviton three-point function. The diagrams we should calculate are classified into two categories: one consists of the diagrams with three-point and four-point interactions from the derivative terms of the $X,Y$ gauge bosons, the other of those from their mass terms. We first discuss the former, which is calculated from the sum of $J$ defined in the main text and its complex conjugate.
$\displaystyle J=$ $\displaystyle SN\int_{-(1+i\epsilon)\infty}^{0}a(\tau)\,d\tau\int_{-(1+i\epsilon)\infty}^{0}a(\tau^{\prime})\,d\tau^{\prime}\int\frac{d^{3}k}{(2\pi)^{3}}i\cdot(-i)\frac{1}{2}a(\tau)\left(-a(\tau^{\prime})\right)$ $\displaystyle\times G_{++}^{ab,im}(\tau,0,\bm{p}_{1})G_{++}^{bc,mk}(\tau,0,\bm{p}_{2})G_{++}^{ac,jl}(\tau^{\prime},0,\bm{p}_{3})$ $\displaystyle\times\frac{1}{3}\sum_{\lambda}\Delta_{00+-}^{(\lambda)}(\tau,\tau^{\prime},k)\cdot\left(-ik_{i}\right)\left(-ik_{j}\right)$ $\displaystyle\times\frac{1}{3}\sum_{\lambda^{\prime}}\Delta_{00+-}^{(\lambda^{\prime})}\left(\tau,\tau^{\prime},\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\right)\cdot\left(-i\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{k}\right)\left(-i\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{l}\right).$ (75) Using the graviton propagator [61] $\left(G_{>}\right)_{ij,kl}(\tau_{1},\tau_{2},\bm{k})=\frac{H^{2}}{M_{\mathrm{Pl}}^{2}k^{3}}\left(1+ik\tau_{1}\right)\left(1-ik\tau_{2}\right)e^{-ik\left(\tau_{1}-\tau_{2}\right)}\sum_{\alpha}e^{\alpha}_{ij}(\bm{k})e^{\alpha*}_{kl}(\bm{k})$ (76) and the $X,Y$ gauge boson propagators $\displaystyle\Delta^{(\lambda)\mu\nu}_{>}\left(\eta_{1},\eta_{2},k\right)=-\frac{\pi}{4}e^{-\pi\mathrm{Im}(\nu_{A})}\left(\eta_{1}\eta_{2}\right)^{1/2}H_{\nu_{A}}^{(1)}(-k\eta_{1})H_{\nu_{A}}^{(2)}(-k\eta_{2})\varepsilon^{(\lambda)\mu}(k)\varepsilon^{(\lambda)\nu}(k),$ (77) $\displaystyle\Delta^{(\lambda)\mu\nu}_{<}\left(\eta_{1},\eta_{2},k\right)=-\frac{\pi}{4}e^{-\pi\mathrm{Im}(\nu_{A})}\left(\eta_{1}\eta_{2}\right)^{1/2}H_{\nu_{A}}^{(1)}(-k\eta_{2})H_{\nu_{A}}^{(2)}(-k\eta_{1})\varepsilon^{(\lambda)\nu}(k)\varepsilon^{(\lambda)\mu}(k),$ (78) we obtain $\displaystyle J+\text{c.c.}=$ $\displaystyle 2\mathrm{Re}\left[\frac{SN}{18}\int_{-(1+i\epsilon)\infty}^{0}\,d\tau\int_{-(1+i\epsilon)\infty}^{0}\,d\tau^{\prime}\int\frac{d^{3}k}{(2\pi)^{3}}\left(-\frac{1}{H\tau}\right)^{2}\left(-\frac{1}{H\tau^{\prime}}\right)^{2}\right.$ 
$\displaystyle\times\left[\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{1}^{3}}\left(1+ip_{1}\tau\right)e^{-ip_{1}\tau}\sum_{\alpha}e^{\alpha}_{ab}(\bm{p}_{1})e^{\alpha*}_{im}(\bm{p}_{1})\theta(\tau)\right.$ $\displaystyle\left.\quad+\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{1}^{3}}\left(1-ip_{1}\tau\right)e^{ip_{1}\tau}\sum_{\alpha}e^{\alpha*}_{ab}(\bm{p}_{1})e^{\alpha}_{im}(\bm{p}_{1})\theta(-\tau)\right]$ $\displaystyle\times\left[\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{2}^{3}}\left(1+ip_{2}\tau\right)e^{-ip_{2}\tau}\sum_{\beta}e^{\beta}_{bc}(\bm{p}_{2})e^{\alpha*}_{mk}(\bm{p}_{2})\theta(\tau)\right.$ $\displaystyle\left.\quad+\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{2}^{3}}\left(1-ip_{2}\tau\right)e^{ip_{2}\tau}\sum_{\beta}e^{\beta*}_{bc}(\bm{p}_{2})e^{\alpha}_{mk}(\bm{p}_{2})\theta(-\tau)\right]$ $\displaystyle\times\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{3}^{3}}\left(1+ip_{3}\tau^{\prime}\right)e^{-ip_{3}\tau^{\prime}}\sum_{\gamma}e^{\gamma}_{ac}(\bm{p}_{3})e^{\alpha*}_{jl}(\bm{p}_{3})$ $\displaystyle\times\sum_{\lambda}\left(-\frac{\pi}{4}\right)e^{-\pi\mathrm{Im}(\nu_{A})}(\tau\tau^{\prime})^{1/2}H^{(1)}_{\nu_{A}}(-k\tau)H^{(2)}_{\nu_{A}}(-k\tau^{\prime})\varepsilon^{(\lambda)}_{0}(\bm{k})\varepsilon^{(\lambda)}_{0}(\bm{k})$ $\displaystyle\times\sum_{\lambda^{\prime}}\left(-\frac{\pi}{4}\right)e^{-\pi\mathrm{Im}(\nu_{A})}(\tau\tau^{\prime})^{1/2}H^{(1)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau)H^{(2)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau^{\prime})$ $\displaystyle\times\varepsilon^{(\lambda^{\prime})}_{0}(\bm{k}-\bm{p}_{1}-\bm{p}_{2})\varepsilon^{(\lambda^{\prime})}_{0}(\bm{k}-\bm{p}_{1}-\bm{p}_{2})$ $\displaystyle\times\left.k_{i}k_{j}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{k}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{l}\right].$ (79) Introducing an expression for the sum of projections for the graviton polarization tensor as $\Pi_{ijkl}(\bm{k})\equiv\sum_{\alpha}e^{\alpha}_{ij}(\bm{k})e^{\alpha*}_{kl}(\bm{k}),$ (80) the equation (A) is written as 
$\displaystyle J+\text{c.c.}=$ $\displaystyle-2\mathrm{Re}\left[(-i)^{2}\frac{SNe^{-2\pi\mathrm{Im}(\nu_{A})}}{36\cdot 4^{3}\pi}\frac{H^{2}}{M_{\mathrm{Pl}}^{6}\left(p_{1}p_{2}p_{3}\right)^{3}}\Pi_{abim}(\bm{p}_{1})\Pi_{bcmk}(\bm{p}_{2})\Pi_{acjl}(\bm{p}_{3})\right.$ $\displaystyle\times\int_{-(1+i\epsilon)\infty}^{0}\,d\tau\int_{-(1+i\epsilon)\infty}^{0}\,d\tau^{\prime}\frac{1}{\tau\tau^{\prime}}\left(1-ip_{1}\tau\right)\left(1-ip_{2}\tau\right)\left(1+ip_{3}\tau^{\prime}\right)e^{i(p_{1}+p_{2})\tau}e^{-ip_{3}\tau^{\prime}}$ $\displaystyle\times\int d^{3}k\,H^{(1)}_{\nu_{A}}(-k\tau)H^{(2)}_{\nu_{A}}(-k\tau^{\prime})H^{(1)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau)H^{(2)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau^{\prime})$ $\displaystyle\times\left.k_{i}k_{j}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{k}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{l}\right].$ (81) Approximating the Hankel functions by taking a limit $x\to\infty$ as $H^{(1)}_{\nu_{A}}(x)\to\sqrt{\frac{2}{\pi x}}e^{i\left(x-\pi/4-\pi\nu_{A}/2\right)},\quad H^{(2)}_{\nu_{A}}(x)\to\sqrt{\frac{2}{\pi x}}e^{-i\left(x-\pi/4-\pi\nu^{*}_{A}/2\right)},$ (82) since the large momentum part is dominant in the momentum integral, the momentum integral can be expressed as $\displaystyle\int d^{3}k\,H^{(1)}_{\nu_{A}}(-k\tau)H^{(2)}_{\nu_{A}}(-k\tau^{\prime})H^{(1)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau)H^{(2)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau^{\prime})$ $\displaystyle\times k_{i}k_{j}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{k}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{l}$ $\displaystyle=$ $\displaystyle\frac{16e^{2\pi\mathrm{Im}(\nu_{A})}}{\pi\tau\tau^{\prime}}\int_{0}^{\Lambda}dk\frac{k}{\left|\bm{k}+\bm{p}_{3}\right|}e^{-ik(\tau-\tau^{\prime})}e^{-i\left|\bm{k}+\bm{p}_{3}\right|(\tau-\tau^{\prime})}k_{i}k_{j}(k_{k}k_{l}+p_{3k}p_{3l})$ (83) where $\Lambda$ is the cutoff scale of the loop momentum and the momentum integral of the odd function 
in $k_{i}$ is dropped. Furthermore, by using the angular-average relations for the internal momentum $\displaystyle k_{i}k_{j}k_{k}k_{l}$ $\displaystyle=\frac{k^{4}}{15}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right),$ (84) $\displaystyle k_{i}k_{j}$ $\displaystyle=\frac{k^{2}}{3}\delta_{ij},$ (85) we obtain $\displaystyle\frac{16e^{2\pi\mathrm{Im}(\nu_{A})}}{\pi\tau\tau^{\prime}}\int_{0}^{\Lambda}dk\frac{k}{\left|\bm{k}+\bm{p}_{3}\right|}e^{-ik(\tau-\tau^{\prime})}e^{-i\left|\bm{k}+\bm{p}_{3}\right|(\tau-\tau^{\prime})}k_{i}k_{j}(k_{k}k_{l}+p_{3k}p_{3l})$ $\displaystyle=$ $\displaystyle\frac{16e^{2\pi\mathrm{Im}(\nu_{A})}}{\pi\tau\tau^{\prime}}\int_{0}^{\Lambda}dk\frac{k}{\left|\bm{k}+\bm{p}_{3}\right|}e^{-ik(\tau-\tau^{\prime})}e^{-i\left|\bm{k}+\bm{p}_{3}\right|(\tau-\tau^{\prime})}$ $\displaystyle\qquad\times\left[\frac{k^{4}}{15}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)+\frac{k^{2}}{3}\delta_{ij}p_{3k}p_{3l}\right].$ (86) Assuming that the loop momentum cutoff $\Lambda$ is sufficiently larger than the graviton external momenta $\bm{p}_{1,2,3}$, we can approximate $\left|\bm{k}+\bm{p}_{3}\right|$ as $\left|\bm{k}+\bm{p}_{3}\right|\simeq k\left[1+\frac{p_{3}\cos\theta}{k}+\mathcal{O}\left(\frac{p_{3}^{2}}{\bm{k}^{2}}\right)\right],$ (87) where $\theta$ is the angle between $\bm{k}$ and $\bm{p}_{3}$.
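Two of the approximations used in this appendix can be checked numerically: the large-argument Hankel asymptotics (82) and the expansion (87) of $\left|\bm{k}+\bm{p}_{3}\right|$ for $k\gg p_{3}$ (all numerical values below are illustrative):

```python
import numpy as np
from scipy.special import hankel1

# Eq. (82): H^(1)_nu(x) -> sqrt(2/(pi x)) exp(i(x - pi/4 - pi nu/2)) as x -> inf
nu, x = 0.4, 200.0
asym = np.sqrt(2 / (np.pi * x)) * np.exp(1j * (x - np.pi / 4 - np.pi * nu / 2))
assert abs(hankel1(nu, x) - asym) < 1e-2 * abs(asym)   # relative error is O(1/x)

# Eq. (87): |k + p3| ~ k (1 + p3 cos(theta)/k), valid up to O(p3^2 / k^2)
kvec = np.array([0.0, 0.0, 100.0])
p3 = np.array([0.3, 0.0, 0.4])
k, p3n = np.linalg.norm(kvec), np.linalg.norm(p3)
cos_theta = kvec @ p3 / (k * p3n)
exact = np.linalg.norm(kvec + p3)
approx = k * (1 + p3n * cos_theta / k)
assert abs(exact - approx) < p3n**2 / k                # neglected term is O(p3^2/k)
```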
From this approximation, the $\tau^{\prime}$-integral can be performed, $\displaystyle\int_{-(1+i\epsilon)\infty}^{0}d\tau^{\prime}\frac{1}{\tau^{\prime 2}}(1+ip_{3}\tau^{\prime})e^{-i(p_{3}-k)\tau^{\prime}}e^{i\left|\bm{k}+\bm{p}_{3}\right|\tau^{\prime}}+\text{c.c.}$ $\displaystyle=$ $\displaystyle\mathrm{Re}\left[-i\int_{-(1+i\epsilon)\infty}^{0}d\tau^{\prime}\frac{1}{\tau^{\prime 2}}(1+ip_{3}\tau^{\prime})e^{-i(p_{3}-k)\tau^{\prime}}e^{ik\left(1+p_{3}\cos\theta/k\right)\tau^{\prime}}\right]$ $\displaystyle=$ $\displaystyle-2k,$ (88) where the formula [57] $\mathrm{Re}\left[-i\int_{-(1+i\epsilon)\infty}^{0}d\tau^{\prime}\frac{1}{\tau^{\prime 2}}(1-iK\tau^{\prime})e^{-iK\tau^{\prime}}\right]=-K$ (89) was utilized. Calculating the $\tau$-integral in $J+\text{c.c.}$ in the same way, we obtain $\displaystyle\mathrm{Re}\left[-i\int_{-(1+i\epsilon)\infty}^{0}d\tau\frac{1}{\tau^{2}}(1-ip_{1}\tau)(1-ip_{2}\tau)e^{-i(k-p_{1}-p_{2})\tau}e^{-i\left|\bm{k}+\bm{p}_{3}\right|\tau}\right]$ $\displaystyle=$ $\displaystyle-\left(2k-p_{1}-p_{2}-\frac{p_{1}p_{2}}{2k-p_{1}-p_{2}}\right).$ (90) Putting these together, the equation (A) except for the first line becomes $\displaystyle\frac{16e^{2\pi\mathrm{Im}(\nu_{A})}}{\pi}\mathrm{Re}\left[(-i)^{2}\int_{-(1+i\epsilon)\infty}^{0}\,d\tau\int_{-(1+i\epsilon)\infty}^{0}\,d\tau^{\prime}\frac{1}{\tau\tau^{\prime}}\left(1-ip_{1}\tau\right)\left(1-ip_{2}\tau\right)\left(1+ip_{3}\tau^{\prime}\right)e^{i(p_{1}+p_{2})\tau}e^{-ip_{3}\tau^{\prime}}\right.$ $\displaystyle\times\int d^{3}k\,H^{(1)}_{\nu_{A}}(-k\tau)H^{(2)}_{\nu_{A}}(-k\tau^{\prime})H^{(1)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau)H^{(2)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau^{\prime})$ $\displaystyle\times\left.k_{i}k_{j}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{k}\left(\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right)_{l}\right]$ $\displaystyle=$
$\displaystyle\frac{16e^{2\pi\mathrm{Im}(\nu_{A})}}{\pi}\int_{0}^{\Lambda}dk\,\frac{k}{\left|\bm{k}+\bm{p}_{3}\right|}\left[\frac{k^{4}}{15}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)+\frac{k^{2}}{3}\delta_{ij}p_{3k}p_{3l}\right]\times(-2k)$ $\displaystyle\times\left[-\left(2k-p_{1}-p_{2}-\frac{p_{1}p_{2}}{2k-p_{1}-p_{2}}\right)\right]$ (91) $\displaystyle\simeq$ $\displaystyle\frac{16e^{2\pi\mathrm{Im}(\nu_{A})}}{\pi}\int_{0}^{\Lambda}dk\,\frac{k^{4}}{15}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\times(-2k)\times(-1)\left(2k-p_{1}-p_{2}\right)$ (92) $\displaystyle\simeq$ $\displaystyle\frac{e^{2\pi\mathrm{Im}(\nu_{A})}}{210\pi}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\left(2\Lambda-p_{1}-p_{2}\right)^{7}.$ (93) Hence, the sum of $J$ and its complex conjugate results in $\displaystyle J+\text{c.c.}=$ $\displaystyle SN\frac{1}{36\cdot 4^{3}\pi}\frac{H^{2}}{M_{\mathrm{Pl}}^{6}(p_{1}p_{2}p_{3})^{3}}\Pi_{abim}(\bm{p}_{1})\Pi_{bcmk}(\bm{p}_{2})\Pi_{acjl}(\bm{p}_{3})$ $\displaystyle\times\frac{1}{210\pi}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\left(2\Lambda-p_{1}-p_{2}\right)^{7}\times 2.$ (94) Note that there are still some diagrams in the first category with the other three-point and four-point interactions in (58) and (59), which we should also calculate. We can easily check that the contributions from these diagrams are roughly of the same order as the result (94). Next, we discuss the other diagram, from the latter category, which will be denoted by $J_{m}$. As in the previous calculation of $J$, we calculate the sum of $J_{m}$ and its complex conjugate.
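The step from (92) to (93) can be verified symbolically at leading order in $\Lambda$: with the $16/(15\pi)$ prefactor, the integral $\int_{0}^{\Lambda}dk\,k^{4}\cdot 2k\cdot 2k$ reproduces the coefficient of $(2\Lambda)^{7}/(210\pi)$ exactly (the subleading $p_{1}+p_{2}$ terms agree only up to the "$\simeq$"). A sketch using sympy, with the common $\delta$-tensor and $e^{2\pi\mathrm{Im}(\nu_{A})}$ factors stripped off:

```python
import sympy as sp

k, Lam = sp.symbols('k Lambda', positive=True)
# leading (p1 = p2 = 0) part of the k-integral in Eq. (92)
lhs = sp.Rational(16, 15) / sp.pi * sp.integrate(k**4 * 2 * k * 2 * k, (k, 0, Lam))
rhs = (2 * Lam)**7 / (210 * sp.pi)   # leading term of Eq. (93)
assert sp.simplify(lhs - rhs) == 0   # both equal 64 Lambda^7 / (105 pi)
```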
$\displaystyle J_{m}+\text{c.c.}=$ $\displaystyle 2\text{Re}\left[SN\int_{-(1+i\epsilon)\infty}^{0}a(\tau)\,d\tau\int_{-(1+i\epsilon)\infty}^{0}a(\tau^{\prime})\,d\tau^{\prime}\int\frac{d^{3}k}{(2\pi)^{3}}i\cdot(-i)\frac{1}{2}a(\tau)\left(-a(\tau^{\prime})\right)m_{X}^{4}\right.$ $\displaystyle\times G_{++}^{ab,im}(\tau,0,\bm{p}_{1})G_{++}^{bc,mk}(\tau,0,\bm{p}_{2})G_{-+}^{ac,jl}(\tau^{\prime},0,\bm{p}_{3})$ $\displaystyle\times\frac{1}{3}\sum_{\lambda}\Delta_{ij+-}^{(\lambda)}(\tau,\tau^{\prime},k)\times\left.\frac{1}{3}\sum_{\lambda^{\prime}}\Delta_{kl+-}^{(\lambda^{\prime})}\left(\tau,\tau^{\prime},\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\right)\right].$ (95) Substituting the graviton propagator (76) and the $X,Y$ gauge boson propagators (77), (78) into (A), we obtain $\displaystyle J_{m}+\text{c.c.}=$ $\displaystyle 2\text{Re}\left[(-i)^{2}\frac{SN}{36}m_{X}^{4}\int_{-(1+i\epsilon)\infty}^{0}\,d\tau\int_{-(1+i\epsilon)\infty}^{0}\,d\tau^{\prime}\int\frac{d^{3}k}{(2\pi)^{3}}\left(-\frac{1}{H\tau}\right)^{2}\left(-\frac{1}{H\tau^{\prime}}\right)^{2}\right.$ $\displaystyle\times\left[\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{1}^{3}}\left(1+ip_{1}\tau\right)e^{-ip_{1}\tau}\sum_{\alpha}e^{\alpha}_{ab}(\bm{p}_{1})e^{\alpha*}_{im}(\bm{p}_{1})\theta(\tau)\right.$ $\displaystyle\left.\quad+\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{1}^{3}}\left(1-ip_{1}\tau\right)e^{ip_{1}\tau}\sum_{\alpha}e^{\alpha*}_{ab}(\bm{p}_{1})e^{\alpha}_{im}(\bm{p}_{1})\theta(-\tau)\right]$ $\displaystyle\times\left[\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{2}^{3}}\left(1+ip_{2}\tau\right)e^{-ip_{2}\tau}\sum_{\beta}e^{\beta}_{bc}(\bm{p}_{2})e^{\beta*}_{mk}(\bm{p}_{2})\theta(\tau)\right.$ $\displaystyle\left.\quad+\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{2}^{3}}\left(1-ip_{2}\tau\right)e^{ip_{2}\tau}\sum_{\beta}e^{\beta*}_{bc}(\bm{p}_{2})e^{\beta}_{mk}(\bm{p}_{2})\theta(-\tau)\right]$
$\displaystyle\times\frac{H^{2}}{M_{\mathrm{Pl}}^{2}p_{3}^{3}}\left(1+ip_{3}\tau^{\prime}\right)e^{-ip_{3}\tau^{\prime}}\sum_{\gamma}e^{\gamma}_{ac}(\bm{p}_{3})e^{\gamma*}_{jl}(\bm{p}_{3})$ $\displaystyle\times\sum_{\lambda}\left(-\frac{\pi}{4}\right)e^{-\pi\mathrm{Im}(\nu_{A})}(\tau\tau^{\prime})^{1/2}H^{(1)}_{\nu_{A}}(-k\tau)H^{(2)}_{\nu_{A}}(-k\tau^{\prime})\varepsilon^{(\lambda)}_{i}(\bm{k})\varepsilon^{(\lambda)}_{j}(\bm{k})$ $\displaystyle\times\sum_{\lambda^{\prime}}\left(-\frac{\pi}{4}\right)e^{-\pi\mathrm{Im}(\nu_{A})}(\tau\tau^{\prime})^{1/2}H^{(1)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau)H^{(2)}_{\nu_{A}}(-\left|\bm{k}-\bm{p}_{1}-\bm{p}_{2}\right|\tau^{\prime})$ $\displaystyle\times\left.\varepsilon^{(\lambda^{\prime})}_{k}(\bm{k}-\bm{p}_{1}-\bm{p}_{2})\varepsilon^{(\lambda^{\prime})}_{l}(\bm{k}-\bm{p}_{1}-\bm{p}_{2})\right].$ (96) Using the approximation for the Hankel functions and the quantity $\Pi_{abcd}$, we can further rewrite the above expression as $\displaystyle J_{m}+\text{c.c.}\simeq$ $\displaystyle 2\text{Re}\left[-SN\frac{1}{32\cdot 3^{2}\pi^{3}}\frac{H^{2}m_{X}^{4}}{M_{\mathrm{Pl}}^{6}\left(p_{1}p_{2}p_{3}\right)^{3}}\Pi_{abim}(\bm{p}_{1})\Pi_{bcmk}(\bm{p}_{2})\Pi_{acjl}(\bm{p}_{3})\right.$ $\displaystyle\times\int_{-(1+i\epsilon)\infty}^{0}\,d\tau\int_{-(1+i\epsilon)\infty}^{0}\,d\tau^{\prime}\frac{1}{(\tau\tau^{\prime})^{2}}\left(1-ip_{1}\tau\right)\left(1-ip_{2}\tau\right)\left(1+ip_{3}\tau^{\prime}\right)$ $\displaystyle\times\int d^{3}k\frac{1}{k^{2}}\sum_{\lambda,\lambda^{\prime}}\varepsilon_{i}^{(\lambda)}(\bm{k})\varepsilon_{j}^{(\lambda)}(\bm{k})\varepsilon_{k}^{(\lambda^{\prime})}(\bm{k}-\bm{p}_{1}-\bm{p}_{2})\varepsilon_{l}^{(\lambda^{\prime})}(\bm{k}-\bm{p}_{1}-\bm{p}_{2})$ $\displaystyle\times\left.e^{-i(k-p_{1}-p_{2})\tau}e^{-i|\bm{k}-\bm{p}_{1}-\bm{p}_{2}|\tau}e^{-i(-k+p_{3})\tau^{\prime}}e^{-i|\bm{k}-\bm{p}_{1}-\bm{p}_{2}|\tau^{\prime}}\right]$ $\displaystyle\simeq$ $\displaystyle
2\text{Re}\left[-SN\frac{1}{8\cdot 3^{2}\pi^{2}}\frac{H^{2}m_{X}^{4}}{M_{\mathrm{Pl}}^{6}\left(p_{1}p_{2}p_{3}\right)^{3}}\Pi_{abim}(\bm{p}_{1})\Pi_{bcmk}(\bm{p}_{2})\Pi_{acik}(\bm{p}_{3})\right.$ $\displaystyle\times\int_{-(1+i\epsilon)\infty}^{0}\,d\tau\int_{-(1+i\epsilon)\infty}^{0}\,d\tau^{\prime}\frac{1}{(\tau\tau^{\prime})^{2}}\left(1-ip_{1}\tau\right)\left(1-ip_{2}\tau\right)\left(1+ip_{3}\tau^{\prime}\right)$ $\displaystyle\times\left.\int_{0}^{\Lambda}dk\frac{1}{(H\tau)^{4}}e^{-i(k-p_{1}-p_{2})\tau}e^{-i(k+p_{3}\cos\theta)\tau}e^{-i(-k+p_{3})\tau^{\prime}}e^{-i(k+p_{3}\cos\theta)\tau^{\prime}}\right]$ $\displaystyle=$ $\displaystyle J\times 2^{4}\times 30\times\left(\frac{m_{X}}{H}\right)^{4}.$ (97) In the second approximation, the polarization sum for the massive gauge boson is approximated as $\displaystyle\sum_{\lambda}\varepsilon_{i}^{(\lambda)}(\bm{k})\varepsilon_{j}^{(\lambda)}(\bm{k})=a(\tau)^{2}\delta_{ij}+\frac{k_{i}k_{j}}{m_{X}^{2}}=\frac{\delta_{ij}}{(H\tau)^{2}}+\frac{k_{i}k_{j}}{m_{X}^{2}}\simeq\frac{\delta_{ij}}{(H\tau)^{2}}$ (98) by taking into account the assumption $H^{2}\ll m_{X}^{2}$. The assumption that the cutoff $\Lambda$ is much larger than the external graviton momenta $\bm{p}_{1,2,3}$ is also understood in the momentum integral. ## References * [1] X. Chen and Y. Wang, “Large non-Gaussianities with Intermediate Shapes from Quasi-Single Field Inflation,” Phys. Rev. D 81, 063511 (2010) [arXiv:0909.0496 [astro-ph.CO]]. * [2] X. Chen and Y. Wang, “Quasi-Single Field Inflation and Non-Gaussianities,” JCAP 04, 027 (2010) [arXiv:0911.3380 [hep-th]]. * [3] D. Baumann and D. Green, “Signatures of Supersymmetry from the Early Universe,” Phys. Rev. D 85, 103520 (2012) [arXiv:1109.0292 [hep-th]]. * [4] V. Assassi, D. Baumann and D. Green, “On Soft Limits of Inflationary Correlation Functions,” JCAP 11, 047 (2012) [arXiv:1204.4207 [hep-th]]. * [5] E. Sefusatti, J. R. Fergusson, X. Chen and E. P. S.
Shellard, “Effects and Detectability of Quasi-Single Field Inflation in the Large-Scale Structure and Cosmic Microwave Background,” JCAP 08, 033 (2012) [arXiv:1204.6318 [astro-ph.CO]]. * [6] J. Norena, L. Verde, G. Barenboim and C. Bosch, “Prospects for constraining the shape of non-Gaussianity with the scale-dependent bias,” JCAP 08, 019 (2012) [arXiv:1204.6324 [astro-ph.CO]]. * [7] X. Chen and Y. Wang, “Quasi-Single Field Inflation with Large Mass,” JCAP 09, 021 (2012) [arXiv:1205.0160 [hep-th]]. * [8] T. Noumi, M. Yamaguchi and D. Yokoyama, “Effective field theory approach to quasi-single field inflation and effects of heavy fields,” JHEP 06, 051 (2013) [arXiv:1211.1624 [hep-th]]. * [9] J. O. Gong, S. Pi and M. Sasaki, “Equilateral non-Gaussianity from heavy fields,” JCAP 11, 043 (2013) [arXiv:1306.3691 [hep-th]]. * [10] R. Emami, “Spectroscopy of Masses and Couplings during Inflation,” JCAP 04, 031 (2014) [arXiv:1311.0184 [hep-th]]. * [11] A. Kehagias and A. Riotto, “High Energy Physics Signatures from Inflation and Conformal Symmetry of de Sitter,” Fortsch. Phys. 63, 531 (2015) [arXiv:1501.03515 [hep-th]]. * [12] J. Liu, Y. Wang and S. Zhou, “Inflation with Massive Vector Fields,” JCAP 08, 033 (2015) [arXiv:1502.05138 [hep-th]]. * [13] N. Arkani-Hamed and J. Maldacena, “Cosmological Collider Physics,” arXiv:1503.08043 [hep-th]. * [14] E. Dimastrogiovanni, M. Fasiello and M. Kamionkowski, “Imprints of Massive Primordial Fields on Large-Scale Structure,” JCAP 02, 017 (2016) [arXiv:1504.05993 [astro-ph.CO]]. * [15] F. Schmidt, N. E. Chisari and C. Dvorkin, “Imprint of inflation on galaxy shape correlations,” JCAP 10, 032 (2015) [arXiv:1506.02671 [astro-ph.CO]]. * [16] X. Chen, M. H. Namjoo and Y. Wang, “Quantum Primordial Standard Clocks,” JCAP 02, 013 (2016) [arXiv:1509.03930 [astro-ph.CO]]. * [17] L. V. Delacretaz, T. Noumi and L. Senatore, “Boost Breaking in the EFT of Inflation,” JCAP 02, 034 (2017) [arXiv:1512.04100 [hep-th]]. * [18] B. Bonga, S.
Brahma, A. S. Deutsch and S. Shandera, “Cosmic variance in inflation with two light scalars,” JCAP 05, 018 (2016) [arXiv:1512.05365 [astro-ph.CO]]. * [19] R. Flauger, M. Mirbabayi, L. Senatore and E. Silverstein, “Productive Interactions: heavy particles and non-Gaussianity,” JCAP 10, 058 (2017) [arXiv:1606.00513 [hep-th]]. * [20] H. Lee, D. Baumann and G. L. Pimentel, “Non-Gaussianity as a Particle Detector,” JHEP 12, 040 (2016) [arXiv:1607.03735 [hep-th]]. * [21] L. V. Delacretaz, V. Gorbenko and L. Senatore, “The Supersymmetric Effective Field Theory of Inflation,” JHEP 03, 063 (2017) [arXiv:1610.04227 [hep-th]]. * [22] P. D. Meerburg, M. Münchmeyer, J. B. Muñoz and X. Chen, “Prospects for Cosmological Collider Physics,” JCAP 03, 050 (2017) [arXiv:1610.06559 [astro-ph.CO]]. * [23] X. Chen, Y. Wang and Z. Z. Xianyu, “Standard Model Background of the Cosmological Collider,” Phys. Rev. Lett. 118, no.26, 261302 (2017) [arXiv:1610.06597 [hep-th]]. * [24] X. Chen, Y. Wang and Z. Z. Xianyu, “Standard Model Mass Spectrum in Inflationary Universe,” JHEP 04, 058 (2017) [arXiv:1612.08122 [hep-th]]. * [25] A. Kehagias and A. Riotto, “On the Inflationary Perturbations of Massive Higher-Spin Fields,” JCAP 07, 046 (2017) [arXiv:1705.05834 [hep-th]]. * [26] H. An, M. McAneny, A. K. Ridgway and M. B. Wise, “Quasi Single Field Inflation in the non-perturbative regime,” JHEP 06, 105 (2018) [arXiv:1706.09971 [hep-ph]]. * [27] X. Tong, Y. Wang and S. Zhou, “On the Effective Field Theory for Quasi-Single Field Inflation,” JCAP 11, 045 (2017) [arXiv:1708.01709 [astro-ph.CO]]. * [28] A. V. Iyer, S. Pi, Y. Wang, Z. Wang and S. Zhou, “Strongly Coupled Quasi-Single Field Inflation,” JCAP 01, 041 (2018) [arXiv:1710.03054 [hep-th]]. * [29] H. An, M. McAneny, A. K. Ridgway and M. B. Wise, “Non-Gaussian Enhancements of Galactic Halo Correlations in Quasi-Single Field Inflation,” Phys. Rev. D97, no.12, 123528 (2018) [arXiv:1711.02667 [hep-ph]]. * [30] S. Kumar and R.
Sundrum, “Heavy-Lifting of Gauge Theories By Cosmic Inflation,” JHEP 05, 011 (2018) [arXiv:1711.03988 [hep-ph]]. * [31] S. Riquelme M., “Non-Gaussianities in a two-field generalization of Natural Inflation,” JCAP 04, 027 (2018) [arXiv:1711.08549 [astro-ph.CO]]. * [32] G. Franciolini, A. Kehagias and A. Riotto, “Imprints of Spinning Particles on Primordial Cosmological Perturbations,” JCAP 02, 023 (2018) [arXiv:1712.06626 [hep-th]]. * [33] R. Saito and T. Kubota, “Heavy Particle Signatures in Cosmological Correlation Functions with Tensor Modes,” JCAP 06, 009 (2018) [arXiv:1804.06974 [hep-th]]. * [34] G. Cabass, E. Pajer and F. Schmidt, “Imprints of Oscillatory Bispectra on Galaxy Clustering,” JCAP 09, 003 (2018) [arXiv:1804.07295 [astro-ph.CO]]. * [35] Y. Wang, Y. P. Wu, J. Yokoyama and S. Zhou, “Hybrid Quasi-Single Field Inflation,” JCAP 07, 068 (2018) [arXiv:1804.07541 [astro-ph.CO]]. * [36] X. Chen, Y. Wang and Z. Z. Xianyu, “Neutrino Signatures in Primordial Non-Gaussianities,” JHEP 09, 022 (2018) [arXiv:1805.02656 [hep-ph]]. * [37] E. Dimastrogiovanni, M. Fasiello and G. Tasinato, “Probing the inflationary particle content: extra spin-2 field,” JCAP 08, 016 (2018) [arXiv:1806.00850 [astro-ph.CO]]. * [38] L. Bordin, P. Creminelli, A. Khmelnitsky and L. Senatore, “Light Particles with Spin in Inflation,” JCAP 10, 013 (2018) [arXiv:1806.10587 [hep-th]]. * [39] W. Z. Chua, Q. Ding, Y. Wang and S. Zhou, “Imprints of Schwinger Effect on Primordial Spectra,” JHEP 04, 066 (2019) [arXiv:1810.09815 [hep-th]]. * [40] N. Arkani-Hamed, D. Baumann, H. Lee and G. L. Pimentel, “The Cosmological Bootstrap: Inflationary Correlators from Symmetries and Singularities,” JHEP 04, 105 (2020) [arXiv:1811.00024 [hep-th]]. * [41] S. Kumar and R. Sundrum, “Seeing Higher-Dimensional Grand Unification In Primordial Non-Gaussianities,” JHEP 04, 120 (2019) [arXiv:1811.11200 [hep-ph]]. * [42] G. Goon, K. Hinterbichler, A. Joyce and M. 
Trodden, “Shapes of gravity: Tensor non-Gaussianity and massive spin-2 fields,” JHEP 10, 182 (2019) [arXiv:1812.07571 [hep-th]]. * [43] Y. P. Wu, “Higgs as heavy-lifted physics during inflation,” JHEP 04, 125 (2019) [arXiv:1812.10654 [hep-ph]]. * [44] D. Anninos, V. De Luca, G. Franciolini, A. Kehagias and A. Riotto, “Cosmological Shapes of Higher-Spin Gravity,” JCAP 04, 045 (2019) [arXiv:1902.01251 [hep-th]]. * [45] L. Li, T. Nakama, C. M. Sou, Y. Wang and S. Zhou, “Gravitational Production of Superheavy Dark Matter and Associated Cosmological Signatures,” JHEP 07, 067 (2019) [arXiv:1903.08842 [astro-ph.CO]]. * [46] M. McAneny and A. K. Ridgway, “New Shapes of Primordial Non-Gaussianity from Quasi-Single Field Inflation with Multiple Isocurvatons,” Phys. Rev. D100, no.4, 043534 (2019) [arXiv:1903.11607 [astro-ph.CO]]. * [47] S. Kim, T. Noumi, K. Takeuchi and S. Zhou, “Heavy Spinning Particles from Signs of Primordial Non-Gaussianities: Beyond the Positivity Bounds,” JHEP 12, 107 (2019) [arXiv:1906.11840 [hep-th]]. * [48] S. Lu, Y. Wang and Z. Z. Xianyu, “A Cosmological Higgs Collider,” JHEP 02, 011 (2020) [arXiv:1907.07390 [hep-th]]. * [49] A. Hook, J. Huang and D. Racco, “Searches for other vacua. Part II. A new Higgstory at the cosmological collider,” JHEP 01, 105 (2020) [arXiv:1907.10624 [hep-ph]]. * [50] A. Hook, J. Huang and D. Racco, “Minimal signatures of the Standard Model in non-Gaussianities,” Phys. Rev. D 101, no.2, 023519 (2020) [arXiv:1908.00019 [hep-ph]]. * [51] S. Kumar and R. Sundrum, “Cosmological Collider Physics and the Curvaton,” JHEP 04, 077 (2020) [arXiv:1908.11378 [hep-ph]]. * [52] L. T. Wang and Z. Z. Xianyu, “In Search of Large Signals at the Cosmological Collider,” JHEP 02, 044 (2020) [arXiv:1910.12876 [hep-ph]]. * [53] Y. Wang and Y. Zhu, “Cosmological Collider Signatures of Massive Vectors from Non-Gaussian Gravitational Waves,” JCAP 04, 049 (2020) [arXiv:2001.03879 [astro-ph.CO]]. * [54] L. Li, S. Lu, Y. Wang and S.
Zhou, “Cosmological Signatures of Superheavy Dark Matter,” JHEP 07, 231 (2020) [arXiv:2002.01131 [hep-ph]]. * [55] L. T. Wang and Z. Z. Xianyu, “Gauge Boson Signals at the Cosmological Collider,” JHEP 11, 082 (2020) [arXiv:2004.02887 [hep-ph]]. * [56] A. Bodas, S. Kumar and R. Sundrum, “The Scalar Chemical Potential in Cosmological Collider Physics,” [arXiv:2010.04727 [hep-ph]]. * [57] J. Maldacena, “Non-Gaussian features of primordial fluctuations in single field inflationary models,” JHEP 0305, 013 (2003) [arXiv:astro-ph/0210603]. * [58] C. Cheung, et al., “The Effective Field Theory of Inflation,” JHEP 0803, 014 (2008) [arXiv:0709.0293 [hep-th]]. * [59] L. Senatore, “Lectures on Inflation,” arXiv:1609.00716 [hep-th]. * [60] R. Saito, “Cosmological correlation functions including a massive scalar field and an arbitrary number of soft-gravitons,” arXiv:1803.01287 [hep-th]. * [61] X. Chen, Y. Wang, and Z. Z. Xianyu, “Schwinger-Keldysh Diagrammatics for Primordial Perturbations,” JCAP 1712, 006 (2017) [arXiv:1703.10166 [hep-th]]. * [62] X. Chen, “Primordial Non-Gaussianities from Inflation Models,” Adv. Astron. 2010, 638979 (2010) [arXiv:1002.1416 [astro-ph.CO]]. * [63] D. Baumann, “TASI Lectures on Primordial Cosmology,” arXiv:1807.03098 [hep-th]. * [64] J. Schwinger, “Brownian Motion of a Quantum Oscillator,” J. Math. Phys. 2 (1961) 407; R. Jordan, “Effective Field Equations for Expectation Values,” Phys. Rev. D33 (1986) 444; E. Calzetta and B. Hu, “Closed Time Path Functional Formalism in Curved Space-Time: Application to Cosmological Back Reaction Problems,” Phys. Rev. D35 (1987) 495; J. Maldacena, “Non-Gaussian Features of Primordial Fluctuations in Single-Field Inflationary Models,” JHEP 05 (2003) 013 [arXiv:astro-ph/0210603]; S. Weinberg, “Quantum Contributions to Cosmological Correlations,” Phys. Rev. D72 (2005) 043514 [arXiv:hep-th/0506236].
# Generalized optical theorem for Rayleigh scattering approximation

Irving Rondón (corresponding author) and Jooyoung Lee

School of Computational Sciences, Korea Institute for Advanced Study, 85 Hoegi-ro, Seoul 0245, Republic of Korea

###### Abstract

A general expression for the optical theorem is derived for probe sources given in terms of propagation-invariant beams. The expression is obtained using the far-field approximation in the Rayleigh regime. To illustrate these results, the classical elastic scattering problem of a dielectric sphere is revisited, with the incident field allowed to be any propagation-invariant beam.

## 1 Introduction

In electromagnetic theory the Optical Theorem (OT) has a very long and interesting history (see R. Newton [1] and references therein). The OT is a fundamental result in scattering theory, relating the extinction cross section of a structure to the scattering amplitude in the forward direction [2, 3]. In applied sciences, understanding how absorption and scattering affect wave propagation through a medium helps to extract meaningful features of an object, such as its characterization and physical properties [4]. Recently, interesting reviews have been published [5, 6] on the development of generalized Lorenz-Mie theories and the Extended Optical Theorem (EOT) using structured-beam methods, describing several electromagnetic effects observed in experiments as well as applications of T-matrix methods for structured beam illumination.
The optical theorem has generated important new applications, such as calculating the complex scattering amplitude of spherical and cylindrical objects, applications in acoustic backscattering [7], and correctly understanding the physical effects of propagation on radiation force and torque [8] without introducing nonphysical effects or relying on purely numerical calculations (see [9] and references therein). A generalized optical theorem for acoustic waves was published by Marston and Zhang [10, 11] and recently extended to arbitrary beams [12]. Related extensions exist for electromagnetic fields [13, 14, 15, 16], quantum mechanics [17], the time domain, transmission lines, acoustic and electromagnetic wave propagation in anisotropic media [18, 19, 20], seismological waves [21], and the manipulation of the scattering pattern using non-Hermitian particles [22]. In the following section, a general derivation of the EOT applicable to any nondiffracting beam is given, using the Huygens principle in the far-field approximation for the Rayleigh regime. A generalized scattering amplitude function is presented, and the results are illustrated using a Bessel beam.

## 2 Theoretical background analysis

Let us consider a scattering particle of arbitrary form and size, with volume $V$, boundary surface $\partial\Omega$, and complex permittivity $\varepsilon\left(\vec{r}\right)=\varepsilon_{0}\left[\varepsilon_{r}^{\prime}(\vec{r})+i\varepsilon_{r}^{\prime\prime}(\vec{r})\right],$ where $\varepsilon_{0}$ is the vacuum permittivity. For simplicity, the medium surrounding the particle is assumed lossless and non-magnetic, with permittivity $\varepsilon$. Let $\vec{E}_{i}(\vec{r}),\,\vec{H}_{i}(\vec{r})$ denote the arbitrary incident beam that strikes the particle, $\vec{E}_{s}(\vec{r}),\,\vec{H}_{s}(\vec{r})$ the scattered electromagnetic field, and $\vec{E}(\vec{r}),\,\vec{H}(\vec{r})$ the total fields [3, 2, 23, 24].
$\vec{E}=\vec{E}_{i}+\vec{E}_{s},$ (1) $\vec{H}=\vec{H}_{i}+\vec{H}_{s}.$ (2) The time-averaged power absorbed by the scattering object is given by $P_{a}=-\frac{1}{2}\int_{\partial\Omega}\Biggl{[}\vec{n}\cdot\left[\left(\vec{E}_{i}+\vec{E}_{s}\right)\times\left(\vec{H}_{i}^{*}+\vec{H}_{s}^{*}\right)\right]\Biggr{]}dS.$ (3) This expression can be written in terms of the Poynting vector as $P_{a}=-\frac{1}{2}\int_{\partial\Omega}\vec{n}\cdot\left[\vec{S}_{i}+\vec{S}_{s}+\vec{S}^{\prime}\right]dS,$ (4) where the first term of Eq. (4) contains the incident Poynting vector $\vec{S}_{i}=\frac{1}{2}\mathbf{Re}\left[\vec{E_{i}}\times\vec{H}^{*}_{i}\right],$ (5) followed by the scattered vector $\vec{S}_{s}=\frac{1}{2}\mathbf{Re}\left[\vec{E}_{s}\times\vec{H}_{s}^{*}\right],$ (6) and the cross (interaction) term, which depends on both the incident and scattered fields, $\vec{S}^{\prime}=\frac{1}{2}\mathbf{Re}\left[\vec{E}_{i}\times\vec{H}_{s}^{*}+\vec{E}_{s}\times\vec{H}_{i}^{*}\right].$ (7) Using these equations together with the boundary conditions appropriate to the specific geometry, it is possible to obtain analytical and numerical solutions of the general scattering problem (see [23, 24], where the authors have reported pedagogical tutorials). In this letter, we follow and adapt the approach presented in Refs. [25, 26] for arbitrary structured beams, where in general the electromagnetic fields can be expressed as $\vec{E}(r,t)=\mathbf{Re}[{E(r)e^{-i\omega t}}]$ and $\vec{H}(r,t)=\mathbf{Re}[{H(r)e^{-i\omega t}}]$, where $\omega$ is the harmonic frequency of the (single-frequency) wave.
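The time-averaged Poynting vector of Eq. (5) can be made concrete with a small numerical sketch (the field values below are illustrative assumptions, not taken from the text): for an $x$-polarized plane wave the average $\frac{1}{2}\mathbf{Re}[\vec{E}\times\vec{H}^{*}]$ points along $z$ with magnitude $|E_{0}|^{2}/2\eta$.

```python
import numpy as np

eta = 376.73          # free-space impedance in ohms (assumed value)
E0 = 1.0 + 0.5j       # illustrative complex amplitude, V/m
k = 2 * np.pi / 500e-9  # assumed vacuum wavelength of 500 nm
z = 1e-6              # assumed observation point on the z axis

# Phasor fields of an x-polarized plane wave travelling in +z.
E = np.array([E0 * np.exp(1j * k * z), 0.0, 0.0])
H = np.array([0.0, (E0 / eta) * np.exp(1j * k * z), 0.0])

# Time-averaged Poynting vector, Eq. (5): S = (1/2) Re[E x H*].
S = 0.5 * np.real(np.cross(E, np.conj(H)))

# Only the z component survives, with magnitude |E0|^2 / (2 eta).
assert np.allclose(S, [0.0, 0.0, abs(E0)**2 / (2 * eta)])
```

The phase factor $e^{ikz}$ cancels against its conjugate, so the result is independent of $z$, as it must be for a lossless plane wave.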
Let us define the incident fields as $\vec{E}_{i}=\hat{e}_{i}\varphi(x,y)e^{ik_{z}z}e^{-i\omega t},$ (8) $\vec{H}_{i}=\frac{1}{i\omega\mu_{0}}\nabla\times\vec{E}_{i},$ (9) where $\varphi(x,y)$ is a scalar function physically representing a structured beam, which satisfies Maxwell's equations and the transversal Helmholtz equation $\nabla_{t}^{2}\varphi+k_{t}^{2}\varphi=0$ in Cartesian, circular cylindrical, parabolic cylindrical, and elliptical cylindrical coordinates [30, 27, 28, 29], and $k_{t}$ is the transversal wave vector. For simplicity the factor $e^{-i\omega t}$ is omitted. In the following, a linearly polarized electric field with complex amplitude $E_{0}$ is considered: $\vec{E}_{i}=\hat{e}_{x}E_{0}\varphi(x,y)e^{ik_{z}z},$ (10) $\vec{H}_{i}\approx\hat{e}_{y}\frac{E_{0}}{\omega\mu_{0}}k_{z}\varphi(x,y)e^{ik_{z}z}.$ (11) The Poynting vector for these incident fields lies in the XY plane [3]. In this approximation it has been assumed that any impinging electric field can be written as an invariant structured beam using the plane-wave spectrum representation [30, 27, 28, 29], $\varphi(x,y)=\int_{0}^{2\pi}A(\phi)e^{ik_{t}(x\cos\phi+y\sin\phi)}d\phi,$ (12) where $A(\phi)$ is an angular function and $k_{t}$ is the transversal wave vector. Note that for $A(\phi)=\delta(\phi-\phi_{0})$ the plane-wave case is recovered. This equation has the following interpretation: a structured or non-diffracting beam is a superposition of plane waves whose transversal wave vectors $k_{t}$ lie on a circle. Only for particular functions $A(\phi)$ can Whittaker's integral be expressed analytically. For example, higher-order Bessel beams are defined by $A(\phi)=e^{im\phi}$, where $m$ is the azimuthal order of the beam. Mathieu beams are defined by $A(\phi)=C(m,q,\phi)+iS(m,q,\phi)$, where $C$ and $S$ are the Mathieu cosine and sine functions. Weber beams are defined by $A(a,\phi)=\frac{1}{2(\pi|\sin\phi|^{1/2})}e^{ia\ln|\tan\phi/2|}$, which in turn divides into even and odd cases [27]. After substituting Eqs.
(10) and (11) into Eq. (4), it becomes $P_{a}+P_{s}=-\frac{1}{2}\int_{\partial\Omega}\mathbf{Re}\left[\vec{E}_{i}\times\vec{H}_{s}^{*}+\vec{E}_{s}\times\vec{H}_{i}^{*}\right]\cdot\vec{n}dS.$ (13) Figure 1: Normalized differential scattering cross section (23) and polarization function (24) vs scattering angle for a plane wave and a Bessel beam. In the Bessel beam case the sphere is on the beam axis and the beam order is $m=0$. After some vector algebra, Eq. (13) can be recast as $P_{a}+P_{s}=\frac{1}{2}\mathbf{Re}\int_{\partial\Omega}\Biggl{[}\hat{e}_{x}\cdot\left(\vec{n}\times\vec{H}_{s}\right)E_{0}^{*}\varphi^{*}(x,y)e^{-ik_{z}z}+\hat{e}_{y}\cdot\left(\vec{n}\times\vec{E}_{s}\right)\varphi^{*}(x,y)\frac{E_{0}^{*}e^{-ik_{z}z}}{\eta}\Biggr{]}dS,$ (14) where $\eta=\sqrt{\mu/\varepsilon}$ denotes the impedance. This general expression represents, from the physical point of view, the interaction between the incident and scattered fields in terms of the Green function, and is related to the Huygens principle [2, 25, 26]. Therefore, the scattered electric field in the far field can be expressed as $\vec{E}_{s}=\int_{\partial\Omega}i\omega\mu G(\vec{r},\vec{r^{\prime}})\cdot\left[\hat{n}\times\vec{H^{\prime}}_{s}\right]+\nabla\times G(\vec{r},\vec{r^{\prime}})\cdot\left[\hat{n}\times\vec{E^{\prime}}_{s}\right]dS^{\prime}.$ (15) Alternatively, in the far-field approximation this scattered field can also be expressed as $\vec{E}_{s}(\vec{r})=\frac{e^{ikr}}{r}F(\hat{k}_{i},\hat{k}_{s})\cdot\hat{e}_{i},$ (16) where $F(\hat{k}_{i},\hat{k}_{s})$ is called the scattering amplitude function and $\hat{e}_{i}$ is a unit vector. A similar expression holds for the pressure in acoustics [10, 11, 12]. This function by itself can reveal very interesting physics in scattering phenomena. Indeed, in Refs.
[31, 32] the authors proposed a method to obtain analytical expressions for the scattering amplitude function in order to explore an acoustic Bessel beam, later extended to Mathieu [33] and Weber [34] waves. To obtain an OT expression for an arbitrary beam, we use Eq. (16) in the forward direction, i.e. $\hat{k}_{s}=\hat{k}_{i}$ [38], take the product with $\hat{e}_{i}\varphi^{*}(x^{\prime},y^{\prime})E_{0}^{*}$, and integrate over the scattering object in the far-field approximation for the electromagnetic case [35, 36, 37, 15, 16]. A clear derivation of the EOT for acoustic waves using Jones' lemma can also be reviewed in [10, 11, 12]: $\hat{e}_{i}\varphi^{*}(x^{\prime},y^{\prime})\cdot\vec{E}_{s}=\int_{\partial\Omega}\varphi^{*}(x^{\prime},y^{\prime})e^{-i\hat{k}_{s}\cdot\vec{r}^{\prime}}\left[\hat{e}_{i}\cdot(\hat{n}\times\vec{H^{\prime}}_{s})+(\hat{k}_{i}\times\hat{e}_{i})\cdot(\hat{n}\times\vec{E^{\prime}}_{s})\right]dS^{\prime}.$ (17) Therefore, without loss of generality, applying the same physical conditions as [26, 25, 35, 36, 38] and using Eqs. (16) and (17), the optical theorem can be written as $\sigma_{\text{ext}}=\frac{4\pi}{k}\mathbf{Im}\left[\hat{e}_{i}\cdot\varphi^{*}(x^{\prime},y^{\prime})F(\hat{k}_{i},\hat{k}_{i})\cdot\hat{e}_{i}\right].$ (18) This expression recovers the OT for the plane-wave case, $\sigma_{\text{ext}}=\frac{4\pi}{k}\mathbf{Im}\left[\hat{e}_{i}\cdot F(\hat{k}_{i},\hat{k}_{i})\cdot\hat{e}_{i}\right].$ (19)

## 3 Generalized scattering amplitude function

In this section, we show an application of the scattering amplitude function.
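As a first numerical illustration (a sketch with assumed parameters, not values from the text): for a small lossy sphere in the Rayleigh regime the forward amplitude along $\hat{e}_{i}$ is $F=k^{2}a^{3}\,(n^{2}-1)/(n^{2}+2)$, so the plane-wave OT (19) yields $\sigma_{\text{ext}}=4\pi k a^{3}\,\mathbf{Im}\left[(n^{2}-1)/(n^{2}+2)\right]$, the familiar absorption-dominated Rayleigh extinction.

```python
import numpy as np

# Assumed illustrative parameters.
wavelength = 500e-9           # vacuum wavelength, m
k = 2 * np.pi / wavelength
a = 20e-9                     # sphere radius, chosen so ka << 1 (Rayleigh regime)
n = 1.5 + 0.1j                # complex refractive index

alpha = (n**2 - 1) / (n**2 + 2)   # Clausius-Mossotti factor
F_forward = k**2 * a**3 * alpha   # forward scattering amplitude (e_i component)

# Plane-wave optical theorem, Eq. (19): sigma_ext = (4 pi / k) Im[e_i . F . e_i].
sigma_ext = 4 * np.pi / k * np.imag(F_forward)

# Equivalent closed form: 4 pi k a^3 Im[(n^2 - 1)/(n^2 + 2)].
assert np.isclose(sigma_ext, 4 * np.pi * k * a**3 * np.imag(alpha))
```

At this leading order the amplitude's imaginary part encodes only absorption; the $O((ka)^{6})$ scattering contribution to the extinction requires the next order in the forward amplitude.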
Let us consider an electric field $\vec{E}=\frac{3}{2+\varepsilon_{r}}\vec{E}_{i}.$ (20) This equation describes the electric field $\vec{E}$ inside an immersed dielectric sphere in terms of the incident electric field $\vec{E}_{i}$ [2, 3, 26]. To avoid redundancy, we put forward the scattering amplitude function in the Rayleigh regime reported in [38], but here written in terms of the Whittaker integral as $F(\hat{i},\hat{o})=k^{2}a^{3}\left(\frac{n^{2}-1}{n^{2}+2}\right)\left[\hat{e}_{i}-\left(\hat{o}\cdot\hat{e}_{i}\right)\hat{o}\right]e^{ik_{z}z}\int_{0}^{2\pi}A(\phi)e^{ik_{t}(x\cos\phi+y\sin\phi)}d\phi,$ (21) where $n$ is the complex refractive index, $k$ is the wave vector and $a$ is the sphere radius. The squared modulus of this amplitude gives the differential scattering cross section $\frac{d\sigma_{\text{s}}}{d\Omega}=|F(\hat{i},\hat{o})|^{2}=k^{4}a^{6}\left(\frac{n^{2}-1}{n^{2}+2}\right)^{2}I_{\text{beam}},$ (22) where $I_{\text{beam}}=\left[1-\left(\hat{o}\cdot\hat{e}_{i}\right)^{2}\right]|\varphi(x,y)|^{2}$ is the intensity expressed in terms of the scalar potential; it depends on the incidence and observation directions, with $\hat{o}\cdot\hat{e}_{i}=0$ if $\hat{e}_{i}$ is perpendicular to the scattering plane and $\hat{o}\cdot\hat{e}_{i}=\sin\theta$ if $\hat{e}_{i}$ lies in the plane, while $|\varphi(x,y)|^{2}$ gives the beam intensity. Using this expression, several probe fields can be used to measure the scattering cross section. In the following we denote $\sigma_{\text{d}}\equiv d\sigma_{\text{s}}/d\Omega$.

Figure 2: Normalized differential scattering cross section and polarization scattered function for a Bessel beam with $m=0$, using Eqs. (23) and (24) with $\beta=35^{\circ}$.

### 3.1 The Rayleigh scattering approximation using a Bessel beam

In this section, we consider an incident electric field expressed using Eq.
(20), taking the angular spectrum as $A(\phi)=e^{im\phi}$, which represents a linearly polarized Bessel beam of order $m$, expressed as $\vec{E}_{i}(\vec{r})=\hat{e}_{i}J_{m}(k_{t}r)e^{ik_{z}z}$. Using this result, the differential scattering cross section of the scattered radiation is obtained as $\sigma_{\text{d}}(\theta)=\frac{k^{4}a^{6}}{2}\left(\frac{n^{2}-1}{n^{2}+2}\right)^{2}J_{m}^{2}(k_{t}r)(1+\cos^{2}\theta).$ (23) In this problem, we have used the standard scattering geometry [41, 42, 40] to relate the scattering angle $\theta=\arccos\left(\hat{e}_{i}\cdot\hat{o}\right)$ to the unit vectors $\hat{i}$ and $\hat{o}$ at a particular point P where the scattered radiation is observed. On the other hand, if the incident field is unpolarized, the differential scattering function is the average over parallel and perpendicular incident fields, $\sigma_{\text{d}}(\theta)=\frac{1}{2}\left[\sigma^{\perp}_{\text{d}}(\theta)+\sigma^{\parallel}_{\text{d}}(\theta)\right]$, with $\hat{o}\cdot\hat{e}_{i}=0$ if $\hat{e}_{i}$ is perpendicular to the scattering plane and $\hat{o}\cdot\hat{e}_{i}=\sin\theta$ if $\hat{e}_{i}$ lies in the plane. Using this information we can calculate the polarization scattered function as $\Pi(\theta)=\frac{\sigma^{\perp}_{\text{d}}(\theta)-\sigma^{\parallel}_{\text{d}}(\theta)}{\sigma^{\perp}_{\text{d}}(\theta)+\sigma^{\parallel}_{\text{d}}(\theta)}=\frac{\sin^{2}\theta}{1+\cos^{2}\theta}J_{m}^{2}\left(k_{t}r\right).$ (24)

Figure 3: Normalized differential scattering cross section and polarization scattered function for a Bessel beam with $m=1$ and $\beta=60^{\circ}$.

It is important to note that integrating Eq. (23) over $d\Omega$ straightforwardly yields the scattering cross section $\sigma_{\text{s}}$ [38], while the absorption cross section $\sigma_{\text{a}}$ has to be obtained from the Poynting vector [2, 3, 25, 26], since otherwise one would obtain $\sigma_{\text{ext}}=0$.
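The solid-angle integration of Eq. (23) can be checked numerically (a sketch with assumed parameters, taking a real refractive index for simplicity): since $\int(1+\cos^{2}\theta)\,d\Omega=\frac{16\pi}{3}$, one obtains $\sigma_{\text{s}}=\frac{8\pi}{3}k^{4}a^{6}\left(\frac{n^{2}-1}{n^{2}+2}\right)^{2}J_{m}^{2}(k_{t}r)$, which for $J_{m}^{2}\to 1$ is the familiar Rayleigh scattering cross section.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# Assumed illustrative parameters (not from the text).
wavelength = 500e-9
k = 2 * np.pi / wavelength
a, n = 20e-9, 1.5                 # sphere radius; real refractive index
m, beta, r = 0, np.radians(35), 0.0   # Bessel order, half-cone angle, on-axis sphere
kt = k * np.sin(beta)

# Angle-independent prefactor of Eq. (23).
pref = 0.5 * k**4 * a**6 * ((n**2 - 1) / (n**2 + 2))**2 * jv(m, kt * r)**2

# sigma_s = \int sigma_d dOmega = 2 pi \int_0^pi sigma_d(theta) sin(theta) dtheta.
sigma_s, _ = quad(lambda th: 2 * np.pi * pref * (1 + np.cos(th)**2) * np.sin(th),
                  0, np.pi)

closed_form = (8 * np.pi / 3) * k**4 * a**6 \
              * ((n**2 - 1) / (n**2 + 2))**2 * jv(m, kt * r)**2
assert np.isclose(sigma_s, closed_form)
```

For an on-axis sphere with $m=0$, $J_{0}(0)=1$ and the plane-wave Rayleigh result is recovered exactly.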
Note that taking $r\rightarrow 0$ in (23) and (24) recovers the plane-wave case [2]. Finally, in order to validate our results, we show in Fig. 1 some numerical results comparing the scattered radiation as a function of the scattering angle, for both the differential cross section and the polarization scattered function, considering a plane wave and a Bessel beam with $m=0$. For simplicity, we make the analysis in one dimension and vary the angle from $0$ to $\pi$. For the Bessel beam case, $k_{t}=k\sin\beta$ is the transversal wave vector, where $\beta$ is the half-cone angle. This model reproduces the expected behavior for a plane wave [2]: the differential cross section presents a minimum at $\pi/2$ and the polarization scattered function a maximum at $\pi/2$ (see Fig. 1); at these angles the scattered radiation is linearly polarized, an effect proposed by Rayleigh to explain the blue sky [2, 42, 41]. At these same values, for a zero-order Bessel beam the differential cross section is reduced with respect to the plane-wave case, with a maximum around $\pi/3$, while the scattered polarization function shows a clear decrease of the scattered radiation. In Fig. 2, we show the three-dimensional behavior of the differential cross section $\left(\sigma_{s}\right)_{\text{BB}}$ and the polarization scatter function $\Pi(\theta)_{\text{BB}}$ for a Bessel beam with $m=0$ and $\beta=35^{\circ}$. In addition, to illustrate how the scattering cross section and the polarization scattered function change with the incident field, Fig. 3 shows their behavior for $m=1$ and $\beta=60^{\circ}$. Two-dimensional scattered patterns were previously reported for EM scattering of a zero-order Bessel beam by a dielectric sphere [38], where the author compares the incident Bessel beam as a function of the impinging angle $\beta$.
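The Bessel beams used above follow from the angular spectrum $A(\phi)=e^{im\phi}$ in Eq. (12), whose Whittaker integral has the closed form $2\pi i^{m}J_{m}(k_{t}\rho)e^{im\theta}$ in polar coordinates $(\rho,\theta)$. A quick numerical check of this identity (with assumed parameter values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def whittaker(m, kt, x, y):
    """Whittaker integral, Eq. (12), with angular spectrum A(phi) = exp(i m phi)."""
    arg = lambda p: m * p + kt * (x * np.cos(p) + y * np.sin(p))
    re, _ = quad(lambda p: np.cos(arg(p)), 0, 2 * np.pi)
    im, _ = quad(lambda p: np.sin(arg(p)), 0, 2 * np.pi)
    return re + 1j * im

kt = 5e6  # assumed transversal wavenumber, 1/m
for m in (0, 1, 2):
    for x, y in [(0.3e-6, 0.0), (0.2e-6, 0.4e-6)]:
        rho, th = np.hypot(x, y), np.arctan2(y, x)
        # Closed form: 2 pi i^m J_m(kt rho) exp(i m theta).
        closed = 2 * np.pi * 1j**m * jv(m, kt * rho) * np.exp(1j * m * th)
        assert np.isclose(whittaker(m, kt, x, y), closed, atol=1e-6)
```

This confirms that the single scalar function $\varphi$ in Eqs. (21)-(23) reduces to $J_{m}(k_{t}r)$ up to the constant factor $2\pi i^{m}$ and an azimuthal phase.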
## 4 Conclusions

A general optical theorem for an arbitrary beam was derived using the amplitude scattering function and the Huygens principle in the far-field approximation. The presented ordinary form of the optical theorem recovers the particular free-space case derived in previous works [9, 10, 11, 12]. A general representation for extinction in the Rayleigh scattering regime was studied, as well as the effect of a linearly polarized Bessel beam of order $m$ as a function of the impinging angle $\beta$. From a physical point of view, this method, Eq. (22), can be extended using other spectral beam-wave representations [39], which allows the study of waves such as X waves, Airy beams, and Frozen waves, among others. It would be interesting to explore, as stated in [12], the physics of the differential cross sections for scattering, $d\sigma_{\text{s}}/d\Omega$, and for extinction, $d\sigma_{\text{ext}}/d\Omega$, as well as the geometrical features and analogies between acoustics and electromagnetism. Several applications, such as the Rayleigh scattering of nanoparticles and optical forces [43] using focused femtosecond laser pulses, are in development. Applications measuring the optical extinction have recently been explored experimentally using radially polarized cylindrical beams [44].

## References

* [1] R. G. Newton, Optical theorem and beyond, Am. J. Phys. 44, 639-642 (1976). * [2] J. D. Jackson, Classical Electrodynamics, 3rd Edition, Wiley (1999). * [3] C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles, New York: Wiley (1983). * [4] J. Soares, Introduction to Optical Characterization of Materials, in: M. Sardela (ed.), Practical Materials Characterization, Springer, New York, NY. * [5] G. Gouesbet, Van de Hulst Essay: A review on generalized Lorenz-Mie theories with wow stories and an epistemological discussion, Journal of Quantitative Spectroscopy and Radiative Transfer 253, 107-117 (2020). * [6] G. 
Gouesbet, T-matrix methods for electromagnetic structured beams: A commented reference database for the period 2014–2018, Journal of Quantitative Spectroscopy and Radiative Transfer, 230, 247-281 (2019). * [7] P. L. Marston, Generalized optical theorem for scatterers having inversion symmetry: application to acoustic backscattering, J. Acoust. Soc. Am. 109, 4 (2001). * [8] P. L. Marston and J. H. Crichton, Radiation torque on a sphere caused by a circularly-polarized electromagnetic wave, Phys. Rev. A 30, 2508 (1984). * [9] L. Zhang and P. L. Marston, Unphysical consequences of negative absorbed power in linear passive scattering: Implications for radiation force and torque, J. Acoust. Soc. Am. 139, 3139 (2016). * [10] L. Zhang and P. L. Marston, Axial radiation force exerted by general non-diffracting beams, J. Acoust. Soc. Am. 131, EL329-EL335 (2012). * [11] L. Zhang and P. L. Marston, Optical theorem for acoustic non-diffracting beams and application to radiation force and torque. Biomed. Opt. Express 4, 1610-1617 (2013). * [12] L. Zhang, Generalized optical theorem for an arbitrary incident field. J. Acoust. Soc. Am. 145, 3 (2019). * [13] M. J. Berg, C. M. Sorensen, and A. Chakrabarti, Extinction and the optical theorem. Part I. Single particles, J. Opt. Soc. Am. A 25, 1504-1513 (2008). * [14] M. J. Berg, C. M. Sorensen, and A. Chakrabarti, Extinction and the optical theorem. Part II. Multiple particles, J. Opt. Soc. Am. A, 25, 1514-1520 (2008). * [15] P. Scott Carney, J. C. Schotland, and E. Wolf, Generalized optical theorem for reflection, transmission, and extinction of power for scalar fields, Phys. Rev. E 70, 036611 (2004). * [16] P. Scott Carney, E. Wolf, and G. S. Agarwal, Diffraction tomography using power extinction measurements. J. Opt. Soc. Am. A, 16, 11, 2643-2648 (1999). * [17] G. Gouesbet, On the optical theorem and non-plane-wave scattering in quantum mechanics. J. Math. Phys., 50, 112302 (2009). * [18] E. A. Marengo and J. 
Tu, Generalized optical theorem in the time domain, Progress In Electromagnetics Research B, 65, 1-18 (2016). * [19] E. A. Marengo and J. Tu, Optical theorem for transmission lines, Progress In Electromagnetics Research B, 61, 253-268 (2014). * [20] E. A. Marengo, A New Theory of the Generalized Optical Theorem in Anisotropic Media. IEEE Transactions on Antennas and Propagation, 61, 4, 2164-2179 (2013). * [21] K. Wapenaar and H. Douma, A unified optical theorem for scalar and vectorial wave fields. J. Acoust. Soc. Am. 131, 3611-3626 (2012). * [22] Y. J. Zhang et al., Manipulating the scattering pattern with non-Hermitian particle arrays, Opt. Express 28, 19492-19507 (2020). * [23] F. Frezza, F. Mangini, and N. Tedeschi, Introduction to electromagnetic scattering: tutorial, J. Opt. Soc. Am. A 35, 163-173 (2018). * [24] F. Frezza, F. Mangini, and N. Tedeschi, Introduction to electromagnetic scattering, part II: tutorial, J. Opt. Soc. Am. A 37, 1300-1315 (2020). * [25] L. Tsang, J. A. Kong, and K. H. Ding, Scattering of Electromagnetic Waves: Theories and Applications, John Wiley and Sons, Inc. (2000). * [26] A. Ishimaru, Electromagnetic Wave Propagation, Radiation, and Scattering, Prentice-Hall Inc. (1991). * [27] U. Levy, S. Derevyanko, and Y. Silberberg, Light Modes of Free Space, Progress in Optics, 61, 237-281 (2016). * [28] E. T. Whittaker, On the partial differential equations of mathematical physics. Math. Ann. 57, 333-355 (1902). * [29] M. Nieto-Vesperinas, Scattering and diffraction in physical optics. Wiley, New York (1991). * [30] W. Miller, Jr., in Encyclopedia of Mathematics and Applications, G. C. Rota, ed. (Addison-Wesley, Reading, Mass., 1977). * [31] P. L. Marston, Scattering of a Bessel beam by a sphere. J. Acoust. Soc. Am. 121, 2, 753-758 (2007). * [32] P. L. Marston, Scattering of a Bessel beam by a sphere II: Helicoidal case, shell example. J. Acoust. Soc. Am. 124, 5, 2905-2910 (2008). * [33] A. Belafhal, A. Chafiq, and Z. 
Hricha, Scattering of Mathieu beams by a rigid sphere. Opt. Comm. 284, 3030-3035 (2011). * [34] A. Belafhal, L. Ez-Zariy, A. Chafiq, and Z. Hricha, Analysis of the scattering far field of a nondiffracting parabolic beam by a rigid sphere. Phys. and Chem. News, 60, 15-21 (2011). * [35] M. I. Mishchenko, The electromagnetic optical theorem revisited. J. Quant. Spectrosc. Radiat. Transfer. 101, 3, 404-410 (2006). * [36] M. I. Mishchenko, Far-field approximation in electromagnetic scattering. J. Quant. Spectrosc. Radiat. Transfer. 100, 1, 268-276 (2006). * [37] M. Born and E. Wolf, Principles of Optics, Cambridge, UK: Cambridge University Press (1999). * [38] I. Rondon Ojeda and F. Soto-Eguíbar, Generalized optical theorem for propagation invariant beams, Optik 137, 17-24 (2017). * [39] H. E. Hernández-Figueroa, M. Zamboni-Rached, and E. Recami, Non-Diffracting Waves, John Wiley & Sons (2013). * [40] R. M. Drake and J. E. Gordon, Mie Scattering, Am. J. Phys. 53, 10, 955-962 (1985). * [41] K. Shimizu, Modification of the Rayleigh-Debye approximation, J. Opt. Soc. Am. 73, 4, 504-507 (1983). * [42] J. E. Gordon, Simple method for approximating Mie scattering. J. Opt. Soc. Am. 2, 2, 156-159 (1985). * [43] L. Gong et al., Optical forces of focused femtosecond laser pulses on nonlinear optical Rayleigh particles, Photon. Res. 6, 138-143 (2018). * [44] A. V. Krasavin et al., Generalization of the optical theorem: experimental proof for radially polarized beams. Light Sci Appl 7, 36 (2018).
# Evaluation of BERT and ALBERT Sentence Embedding Performance on Downstream NLP Tasks

Hyunjin Choi, Judong Kim, Seongho Joe, and Youngjune Gwon
Samsung SDS, Seoul, Korea

###### Abstract

Contextualized representations from a pre-trained language model are central to achieving high performance on downstream NLP tasks. The pre-trained BERT and A Lite BERT (ALBERT) models can be fine-tuned to give state-of-the-art results in sentence-pair regressions such as semantic textual similarity (STS) and natural language inference (NLI). Although BERT-based models yield the [CLS] token vector as a reasonable sentence embedding, the search for an optimal sentence embedding scheme remains an active research area in computational linguistics. This paper explores sentence embedding models for BERT and ALBERT. In particular, we take a modified BERT network with siamese and triplet network structures called Sentence-BERT (SBERT) and replace BERT with ALBERT to create Sentence-ALBERT (SALBERT). We also experiment with an outer CNN sentence-embedding network for SBERT and SALBERT. We evaluate the performance of all sentence-embedding models considered using the STS and NLI datasets. The empirical results indicate that our CNN architecture improves ALBERT models substantially more than BERT models on the STS benchmark. Despite significantly fewer model parameters, ALBERT sentence embedding is highly competitive with BERT in downstream NLP evaluations.

## I Introduction

Pre-trained language models have impacted the way modern natural language processing (NLP) applications and systems are built. An important paradigm is to train a language model on large corpora to serve as a platform upon which an NLP application can be built and optimized. Such a platform is shareable and can be distributed. Self-supervised learning with large corpora provides an appropriate starting point: extra task-specific layers are optimized from scratch while the pre-trained model parameters are reused. 
Transformer [1], a sequence transduction model based on the attention mechanism, has revolutionized the design of neural encoders for natural language sequences. By dispensing with recurrent and convolutional structures, the transformer architecture learns the sequential information in an input solely via attention, thanks to the multi-head self-attention layers in each encoder block. Devlin _et al._ [2] proposed Bidirectional Encoder Representations from Transformers (BERT) to improve on the predominantly unidirectional training of language models. By jointly conditioning on both left and right context in all layers, BERT uses the masked language modeling (MLM) loss to make the training of deep bidirectional language encoders possible. BERT uses an additional pre-training loss known as next-sentence prediction (NSP). NSP is designed to learn high-level linguistic coherence by predicting whether or not two given text segments appear consecutively in the original text. NSP is expected to improve performance on downstream NLP tasks such as semantic textual similarity (STS) and natural language inference (NLI), which need to reason about inter-sentence relations. A Lite BERT (ALBERT) [3] is proposed to scale up language representation learning via parameter reduction techniques. In ALBERT, cross-layer parameter sharing and factorization of the embedding parameters can be thought of as regularization that helps stabilize training. Furthermore, ALBERT uses an updated self-supervised loss known as sentence-order prediction (SOP), which addresses the ineffectiveness of NSP caused by conflating topic and coherence prediction. SOP has been shown to consistently help downstream tasks with multi-sentence inputs. The pre-training tasks are intrinsic compared to downstream tasks. A key disadvantage of BERT is that no independent sentence embeddings are computed. 
As a higher means of abstraction, sentence embeddings can play a central role in achieving good performance on downstream tasks like machine reading comprehension (MRC). The specifics of NLP applications are well abstracted by downstream tasks; for this reason, downstream performance is a good indicator of a language model's quality. When pre-trained language models are used for downstream task evaluations, they can generate additional feature representations in addition to serving as a platform for fine-tuning. In this paper, we are interested in learning sentence representations using out-of-the-box BERT and ALBERT token embeddings. Sentence embedding models are essential for clustering and semantic search, where a sentence input is mapped into a high-dimensional semantic vector space such that sentence vectors with similar meanings are close in distance. NLP researchers have started to feed individual sentences into BERT to derive fixed-size embeddings. A commonly accepted sentence embedding for BERT-based models is the [CLS] token used for sentence-level prediction (_i.e._, NSP or SOP) during pre-training. Averaging the representations obtained from the BERT or ALBERT output layer (_i.e._, the token embeddings) gives an alternative. Using the [CLS] token, which is optimized by an intrinsic pre-training task, is considered suboptimal, while the average pooling of token embeddings has limitations of its own. Nonetheless, it can be time-consuming to perform multi-sentence tasks associated with semantic search, summarization, and paraphrasing. Computing sentence embeddings from contextualized language models is an active, ongoing research problem. In our exploration of more elaborate sentence embedding models, we first consider Sentence-BERT (SBERT) [4], a modified BERT network with siamese and triplet network structures that derives semantically meaningful sentence embeddings. 
SBERT is computationally efficient and can compare sentences using only cosine similarity at run-time. We then take the SBERT architecture and simply replace BERT with ALBERT to form Sentence-ALBERT (SALBERT). We also apply a convolutional neural net (CNN) instead of average pooling to the BERT or ALBERT token embedding outputs. We have evaluated the empirical performance of all sentence embedding models using the STS and NLI datasets. We find that our CNN architecture improves ALBERT models by up to 8 points in Spearman's rank correlation on the STS benchmark, substantially more than for BERT models, which improve by only 1 point. Despite significantly fewer model parameters, ALBERT sentence embedding is highly competitive with BERT in downstream NLP evaluations.

This paper is structured in the following manner. Section II presents related work. Section III describes all sentence embedding models under consideration. In Section IV, we empirically evaluate the sentence embedding models using the STS and NLI datasets. The paper concludes in Section V.

## II Related Work

Language models provide core building blocks for downstream NLP tasks. Task-specific fine-tuning of a pre-trained language model is a contemporary approach to implementing an NLP system. BERT [2] is a pre-trained transformer encoder network [1] fine-tuned to give state-of-the-art results in question answering, sentence classification, and sentence-pair regression. A Lite BERT (ALBERT) [3] incorporates parameter reduction techniques to scale better than BERT. ALBERT is known to improve inter-sentence coherence via a self-supervised loss from sentence-order prediction (SOP), compared to the next-sentence prediction (NSP) loss in the original BERT. The BERT network structure contains a special classification token [CLS] as an aggregate sequence representation for NSP. (Similarly for ALBERT, [CLS] is used for SOP.) The [CLS] token can therefore serve as a sentence embedding. 
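These two out-of-the-box schemes — taking the [CLS] vector or averaging the token embeddings — can be sketched as follows. This is a minimal NumPy sketch with random stand-in encoder outputs; in practice `token_embeddings` would come from the BERT/ALBERT output layer and `attention_mask` from the tokenizer's padding mask.

```python
import numpy as np

# Stand-in for an encoder output: T token vectors of hidden size H.
rng = np.random.default_rng(0)
T, H = 6, 8
token_embeddings = rng.normal(size=(T, H))
# 1 for real tokens, 0 for padding (the last two positions here).
attention_mask = np.array([1, 1, 1, 1, 0, 0])

# Scheme 1: the [CLS] sentence embedding is simply the first token's vector.
cls_embedding = token_embeddings[0]

# Scheme 2: average-pool the token embeddings over non-padding tokens only.
mask = attention_mask[:, None].astype(float)               # shape (T, 1)
mean_embedding = (token_embeddings * mask).sum(axis=0) / mask.sum()

print(cls_embedding.shape, mean_embedding.shape)
```

The masking step matters: naively averaging over all T positions would let padding vectors dilute the sentence representation.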
Because there are no other independently computed sentence embeddings for BERT and ALBERT, one can average-pool the token embedding outputs to form a fixed-length sentence vector. Previously, sentence embedding research built on convolutional and recurrent structures. Kim [5] proposed a CNN with max pooling for sentence classification. In Conneau _et al._ [6], a bidirectional LSTM (BiLSTM) was used as the sentence embedding for natural language inference tasks. More complex neural nets have also been proposed: Socher _et al._ [7] introduced the recursive neural tensor network (RNTN) over parse trees to compute sentence embeddings for sentiment analysis. Zhu _et al._ [8] and Tai _et al._ [9] proposed tree-LSTMs, while Munkhdalai & Yu [10] suggested the neural semantic encoder (NSE) based on a memory-augmented neural net. Recently, sentence embedding research has been exploring attention mechanisms. Vaswani _et al._ [1] proposed Transformer, a self-attention network for the neural sequence-to-sequence task. A self-attention network uses multi-head scaled dot-product attention to represent each word as a weighted sum of all words in the sentence. The idea of self-attention pooling predates the self-attention network, as in Liu _et al._ [11], who utilized inner-attention within a sentence to apply pooling for sentence embedding. Choi _et al._ [12] developed a fine-grained attention mechanism for neural machine translation, extending scalar attention to vectors. Complex contextualized sentence encoders are usually pre-trained like language models, but they can be improved by supervised transfer tasks such as natural language inference (NLI). InferSent by Conneau _et al._ [6] has consistently outperformed unsupervised methods like SkipThought. Universal Sentence Encoder [13] trains a transformer network and augments unsupervised learning with training on the Stanford NLI (SNLI) dataset. 
Hill _et al._ [14] show that the task on which sentence embeddings are trained significantly impacts their quality. According to Conneau _et al._ [6] and Cer _et al._ [13], the SNLI datasets are suitable for training sentence embeddings. Yang _et al._ [15] present a method to train a siamese deep averaging network (DAN) and transformer, using conversations from Reddit, to yield good results on the STS benchmark. In Sentence-BERT (SBERT) [4], a comprehensive evaluation of the pre-trained BERT combined with siamese and triplet network structures is presented. To alleviate the run-time overhead, SBERT's more elaborate fine-tuning mechanisms, such as the softmax on augmented sentence representations and the triplet loss, are replaced by cosine similarity at inference. The simple SBERT inference reduces the effort of finding the most similar pair from 65 hours with BERT to about 5 seconds, while hardly impacting accuracy.

## III Models

The output of BERT or ALBERT constitutes token embeddings for a given text input. With a large output size (_e.g._, up to 512 token vectors of 768 dimensions each), the contextualized word embeddings can be fine-tuned for any downstream task. For sentence-level regressions such as semantic textual similarity (STS), fixed-size sentence embeddings are necessary. In this section, we describe sentence embedding models for BERT and ALBERT.

### III-A The [CLS] token embedding

The most straightforward sentence embedding model is the [CLS] vector used to predict sentence-level context (_i.e._, BERT NSP, ALBERT SOP) during pre-training. The [CLS] token summarizes the information from the other tokens via a self-attention mechanism that facilitates the intrinsic tasks of pre-training. By similar reasoning, the [CLS] token can be further optimized during fine-tuning on the downstream task. 
After fine-tuning, the [CLS] token is expected to capture more semantically relevant sentence-level context specific to the downstream task.

### III-B Pooled token embeddings

Averaging the token embedding output gives our next model. The model works like a pooling layer in a convolutional neural net: average pooling turns the token embeddings into a fixed-length sentence vector. An alternative would use max pooling instead, although max pooling tends to select the most salient features rather than a representative summary. In this paper, we choose the average-pooling model.

### III-C Sentence-BERT (SBERT)

Reimers & Gurevych [4] propose SBERT, which modifies a pre-trained BERT with siamese and triplet network structures to derive semantically meaningful sentence embeddings comparable using only cosine similarity. The siamese architecture is computationally efficient. Note that using a single copy of pre-trained BERT would require running all possible combinations of sentence pairs from a dataset to form representations for sentence pairs. SBERT first average-pools a pair of BERT embeddings into fixed-size sentence embeddings. Using the two sentence embeddings and their element-wise difference, SBERT can run a softmax layer configured for classification and regression tasks.

### III-D Sentence-ALBERT (SALBERT)

Based on ALBERT, SALBERT has the same siamese and triplet networks as SBERT. The siamese network structure in SBERT and SALBERT is illustrated in Fig. 1.

Figure 1: Siamese network structure used in SBERT and SALBERT

### III-E CNN-SBERT

In SBERT, average pooling is used to turn the BERT embeddings into fixed-length sentence vectors. CNN-SBERT instead employs a CNN architecture that takes in the token embeddings and computes a fixed-size sentence embedding through convolutional layers with the hyperbolic tangent activation function interlaced with pooling layers. 
In CNN-SBERT, all the pooling layers use max pooling except the final average pooling. The CNN architecture used in CNN-SBERT is described in Fig. 2.

### III-F CNN-SALBERT

Similarly, CNN-SALBERT uses the same CNN architecture as CNN-SBERT.

Figure 2: CNN architecture used in CNN-SBERT and CNN-SALBERT. B, T, and H denote the mini-batch size, number of tokens, and transformer hidden size.

## IV Experiments

We evaluate the performance of the sentence embedding models on Semantic Textual Similarity (STS) and Natural Language Inference (NLI) benchmarks. Following the methodology of Reimers & Gurevych [4], we use cosine similarity as the main metric to evaluate the similarity between two sentence embeddings. We compute both Pearson and Spearman's rank coefficients to indicate how our cosine-similarity estimate and the ground-truth label provided by the datasets are correlated. We use pre-trained BERT and ALBERT models from Hugging Face [16] (https://github.com/huggingface).

### IV-A Datasets and tasks

We fine-tune the BERT and ALBERT sentence embedding models on the Semantic Textual Similarity benchmark (STSb) [17], the Multi-Genre Natural Language Inference (MultiNLI) [18], and the Stanford Natural Language Inference (SNLI) [19] datasets.

#### IV-A1 Semantic Textual Similarity benchmark

STSb provides a set of English data used for STS tasks organized in the International Workshop on Semantic Evaluation (SemEval) [20] between 2012 and 2017. The dataset includes 8,628 sentence pairs from image captions, news headlines, and user forums, partitioned into train (5,749), dev (1,500), and test (1,379) sets. They are annotated with a score from 0 to 5 indicating how semantically similar a pair of sentences is.

#### IV-A2 Multi-Genre Natural Language Inference

The MultiNLI corpus [18] is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The dataset is used to evaluate the entailment classification task. 
MultiNLI is modeled on the SNLI corpus, differing in its coverage of genres of spoken and written text. MultiNLI supports a distinctive cross-genre generalization evaluation. Each sentence pair in MultiNLI has a label indicating whether the two sentences stand in a relation of contradiction, entailment, or neutrality.

#### IV-A3 Stanford Natural Language Inference

The SNLI corpus [19] contains 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, for natural language inference (NLI), also known as recognizing textual entailment (RTE). The General Language Understanding Evaluation (GLUE) benchmark [21] recommends the SNLI dataset as auxiliary training data for the MultiNLI task. Conneau _et al._ [6] and Cer _et al._ [13] find SNLI suitable for training sentence embeddings to reason about semantic relationships between sentences.

### IV-B Training

In our evaluation, we consider only the BERT and ALBERT base models (_i.e._, multi-head attention over 12 layers) from the transformers package downloaded from Hugging Face [16]. We use the GLUE benchmark to fine-tune the [CLS] token embedding and average-pooled token embedding models with a learning rate of $3\times 10^{-5}$. We train all of our models using the Adam optimizer with a linear learning rate warm-up for 10% of the training data. We use a learning rate of $2\times 10^{-5}$ for SBERT and SALBERT, as suggested by the original SBERT work, and $1\times 10^{-5}$ for CNN-SBERT and CNN-SALBERT. Using the MultiNLI and SNLI data, we optimize SBERT and SALBERT on the 3-way softmax loss.

#### IV-B1 STSb

To train on the STS benchmark task, we use the siamese network shown in Fig. 1. We run 10 training epochs with a batch size of 32.

#### IV-B2 NLI (MultiNLI + SNLI)

To train on the NLI tasks, we adopt the siamese architecture in Fig. 1. We use a softmax classifier instead of cosine similarity when training on the NLI tasks, with a cross-entropy loss. 
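The training/inference split of SBERT-style models can be sketched as follows. The sketch assumes pooled sentence embeddings u and v and the SBERT-style feature concatenation (u, v, |u − v|) feeding a 3-way softmax; the weights and dimensions are random illustrative stand-ins, not the trained models.

```python
import numpy as np

rng = np.random.default_rng(1)
H, n_classes = 8, 3  # hidden size; NLI labels: entailment/contradiction/neutral

# Stand-ins for a pair of pooled sentence embeddings.
u = rng.normal(size=H)
v = rng.normal(size=H)

# Training-time head: softmax over the concatenation (u, v, |u - v|).
W = 0.1 * rng.normal(size=(n_classes, 3 * H))
features = np.concatenate([u, v, np.abs(u - v)])   # shape (3H,)
logits = W @ features
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                # 3-way class probabilities

# Inference-time scoring: plain cosine similarity between the two embeddings.
cosine = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
print(probs, cosine)
```

The asymmetry is deliberate: the softmax head exists only for training; at inference the head is discarded and sentences are compared directly in embedding space.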
We train for 1 epoch because the NLI train set is much bigger than STSb, with a batch size of 16.

#### IV-B3 NLI + STSb

After fine-tuning on the NLI dataset, we train on the STS benchmark with a batch size of 32.

TABLE I: Evaluation on the STSb by fine-tuning sentence embeddings on STS, NLI, and both

| Model | Spearman (Pearson) |
| --- | --- |
| _Not fine-tuned_ | |
| BERT [CLS]-token embedding | 6.43 (1.70) |
| BERT avg. pooled token embedding | 47.29 (47.91) |
| ALBERT [CLS]-token embedding | 0.86 (4.57) |
| ALBERT avg. pooled token embedding | 47.84 (46.57) |
| _Fine-tuned on STSb_ | |
| BERT [CLS]-token embedding | 12.96 (7.49) |
| BERT avg. pooled token embedding | 55.76 (54.90) |
| SBERT | 84.66 (84.86) |
| CNN-SBERT | 85.72 (86.15) |
| ALBERT [CLS]-token embedding | 37.98 (27.89) |
| ALBERT avg. pooled token embedding | 61.06 (60.41) |
| SALBERT | 74.33 (75.26) |
| CNN-SALBERT | 82.30 (83.08) |
| _Fine-tuned on NLI (MultiNLI + SNLI)_ | |
| BERT [CLS]-token embedding | 32.72 (26.88) |
| BERT avg. pooled token embedding | 69.57 (68.49) |
| SBERT | 77.22 (74.53) |
| CNN-SBERT | 76.77 (75.31) |
| ALBERT [CLS]-token embedding | 24.87 (4.11) |
| ALBERT avg. pooled token embedding | 54.21 (53.58) |
| SALBERT | 74.05 (70.78) |
| CNN-SALBERT | 73.70 (72.24) |
| _Fine-tuned on NLI (MultiNLI + SNLI) and STSb_ | |
| BERT [CLS]-token embedding | 44.77 (38.74) |
| BERT avg. pooled token embedding | 67.61 (65.30) |
| SBERT | 85.32 (84.51) |
| CNN-SBERT | 85.91 (85.63) |
| ALBERT [CLS]-token embedding | 40.35 (33.46) |
| ALBERT avg. pooled token embedding | 60.24 (59.98) |
| SALBERT | 77.59 (77.82) |
| CNN-SALBERT | 83.49 (83.87) |

TABLE II: Evaluation on the GLUE STSb task

| Model | Spearman (Pearson) |
| --- | --- |
| BERT | 88.58 (88.89) |
| ALBERT | 90.13 (90.46) |

TABLE III: Evaluation on various STS tasks. Numbers represent Spearman (Pearson).

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSb | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SBERT | 72.37 (77.61) | 87.49 (88.01) | 89.57 (89.87) | 89.76 (89.54) | 82.41 (80.59) | 85.32 (84.51) | 84.49 (85.02) |
| CNN-SBERT | 69.80 (75.04) | 88.92 (89.86) | 89.23 (90.53) | 89.35 (89.37) | 82.81 (82.03) | 85.91 (85.63) | 84.34 (85.41) |
| SALBERT | 63.87 (68.15) | 84.04 (84.59) | 84.89 (86.09) | 86.41 (86.31) | 75.26 (74.32) | 77.59 (77.82) | 78.68 (79.55) |
| CNN-SALBERT | 65.41 (68.58) | 86.76 (87.29) | 86.17 (87.66) | 87.84 (88.12) | 81.58 (81.48) | 83.49 (83.87) | 81.76 (82.70) |

### IV-C Results

#### IV-C1 Effect of fine-tuning

Table I presents the STS benchmark results. Note that the performance we report is $\rho\times 100$, where $\rho$ is Spearman's rank or Pearson correlation coefficient. In general, fine-tuning results in better performance than no fine-tuning. Without fine-tuning, the [CLS] token as a sentence embedding gives poor downstream task performance. The quality of a sentence embedding, as reflected in STSb performance, seems to be affected by how closely related the train sets used for fine-tuning are to the task. We consider the STSb train set, which is directly related to the STSb task, as well as the NLI (_i.e._, MultiNLI and SNLI) train sets, which are not directly related to STSb. We have experimented with the following: i) fine-tuning with only the STSb train set, ii) with only the NLI train sets, and iii) with both the NLI and STSb train sets. Fine-tuning with only the STSb train set gives reasonably good performance, whereas fine-tuning with only the less relevant NLI train sets yields suboptimal performance, as expected. Our best STSb results are obtained by fine-tuning with both the STSb and NLI train sets.

#### IV-C2 Model comparison

We expect a more elaborate sentence embedding model to give better performance on STSb. We have found that pooling token embeddings forms a better sentence representation than [CLS]. We have also found that the siamese structure further helps sentence embeddings. 
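The evaluation protocol behind these comparisons (cosine-similarity predictions scored against gold labels via rank correlation) can be sketched as follows. This is a toy NumPy sketch with a tie-free Spearman implementation, not the evaluation code the authors used.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two batches of embeddings."""
    return np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

def spearman(x, y):
    """Spearman's rho as Pearson correlation of ranks (no tie correction)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Toy data: 5 hypothetical sentence pairs with gold STS scores in [0, 5].
rng = np.random.default_rng(2)
emb_a = rng.normal(size=(5, 8))
emb_b = emb_a + 0.1 * rng.normal(size=(5, 8))   # slightly perturbed copies
gold = np.array([4.8, 3.2, 0.5, 2.1, 4.0])

predicted = cosine_sim(emb_a, emb_b)
rho = spearman(predicted, gold)
print(round(rho, 3))
```

Spearman's rank correlation is preferred over Pearson here because it only requires the model's similarity scores to order sentence pairs correctly, not to match the 0-to-5 gold scale linearly.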
Generally, our CNN-based sentence embedding models give the best performance among all sentence embedding models.

#### IV-C3 Performance of ALBERT

ALBERT-based sentence embedding models generally achieve lower performance than their BERT counterparts in the STSb evaluations. Before fine-tuning, there is no significant difference between ALBERT and BERT; the gap, however, widens after fine-tuning. Only for the [CLS] token embedding and average-pooled embeddings does ALBERT perform better than BERT when fine-tuned on STSb. SALBERT has much lower performance than SBERT even though both have the same siamese architecture. This is surprising because ALBERT scores higher than BERT when evaluated on STSb using GLUE, as shown in Table II. The performance of SALBERT catches up with SBERT when the CNN architecture is applied, but CNN-SALBERT is still slightly inferior to CNN-SBERT.

#### IV-C4 Effect of CNN

In Table I, we find that the best scores come from the CNN-based models trained on NLI and STSb. According to these scores, the CNN architecture has a positive impact on sentence embedding performance. The CNN architecture, however, improves the ALBERT-based sentence embedding models more than the BERT-based models. We have found that the improvement by CNN for ALBERT models can be as high as 8 points, compared to 1 point for BERT models. We have empirically observed that ALBERT exhibits more instability (due to parameter sharing) than BERT. Such instability can be alleviated by the CNN, which is a possible explanation for why adding the CNN improves ALBERT more than BERT.

#### IV-C5 Evaluation on STS12–STS16 Tasks

In Table III, we present a comprehensive evaluation on the STS tasks from 2012 to 2016 [22, 23, 24, 25, 26] after fine-tuning with both the NLI and STSb train sets. We show the results of our best two models (_i.e._, SBERT/SALBERT and CNN-SBERT/CNN-SALBERT). The STSb result is also presented for comparison. 
The purpose of this evaluation is to verify the improvement by CNN beyond STSb. In general, we see a similar trend: the CNN architecture improves ALBERT-based sentence embedding models substantially more than BERT-based ones. On average, SBERT embeddings achieve a Spearman's rank correlation of 84.49, while the average for CNN-SBERT is 84.34; the CNN architecture has almost no effect on BERT-based sentence embedding models. On the other hand, the average correlation score of CNN-SALBERT is improved by 2 points.

## V Conclusion and Future Work

In this paper, we have presented an evaluation of BERT and ALBERT sentence embedding models on Semantic Textual Similarity (STS). Knowing the limitations of the [CLS] sentence vector, we facilitate the STS sentence-pair regression task with the siamese and triplet network architecture of Reimers & Gurevych for BERT and ALBERT. We have additionally developed a CNN architecture that takes in the token embeddings and computes a fixed-size sentence vector. Our CNN architecture improves ALBERT models by up to 0.08 (8 points on the $\rho\times 100$ scale) in Spearman's rank correlation on the STS benchmark, substantially more than the improvement of only 0.01 (1 point) for BERT models. Despite significantly fewer model parameters, ALBERT sentence embedding is highly competitive with BERT in downstream NLP evaluations. For our future work, we plan to evaluate sentence embedding with larger ALBERT models, _i.e._, ALBERT-large and ALBERT-xlarge. (Note that the total number of parameters in ALBERT-xlarge is still fewer than that of BERT-base.) The ALBERT results in this paper were obtained with the number of hidden-layer groups (num_hidden_groups) set to 1. We also plan to optimize the num_hidden_groups hyperparameter for better performance. 
## References * [1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is All you Need,” in _Advances in Neural Information Processing Systems 30_ , 2017. * [2] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” _arXiv preprint arXiv:1810.04805_ , 2018. * [3] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “ALBERT: A Lite BERT for Self-supervised Learning of Language Representations,” _arXiv preprint arXiv:1909.11942_ , 2019. * [4] N. Reimers and I. Gurevych, “Sentence-BERT: Sentence embeddings using Siamese BERT-networks,” in _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , 2019. * [5] Y. Kim, “Convolutional neural networks for sentence classification,” in _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , 2014. * [6] A. Conneau, D. Kiela, H. Schwenk, L. Barrault, and A. Bordes, “Supervised learning of universal sentence representations from natural language inference data,” in _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , 2017. * [7] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts, “Recursive deep models for semantic compositionality over a sentiment treebank,” in _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_ , 2013. * [8] X. Zhu, P. Sobhani, and H. Guo, “Long Short-Term Memory over Recursive Structures,” in _Proceedings of the 32nd International Conference on International Conference on Machine Learning_ , 2015. * [9] K. S. Tai, R. Socher, and C. D.
Manning, “Improved semantic representations from tree-structured long short-term memory networks,” in _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , 2015. * [10] T. Munkhdalai and H. Yu, “Neural semantic encoders,” in _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics_ , 2017. * [11] Y. Liu, C. Sun, L. Lin, and X. Wang, “Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention,” _CoRR_ , vol. abs/1605.09090, 2016. * [12] H. Choi, K. Cho, and Y. Bengio, “Fine-Grained Attention Mechanism for Neural Machine Translation,” _CoRR_ , vol. abs/1803.11407, 2018. * [13] D. Cer, Y. Yang, S.-y. Kong, N. Hua, N. Limtiaco, R. St. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar, B. Strope, and R. Kurzweil, “Universal sentence encoder for English,” in _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , 2018. * [14] F. Hill, K. Cho, and A. Korhonen, “Learning distributed representations of sentences from unlabelled data,” in _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , 2016. * [15] Y. Yang, S. Yuan, D. Cer, S.-y. Kong, N. Constant, P. Pilar, H. Ge, Y.-H. Sung, B. Strope, and R. Kurzweil, “Learning semantic textual similarity from conversations,” in _Proceedings of The Third Workshop on Representation Learning for NLP_ , 2018. * [16] Hugging Face, “Open Source NLP,” https://huggingface.co. * [17] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, “SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation,” in _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_ , 2017. * [18] A. Williams, N. 
Nangia, and S. Bowman, “A broad-coverage challenge corpus for sentence understanding through inference,” in _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , 2018. * [19] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning, “A large annotated corpus for learning natural language inference,” in _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , 2015. * [20] Special Interest Group on the Lexicon of the Association for Computational Linguistics, “SemEval: International Workshop on Semantic Evaluation,” https://semeval.github.io/. * [21] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman, “GLUE: A multi-task benchmark and analysis platform for natural language understanding,” in _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , 2018. * [22] E. Agirre, D. Cer, M. Diab, and A. Gonzalez-Agirre, “SemEval-2012 task 6: A pilot on semantic textual similarity,” in _*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)_ , 2012. * [23] E. Agirre, D. Cer, M. Diab, A. Gonzalez-Agirre, and W. Guo, “*SEM 2013 shared task: Semantic textual similarity,” in _Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity_ , 2013. * [24] E. Agirre, C. Banea, C. Cardie, D. Cer, M. Diab, A. Gonzalez-Agirre, W. Guo, R. Mihalcea, G. Rigau, and J. Wiebe, “SemEval-2014 task 10: Multilingual semantic textual similarity,” in _Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)_ , 2014. * [25] E. Agirre, C. Banea, C. Cardie, D. Cer, M. Diab, A. 
Gonzalez-Agirre, W. Guo, I. Lopez-Gazpio, M. Maritxalar, R. Mihalcea, G. Rigau, L. Uria, and J. Wiebe, “SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability,” in _Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)_ , 2015. * [26] E. Agirre, C. Banea, D. Cer, M. Diab, A. Gonzalez-Agirre, R. Mihalcea, G. Rigau, and J. Wiebe, “SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation,” in _Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)_ , 2016.
# Causal inference for observational longitudinal studies using deep survival models

Jie Zhu Centre for Big Data Research in Health (CBDRH) UNSW, Sydney NSW, 2052, Australia <EMAIL_ADDRESS> & Blanca Gallego Centre for Big Data Research in Health (CBDRH) UNSW, Sydney NSW, 2052, Australia <EMAIL_ADDRESS> Corresponding Author

###### Abstract

Objective Causal inference for observational longitudinal studies often requires the accurate estimation of treatment effects on time-to-event outcomes in the presence of time-dependent patient history and time-dependent covariates. Materials and Methods To tackle this longitudinal treatment effect estimation problem, we have developed a time-variant causal survival (TCS) model that uses the potential outcomes framework with an ensemble of recurrent subnetworks to estimate the difference in survival probabilities and its confidence interval over time as a function of time-dependent covariates and treatments. Results Using simulated survival datasets, the TCS model showed good causal effect estimation performance across scenarios of varying sample dimensions, event rates, confounding and overlapping. However, increasing the sample size was not effective in alleviating the adverse impact of a high level of confounding. In a large clinical cohort study, TCS identified the expected conditional average treatment effect and detected individual treatment effect heterogeneity over time. TCS provides an efficient way to estimate and update individualized treatment effects over time, in order to improve clinical decisions. Discussion The use of a propensity score layer and potential outcome subnetworks helps correct selection bias. However, the proposed model is limited in its ability to correct the bias from unmeasured confounding, and more extensive testing of TCS under extreme scenarios such as low overlapping and the presence of unmeasured confounders is desired and left for future work.
Conclusion TCS fills the gap in causal inference using deep learning techniques in survival analysis. It considers time-varying confounders and treatment options. Its treatment effect estimation can be easily compared with the conventional literature, which uses relative measures of treatment effect. We expect TCS will be particularly useful for identifying and quantifying treatment effect heterogeneity over time in the ever more complex observational health care environment.

_Keywords:_ Survival Analysis $\cdot$ Causal Inference $\cdot$ Deep Learning $\cdot$ Neural Subnetworks

## 1 Introduction

While randomized experiments are the gold standard in the comparison of interventions, it has become clear that observational studies using Big Data have an important role to play in comparative effectiveness research [1]. As a result, the last few years have seen a surge of studies proposing and comparing methods that can estimate the effect of interventions from routinely collected data. In particular, new methods have emerged that can investigate the heterogeneity of the treatment effect. In the health domain, these methods use medical claims [2] and Electronic Health Record (EHR) data and have been driven by the move towards personalized care [3]. In spite of significant progress, there remain challenges that must be addressed. In particular, no off-the-shelf treatment effect algorithm exists that fully accounts for the temporal nature of medical data, such as time-dependent patient history and time-to-event outcomes. Accounting for the temporal nature of medical information is important when informing clinical guidelines or designing clinical decision support systems, since they underpin clinicians' response to disease progression and patient deterioration.
As a motivating example, early detection and treatment of sepsis are critical for improving sepsis outcomes, where each hour of delayed treatment has been associated with roughly a 4–8% increase in mortality [4]. To address this problem, clinicians have proposed new definitions for sepsis [5], but the fundamental need to detect and treat sepsis early still remains. In this context, time-dependent variables such as previous administration of antibiotics or use of mechanical ventilation (MV) may play a significant role in treatment decisions and their corresponding outcomes. The challenge of capturing the history of time-dependent biomarkers and other risk factors pervades the prediction of time-to-event outcomes and the estimation of their treatment effects. The standard method to estimate the treatment effect using time-dependent confounders uses the Cox model [6], such as in landmark analysis [7], where the instantaneous probability of experiencing an event at time $t$ given covariates $X(t)$ is defined as a hazard function: $h(t|X(t))=h_{0}\exp(\beta^{\prime}X(t))$, where $\beta$ is a vector of constants and $h_{0}$ is the baseline risk of having an event at time 0. When censoring is not considered, a Cox model compares the risk of an event between treatment and control conditions at each time $t$ regardless of the previous history of $X(t)$ or the history of treatment conditions. The piece-wise constant Cox model [8] extends the constant $\beta$ to $\beta(t)$, thus allowing for a time-dependent effect. However, neither model takes into account the longitudinal history of covariates, and both treat missing covariates either by imputing their value or removing the incomplete observations. To address these limitations, models were proposed to jointly describe both longitudinal and survival processes [9, 10].
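Under this proportional form, the hazard ratio between two covariate settings at a fixed time $t$ depends only on the covariate difference, not on the baseline hazard $h_{0}$. A minimal numeric sketch (all values invented for illustration):

```python
import numpy as np

# Hypothetical illustration of h(t | X(t)) = h0 * exp(beta' X(t));
# all numeric values below are invented.
def cox_hazard(h0, beta, x_t):
    return h0 * np.exp(beta @ x_t)

beta = np.array([0.3, -0.1])      # log hazard ratios per unit of covariate
h0 = 0.02                         # baseline hazard at this time point
x_treated = np.array([1.2, 0.5])  # X(t) for a treated-like profile
x_control = np.array([0.4, 0.5])  # X(t) for a control-like profile

# The ratio cancels h0: it equals exp(beta' (x_treated - x_control)).
hr = cox_hazard(h0, beta, x_treated) / cox_hazard(h0, beta, x_control)
```

This cancellation is exactly why the plain Cox comparison ignores covariate and treatment history: only the current covariate values enter the ratio.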
In particular, these joint models generally comprise two submodels: one for repeated measurements of time-dependent covariates and the other for time-to-event data, such as a Cox model. The models are linked by a function of shared random effects. To find a full representation of the joint distribution of the two processes, the model needs to be correctly specified for both processes. Thus, model misspecification and computational effort significantly limit the estimation accuracy of this approach when applied to high-dimensional EHR data. Recently, data-driven models such as recurrent neural networks [11, 12] have been proposed to learn efficiently from EHR data with complex longitudinal dependencies. For example, Dynamic DeepHit [12] is a longitudinal outcome model which learns the joint distribution of survival times and competing events from a sequence of longitudinal measurements with a recurrent neural network structure. However, as a single outcome prediction model, DeepHit does not provide an explanatory mechanism for causal inference. On the other hand, the recently proposed Counterfactual Recurrent Network (CRN) [11] estimates the average longitudinal treatment effect on continuous outcomes by correcting for time-dependent confounding using domain adversarial training (DAT). However, the efficacy of DAT depends on the feature alignment in the source (control) and target (intervention) domains [13], that is, whether the covariates observed under treatment and control conditions have similar distributions. As shown in the original work, DAT is sensitive to the degree of overlapping among covariates, and its estimation performance drops as overlapping decreases, as is the case for other causal inference algorithms such as TMLE [14] and Causal Forest [15]. There is a lack of studies dedicated to the estimation of survival causal effects from longitudinal EHR data.
We fill this gap by introducing a time-variant causal model for survival analysis, called TCS, which extends our previous work on modeling the treatment effect on time-to-event outcomes from static patient history [16]. In TCS, we choose an ensemble of recurrent neural networks as the outcome model. Neural networks learn more efficiently from the trajectories of covariates than semi-parametric or parametric methods such as Super Learner and Cox models, while the ensemble captures the uncertainty of network estimation. Both the baseline survival probability and its interaction with treatment can vary with time, free from the proportional hazards assumption. In lieu of a single outcome model (like DeepHit) for estimating the joint distribution of the observed failure/censoring times, TCS first captures the information from treated and control observations separately, and then encodes it into a shared subnetwork. The encoded information is fed into counterfactual subnetworks to predict the expected survival outcomes given either treatment or control conditions. The dedicated subnetworks explicitly model the outcomes originating from patient baseline covariates and their interaction with treatment conditions. Lastly, we adjust for bias in the counterfactual outcomes arising from nonrandom treatment allocation in observational studies. The difference between the counterfactual survival probabilities gives us adjusted treatment effect estimates.
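The data flow just described can be sketched at the shape level as follows. The `dense` stand-in uses random weights in place of the trained recurrent subnetworks, so only the wiring is meaningful: two arm-specific encoders, a shared subnetwork, and two counterfactual heads emitting per-time-step hazard estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, n_out, relu=True):
    """Stand-in for a trained layer: random weights (shapes only)."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    h = x @ w
    return np.maximum(h, 0.0) if relu else h

def tcs_forward(history):
    """Shape-level sketch of the TCS data flow (our illustrative wiring;
    the real model uses trained recurrent subnetworks, not random layers).

    history: (batch, time, features) covariates/treatments/propensities.
    Returns per-time-step hazard estimates under treatment and control.
    """
    batch, time, feats = history.shape
    flat = history.reshape(batch, time * feats)
    enc_treated = dense(flat, 32)   # encoder trained on treated observations
    enc_control = dense(flat, 32)   # encoder trained on control observations
    shared = dense(np.concatenate([enc_treated, enc_control], axis=1), 32)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    theta1 = sigmoid(dense(shared, time, relu=False))  # head for all A(t)=1
    theta0 = sigmoid(dense(shared, time, relu=False))  # head for all A(t)=0
    return theta1, theta0

theta1, theta0 = tcs_forward(rng.normal(size=(4, 10, 6)))
```

Splitting the heads lets each subnetwork specialize on one counterfactual arm, while the shared subnetwork pools information across arms.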
The key characteristics of the proposed algorithm are: 1) it learns the treatment assignment and outcome generating processes from the pattern of observed and missing covariates in longitudinal data; 2) it captures treatment-specific outcomes by employing potential outcome subnetworks for treatment and control conditions; 3) it quantifies the uncertainty of the model estimations with an ensemble of neural networks with varied random seeds; and 4) it incorporates the history of previous treatments as additional covariates, allowing for straightforward updating of treatment effect estimations over time. The outline of this paper is as follows. Section 2 describes the materials and methods. Section 3 provides the results. We end with a discussion.

## 2 Materials and Methods

### 2.1 The case study

TCS provides a solution to the need to analyze the high-dimensional time-dependent observations in the patient history. It predicts patient outcomes in terms of survival probabilities from time-dependent patient history without feature engineering, and estimates conditional treatment effects for selected patient groups. We illustrate the TCS model by evaluating the effectiveness of mechanical ventilation (MV) on in-hospital mortality for sepsis patients in the ICU. The data source for this case study is MIMIC-III, an open-access, anonymised database of 61,532 admissions from 2001–2012 in six ICUs at a Boston teaching hospital [17].

Table 1: Summary of case study database

| | MIMIC-III |
|---|---|
| Unique patient ids | 20,938 |
| Number of event patients | 2,880 |
| Rows for the first 20 time stamps | 278,504 |
| Static features | 5 |
| Dynamic features | 39 |

In our case study, we define sepsis patients as those who had a record of suspected infection (identified by the prescription of antibiotics and sampling of bodily fluids for microbiological culture) and evidence of organ dysfunction (defined by a two-point deterioration of the SOFA score [18]).
The final cohort has 20,938 patients (including both adults and non-adults; please see Figure 1 in the supplementary information of the previous work for details [19]) and its summary is presented in Table 1. We consider the first 20 timestamps of each patient for the treatment effect estimation (the first 20 2-hour intervals for MIMIC-III; the discretization process takes the average value of each covariate during each interval, and missing values are masked as described in the method section). The treatment, which is the use of mechanical ventilation (MV), is a time-invariant binary covariate. However, as illustrated in Table 1, there are 39 time-variant covariates, making the adjustment for treatment effect a challenging task. In our simulation study, we further allow the treatment to be time-variant in order to generalize the application of the model. We train the model using 10-fold cross-validation with 70% of the original data injected in each training epoch. We estimate the average treatment effect at each time step using the whole sample.

### 2.2 A causal model for time-variant survival analysis

As illustrated in Figure 1, suppose we observe a sample $\mathcal{O}$ of $N$ independent observations generated from an unknown distribution $\mathcal{P}_{0}$:

Figure 1: Treatment effect estimation with time-varying covariates using potential outcomes. All dark gray cells indicate missing data. The TCS model estimates treatment effects as a function of time-dependent covariates, defined as the difference between potential survival probabilities from $t=0$ (start of follow-up) to $t=q$ (end of follow-up) given constant treatment or control conditions.
$\mathcal{O}:=\big(X_{i}(t),Y_{i}(t),A_{i}(t),\tau_{i}=\min(\tau_{s,i},\tau_{c,i})\big),\quad i=1,2,\ldots,N$

where $X_{i}(t)=(X_{i,1}(t),X_{i,2}(t),\ldots,X_{i,D}(t))$, $d=1,2,\ldots,D$, are the covariates at time $t$; $A_{i}(t)$ is the treatment condition at time $t$, which can take the value of $0$ or $1$ for the control and treatment conditions respectively; and $Y_{i}(t)$ denotes the outcome at time $t$, with $Y_{i}(t)=1$ if $i$ experienced an event and $Y_{i}(t)=0$ otherwise. Both $X_{i}(t)$ and $A_{i}(t)$ are captured from $t=-u$ to $t=q$ (inclusive), $u>1$ and $q>1$, where $u$ is the length of the patient's history window. The end of follow-up for a given patient, $\tau_{i}\geq 0$, is determined by the event or censoring time, $\tau_{s,i}$ or $\tau_{c,i}$, whichever happened first. For simplicity, we drop the subscript $i$ in the sequel. To fit the TCS model and adjust for right-censoring, we create the longitudinal outcome label, $\Gamma$, as a matrix:

$\Gamma=\begin{bmatrix}\Theta\\ \gamma\end{bmatrix},$ (1)

composed of the vector of events $\gamma=[Y(1),\dots,Y(\tau),\ldots,Y(q)]$ and the vector of terminal timing labels $\Theta=[\theta(1),\dots,\theta(\tau),\ldots,\theta(q)]$, where $\theta(t)=1$ for $t<\tau$ if a patient is censored or has an event at $\tau$, and $\theta(t)=0$ for $t\geq\tau$. As shown in Appendix A, the estimation $\hat{\theta}(t)$ using TCS is equivalent to the hazard rate of experiencing an event at time $t$ adjusted for right-censoring.

Figure 2: Illustration of treatment recommendations based on potential outcomes. All dark gray cells indicate missing data. At time $n_{1}$, a treatment option, $A_{n_{1}}$, is selected and its corresponding potential outcome $Y_{n_{1}}$ is computed (with right-censoring and confounding adjusted). The practitioner may decide whether $A_{n_{1}}$ is appropriate given $Y_{n_{1}}$.
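A direct transcription of these label definitions for a single patient is shown below (not the authors' released code; the convention of carrying the event label forward after $\tau$ is our assumption):

```python
import numpy as np

def make_labels(tau, event, q):
    """One patient's label matrix Gamma from Eq. (1).

    tau:   end of follow-up (event or censoring time), 1 <= tau <= q
    event: 1 if an event occurred at tau, 0 if censored
    """
    t = np.arange(1, q + 1)
    theta = (t < tau).astype(int)    # Theta: 1 before tau, 0 from tau on
    gamma = np.zeros(q, dtype=int)   # gamma: event labels Y(t)
    if event:
        gamma[tau - 1:] = 1          # Y(tau) = 1; carried forward after tau
                                     # (our assumed convention for t > tau)
    return np.vstack([theta, gamma])
```

For example, a patient with an event at $\tau=3$ and $q=5$ gets $\Theta=[1,1,0,0,0]$, so the $\Theta$ row marks exactly the time steps still at risk.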
When there are multiple time steps, the decision process can be repeated as illustrated in steps 3 and 4. TCS maps the propensity score and covariate matrices in Figure 1 (which we denote as $\Lambda$) to the outcome $\Gamma$:

$\Gamma=f(\Lambda)$ (2)

The potential survival curves for the treatment arm are computed by mapping $\hat{\Gamma}_{1}=f(\Lambda_{1})$, where $\Lambda_{1}$ is calculated by setting all $A(t)=1$ in $\Lambda$. Similarly, the potential survival curves under the control condition are computed by mapping $\hat{\Gamma}_{0}=f(\Lambda_{0})$, where $\Lambda_{0}$ is calculated by setting all $A(t)=0$. This setup accounts for time-dependent covariates throughout the follow-up window, but assumes that there is no treatment-covariate feedback, that is, the covariates observed after the treatment assignment during the follow-up are independent of previous treatments. We use the conditional probability

$S(t,\Lambda)=\prod_{j=1}^{t}(1-\hat{\theta}(j,\Lambda)),$ (3)

to denote the probability that the event did not occur at any time from 1 to $t$ conditioned on $\Lambda$. TCS can be easily extended to a system that provides treatment recommendations at selected times based on estimated potential outcomes, as illustrated in Figure 2. We discuss the implementation of TCS in Appendix A.

### 2.3 Defining the survival treatment effect

To estimate the treatment effect over the follow-up window, we follow Rosenbaum and Rubin's potential outcomes framework [20], and assume 1) the censoring is non-informative conditioned on the treatment (coarsening at random), 2) there are no unmeasured confounders, 3) the history of treatment assignment $\overline{A}(u)$ is independent of the outcomes given the history of correctly estimated propensity scores $\overline{P}(u)$, and 4) $X(\tau)$ is independent of $A(t)$ for all $\tau>t$.
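Given per-time-step hazard estimates $\hat{\theta}$, Eq. (3) is a cumulative product, and the two counterfactual survival curves follow from evaluating it on the outputs of $f(\Lambda_{1})$ and $f(\Lambda_{0})$. A small sketch with invented hazard values:

```python
import numpy as np

def survival_curve(theta_hat):
    """Eq. (3): S(t, Lambda) = prod_{j=1}^{t} (1 - theta_hat(j, Lambda))."""
    return np.cumprod(1.0 - np.asarray(theta_hat), axis=-1)

# Invented per-time-step hazard estimates under the two counterfactual inputs
theta1 = np.array([0.10, 0.12, 0.15])  # from f(Lambda_1): all A(t) = 1
theta0 = np.array([0.05, 0.06, 0.08])  # from f(Lambda_0): all A(t) = 0
s1, s0 = survival_curve(theta1), survival_curve(theta0)
```

Because each factor $1-\hat{\theta}$ lies in $[0,1]$, the resulting curve is automatically monotonically non-increasing, as a survival function must be.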
Then the conditional average treatment effect (CATE) can be defined as:

$\Psi(t,\lambda)=\mathbb{E}_{\Lambda=\lambda}\big[\mathbb{E}[S(t,\Lambda_{1})]-\mathbb{E}[S(t,\Lambda_{0})]\big].$ (4)

Similarly, we define the individual treatment effect (ITE) as:

$\psi(t_{i},\lambda_{i})=\mathbb{E}[S(t_{i},\lambda_{i,1})]-\mathbb{E}[S(t_{i},\lambda_{i,0})].$ (5)

To compare this absolute measure of treatment effect with the conventionally reported hazard ratio, we define an empirical hazard ratio as:

$\text{HR}^{*}(t,\lambda)=\frac{1}{n}\sum_{i}\frac{\hat{\theta}_{i}(t,\Lambda_{0})}{\hat{\theta}_{i}(t,\Lambda_{1})}$ (6)

where $n$ is the number of observations in a sample where $\Lambda=\lambda$.

### 2.4 Model evaluation

#### 2.4.1 Benchmark data

To explore the finite-sample performance of TCS, we ran several experiments with biologically plausible longitudinal data following a previous study [21]. In particular, we use:

* $D$ continuous confounders $X(t)_{1},X(t)_{2},\dots,X(t)_{D}\sim\mathrm{N}(\sqrt{t},V)$ from $t=t^{\prime}-u$ to $t=t^{\prime}+q$, where $V$ is the variance of the normal distribution and $D$ is the feature dimension;
* Binary exposure: $A(t)\sim\mathrm{Binom}\big(\eta\cdot I(\sum_{d=1}^{3}X(t)_{d}>\frac{1}{3}\sum_{i=1}^{3}X(t)_{i})+0.5\cdot(1-\eta)\big)$, where $I$ is an indicator function and $\eta$ controls the level of overlapping. When $\eta=0$, the probability of receiving the treatment is 50% regardless of $X(t)$; when $\eta=1$, the allocation follows the indicator function, so that the outcome will be confounded by the first 3 confounders of $X(t)$; and when $\eta=0.5$, the chance of receiving the treatment is partially dependent on the indicator function, namely $(0.5\cdot I+0.25)\cdot 100\%$.
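Given matrices of counterfactual survival curves and hazards, Eqs. (4)–(6) reduce to simple averages over individuals; a sketch (the array shapes are our assumption):

```python
import numpy as np

def cate(S1, S0):
    """Eq. (4): average difference of potential survival curves in a subgroup.
    S1, S0: (n, q) matrices of per-individual survival probabilities."""
    return (S1 - S0).mean(axis=0)

def ite(S1_i, S0_i):
    """Eq. (5): per-individual difference of potential survival curves."""
    return S1_i - S0_i

def empirical_hr(theta1, theta0):
    """Eq. (6): HR*(t) as the mean of theta0_i(t) / theta1_i(t) over i.
    theta1, theta0: (n, q) matrices of per-individual hazard estimates."""
    return (theta0 / theta1).mean(axis=0)
```

Note that the CATE is expressed on the absolute (survival probability) scale, while the empirical hazard ratio provides the relative measure familiar from the Cox literature.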
* Hazard rate: $h(t)=\frac{\log(t)}{\lambda}\big(0.1A(t)+\beta\sum_{i=1}^{D}X(t)_{i}\big)$, where $\beta=1$;
* The survival probability $S(t)=\exp(-h(t))$;
* The censoring probability $SC(t)=\exp(-\frac{\log(t)}{\lambda})$, where $\lambda=30$;
* An event indicator generated using root-finding [21] at each time $t$: $E(t)=I(S(t)<U\sim\mathrm{Uniform}(0,1))$, with the event time defined by $\tau_{s}$ if $E(\tau_{s})=1$, otherwise $\tau_{s}=q+1$;
* A censoring indicator generated using the root-finding technique: $C(t)=I(SC(t)<U\sim\mathrm{Uniform}(0,1))$, with the censoring time defined by $\tau_{c}$ if $C(\tau_{c})=1$, otherwise $\tau_{c}=q$; and
* The survival outcome given by the indicator function: $Y=I(\tau_{s}\leq\tau_{c})$.

A series of experiments was conducted by changing the following parameters: $V\in\{0.5,1.0,1.5,2.0\}$, $D\in\{6,10,20,40\}$, $\eta\in\{0.7,0.8,0.9,1.0\}$, $N\in\{1500,3000,10000\}$. We define our default data generation model with $V=0.5$, $D=6$, $\eta=0.9$, and $N=1500$. In this study, we set the length of the estimation window from $t^{\prime}$ to $q$ at 10 time steps and the length of the history window at five time steps ($u=5$). For each scenario, we generate 50 sets of training and testing samples using the same parameters but different random seeds. All evaluations are averaged over the testing results from these 50 samples.

#### 2.4.2 Benchmark metrics

The explanatory performance of TCS is assessed with simulation studies using the three metrics described below.

Root-mean-square error (RMSE): the root of the mean squared error of the estimated individual treatment effect:

$\text{RMSE}(t)=\sqrt{\frac{1}{n_{k}}\sum_{i_{k}}(\hat{\psi}_{i_{k}}(t)-\psi_{i_{k}}(t))^{2}}$

where $n_{k}$ is the number of individuals in subgroup $k$ and $i_{k}$ is the individual indicator in each group. When estimating the ATE, we have $n_{k}=N$, the sample size.
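The RMSE can be computed directly from per-individual effect estimates; a minimal sketch (1-D arrays over the individuals of one subgroup, which is our assumed input layout):

```python
import numpy as np

def rmse(psi_hat, psi_true):
    """RMSE of estimated individual treatment effects at one time point,
    transcribing the definition above (not the authors' code)."""
    err = np.asarray(psi_hat) - np.asarray(psi_true)
    return float(np.sqrt(np.mean(err ** 2)))
```

Taking the square root puts the error back on the same scale as the treatment effect itself, which makes the values directly comparable to the bias figures reported later.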
Absolute percentage bias (Bias): the absolute percentage bias in the estimated conditional average/individual treatment effect:

$\text{Bias}(t)=\frac{1}{n_{k}}\sum_{i_{k}}\big|\frac{\hat{\psi}_{i_{k}}(t)-\psi_{i_{k}}(t)}{\psi_{i_{k}}(t)}\big|$

Coverage ratio: the percentage of times that the true treatment effect lies within the 95% confidence interval of the posterior distribution of the estimated individual treatment effect:

$\text{Coverage}(t)=\frac{1}{n_{k}}\sum_{i_{k}}I(|\hat{\psi}_{i_{k}}(t)-\psi_{i_{k}}(t)|<CI_{i_{k}}(t))$

where $I$ is an indicator function, with $I=1$ if its argument is true and 0 otherwise, and $CI_{i_{k}}(t)$ is the half-width of the 95% confidence interval of the estimation.

Concordance and AUROC: We evaluate the models' discrimination performance on the estimated survival curves with Harrell's concordance index [22] and the area under the receiver operating characteristic curve (AUROC).

#### 2.4.3 Benchmark algorithms

The TCS model was benchmarked against two other machine learning algorithms:

1. Plain recurrent neural network with survival outcomes (SNN): this is achieved by removing the propensity score estimation layer in Figure 5.
2. Plain recurrent neural network with binary outcomes (Binary): direct prediction of the longitudinal outcome defined by the independent binary labels in the first part of Equation (1), using mean squared error as the loss function.

For a fair comparison, we applied inverse probability weighting (IPW) and iterative targeted maximum likelihood estimation (TMLE) to the raw estimations from SNN and Binary to correct for selection bias when estimating the CATE (please refer to Appendix A for a detailed explanation). We developed TCS using Python 3.8.0 with TensorFlow 2.5.0 [23] (code available at https://github.com/EliotZhu/TCS).

## 3 Results

### 3.1 Experiments

In Table 2, we compare TCS against the selected benchmark models using the test data generated under the default scenario.
The Binary method achieves the highest AUROC, while the TCS and SNN models perform better in concordance due to their survival-outcome design. In terms of treatment effect estimation, TCS achieves the best performance in both ITE and ATE estimation, compared with the IPW- and TMLE-adjusted ATE estimations provided by the Binary and SNN models.

Table 2: Estimation performance by benchmark algorithms under the default scenario

| Metrics | Binary | TCS | SNN |
|---|---|---|---|
| AUROC | 0.96 (0.816, 1.106) | 0.82 (0.729, 0.914) | 0.85 (0.725, 0.983) |
| Concordance | 0.76 (0.730, 0.799) | 0.90 (0.852, 0.947) | 0.86 (0.798, 0.913) |
| Bias (IPW) | 0.65 (0.625, 0.675) | – | 0.15 (0.133, 0.167) |
| Bias (TMLE) | 0.63 (0.576, 0.684) | – | 0.14 (0.129, 0.151) |
| Bias (ATE) | – | 0.10 (0.061, 0.136) | – |
| Bias (ITE) | 0.75 (0.706, 0.794) | 0.10 (0.064, 0.137) | 0.43 (0.401, 0.459) |

All metrics are averaged across 50 independent simulations over 30 time points from the test dataset under the default scenario.

The improvement of TCS is most noticeable in the estimation of the ITEs, where the Bias is only 0.10 (0.064, 0.137) across 50 samples, compared to 0.43 (0.401, 0.459) for the SNN model. However, the improvement in ATE estimation by TCS compared to TMLE- or IPW-adjusted SNN estimation is less pronounced, at around 5%. TCS gains from its design of the propensity score layer as well as the potential outcome subnetworks. In Figure 3, we illustrate how TCS provides ITE estimations close to the true values, unlike the Binary and SNN models. In particular, the Binary model only maximises its discrimination performance in terms of AUROC but provides no value for treatment effect estimation.

Figure 3: Diagnostic plots for the TCS model. True and estimated individual treatment effect (ITE) distributions by benchmark algorithms. The colored dots are the true and estimated ITE averaged over a randomly chosen sample under the default scenario.
The dashed diagonal line indicates equality of the true and estimated values.

Table 3: Simulation study results.

| | Bias (ATE) | Coverage | Bias (ITE) | RMSE |
|---|---|---|---|---|
| Overlap ($\eta$) | | | | |
| 0.7 | 0.11 (0.070, 0.157) | 0.70 (0.644, 0.756) | 0.12 (0.081, 0.167) | 1.04 (0.630, 1.420) |
| 0.8 | 0.11 (0.063, 0.151) | 0.72 (0.618, 0.824) | 0.11 (0.063, 0.151) | 1.08 (0.367, 1.801) |
| 0.9 | 0.10 (0.061, 0.136) | 0.90 (0.804, 0.995) | 0.10 (0.064, 0.137) | 1.04 (0.330, 1.743) |
| 1 | 0.06 (0.012, 0.107) | 0.98 (0.953, 1.000) | 0.06 (0.014, 0.106) | 0.56 (0.119, 1.003) |
| Dimension ($D$) | | | | |
| 6 | 0.10 (0.061, 0.136) | 0.90 (0.804, 0.995) | 0.10 (0.064, 0.137) | 1.04 (0.330, 1.743) |
| 10 | 0.09 (0.050, 0.135) | 0.94 (0.863, 1.000) | 0.08 (0.042, 0.125) | 1.05 (0.547, 1.553) |
| 20 | 0.10 (0.044, 0.161) | 0.94 (0.906, 0.967) | 0.10 (0.044, 0.161) | 1.29 (0.518, 2.061) |
| 40 | 0.08 (0.022, 0.130) | 0.93 (0.885, 0.971) | 0.08 (0.023, 0.130) | 1.09 (0.627, 1.552) |
| Variance ($V$) | | | | |
| 0.5 | 0.10 (0.061, 0.136) | 0.90 (0.804, 0.995) | 0.10 (0.064, 0.137) | 1.04 (0.330, 1.743) |
| 1.0 | 0.10 (0.062, 0.143) | 0.96 (0.924, 1.004) | 0.10 (0.066, 0.143) | 1.02 (0.661, 1.379) |
| 1.5 | 0.11 (0.063, 0.166) | 0.83 (0.746, 0.924) | 0.12 (0.066, 0.166) | 1.21 (0.513, 1.914) |
| 2.0 | 0.08 (0.020, 0.135) | 0.87 (0.794, 0.949) | 0.10 (0.038, 0.155) | 1.10 (0.490, 1.715) |
| Time | | | | |
| 1 | 0.10 (0.101, 0.101) | 0.88 (0.875, 0.875) | 0.10 (0.101, 0.101) | 0.10 (0.101, 0.101) |
| 5 | 0.09 (0.053, 0.137) | 0.94 (0.891, 0.993) | 0.10 (0.054, 0.138) | 0.94 (0.130, 1.744) |
| 10 | 0.10 (0.059, 0.139) | 0.89 (0.802, 0.983) | 0.10 (0.061, 0.139) | 1.16 (0.397, 1.922) |

All metrics are averaged across 50 independent simulations over 30 time points from the test dataset under the default scenario. The shaded row indicates the default scenario.

The performance of TCS over time is examined across different scenarios in Table 3. The performance of TCS stands out when there is perfect overlapping ($\eta=1$).
In this case, the bias (as well as the RMSE) of the ATE and ITE estimates is about half of that in the reduced-overlap scenarios. Similarly, coverage is close to perfect when $\eta=1$, at 0.98 (0.953, 1.005). As the degree of overlap drops, there is no significant difference in estimation accuracy in terms of bias, but the coverage rate declines dramatically, from 0.90 (0.804, 0.995) when $\eta=0.9$ to 0.70 (0.644, 0.756) when $\eta=0.7$, owing to decreased confidence in the individual estimates. TCS is stable across scenarios with different covariate dimensions, from the default 6 confounders to the high-dimensional 40-confounder scenario. However, estimation accuracy declines with higher sample variance: when the sample variance is high ($V=2.0$), coverage drops by about 3% to 0.87 (0.794, 0.949) compared with the default scenario. Lastly, we found that the confidence interval widens over time, but there is no deterioration in effect estimation accuracy. In Appendix C, we repeated the above scenarios with two additional sample sizes, $N=3000$ and $N=10000$. We found that larger sample sizes improve estimation accuracy for more complex data (i.e., higher dimension and higher sample variance), but they cannot improve estimation for samples that lack overlap. For observational EHR data, using a sample with moderate to high overlap is necessary to achieve better estimation accuracy.

### 3.2 Treatment effect estimation with clinical data

Figure 4 (a) shows the estimated ATE in terms of the difference in survival probability using TCS and SNN with TMLE adjustment (labeled SNN+TMLE). We compared this absolute measure with the inverse empirical hazard ratio (i.e., $1/\text{HR}^{*}$; the inversion puts the hazard ratio in the same direction as the absolute difference in survival curves) and found that the two curves closely follow each other.
Panel (a) depicts the effect of using mechanical ventilation (MV) on the mortality of sepsis patients. The first 12 hours of data were used to estimate the treatment effect of MV on patient mortality over the subsequent 40 hours. We found that the empirical hazard ratio ranges from 0.947 to 0.999, suggesting a minimal impact of mechanical ventilation. The TCS estimates indicate that MV has an increasingly negative impact over time: by the end of follow-up, MV is expected to increase the probability of death by up to 4.39% (1.917%, 6.873%) according to TCS, or 3.04% (2.11%, 6.54%) according to TMLE.

Figure 4: (a) Estimated treatment effect and empirical hazard ratio (HR*) by benchmark algorithms. The shaded area is the 95% confidence interval of the individual treatment effect estimates. (b) Distributions of estimated individual treatment effects (ITE) averaged over follow-up time. Abbreviations: MV, mechanical ventilation; KM, the Kaplan-Meier estimator; TMLE, the SNN model with TMLE adjustment; Hazard Ratio*, the empirical hazard ratio.

However, the heterogeneity of the estimated treatment effect is salient. Figure 4 (b) shows the distributions of ITEs averaged over time, colored by the observed treatment condition. We see a negative average treatment effect of $-1.98\%(-3.066\%,-0.013\%)$ for patients in the control group, and a minimal positive effect of $1.04\%(-0.931\%,3.018\%)$ for patients administered mechanical ventilation.

## 4 Discussion

We have developed a novel causal inference algorithm to estimate individual potential survival response curves from time-varying observational data. It leverages information across individuals under different interventions via dedicated propensity layers and potential outcome subnetworks.
We demonstrated significant accuracy gains of TCS over plain recurrent neural networks in estimating individual and conditional average treatment effects. In addition to extensive simulations, we applied TCS to the MIMIC-III sepsis study. Compared with standard neural networks for binary outcome prediction, TCS performs similarly to the DeepHit and Super-Learner models in estimating survival curves (these results are presented in our working paper [24]), but it is superior to existing methods such as the Cox model at identifying treatment effect heterogeneity over time. In particular, TCS estimates the causal effect by computing adjusted potential survival curves under the treatment and control conditions. Using propensity scores as network inputs improves estimation accuracy by correcting for confounding bias. In addition, TCS learns from the pattern of missing confounders in a time series using masking layers rather than imputing their values, and it efficiently captures estimation uncertainty using an ensemble of networks with varied random seeds.

In this study, time-dependent patient histories were simulated to resemble observed data from observational electronic health records. Since deep learning techniques do not assume specific functional dependencies for treatments or outcomes, we expect our simulation results to hold under other choices of functional form. Nevertheless, more extensive testing of TCS under extreme scenarios, such as low overlap and the presence of unmeasured confounders, is desirable and left for future work. Our estimation of confidence intervals for ITEs is conservative and results in a high coverage ratio. However, we observed that the interval is responsive to the quality of the input data: for scenarios with high overlap and low sample variance, the corresponding confidence intervals are much narrower.
The use of network ensembles therefore effectively captures model uncertainty. When comparing the ATE estimates from TCS with those from traditional confounding adjustment methods, we found that using propensity scores as a regressor in the neural network achieves similar, if not better, performance than TMLE and IPW. For ITE estimation, TCS significantly outperforms both methods.

Much of the challenge of longitudinal causal inference lies in the definition of the treatment effect. In this study, the treatment effect for a given period $t=0$ to $t=q$ has been defined as the difference in survival probability under constant treatment vs. control throughout the follow-up window $t^{\prime}-u$ to $t^{\prime}+q$. With $n$ treatments, however, one faces the choice of contrasting $n(n-1)/2$ pairs. The situation becomes more complicated still when considering the effect of a treatment switch, where both the timing of the switch and the choice of effect contrast matter. Continuous treatments are similarly arduous to analyze. Nonparametric methods have been proposed to either discretise treatment options [25] or fit splines to estimate the treatment effect on a single day [26]; little has been discussed for time-dependent variables or treatments. One study [19] proposed using reinforcement learning to control the intravenous fluid dosage for sepsis patients, but it neither answers the question of causal effect nor adjusts for potential confounding bias. It would be interesting for future studies to explore time-dependent deconfounded treatment recommendations.

The proposed model is limited in its ability to capture bias arising from missing confounders or measurement errors; such bias cannot be reduced by collecting more data under the same experimental conditions.
This is reflected in our scenario analysis, where increasing the sample size cannot improve estimation accuracy if the data lack overlap. With observational data, the impact of overlap is often overlooked owing to the limited ability to identify and collect potential confounders. A recent study [27] found that 74 of 87 ($85.1\%$) articles on the impact of alcohol on ischemic heart disease risk spuriously ignored or eventually dismissed confounding in their conclusions. Although this study acknowledges the caveats of interpreting results from case studies, it will be important for future research to quantify the aleatoric uncertainty of data-adaptive models.

TCS fills a gap in causal inference using deep learning techniques for survival analysis: it accounts for time-dependent patient history, and its treatment effect estimates can easily be compared with the conventional literature, which uses relative measures of treatment effect. We expect TCS to be particularly useful for identifying and quantifying treatment effect heterogeneity over time in the ever more complex observational health care environment. In future work, we expect to improve TCS to further account for the feedback between covariates and time-varying treatments.

## Acknowledgment

This work was supported by the National Health and Medical Research Council, project grant no. 1125414.

## Appendix A Model implementation

The architecture of the TCS model is illustrated in Figure 5. The stacked input matrix is first used to estimate the probability of treatment assignment via a densely connected neural network with Long Short-Term Memory (LSTM) units [28]; the output of this network is a vector of propensity scores. For each time point $t$ between $t=0$ and $t=q$, the propensity score of receiving treatment $A=a$ is given by: $\displaystyle p(a(t)|\overline{x}(t)):=\;Pr(A(t)=a(t)|\overline{X}(t)=\overline{x}(t)),$ (7) where $\overline{x}(t)$ is the history of covariates from $-u$ to $t$ (inclusive).
In what follows, we denote $p(a(t)|\overline{x}(t))$ as $p(t)$ for simplicity.

Figure 5: Illustration of the TCS model. The input data is organized into a stacked matrix of covariates (number of observations (N) x covariate dimension (D) x [baseline steps (u) + follow-up steps (q)]). All missing data for future observations are masked. Abbreviations: $t^{\prime}=0$, the start of follow-up; $t^{\prime}+q$, the end of follow-up; $t^{\prime}-u$, the start of patient history; LSTM, Long Short-Term Memory.

We append the sequence of estimated propensities to the stacked matrix of $A(t),X(t)$ to obtain: $\displaystyle\begin{split}\Lambda&=\begin{bmatrix}p(-1)&x(-u)&x(-u+1)&\ldots&x(-1)&x^{\prime}(0)&x^{\prime}(1)&\ldots&x^{\prime}(q-1)&a(-1)\\\ p(0)&x(-u)&x(-u+1)&\ldots&x(-1)&x(0)&x^{\prime}(1)&\ldots&x^{\prime}(q-1)&a(0)\\\ p(\cdot)&x(-u)&x(-u+1)&\ldots&x(-1)&x(0)&\ldots&\ldots&x^{\prime}(q-1)&a(\cdot)\\\ \ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\\ p(q-1)&x(-u)&x(-u+1)&\ldots&x(-1)&x(0)&x(1)&\ldots&x(q-1)&a(q-1)\\\ \end{bmatrix},\\\ \end{split}$ (8) where the prime symbol indicates that the data are missing. TCS then maps the input $\Lambda$ to the outcome $\Gamma$: $\displaystyle\Gamma=f(\Lambda)$ (9) The potential outcomes under the treatment condition are computed as $\hat{\Gamma}_{1}=f(\Lambda_{1})$, where $\Lambda_{1}$ is obtained by setting all $a(t)=1$ in $\Lambda$. Similarly, the potential outcomes under the control condition are computed as $\hat{\Gamma}_{0}=f(\Lambda_{0})$, where $\Lambda_{0}$ is obtained by setting all $a(t)=0$.
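A minimal sketch of this counterfactual-input construction (assuming, as a simplification of Eq. (8), that the observed treatment sequence occupies a single column of $\Lambda$; the column index is illustrative):

```python
import numpy as np

def counterfactual_inputs(Lam, treat_col=-1):
    """Build Lambda_1 and Lambda_0 by overriding the treatment column.

    Lam: (q, F) stacked input matrix; here we assume the observed
    treatments a(t) sit in column `treat_col` (a simplification of Eq. (8)).
    """
    Lam1, Lam0 = Lam.copy(), Lam.copy()
    Lam1[:, treat_col] = 1.0  # always treated
    Lam0[:, treat_col] = 0.0  # never treated
    return Lam1, Lam0
```

The potential outcomes are then obtained by running the fitted network on both versions, $\hat{\Gamma}_{1}=f(\Lambda_{1})$ and $\hat{\Gamma}_{0}=f(\Lambda_{0})$.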
To train our neural network, we vectorize each individual's event/censoring time to construct the target outcome in Equation (1) and apply a loss function with two components:

1) The partial log-likelihood loss, i.e., the log-likelihood of the joint distribution of the first event and censoring time: $\displaystyle{\mathcal{L}}_{1}=\sum_{i}\bigg{[}\text{ln}(\gamma_{i}\hat{\Theta}_{i})+\text{ln}((1-\gamma_{i})\hat{\Theta}_{i})\bigg{]}$ (10) which is a vector representation of the ordinary partial log-likelihood loss: $\displaystyle{\mathcal{L}}_{\text{uncensored}}=\sum_{i}e_{i}\bigg{[}\text{ln}(\hat{\theta}_{i}(t_{i}))+\sum_{j=0}^{t_{i}-1}\text{ln}(1-\hat{\theta}_{i}(j))\bigg{]}$ (11) $\displaystyle{\mathcal{L}}_{\text{censored}}=\sum_{i}(1-e_{i})\bigg{[}\sum_{j=0}^{t_{i}-1}\text{ln}\big{(}1-\hat{\theta}_{i}(j)\big{)}\bigg{]},$ (12) where $e_{i}=1$ if $Y_{i}(t)=1$ for some $t$, and $e_{i}=0$ if $Y_{i}(t)=0$ for all $t$. Each element $\hat{\theta}$ of the estimate $\hat{\Theta}$ is the conditional hazard rate, i.e., the probability of experiencing an event in the interval $(t-1,t]$: $\displaystyle\hat{\theta}(t,\Gamma):=Pr(Y(t)=1|\Gamma).$ (13) ${\mathcal{L}}_{1}$ can then be written as: $\displaystyle{\mathcal{L}}_{1}$ $\displaystyle={\mathcal{L}}_{\text{uncensored}}+{\mathcal{L}}_{\text{censored}}$ (14) $\displaystyle=\sum_{i}\bigg{[}e_{i}\text{ln}(\hat{\theta}_{i}(t_{i}))+e_{i}\sum_{j=0}^{t_{i}-1}\text{ln}(1-\hat{\theta}_{i}(j))+(1-e_{i})\sum_{j=0}^{t_{i}-1}\text{ln}(1-\hat{\theta}_{i}(j))\bigg{]}$ (15) $\displaystyle=\sum_{i}\bigg{[}e_{i}\text{ln}(\hat{\theta}_{i}(t_{i}))+\sum_{j=0}^{t_{i}-1}\text{ln}(1-\hat{\theta}_{i}(j))\bigg{]}$ (16)

2) The rank loss, associated with the concordance index in survival analysis [22]: a subject who experiences an event at time $t$ should have a higher estimated probability of failure than a subject who does not or who is censored.
We count the number of acceptable (concordant) pairs of estimated hazard rates $\\{\hat{\theta}_{i}(t),\hat{\theta}_{j}(t)\\}$ in the loss function: $\displaystyle{\mathcal{L}}_{2}=\sum_{t}\sum_{i\neq j}I_{ij}$ where $I_{ij}$ is an indicator function: $I_{ij}=1$ if $Y_{i}(t)=1$, $Y_{j}(t)=0$ and $\hat{\theta}_{i}(t)>\hat{\theta}_{j}(t)$; $I_{ij}=0$ otherwise. The final loss function is defined as: $\displaystyle{\mathcal{L}}=\alpha{\mathcal{L}}_{1}+\beta{\mathcal{L}}_{2},$ where random search is used to find the best hyperparameters $\alpha$ and $\beta$. To capture the uncertainty of the neural network, we repeat model training with 20 different random seeds and average the results. Each estimated probability in $\hat{\Theta}$ is therefore the hazard rate adjusted for the probability of right censoring [29]. Following our previous work [16], the probability that an individual experiences an event after time $t$ can be written as a product of hazard terms describing the conditional probability that the event did not occur in each interval: $\displaystyle\begin{split}S(t,\Lambda)&=\prod_{j=1}^{t}(1-\hat{\theta}(j,\Lambda)).\end{split}$ (18) We use $S(t,\Lambda_{1})$ to denote the time-to-event probabilities given that the patient receives the treatment throughout the follow-up period, and $S(t,\Lambda_{0})$ to denote those given that the patient receives the control/comparator intervention throughout the follow-up period.
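The discrete-time survival curve in Eq. (18) and the per-subject log-likelihood in Eq. (16) can be sketched in a few lines of NumPy (a minimal illustration, not the authors' TensorFlow implementation):

```python
import numpy as np

def survival_curve(theta):
    """S(t, Lambda) = prod_{j<=t} (1 - theta(j)), as in Eq. (18)."""
    return np.cumprod(1.0 - theta)

def neg_log_likelihood(theta, t_i, e_i):
    """Per-subject discrete-time partial log-likelihood, as in Eq. (16).

    theta : estimated hazards theta_i(0), ..., theta_i(T-1)
    t_i   : index of the event/censoring interval
    e_i   : 1 if the event was observed, 0 if censored
    """
    ll = np.sum(np.log(1.0 - theta[:t_i]))  # event-free intervals
    if e_i:
        ll += np.log(theta[t_i])            # event in interval (t_i - 1, t_i]
    return -ll
```

For hazards theta = [0.1, 0.2, 0.5], survival_curve returns S = [0.9, 0.72, 0.36], i.e. the running product of interval-wise survival probabilities.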
## Appendix B Average treatment effect estimation adjustment

### B.1 Inverse probability weighting (IPW)

We apply the inverse probability weighting adjustment to the raw ATE estimate with the following equation: $\displaystyle\hat{\psi}_{IPW}(t)=\frac{1}{N}\sum_{i=1}^{n}\bigg{(}\frac{A_{i}\hat{Y}_{i}(t)}{\hat{P}(X_{i}(0))}-\frac{(1-A_{i})\hat{Y}_{i}(t)}{1-\hat{P}(X_{i}(0))}\bigg{)},\text{ where }t\in\\{0,1,\ldots,q\\}$ where $N$ is the sample size, $q$ is the maximum follow-up time, and $\hat{P}(X_{i}(0))$ is the propensity score, estimated as the probability of receiving the treatment at time 0 when the treatment assignment is time-invariant. When the treatment is time-variant, we estimate the propensity score at each time step as $\hat{P}(X_{i}(t))$. In this study, we estimated $\hat{P}(X_{i}(0))$ using a densely connected network fit to the binary treatment-assignment label of each subject $i$ at time 0.

### B.2 The iterative targeted maximum likelihood estimation (TMLE)

To apply the iterative targeted maximum likelihood estimation adjustment, we conduct the following steps at each time step:

1\. We first calculate the smart covariates $H(A,X(t))$ using the propensity scores estimated as described above: $\displaystyle H(1,X_{i}(t))=\frac{A_{i}}{\hat{P}(X_{i}(0))};\quad H(0,X_{i}(t))=\frac{1-A_{i}}{1-\hat{P}(X_{i}(0))}$

2\. We then fit the residual of the initial estimate of the logit of the binary label with the smart covariates using an intercept-free regression: $\displaystyle logit(\hat{Y}_{i}(t))-logit(Y_{i}(t))=\delta_{1}(t)H(1,X_{i}(t))+\delta_{0}(t)H(0,X_{i}(t))$ where $logit(x)$ denotes the function $log(\frac{x}{1-x})$.

3\. Calculate the adjusted potential outcomes: $\displaystyle\hat{Y}_{A}(t)^{1}=\text{expit}\bigg{(}logit(\hat{Y}_{A}(t))+\frac{\delta_{A}(t)}{\hat{P}_{A}(X_{i}(0))}\bigg{)},\text{ for }A\in\\{0,1\\}$ where $\text{expit}$ is the inverse of the logit function, $\hat{P}_{1}(X_{i}(0))=\hat{P}(X_{i}(0))$ and $\hat{P}_{0}(X_{i}(0))=1-\hat{P}(X_{i}(0))$.

4\.
Targeted estimate of the ATE at time $t$: $\displaystyle{\widehat{\psi}}_{TMLE}(t)=\frac{1}{N}\sum_{i=1}^{n}(\hat{Y}_{1}(t)^{1}-\hat{Y}_{0}(t)^{1})$

## Appendix C The structure of the TCS masking and outcome layers

TCS estimates the difference between the potential survival curves under the treatment and control conditions to compute the estimated individual treatment effect (ITE) curve. Following the notation of the main manuscript, the masking and outcome layers of the TCS model introduced in Figure 5 are as follows:

* • A masking layer that accounts for informative missingness in longitudinal data [30], consisting of two representations of the missingness pattern: a masking vector $M_{t}\in\\{0,1\\}^{D}$ denoting which variables are missing at time $t$, and a real-valued matrix $\delta_{t}\in\mathbb{R}^{D,q}$ capturing, for each variable $d$, the time interval since its last observation over the $q$ time points. The masking layer takes as input the matrix $[\overline{X}(u),\overline{A}(u)]$ and produces as output the matrix $[\overline{M}(u),\overline{\delta}(u),\overline{X}(u),\overline{A}(u)]$, where the overlines indicate the corresponding vectors observed during the history window $[t-u,t-1]$. This layer effectively exploits the missing-data patterns to achieve better predictions;

* • The potential outcome layers predict the log odds of the binary outcomes $Y^{M}$ given $[\overline{M}(u),\overline{\delta}(u),\overline{X}(u),\overline{A}(u)=1]$ and $[\overline{M}(u),\overline{\delta}(u),\overline{X}(u),\overline{A}(u)=0]$, and then convert the log odds into conditional survival probabilities to form the potential survival curves under each treatment condition.

## Appendix D Additional simulation results

Figure 6: Individual treatment effect (ITE) estimation bias by scenario. All metrics are averaged across 50 independent simulations over 10 time points from the test dataset under each scenario. Bias: average absolute percentage bias.
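For concreteness, the single-time-point IPW estimator $\hat{\psi}_{IPW}(t)$ from Appendix B.1 can be sketched in NumPy (a minimal illustration; the array names are ours, not the authors'):

```python
import numpy as np

def ipw_ate(A, Y_hat, p_hat):
    """IPW-adjusted ATE at one time point, as in Appendix B.1.

    A     : (N,) observed treatment indicators A_i
    Y_hat : (N,) predicted outcomes Y_hat_i(t)
    p_hat : (N,) estimated propensity scores P_hat(X_i(0))
    """
    return np.mean(A * Y_hat / p_hat - (1 - A) * Y_hat / (1 - p_hat))
```

With balanced propensities (p_hat = 0.5 everywhere) and identical outcomes in both arms, the estimator returns 0, as expected for no treatment effect.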
## Appendix E Descriptive statistics for empirical databases

Table 4: Descriptive statistics for case study 2

 | Count | Mean | SD | Q1 (25%) | Q3 (75%)
---|---|---|---|---|---
Unique ID | 6225 | | | |
Rows | 98716 | | | |
Death (1 = Yes, 0 = No) | 459 (7.4%) | | | |
Vasopressor Dosage (µg/kg/min) | | 0.29 | 1.913 | 0.00 | 0.14
Follow-Up Hours | | 35.16 | 20.354 | 16.00 | 52.00
Surgery | 350 (5.6%) | | | |
Age | | 64.72 | 14.074 | 57.00 | 75.00
Gender (1 = Male, 0 = Female) | 3647 (58.6%) | | | |
Glasgow Coma Scale (GCS) | | 169.35 | 16.448 | 162.60 | 177.80
Heart Rate (Bp/S) | | 83.38 | 28.676 | 73.15 | 100.13
Spo2 (%) | | 89.41 | 25.472 | 94.44 | 98.93
Respiratory Rate (Breaths/Min) | | 18.24 | 8.163 | 15.04 | 22.85
Non-Invasive BP Systolic (Mmhg) | | 84.63 | 51.103 | 62.25 | 117.88
Non-Invasive BP Diastolic (Mmhg) | | 46.17 | 28.420 | 28.99 | 65.25
Non-Invasive BP Mean (Mmhg) | | 56.16 | 34.852 | 0.00 | 79.25
Temperature (Celsius) | | 27.61 | 16.120 | 0.00 | 37.20
Shock Index | | 0.61 | 0.418 | 0.00 | 0.88
Sodium (Mmol/L) | | 43.46 | 64.717 | 0.00 | 135.00
Potassium (Mmol/L) | | 1.44 | 2.016 | 0.00 | 3.70
Chloride (Mmol/L) | | 31.21 | 48.566 | 0.00 | 99.00
Glucose (Mg/Dl) | | 45.94 | 81.823 | 0.00 | 101.00
Blood Urea Nitrogen (BUN, Mg/Dl) | | 9.41 | 19.110 | 0.00 | 13.00
Creatinine (Mg/Dl) | | 0.55 | 1.206 | 0.00 | 0.73
Magnesium (Mg/Dl) | | 0.42 | 0.859 | 0.00 | 0.00
Calcium (Mg/Dl) | | 2.23 | 3.592 | 0.00 | 6.90
Total Bilirubin (Mg/Dl) | | 0.21 | 1.206 | 0.00 | 0.00
AST (SGOT) (Units/L) | | 62.77 | 643.250 | 0.00 | 0.00
ALT (SGPT) (Units/L) | | 35.97 | 319.727 | 0.00 | 0.00
Albumin (G/Dl) | | 0.35 | 0.928 | 0.00 | 0.00
Hgb (G/Dl) | | 2.94 | 4.751 | 0.00 | 8.00
White Blood Cell Count (K/Mcl) | | 3.67 | 7.867 | 0.00 | 2.00
Platelets Count (K/Mcl) | | 42.32 | 87.025 | 0.00 | 34.00
Partial Thromboplastin Time (PTT, Sec) | | 4.69 | 16.312 | 0.00 | 0.00
Prothrombin Time (PT, Sec) | | 2.33 | 7.374 | 0.00 | 0.00
International Normalized Ratio (INR) | | 0.23 | 0.739 | 0.00 | 0.00
Arterial Ph | | 1.92 | 3.228 | 0.00 | 7.13
Pao2 (Mmhg) | | 30.49 | 62.572 | 0.00 | 54.00
Paco2 (Mmhg) | | 10.62 | 18.797 | 0.00 | 25.00
Base Excess (Meq/L) | | -0.96 | 3.644 | 0.00 | 0.00
Fio2 (%) | | 14.13 | 28.029 | 0.00 | 0.00
HCO3 (Mmol/L) | | 5.27 | 9.702 | 0.00 | 0.00
Lactate (Mmol/L) | | 0.57 | 2.069 | 0.00 | 0.00
Pre-Admission Fluid Input (Ml) | | 390.38 | 2657.466 | 0.00 | 0.00
Pre-Admission Fluid Output (Ml) | | 522.39 | 2513.307 | 0.00 | 100.00
Pre-Admission Balance (Ml) | | -132.01 | 3072.695 | 0.00 | 0.00
Fluid Input (Ml/4 Hours) | | 54.88 | 316.422 | 0.00 | 0.00
Fluid Output (Ml/4 Hours) | | 32.36 | 199.217 | 0.00 | 0.00
Fluid Balance (Ml/4 Hours) | | 22.52 | 328.079 | 0.00 | 0.00

## References

* [1] Blanca Gallego, Adam G Dunn, and Enrico Coiera. Role of electronic health records in comparative effectiveness research. Journal of Comparative Effectiveness Research, 2(6):529–532, 2013.
* [2] T Wendling, K Jung, A Callahan, A Schuler, N H Shah, and B Gallego. Comparing methods for estimation of heterogeneous treatment effects using observational data from health care databases. Statistics in Medicine, 37(23):3309–3324, 2018.
* [3] Richard L Kravitz, Naihua Duan, and Joel Braslow. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. The Milbank Quarterly, 82(4):661–687, 2004.
* [4] Anand Kumar, Daniel Roberts, Kenneth E Wood, Bruce Light, Joseph E Parrillo, Satendra Sharma, Robert Suppes, Daniel Feinstein, Sergio Zanotti, Leo Taiberg, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Critical Care Medicine, 34(6):1589–1596, 2006.
* [5] Alistair E W Johnson, Jerome Aboab, Jesse D Raffa, Tom J Pollard, Rodrigo O Deliberato, Leo A Celi, and David J Stone. A comparative analysis of sepsis identification methods in an electronic database. Critical Care Medicine, 46(4):494–499, 2018.
* [6] David R Cox.
Regression models and life tables (with discussion). Journal of the Royal Statistical Society, Series B, 34(2):187–220, 1972.
* [7] Hans C Van Houwelingen. Dynamic prediction by landmarking in event history analysis. Scandinavian Journal of Statistics, 2007.
* [8] Julie C Recknor and Alan J Gross. Fitting survival data to a piecewise linear hazard rate in the presence of covariates. Biometrical Journal, 1994.
* [9] R Henderson. Joint modelling of longitudinal measurements and event time data. Biostatistics, 2000.
* [10] Joseph G Ibrahim, Haitao Chu, and Liddy M Chen. Basic concepts and methods for joint models of longitudinal and survival data, 2010.
* [11] Ioana Bica, Ahmed M Alaa, James Jordon, and Mihaela van der Schaar. Estimating counterfactual treatment outcomes over time through adversarially balanced representations. arXiv preprint arXiv:2002.04083, 2020.
* [12] Changhee Lee, Jinsung Yoon, and Mihaela van der Schaar. Dynamic-DeepHit: A deep learning approach for dynamic survival analysis with competing risks based on longitudinal data. IEEE Transactions on Biomedical Engineering, 67(1):122–133, 2020.
* [13] John Blitzer, Ryan McDonald, and Fernando Pereira. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2006.
* [14] Sherri Rose and Mark J van der Laan. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer, 2011.
* [15] Susan Athey, Julie Tibshirani, and Stefan Wager. Generalized random forests. The Annals of Statistics, 47(2):1148–1178, 2019.
* [16] Jie Zhu and Blanca Gallego. Targeted estimation of heterogeneous treatment effect in observational survival analysis. Journal of Biomedical Informatics, page 103474, 2020.
* [17] A Johnson, T Pollard, and L Shen. MIMIC-III, a freely accessible critical care database.
Scientific Data, 3:160035, 2016.
* [18] C W Seymour. Assessment of clinical criteria for sepsis: for the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA, 315:762–774, 2016.
* [19] Matthieu Komorowski, Leo A Celi, Omar Badawi, Anthony C Gordon, and A Aldo Faisal. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nature Medicine, 24(11):1716–1720, 2018.
* [20] Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.
* [21] Michael J Crowther and Paul C Lambert. Simulating biologically plausible complex survival data. Statistics in Medicine, 32(23):4118–4134, 2013.
* [22] Frank E Harrell, Robert M Califf, David B Pryor, Kerry L Lee, and Robert A Rosati. Evaluating the yield of medical tests. Journal of the American Medical Association, 247(18):2543–2546, 1982.
* [23] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
* [24] Jie Zhu and Blanca Gallego. Dynamic prediction of time to event with survival curves, 2021.
* [25] David Powell. Quantile treatment effects in the presence of covariates. Review of Economics and Statistics, 102(5):994–1005, 2020.
* [26] Edward H Kennedy, Zongming Ma, Matthew D McHugh, and Dylan S Small. Nonparametric methods for doubly robust estimation of continuous treatment effects. Journal of the Royal Statistical Society, Series B, 79(4):1229, 2017.
* [27] Joshua D Wallach, Stylianos Serghiou, Lingzhi Chu, Alexander C Egilman, Vasilis Vasiliou, Joseph S Ross, and John P A Ioannidis. Evaluation of confounding in epidemiologic studies assessing alcohol consumption on the risk of ischemic heart disease.
BMC Medical Research Methodology, 20(1):1–10, 2020.
* [28] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
* [29] Michael F Gensheimer and Balasubramanian Narasimhan. A scalable discrete-time survival model for neural networks. PeerJ, 7:e6257, 2019.
* [30] Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. Scientific Reports, 8(1):1–12, 2018.
# Analyzing Zero-shot Cross-lingual Transfer in Supervised NLP Tasks

Hyunjin Choi, Judong Kim, Seongho Joe, Seungjai Min, Youngjune Gwon Samsung SDS

###### Abstract

In zero-shot cross-lingual transfer, a supervised NLP task trained on a corpus in one language is directly applicable to another language without any additional training. A source of cross-lingual transfer can be as straightforward as lexical overlap between languages (_e.g._ , use of the same scripts, shared subwords) that naturally forces text embeddings to occupy a similar representation space. Recently introduced cross-lingual language model (XLM) pretraining identifies neural parameter sharing in Transformer-style networks as the most important factor for the transfer. In this paper, we aim to validate the hypothetically strong cross-lingual transfer properties induced by XLM pretraining. In particular, we use XLM-RoBERTa (XLM-R) in experiments that extend semantic textual similarity (STS), SQuAD and KorQuAD for machine reading comprehension (MRC), sentiment analysis, and alignment of sentence embeddings to various cross-lingual settings. Our results indicate that the presence of cross-lingual transfer is most pronounced in STS, followed by sentiment analysis, with MRC last. That is, the more complex the downstream task, the weaker the degree of cross-lingual transfer. All of our results are empirically observed and measured, and we make our code and data publicly available.

## I Introduction

Pretraining language models at a large scale has dramatically improved natural language understanding. According to a comprehensive analysis [1] of the limitations of pretraining a multilingual model, adding more languages improves cross-lingual performance for low-resource languages only up to a certain point. This phenomenon, dubbed the curse of multilinguality, can only be alleviated by scaling up the model size.
Recent experimental results show that multilingual models can outperform their monolingual counterparts. For low-resource languages that lack labeled examples, such results are an encouraging breakthrough for building NLP applications. In cross-lingual language understanding, XLM by Conneau & Lample [2], despite being pretrained with only masked language modeling (MLM), has reported state-of-the-art results on downstream benchmarks. Shared lexical features (_e.g._ , subwords, scripts, anchor points) across languages have been suspected as the primary source of the language-independent representations that enable cross-lingual transfer. Recent studies, however, show that the parameter sharing induced by the Transformer architecture is instead the most attributable factor.

We are motivated by this progress in language modeling. This work focuses on an empirical analysis of cross-lingual transfer in supervised NLP tasks fine-tuned on top of an XLM. In particular, we are interested in zero-shot transfer settings, where no additional training is done on target-language examples after fine-tuning in the source language. We experiment with XLM-RoBERTa (XLM-R) [1], a large XLM with 550 million parameters and a 250k vocabulary, by extending semantic textual similarity, SQuAD [3] & KorQuAD [4] question answering, and sentiment classification to various cross-lingual settings. Finally, going beyond previous work that has attempted to align word embeddings across different languages [5], we compute a projection that directly maps sentence embeddings of one language to those of another. We then analyze the effect of this fine-grained alignment of sentences across languages on the quality of zero-shot cross-lingual transfer, as manifested through the aforementioned NLP task performances measured empirically. We make the following contributions.
* • We provide rigorous results on the cross-lingual transfer present in three important supervised NLP tasks that require high-level natural language understanding, namely STS, MRC, and sentiment classification.
* • We propose to directly compute a cross-lingual mapping that aligns sentence embeddings of different languages, whereas previous work has focused on word-level embeddings.
* • We furthermore show the benefits of fine-grained cross-lingual sentence alignment, which enables directly comparing sentences from different languages in sentence-pair regression tasks.

The rest of this paper is organized as follows. Section II describes our approach and presents the zero-shot cross-lingual evaluation framework. Section III discusses our experimental methodology and empirical results. Section IV concludes the paper.

Figure 1: Zero-shot cross-lingual evaluation

## II Our Approach

XLM pretraining is known to effectively promote cross-lingual transfer, whereby a supervised model fine-tuned in one language can be applied to another without additional training.

### II-A Zero-shot Cross-lingual Evaluation Framework

We propose a simple approach to transfer a supervised model learned in one language to another for zero-shot cross-lingual evaluation, as illustrated in Fig. 1. First, we start from a pretrained XLM; in our case, the 550-million-parameter XLM-RoBERTa (XLM-R) [1] trained on 100 languages. We then fine-tune XLM-R for a downstream task using labeled data in language A. Lastly, we evaluate the fine-tuned task in both languages A and B. Note that running a test set from language B through the model fine-tuned on language A evaluates zero-shot cross-lingual transfer.

### II-B Sentence Embedding and Pair Modeling

Transformer [6] models such as BERT produce contextualized representations that are central to building high-performance downstream tasks.
XLM-R is a BERT variant whose output constitutes token embeddings (up to 512 token vectors of 768 dimensions each) for a given input. To produce the fixed-size sentence embeddings necessary for a task like semantic textual similarity (STS), we average the token embedding output to obtain a single 768-dimensional pooled vector. For text regression (or classification), one learns a function that maps sentence embeddings to a target value. Sentence-pair modeling is an important primitive underlying supervised NLP tasks such as STS. We adopt the siamese network architecture of Sentence-BERT [7], which avoids the combinatorial explosion of forming sentence pairs. Fig. 2 depicts our sentence-pair modeling for STS. Figure 2: Siamese net for sentence-pair modeling ### II-C Cross-lingual Mapping for Fine-grained Alignment of Sentence Embeddings Cross-lingual mapping for word embeddings has been widely studied. Because context awareness is key to language understanding, learning cross-lingual mappings at the sentence level can be valuable. A sentence is less ambiguous than its individual words, since words must be interpreted within a specific context. We learn cross-lingual sentence mappings directly from sentence-pair examples. Note that sentence embeddings produced from contextualized cross-lingual word embeddings are only loosely aligned across languages. Similar to the projection-based cross-lingual word embedding framework [5, 8], we use linear algebraic methods to compute a projection matrix that achieves fine-grained alignment of sentence embeddings across different languages. We also use a single-layer neural net that can iteratively learn the same projection by gradient descent. System of least squares via the normal equation. Suppose languages $A$ and $B$ are the source and target languages of the projection $\mathbf{\Phi}$.
We seek the solution to the problem $\mathbf{S}_{A}\mathbf{\Phi}=\mathbf{S}_{B}$ with $\displaystyle\mathbf{S}_{A}=\begin{bmatrix}\mathbf{s}_{A}^{(1)}\\\ \mathbf{s}_{A}^{(2)}\\\ \vdots\\\ \mathbf{s}_{A}^{(n)}\end{bmatrix},~{}\mathbf{S}_{B}=\begin{bmatrix}\mathbf{s}_{B}^{(1)}\\\ \mathbf{s}_{B}^{(2)}\\\ \vdots\\\ \mathbf{s}_{B}^{(n)}\end{bmatrix},~{}\mathbf{s}_{A}^{(i)}=\begin{bmatrix}a_{1}^{(i)}&a_{2}^{(i)}&\dots&a_{d}^{(i)}\end{bmatrix},~{}\mathbf{s}_{B}^{(i)}=\begin{bmatrix}b_{1}^{(i)}&b_{2}^{(i)}&\dots&b_{d}^{(i)}\end{bmatrix}$ (1) where $\mathbf{S}_{A}$ and $\mathbf{S}_{B}$ are datasets containing $n$ sentence embeddings for languages $A$ and $B$, each sentence embedding being a row vector $\mathbf{s}\in\mathbb{R}^{d}$. With $\mathbf{\Phi}=\begin{bmatrix}\boldsymbol{\phi}^{(1)}&\boldsymbol{\phi}^{(2)}&\dots&\boldsymbol{\phi}^{(j)}&\dots&\boldsymbol{\phi}^{(d)}\end{bmatrix}$ whose elements $\boldsymbol{\phi}^{(j)}\in\mathbb{R}^{d}$ are column vectors, each $\mathbf{S}_{A}\,\boldsymbol{\phi}^{(j)}=[b_{j}^{(1)}\;b_{j}^{(2)}\;\dots\;b_{j}^{(n)}]^{\top}$ gives a least-squares problem. Since $j=1,\dots,d$, we have a system of $d$ least-squares problems that can be solved in closed form via the normal equation: $\mathbf{\Phi}^{*}=\left(\mathbf{S}_{A}^{\top}\mathbf{S}_{A}\right)^{-1}\mathbf{S}_{A}^{\top}\mathbf{S}_{B}$. Solving the Procrustes problem.
Given two data matrices, a source $\mathbf{S}_{A}$ and a target $\mathbf{S}_{B}$, the orthogonal Procrustes problem [9] is a matrix approximation problem searching for an orthogonal projection that most closely maps $\mathbf{S}_{A}$ to $\mathbf{S}_{B}$. Formally, we write $\displaystyle\mathbf{\Psi}^{*}=\operatorname*{argmin}_{\mathbf{\Psi}}\left\|\mathbf{S}_{A}\mathbf{\Psi}-\mathbf{S}_{B}\right\|_{\textrm{F}}~{}~{}~{}~{}\textrm{s.t.}~{}\mathbf{\Psi}^{\top}\mathbf{\Psi}=\mathbf{I}$ (2) The solution to Eq. (2) has the closed form $\mathbf{\Psi}^{*}=\mathbf{U}\mathbf{V}^{\top}$ with $\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}=\textrm{SVD}(\mathbf{S}_{A}^{\top}\mathbf{S}_{B})$, where SVD is the singular value decomposition. Fully-connected single-layer neural net with linearly activated neurons. In contrast to the linear algebraic solutions $\mathbf{\Phi}^{*}$ and $\mathbf{\Psi}^{*}$, a neural net can compute the projection matrix iteratively via gradient descent. We consider a fully-connected single-layer neural net with linear activation functions, as illustrated in Fig. 3. We use the neural net as an array of linear regressors with mean squared error (MSE) objectives $\displaystyle\mathbf{S}_{A}\mathbf{W}=\mathbf{S}_{B}~{}\textrm{(feedforward)}~{}~{}\textrm{s.t.}~{}~{}\frac{1}{2}\left\|\mathbf{S}_{A}\mathbf{w}^{(j)}-\mathbf{b}^{(j)}\right\|_{2}^{2}<\epsilon~{}~{}\forall j$ (3) where $\mathbf{W}=[\mathbf{w}^{(1)}\,\mathbf{w}^{(2)}\dots\mathbf{w}^{(j)}\dots\mathbf{w}^{(d)}]$ contains the weight parameters of the neural net and $\mathbf{b}^{(j)}=[b_{j}^{(1)}\dots b_{j}^{(n)}]^{\top}$ is the $j$-th column of $\mathbf{S}_{B}$. Instead of a cross-entropy loss, we impose the MSE loss function to optimize each $\mathbf{w}^{(j)}$ by stochastic gradient descent. Figure 3: Fully-connected single-layer neural net with neurons having linear activation functions. TABLE I: Evaluation on STS tasks. Numbers represent Spearman (Pearson) correlations in percent.
| | Fine-tuning Task(s) | English | Korean | Spanish | Arabic |
|---|---|---|---|---|---|
| Zero-shot | STSb (English) | 87.44 (87.43) | 82.34 (82.27) | 85.58 (87.02) | 72.67 (70.54) |
| | KorSTS (Korean) | 84.47 (84.40) | 83.38 (83.16) | 84.94 (85.00) | 70.99 (69.66) |
| Mixed Language Fine-tuning | STSb $\rightarrow$ KorSTS | 86.43 (86.47) | 83.54 (83.42) | 85.47 (86.05) | 73.85 (73.39) |
| | KorSTS $\rightarrow$ STSb | 88.33 (88.34) | 85.12 (85.12) | 86.77 (87.83) | 73.37 (72.37) |
| | STSb + KorSTS | 87.71 (87.84) | 84.37 (84.48) | 86.53 (86.99) | 75.72 (75.22) |

The columns indicate the evaluation language.

TABLE II: Evaluation on MRC tasks. Numbers represent F1 scores; numbers in parentheses are exact match scores.

| | Fine-tuning Task(s) | English | Korean | Spanish |
|---|---|---|---|---|
| Zero-shot | SQuAD (English) | 88.81 (81.68) | 80.92 (45.08) | 72.07 (53.18) |
| | KorQuAD (Korean) | 72.03 (61.93) | 89.58 (65.29) | 58.65 (43.09) |
| | SQuAD-es (Spanish) | 84.75 (74.51) | 78.87 (42.76) | 76.11 (59.68) |
| Mixed Language Fine-tuning | SQuAD $\rightarrow$ KorQuAD | 85.81 (77.16) | 90.17 (66.02) | 70.54 (52.40) |
| | SQuAD $\rightarrow$ SQuAD-es | 86.73 (76.78) | 78.16 (36.87) | 76.70 (59.87) |
| | KorQuAD $\rightarrow$ SQuAD | 89.16 (82.20) | 88.42 (62.83) | 72.78 (53.92) |
| | SQuAD + KorQuAD | 84.41 (75.93) | 86.79 (62.45) | 67.72 (48.49) |
| | SQuAD + KorQuAD + SQuAD-es | 89.29 (81.98) | 90.41 (66.36) | 76.75 (59.66) |

The columns indicate the evaluation language.

## III Experiments

Throughout all our experiments, we use the pretrained XLM-RoBERTa (XLM-R) [1] downloaded from Hugging Face (https://huggingface.co/) [10] unmodified, upon which we build and fine-tune supervised NLP tasks. We focus on experimenting with sentence-level representations and evaluating their cross-lingual transfer quality when used in downstream tasks under zero-shot settings. There is a considerable amount of existing literature on evaluating the cross-lingual transfer quality of word representations, which we do not cover in this paper.

### III-A Semantic Textual Similarity (STS)

Task & dataset.
Our first cross-lingual experiments are on the STS benchmark (STSb) [11], Korean STS (KorSTS) [12], SemEval-2017 Spanish, and SemEval-2017 Arabic. STSb is a set of English data originating from the STS task evaluations at the International Workshop on Semantic Evaluation (SemEval) [13, 14, 15, 16, 17] between 2012 and 2017. STSb is distributed as one of the four similarity and paraphrase tasks in the GLUE benchmark [18]. The STSb dataset includes 8,628 sentence pairs from image captions, news headlines, and user forums, partitioned into train (5,749), dev (1,500), and test (1,379) sets. The STSb sentence pairs are labeled with a similarity score ranging from 0 to 5 that indicates how semantically related the sentences are. KorSTS is translated from STSb and has exactly the same structure. SemEval-2017 Spanish and Arabic are evaluation sets from SemEval-2017 Task 1 [19], with 250 test pairs per language. Fine-tuning. We run the GLUE benchmark code as-is from Hugging Face to fine-tune the STS tasks. This means that a text input to XLM-R is in the Sentence A–[SEP]–Sentence B format, the same as in pretraining. We use the Rectified Adam (RAdam) optimizer with a linear learning rate warm-up over 10% of the training data and a learning rate of $4\times 10^{-5}$. We run 4 training epochs with a batch size of 32. To evaluate zero-shot cross-lingual transfer, we fine-tune on the STSb train set and test on the STSb, KorSTS, SemEval-2017 Spanish, and SemEval-2017 Arabic test sets, and similarly for fine-tuning and testing on KorSTS. Furthermore, we carry out the following mixed-language settings: 1) fine-tune first on STSb and then on KorSTS; 2) fine-tune first on KorSTS and then on STSb; 3) fine-tune on sentence-pair examples drawn uniformly from STSb and KorSTS. Results. The upper portion of Table I reports the STS performance for zero-shot cross-lingual testing with 4 languages.
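As an aside on the fine-tuning setup above, the warm-up schedule arithmetic can be sketched in plain Python. This is a hypothetical reconstruction from the stated hyperparameters (5,749 STSb train pairs, batch size 32, 4 epochs, 10% warm-up); the GLUE script's exact step accounting may differ:

```python
import math

train_examples = 5749   # STSb train set size
batch_size = 32
epochs = 4

steps_per_epoch = math.ceil(train_examples / batch_size)
total_steps = steps_per_epoch * epochs
warmup_steps = int(0.1 * total_steps)   # warm-up over 10% of training

def lr_at(step, peak_lr=4e-5):
    """Linear warm-up to peak_lr, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Under these assumptions the schedule ramps up over the first 72 of 720 optimizer steps and decays linearly afterwards.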
We immediately find that cross-lingual transfer is strong for STS. When fine-tuned on English (the STSb train set), zero-shot testing on Korean results in a 1.24% decrease in Spearman's rank correlation relative to in-language fine-tuning. On the other hand, when fine-tuned on the KorSTS train set, zero-shot testing on English results in a 3.40% degradation. For Spanish and Arabic, we observe better performance when fine-tuning on English. We find particularly low scores for Arabic and suspect that it is a relatively lower-resource language compared to the others. In fact, XLM-R uses 28.0GB of Arabic text, while 54.2GB is used for Korean, 53.3GB for Spanish, and 300.8GB for English [1]. The lower portion of Table I shows how two-stage fine-tuning with two different languages affects the performance in each language. Although the performance numbers are similar regardless of the fine-tuning order, the language fine-tuned last slightly outperforms the others. ### III-B Machine Reading Comprehension (MRC) Task & dataset. Reading comprehension has been one of the most challenging tasks for machines, combining natural language understanding and generation with knowledge about the world. We use the Stanford Question Answering Dataset (SQuAD) [3], the Korean Question Answering Dataset (KorQuAD) [4], and Spanish SQuAD (SQuAD-es) [20] for the cross-lingual transfer evaluation of machine reading comprehension (MRC) tasks. SQuAD and KorQuAD consist of crowdsourced question-answer pairs from English and Korean Wikipedia articles, respectively. SQuAD-es is a Spanish translation of SQuAD. Using SQuAD v1.1, KorQuAD v1.0, and SQuAD-es v1.1, we perform the following eight cross-lingual MRC tasks. We prepare three copies of XLM-R and fine-tune them on 1) SQuAD, 2) KorQuAD, and 3) SQuAD-es for testing on the SQuAD (English), KorQuAD (Korean), and SQuAD-es (Spanish) dev sets.
We then fine-tune cross-lingually again, using 4) KorQuAD on the SQuAD fine-tuned XLM-R, 5) SQuAD-es on the SQuAD fine-tuned XLM-R, and 6) SQuAD on the KorQuAD fine-tuned XLM-R, for another round of testing on the dev sets. Additionally, we fine-tune XLM-R on 7) the mixed set of SQuAD and KorQuAD and 8) the mixed set of SQuAD, KorQuAD, and SQuAD-es. Fine-tuning. We use the RAdam optimizer with a linear learning rate warm-up over 10% of the training data and a learning rate of $2\times 10^{-5}$. We have found that running just 3 training epochs with a batch size of 48 is sufficient. Results. The upper portion of Table II reports the cross-lingual MRC performance evaluated on the SQuAD, KorQuAD, and SQuAD-es dev sets. For SQuAD fine-tuning, zero-shot testing on Korean and Spanish degrades F1 score by 9.67% and 5.30%, respectively. (Here, the baseline for each language is its dev set evaluated on XLM-R fine-tuned on that language's own train set, e.g. the KorQuAD dev set on KorQuAD fine-tuned XLM-R.) Fine-tuned on KorQuAD, however, zero-shot testing on English and Spanish degrades 18.89% and 22.94%, respectively. The results with SQuAD-es show 4.57% and 11.96% decreases for English and Korean. Compared to the STS tasks, the performance gap, measured in F1 score and exact match, is much larger for the MRC tasks. The lower portion of Table II reports the cross-lingual MRC performance for the mixed language fine-tuning cases. The results show a trend similar to the STS tasks. In general, fine-tuning with an additional language seems to improve MRC performance regardless of the testing language. Fine-tuning with all languages yields the best MRC performance, as shown in the last row of Table II. ### III-C Sentiment Analysis Task & dataset. For sentiment analysis, we use two datasets of similar origin, namely the Large Movie Review Dataset (LMRD) [21] and the Naver Sentiment Movie Corpus (NSMC) [22]. LMRD is a movie review dataset in English. It provides a set of 50,000 reviews with labels indicating whether a review is positive or negative.
NSMC uses the same labeling scheme for movie reviews written in Korean and consists of 200,000 reviews. Using LMRD and NSMC, we run five cross-lingual evaluations: fine-tune using 1) LMRD, 2) NSMC, 3) NSMC on the LMRD fine-tuned XLM-R, 4) LMRD on the NSMC fine-tuned XLM-R, and 5) the mixed set of LMRD and NSMC. All of these tasks are evaluated on the LMRD and NSMC test sets. Fine-tuning. Again using the RAdam optimizer with a linear learning rate warm-up over 5% of the training data and a learning rate of $4\times 10^{-5}$, we run 5 training epochs with a batch size of 48. Results. The upper portion of Table III presents the zero-shot cross-lingual transfer results on the sentiment analysis tasks. The numbers represent classification accuracy in percent. Zero-shot testing with NSMC (Korean) on the LMRD fine-tuned XLM-R results in a 12.05% accuracy degradation, whereas zero-shot testing with English shows a 7.63% decrease in classification accuracy. The lower portion of Table III presents the cross-lingual sentiment analysis performance for the mixed language fine-tuning cases. Here, the performance of the language fine-tuned last improves while that of the language fine-tuned first degrades a little. When fine-tuned on the train set mixed from both languages, the sentiment analysis performance improves for both languages. TABLE III: Evaluation on sentiment classification tasks. The numbers represent classification accuracy in percent.
| | Fine-tuning Task(s) | English | Korean |
|---|---|---|---|
| Zero-shot | LMRD (English) | 93.52 | 79.24 |
| | NSMC (Korean) | 86.38 | 90.10 |
| Mixed Language Fine-tuning | LMRD $\rightarrow$ NSMC | 90.65 | 90.12 |
| | NSMC $\rightarrow$ LMRD | 93.69 | 89.47 |
| | LMRD + NSMC | 93.80 | 90.24 |

The columns indicate the evaluation language.

### III-D Cross-lingual Mapping for Fine-grained Alignment of Sentence Embeddings

Using the analytical findings of Section II-C, we have determined the cross-lingual mappings $\mathbf{\Phi}^{*}$ and $\mathbf{\Psi}^{*}$ by linear algebra. We have applied the mappings to align the translated sentence pairs of STSb and KorSTS. Precisely, we set the source $\mathbf{S}_{A}$ to English sentences from STSb and the target $\mathbf{S}_{B}$ to Korean sentences from KorSTS. The quality of alignment via the linear projections $\mathbf{\Phi}^{*}$ and $\mathbf{\Psi}^{*}$ is very similar. Based on the average cosine similarity of the translated sentence pairs, we find $\mathbf{\Phi}^{*}$ slightly better than $\mathbf{\Psi}^{*}$.

Figure 4: t-SNE plots of English and Korean translated pairs from STSb and KorSTS. In the top row, the leftmost plot shows unaligned English sentences (source), the middle shows English aligned via the linear projection $\mathbf{\Phi}^{*}$, and the rightmost shows Korean (target). The middle and rightmost plots are aligned, showing similar patterns in t-SNE. The bottom row shows unaligned English, aligned English, and Korean sentences for the fully-connected single-layer neural net whose weight parameters $\mathbf{W}$ are learned by stochastic gradient descent.

We determine $\mathbf{W}$ by stochastic gradient descent on the single-layer neural net of Fig. 3. Using the translated sentence pairs, we set the neural net input $\mathbf{S}_{A}$ to English sentences from STSb and the output $\mathbf{S}_{B}$ to Korean sentences from KorSTS.
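The two linear-algebraic mappings of Section II-C and the average-cosine-similarity evaluation can be exercised end to end on synthetic data. The sketch below uses random vectors as stand-ins for the STSb/KorSTS sentence embeddings (in the actual experiments these come from XLM-R, with $d=768$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 32                                   # toy sizes; the paper uses d = 768
S_A = rng.normal(size=(n, d))                    # "English" sentence embeddings (rows)
# "Korean" counterparts: a noisy linear image of the English embeddings.
S_B = S_A @ rng.normal(size=(d, d)) + 0.1 * rng.normal(size=(n, d))

def avg_cosine(X, Y):
    """Average cosine similarity of corresponding rows of X and Y."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return float((Xn * Yn).sum(axis=1).mean())

# Normal-equation projection: Phi* = (S_A^T S_A)^{-1} S_A^T S_B
Phi = np.linalg.solve(S_A.T @ S_A, S_A.T @ S_B)

# Orthogonal Procrustes: Psi* = U V^T with U Sigma V^T = SVD(S_A^T S_B)
U, _, Vt = np.linalg.svd(S_A.T @ S_B)
Psi = U @ Vt

before = avg_cosine(S_A, S_B)
after_phi = avg_cosine(S_A @ Phi, S_B)
after_psi = avg_cosine(S_A @ Psi, S_B)
```

On this toy data both projections raise the average cosine similarity well above the unaligned baseline, mirroring the kind of improvement the experiments report for the real sentence pairs.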
The average cosine similarity of the translated sentence pairs after alignment via the $\mathbf{\Phi}^{*}$ projection is 0.7131, whereas the average cosine similarity for the neural net is 0.7265. Without alignment by the projection matrix or the neural net, the average cosine similarity would have been 0.4636. Fig. 4 shows t-SNE plots that visualize the effect of the sentence alignment. The top plots are unaligned English, aligned English, and Korean sentences under the $\mathbf{\Phi}^{*}$ projection, whereas the bottom plots show unaligned English, aligned English, and Korean sentences under the neural net.

TABLE IV: STS evaluation with cross-lingual sentence pairs.

| Fine-tuning Task | Zero-shot Transfer | Cross-lingual Mapping |
|---|---|---|
| STSb | 49.03 | 59.16 |
| KorSTS | 43.23 | 47.24 |

In Table IV, we compare fine-grained cross-lingual mapping to zero-shot transfer on the cosine similarity of aligned English and Korean translated sentence pairs of STSb and KorSTS. The cross-lingual mapping, computed either by linear algebra or with a neural net, outperforms zero-shot cross-lingual transfer by 9.3–20% in cosine similarity matching of the translated sentence pairs of STSb and KorSTS.

### III-E Discussion

Generally, we find that cross-lingual transfer is present in important supervised NLP tasks that require high-level natural language understanding, namely STS, MRC, and sentiment classification. Our empirical evaluation suggests that cross-lingual transfer is most pronounced in STS, followed by sentiment analysis, with MRC last. It seems that the more complex a task is, the less effective cross-lingual transfer becomes. For STS, we have observed the transfer quality in two different measures, the Spearman's rank and Pearson correlation coefficients, and found them concordant.
For MRC, while zero-shot transfer performance measured by F1 score is reasonable, it suffers significantly more under the exact match (EM) metric. Interestingly, if we fine-tune XLM-R with both source and target languages, the language fine-tuned last has the strongest impact on performance. ## IV Conclusion This paper focuses on the empirical validation of the cross-lingual transfer properties induced by XLM pretraining. We have experimented with XLM-RoBERTa (XLM-R), a large cross-lingual language model, and extended semantic textual similarity (STS), SQuAD and KorQuAD machine reading comprehension (MRC), and sentiment analysis to cross-lingual settings. Our results suggest that cross-lingual transfer is most pronounced in STS, followed by sentiment analysis, with MRC last. We compute, by linear algebra, matrix projections that directly map sentence embeddings of one language to those of another, in order to analyze the effect of fine-grained sentence alignment on zero-shot cross-lingual transfer. We have shown that such a mapping can also be determined iteratively using a simple neural net. Our future work includes more systematic evaluations on a broader range of low- and high-resource languages to generalize the quality of cross-lingual transfer manifested through important NLP tasks. ## References * [1] A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov, “Unsupervised cross-lingual representation learning at scale,” in _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , 2020. * [2] A. Conneau and G. Lample, “Cross-lingual Language Model Pretraining,” in _Advances in Neural Information Processing Systems 32_ , 2019. * [3] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, “SQuAD: 100,000+ questions for machine comprehension of text,” in _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , 2016.
* [4] S. Lim, M. Kim, and J. Lee, “KorQuAD 1.0: Korean QA Dataset for Machine Reading Comprehension,” _arXiv preprint arXiv:1909.07005_ , 2019. * [5] G. Glavaš, R. Litschko, S. Ruder, and I. Vulić, “How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions,” in _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , 2019. * [6] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is All you Need,” in _Advances in Neural Information Processing Systems 30_ , 2017. * [7] N. Reimers and I. Gurevych, “Sentence-BERT: Sentence embeddings using Siamese BERT-networks,” in _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , 2019. * [8] T. Mikolov, Q. V. Le, and I. Sutskever, “Exploiting Similarities among Languages for Machine Translation,” _arXiv preprint arXiv:1309.4168_ , 2013. * [9] P. Schönemann, “A Generalized Solution of the Orthogonal Procrustes Problem,” _Psychometrika_ , vol. 31, no. 1, pp. 1–10, 1966. * [10] Hugging Face, “Open Source NLP,” https://huggingface.co, 2020. * [11] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, “SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation,” in _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_ , 2017. * [12] J. Ham, Y. J. Choe, K. Park, I. Choi, and H. Soh, “KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding,” _arXiv preprint arXiv:2004.03289_ , 2020. * [13] E. Agirre, D. Cer, M. Diab, and A.
Gonzalez-Agirre, “SemEval-2012 task 6: A pilot on semantic textual similarity,” in _*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)_ , 2012. * [14] E. Agirre, D. Cer, M. Diab, A. Gonzalez-Agirre, and W. Guo, “*SEM 2013 shared task: Semantic textual similarity,” in _Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity_ , 2013. * [15] E. Agirre, C. Banea, C. Cardie, D. Cer, M. Diab, A. Gonzalez-Agirre, W. Guo, R. Mihalcea, G. Rigau, and J. Wiebe, “SemEval-2014 task 10: Multilingual semantic textual similarity,” in _Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)_ , 2014. * [16] E. Agirre, C. Banea, C. Cardie, D. Cer, M. Diab, A. Gonzalez-Agirre, W. Guo, I. Lopez-Gazpio, M. Maritxalar, R. Mihalcea, G. Rigau, L. Uria, and J. Wiebe, “SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability,” in _Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)_ , 2015. * [17] E. Agirre, C. Banea, D. Cer, M. Diab, A. Gonzalez-Agirre, R. Mihalcea, G. Rigau, and J. Wiebe, “SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation,” in _Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)_ , 2016. * [18] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman, “GLUE: A multi-task benchmark and analysis platform for natural language understanding,” in _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , 2018. * [19] D. M. Cer, M. T. Diab, E. Agirre, I. Lopez-Gazpio, and L. 
Specia, “SemEval-2017 task 1: Semantic textual similarity, multilingual and cross-lingual focused evaluation,” 2017. * [20] C. P. Carrino, M. R. Costa-jussà, and J. A. Fonollosa, “Automatic Spanish Translation of the SQuAD Dataset for Multilingual Question Answering,” _arXiv preprint arXiv:1912.05200_ , 2019. * [21] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning Word Vectors for Sentiment Analysis,” in _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , 2011. * [22] Naver Sentiment Movie Corpus (NSMC), “Naver Corp.” https://github.com/e9t/nsmc, 2020.
# On the nonlocal Darboux transformation for the stationary axially symmetric Schrödinger and Helmholtz equations

A. G. Kudryavtsev

Institute of Applied Mechanics, Russian Academy of Sciences, Moscow 125040, Russia <EMAIL_ADDRESS>

###### Abstract

The nonlocal Darboux transformation for the stationary axially symmetric Schrödinger and Helmholtz equations is considered. Formulae for the nonlocal Darboux transformation are obtained and its relation to the generalized Moutard transformation is established. New examples of two-dimensional potentials and exact solutions for the stationary axially symmetric Schrödinger and Helmholtz equations are obtained as an application of the nonlocal Darboux transformation. PACS: 02.30.Jr, 02.30.Ik, 03.65.Ge

## 1 Introduction

Consider the stationary Schrödinger equation in the form $\left(\Delta-{\it u}\left(x,y,z\right)\right)Y\left(x,y,z\right)=0\,.$ In the case ${\it u}=-E+V\left(x,y,z\right)$ this equation describes a nonrelativistic quantum system with energy $E$ [1]. In the case ${\it u}=-{\omega}^{2}/{c\left(x,y,z\right)}^{2}$ the equation describes an acoustic pressure field with temporal frequency $\omega$ in an inhomogeneous medium with sound velocity $c$ [2] and is known as the Helmholtz equation [3]. The case of fixed frequency $\omega$ is of interest for modelling in acoustic tomography [4]. The case of fixed energy $E$ for the two-dimensional equation is of interest for multidimensional inverse scattering theory due to connections with two-dimensional integrable nonlinear systems [5]. In the case of axial symmetry the stationary equation in cylindrical coordinates has the form $\left({\frac{\partial^{2}}{\partial{r}^{2}}}+{\frac{1}{r}}{\frac{\partial}{\partial r}}+{\frac{\partial^{2}}{\partial{z}^{2}}}-u\left(r,z\right)\right)Y\left(r,z\right)=0\,.$ (1) Darboux-type transformations are useful for obtaining new solvable linear differential equations from an initial solvable equation.
The authors of the book [6] note that the importance of the Darboux transformation lies in the possibility of obtaining new solvable equations from an original solvable equation. Substituting a solution of the original equation into the Darboux transformation formula yields a solution of the new equation. If a set of selected special functions allows all solutions to be represented analytically, we can say that the original equation is exactly solvable. If the original equation is exactly solvable, one can obtain all solutions of the new equation using the Darboux transformation. The Darboux transformation can be repeated any number of times. Thus, we can say that an equation is solvable if it can be obtained from another solvable equation by a finite number of Darboux transformations. For example, if the initial equation is the stationary Schrödinger equation in one-dimensional quantum mechanics, it is natural to call the equations describing the free motion of particles, the harmonic oscillator, and the Coulomb and Morse potentials solvable. In this paper, we say that the stationary Schrödinger and Helmholtz equations are solvable in the two-dimensional case if they can be obtained from an equation with a constant (including zero) potential by a finite number of Darboux transformations. Applications of the classical Darboux transformation for the one-dimensional Schrödinger equation can be found in the book [6]. The classical Moutard transformation for the two-dimensional Schrödinger equation in cartesian coordinates is reviewed in [7]. Various generalizations of the classical Darboux transformations and their applications to two-dimensional systems have been investigated; see the review [8], the recent publication [9], and references therein. In the papers [10], [11] a nonlocal variable was included in the Darboux transformation.
The nonlocal Darboux transformation of the two-dimensional stationary Schrödinger equation in cartesian coordinates was obtained and its relation to the Moutard transformation was established. The main idea of these papers was inspired by the approach to nonlocal symmetries in the symmetry group analysis of differential equations [12], [13] (for the variant based on the theory of coverings, see [14] and references therein). In the paper [15] the generalized Moutard transformation was considered and applied to the Schrödinger equation in cylindrical coordinates. In the present paper we consider the nonlocal Darboux transformation for the stationary Schrödinger equation in cylindrical coordinates (1) using the approach of the papers [10], [11]. We use the relation of the Schrödinger equation to the Fokker-Planck equation [16]. By the substitution $Y\left(r,z\right)=P\left(r,z\right){e^{h\left(r,z\right)}}$ (2) we obtain the Fokker-Planck type equation ${\frac{\partial}{\partial r}}\left({\it P_{r}}+2\,{\it h_{r}}\,P+{\frac{1}{r}}\,P\right)+{\frac{\partial}{\partial z}}\left({\it P_{z}}+2\,{\it h_{z}}\,P\right)=0$ (3) if $u$ and $h$ satisfy the condition ${\it u}=-{\it h_{rr}}+{{\it h_{r}}}^{2}+{\frac{1}{r}}\,{\it h_{r}}+{\frac{1}{{r}^{2}}}-{\it h_{zz}}+{{\it h_{z}}}^{2}\,.$ (4) The equation (3) has the conservation law form that yields a pair of equations ${\it P_{r}}+2\,{\it h_{r}}\,P+{\frac{1}{r}}\,P-{\it Q_{z}}=0\,,$ (5) ${\it P_{z}}+2\,{\it h_{z}}\,P+{\it Q_{r}}=0\,.$ (6) The variable $Q$ is a nonlocal variable for the equation (3) and the related Schrödinger equation (1). The Darboux transformation for the equations (5), (6) including $Q$ provides the nonlocal Darboux transformation for the Schrödinger equation (1).
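The reduction of (1) to the Fokker-Planck form (3) under the substitution (2) and condition (4) can be verified symbolically. The sympy sketch below is an independent check, not part of the paper:

```python
import sympy as sp

r, z = sp.symbols('r z', positive=True)
h = sp.Function('h')(r, z)
P = sp.Function('P')(r, z)

# Potential u from condition (4)
u = (-sp.diff(h, r, 2) + sp.diff(h, r)**2 + sp.diff(h, r)/r + 1/r**2
     - sp.diff(h, z, 2) + sp.diff(h, z)**2)

# Substitution (2): Y = P * exp(h), plugged into the Schrödinger operator (1)
Y = P * sp.exp(h)
schrodinger = sp.diff(Y, r, 2) + sp.diff(Y, r)/r + sp.diff(Y, z, 2) - u*Y

# Fokker-Planck form (3)
fokker_planck = (sp.diff(sp.diff(P, r) + 2*sp.diff(h, r)*P + P/r, r)
                 + sp.diff(sp.diff(P, z) + 2*sp.diff(h, z)*P, z))

# The two sides agree up to the overall factor exp(h)
residual = sp.simplify(sp.expand(schrodinger - sp.exp(h)*fokker_planck))
```

The residual vanishing identically confirms that equation (1) with potential (4) is equivalent to equation (3) for arbitrary $P$ and $h$.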
## 2 The nonlocal Darboux transformation Let us consider the linear operator corresponding to the system of equations (5), (6) $\hat{L}\left(h\left(r,z\right)\right)\,{\bf f}=\begin{pmatrix}{2\,{\it h_{r}}+{\frac{1}{r}}+\frac{\partial}{\partial r}}&{-\frac{\partial}{\partial z}}\\\ {{2\,{\it h_{z}}+\frac{\partial}{\partial z}}}&{\frac{\partial}{\partial r}}\end{pmatrix}\,\begin{pmatrix}f_{1}\\\ f_{2}\end{pmatrix}\,.$ Consider a Darboux transformation in the form $\hat{L}_{D}\,{\bf f}=\begin{pmatrix}{g_{11}-a_{11}\,\frac{\partial}{\partial r}-b_{11}\,\frac{\partial}{\partial z}}&{g_{12}-a_{12}\,\frac{\partial}{\partial r}-b_{12}\,\frac{\partial}{\partial z}}\\\ {g_{21}-a_{21}\,\frac{\partial}{\partial r}-b_{21}\,\frac{\partial}{\partial z}}&{g_{22}-a_{22}\,\frac{\partial}{\partial r}-b_{22}\,\frac{\partial}{\partial z}}\end{pmatrix}\,\begin{pmatrix}f_{1}\\\ f_{2}\end{pmatrix}\,.$ If the linear operators $\hat{L}$ and $\hat{L}_{D}$ satisfy the intertwining relation $\left(\hat{L}\left(h\left(r,z\right)+s\left(r,z\right)\right)\hat{L}_{D}-\hat{L}_{D}\hat{L}\left(h\left(r,z\right)\right)\right)\,{\bf f}=0$ (7) for any ${\bf f}\in\mathcal{F}\supset Ker\left(\hat{L}\left(h\right)\right)$ where $Ker\left(\hat{L}\left(h\right)\right)=\\{{\bf f}:{\hat{L}}\left(h\right){\bf f}=0\\}$, then for any ${\bf f_{s}}\in Ker\left(\hat{L}\left(h\right)\right)$ the function $\tilde{\bf f}\left(r,z\right)=\hat{L}_{D}{\bf f_{s}}\left(r,z\right)$ is a solution of the equation ${\hat{L}}\left({\tilde{h}}\right)\tilde{\bf f}=0$ with the new potential $\tilde{h}=h+s$. The equations for $s,g_{ij},a_{ij},b_{ij}$ can be obtained from the intertwining relation (7). When solving equation (7), the following expression arises: $V\left(r,z\right)=s\left(r,z\right)+2\,h\left(r,z\right)+\ln\left(r\right)\,.$ (8) A special situation occurs in the case $V\left(r,z\right)=0$.
In this case we obtain from the equation (7) the following transformation $\hat{L}_{D}=r\,{e^{2\,h\left(r,z\right)}}\begin{pmatrix}{0}&{1}\\\ {-1}&{0}\end{pmatrix}\,.$ (9) This transformation corresponds to a generalization of the Moutard transformation. The generalized Moutard transformation was applied to the Schrödinger equation in cylindrical coordinates in the paper [15]. Here we write out the formulae of the generalized Moutard transformation for the sake of completeness and subsequent use. By the formula (4) with $\tilde{h}=-h-\ln\left(r\right)$ we obtain for the new Schrödinger potential $\tilde{u}\left(r,z\right)=u\left(r,z\right)+2\,{\frac{\partial^{2}}{\partial{r}^{2}}}h\left(r,z\right)+2\,{\frac{\partial^{2}}{\partial{z}^{2}}}h\left(r,z\right)-{\frac{1}{{r}^{2}}}\,.$ (10) Consider $Y_{h}\left(r,z\right)={\frac{1}{r}}\,{{\rm e}^{-h\left(r,z\right)}}\,.$ (11) According to the formula (4), $Y_{h}$ is a solution of the Schrödinger equation (1) with potential $u$. Then we get another form of the formula (10): $\tilde{u}\left(r,z\right)=u\left(r,z\right)-2\,{\frac{\partial^{2}}{\partial{r}^{2}}}\ln\left({\it Y_{h}}\left(r,z\right)\right)+{\frac{1}{{r}^{2}}}-2\,{\frac{\partial^{2}}{\partial{z}^{2}}}\ln\left({\it Y_{h}}\left(r,z\right)\right)\,.$ (12) This formula is the generalization of the formula of the Moutard transformation for the potential of the Schrödinger equation. From the formula (9) and the relation $\tilde{Y}\left(r,z\right)=\tilde{P}\left(r,z\right){e^{\tilde{h}\left(r,z\right)}}$ (13) we have the following equations $\tilde{P}\left(r,z\right)=r\,{e^{2\,h\left(r,z\right)}}Q\left(r,z\right)\,,$ (14) $\tilde{Y}\left(r,z\right)={e^{h\left(r,z\right)}}Q\left(r,z\right)\,.$ (15) One can express $P,Q,h$ by equations (2), (15), (11) through $Y,\tilde{Y},Y_{h}$ and substitute into the system of equations (5), (6). 
The result is ${\frac{\partial}{\partial z}}\left({\it Y_{h}}\left(r,z\right){\tilde{Y}}\left(r,z\right)\right)-\left({\it Y_{h}}\left(r,z\right)\right)^{2}{\frac{\partial}{\partial r}}\left({\frac{{\it Y}\left(r,z\right)}{{\it Y_{h}}\left(r,z\right)}}\right)=0\,,$ (16) ${\frac{\partial}{\partial r}}\left({\it Y_{h}}\left(r,z\right){\tilde{Y}}\left(r,z\right)\right)+{\frac{1}{r}}\,{\it Y_{h}}\left(r,z\right){\tilde{Y}}\left(r,z\right)\\\ +\left({\it Y_{h}}\left(r,z\right)\right)^{2}{\frac{\partial}{\partial z}}\left({\frac{{\it Y}\left(r,z\right)}{{\it Y_{h}}\left(r,z\right)}}\right)=0\,.$ (17) Here ${\it Y},{\it Y_{h}}$ are solutions of equation (1) with the initial potential $u$, and the function ${\tilde{Y}}$, defined as a solution of the consistent system of equations (16) and (17), is a solution of equation (1) with the new potential $\tilde{u}$. These formulae are the generalization of the formulae of the Moutard transformation for the solution of the Schrödinger equation. Thus the case $V\left(r,z\right)=0\,$ of the nonlocal Darboux transformation for the stationary axially symmetric Schrödinger equation provides the generalization of the Moutard transformation. Note that ${\it Y}={\it Y_{h}}$ yields ${\tilde{Y}}={\left(r\,{\it Y_{h}}\right)}^{-1}$ as a simple example of a solution of equations (16), (17). It is convenient to use the formula for the superposition of two generalized Moutard transformations. Let ${\it Y_{1}}$ and ${\it Y_{2}}$ be solutions of equation (1) with potential $u$. 
Applying the formulas of the generalized Moutard transformation consecutively, we obtain the following superposition formulas for two transformations: ${\tilde{\tilde{u}}}=u-2\,{\frac{\partial^{2}\ln\left(F\right)}{\partial{r}^{2}}}-2\,{\frac{\partial^{2}\ln\left(F\right)}{\partial{z}^{2}}}\\\ =u+2\left({\frac{\partial{\it Y_{2}}}{\partial z}}\right)\left(2\,r{\frac{\partial{\it Y_{1}}}{\partial r}}+{\it Y_{1}}\right)F^{-1}\\\ -2\left({\frac{\partial{\it Y_{1}}}{\partial z}}\right)\left(2\,r{\frac{\partial{\it Y_{2}}}{\partial r}}+{\it Y_{2}}\right)F^{-1}\\\ +2\,{r}^{2}\left({\frac{\partial{\it Y_{2}}}{\partial r}}{\it Y_{1}}-{\it Y_{2}}{\frac{\partial{\it Y_{1}}}{\partial r}}\right)^{2}F^{-2}\\\ +2\,{r}^{2}\left({\frac{\partial{\it Y_{2}}}{\partial z}}{\it Y_{1}}-{\it Y_{2}}{\frac{\partial{\it Y_{1}}}{\partial z}}\right)^{2}F^{-2}\,,$ (18) where the function $F$ satisfies the consistent system of equations ${\frac{\partial}{\partial z}}F=r\left({\frac{\partial{\it Y_{2}}}{\partial r}}{\it Y_{1}}-{\it Y_{2}}{\frac{\partial{\it Y_{1}}}{\partial r}}\right)\,,$ (19) ${\frac{\partial}{\partial r}}F=-r\left({\frac{\partial{\it Y_{2}}}{\partial z}}{\it Y_{1}}-{\it Y_{2}}{\frac{\partial{\it Y_{1}}}{\partial z}}\right)\,.$ (20) Formulas (18)-(20) are invariant under the substitution ${\it Y_{1}}\rightarrow{\it Y_{2}},{\it Y_{2}}\rightarrow{\it Y_{1}},F\rightarrow-F$. This reflects the commutativity property of generalized Moutard transformations: the result does not depend on the order in which the functions ${\it Y_{1}},{\it Y_{2}}$ are chosen. Examples of solutions of the equation (1) with potential (18) can be obtained by the formulas ${\tilde{\tilde{Y}}}_{1}={\it Y_{1}}F^{-1},\,{\tilde{\tilde{Y}}}_{2}={\it Y_{2}}F^{-1}$. 
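As a concrete check of the superposition formulas, take the simple solutions $Y_{1}=1$, $Y_{2}=z$ of equation (1) with $u=0$. The sympy sketch below verifies that $F=K-r^{2}/2$ solves the system (19), (20) and that ${\tilde{\tilde{Y}}}_{1}=Y_{1}F^{-1}$ solves equation (1) with the new potential given by the first line of (18); as before, equation (1) is assumed to have the form $Y_{rr}+\frac{1}{r}\,Y_{r}+Y_{zz}=u\,Y$:

```python
import sympy as sp

r, z, K = sp.symbols('r z K', positive=True)

# Two simple solutions of equation (1) with u = 0
Y1 = sp.Integer(1)
Y2 = z

# Right-hand sides of the system (19), (20) for F
Fz_rhs = r*(sp.diff(Y2, r)*Y1 - Y2*sp.diff(Y1, r))   # = 0
Fr_rhs = -r*(sp.diff(Y2, z)*Y1 - Y2*sp.diff(Y1, z))  # = -r

F = K - r**2/2  # integrates the system above; K is the constant of integration
assert sp.simplify(sp.diff(F, z) - Fz_rhs) == 0
assert sp.simplify(sp.diff(F, r) - Fr_rhs) == 0

# New potential from the first line of (18), with u = 0
u_new = -2*sp.diff(sp.log(F), r, 2) - 2*sp.diff(sp.log(F), z, 2)

# Check that Y1/F solves equation (1) with the new potential
Yt = Y1/F
residual = sp.simplify(
    sp.diff(Yt, r, 2) + sp.diff(Yt, r)/r + sp.diff(Yt, z, 2) - u_new*Yt)
print(residual)  # -> 0
```

The same check goes through for ${\tilde{\tilde{Y}}}_{2}=Y_{2}F^{-1}$.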
In the general case, when $V$ is not equal to zero, we obtain from the equation (7) the following Darboux transformation $\hat{L}_{D}={e^{-s\left(r,z\right)}}\begin{pmatrix}{R_{1}+\frac{\partial}{\partial z}}&{R_{2}}\\\ {{s}_{r}-R_{2}}&{{s}_{z}+R_{1}+\frac{\partial}{\partial z}}\end{pmatrix}\,,$ (21) where $\displaystyle{\it R_{1}}={\frac{1}{2}}\,\left({V}_{z}-2\,{s}_{z}+\left({V}_{z}H+{V}_{r}T\right)/{G}\right)\,,$ $\displaystyle{\it R_{2}}={\frac{1}{2}}\,\left({s}_{r}+\left({s}_{r}H-{s}_{z}T\right)/{G}\right)\,,$ $\displaystyle{\it G}={s}_{r}{V}_{r}+{s}_{z}{V}_{z},\,\,{\it H}={V}_{zz}-{s}_{rr},\,\,{\it T}={V}_{rz}+{s}_{rz}$ and $s$ satisfies the following system of two nonlinear differential equations: $G\left(G+H\right)s_{{{\it rz}}}-GTs_{{{\it zz}}}+\left(-G_{{z}}H+\left(H_{z}-V_{{r}}T\right)G\right)s_{{r}}\,\\\ +\left(G_{{z}}T-\left(V_{{z}}T+T_{{z}}\right)G\right)s_{{z}}-\left(V_{{z}}H+V_{{r}}T\right)G_{{r}}+V_{{{\it rz}}}{G}^{2}\\\ +\left(V_{{{\it rz}}}H+V_{{z}}H_{{r}}+V_{{{\it rr}}}T+V_{{r}}T_{{r}}\right)G=0\,,$ (22) $G\left(G+H\right)s_{{{\it rr}}}-GTs_{{{\it rz}}}+\left(-G_{{r}}H+V_{{r}}{G}^{2}+\left(H_{{r}}+V_{{r}}H\right)G\right)s_{{r}}\\\ +\left(G_{{r}}T+{G}^{2}V_{{z}}+\left(V_{{z}}H-T_{{r}}\right)G\right)s_{{z}}+\left(V_{{z}}H+V_{{r}}T\right)G_{{z}}-V_{{{\it zz}}}{G}^{2}\\\ -\left(V_{{{\it zz}}}H+V_{{z}}H_{{z}}+V_{{{\it rz}}}T+V_{{r}}T_{{z}}\right)G=0\,.$ (23) From the formula (21) and the relation (13) we have $\tilde{P}={{\rm e}^{-s}}\left({\it R_{1}}P+{\frac{\partial}{\partial z}}P+{\it R_{2}}Q\right)\,,$ (24) $\tilde{Y}={\frac{\partial}{\partial z}}Y+\left({\it R_{1}}-h_{{z}}\right)Y+{{\rm e}^{h}}{\it R_{2}}Q\,.$ (25) Taking into account the relation (2), the equations (5), (6) for $Q$ take the form ${{\rm e}^{-h}}\left({\it Y_{r}}+{\it h_{r}}\,Y+{\frac{1}{r}}\,Y\right)-{\it Q_{z}}=0\,,$ (26) ${{\rm e}^{-h}}\left({\it Y_{z}}+{\it h_{z}}\,Y\right)+{\it Q_{r}}=0\,.$ (27) ## 3 Application of the nonlocal Darboux transformation The equations (22), (23) 
contain the unknown function $s$ and the function $h$ associated with the initial potential $u$ by the formula (4). According to the formula (11), we can take $h_{u}=-\ln\left(rf\left(r,z\right)\right)$ where $f$ is any solution of the Schrödinger equation (1) with the initial potential $u$. Let us consider $u=0,\,{\it f}={\frac{1}{\sqrt{{r}^{2}+{z}^{2}}}}$ and $h_{0}=-\ln\left(r\right)+{\frac{1}{2}}\,\ln\left({r}^{2}+{z}^{2}\right)$ respectively. An example of a particular solution of equations (22), (23) can be found with the ansatz $s=S\left({r}^{2}+{z}^{2}\right)$. The equation (22) is satisfied for any function $S$. The equation (23) leads to the solution $s_{0}=-1/2\,\ln\left({r}^{2}+{z}^{2}\right)+\ln\left(\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C\right)\\\ -\ln\left(\left(1-2\,{\it C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+\left(1+2\,{\it C_{1}}\right)C\right)\,,$ (28) where $C,C_{1}$ are arbitrary constants. By the formula (4) with $\tilde{h}=h_{0}+s_{0}$ we obtain the new potential $\tilde{u}\left(r,z\right)=-8\,{\frac{C{{\it C_{1}}}^{2}\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}-1}}{\left(\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C\right)^{2}}}\,.$ (29) This potential satisfies the condition $\tilde{u}<0$ and has no singularities, provided that $C>0,C_{1}\geq 1$. Thus, we obtain a two-parameter family of solvable Helmholtz potentials. 
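The new potential can be checked symbolically against condition (4). The sketch below performs the check in the special case $C_{1}=1$ (the case used in the examples of the next section), confirming that $\tilde{h}=h_{0}+s_{0}$ reproduces the closed form (29):

```python
import sympy as sp

r, z, C = sp.symbols('r z C', positive=True)
w = r**2 + z**2
C1 = 1  # special case C_1 = 1, used in the examples below

h0 = -sp.log(r) + sp.log(w)/2
s0 = (-sp.log(w)/2 + sp.log(w**C1 + C)
      - sp.log((1 - 2*C1)*w**C1 + (1 + 2*C1)*C))
h = h0 + s0

# Potential obtained from condition (4) with h-tilde = h0 + s0
u_tilde = (-sp.diff(h, r, 2) + sp.diff(h, r)**2 + sp.diff(h, r)/r + 1/r**2
           - sp.diff(h, z, 2) + sp.diff(h, z)**2)

# Closed form (29), which at C_1 = 1 reduces to -8*C/(r^2 + z^2 + C)^2
u_closed = -8*C*C1**2*w**(C1 - 1)/(w**C1 + C)**2

print(sp.simplify(u_tilde - u_closed))  # -> 0
```

Note that the apparent singularity of $s_{0}$ at $(1-2C_{1})w^{C_{1}}+(1+2C_{1})C=0$ cancels in the potential, as the closed form (29) shows.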
From the formulas (25)-(27) we have the formula for the solution of the equation (1) with the potential (29) $\tilde{Y}={\frac{\partial}{\partial z}}Y\\\ -{\frac{\left(\left(1+2\,{\it C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C\left(1-2\,{\it C_{1}}\right)\right)\left(zY-\sqrt{{r}^{2}+{z}^{2}}Q\right)}{2\,\left({r}^{2}+{z}^{2}\right)\left(\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C\right)}}\,,$ (30) where the function $Q$ is a solution of the system of equations $\left({\it Y_{r}}+{\frac{r\,Y}{{r}^{2}+{z}^{2}}}\right){\frac{r}{\sqrt{{r}^{2}+{z}^{2}}}}-{\it Q_{z}}=0\,,$ (31) $\left({\it Y_{z}}+{\frac{zY}{{r}^{2}+{z}^{2}}}\right){\frac{r}{\sqrt{{r}^{2}+{z}^{2}}}}+{\it Q_{r}}=0\,,$ (32) and ${\it Y}$ is any solution of equation (1) with the initial potential $u=0$. For example, let us consider the following simple solutions of equation (1): $1,z,{r}^{2}-2\,{z}^{2},3\,z{r}^{2}-2\,{z}^{3},{\frac{1}{\sqrt{{r}^{2}+{z}^{2}}}},\ln\left(r\right)$. Substituting these solutions into the equations (30)-(32) we obtain the corresponding solutions of equation (1) with the potential (29): ${\tilde{Y}}_{1}={\frac{\left(1+2\,{\it C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+\left(1-2\,{\it C_{1}}\right)C}{\sqrt{{r}^{2}+{z}^{2}}\left(\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C\right)}}\,,$ ${\tilde{Y}}_{2}={\frac{\left(1-2\,{\it C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+\left(1+2\,{\it C_{1}}\right)C}{\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C}}\,,$ ${\tilde{Y}}_{3}={\frac{z\left(\left(3-2\,{\it C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+\left(3+2\,{\it C_{1}}\right)C\right)}{\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C}}\,,$ ${\tilde{Y}}_{4}={\frac{\left({r}^{2}-2\,{z}^{2}\right)\left(\left(5-2\,{\it C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+\left(5+2\,{\it C_{1}}\right)C\right)}{\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C}}\,,$ ${\tilde{Y}}_{5}={\frac{z\left(\left(3+2\,{\it 
C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+\left(3-2\,{\it C_{1}}\right)C\right)}{\left({r}^{2}+{z}^{2}\right)^{3/2}\left(\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C\right)}}\,,$ ${\tilde{Y}}_{6}={\frac{\left(\left(1+2\,{\it C_{1}}\right)\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+\left(1-2\,{\it C_{1}}\right)C\right)\left(\ln\left(z+\sqrt{{r}^{2}+{z}^{2}}\right)-\ln\left(r\right)\right)}{\left(\left({r}^{2}+{z}^{2}\right)^{{\it C_{1}}}+C\right)\sqrt{{r}^{2}+{z}^{2}}}}\,.$ Now the potential (29) can be taken as the initial potential for new transformations. In the case of the two-dimensional Schrödinger equation it was shown that the twofold application of the Moutard transformation can be effective for obtaining nonsingular potentials in Cartesian coordinates [17] and in cylindrical coordinates [15]. To avoid cumbersome formulas, let us consider the simple case $C_{1}=1$. The initial potential has the form $u=-{\frac{8\,C}{\left({r}^{2}+{z}^{2}+C\right)^{2}}}\,.$ (33) For the first example, let us take ${\tilde{Y}}_{1}$ and ${\tilde{Y}}_{2}$ at $C_{1}=1$ as the solutions of the initial equation: ${\it Y_{1}}={\frac{3\,\left({r}^{2}+{z}^{2}\right)-C}{\sqrt{{r}^{2}+{z}^{2}}\left({r}^{2}+{z}^{2}+C\right)}}\,\,,\,{\it Y_{2}}={\frac{{r}^{2}+{z}^{2}-3\,C}{{r}^{2}+{z}^{2}+C}}\,.$ From the equations (19)-(20) we obtain ${\it F}={\frac{z}{\sqrt{{r}^{2}+{z}^{2}}}}+K\,,$ where $K$ is an arbitrary constant. 
Then from the formula (18) we obtain the new solvable potential ${\tilde{\tilde{u}}}=-{\frac{8\,C}{\left({r}^{2}+{z}^{2}+C\right)^{2}}}+{\frac{2\,\left(Kz+\sqrt{{r}^{2}+{z}^{2}}\right)}{\left(z+\sqrt{{r}^{2}+{z}^{2}}K\right)^{2}\sqrt{{r}^{2}+{z}^{2}}}}\,\,.$ (34) For the second example, let us take ${\tilde{Y}}_{2}$ and ${\tilde{Y}}_{3}$ at $C_{1}=1$ as the solutions of the initial equation: ${\it Y_{1}}={\frac{{r}^{2}+{z}^{2}-3\,C}{{r}^{2}+{z}^{2}+C}}\,\,,\,{\it Y_{2}}={\frac{z\left({r}^{2}+{z}^{2}+5\,C\right)}{{r}^{2}+{z}^{2}+C}}\,.$ From the equations (19)-(20) we obtain ${\it F}={\frac{{r}^{4}+\left({z}^{2}-15\,C\right){r}^{2}}{{r}^{2}+{z}^{2}+C}}+K\,,$ where $K$ is an arbitrary constant. Then from the formula (18) we obtain the new solvable potential ${\tilde{\tilde{u}}}=4\,{\it N}/\left({r}^{4}+\left({z}^{2}+K-15\,C\right){r}^{2}+K\left({z}^{2}+C\right)\right)^{2}\,,$ (35) where ${\it N}=\left({r}^{2}-K\right)\left({r}^{2}+{z}^{2}\right)^{2}-C\left(30\,{z}^{2}+22\,K-225\,C\right){r}^{2}\\\ +KC\left(14\,{z}^{2}-2\,K+15\,C\right)\,.$ This potential has no singularities, provided that $C>0,K\geq 15\,C$. ## 4 Results and Discussion The stationary Schrödinger and Helmholtz equations in the case of axial symmetry are investigated. Using the approach of the papers [10], [11], the nonlocal Darboux transformations of the two-dimensional stationary Schrödinger and Helmholtz equations in cylindrical coordinates are considered. Formulae for the nonlocal Darboux transformation are obtained and its relation to the generalized Moutard transformation is established. New examples of two-dimensional potentials and exact solutions for the stationary axially symmetric Schrödinger and Helmholtz equations are obtained as an application of the nonlocal Darboux transformation. 
The examples considered show that by combining the nonlocal Darboux transformation and the generalized Moutard transformation, one can obtain many new examples of solvable potentials and exact solutions for stationary axially symmetric Schrödinger and Helmholtz equations. ## References * [1] L.D. Landau and E.M. Lifshitz, Quantum Mechanics: Non-relativistic Theory, Pergamon Press, 1977. * [2] P.M. Morse and K.U. Ingard, Theoretical Acoustics, New York, NY: McGraw-Hill, 1968. * [3] M.B. Vinogradova, O.V. Rudenko and A.P. Sukhorukov, The Wave Theory, Nauka Publishers, Moscow, 1990. * [4] A.C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging, Society of Industrial and Applied Mathematics, 2001. * [5] P.G. Grinevich, A.E. Mironov and S.P. Novikov, 2D-Schrödinger operator, (2+1) evolution systems and new reductions, 2D-Burgers hierarchy and inverse problem data, arXiv:1005.0612; Russian Math. Surveys 65 (3) (2010), pp. 580-582. * [6] V.B. Matveev and M.A. Salle, Darboux Transformations and Solitons, Springer, 1991. * [7] C. Athorne and J.J.C. Nimmo, On the Moutard transformation for integrable partial differential equations, Inverse Problems 7 (1991), pp. 809-826. * [8] A.A. Andrianov and M.V. Ioffe, Nonlinear supersymmetric quantum mechanics: concepts and realizations, J. Phys. A: Math. Theor. 45 (2012), p. 503001. * [9] M.V. Ioffe and D.N. Nishnianidze, Generalization of SUSY intertwining relations: New exact solutions of Fokker-Planck equation, EPL 129 (2020), p. 61001. * [10] A.G. Kudryavtsev, Exactly solvable two-dimensional stationary Schrödinger operators obtained by the nonlocal Darboux transformation, Phys. Lett. A 377 (2013), pp. 2477-2480. * [11] A.G. Kudryavtsev, Nonlocal Darboux transformation of the two-dimensional stationary Schrödinger equation and its relation to the Moutard transformation, Theoretical and Mathematical Physics 187 (1) (2016), pp. 455-462. * [12] I.S. Akhatov, R.K. Gazizov and N.H. Ibragimov, Nonlocal symmetries. 
Heuristic approach. J. Sov. Math. 55 (1991), pp. 1401-1450. * [13] G.W. Bluman, A.F. Cheviakov and S.C. Anco, Applications of Symmetry Methods to Partial Differential Equations, Springer, 2010. * [14] I.S. Krasil’shchik and A.M. Vinogradov (Eds.), Symmetries and Conservation Laws for Differential Equations of Mathematical Physics, Translations of Mathematical Monographs Vol. 182, AMS, Providence, 1999. * [15] A.G. Kudryavtsev, Exact solutions of the time-independent axially symmetric Schrödinger equation, JETP Letters 111 (2) (2020), pp. 126-128. * [16] H. Risken, The Fokker-Planck Equation, Springer, 1989. * [17] I.A. Taimanov and S.P. Tsarev, Two-dimensional rational solitons and their blowup via the Moutard transformation, Theoretical and Mathematical Physics 157 (2) (2008), pp. 1525-1541.
# Variational Information Bottleneck Model for Accurate Indoor Position Recognition Weizhu Qian1, Franck Gechter1 2 25th International Conference on Pattern Recognition (ICPR 2020), pp. 2529-2535, Milan, Italy, Jan 10-15, 2021 1CIAD UMR 7533, Université Bourgogne-Franche-Comté, UTBM, 90010, Belfort, France 2Mosel LORIA UMR CNRS 7503, Université de Lorraine, 54506, Vandœuvre-lès-Nancy, France Email: {weizhu.qian<EMAIL_ADDRESS> ###### Abstract Recognizing user location with WiFi fingerprints is a popular approach for accurate indoor positioning problems. In this work, our goal is to interpret WiFi fingerprints into actual user locations. However, WiFi fingerprint data can be very high dimensional in some cases, so we first need to find a good representation of the input data for the learning task. Otherwise, neural networks will suffer from severe overfitting. In this work, we solve this issue by combining the Information Bottleneck method and Variational Inference. Based on these two approaches, we propose a Variational Information Bottleneck model for accurate indoor positioning. The proposed model consists of an encoder structure and a predictor structure. The encoder finds a good representation of the input data for the learning task. The predictor uses the latent representation to predict the final output. To enhance the generalization of our model, we also adopt the Dropout technique for each hidden layer of the predictor. We conduct the validation experiments on a real-world dataset. We also compare the proposed model to other existing methods so as to quantify the performance of our method. ## I Introduction Localizing smartphone users is an essential technique for many location-based services like navigation and advertisement. Though GPS can provide relatively accurate positions, it does not function well in indoor environments. Thus we need to seek other options to recognize user location. 
Since the WiFi signal strength is related to the distance between the hotspots and the devices, if we have WiFi fingerprint data labeled with actual user coordinates, then we can interpret WiFi fingerprints into user location information via supervised learning approaches. Many research works focus on the use of WiFi received signal strength indicator (RSSI) value data. Among the solutions presented in the literature, traditional neural networks are historically among the most widespread. Their limitation is that they are essentially deterministic functions, and their loss functions for regression problems are usually Euclidean distances (for instance, mean squared errors). Conventional neural networks work well in many cases, but when the dataset contains too much noisy information, they are not powerful enough to learn the useful information from the dataset. In our case, the user's current position is normally related to only a small number of WiFi access points, and the rest of the RSSI values in the input vector are in fact not useful. However, in the modelling process, we need to feed all the RSSI values to the model. As a result, the unrelated information in the input data will lead to bad performance when training the neural network. Alternatively, some deep probabilistic models can be applied to the aforementioned problems, for instance, Mixture Density Networks (MDNs) [2], Bayesian Neural Networks (BNNs) [3] and Variational Autoencoder (VAE)-based models [18]. These methods are based on probability theory and Bayesian statistics, introducing uncertainty into the models to prevent overfitting. However, according to the Information Bottleneck theory [21], [1], these models do not consider what information in the dataset is useful for the learning tasks. Thus, the aforementioned deep probabilistic models can solve our problems to some extent, but they may not be the optimal solutions. 
In this work, we propose a novel model to calculate the accurate user location by using the related WiFi fingerprints. We treat this problem as a supervised regression problem: we use the WiFi RSSI value data as the input and the actual user location (latitudes and longitudes) as the output. However, there are some difficulties in achieving this goal. First, to provide good quality of network connections, modern buildings are normally equipped with abundant WiFi access points (WAPs). Therefore, the WiFi received signal strength indicator (RSSI) vectors we use as the modeling input are usually very high dimensional. Meanwhile, due to the signal-fading and multi-path [12] effects, the RSSI values can be very noisy. These two properties result in severe overfitting when we use conventional neural network-based models. For this reason, in contrast with the existing methods, based on the Information Bottleneck method and Variational Inference, we propose a Variational Information Bottleneck model in this work. This model consists of two sub-models: an encoder model and a predictor model, which is used to predict the target values. According to the Information Bottleneck theory [21], the encoder in our model is used to find a good latent representation of the input data for the related learning task, so that the nuisance information in the original input is taken out. Afterwards, the predictor utilizes the latent representation as its input, instead of the original input, to predict the target values. Our model is an end-to-end deep learning model scalable to large-scale datasets, which makes it easy to train. The remainder of the paper is organized as follows. Section II surveys the related research work. Section III introduces the proposed model. Section IV presents the validation experiments and results and gives a detailed discussion. The conclusions and possible future work are in Section V. 
## II Related Work In previous research, both conventional machine learning and deep learning methods have been widely explored for WiFi fingerprint-based user location recognition problems. Many previous works treat this problem as a classification or clustering task, i.e., identifying the buildings and/or floors. Some researchers used conventional machine learning methods, e.g., Decision Trees, K-nearest neighbors, Naive Bayes, Neural Networks, K-means and the affinity clustering algorithm [4], [6], [8], [9], [23], [24]. In addition, since RSSI vectors are sometimes high dimensional, some researchers used deep learning techniques like Autoencoders [11] to reduce the input dimension before proceeding to the main learning tasks [17], [20], [14]. For learning the accurate user position information, i.e., calculating the real coordinates of the users, Gaussian Processes (GPs) can be one of the options [9], [8], [23]. But GPs are extremely computationally expensive on large-scale datasets because they need to compute the covariances between all pairs of data points. To circumvent this issue, one can resort to deep learning approaches. [13], [20] and [12] used Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). However, since deterministic neural networks can cause severe overfitting issues, their methods calculate the user coordinates indirectly. In our study, we find that some deep probabilistic models can be better solutions. For example, Mixture Density Networks [2] use a set of mixed Gaussian distributions at the output layers to compute the final output and use the negative log-likelihood as the loss function. The disadvantage of such a method is that, as a maximum likelihood estimation (MLE) method, it fails to consider the prior of the model parameters, so it is prone to overfitting. 
Though Bayesian Neural Networks (BNNs) [3] are maximum a posteriori (MAP) methods, they do not remove the noisy information in the input data, so their performance is not as good as expected either. In our previous research, we took advantage of Variational Autoencoders (VAEs) [16], [19] to develop VAE-based semi-supervised learning models [18]. However, these models all neglect the effects of the nuisance information in the dataset. The nuisance information is redundant for the learning tasks and will damage the modeling performance. To circumvent the effect of the nuisance information, one can resort to the Deep Variational Information Bottleneck (DVIB) model [1]. DVIB is a model based on Variational Inference and the Information Bottleneck method. It aims at learning a good latent representation of the input data for the downstream learning tasks. In this work, we want to apply the Information Bottleneck method to the WiFi fingerprint-based location recognition problem in order to prevent the nuisance information from damaging the modeling performance. Inspired by VAEs, $\beta$-VAEs [10], [5] and DVIB, we devise a Variational Information Bottleneck model to interpret the user WiFi RSSI values into the actual user location information. This model is solved via Monte Carlo sampling and Variational Inference. ## III Method ### III-A Preliminaries In our model, the input is the WiFi RSSI values $x$, and the output is the user’s coordinates $y$. To make the model more robust to noise, we use a set of probabilistic distributions such as $p(z|x)$ and $p(y|z)$ to describe the relationships between the variables instead of deterministic functions as in conventional neural networks. Furthermore, in order for our model to work theoretically, we first need to make some assumptions: * • Assumption $1$: assume that there exists a latent distribution of $z$, and that $x$, $z$, $y$ form the information Markov chain $x\rightarrow z\rightarrow y$. 
* • Assumption $2$: assume that $x$ alone is sufficient to learn $z$, which leads to $p(z|x,y)=p(z|x)$. * • Assumption $3$: assume that $z$ alone is sufficient to learn $y$, which leads to $p(y|x,z)=p(y|z)$. We make the above assumptions based on the idea that the values of both the WiFi RSSIs and the GPS coordinates are related to the user’s real physical position. Hence either the WiFi RSSI values or the GPS coordinates contain sufficient information about the real user physical position (which we represent with the latent variable $z$). This suggests that we can use $x$ to compute $p(z|x)$ (encoding step) and then use $z$ to compute $p(y|z)$ (predicting step). The above assumptions will facilitate the derivation of our model. ### III-B Model In a maximum a posteriori (MAP) modeling setting, the parameters of the model are related not only to the dataset but also to the prior of the parameters: $\displaystyle p(\lambda|D)\propto p(D|\lambda)q(\lambda)$ (1) where $D$ is the dataset, $\lambda$ is the model parameters, $p(\lambda|D)$ is the posterior, $p(D|\lambda)$ is the likelihood and $q(\lambda)$ is the prior. Applying such a setting to our problem, the prior of the latent representation $z$, $q(z)$, and the posterior $p(z|x)$ can both be represented by Gaussian distributions. Using Variational Inference, $p(z|x)$ can be calculated via a neural network. In Variational Autoencoders, one assumes that there is a latent distribution of $z$ which can be used to reconstruct the original input $x$. Hence the information Markov chain of VAEs is $x\rightarrow z\rightarrow x^{\prime}$, where $x^{\prime}$ is the reconstructed input. 
Accordingly, the loss function can be written as: $\displaystyle\mathcal{L}(D,\theta,\phi)$ $\displaystyle~{}=\mathop{{}\mathbb{E}}{{}_{z\sim p_{\phi}(z|x)}}[p_{\theta}(x|z)]$ $\displaystyle\qquad- D_{KL}[p_{\phi}(z|x)||q(z)]$ (2) where $D_{KL}$ represents the Kullback–Leibler (KL) divergence, which measures the closeness between the posterior and the prior, $\phi$ is the parameters of the encoder network, $\theta$ is the parameters of the decoder network, and $q(z)$ is an uninformative prior of $z$, for which we can use a standard Normal distribution $\mathcal{N}(0,\mathbb{I})$. Furthermore, according to the Information Bottleneck principle [21], let $x$ be the input, $y$ be the learning target and $z$ be the representation; then we have the following optimization objective: $\displaystyle\text{max}~{}I(z;y)~{}\text{s.t.}~{}I(z;x)\leq I_{c}$ (3) where $I$ is the mutual information and $I_{c}$ is the information constraint. Equivalently, if we apply the Karush-Kuhn-Tucker (KKT) conditions to Eq. (3), we obtain the following Lagrangian: $\displaystyle\mathcal{L}_{IB}=I(z;y)-\beta I(z;x)$ (4) where $\beta$ is a Lagrange multiplier. $\beta$-VAEs leverage Eq. (4) to formulate a constrained variational framework, so that the VAE loss (2) becomes: $\displaystyle\mathcal{L}(D,w,\phi)$ $\displaystyle~{}=\mathop{{}\mathbb{E}}{{}_{z\sim p_{\phi}(z|x)}}[p_{w}(x|z)]$ $\displaystyle\qquad-\beta D_{KL}[p_{\phi}(z|x)||q(z)]$ (5) Since our learning task is supervised, as opposed to VAEs and $\beta$-VAEs, we have the information Markov chain $x\rightarrow z\rightarrow y$. In contrast with $\beta$-VAEs [10], [5], based on Eq. (3) and the assumptions we have made, we know that the latent variable $z$ can be represented by $x$ alone ($p(z|x,y)=p(z|x)$) and the output $y$ can depend on $z$ alone ($p(y|x,z)=p(y|z)$). For this reason, we can replace the term $p(x|z)$ in Eq. (5) with $p(y|z)$. 
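Because both $p_{\phi}(z|x)$ and the prior $q(z)=\mathcal{N}(0,\mathbb{I})$ are diagonal Gaussians, the KL term appearing in these objectives has the closed form $D_{KL}=\frac{1}{2}\sum_{i}(\mu_{i}^{2}+\sigma_{i}^{2}-1-\ln\sigma_{i}^{2})$. The NumPy sketch below (with illustrative values, not taken from the paper) checks this closed form against a Monte Carlo estimate:

```python
import numpy as np

def kl_diag_gaussian_std_normal(mu, sigma):
    """Closed-form D_KL[N(mu, diag(sigma^2)) || N(0, I)], summed over dimensions."""
    return 0.5 * np.sum(mu**2 + sigma**2 - 1.0 - np.log(sigma**2))

rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0, 0.2])       # illustrative posterior mean
sigma = np.array([0.8, 1.2, 0.5])     # illustrative posterior std

# Monte Carlo estimate: E_{z ~ p}[log p(z) - log q(z)]
z = mu + sigma * rng.standard_normal((200_000, 3))
log_p = -0.5 * np.sum(((z - mu) / sigma)**2 + np.log(2*np.pi*sigma**2), axis=1)
log_q = -0.5 * np.sum(z**2 + np.log(2*np.pi), axis=1)
mc_kl = np.mean(log_p - log_q)

print(kl_diag_gaussian_std_normal(mu, sigma), mc_kl)  # the two estimates agree closely
```

This closed form is what makes the KL term cheap to evaluate exactly during training, so only the expectation term requires sampling.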
As a result, we have the following optimization objective for our model: $\displaystyle\underset{w,~{}\phi}{\mathrm{argmax}}$ $\displaystyle~{}\mathop{{}\mathbb{E}}{{}_{D}}[\mathop{{}\mathbb{E}}{{}_{p_{\phi}(z|x)}}[\log p_{w}(y|z)]]$ $\displaystyle~{}s.t.~{}D_{KL}[p_{\phi}(z|x),q(z)]\leq\epsilon$ (6) where $D=\\{x,y\\}$ is the dataset, $w$ is the parameters of the predictor network, and $\epsilon$ is a positive constant with a small value. Further, based on Eq. (4), the Lagrangian form of Eq. (6) can be written as: $\displaystyle\mathcal{L}(D,w,\phi)$ $\displaystyle~{}=\mathop{{}\mathbb{E}}{{}_{z\sim p_{\phi}(z|x)}}[p_{w}(y|z)]$ $\displaystyle\qquad-\beta D_{KL}[p_{\phi}(z|x)||q(z)]$ (7) Finally, Eq. (7) is the loss function of the proposed model. In contrast with VAEs and $\beta$-VAEs, which are unsupervised learning models, our model is an end-to-end supervised learning model. As shown in Fig. 1, $p_{\phi}(z|x)$ represents the encoder neural network and $p_{w}(y|z)$ represents the predictor neural network. Figure 1: Comparisons. ### III-C Model Solver To solve Eq. (7), we need to adopt some special techniques. First, to compute the term $D_{KL}[p_{\phi}(z|x)||q(z)]$, we can use the reparameterization trick proposed in [16], in which the random variable $z$ is decomposed into a combination of the mean $\mu$ and the standard deviation $\sigma$: $\displaystyle z=\mu_{z}+\sigma_{z}\odot\epsilon_{z}$ (8) where $\mu_{z}$ and $\sigma_{z}$ are calculated via neural networks, and $\epsilon_{z}$ is sampled from a standard diagonal Normal distribution. Afterwards we need to calculate the term $\mathop{{}\mathbb{E}}{{}_{z\sim p_{\phi}(z|x)}}[p_{w}(y|z)]$. This term cannot be solved directly, but we can use the Monte Carlo method to compute it. If we adopt Monte Carlo sampling, then Eq. 
(7) becomes: $\displaystyle\mathcal{L}(D,w,\phi)=\frac{1}{N}\sum_{n=1}^{N}\mathop{{}\mathbb{E}_{\epsilon_{z}\sim p(\epsilon_{z})}}[p_{w}(y_{n}|f_{\phi}(x_{n},\epsilon_{z}))]$ $\displaystyle-\beta D_{KL}[p_{\phi}(z|x_{n})||q(z)]$ (9) where $N$ denotes the total instance number and $f_{\phi}$ combines the outputs of the encoder network with the sampled noise: $\displaystyle f_{\phi}(x)=\mu_{z}(x)+\sigma_{z}(x)\odot\epsilon_{z}$ (10) Last but not least, $\beta$ is a hyperparameter which balances the encoding term and the prediction term, so it needs to be chosen carefully. ### III-D Computing Output In VAEs and $\beta$-VAEs, one can first obtain new samples from an uninformative standard Gaussian and then use them as the input of the decoder. Since our model is a supervised model, by contrast, once the model is trained we sample from the conditional distribution $p(z|x)$ and feed the sample to the predictor network to compute the final output, which is the same as the training procedure. Algorithm 1 Input: $X$ (inputs), $Y$ (targets). Output: $Y^{\prime}$ (predictions). 1: while $\text{epoch}\leq\text{max epoch}$ do 2: $\mu_{z}$, $\sigma_{z}$ $\leftarrow$ $E_{\phi}(X)$ $\triangleright$ $E_{\phi}$: encoder network 3: $z\sim\mathcal{N}(\mu_{z},\sigma_{z})$ $\triangleright$ sample latent codes 4: $Y^{\prime}$ $\leftarrow$ $F_{y}(z;w)$ $\triangleright$ $F_{y}$: predictor network 5: minimize the loss function $\mathcal{L}(D,w,\phi)$ $\triangleright$ Eq. (9) 6: end while 7: return $Y^{\prime}$ The overall algorithm is summarized in Algorithm 1. ## IV Experimental Results ### IV-A Dataset Description For the validation, we use the UJIindoor dataset [22], whose input dimension is $520$, each dimension representing a WAP. The RSSI values range from $-110$ dB to $0$ dB when the WAPs are detected; otherwise the RSSI values are set to $100$. 
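The loss computation inside the training loop of Algorithm 1 can be sketched in NumPy. The weights below are randomly initialised stand-ins for trained parameters, a single Monte Carlo sample is used per batch, and a Gaussian output likelihood with unit variance is assumed for $p_{w}(y|z)$ (the form of the likelihood is not spelled out above), so this is an illustration of the loss computation rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)

D_IN, D_HID, D_Z, D_OUT, BETA = 520, 512, 5, 2, 1e-6  # dimensions follow the paper

# Randomly initialised weights stand in for trained parameters (illustrative only)
W_enc = rng.normal(0.0, 0.02, (D_IN, D_HID))
W_mu = rng.normal(0.0, 0.02, (D_HID, D_Z))
W_logvar = rng.normal(0.0, 0.02, (D_HID, D_Z))
W_pred = rng.normal(0.0, 0.02, (D_Z, D_OUT))

def vib_loss(x, y):
    """One Monte Carlo sample of the (negated) objective in Eq. (9)."""
    hid = relu(x @ W_enc)                           # encoder hidden layer
    mu, logvar = hid @ W_mu, hid @ W_logvar         # parameters of p(z|x)
    sigma = np.exp(0.5 * logvar)
    z = mu + sigma * rng.standard_normal(mu.shape)  # reparameterization, Eq. (8)
    y_hat = z @ W_pred                              # predictor output
    nll = 0.5 * np.sum((y - y_hat)**2, axis=1)      # Gaussian NLL, unit variance
    kl = 0.5 * np.sum(mu**2 + sigma**2 - 1.0 - logvar, axis=1)
    return float(np.mean(nll + BETA * kl))

x = rng.normal(size=(32, D_IN))   # a batch of scaled RSSI vectors (synthetic here)
y = rng.normal(size=(32, D_OUT))  # scaled latitude/longitude targets (synthetic)
loss = vib_loss(x, y)
print(loss)
```

In an actual training run, the gradient of this loss with respect to the four weight matrices would be computed (e.g. with automatic differentiation) and passed to the Adam optimizer, as specified in Table I below.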
Each RSSI vector is paired with a latitude and longitude as its geo-location label. In our experiments, we use scaled GPS coordinate values for computational convenience. The dataset contains about $20000$ instances. For Experiment 1 and Experiment 2, we use $80\%$ of the dataset for training and the remaining $20\%$ for testing. In Experiment 3, the amount of training data varies. ### IV-B Model Structure
TABLE I: Model Implementation Details
Sub-network | Layer | Parameters | Activation Function
---|---|---|---
Encoder | hidden layer | neuron number: 512; latent dimension: 5 | ReLU
Predictor | hidden layer 1 | neuron number: 512; dropout rate: 0.3 | ReLU
Predictor | hidden layer 2 | neuron number: 512; dropout rate: 0.3 | ReLU
Predictor | hidden layer 3 | neuron number: 512; dropout rate: 0.3 | ReLU
Optimizer: Adam; learning rate: 1e-3
Table I lists the implementation details of our model. The encoder network consists of one hidden layer, and the dimension of the latent codes is set to $5$. In practice, we find that a latent dimension of $5$ is in line with the Minimum Description Length principle [11] for our task. The predictor is composed of three hidden layers of $512$ units each. In particular, in order to improve generalization on test data we increase the model uncertainty by applying the Dropout technique [7] to the hidden layers of the predictor. The optimizer is Adam [15] and the learning rate is $1\text{e-}3$. ### IV-C Experiment 1 In the loss function of the proposed model, the constant $\beta$ enforces the constraint of the optimization, balancing the prediction term $\mathbb{E}_{z\sim p_{\phi}(z|x)}[\log p_{w}(y|z)]$ against the compression term $D_{KL}[p_{\phi}(z|x)||q(z)]$. A larger $\beta$ value means the model is more compressive of the input and less expressive for the output, and vice versa.
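Putting Table I together with Eqs. (8) and (9), a shape-level sketch of the model and its objective can be written in plain numpy as below (random weights, a single Monte Carlo sample, dropout omitted for brevity; the Gaussian output likelihood with unit variance and the exponential parameterization of $\sigma_{z}$ are our assumptions, not stated in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)

def dense(n_in, n_out):
    # He-style initialization suitable for ReLU layers
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in), np.zeros(n_out)

# Encoder: 520 -> 512 -> (mu_z, sigma_z) in R^5   (sizes from Table I)
We, be = dense(520, 512)
Wm, bm = dense(512, 5)
Ws, bs = dense(512, 5)

# Predictor: 5 -> 512 -> 512 -> 512 -> 2 (scaled longitude/latitude)
layers = [dense(5, 512), dense(512, 512), dense(512, 512), dense(512, 2)]

def encode(x):
    h = relu(x @ We + be)
    return h @ Wm + bm, np.exp(h @ Ws + bs)      # mu_z, sigma_z > 0

def predict(z):
    h = z
    for W, b in layers[:-1]:
        h = relu(h @ W + b)                      # dropout from Table I omitted
    W, b = layers[-1]
    return h @ W + b

def objective(x, y, beta=1e-6):
    """Single-sample Monte Carlo estimate of Eq. (9), with q(z) = N(0, I)."""
    mu, sigma = encode(x)
    z = mu + sigma * rng.standard_normal(mu.shape)            # Eq. (8)
    log_lik = -0.5 * np.sum((y - predict(z)) ** 2, axis=1)    # Gaussian, unit var.
    kl = 0.5 * np.sum(mu**2 + sigma**2 - 1.0 - 2.0 * np.log(sigma), axis=1)
    return float(np.mean(log_lik - beta * kl))

x = rng.random((8, 520))
y = rng.random((8, 2))
mu, sigma = encode(x)
```

In an actual implementation one would ascend the gradient of `objective` with Adam at learning rate 1e-3, as specified in Table I.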
Therefore, different $\beta$ values result in different modeling performance. To find the optimal value, we test $\beta$ values ranging from $1\text{e-}3$ to $1\text{e-}8$ for our model. From the results shown in Fig. 2, we can see that the proposed model performs best when $\beta$ is $1\text{e-}6$. Thus, we hereafter set $\beta$ to $1\text{e-}6$ for the proposed model in all following experiments. Fig. 3 shows the ground truth and the test predictions of our model. It can be seen that the proposed model computes the user location coordinates accurately from the corresponding WiFi fingerprints. In addition, Fig. 4 shows how the latent distribution is related to the building IDs and floor IDs. Figure 2: Changing the value of $\beta$. (a) Ground truth. (b) Test result. Figure 3: Experimental results. (a) Latent variables labeled with the building IDs. (b) Latent variables labeled with the floor IDs. Figure 4: Latent variables of dimension $5$, shown as a 2D projection. ### IV-D Experiment 2
TABLE II: Comparison Results
Method | k-NN | GP | MDN-2 | MDN-5 | BNN | Semi-VAE | Proposed
---|---|---|---|---|---|---|---
RMSE | $0.092\pm 2\text{e-}3$ | $0.252\pm 3\text{e-}3$ | $0.099\pm 3\text{e-}4$ | $0.103\pm 3\text{e-}3$ | $1.033\pm 4\text{e-}3$ | $0.088\pm 1\text{e-}3$ | $0.075\pm 6\text{e-}3$
To show the advantages of our method, we run other methods proposed in the literature on the UJIIndoorLoc dataset. k-NN is used as the baseline model. The MDN-2 model refers to a Mixture Density Network with $2$ Gaussian components at the output layer; similarly, MDN-5 is an MDN with $5$ Gaussian components. The Semi-VAE model is a semi-supervised variational autoencoder (VAE) model, which will be explained later. The overall results are shown in Table II. We use the root mean squared error (RMSE) as the evaluation metric.
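One common reading of RMSE figures of the form "mean $\pm$ spread over repeated runs" is sketched here; the aggregation over runs and the use of the per-instance Euclidean error are our assumptions, since the paper does not state them:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean of the squared Euclidean (2-D) location error
    return float(np.sqrt(np.mean(np.sum((y_true - y_pred) ** 2, axis=1))))

def summarize(run_scores):
    # mean and standard deviation across repeated training runs
    scores = np.asarray(run_scores, dtype=float)
    return float(scores.mean()), float(scores.std())

# e.g. RMSE scores from three training runs of one model:
mean, spread = summarize([0.081, 0.070, 0.074])
```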
From the results, we can see that the proposed model achieves the best performance. In practice we also find that, compared to our model, the Gaussian Process model suffers from a heavy computational load and the MDN models are not very stable during the learning process. ### IV-E Experiment 3 Based on our previous assumptions, we can also formulate an alternative, semi-supervised learning approach: the semi-VAE model. Its learning procedure can be described briefly as follows. We first learn a VAE model in an unsupervised manner, which yields $p(z|x)$ and $p(x|z)$. We then carry out a supervised learning step by sampling $z$ from $p(z|x)$ to compute $p(y|z)$. In particular, the semi-VAE model uses both the labeled and unlabeled data for the unsupervised stage and then uses the labeled data for the supervised stage, whereas our proposed model uses only the labeled data, in a single supervised stage. To compare with this semi-supervised approach, we run our model and the other models on different portions of labeled data. As shown in Fig. 5, once the labeled data exceed $10\%$ of the total training data, our method, perhaps surprisingly, has the best performance among all the methods. Figure 5: Results on different portions of labeled data. ### IV-F Discussion Why does the proposed method outperform the other deep learning methods? First, our problem is a regression problem in which the input (RSSI vectors) is relatively high-dimensional while the target (GPS coordinates) is rather low-dimensional, so the input carries information that is redundant for the learning task. If we use a conventional neural network to solve this problem directly, the results are not satisfying. Mixture Density Networks and Bayesian Neural Networks handle this problem by introducing uncertainty into the models.
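For reference, the MDN baselines above are trained by minimizing a Gaussian-mixture negative log-likelihood at the output layer, in the spirit of Bishop [2]; a sketch with isotropic components (our simplification, not necessarily the exact variant used in the experiments) is:

```python
import numpy as np

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of a Gaussian mixture density output.

    pi:    (N, K) mixture weights (rows sum to 1)
    mu:    (N, K, D) component means
    sigma: (N, K) isotropic component standard deviations
    y:     (N, D) targets
    """
    d2 = np.sum((y[:, None, :] - mu) ** 2, axis=2)          # (N, K)
    D = y.shape[1]
    log_comp = (np.log(pi) - D * np.log(sigma)
                - 0.5 * D * np.log(2 * np.pi) - 0.5 * d2 / sigma**2)
    # numerically stable log-sum-exp over the K components
    m = log_comp.max(axis=1, keepdims=True)
    log_mix = m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))
    return float(-np.mean(log_mix))
```

With $K=2$ or $K=5$ components this corresponds to the MDN-2 and MDN-5 heads; in a full model `pi`, `mu` and `sigma` would be produced by the network (softmax / exp activations).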
The difference is that MDNs are maximum-likelihood (MLE) methods while BNNs are maximum-a-posteriori (MAP) methods. Surprisingly, the BNN performs worse than the MDNs on our task, because the uncertainty of a BNN does not depend on the input data. Variational Autoencoders were originally designed as generative approaches for producing new sample data. For our problem, we can use a VAE to learn a latent representation of the input data first; the model can then be extended straightforwardly into a semi-supervised model by using the pre-learned representation to obtain the final output. However, in our study we find that applying the Information Bottleneck method to this problem is a better option than the semi-VAE model. The reason is that, with the Information Bottleneck method, we can treat the original task as a constrained optimization problem: the objective is the learning task and the constraint is the data representation. That is to say, the Variational Information Bottleneck model directly finds the optimal representation for the learning task, whereas the semi-VAE model finds the representation that best reconstructs the original inputs. ## V Conclusions Interpreting WiFi fingerprints as real user locations via neural networks is a tricky problem. In this work, we combined Information Bottleneck theory with Variational Inference to propose a novel deep learning model for WiFi fingerprint-based user location recognition. The proposed model consists of two neural networks, an encoder and a predictor. Following Information Bottleneck theory, the encoder network finds an optimal representation of the input data and mitigates the negative effect of nuisance information on the learning task, while the predictor network uses the latent representation to compute the final output. The main advantages of the proposed model are that it is scalable to large datasets, computationally stable, and robust to noisy information.
To evaluate our model, we ran it and previous models on the real-world WiFi fingerprint dataset; the final results verify its effectiveness and show its advantages over the existing approaches. For future research, we plan to explore other methods from information theory and Variational Inference to improve the performance of our models or to develop other applications. ## Acknowledgment The authors would like to thank the China Scholarship Council for the financial support. ## References * [1] Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016. * [2] Christopher M Bishop. Mixture density networks. 1994. * [3] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015. * [4] Sinem Bozkurt, Gulin Elibol, Serkan Gunal, and Ugur Yayan. A comparative study on machine learning algorithms for indoor positioning. In 2015 International Symposium on Innovations in Intelligent SysTems and Applications (INISTA), pages 1–8. IEEE, 2015. * [5] Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$-vae. arXiv preprint arXiv:1804.03599, 2018. * [6] Andrei Cramariuc, Heikki Huttunen, and Elena Simona Lohan. Clustering benefits in mobile-centric wifi positioning in multi-floor buildings. In 2016 International Conference on Localization and GNSS (ICL-GNSS), pages 1–6. IEEE, 2016. * [7] George E Dahl, Tara N Sainath, and Geoffrey E Hinton. Improving deep neural networks for lvcsr using rectified linear units and dropout. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8609–8613. IEEE, 2013. * [8] Brian Ferris, Dieter Fox, and Neil D Lawrence. Wifi-slam using gaussian process latent variable models.
In IJCAI, volume 7, pages 2480–2485, 2007. * [9] Brian Ferris, Dirk Hähnel, and Dieter Fox. Gaussian processes for signal strength-based location estimation. In Proceedings of Robotics: Science and Systems, 2006. * [10] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR, 2(5):6, 2017. * [11] Geoffrey E Hinton and Richard S Zemel. Autoencoders, minimum description length and helmholtz free energy. In Advances in Neural Information Processing Systems, pages 3–10, 1994. * [12] Minh Tu Hoang, Brosnan Yuen, Xiaodai Dong, Tao Lu, Robert Westendorp, and Kishore Reddy. Recurrent neural networks for accurate rssi indoor localization. arXiv preprint arXiv:1903.11703, 2019. * [13] Mai Ibrahim, Marwan Torki, and Mustafa ElNainay. Cnn based indoor localization using rss time-series. In 2018 IEEE Symposium on Computers and Communications (ISCC), pages 01044–01049. IEEE, 2018. * [14] Kyeong Soo Kim, Sanghyuk Lee, and Kaizhu Huang. A scalable deep neural network architecture for multi-building and multi-floor indoor localization based on wi-fi fingerprinting. Big Data Analytics, 3(1):4, 2018. * [15] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. * [16] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. * [17] Michał Nowicki and Jan Wietrzykowski. Low-effort place recognition with wifi fingerprints using deep learning. In International Conference on Automation, pages 575–584. Springer, 2017. * [18] Weizhu Qian, Fabrice Lauri, and Franck Gechter. Supervised and semi-supervised deep probabilistic models for indoor positioning problems. arXiv preprint, 2019. * [19] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra.
Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. * [20] Xudong Song, Xiaochen Fan, Chaocan Xiang, Qianwen Ye, Leyu Liu, Zumin Wang, Xiangjian He, Ning Yang, and Gengfa Fang. A novel convolutional neural network based indoor localization framework with wifi fingerprinting. IEEE Access, 7:110698–110709, 2019. * [21] Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000. * [22] Joaquín Torres-Sospedra, Raúl Montoliu, Adolfo Martínez-Usó, Joan P Avariento, Tomás J Arnau, Mauri Benedito-Bordonau, and Joaquín Huerta. Ujiindoorloc: A new multi-building and multi-floor database for wlan fingerprint-based indoor localization problems. In 2014 international conference on indoor positioning and indoor navigation (IPIN), pages 261–270. IEEE, 2014. * [23] Simon Yiu and Kai Yang. Gaussian process assisted fingerprinting localization. IEEE Internet of Things Journal, 3(5):683–690, 2015. * [24] Yanru Zhong, Zhixiang Yuan, Yiyuan Li, and Bing Yang. A wifi positioning algorithm based on deep learning. In 2019 7th International Conference on Information, Communication and Networks (ICICN), pages 99–104. IEEE, 2019.
# Kirby diagrams and 5-colored graphs representing compact 4-manifolds Maria Rita Casali, Department of Physics, Mathematics and Computer Science, University of Modena and Reggio Emilia, Via Campi 213 B, I-41125 Modena (Italy); Paola Cristofori, Department of Physics, Mathematics and Computer Science, University of Modena and Reggio Emilia, Via Campi 213 B, I-41125 Modena (Italy) ###### Abstract It is well-known that in dimension 4 any framed link $(L,c)$ uniquely represents the PL 4-manifold $M^{4}(L,c)$ obtained from $\mathbb{D}^{4}$ by adding 2-handles along $(L,c)$. Moreover, if trivial dotted components are also allowed (i.e. in the case of a Kirby diagram $(L^{(*)},d)$), the associated PL 4-manifold $M^{4}(L^{(*)},d)$ is obtained from $\mathbb{D}^{4}$ by adding 1-handles along the dotted components and 2-handles along the framed components. In this paper we study the relationships between framed links and/or Kirby diagrams and the representation theory of compact PL manifolds by edge-colored graphs: in particular, we describe how to construct algorithmically a (regular) 5-colored graph representing $M^{4}(L^{(*)},d)$, directly “drawn over” a planar diagram of $(L^{(*)},d)$, or equivalently how to algorithmically obtain a triangulation of $M^{4}(L^{(*)},d)$. As a consequence, the procedure yields triangulations for any closed (simply-connected) PL 4-manifold admitting handle decompositions without 3-handles. Furthermore, upper bounds for both the invariants gem-complexity and regular genus of $M^{4}(L^{(*)},d)$ are obtained, in terms of the combinatorial properties of the Kirby diagram. Keywords: framed link, Kirby diagram, PL 4-manifold, handle decomposition, edge-colored graph, regular genus, gem-complexity. 2020 Mathematics Subject Classification: 57K40 - 57M15 - 57K10 - 57Q15.
## 1 Introduction Among combinatorial tools representing PL manifolds, framed links (and/or Kirby diagrams) turn out to be very synthetic ones, both in the 3-dimensional setting and in the 4-dimensional one, while edge-colored graphs have the advantage of representing all compact PL manifolds and of allowing the definition and computation of interesting PL invariants in arbitrary dimension (such as the regular genus, which extends the Heegaard genus, and the gem-complexity, similar to Matveev’s complexity of a 3-manifold). Previous works exist establishing a connection between the two theories, both in the 3-dimensional and 4-dimensional setting ([26], [5], [7]): they make use of the so-called edge-colored graphs with boundary, which are dual to colored triangulations of PL manifolds with non-empty boundary, and fail to be regular. More recently, a unifying method has been introduced and studied, so as to represent all compact PL manifolds by means of regular colored graphs, via the notion of singular manifold associated to a PL manifold with non-empty boundary. The purpose of the present work is to update the relationship between framed links/Kirby diagrams and colored graphs (or, equivalently, colored triangulations) in dimension 4, by making use of regular 5-colored graphs representing compact PL 4-manifolds. The new tool turns out to be significantly more efficient than the classic one, both with regard to the simplicity and algorithmicity of the procedure and with regard to the possibility of estimating graph-defined PL invariants directly from the Kirby diagram. As is well-known, a framed link is a pair $(L,c)$, where $L$ is a link in $\mathbb{S}^{3}$ with $l\geq 1$ components and $c=(c_{1},c_{2},\dots,c_{l})$ is an $l$-tuple of integers.
$(L,c)$ represents - in dimension 3 - the 3-manifold $M^{3}(L,c)$ obtained from $\mathbb{S}^{3}$ by Dehn surgery along $(L,c)$, as well as - in dimension 4 - the (simply-connected) PL 4-manifold $M^{4}(L,c)$, whose boundary coincides with $M^{3}(L,c)$, obtained from $\mathbb{D}^{4}$ by adding 2-handles along $(L,c)$. Moreover, by virtue of a celebrated result of [29] and [25], if $M^{3}(L,c)=\#_{r}(\mathbb{S}^{1}\times\mathbb{S}^{2})$ (with $r\geq 0$), then the framed link $(L,c)$ also represents the closed PL 4-manifold $\overline{M^{4}(L,c)}$ obtained from $M^{4}(L,c)$ by adding - in a unique way - $r$ 3-handles and a 4-handle. However, while it is well-known that every 3-manifold $M^{3}$ admits a framed link $(L,c)$ so that $M^{3}=M^{3}(L,c)$, it is an open question whether or not each closed simply-connected PL 4-manifold $M^{4}$ may be represented by a suitable framed link (or, even more, whether $M^{4}$ admits a so-called special handle decomposition, i.e. a handle decomposition lacking 1-handles and 3-handles: see [24, Problem 4.18], [28], [11]). As far as general compact PL 4-manifolds (with empty or connected boundary) are concerned, it is necessary to extend the notion of framed link, so as to include also the case of trivial (i.e. unknotted and unlinked) dotted components, which represent 1-handles of the associated handle decomposition of the manifold: in this way, any framed link $(L^{(m)},d)$ admitting $m\geq 1$ trivial dotted components - which is properly called a Kirby diagram - uniquely represents the compact PL 4-manifold $M^{4}(L^{(m)},d)$ obtained from $\mathbb{D}^{4}$ by adding 1-handles according to the $m$ trivial dotted components and 2-handles along the framed components.
Note that the boundary of $M^{4}(L^{(m)},d)$ coincides with $M^{3}(L,c)$, $(L,c)$ being the framed link obtained from the Kirby diagram $(L^{(m)},d)$ by substituting each dotted component with a 0-framed one; hence, in case $M^{3}(L,c)=\#_{r}(\mathbb{S}^{1}\times\mathbb{S}^{2})$ (with $r\geq 0$), the Kirby diagram $(L^{(m)},d)$ also uniquely represents the closed PL 4-manifold $\overline{M^{4}(L^{(m)},d)}$ obtained from $M^{4}(L^{(m)},d)$ by adding - in a unique way - $r$ 3-handles and a 4-handle. In this paper we describe how to obtain algorithmically a regular 5-colored graph representing $M^{4}(L,c)$ (resp. $M^{4}(L^{(m)},d)$) directly “drawn over” a planar diagram of $(L,c)$ (resp. of $(L^{(m)},d)$): see Procedure B and Theorem 7 in Section 3 (see Procedure C and Theorem 12 in Section 5). Hence, the algorithms allow the explicit construction of triangulations of the compact 4-manifolds associated to framed links and Kirby diagrams111Indeed, both procedures are going to be implemented in a C++ program, connected to the topological software package Regina ([4]): see R.A. Burke, Triangulating Exotic 4-Manifolds, in preparation.. As a consequence, the procedures yield upper bounds for both the invariants regular genus and gem-complexity of the represented $4$-manifolds.
As regards framed links, the upper bounds - which significantly improve the ones obtained in [5] - are summarized in the following statement, where $m_{\alpha}$ denotes the number of $\alpha$-colored regions in a chess-board coloration of $L$, by colors $\alpha$ and $\beta$ say, with the convention that the infinite region is $\alpha$-colored; furthermore, if $w_{i}$ and $c_{i}$ denote respectively the writhe and the framing of the $i$-th component of $L$ (for each $i\in\{1,\ldots,l\}$, $l$ being the number of components of $L$), we set: $\bar{t}_{i}=\begin{cases}|w_{i}-c_{i}|&\text{if}\ w_{i}\neq c_{i}\\ 2&\text{otherwise}\end{cases}$ ###### Theorem 1 Let $(L,c)$ be a framed link with $l$ components and $s$ crossings. Then the following upper bound for the regular genus of $M^{4}(L,c)$ holds: $\mathcal{G}(M^{4}(L,c))\leq m_{\alpha}+l$ Moreover, if $L$ is not the trivial knot, then the gem-complexity of $M^{4}(L,c)$ satisfies the following inequality: $k(M^{4}(L,c))\leq 4s-l+2\sum_{i=1}^{l}\bar{t}_{i}$ As regards Kirby diagrams $(L^{(m)},d)$, the estimate for the gem-complexity involves the quantity $\bar{t}_{i}$, defined exactly as in the case of framed links, but only for the framed components, while the estimate for the regular genus involves a quantity depending on the construction (i.e. the quantity $u$ appearing in Theorem 12), which is bounded above by the number of undercrossings of the framed components.222Note that previous work [7] did not yield upper bounds for gem-complexity or regular genus, since the combinatorial properties of the obtained 5-colored graph with boundary representing $M^{4}(L^{(m)},d)$ could not be “a priori” determined.
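The bounds of Theorem 1 are elementary to evaluate from a diagram's combinatorial data; the helper below is our own transcription of the statement (all names are ours):

```python
def t_bar(w, c):
    # \bar{t}_i as defined just before Theorem 1, from writhe w and framing c
    return abs(w - c) if w != c else 2

def genus_bound(m_alpha, l):
    """Upper bound m_alpha + l for the regular genus G(M^4(L, c))."""
    return m_alpha + l

def gem_complexity_bound(s, writhes, framings):
    """Upper bound 4s - l + 2 * sum(t_bar_i) for the gem-complexity
    k(M^4(L, c)), valid when L is not the trivial knot."""
    l = len(writhes)
    return 4 * s - l + 2 * sum(t_bar(w, c) for w, c in zip(writhes, framings))

# e.g. a one-component diagram with 3 crossings, writhe 3, framing 0:
bound = gem_complexity_bound(3, [3], [0])   # 4*3 - 1 + 2*3 = 17
```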
###### Theorem 2 Let $(L^{(m)},d)$ be a Kirby diagram with $s$ crossings, $l$ components, whose first $m\geq 1$ are dotted, and $\bar{s}$ undercrossings of the framed components; then, $\mathcal{G}(M^{4}(L^{(m)},d))\leq\ s+\bar{s}+(l-m)+1$ Moreover, if $L$ is different from the trivial knot, $k(M^{4}(L^{(m)},d))\leq\ 2s+2\bar{s}+2m-1+2\sum_{i=m+1}^{l}\bar{t}_{i}$ Various examples are presented, including infinite families of framed links for which the above upper bound for the regular genus turns out to be sharp (Example 1 and Example 2 in Section 3). Moreover, the process is applied in order to obtain a pair of 5-colored graphs representing an exotic pair of compact PL 4-manifolds (i.e. a pair of 4-manifolds which are TOP-homeomorphic but not PL-homeomorphic), thus opening the search for possible graph-defined PL invariants distinguishing them (Example 4 in Section 3, with related Figures 8 and 9). Note that, although for better understanding the procedure for framed links is presented in a separate section of the paper, it is simply a particular case of the one for Kirby diagrams with $m\geq 1$ dotted components. Hence, if we denote by $(L^{(*)},d)$ an arbitrary Kirby diagram (possibly without dotted components), we can concisely state that the paper shows how to obtain a $5$-colored graph representing the compact $4$-manifold $M^{4}(L^{(*)},d)$, directly “drawn over” the Kirby diagram $(L^{(*)},d)$. Finally, we point out that, if the associated $3$-manifold is the $3$-sphere, then the obtained $5$-colored graph actually represents the closed $4$-manifold $\overline{M^{4}(L^{(*)},d)}$, too. Hence, the procedure yields triangulations for any closed (simply-connected) PL 4-manifold admitting handle decompositions without 3-handles.
In the general case of Kirby diagrams representing a closed 4-manifold $\overline{M^{4}(L^{(*)},d)}$ (i.e., according to [29], in case of Heegaard diagrams for closed $4$-manifolds), we hope soon to be able to extend the above procedure, in order to construct algorithmically - at least in a wide set of situations, when the boundary 3-manifold may be combinatorially recognized as $\\#_{r}(\mathbb{S}^{1}\times\mathbb{S}^{2})$ (with $r\geq 1$) - a $5$-colored graph representing $\overline{M^{4}(L^{(*)},d)}$. ## 2 Colored graphs representing PL manifolds In this section we will briefly recall some basic notions about the representation of compact PL manifolds by regular colored graphs (crystallization theory). For more details we refer to the survey papers [19] and [12]. From now on, unless otherwise stated, all spaces and maps will be considered in the PL category, and all manifolds will be assumed to be connected and orientable333Actually all concepts and results exist also, with suitable adaptations, for non-orientable manifolds; however, since the present paper focuses on the relationship between Kirby diagrams and colored graphs, we will restrict to the orientable case.. Crystallization theory was first developed for closed manifolds; the extension to the case of non-empty boundary, that is more recent, is performed by making use of the wider class of singular manifolds. ###### Definition 1 A singular (PL) $n$-manifold is a closed connected $n$-dimensional polyhedron admitting a simplicial triangulation where the links of vertices are closed connected $(n-1)$-manifolds, while the links of all $h$-simplices of the triangulation with $h>0$ are (PL) $(n-h-1)$-spheres. Vertices whose links are not PL $(n-1)$-spheres are called singular. ###### Remark 1 If $N$ is a singular $n$-manifold, then a compact $n$-manifold $\check{N}$ is easily obtained by deleting small open neighbourhoods of its singular vertices. 
Obviously $N=\check{N}$ iff $N$ is a closed manifold, otherwise $\check{N}$ has non-empty boundary (without spherical components). Conversely, given a compact $n$-manifold $M$, a singular $n$-manifold $\widehat{M}$ can be constructed by capping off each component of $\partial M$ by a cone over it. Note that, by restricting ourselves to the class of compact $n$-manifolds with no spherical boundary components, the above correspondence is bijective and so singular $n$-manifolds and compact $n$-manifolds of this class can be associated to each other in a well-defined way. For this reason, throughout the present work, we will make a further restriction considering only compact manifolds without spherical boundary components. Obviously, in this wider context, closed $n$-manifolds are characterized by $M=\widehat{M}.$ ###### Definition 2 An $(n+1)$-colored graph ($n\geq 2$) is a pair $(\Gamma,\gamma)$, where $\Gamma=(V(\Gamma),E(\Gamma))$ is a multigraph (i.e. multiple edges are allowed, but no loops) which is regular of degree $n+1$ (i.e. each vertex has exactly $n+1$ incident edges), and $\gamma:E(\Gamma)\rightarrow\Delta_{n}=\\{0,\ldots,n\\}$ is a map which is injective on adjacent edges (edge-coloration). In the following, for sake of concision, when the coloration is clearly understood, we will drop it in the notation for a colored graph. As usual, we will call order of a colored graph the number of its vertices. For every $\\{c_{1},\dots,c_{h}\\}\subseteq\Delta_{n}$ let $\Gamma_{\\{c_{1},\dots,c_{h}\\}}$ be the subgraph obtained from $\Gamma$ by deleting all the edges that are not colored by $\\{c_{1},\dots,c_{h}\\}$. Furthermore, the complementary set of $\\{c\\}$ (resp. $\\{c_{1},\dots,c_{h}\\}$) in $\Delta_{n}$ will be denoted by $\hat{c}$ (resp. $\hat{c}_{1}\cdots\hat{c}_{h}$). 
The connected components of $\Gamma_{\\{c_{1},\dots,c_{h}\\}}$ are called $\\{c_{1},\dots,c_{h}\\}$-residues of $\Gamma$; their number will be denoted by $g_{\\{c_{1},\dots,c_{h}\\}}$ (or, for short, by $g_{c_{1},c_{2}}$ and $g_{\hat{c}}$ if $h=2$ and $h=n$ respectively). For any $(n+1)$-colored graph $\Gamma$, an $n$-dimensional simplicial cell- complex $K(\Gamma)$ can be constructed in the following way: * • the $n$-simplexes of $K(\Gamma)$ are in bijective correspondence with the vertices of $\Gamma$ and each $n$-simplex has its vertices injectively labeled by the elements of $\Delta_{n}$; * • if two vertices of $\Gamma$ are $c$-adjacent ($c\in\Delta_{n}$), the $(n-1)$-dimensional faces of their corresponding $n$-simplices that are opposite to the $c$-labeled vertices are identified, so that equally labeled vertices coincide. $|K(\Gamma)|$ turns out to be an $n$-pseudomanifold and $\Gamma$ is said to represent it. Note that, by construction, $\Gamma$ can be seen as the 1-skeleton of the dual complex of $K(\Gamma)$. As a consequence there is a bijection between the $\\{c_{1},\dots,c_{h}\\}$-residues of $\Gamma$ and the $(n-h)$-simplices of $K(\Gamma)$ whose vertices are labeled by $\hat{c}_{1}\cdots\hat{c}_{h}$. 
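The residues just described are simply the connected components of the subgraph spanned by the chosen colors; encoding a colored graph as a map vertex $\to$ {color: neighbor} (our own convention, valid since edges of the same color form a perfect matching), they can be counted as follows:

```python
def residues(graph, colors):
    """Count the {colors}-residues of an edge-colored regular graph.

    graph: dict vertex -> dict color -> adjacent vertex.
    """
    seen, count = set(), 0
    for start in graph:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(graph[v][c] for c in colors if c in graph[v])
    return count

# Order-2 4-colored graph: two vertices joined by four differently
# colored edges (the standard gem of the 3-sphere).
S3 = {0: {c: 1 for c in range(4)}, 1: {c: 0 for c in range(4)}}
```

For this graph every $\{c_{1},c_{2}\}$-residue count $g_{c_{1},c_{2}}$ equals 1, matching the bijection with the 2-simplices of $K(\Gamma)$.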
In particular, given an $(n+1)$-colored graph $\Gamma$, each connected component of $\Gamma_{\hat{c}}$ ($c\in\Delta_{n}$) is an $n$-colored graph representing the disjoint link444Given a simplicial cell-complex $K$ and an $h$-simplex $\sigma^{h}$ of $K$, the disjoint star of $\sigma^{h}$ in $K$ is the simplicial cell-complex obtained by taking all $n$-simplices of $K$ having $\sigma^{h}$ as a face and identifying only their faces that do not contain $\sigma^{h}.$ The disjoint link, $lkd(\sigma^{h},K)$, of $\sigma^{h}$ in $K$ is the subcomplex of the disjoint star formed by those simplices that do not intersect $\sigma^{h}.$ of a $c$-labeled vertex of $K(\Gamma)$, that is also (PL) homeomorphic to the link of this vertex in the first barycentric subdivision of $K.$ Therefore, we can characterize $(n+1)$-colored graphs representing singular (resp. closed) $n$-manifolds as satisfying the condition that for each color $c\in\Delta_{n}$ any $\hat{c}$-residue represents a connected closed $(n-1)$-manifold555In case of polyhedra arising from colored graphs, the condition about links of vertices obviously implies the one about links of $h$-simplices, with $h\geq 0.$ (resp. the $(n-1)$-sphere). Furthermore, in virtue of the bijection described in Remark 1, an $(n+1)$-colored graph $\Gamma$ is said to represent a compact $n$-manifold $M$ with no spherical boundary components (or, equivalently, to be a gem of $M$, where gem means Graph Encoding Manifold) if $\Gamma$ represents its associated singular manifold, i.e. if $|K(\Gamma)|=\widehat{M}$. Actually, if $\partial M\neq\emptyset$, $K(\Gamma)$ naturally gives rise to a “triangulation” of $M$ consisting of partially truncated $n$-simplexes obtained by removing small open neighbourhoods of the singular vertices of $\widehat{M}$. Therefore, in the present paper, by a little abuse of notation, we will call $K(\Gamma)$ a triangulation of $M$ also in the case of a compact manifold with non-empty boundary. 
The following theorem extends to the boundary case a well-known result - originally stated in [31] \- founding the combinatorial representation theory for closed manifolds of arbitrary dimension via regular colored graphs. ###### Theorem 3 ([13]) Any compact orientable (resp. non-orientable) $n$-manifold with no spherical boundary components admits a bipartite (resp. non-bipartite) $(n+1)$-colored graph representing it. If $\Gamma$ is a gem of a compact $n$-manifold, an $n$-residue of $\Gamma$ will be called singular if it does not represent $\mathbb{S}^{n-1}.$ Similarly, a color $c$ will be called singular if at least one of the $\hat{c}$-residues of $\Gamma$ is singular. An advantage of colored graphs as representing tools for compact $n$-manifolds is the possibility of combinatorially defining PL invariants. One of the most important and studied among them is the (generalized) regular genus extending to higher dimension the classical genus of a surface and the Heegaard genus of a $3$-manifold. Spheres are characterized by having null regular genus ([16]), while classification results according to regular genus and concerning $4$\- and $5$-manifolds can be found in [14], [6], [12], [8] both for the closed and for the non-empty boundary case. The definition of the invariant relies on the following result about the existence of a particular type of embedding of colored graphs into closed surfaces. ###### Proposition 4 ([20]) Let $\Gamma$ be a bipartite666Since this paper concerns only orientable manifolds, we have restricted the statement only to the bipartite case, although a similar result holds also for non-bipartite graphs. $(n+1)$-colored graph of order $2p$. 
Then for each cyclic permutation $\varepsilon=(\varepsilon_{0},\ldots,\varepsilon_{n})$ of $\Delta_{n}$, up to inverse, there exists a cellular embedding, called _regular_ , of $\Gamma$ into an orientable closed surface $F_{\varepsilon}(\Gamma)$ whose regions are bounded by the images of the $\\{\varepsilon_{j},\varepsilon_{j+1}\\}$-colored cycles, for each $j\in\mathbb{Z}_{n+1}$. Moreover, the genus $\rho_{\varepsilon}(\Gamma)$ of $F_{\varepsilon}(\Gamma)$ satisfies $2-2\rho_{\varepsilon}(\Gamma)=\sum_{j\in\mathbb{Z}_{n+1}}g_{\varepsilon_{j},\varepsilon_{j+1}}+(1-n)p.$ (1) ###### Definition 3 The regular genus of an $(n+1)$-colored graph $\Gamma$ is defined as $\rho(\Gamma)=min\\{\rho_{\varepsilon}(\Gamma)\ |\ \varepsilon\ \text{cyclic permutation of \ }\Delta_{n}\\};$ the (generalized) regular genus of a compact $n$-manifold $M$ is defined as $\mathcal{G}(M)=\min\\{\rho(\Gamma)\ |\ \Gamma\ \text{gem of \ }M\\}.$ Within crystallization theory a notion of “complexity” of a compact $n$-manifold arises naturally and, similarly to other concepts of complexity (for example Matveev’s complexity for 3-manifolds) is related to the minimum number of $n$-simplexes in a colored triangulation of the associated singular manifold: ###### Definition 4 The (generalized) gem-complexity of a compact $n$-manifold $M$ is defined as $k(M)=\min\\{p-1\ |\ \exists\textit{\ a gem of\ }M\textit{\ with \ }2p\textit{\ vertices}\\}$ Important tools in crystallization theory are combinatorial moves transforming colored graphs without affecting the represented manifolds (see for example [19], [18], [3], [26], [27]); we will recall here only the most important ones, while other moves will be introduced in the following sections. 
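As a small numerical check of formula (1), the order-2 gem of $\mathbb{S}^{3}$ (two vertices joined by four differently colored edges) has $\rho_{\varepsilon}=0$ for every permutation $\varepsilon$, consistent with the sphere having null regular genus; the counting code below is our own sketch:

```python
from itertools import permutations

def count_components(graph, colors):
    # connected components of the subgraph spanned by the given colors
    seen, count = set(), 0
    for start in graph:
        if start not in seen:
            count += 1
            stack = [start]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(graph[v][c] for c in colors)
    return count

n, p = 3, 1   # dimension 3, order 2p = 2
gem = {0: {c: 1 for c in range(n + 1)}, 1: {c: 0 for c in range(n + 1)}}

def rho(eps):
    # genus of the regular embedding for permutation eps, solved from (1):
    # 2 - 2*rho = sum_j g_{eps_j, eps_{j+1}} + (1 - n) * p
    s = sum(count_components(gem, [eps[j], eps[(j + 1) % (n + 1)]])
            for j in range(n + 1))
    return (2 - s - (1 - n) * p) / 2
```

Here each $g_{\varepsilon_{j},\varepsilon_{j+1}}$ equals 1, so $2-2\rho_{\varepsilon}=4+(1-3)\cdot 1=2$ and $\rho_{\varepsilon}=0$ (for simplicity the code tries all orderings of $\Delta_{3}$, not only cyclic ones up to inverse).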
###### Definition 5 An $r$-dipole ($1\leq r\leq n$) of colors $c_{1},\ldots,c_{r}$ in an $(n+1)$-colored graph $\Gamma$ is a subgraph of $\Gamma$ consisting of two vertices joined by $r$ edges, colored by $c_{1},\ldots,c_{r}$, such that the vertices belong to different $\hat{c}_{1}\ldots\hat{c}_{r}$-residues of $\Gamma$. An $r$-dipole can be eliminated from $\Gamma$ by deleting the subgraph and welding the remaining hanging edges according to their colors; in this way another $(n+1)$-colored graph $\Gamma^{\prime}$ is obtained. The addition of the dipole to $\Gamma^{\prime}$ is the inverse operation. The dipole is called proper if $|K(\Gamma)|$ and $|K(\Gamma^{\prime})|$ are (PL) homeomorphic. ###### Proposition 5 ([21, Proposition 5.3]) An $r$-dipole ($1\leq r\leq n$) of colors $c_{1},\ldots,c_{r}$ in an $(n+1)$-colored graph $\Gamma$ is proper if and only if at least one of the two connected components of $\Gamma_{\hat{c}_{1}\ldots\hat{c}_{r}}$ intersecting the dipole represents the $(n-r)$-sphere. Without going into details, we point out that - as proved in the quoted paper - the elimination (or the addition) of a proper dipole corresponds to a re-triangulation of a suitable ball embedded in the cell-complex associated to the colored graph. ###### Remark 2 Note that, if $\Gamma$ represents a compact $n$-manifold $M$, then all $r$-dipoles with $1<r\leq n$ are proper; further, if $M$ has either empty or connected boundary, then $1$-dipoles are proper, too. Given an arbitrary $(n+1)$-colored graph representing a compact $n$-manifold $M$ with empty or connected boundary, by eliminating all possible (proper) $1$-dipoles we can always obtain an $(n+1)$-colored graph $\Gamma$ still representing $M$ and such that for each color $c\in\Delta_{n}$, $\Gamma_{\hat{c}}$ is connected.
Such a colored graph is called a crystallization of $M.$ Moreover, it is always possible to assume - up to permutation of the color set - that any gem (and, in particular, any crystallization) of such a manifold has color $n$ as its (unique) possible singular color. Finally, as already hinted at in the Introduction, we recall that a graph-based representation for compact PL manifolds with non-empty boundary - different from the one considered in this section - was already introduced by Gagliardi in the eighties (see [19]) by means of colored graphs failing to be regular. More precisely, any compact $n$-manifold can be represented by a pair $(\Lambda,\lambda)$, where $\lambda$ is still an edge-coloration on $E(\Lambda)$ by means of $\Delta_{n}$, but $\Lambda$ may miss some (or even all) $n$-colored edges: such a $(\Lambda,\lambda)$ is said to be an $(n+1)$-colored graph with boundary, regular with respect to color $n$, and vertices missing the $n$-colored edge are called boundary vertices. However, a connection between these different kinds of representation can be established through an easy combinatorial procedure, called capping-off. ###### Proposition 6 ([17]) Let $(\Lambda,\lambda)$ be an $(n+1)$-colored graph with boundary, regular with respect to color $n$, representing the compact $n$-manifold $M$. Given a color $c\in\Delta_{n-1}$, let $(\Gamma,\gamma)$ be the regular $(n+1)$-colored graph obtained from $\Lambda$ by _capping-off with respect to color $c$_, i.e. by joining two boundary vertices by an $n$-colored edge whenever they belong to the same $\\{c,n\\}$-colored path in $\Lambda.$ Then, $(\Gamma,\gamma)$ represents the singular $n$-manifold $\widehat{M}$, and hence $M$, too. ## 3 From framed links to $5$-colored graphs In this section we will present a construction that enables us to obtain $5$-colored graphs representing all compact (simply-connected) $4$-manifolds associated to framed links, i.e. Kirby diagrams without dotted components.
Note that such a class of compact $4$-manifolds also contains each closed (simply-connected) $4$-manifold admitting a special handle decomposition ([28, Section 3.3]), i.e. a handle decomposition containing no $1$- and $3$-handles. As already recalled in the Introduction, for each framed link $(L,c)$ ($c=(c_{1},\dots,c_{l})$, with $c_{i}\in\mathbb{Z}$ $\forall i\in\\{1,\dots,l\\}$, $l$ being the number of components of $L$), we denote by $M^{4}(L,c)$ the $4$-manifold with boundary obtained from $\mathbb{D}^{4}$ by adding $l$ 2-handles according to the framed link $(L,c)$. The boundary of $M^{4}(L,c)$ is the closed orientable 3-manifold $M^{3}(L,c)$ obtained from $\mathbb{S}^{3}$ by Dehn surgery along $(L,c)$. In case $M^{3}(L,c)\cong\mathbb{S}^{3}$, we will consider, and still denote by $M^{4}(L,c),$ the closed $4$-manifold obtained by adding a further $4$-handle. Now, let us suppose that the link $L$ is embedded in $\mathbb{S}^{3}=\mathbb{R}^{3}\cup\\{\infty\\}$ so that it admits a regular projection $\pi\ :\mathbb{S}^{3}\to\mathbb{R}^{2}\times\\{0\\};$ in the following we will identify $L$ with its planar diagram $\pi(L)$, thus referring to arcs, crossings and regions of $L$ instead of $\pi(L).$ Similarly, by the writhe of a component $L_{i}$ of $L$ (denoted by $w(L_{i})$) we mean the writhe of the corresponding component of $\pi(L).$ For each $i\in\\{1,\ldots,l\\}$, we say that $L_{i}$ needs $|c_{i}-w(L_{i})|$ “additional curls”, which are positive or negative according to whether $c_{i}$ is greater or less than $w(L_{i})$ (see Figure 1). In [5] a construction is described, yielding a $4$-colored graph representing the $3$-manifold associated to a given framed link. The procedure consists of the following steps. PROCEDURE A - from $\mathbf{(L,c)}$ to $\mathbf{\Lambda(L,c)}$ representing $\mathbf{M^{3}(L,c)}$: 1.
Each crossing of $L$ gives rise to the order eight graph in Figure 2, while each possible (whether already in $L$ or additional) curl gives rise to one of the order four graphs of Figure 3-left or Figure 3-right, according to the curl being positive or negative. Figure 1: Positive (left) and negative (right) curls Figure 2: 4-colored graph corresponding to a crossing Figure 3: 4-colored graphs corresponding to a positive curl (left) and a negative curl (right) 2. The hanging $0$- and $1$-colored edges of the above graphs should be “pasted” together so that every region of $L$, having $r$ crossings on its boundary, gives rise to a $\\{1,2\\}$-colored cycle of length $2r$ (with each 1-colored edge corresponding to a part of the boundary between two crossings), while each component $L_{i}$ ($i\in\\{1,\ldots,l\\}$), having $s_{i}$ crossings and $t_{i}$ additional curls, gives rise to two $\\{0,3\\}$-colored cycles of length $2(s_{i}+t_{i}).$ ###### Remark 3 As pointed out in [5], $\Lambda(L,c)$ can be directly “drawn over” $L$ (see for example Figure 4, obtained by applying Procedure A to the trefoil knot, with framing $c=+1$). In particular, if $a$ is the part of an arc of $L$ lying between two adjacent crossings, there are exactly two $1$-colored edges of $\Lambda(L,c)$ that are “parallel” to $a$, one for each region of $L$ having $a$ on its boundary. Moreover, note that - by possibly adding to $L$ a trivial pair of opposite additional curls - a particular subgraph $Q_{i}$, called quadricolor, can be selected in $\Lambda(L,c)$ for each component $L_{i}$ of $L$ ($i\in\\{1,\dots,l\\}$). A quadricolor consists of four vertices $\\{P_{0},\ P_{1},\ P_{2},\ P_{3}\\}$ such that $P_{s}$ and $P_{s+1}$ are connected by an $s$-colored edge (for each $s\in\mathbb{Z}_{4}$) and $P_{s}$ does not belong to the $\\{s+1,s+2\\}$-colored cycle shared by the other three vertices.
It is not difficult to see that - by virtue of the above described Procedure A - such a situation arises with $\\{P_{0},\ P_{2},\ P_{3}\\}$ belonging to the subgraph corresponding to a curl and $P_{1}$ to an adjacent undercrossing or curl of the same sign (see again Figure 4, where the vertices of the quadricolor are highlighted). Figure 4: The $4$-colored graph $\Lambda(L,c)$ representing $M^{3}(L,c)$, for $c=+1$ and $L=$ trefoil Let us now describe how to construct, starting from a given framed link, a 5-colored graph which will be proved to represent the $4$-manifold associated to the framed link itself. PROCEDURE B - from $\mathbf{(L,c)}$ to $\mathbf{\Gamma(L,c)}$ (representing $\mathbf{M^{4}(L,c)}$): 1. Let $\Lambda(L,c)$ be the 4-colored graph constructed from $(L,c)$ according to Procedure A. 2. For each component $L_{i}$ of $L$ ($i\in\\{1,\dots,l\\}$), choose a quadricolor $Q_{i}$, according to Remark 3. For each $i\in\\{1,\dots,l\\}$, add a triad of $4$-colored edges between the vertices $P_{2r}$ and $P_{2r+1}$, $\forall r\in\\{0,1,2\\}$, involved in the quadricolor $\mathcal{Q}_{i}$ (as shown in Figure 5). 3. Add $4$-colored edges between the remaining vertices of $\Lambda(L,c)$, so as to “double” the $1$-colored ones. Figure 5: Main step yielding $\Gamma(L,c)$ Figure 6 shows the $5$-colored graph $\Gamma(L,c)$ in the case of the trefoil knot with framing $+1$. Figure 6: The $5$-colored graph $\Gamma(L,c)$, for $c=+1$ and $L=$ trefoil The following theorem states that - as already announced - the $5$-colored graph $\Gamma(L,c)$ represents $M^{4}(L,c)$. Moreover, the theorem also states the existence of a further 5-colored graph representing $M^{4}(L,c)$, with reduced regular genus, whose estimate involves the number $m_{\alpha}$ of $\alpha$-colored regions in a chess-board coloration of $L$ by two colors, say $\alpha$ and $\beta$, with the convention that the infinite region is $\alpha$-colored.
With this aim, if $(L,c)$ is a framed link with $l$ components and $s$ crossings, let us recall that, for each $i\in\\{1,\ldots,l\\}$, we set $\bar{t}_{i}=\begin{cases}|w_{i}-c_{i}|&\text{if }w_{i}\neq c_{i}\\ 2&\text{otherwise}\end{cases}$ where $w_{i}$ denotes the writhe of the $i$-th component of $L.$ ###### Theorem 7 * (i) For each framed link $(L,c)$, the $5$-colored graph $\Gamma(L,c)$ obtained via Procedure B represents the compact $4$-manifold $M^{4}(L,c);$ it has regular genus less than or equal to $s+l+1$ and, if $L$ is different from the trivial knot (more precisely, if the projection $\pi(L)$ is different from the standard diagram of the trivial knot; this case, which is already well-known (see [8]), is nevertheless discussed in detail in Example 1), its order is $8s+4\sum_{i=1}^{l}\bar{t}_{i};$ * (ii) via a standard sequence of graph moves, a $5$-colored graph, still representing $M^{4}(L,c)$, can be obtained, whose regular genus is less than or equal to $m_{\alpha}+l$, while the regular genus of its $\hat{4}$-residue, representing $\partial M^{4}(L,c)=M^{3}(L,c),$ is less than or equal to $m_{\alpha}.$ Theorem 7 will be proved in Section 4. As a direct consequence of Theorem 7, upper bounds can be established for both the invariants regular genus and gem-complexity of a compact $4$-manifold represented by a framed link $(L,c)$, in terms of the combinatorial properties of the link itself, as already stated in Theorem 1 in the Introduction. Proof of Theorem 1 The upper bound for the regular genus of $M^{4}(L,c)$ trivially follows from Theorem 7 (ii).
As regards the upper bound for the gem-complexity of $M^{4}(L,c)$, we have to make use of the computation of the order of $\Gamma(L,c)$ obtained in Theorem 7 (i), but also to note that - as already pointed out in [5] - the 4-colored graph $\Lambda(L,c)$ has exactly $l$ $\hat{2}$-residues, and that the same happens for $\Gamma(L,c)$; hence, by deleting $l-1$ (proper) $1$-dipoles, a new $5$-colored graph $\Gamma^{\prime}(L,c)$ representing $M^{4}(L,c)$ may be obtained, with $\\#V(\Gamma^{\prime}(L,c))=\\#V(\Gamma(L,c))-2(l-1)=8s-2l+2+4\sum_{i=1}^{l}\bar{t}_{i}.$ $\Box$ The case of the trivial knot is discussed in the following example. ###### Example 1 Let $(K_{0},c)$ be the trivial knot with framing $c\in\mathbb{N}\cup\\{0\\}$; if $c\geq 2$, then $K_{0}$ requires $c$ additional positive curls and the $5$-colored graph $\Gamma(K_{0},c)$, with $4c$ vertices, which is obtained by applying Procedure B, turns out to coincide with the one that in [8] is proved to represent exactly $\xi_{c}$, the $\mathbb{D}^{2}$-bundle over $\mathbb{S}^{2}$ with Euler number $c$, as expected from Theorem 7. Furthermore, if $c$ is even, it is known that $k(L(c,1))=2c-1$ (see [9, Remark 4.5]); hence $\Gamma(K_{0},c)$ realizes the gem-complexity of $\xi_{c}$, and therefore the second bound of Theorem 1 is sharp. If $c=0$ (resp. $c=1$), then $K_{0}$ requires two positive and two negative (resp. two positive and one negative) additional curls in order to get a quadricolor; however, in this case the resulting graph $\Gamma(K_{0},c)$ admits a sequence of dipole moves, consisting of three $3$-dipole and one $2$-dipole (resp. of two $3$-dipole) cancellations, yielding a minimal order eight crystallization of $\mathbb{S}^{2}\times\mathbb{D}^{2}$ (resp. the minimal order eight crystallization of $\mathbb{CP}^{2}$) obtained in [8] (resp. in [22]).
Note that for each $c\in\mathbb{N}\cup\\{0\\}$, $\Gamma(K_{0},c)$ realizes the regular genus of the represented 4-manifold, which is equal to $2$ ($=m_{\alpha}+l$), as proved in [8]. Hence, for this infinite family of compact 4-manifolds, the first upper bound of Theorem 1 turns out to be sharp. We will end this section with further examples of the described construction. ###### Example 2 Let $(L_{H},c)$ be the Hopf link and $c=(\bar{c},0)$ with $\bar{c}$ even (resp. odd); then Procedure B yields a $5$-colored graph that, by Theorem 7, represents $\mathbb{S}^{2}\times\mathbb{S}^{2}$ (resp. $\mathbb{CP}^{2}\\#(-\mathbb{CP}^{2})$) and realizes its regular genus (which is known to be equal to 4: see [12] and references therein). In particular, if $\bar{c}=0$, a sequence of dipole cancellations and a $\rho$-pair switching (see Definition 6 in Section 5), applied to $\Gamma(L_{H},(0,0))$, yield a $5$-colored graph which belongs to the existing catalogue of crystallizations of $4$-manifolds up to gem-complexity 8 (see [10]; more details about this catalogue, together with other similar ones, can be found at https://cdm.unimore.it/home/matematica/casali.mariarita/CATALOGUES.htm#dimension_4). ###### Example 3 Let $M^{4}(L,c)$ be a linear plumbing of spheres, whose boundary is therefore the lens space $L(p,q)$ such that $(c_{1},\ldots,c_{l})$ is a continued fraction expansion of $-\frac{p}{q}$ ([23]); then, by Theorem 1, the regular genus of $M^{4}(L,c)$ is less than or equal to $2l.$ ###### Example 4 Procedure B, applied to the framed link descriptions given in [30] of an exotic pair (see Figure 7), allows us to obtain two regular 5-colored graphs representing two compact simply-connected PL 4-manifolds $W_{1}$ and $W_{2}$ with the same topological structure that are not PL-homeomorphic: see Figures 8 and 9, which obviously encode two triangulations of $W_{1}$ and $W_{2}$ respectively.
Other applications of the procedures obtained in the present paper, in order to get triangulations of exotic 4-manifolds, will appear in R.A. Burke, Triangulating Exotic 4-Manifolds (in preparation). Figure 7: Framed links representing the exotic pair $W_{1}$ and $W_{2}$ (pictures from [30]) Figure 8: A 5-colored graph representing the compact simply-connected PL 4-manifold $W_{1}$ Figure 9: A 5-colored graph representing the compact simply-connected PL 4-manifold $W_{2}$ ## 4 Proof of Theorem 7 Roughly speaking, the proof of the first statement of Theorem 7 - i.e. the fact that $\Gamma(L,c)$ represents $M^{4}(L,c)$ - will be performed by means of the following steps: * (i) starting from the $4$-colored graph $\Lambda(L,c)$ - already proved to represent $M^{3}(L,c)$ in [5] - we obtain a $4$-colored graph $\Lambda_{smooth}$ representing $\mathbb{S}^{3}$ by suitably exchanging a triad of $1$-colored edges for each component of $L$ (Proposition 9(i)); * (ii) by capping-off with respect to color 1, we obtain a 5-colored graph representing $\mathbb{D}^{4}$; * (iii) by re-establishing the triads of $1$-colored edges, the 5-colored graph $\Gamma(L,c)$ is obtained. Since the only singular 4-residue of $\Gamma(L,c)$ is the $\hat{4}$-residue $\Lambda(L,c)$, $\Gamma(L,c)$ represents a 4-manifold with connected boundary $M^{3}(L,c)$; moreover, $\Gamma(L,c)$ represents $M^{4}(L,c)$ since each triad exchange is proved to correspond to the addition of a 2-handle according to the related framed component (Proposition 9(ii)). Let us now go into details.
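Step (ii) uses the capping-off of Proposition 6, which is operationally simple: each boundary vertex is joined by an $n$-colored edge to the boundary vertex reached by following the unique path alternating colors $c$ and $n$. A minimal Python sketch, assuming (as a purely illustrative convention of ours) that a colored graph with boundary is stored as one partial matching per color:

```python
def cap_off(edges, c, n):
    """Complete the n-colored matching as in Proposition 6: boundary vertices
    (those with no n-colored edge) on the same {c, n}-colored path are joined."""
    capped = dict(edges[n])                 # keep the existing n-colored edges
    done = set()
    for v in edges[c]:
        if v in edges[n] or v in done:
            continue                        # interior vertex, or already capped
        w = edges[c][v]                     # alternate colors c, n, c, n, ...
        while w in edges[n]:
            w = edges[c][edges[n][w]]
        capped[v], capped[w] = w, v         # w is the other boundary endpoint
        done.update((v, w))
    return capped

# Toy example: path a -c- b -n- x -c- d, so a and d are the boundary vertices.
edges = {"c": {"a": "b", "b": "a", "x": "d", "d": "x"},
         "n": {"b": "x", "x": "b"}}
print(cap_off(edges, "c", "n"))   # a and d become n-adjacent
```

Here `c` and `n` are just color labels; in the paper's setting $n$ is the missing color and $c\in\Delta_{n-1}$ is the chosen one.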
Given a framed link $(L,c)$, we can always assume that, for each component $L_{i}$ ($i\in\\{1,\ldots,l\\}$), an additional curl is placed near an undercrossing; as observed in Section 3, such a configuration gives rise, in the $4$-colored graph $\Lambda(L,c)$, to a quadricolor that we denote by $\mathcal{Q}_{i}.$ By cancelling the quadricolor $\mathcal{Q}_{i}$ and pasting the resulting hanging edges of the same color, we obtain a new $4$-colored graph $\Lambda^{(\hat{\imath})}(L,c)$; we call this operation the smoothing of the quadricolor $\mathcal{Q}_{i}.$ The following proposition shows that the smoothing of a quadricolor in a $4$-colored graph obtained from a framed link via Procedure A (see Section 3) turns out to be equivalent to the Dehn surgery on the complementary knot of the involved link component. More precisely, with the above notations, the result can be stated as follows: ###### Proposition 8 If $\Lambda^{(\hat{\imath})}(L,c)$ is the 4-colored graph obtained from $\Lambda(L,c)$ by smoothing the quadricolor of the $i$-th framed component, then $K(\Lambda^{(\hat{\imath})}(L,c))$ is obtained from $K(\Lambda(L,c))$ by Dehn surgery on the complementary knot of $L_{i}.$ Hence, $K(\Lambda^{(\hat{\imath})}(L,c))$ represents the 3-manifold associated to the framed link $(L^{\hat{\imath}},c^{\hat{\imath}})$ obtained from $(L,c)$ by deleting the $i$-th component. _Proof._ Let $(L^{(\tilde{\imath})},c^{(\tilde{\imath})})$ denote the $(l+1)$-component link obtained from $L$ by adding the complementary knot of $L_{i}$, i.e.
a $0$-framed trivial knot linking the component $L_{i}$ geometrically once; moreover, let us suppose that the added trivial component is inserted between the curl and the crossing corresponding to the quadricolor $\mathcal{Q}_{i}.$ Then, let us consider the $4$-colored graph $\Lambda(L^{(\tilde{\imath})},c^{(\tilde{\imath})})$ obtained by applying Procedure A of Section 3 to the framed link $(L^{(\tilde{\imath})},c^{(\tilde{\imath})}).$ $\Lambda(L^{(\tilde{\imath})},c^{(\tilde{\imath})})$ is everywhere like $\Lambda(L,c)$ except “near” the quadricolor $\mathcal{Q}_{i}$, where it contains the subgraph in Figure 10 (we denote by $P_{j},\ j\in\\{0,1,2,3\\}$, the vertices of $\mathcal{Q}_{i}$, even if they no longer form a quadricolor in $\Lambda(L^{(\tilde{\imath})},c^{(\tilde{\imath})})$). In the proof of Lemma 4 of [5] it is shown that the above subgraph yields, through a sequence of eliminations of dipoles, the subgraph in Figure 11. By subsequently cancelling the 2-dipoles of vertices $\\{\bar{\bar{P}}_{0},R^{\prime}_{1}\\},$ $\ \\{\bar{P}_{0},P_{1}\\},$ $\ \\{\bar{P}_{2},\bar{P}_{3}\\},$ $\ \\{\bar{\bar{P}}_{3},R_{3}\\}$, all vertices of the quadricolor $\mathcal{Q}_{i}$ are eliminated and the obtained $4$-colored graph is precisely $\Lambda^{(\hat{\imath})}(L,c).$ Since the addition to $(L,c)$ of the complementary knot of $L_{i}$ corresponds to the Dehn surgery along it, the first part of the statement is proved. Moreover, the last part follows directly by noting that the component $L_{i}$ and its complementary knot constitute a pair of complementary handles, whose cancellation does not affect the represented 3-manifold. $\Box$ Figure 10: Figure 11: ###### Remark 4 Quadricolors in $4$-colored graphs were originally introduced by Lins.
Note that the transformation from $\Lambda(L,c)$ to the $4$-colored graph where the quadricolor $\mathcal{Q}_{i}$ is replaced by the subgraph in Figure 11 corresponds to the substitution, in the pseudocomplex $K(\Lambda(L,c))$, of a solid torus with another solid torus having the same boundary. Hence, as already observed by Lins himself, the smoothing of a quadricolor in any $4$-colored graph is equivalent to performing a Dehn surgery on the represented manifold. The above proposition allows us, when considering $4$-colored graphs arising from framed links, to identify this surgery precisely as the one along the complementary knot of the component naturally associated to the quadricolor. ###### Proposition 9 * (i) The 4-colored graph $\Gamma^{(\hat{\imath})}_{smooth}$ (resp. $\Gamma_{smooth}$), obtained from $\Lambda(L,c)$ by exchanging the triad of $1$-colored edges (according to Figure 12) in a quadricolor of the $i$-th component of $(L,c)$ (resp. in a quadricolor for each framed component of $(L,c)$), represents the 3-manifold associated to the framed link $(L^{\hat{\imath}},c^{\hat{\imath}})$ obtained from $(L,c)$ by deleting the $i$-th component (resp. represents $\mathbb{S}^{3}$). * (ii) Let $\tilde{\Gamma}_{smooth}$ be the 5-colored graph obtained from $\Gamma_{smooth}$ by “capping off” with respect to color $1$. Then, the 5-colored graph $\tilde{\Gamma}^{(i)}$, obtained from $\tilde{\Gamma}_{smooth}$ by exchanging the triad of $1$-colored edges (according to Figure 13) in a quadricolor of the $i$-th component of $(L,c)$, represents the 4-manifold obtained from $\mathbb{D}^{4}$ by adding a 2-handle according to the $i$-th component of $(L,c)$.
Figure 12: Exchanging the triad of 1-colored edges in a quadricolor (I) Figure 13: Exchanging the triad of 1-colored edges in a quadricolor (II) _Proof._ (i) It is sufficient to make use of the proof of Proposition 8 and to note that $\Gamma^{(\hat{\imath})}_{smooth}$ is obtained (modulo the name exchange of $P_{j}$ into $\bar{P}_{j}$, for $j\in\\{0,2,3\\}$) from the subgraph in Figure 11 via cancellation of the two 2-dipoles of vertices $\\{\bar{\bar{P}}_{0},R^{\prime}_{1}\\},\ \\{\bar{\bar{P}}_{3},R_{3}\\}$ in the quadricolor of the $i$-th framed component, while $\Gamma_{smooth}$ is obtained by performing the same procedure for each component of $(L,c)$. (ii) It is easy to check that, by a standard sequence of dipole additions, the 4-colored graph $\Gamma_{smooth}$ may be transformed (modulo the name exchange of $P_{2}$ into $\bar{P}_{2}$ and $P_{j}$ into $\bar{\bar{P}}_{j}$, for $j\in\\{0,3\\}$) into the 4-colored graph $\tilde{\Lambda}_{\hat{4}}(L,c)$, already considered both in [5] and in [7]: more precisely, for each component of the link, it is necessary to add the 2-dipoles of vertices $\\{\bar{\bar{P}}_{0},R^{\prime}_{1}\\}$ and $\\{\bar{\bar{P}}_{3},R_{3}\\}$ shown in Figure 11, and then to add a 2-dipole of vertices $\\{R^{\prime}_{2},R^{\prime}_{3}\\}$ within the $1$-colored edge with endpoints $\\{R^{\prime}_{1},R_{3}\\}$ and the $3$-colored edge with endpoints $\\{\bar{\bar{P}}_{0},R^{\prime}_{1}\\}$. The 4-colored graph $\tilde{\Lambda}_{\hat{4}}(L,c)$ is deducible from Figure 14, which illustrates the main step to obtain the 5-colored graph with boundary $\tilde{\Lambda}(L,c)$, representing $M^{4}(L,c)$, from the 4-colored graph $\Lambda(L,c)$. Moreover, as explained in [7, pp.
442-443], the 1-skeleton of the associated colored triangulation $K(L,c)=K(\tilde{\Lambda}_{\hat{4}})$ of $\mathbb{S}^{3}$ contains two copies $L^{\prime}=L^{\prime}_{1}\cup\dots\cup L^{\prime}_{l}$ and $L^{\prime\prime}=L^{\prime\prime}_{1}\cup\dots\cup L^{\prime\prime}_{l}$ of $L$, with linking number $c_{i}$ between $L^{\prime}_{i}$ and $L^{\prime\prime}_{i}$, for each $i\in\\{1,\dots,l\\}$, and the addition of the triad of $4$-colored edges with endpoints $\\{R_{1},R_{1}^{\prime}\\}$, $\\{R_{2},R_{2}^{\prime}\\}$, $\\{R_{3},R_{3}^{\prime}\\}$ corresponds - as proved in [5, Theorem 3] - to the attachment on the boundary of $\mathbb{D}^{4}$ (i.e. the cone over $K(L,c)$) of a 2-handle whose attaching map sends $L_{i}^{\prime}$ into $L_{i}^{\prime\prime}$ (see Figure 14-right, and Figure 15 for an example of the $5$-colored graph with boundary $\tilde{\Lambda}(L,c)$, where $(L,c)$ is the trefoil knot with framing $+1$; recall that in this type of colored graph some vertices lack incident $4$-colored edges). Figure 14: Main step from $\Lambda(L,c)$ to $\tilde{\Lambda}(L,c)$ Figure 15: The $5$-colored graph with boundary $\tilde{\Lambda}(L,c)$ representing $M^{4}(L,c)$, for $c=+1$ and $L=$ trefoil. Now, if the “capping off” procedure described in Proposition 6 is applied with respect to color $1$, the obtained regular 5-colored graph (which represents the compact 4-manifold obtained from $\mathbb{D}^{4}$ by adding a 2-handle along the $i$-th component of $(L,c)$) turns out to admit a sequence of three 2-dipoles involving only vertices of the quadricolor and never involving color $4$: in fact, they consist of the pairs of vertices $\\{P_{3},R_{3}\\}$, $\\{R_{3}^{\prime},R_{2}^{\prime}\\}$, $\\{P_{0},R_{1}^{\prime}\\}$ in Figure 14 (right).
It is not difficult to see that, after these cancellations, we obtain exactly the (regular) $5$-colored graph $\tilde{\Gamma}^{(i)}$, obtained from $\tilde{\Gamma}_{smooth}$ (which obviously represents $\mathbb{D}^{4}$) by cyclically exchanging the triad of $1$-colored edges (according to Figure 13) in the quadricolor $Q_{i}$ of the $i$-th component of $(L,c)$. $\Box$ ###### Remark 5 Note that a standard sequence of dipole moves exists, transforming $\Gamma^{(\hat{\imath})}_{smooth}$ into $\Lambda(L^{\hat{\imath}},c^{\hat{\imath}})$: it follows $L_{i}$ starting from the quadricolor $Q_{i}$, where the triad of $1$-colored edges has been cyclically exchanged as in Figure 12 (right), by deleting first the 2-dipoles $\\{P_{2},P_{3}\\}$ and $\\{P_{4},P_{5}\\}$, and then all subsequently generated 2-dipoles among pairs of vertices, belonging to different bipartition classes, which are either endpoints of $1$-colored edges “parallel” to adjacent segments of $L_{i}$, or $0$-adjacent vertices of the subgraph associated to an undercrossing of $L_{i}$, until an order two component of the 4-colored graph is obtained, consisting only of the vertex $P_{0}$ and its $2$-adjacent vertex. Obviously, if the procedure is applied for each $i\in\\{1,\dots,l\\}$, a standard sequence of dipole eliminations is obtained, transforming the 4-colored graph $\Gamma_{smooth}$ into the order two 4-colored graph representing $\mathbb{S}^{3}$. Proof of Theorem 7 (i) In order to prove that $\Gamma(L,c)$ represents $M^{4}(L,c)$, it is sufficient to note that the main step yielding $\Gamma(L,c)$, depicted in Figure 5, exactly coincides with the transformation from the 4-colored graph of Figure 12-left (representing $M^{3}(L,c)$) to the 5-colored graph of Figure 13-right (representing $M^{4}(L,c)$, if the procedure is applied to a quadricolor for each component of $(L,c)$).
Hence, Proposition 9 (i) and (ii) ensure that $\Gamma(L,c)$ actually represents the compact 4-manifold obtained from $\mathbb{D}^{4}$ by adding $l$ 2-handles according to the $l$ components of $(L,c)$. Now note that, by construction, the $4$-colored graph $\Lambda(L,c)$ has $8s+4\sum_{i=1}^{l}|w_{i}-c_{i}|$ vertices. As already observed, the presence of a curl near an undercrossing in a component of $L$ yields a quadricolor $Q_{i}$. Therefore, for each $i\in\\{1,\ldots,l\\}$, if $|w_{i}-c_{i}|\neq 0$, then the required addition of curls ensures the existence of a quadricolor relative to $L_{i}$, while if $|w_{i}-c_{i}|=0$ a pair of opposite curls has to be added in order to produce one (except in the case of the trivial knot, which is discussed in Example 1). Since each curl contributes $4$ vertices to the final $5$-colored graph, the statement concerning the order of $\Gamma(L,c)$ is easily proved. With regard to the regular genus of $\Gamma(L,c)$, let us consider the cyclic permutations $\bar{\varepsilon}=(1,0,2,3)$ of $\Delta_{3}$ and $\varepsilon=(1,0,2,3,4)$ of $\Delta_{4}.$ As already pointed out in [5], the construction of $\Lambda(L,c)$ directly yields $\rho_{\bar{\varepsilon}}(\Lambda(L,c))=s+1.$ On the other hand, it is easy to check - via formula (1) - that $2\rho_{\varepsilon}(\Gamma(L,c))-2\rho_{\bar{\varepsilon}}(\Lambda(L,c))=g_{1,3}-g_{3,4}-g_{1,4}+p,$ where $2p$ is the order of $\Gamma(L,c)$ (as well as of $\Lambda(L,c)$). Since, by construction, $g_{3,4}=g_{1,3}$ and $g_{1,4}=p-2l$, we obtain: $\rho_{\varepsilon}(\Gamma(L,c))=\rho_{\bar{\varepsilon}}(\Lambda(L,c))+l.$ (2) The result about the regular genus of $\Gamma(L,c)$ now directly follows.
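These counts are simple enough to evaluate mechanically. The following Python sketch merely applies the closed formulas above (the order $8s+4\sum_{i}\bar{t}_{i}$ from Theorem 7(i), with $\bar{t}_{i}=|w_{i}-c_{i}|$ if $w_{i}\neq c_{i}$ and $\bar{t}_{i}=2$ otherwise, and the genus bound $s+l+1$ obtained by combining equation (2) with $\rho_{\bar{\varepsilon}}(\Lambda(L,c))=s+1$); the function name and interface are our own.

```python
def gamma_order_and_genus_bound(s, writhes, framings):
    """Order of Gamma(L,c) and the genus bound rho_eps(Gamma) = (s + 1) + l,
    for a framed link with s crossings, given per-component writhes and framings."""
    assert len(writhes) == len(framings)
    l = len(writhes)
    t_bar = [abs(w - c) if w != c else 2 for w, c in zip(writhes, framings)]
    order = 8 * s + 4 * sum(t_bar)
    return order, s + l + 1

# Trefoil with framing +1 (s = 3, writhe +3): t_bar = 2, order 32, genus bound 5.
print(gamma_order_and_genus_bound(3, [3], [1]))   # -> (32, 5)
# Trivial knot with framing c >= 2 (s = 0, writhe 0): order 4c, as in Example 1.
print(gamma_order_and_genus_bound(0, [0], [5]))   # -> (20, 2)
```

The second call reproduces Example 1: $\Gamma(K_{0},c)$ has $4c$ vertices and the genus bound equals $2$, the regular genus of $\xi_{c}$.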
(ii) As proved in [5, Theorem 1], the $4$-colored graph $\Lambda(L,c)$ admits a finite sequence of moves, called generalized dipole eliminations (a generalized dipole in a $4$-colored graph representing a closed $3$-manifold is a particular subgraph, whose cancellation factorizes into a sequence of proper dipole moves; from the topological point of view, this move corresponds to a Singer move of type III’ involving a pair of curves in a suitable Heegaard diagram which can be associated to the $4$-colored graph: see [18] for details), which preserve the represented manifold and do not affect the quadricolor structures, but reduce the regular genus. Hence, a new 4-colored graph $\Omega(L,c)$ representing $M^{3}(L,c)$ is obtained, having regular genus $m_{\alpha}$ with respect to the cyclic permutation $\bar{\varepsilon}=(1,0,2,3)$ of $\Delta_{3}$ (see [5] for details). $\Omega(L,c)$ contains a quadricolor for each component of $L$, too, and the results of Proposition 9 (i) and (ii) may be applied, exactly as previously done for $\Lambda(L,c)$, so as to obtain - via the move depicted in Figure 5, performed on a quadricolor for each component of $L$ - a new 5-colored graph $\tilde{\Gamma}(L,c)$ representing $M^{4}(L,c)$. (It is not difficult to check that $\tilde{\Gamma}(L,c)$ could also be obtained through the $5$-colored graph with boundary $\tilde{\Omega}(L,c)$, constructed in [5] by applying the move depicted in Figure 14 for each component of the link: in fact, $\tilde{\Omega}(L,c)$ represents $M^{4}(L,c)$, too, and in order to obtain $\tilde{\Gamma}(L,c)$ it is sufficient to perform the capping-off with respect to color $1$ and to delete three $2$-dipoles for each quadricolor, exactly as done in the proof of Proposition 9 (ii) for $\tilde{\Gamma}_{smooth}$.)
Now, it is not difficult to check that - in full analogy to equation (2) - the following relation holds between the regular genera of $\tilde{\Gamma}(L,c)$ and $\Omega(L,c)$, with respect to $\varepsilon$ and $\bar{\varepsilon}$ respectively: $\rho_{\varepsilon}(\tilde{\Gamma}(L,c))=\rho_{\bar{\varepsilon}}(\Omega(L,c))+l.$ Then, both statements of Theorem 7(ii) directly follow from $\rho_{\bar{\varepsilon}}(\Omega(L,c))=m_{\alpha}$: $\rho_{\varepsilon}(\tilde{\Gamma}(L,c))=m_{\alpha}+l$, while $\rho_{\bar{\varepsilon}}((\tilde{\Gamma}(L,c))_{\hat{4}})=m_{\alpha}$ (since the $\hat{4}$-residue of $\tilde{\Gamma}(L,c)$ is exactly $\Omega(L,c)$). $\Box$ See Figures 16 and 17 for examples of the graphs $\Omega(L,c)$ and $\tilde{\Gamma}(L,c)$ respectively, where $(L,c)$ is the trefoil knot with framing $+1.$ Figure 16: The $4$-colored graph $\Omega(L,c)$ representing $M^{3}(L,c)$, for $c=+1$ and $L=$ trefoil Figure 17: The $5$-colored graph $\tilde{\Gamma}(L,c)$ representing $M^{4}(L,c)$, for $c=+1$ and $L=$ trefoil ## 5 From dotted links to $5$-colored graphs In this section we will take into account the more general case of Kirby diagrams with dotted components, extending the procedure and the results of Section 3. Note that, as a consequence, the class of manifolds involved in the construction includes all closed (simply-connected) $4$-manifolds admitting a handle decomposition without 3-handles (cf. [24, Problem 4.18]), a class which is of particular interest with regard to exotic PL 4-manifolds: see, for example, [1] and [2]. Let $(L^{(m)},d)$ be a Kirby diagram, where $L$ is a link with $l$ components, $L_{i}$ with $i\in\\{1,\ldots,m\\}$ (resp. $i\in\\{m+1,\ldots,l\\}$) being a dotted (resp. framed) component, and $d=(d_{1},\dots,d_{l-m})$, where $d_{i}\in\mathbb{Z}$ $\forall i\in\\{1,\dots,l-m\\}$ is the framing of the $(m+i)$-th (framed) component.
As already recalled in the Introduction, we denote by $M^{4}(L^{(m)},d)$ the $4$-manifold with boundary obtained from $\mathbb{D}^{4}$ by adding $m$ 1-handles according to the dotted components and $l-m$ 2-handles according to the framed components of $(L^{(m)},d)$. The boundary of $M^{4}(L^{(m)},d)$ is the closed orientable 3-manifold $M^{3}(L,c)$ obtained from $\mathbb{S}^{3}$ by Dehn surgery along the associated framed link $(L,c)$, obtained by substituting each dotted component with a 0-framed one, i.e. $c=(c_{1},\dots,c_{l})$, where $c_{i}=\begin{cases}0&1\leq i\leq m\\ d_{i-m}&m+1\leq i\leq l\end{cases}$ In case $M^{3}(L,c)\cong\mathbb{S}^{3}$, we will consider, and still denote by $M^{4}(L^{(m)},d)$, the closed $4$-manifold obtained by adding a further $4$-handle. Before describing the new procedure, the following preliminary notations are needed: * - For each $i\in\\{1,\ldots,m\\},$ let us “mark” two points $H_{i}$ and $H_{i}^{\prime}$ on $L_{i}$, such that they divide $L_{i}$ into two parts, one containing only overcrossings and the other containing only undercrossings of $L$ (note that $1$-handles and $2$-handles may always be re-arranged so as to respect this requirement: see for example [23, Prop. 4.2.7] or [28, Chapter 1 - Principle 1]).
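The substitution of dotted components by $0$-framed ones described above is a simple bookkeeping step; it can be sketched in a few lines of code (an illustrative helper, not from the paper; the function name is ours):

```python
def associated_framing(m, d):
    """Framings c = (c_1, ..., c_l) of the associated framed link (L, c):
    the m dotted components get framing 0, while the framed components
    keep their framings d = (d_1, ..., d_{l-m})."""
    return [0] * m + list(d)

# Example: two dotted components followed by framings (+1, -2)
print(associated_framing(2, [1, -2]))  # [0, 0, 1, -2]
```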
* - For each $j\in\\{m+1,\ldots,l\\},$ let us fix on $L_{j}$ a point $X_{j}$, between a curl and an undercrossing, and let us consider the component $L_{j}$ in the diagram of $L$ as the union of consecutive segments obtained by cutting it not only at undercrossings, but also at overcrossings and at the point $X_{j}.$ * - Then, for each $j\in\\{m+1,\dots,l\\}$ let us “highlight” on $L_{j}$ - starting from $X_{j}$ and in the direction opposite to the undercrossing - a sequence $Y_{j}$ of consecutive segments, so that, for each $i\in\\{1,\ldots,m\\},$ $H_{i}$ and $H_{i}^{\prime}$ belong to the boundary of the same region $\mathcal{R}_{i}$ of the “diagram” obtained from $L$ by deleting the points $X_{m+1},\ldots,X_{l}$ and the segments of the sequences $Y_{m+1},\ldots,Y_{l}$ (with a little abuse of notation we will describe this new diagram as $L-\cup_{j=m+1}^{l}(X_{j}\cup Y_{j})$). Note that $Y_{j}$ can be empty, while it never comprises all segments of $L_{j}.$ Let us denote by $Y=Y_{m+1}\wedge\dots\wedge Y_{l}$ the sequence resulting from the juxtaposition of the sequences of highlighted segments. * - Finally, for each $i\in\\{1,\ldots,m\\}$, let $\bar{e}_{i}$ (resp. $\bar{e}_{i}^{\prime}$) be the $1$-colored edge of $\Lambda(L,c)$ “parallel” to the part of the arc of $L_{i}$ containing the point $H_{i}$ (resp. $H_{i}^{\prime}$) “on the side” of the regions of $L$ merging into $\mathcal{R}_{i}$ (see Remark 3), and let $v_{i}$ (resp. $v_{i}^{\prime}$) be its endpoint belonging to the subgraph corresponding to an undercrossing of the dotted component $L_{i}$. PROCEDURE C - from $\mathbf{(L^{(m)},d)}$ to $\mathbf{\Gamma(L^{(m)},d)}$ (representing $\mathbf{M^{4}(L^{(m)},d)}$): * (a) Let $\Lambda(L,c)$ be the 4-colored graph constructed from $(L,c)$ according to Procedure $A$; in $\Lambda(L,c)$, let us choose a quadricolor $\mathcal{Q}_{j}$ for each (undotted) component $L_{j}\ (j\in\\{m+1,\ldots,l\\})$ in the position corresponding to the point $X_{j}$.
* (b) Follow the sequence $Y=Y_{m+1}\wedge\dots\wedge Y_{l}$, starting, for each $j\in\\{m+1,\ldots,l\\}$ with $Y_{j}\neq\emptyset,$ from the segment corresponding to the pair of $1$-colored edges adjacent to the vertices $P_{4}$ and $P_{5}$ identified by the quadricolor $\mathcal{Q}_{j}$; at each step of the sequence, if $f,f^{\prime}$ is the pair of $1$-colored edges which are “parallel” to the considered highlighted segment, then: if no 4-colored edge has already been added to the endpoints of $f$ and $f^{\prime}$, join, by $4$-colored edges, endpoints of $f$ to endpoints of $f^{\prime}$ belonging to different bipartition classes of $\Lambda(L,c);$ otherwise connect only the endpoints of $f$ and $f^{\prime}$ not already having an incident $4$-colored edge. Moreover, if two consecutive highlighted segments correspond to an undercrossing whose overcrossing does not correspond to previous segments in $Y$, add $4$-colored edges so as to double the pairs of $0$-colored edges within the subgraph corresponding to that crossing. * (c) For each $i\in\\{1,\ldots,m\\}$, add a 4-colored edge, so as to connect $v_{i}$ and ${v_{i}^{\prime}}$. * (d) For each $j\in\\{m+1,\ldots,l\\}$, add a triad of $4$-colored edges between the vertices $P_{2r}$ and $P_{2r+1}$, $\forall r\in\\{0,1,2\\}$, of the quadricolor $\mathcal{Q}_{j}$ (as shown in Figure 5). * (e) Add $4$-colored edges between the remaining vertices of $\Lambda(L,c)$, joining those which belong to the same $\\{1,4\\}$-residue. ###### Remark 6 We point out that a quadricolor always arises in a component $L_{j}$ ($j\in\\{m+1,\dots,l\\}$) of $\Lambda(L,c)$ not only between a curl and an undercrossing but also between two curls with the same sign. Actually, in this last case two quadricolors appear, one for each curl, and either of them can be indifferently chosen as $\mathcal{Q}_{j}$; therefore we put the point $X_{j}$ between the curls, and the sequence $Y_{j}$ can start from either “side” of it.
Moreover, note that the position of the points $X_{j}$ may be suitably chosen so as to minimize the length of the sequence $Y$, provided that the above conditions for the existence of the quadricolor are satisfied. ###### Example 5 Figures 18 and 19 show the result of the above construction applied to the depicted Kirby diagrams. In particular, note that step (b) of Procedure C is not required for the graph of Figure 18, since the highlighted sequence of segments is empty; on the contrary, the case of Figure 19 requires highlighting a suitable set of consecutive segments in the Kirby diagram, as depicted in Figure 20. Via Kirby calculus, it is easy to check that the 5-colored graph in Figure 18 represents the $4$-sphere, while the 5-colored graph in Figure 19 represents $\mathbb{S}^{2}\times\mathbb{D}^{2}$; both facts can also be proved via suitable sequences of dipole moves. Figure 18: A Kirby diagram and the 5-colored graph representing the associated (closed) 4-manifold ($\mathbb{S}^{4}$) Figure 19: A Kirby diagram and the 5-colored graph representing the associated bounded 4-manifold ($\mathbb{S}^{2}\times\mathbb{D}^{2}$) Figure 20: The Kirby diagram of Fig. 19, with points and highlighted segments, according to Procedure C. The yellow highlighted segments form the sequence $Y_{2}$, while $Y_{3}=\emptyset.$ The shaded regions, together with the infinite one, give rise to the region $\mathcal{R}_{1}$ of $L-\cup_{j=2}^{3}(X_{j}\cup Y_{j})$ containing both points $H_{1}$ and $H_{1}^{\prime}.$ ###### Example 6 Given a framed link $(L^{(m)},d)$, the above construction may be implemented in different ways, depending on the choice of the points $X_{i}$ ($i=m+1,\dots,l$) on the framed components (step (a) of Procedure C) and on the choice of the sequence $Y=Y_{m+1}\wedge\dots\wedge Y_{l}$ of highlighted segments (step (b) of Procedure C). Figures 21 and 22 show two possible ways to perform the above choices on the same Kirby diagram: in Figure 21 (resp.
Figure 22) the yellow highlighted segments form the sequence $Y_{3}$, while the green highlighted segments form the sequence $Y_{4}$ (resp. while $Y_{4}=\emptyset$). Note that, in the case of Figure 21, the regions $\mathcal{R}_{1}$ and $\mathcal{R}_{2}$ of $L-\cup_{j=3}^{4}(X_{j}\cup Y_{j})$ coincide: they are obtained by merging the shaded regions, together with the infinite one. On the other hand, in the case of Figure 22, the regions $\mathcal{R}_{1}$ and $\mathcal{R}_{2}$ of $L-\cup_{j=3}^{4}(X_{j}\cup Y_{j})$ are distinct. Figure 21: Figure 22: The proof that the graph obtained via Procedure C really represents $M^{4}(L^{(m)},d)$ is given in Theorem 12. In order to help the reader, we can anticipate that it will be performed by means of the following steps: * (i) starting from the $4$-colored graph $\Lambda(L,c)$ - already proved to represent $M^{3}(L,c)$ in [5] - we obtain a $4$-colored graph $\Lambda_{smooth}$ representing $\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})$ by suitably exchanging a triad of $1$-colored edges for each framed component of $(L^{(m)},d)$ (Proposition 11(i)); * (ii) by capping off with respect to color 1, we obtain a 5-colored graph representing $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$; * (iii) this 5-colored graph is modified by a sequence of moves not affecting the represented 4-manifold (the so-called $\rho_{2}$-pair switchings), so as to have on one boundary component of $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$ a particular structure (called a $\rho_{3}$-pair) for each dotted component of $(L^{(m)},d)$; * (iv) a suitable move ($\rho_{3}$-pair switching) is applied on each such structure, realizing - on the considered boundary component - the attachment of the 1-handles corresponding to the $m$ dotted components of $(L^{(m)},d)$ (Proposition 11(ii)); * (v) by re-establishing the triads of $1$-colored edges of step (i), the 5-colored graph $\Gamma(L^{(m)},d)$ is obtained.
Since its only singular 4-residue is the $\hat{4}$-residue $\Gamma(L,c)$, it represents a 4-manifold with connected boundary $M^{3}(L,c)$; moreover, $\Gamma(L^{(m)},d)$ represents $M^{4}(L^{(m)},d)$ since - similarly as in Procedure B - each triad re-exchanging is proved to correspond to the addition of a 2-handle according to the framed component, on the remaining boundary component (Proposition 9(ii)). In order to go into details, the notion of $\rho$-pair ($\rho$-pairs and their switching were introduced by Lins [26] and subsequently studied in [10], [3] and [15]) and some preliminary results are needed. ###### Definition 6 A $\rho_{h}$-pair ($1\leq h\leq n$) of color $c\in\Delta_{n}$ in a bipartite $(n+1)$-colored graph $\Gamma$ is a pair of $c$-colored edges $(e,f)$ sharing the same $\\{c,i\\}$-colored cycle for each $i\in\\{c_{1},\ldots,c_{h}\\}\subseteq\Delta_{n}.$ The colors $c_{1},\ldots,c_{h}$ are said to be involved, while the other $n-h$ colors are said to be not involved in the $\rho_{h}$-pair. The switching of $(e,f)$ consists in canceling $e$ and $f$ and establishing new $c$-colored edges between their endpoints in such a way as to preserve the bipartition. The topological consequences of the switching of $\rho_{n-1}$- and $\rho_{n}$-pairs have been completely determined in the case of closed $n$-manifolds: see [3], where it is proved that a $\rho_{n-1}$-pair (resp. $\rho_{n}$-pair) switching does not affect the represented $n$-manifold (resp. either induces the splitting into two connected summands, or the “loss” of a $\mathbb{S}^{1}\times\mathbb{S}^{n-1}$ summand in the represented $n$-manifold). In dimension three the study has also been performed in the case of manifolds with boundary: see [15], where more cases are proved to occur. As we will see in the proof of the following Proposition 11, we are particularly interested in the effect of switching $\rho_{2}$- and $\rho_{3}$-pairs in $5$-colored graphs. A useful result is the following.
###### Lemma 10 Let $(e,f)$ be a $\rho_{2}$-pair in a $5$-colored graph $\Gamma$ representing a compact $4$-manifold $M^{4}$ and let $\Gamma^{\prime}$ be obtained from $\Gamma$ by switching the $\rho_{2}$-pair. Then $\Gamma^{\prime}$ represents $M^{4}$, too. Moreover, for each cyclic permutation $\varepsilon$ of $\Delta_{4}$, where $\varepsilon_{k}$ is the color of $(e,f)$: * - if both $\varepsilon_{k-1}$ and $\varepsilon_{k+1}$ are involved, then $\rho_{\varepsilon}(\Gamma^{\prime})=\rho_{\varepsilon}(\Gamma)-1$; * - if neither $\varepsilon_{k-1}$ nor $\varepsilon_{k+1}$ is involved, then $\rho_{\varepsilon}(\Gamma^{\prime})=\rho_{\varepsilon}(\Gamma)+1$; * - if exactly one of $\varepsilon_{k-1}$ and $\varepsilon_{k+1}$ is involved, then $\rho_{\varepsilon}(\Gamma^{\prime})=\rho_{\varepsilon}(\Gamma).$ _Proof._ In order to prove that $\Gamma^{\prime}$ represents $M^{4}$, too, it is sufficient to observe that the switching of $(e,f)$ can be factorized into a sequence of dipole moves as shown in Figure 23, i.e. the addition of a $2$-dipole of the colors not involved in the $\rho_{2}$-pair, followed by the cancellation of a $2$-dipole of the colors involved in the $\rho_{2}$-pair. Note that any $2$-dipole in a $5$-colored graph is proper (see Proposition 5), and hence neither move changes the represented manifold, since - as already pointed out in Section 2 - they correspond to re-triangulations of balls embedded in the cell-complexes associated to the involved colored graphs. With regard to the regular genus of $\Gamma^{\prime}$ with respect to $\varepsilon$, note that the switching of $(e,f)$ increases by one (resp. decreases by one) the number of $\\{\varepsilon_{k},i\\}$-colored cycles of $\Gamma$ if $i$ is an involved (resp. a non-involved) color, while the number of $\\{i,j\\}$-colored cycles with $i,j\neq\varepsilon_{k}$ is not changed. An easy calculation yields the statement.
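The final calculation can be made explicit. Assuming the standard combinatorial formula for the regular genus of a bipartite $(n+1)$-colored graph of order $2p$ (see [20]; here $n=4$), with $g_{i,j}$ denoting the number of $\\{i,j\\}$-colored cycles, the bookkeeping goes as follows (a sketch, not part of the original proof):

```latex
% Regular genus of a bipartite 5-colored graph of order 2p:
%   2 - 2\rho_\varepsilon(\Gamma) = \sum_{i \in \mathbb{Z}_5} g_{\varepsilon_i,\varepsilon_{i+1}} - 3p.
% Only the two summands containing color \varepsilon_k can change under the switching,
% namely g_{\varepsilon_{k-1},\varepsilon_k} and g_{\varepsilon_k,\varepsilon_{k+1}},
% each increasing by 1 if the neighboring color is involved and decreasing by 1 otherwise:
\bigl(2 - 2\rho_\varepsilon(\Gamma')\bigr) - \bigl(2 - 2\rho_\varepsilon(\Gamma)\bigr)
  = \Delta g_{\varepsilon_{k-1},\varepsilon_k} + \Delta g_{\varepsilon_k,\varepsilon_{k+1}}
  = \begin{cases} +2 & \text{both involved} \\ -2 & \text{neither involved} \\ 0 & \text{exactly one involved,} \end{cases}
% which yields, respectively,
\rho_\varepsilon(\Gamma') = \rho_\varepsilon(\Gamma) - 1, \quad
\rho_\varepsilon(\Gamma') = \rho_\varepsilon(\Gamma) + 1, \quad
\rho_\varepsilon(\Gamma') = \rho_\varepsilon(\Gamma).
```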
$\Box$ Figure 23: Factorization of a $\rho_{2}$-pair switching into two proper dipole moves (not affecting the represented 4-manifold) ###### Proposition 11 * (i) The 4-colored graph $\Gamma^{(m)}_{smooth}$, obtained from $\Lambda(L,c)$ by exchanging the triad of $1$-colored edges (according to Figure 12) in a quadricolor for each framed component of $(L^{(m)},d)$, represents $\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})$. * (ii) The 5-colored graph $\bar{\Gamma}^{(m)}_{smooth}$, obtained by applying steps (b) and (c) of Procedure C to $\Gamma^{(m)}_{smooth}$, and then by “capping off” with respect to color $1$, represents the genus $m$ 4-dimensional handlebody $\mathbb{Y}^{4}_{m}$. _Proof._ Part (i) directly follows from Proposition 9 (i), by noting that, if all framed components of $(L^{(m)},d)$ are deleted, only the $m$ dotted components remain, and the associated framed link, consisting of $m$ disjoint trivial 0-framed components, actually represents the 3-manifold $\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})$. As regards part (ii), it is necessary to note that $\Gamma^{(m)}_{smooth}$ gives rise, by “capping off” with respect to color $1$, to a 5-colored graph representing $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$, whose two boundary components - both homeomorphic to $\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})$ - are represented by the (color-isomorphic) subgraphs $\Theta$ and $\Theta^{\prime}$, obtained by deleting the $4$-colored and $1$-colored edges respectively. This 5-colored graph admits $\rho_{2}$-pairs of color $4$ in a suitable sequence induced by the sequence of $2$-dipoles whose cancellation from $\Gamma^{(m)}_{smooth}$ yields $\Lambda(\bigsqcup_{m}K_{0},0)$, the 4-colored graph associated to the trivial link with $m$ 0-framed components (see Remark 5, applied to all framed components of $(L^{(m)},d)$).
The switching of these $\rho_{2}$-pairs is equivalent (up to “capping off” with respect to color $1$) to the addition of $4$-colored edges according to step (b) in $\Lambda(L,c).$ More precisely, the pairs of $4$-colored edges that have to be switched in the sequence of $\rho_{2}$-pairs are exactly the 4-colored edges adjacent to the pairs of vertices constituting the 2-dipoles of the sequence of dipole eliminations (starting, for each $j\in\\{m+1,\ldots,l\\}$ such that $Y_{j}\neq\emptyset$, with the dipole whose vertices are 2-adjacent to the vertices $P_{4}$ and $P_{5}$ identified by the quadricolor $\mathcal{Q}_{j}$) from $\Gamma^{(m)}_{smooth}$ to $\Lambda(\bigsqcup_{m}K_{0},0)$; moreover, the colors involved in each $\rho_{2}$-pair are exactly those (never including color $1$) of the corresponding $2$-dipole. Hence, the graph $\tilde{\Gamma}^{(m)}_{smooth}$, obtained after all $\rho_{2}$-pair switchings, still represents $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$ and one of its boundary components is still represented by $\Theta$, but the other is represented by the 4-colored graph $\Theta^{\prime\prime}$ obtained from $\Theta^{\prime}$ by switching the $\rho_{2}$-pairs induced by the above ones.
We point out that, for each $i\in\\{1,\ldots,m\\}$, the pair of $4$-colored edges having an endpoint in $v_{i}$ and ${v^{\prime}_{i}}$ respectively turns out to form a $\rho_{3}$-pair of color $4$ in $\tilde{\Gamma}^{(m)}_{smooth}.$ In fact, they double $\bar{e}_{i}$ and/or $\bar{e}_{i}^{\prime}$, or they arise from the possible switching of $4$-colored edges doubling $\bar{e}_{i}$ and/or $\bar{e}_{i}^{\prime}$ by the above sequence of $\rho_{2}$-pair switchings; as a consequence, they both belong to the same $\\{0,4\\}$-residue and to the same $\\{3,4\\}$-residue (since $\bar{e}_{i}$ and $\bar{e}_{i}^{\prime}$ share both the same $\\{0,1\\}$-residue and the same $\\{1,3\\}$-residue in $\Lambda(L,c)$), and the sequence of $\rho_{2}$-pair switchings makes them belong also to the same $\\{2,4\\}$-residue (which corresponds to the boundary of the region $\mathcal{R}_{i}$ of $L-\cup_{j=m+1}^{l}(X_{j}\cup Y_{j})$). It is known that the switching of a $\rho_{3}$-pair in a 4-colored graph representing a closed 3-manifold has the effect of “subtracting” an $\mathbb{S}^{1}\times\mathbb{S}^{2}$ summand (see [3] for details); hence the switching of the above $m$ $\rho_{3}$-pairs transforms $\Theta^{\prime\prime}$ into a 4-colored graph representing $\mathbb{S}^{3}$, while the $\hat{4}$-residue $\Theta$ is unaltered and each $\hat{i}$-residue with $i\in\\{0,2,3\\}$ still represents the 3-sphere, as in $\tilde{\Gamma}^{(m)}_{smooth}$ (since a $\rho_{2}$-pair switching has been performed in each affected $\hat{i}$-residue, for $i\in\\{0,2,3\\}$). Moreover, supposing $(e,f)$ to be one of the above $\rho_{3}$-pairs in $\tilde{\Gamma}^{(m)}_{smooth}$, its switching can be factorized as in Figure 24 by inserting a 1-colored edge and subsequently canceling a 3-dipole.
The insertion of the $1$-colored edge in the colored triangulation associated to $\tilde{\Gamma}^{(m)}_{smooth}$ consists in “breaking” a tetrahedral face on the boundary of $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$ corresponding to the $\hat{1}$-residue $\Theta^{\prime\prime}$, and inserting a new pair of $4$-simplices sharing the same $3$-dimensional face opposite to the $1$-labelled vertex; hence, it may be seen as the attachment of a polyhedron homeomorphic to $\mathbb{D}^{3}\times\mathbb{D}^{1}$ to the considered boundary, so as to transform it into a triangulation of $\\#_{m-1}(\mathbb{S}^{1}\times\mathbb{S}^{2})$, without affecting the interior of $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$, nor its boundary corresponding to the $\hat{4}$-residue. (Actually, the switching of each of the $m$ $\rho_{3}$-pairs corresponds to the attaching of a $3$-handle to the boundary corresponding to the $\hat{4}$-residue of $\tilde{\Gamma}^{(m)}_{smooth}$.) Whenever all $m$ $\rho_{3}$-pairs are switched, the $\hat{1}$-residue of the obtained $5$-colored graph comes to represent the $3$-sphere, i.e. the represented $4$-manifold has a connected boundary, corresponding to the (unaltered) $\hat{4}$-residue $\Theta=\Gamma^{(m)}_{smooth}$. On the other hand, the switching of these $\rho_{3}$-pairs is equivalent (up to “capping off” with respect to color $1$) to the addition of $4$-colored edges in $\Lambda(L,c)$ according to step (c); therefore, step (c) of Procedure C can be thought of as the identification $\phi$ between the boundary of a genus $m$ 4-dimensional handlebody $\mathbb{Y}^{4}_{m}$ and the boundary component represented by $\Theta^{\prime\prime}$ in the triangulation of $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$ obtained in step (b).
This proves statement (ii), since $\bar{\Gamma}^{(m)}_{smooth}$ - which admits $4$ as its unique singular color - turns out to represent $\mathbb{Y}^{4}_{m}\cup_{\phi}([\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I)\cong\mathbb{Y}^{4}_{m}.$ $\Box$ Figure 24: Factorization of a $\rho_{3}$-pair switching into two moves, the first (resp. second) one possibly affecting (resp. never affecting) the represented 4-manifold We are now going to prove that the $5$-colored graph $\Gamma(L^{(m)},d)$ obtained via Procedure C (i.e. by applying to $\Lambda(L,c)$ steps (b)-(e)) represents the compact 4-manifold associated to the Kirby diagram; we will also give an estimate of its regular genus and compute its order. With this aim, if $(L^{(m)},d)$ is a Kirby diagram with $l$ components, the first $m>0$ of which are dotted, and $s$ crossings, and $(L,c)$ is its associated framed link, let us set, for each $i\in\\{m+1,\ldots,l\\}$, $\bar{t}_{i}=\begin{cases}|w_{i}-c_{i}|&\text{if }w_{i}\neq c_{i}\\ 2&\text{otherwise}\end{cases}$ where $w_{i}$ denotes the writhe of the $i$-th (framed) component of $L;$ moreover, let us denote by $u$ the number of undercrossings which are passed when following the sequence $Y$, with the condition that the associated overcrossing does not correspond to previous segments in the sequence itself.
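As a quick illustration of this bookkeeping, the quantities $\bar{t}_{i}$ and the resulting order $8s+4\sum_{i=m+1}^{l}\bar{t}_{i}$ of $\Gamma(L^{(m)},d)$ computed in Theorem 12 below can be evaluated mechanically (an illustrative sketch; function names are ours, and the example assumes the standard 3-crossing trefoil diagram with writhe $+3$):

```python
def t_bar(w, c):
    """t-bar_i: |w_i - c_i| if writhe and framing differ, 2 otherwise."""
    return abs(w - c) if w != c else 2

def order_of_gamma(s, writhes, framings):
    """Order 8s + 4*sum(t_bar_i) of Gamma(L^{(m)}, d), valid when L is not
    the trivial knot. 'writhes' and 'framings' list the framed components only."""
    return 8 * s + 4 * sum(t_bar(w, c) for w, c in zip(writhes, framings))

# Trefoil with framing +1 (s = 3 crossings, writhe +3, no dotted components):
# t_bar = |3 - 1| = 2, so the order is 8*3 + 4*2 = 32.
print(order_of_gamma(3, [3], [1]))  # 32
```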
###### Theorem 12 For each Kirby diagram $(L^{(m)},d)$, the bipartite $5$-colored graph $\Gamma(L^{(m)},d)$ represents the compact $4$-manifold $M^{4}(L^{(m)},d).$ Moreover, it has regular genus less than or equal to $s+(l-m)+u+1$ and, if $L$ is different from the trivial knot, its order is $8s+4\sum_{i=m+1}^{l}\bar{t}_{i}.$ _Proof._ In order to prove the first statement, we point out that in the proof of Proposition 11 (ii) we have considered a suitable triangulation of $[\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})]\times I$, and then we have “closed” one of its boundary components by identifying it with the boundary of the genus $m$ 4-dimensional handlebody (via the addition of 4-colored edges according to steps (b) and (c) of Procedure C). Hence, the polyhedron represented by $\bar{\Gamma}^{(m)}_{smooth}$ may be seen as the union of the $0$- and $1$-handles of $M^{4}(L^{(m)},d),$ plus a “collar” on its boundary. Moreover, the “free” boundary, homeomorphic to $\\#_{m}(\mathbb{S}^{1}\times\mathbb{S}^{2})$, is represented by the 4-colored graph $\Theta=(\bar{\Gamma}^{(m)}_{smooth})_{\hat{4}}=\Gamma^{(m)}_{smooth}$. Then, in order to obtain a $5$-colored graph representing $M^{4}(L^{(m)},d),$ it is sufficient to operate on this “free” boundary, so as to perform the addition of a $2$-handle according to each framed component of $(L^{(m)},d)$. Now, the proof of Proposition 9(ii) shows that the goal is achieved by exchanging the triad of $1$-colored edges, according to Figure 13, in the quadricolor $\mathcal{Q}_{j}$ of the $j$-th component of $(L^{(m)},d)$, for each $j\in\\{m+1,\ldots,l\\}$.
Since all these exchanges of $1$-colored edges have the effect of transforming $\Gamma^{(m)}_{smooth}$ into $\Lambda(L,c)$, and step (d) applied to $\Lambda(L,c)$ is equivalent to the exchanging of $1$-colored edges according to Figure 13 applied to $\bar{\Gamma}^{(m)}_{smooth}$, the final 5-colored graph representing the compact $4$-manifold $M^{4}(L^{(m)},d)$ turns out to be obtained by applying steps (b)-(d) directly to $\Lambda(L,c)$, and then by “capping off” with respect to color $1$ (step (e)). In order to give an estimate of the regular genus of $\Gamma(L^{(m)},d)$, we first recall that $\rho_{\bar{\varepsilon}}(\Lambda(L,c))=s+1$ with $\bar{\varepsilon}=(1,0,2,3)$ (see also the proof of Theorem 7(i)), and hence that $s+1$ is also the regular genus, with respect to the permutation $\varepsilon=(1,0,2,3,4)$, of the $5$-colored graph obtained by doubling the 1-colored edges of $\Lambda(L,c)$ by color $4$. Then, we have to analyze how the regular genus is affected by the switchings of $\rho_{2}$- and $\rho_{3}$-pairs and by the exchanging of triads of edges in the quadricolors described in the proofs of Proposition 11 and Theorem 7. Now, let us point out that color $1$ is never involved in the considered $\rho_{2}$-pairs, while color $3$ is involved in only one of the two $\rho_{2}$-pairs corresponding to an undercrossing whose associated overcrossing does not correspond to previous segments in the sequence $Y$. Therefore, by Lemma 10, the regular genus with respect to $\varepsilon$ increases by $u$ when performing the sequence of $\rho_{2}$-pair switchings corresponding to the sequence $Y.$ With regard to the $\rho_{3}$-pairs, since they do not involve color $1$, which is consecutive in $\varepsilon$ to color $4$, the same argument used in the proof of Lemma 10 shows that the regular genus does not change after their switchings.
Finally, the exchanging of the triad of $4$-colored edges in a quadricolor, producing the attaching of a $2$-handle, decreases by two the number of $\\{1,4\\}$-colored cycles, while the numbers of all other bicolored cycles remain unaltered (see Figure 13). Hence, the regular genus increases by one for each quadricolor. Since there are $l-m$ quadricolors, the statement is proved. The proof of the theorem is completed by noting that $\Gamma(L^{(m)},d)$ has exactly the same order as $\Gamma(L,c)$ (and as $\Lambda(L,c)$, too). Hence, its calculation directly follows from Theorem 7 (i). $\Box$ We are now able to prove both upper bounds for the invariants of the 4-manifold associated to a Kirby diagram, already stated in Theorem 2 in the Introduction. Proof of Theorem 2 The upper bound for the regular genus of $M^{4}(L^{(m)},d)$ directly follows from the computation of $\rho_{\bar{\varepsilon}}(\Gamma(L^{(m)},d))$ obtained in Theorem 12, together with the trivial inequality $u\leq\bar{s}$. As regards the upper bound for the gem-complexity, it is sufficient to make use of the computation of the order of $\Gamma(L^{(m)},d)$ obtained in Theorem 12, by pointing out that $\Gamma(L^{(m)},d)$ contains a pair of $3$-dipoles of colors $\\{0,1,4\\}$ for each pair of adjacent undercrossings of dotted components; hence, a new $5$-colored graph $\Gamma^{\prime}(L^{(m)},d)$ representing $M^{4}(L^{(m)},d)$ may be obtained, with $\\#V(\Gamma^{\prime}(L^{(m)},d))=\\#V(\Gamma(L^{(m)},d))-4[(s-\bar{s})-m]=4s+4\bar{s}+4m+4\sum_{i=m+1}^{l}\bar{t}_{i}.$ $\Box$ Acknowledgements. This work was supported by GNSAGA of INDAM and by the University of Modena and Reggio Emilia, project: “Discrete Methods in Combinatorial Geometry and Geometric Topology”. ## References * [1] S. Akbulut, The Dolgachev Surface - Disproving the Harer-Kas-Kirby Conjecture, Comment. Math. Helv. 87 (1) (2012), 187-241. * [2] S. Akbulut, An infinite family of exotic Dolgachev surfaces without 1- and 3- handles, Jour.
of GGT 3 (2009), 22-43. * [3] P. Bandieri and C. Gagliardi, Rigid gems in dimension $n$. Bol. Soc. Mat. Mex., 18(3) (2012), 55–67. * [4] B.A. Burton - R. Budney - W. Pettersson et al., Regina: Software for low-dimensional topology, http://regina-normal.github.io/, 1999–2021. * [5] M. R. Casali, From framed links to crystallizations of bounded 4-manifolds, J. Knot Theory Ramifications 9 (4) (2000), 443-458. * [6] M. R. Casali, On the regular genus of 5-manifolds with free fundamental group, Forum Math. 15 (2003), 465-475. * [7] M. R. Casali, Dotted links, Heegaard diagrams and coloured graphs for PL 4-manifolds, Rev. Mat. Complut. 17(2) (2004), 435-457. * [8] M. R. Casali - P. Cristofori, Classifying compact 4-manifolds via generalized regular genus and G-degree, Ann. Inst. Henri Poincarè D (2022), to appear. arXiv:1912.01302 * [9] M. R. Casali - P. Cristofori, A note about complexity of lens spaces, Forum Math. 27 (2015), 3173–3188. * [10] M. R. Casali - P. Cristofori, Cataloguing PL 4-manifolds by gem-complexity, Electron. J. Combin. 22 (4) (2015), #P4.25. * [11] M. R. Casali - P. Cristofori, Compact $4$-manifolds admitting special handle decompositions, RACSAM 115, 118 (2021). https://doi.org/10.1007/s13398-021-01001-x * [12] M. R. Casali - P. Cristofori - C. Gagliardi, Classifying PL 4-manifolds via crystallizations: results and open problems, in: “Mathematical Tribute to Professor José María Montesinos Amilibia”, Universidad Complutense Madrid (2016). [ISBN: 978-84-608-1684-3] * [13] M. R. Casali - P. Cristofori - L. Grasselli, G-degree for singular manifolds, RACSAM 112 (3) (2018), 693-704. https://doi.org/10.1007/s13398-017-0456-x * [14] M. R. Casali - C. Gagliardi, Classifying PL 5-manifolds up to regular genus seven, Proc. Amer. Math. Soc. 120 (1994), 275-283. * [15] P. Cristofori - E. Fominykh - M. Mulazzani - V. Tarkaev, $4$-colored graphs and knot/link complements, Results in Math. 72 (2017), 471-490. * [16] M. Ferri - C. 
Gagliardi, The only genus zero n-manifold is $\mathbb{S}^{n}$, Proc. Amer. Math. Soc. 85 (1982), 638-642. * [17] M. Ferri - C. Gagliardi, A characterization of punctured n-spheres, Yokohama Math. J. 33 (1985), 29-38. * [18] M. Ferri - C. Gagliardi, Crystallization moves, Pacific J. Math. 100 (1982) 85-103. * [19] M. Ferri - C. Gagliardi - L. Grasselli, A graph-theoretical representation of PL-manifolds. A survey on crystallizations, Aequationes Math. 31 (1986), 121-141. * [20] C. Gagliardi, Extending the concept of genus to dimension $n$, Proc. Amer. Math. Soc. 81 (1981), 473-481. * [21] C. Gagliardi, On a class of 3-dimensional polyhedra, Ann. Univ. Ferrara 33 (1987), 51-88. * [22] C. Gagliardi, On the genus of the complex projective plane, Aequationes Math., 37(2-3) (1989), 130-140. * [23] R.E. Gompf - A.I. Stipsicz, 4-manifolds and Kirby calculus, American Mathematical Society, vol. 20, 1999. * [24] Kirby, R. (ed.): Problems in Low-dimensional Topology AMS/IP Stud. Adv. Math. 2 (2), Geometric topology (Athens, GA, 1993), 35-473 (Amer. Math. Soc. 1997). * [25] F. Laudenbach - V. Poenaru, A note on 4-dimensional handlebodies, Bull. Soc. Math. France, 100 (1972), 337-344. * [26] S. Lins. Gems, computers and attractors for 3-manifolds. Knots and Everything, no. 5. World Scientific, River Edge, NJ, 1995. * [27] S. Lins and M. Mulazzani. Blobs and flips on gems. J. Knot Theory Ramifications, 15(8) (2006), 1001–1035. * [28] R. Mandelbaum, Four-dimensional topology: an introduction, Bull. Amer. Math. Soc. 2 (1980), 1-159. * [29] J.M. Montesinos Amilibia, Heegaard diagrams for closed 4-manifolds, In: Geometric topology, Proc. 1977 Georgia Conference, Academic Press (1979), 219-237. [ISBN 0-12-158860-2] * [30] H. Naoe, Corks with Large Shadow-Complexity and Exotic Four-Manifolds, Experimental Math., 30 (2021), 157-171. https://doi.org/10.1080/10586458.2018.1514332 * [31] M. Pezzana, Sulla struttura topologica delle varietà compatte, Atti Semin. Mat. Fis. Univ. 
Modena 23 (1974), 269-277.
# Extreme case of density scaling: The Weeks-Chandler-Andersen system at low temperatures Eman Attia <EMAIL_ADDRESS> Jeppe C. Dyre <EMAIL_ADDRESS> Ulf R. Pedersen <EMAIL_ADDRESS> “Glass and Time”, IMFUFA, Dept. of Science and Environment, Roskilde University, P. O. Box 260, DK-4000 Roskilde, Denmark ###### Abstract This paper studies numerically the Weeks-Chandler-Andersen (WCA) system, which is shown to obey hidden scale invariance with a density-scaling exponent that varies from below 5 to above 500. This unprecedented variation makes it advantageous to use the fourth-order Runge-Kutta algorithm for tracing out isomorphs. Good isomorph invariance of structure and dynamics is observed over more than three orders of magnitude temperature variation. For all state points studied, the virial potential-energy correlation coefficient and the density-scaling exponent are controlled mainly by the temperature. Based on the assumption of statistically independent pair interactions, a mean-field theory is developed that rationalizes this finding and provides an excellent fit to data at low temperatures and densities. ## I Introduction Density scaling is an important experimental discovery of the last 20 years’ liquid-state research, which by now has been demonstrated for high-pressure data of hundreds of systems Alba-Simionesco _et al._ (2004); Roland _et al._ (2005); Lopez _et al._ (2012); Adrjanowicz _et al._ (2016). The crucial insight is that, in order to characterize a thermodynamic state point, the relevant variable supplementing the temperature $T$ is not the pressure $p$, but the number density $\rho\equiv N/V$ (considering $N$ particles in volume $V$) Alba-Simionesco _et al._ (2004); Roland _et al._ (2005); Lopez _et al._ (2012); Adrjanowicz _et al._ (2016); Gundermann _et al._ (2011); Kivelson _et al._ (1996).
If $\gamma$ is the so-called density-scaling exponent, plotting data for the dynamics as a function of $\rho^{\gamma}/T$ results in a collapse Alba-Simionesco _et al._ (2004); Roland _et al._ (2005); Lopez _et al._ (2012); Adrjanowicz _et al._ (2016). In other words, the dynamics depends on the two variables of the thermodynamic phase diagram only via the single variable $\rho^{\gamma}/T$. This provides a significant rationalization of data, as well as an important hint for theory development. It should be noted, though, that density scaling does not apply universally; for instance, it usually works better for van der Waals liquids than for hydrogen-bonded liquids Roland _et al._ (2005); Adrjanowicz _et al._ (2016). Some time after these developments were initiated, a framework for density scaling was provided in terms of the isomorph theory Gnan _et al._ (2009); Dyre (2014), which links density scaling to Rosenfeld’s excess-entropy scaling method from 1977 Rosenfeld (1977); Dyre (2018). According to isomorph theory, any system with strong correlations between the fixed-volume virial and potential-energy equilibrium fluctuations has curves of invariant structure and dynamics in the thermodynamic phase diagram. These “isomorphs” Gnan _et al._ (2009); Schrøder and Dyre (2014) are defined as curves of constant excess entropy ${S}_{\rm ex}$, which is the entropy minus that of an ideal gas at the same temperature and density (${S}_{\rm ex}<0$ because any system is more ordered than an ideal gas). If the potential energy is denoted by $U$ and the virial by $W$, their Pearson correlation coefficient $R$ is defined by $R\,=\,\frac{\langle\Delta U\Delta W\rangle}{\sqrt{\langle(\Delta U)^{2}\rangle\langle(\Delta W)^{2}\rangle}}\,.$ (1) Here $\Delta$ denotes the deviation from the thermal average and the sharp brackets are canonical ($NVT$) averages. The pragmatic criterion defining “strong” correlation is $R>0.9$ Pedersen _et al._ (2008); Bailey _et al._ (2008a). 
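The fluctuation formula Eq. (1) is straightforward to evaluate from simulation output. Below is a minimal sketch (not part of the original study) that computes $R$, together with the density-scaling exponent defined later in Eq. (2), from $NVT$ time series of $U$ and $W$; the synthetic series and the function name are ours and purely illustrative:

```python
import numpy as np

def uw_fluctuations(U, W):
    """R of Eq. (1) and gamma of Eq. (2) from NVT time series of the
    potential energy U and the virial W (1-d NumPy arrays)."""
    dU = U - U.mean()
    dW = W - W.mean()
    R = (dU * dW).mean() / np.sqrt((dU**2).mean() * (dW**2).mean())
    gamma = (dU * dW).mean() / (dU**2).mean()
    return R, gamma

# Synthetic, strongly correlated series (illustration only; in practice
# U and W come from the simulation output):
rng = np.random.default_rng(0)
U = rng.normal(size=100_000)
W = 5.0 * U + 0.5 * rng.normal(size=100_000)
R, gamma = uw_fluctuations(U, W)  # R close to 1, gamma close to 5
```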
Systems with strong correlations have good isomorphs, i.e., approximate invariance of structure and dynamics along the configurational adiabats Gnan _et al._ (2009). Such systems are termed R-simple, signaling the simplification of having an effectively one-dimensional thermodynamic phase diagram in regard to structure and dynamics when these are given in so-called reduced units (see below). Hydrogen-bonded systems usually have $R<0.9$ and are thus not R-simple Pedersen _et al._ (2008); this explains why density scaling does not apply universally. Isomorph theory is only rigorously correct in the unrealistic case of an Euler-homogeneous potential-energy function that is realized, for instance, in systems with inverse-power-law (IPL) pair potentials Heyes and Branka (2007). Nevertheless, isomorph-theory predictions apply to a good approximation for many systems, e.g., Lennard-Jones (LJ) type liquids Bailey _et al._ (2008b); Gnan _et al._ (2009); Schrøder _et al._ (2011); Yoon _et al._ (2019), the EXP pair-potential system at low temperatures Bacher _et al._ (2019, 2020), simple molecular models Ingebrigtsen _et al._ (2012a); Fragiadakis and Roland (2019); Koperwas _et al._ (2020), polydisperse systems Ingebrigtsen and Tanaka (2015), crystals Albrechtsen _et al._ (2014), nano-confined liquids Ingebrigtsen _et al._ (2013), polymer-like flexible molecules Veldhorst _et al._ (2014), metals Hummel _et al._ (2015); Friedeheim _et al._ (2019), Yukawa plasmas Veldhorst _et al._ (2015); Tolias and Castello (2019), etc. In some cases, isomorphs are well described by the equation $\rho^{\gamma}/T=$ Const. with a constant $\gamma$ Schrøder _et al._ (2009), which as mentioned accounts for density scaling as discussed in most experimental contexts Roland _et al._ (2005).
Isomorph theory, however, does not require $\gamma$ to be constant throughout the thermodynamic phase diagram, and $\gamma$ indeed does vary in most simulations Schrøder _et al._ (2011); Ingebrigtsen _et al._ (2012b); Bacher _et al._ (2018); Heyes _et al._ (2019). The general isomorph-theory definition of the density-scaling exponent $\gamma$ at a given state point Gnan _et al._ (2009); Dyre (2018) is $\gamma\,\equiv\,\left(\frac{\partial\ln T}{\partial\ln\rho}\right)_{{S}_{\rm ex}}\,=\,\frac{\langle\Delta U\Delta W\rangle}{\langle(\Delta U)^{2}\rangle}\,.$ (2) The second equality gives the statistical-mechanical expression of $\gamma$ in terms of the constant-volume canonical-ensemble fluctuations of potential energy and virial. The question whether experimental density-scaling exponents are strictly constant throughout the phase diagram has recently come into focus Sanz _et al._ (2019); Casalini and Ransom (2019). In simulations, isomorphs are in many cases described by the following equation Alba-Simionesco _et al._ (2004); Bøhling _et al._ (2012); Ingebrigtsen _et al._ (2012b); Dyre (2014) $\frac{h(\rho)}{T}\,=\,\textrm{Const.}$ (3) in which $h(\rho)$ is a function of the density. For the Lennard-Jones (LJ) system, for instance, one has $h(\rho)\propto(\gamma_{0}/2-1)(\rho/\rho_{0})^{4}-(\gamma_{0}/2-2)(\rho/\rho_{0})^{2}$ in which $\gamma_{0}$ is the density-scaling exponent at a reference state point of density $\rho_{0}$ Bøhling _et al._ (2012); Ingebrigtsen _et al._ (2012b). For isomorphs given by Eq. (3), Eq. (2) implies $\gamma\,=\,\frac{d\ln h(\rho)}{d\ln\rho}\,.$ (4) We see that unless $h(\rho)$ is a power-law function, the density-scaling exponent depends on the density, though not on the temperature. More generally, $\gamma$ also depends on the temperature Bacher _et al._ (2018). 
This is the case, for instance, for the LJ system at very high temperatures: for $T\to\infty$ at a fixed density, the LJ system is dominated by the repulsive $r^{-12}$ term of the pair potential, implying that $\gamma$ approaches $12/3=4$ in this limit and that Eq. (3) cannot apply. A likely reason that many experiments are well described by a constant $\gamma$ is the fact that density often does not vary much. As shown by Casalini and coworkers Casalini and Ransom (2019); Ransom _et al._ (2019), when extreme pressure is applied, the density-scaling exponent is no longer constant. Although it is now clear that $\gamma$ is not a material constant Sanz _et al._ (2019); Casalini and Ransom (2019), its variation is as mentioned often insignificant in experiments. This paper gives an example in which $\gamma$ varies dramatically. We present a study of the noted Weeks-Chandler-Andersen (WCA) system Weeks _et al._ (1971); Chandler _et al._ (1983) that 50 years ago introduced the idea of a cutoff at the potential-energy minimum of the LJ system de Kuijper _et al._ (1990); Bishop _et al._ (1999); Ben-Amotz and Stell (2004); Nasrabad (2008); Ahmed and Sadus (2009); Benjamin and Horbach (2015). This idea is still very popular and used in various contexts Atreyee and Wales (2020); Dawass _et al._ (2020); Gußmann _et al._ (2020); Mirzaeinia and Feyzi (2020); Nogueira _et al._ (2020); Tong and Tanaka (2020); Ueda and Morita (2020). We show below that the WCA system has strong virial potential-energy correlations and thus is R-simple at typical liquid-state densities. We find that $\gamma$ varies by more than two decades in the investigated part of the phase diagram. In comparison, the LJ system has a density-scaling exponent that varies less than 50% throughout the phase diagram. To the best of our knowledge, the $\gamma$ variation of the WCA system is much larger than has so far been reported for any system in simulations or experiments.
For all state points studied, we find that $\gamma$ depends primarily on the temperature. A mean-field theory is presented that explains this observation and which accounts well for the low-temperature and low-density behavior of the system. After providing a few technical details in Sec. II, the paper starts by presenting the thermodynamic phase diagram with the state points studied numerically (Sec. III). The paper’s main focus is on three isomorphs, numbered 1-3. Each of these is associated with an isotherm and an isochore, the purpose of which is to put into perspective the isomorph variation of structure and dynamics by comparing to what happens when a similar density/temperature variation is studied, keeping the other variable constant. In Sec. III we also give data for the virial potential-energy correlation coefficient $R$ and the density-scaling exponent $\gamma$, demonstrating that all state points studied have strong correlations ($R>0.9$) while $\gamma$ varies from about 5 to above 500. A mean-field theory is developed in Sec. IV predicting that $R$ and $\gamma$ both depend primarily on the temperature. Section V presents simulations of the structure and dynamics along the isotherms, isochores, and isomorphs. Despite the extreme $\gamma$ variation, which implies that an approximate inverse-power-law description fails entirely, we find good isomorph invariance of the reduced-unit structure and excellent isomorph invariance of the reduced-unit dynamics. Section VI gives a brief discussion. Appendix I details the implementation of the fourth-order Runge-Kutta method for tracing out isomorphs and compares its predictions to those of the previously used simple Euler method. Appendix II gives isomorph state-point details. ## II Model and simulation details Liquid model systems are often defined in terms of a pair potential $v(r)$.
If $r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|$ is the distance between particles $i$ and $j$, the potential energy $U$ as a function of all particle coordinates $\mathbf{R}\equiv(\mathbf{r}_{1},\mathbf{r}_{2},..,\mathbf{r}_{N})$ is given by $U(\mathbf{R})\,=\,\sum_{i<j}v(r_{ij})\,.$ (5) We study in this paper the single-component Weeks-Chandler-Andersen (WCA) system Weeks _et al._ (1971), which cuts the standard LJ pair potential at its minimum and subsequently shifts the potential by adding a constant such that the minimum is lifted to zero Weeks _et al._ (1971); Hansen and McDonald (2013). The result is the purely repulsive pair potential given by $\displaystyle v(r)=\begin{cases}4\varepsilon\,\left[(r/\sigma)^{-12}-(r/\sigma)^{-6}\right]+\varepsilon\,\,&\,\,(r<2^{1/6}\sigma)\\\ 0\,\,&\,\,(r>2^{1/6}\sigma)\end{cases}\,.$ (6) Like the LJ pair potential, $v(r)$ involves two parameters: $\sigma$ that reflects the particle radius and $\varepsilon$ that reflects the energy depth of the LJ potential well at its minimum at $r=2^{1/6}\sigma$. The WCA system was simulated by Molecular Dynamics (MD) in the canonical ($NVT$) ensemble using the Nosé-Hoover thermostat Allen and Tildesley (1987). The simulated system consisted of 4000 particles in a cubic box with periodic boundaries. The simulations were performed using the open-source Roskilde University Molecular Dynamics software (RUMD v3.5) that runs on GPUs (graphics processing units) Bailey _et al._ (2017) (http://rumd.org). For updating the system state, the leap-frog algorithm was employed with reduced-unit time step 0.0025. At each state point, a simulation first ran for 25 million time steps for equilibration. This was followed by 50 million time steps for the production run.
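As a cross-check for an implementation, the WCA pair potential of Eq. (6) can be sketched in a few lines; the function name is ours, and the shift by $\varepsilon$ makes $v(r)$ vanish continuously at the cutoff $r_c=2^{1/6}\sigma$:

```python
import numpy as np

def wca_potential(r, sigma=1.0, eps=1.0):
    """WCA pair potential of Eq. (6): the LJ potential truncated at its
    minimum r_c = 2**(1/6)*sigma and shifted so that v(r_c) = 0."""
    r = np.asarray(r, dtype=float)
    rc = 2 ** (1 / 6) * sigma
    sr6 = (sigma / r) ** 6
    v = 4 * eps * (sr6**2 - sr6) + eps
    return np.where(r < rc, v, 0.0)
```

For example, $v(\sigma)=\varepsilon$ and $v(r)=0$ for all $r\geq 2^{1/6}\sigma$, in agreement with Eq. (6).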
The simulations were conducted in the “reduced” unit system of isomorph theory in which the energy unit is $e_{0}\equiv k_{B}T$, the length unit is $l_{0}\equiv\rho^{-1/3}$, and the time unit is $t_{0}\equiv\rho^{-1/3}\sqrt{m/k_{B}T}$ where $m$ is the particle mass Gnan _et al._ (2009). A few simulations were also carried out in MD units to check for consistency. Using reduced units in a simulation implies that density and temperature are both equal to unity; thus the state point is changed by varying $\sigma$ and $\varepsilon$, i.e., by changing the pair potential. In contrast, performing simulations in MD units implies putting $\sigma=\varepsilon=1$, i.e., fixing the pair potential and varying $\rho$ and $T$ in order to change the state point. The two methods are mathematically equivalent, of course. Simulating in reduced units is convenient because the time step is then automatically adjusted to take into account the thermal velocity. Reduced quantities are generally marked by a tilde, for instance $\tilde{r}\equiv r/l_{0}=\rho^{1/3}r$. These units are used below for all quantities except for the density and the temperature; thermodynamic state points are reported by giving density and temperature in standard MD units, i.e., $\rho$ is given in units of $\sigma^{-3}$ and $T$ in units of $\varepsilon/k_{B}$. Figure 1: (a) The three isomorphs in focus (denoted 1-3) shown as full curves in the temperature-density thermodynamic phase diagram. Each isomorph was generated as described in the text and in Appendix I, starting from the reference state point ($\rho_{0},T_{0}$) with $\rho_{0}=0.84$ and $T_{0}$ equal to 0.6, 1.0, and 2.0, respectively. A fourth isomorph (denoted 0) is marked by the red dashed line and is in the supercooled liquid phase. The horizontal lines are three isotherms and the vertical lines are three isochores, which are studied in order to compare their structure and dynamics variations to those along the isomorphs. 
The freezing and melting lines are shown as yellow and orange lines, respectively de Kuijper _et al._ (1990); Ahmed and Sadus (2009); note that these are parallel to the isomorphs. (b) The four isomorphs shown in a logarithmic temperature-density phase diagram. The slope $\gamma$ (Eq. (2)) increases significantly as the temperature is lowered along an isomorph. The stars mark the lowest simulated temperature and density on each isomorph; these state points are used in Fig. 9 below. ## III Simulated state points Figure 1(a) shows the thermodynamic phase diagram of the WCA system. The yellow and orange lines to the right are the freezing and melting lines de Kuijper _et al._ (1990); Ahmed and Sadus (2009). The blue, green, and purple lines marked 1, 2, and 3, respectively, are the isomorphs of main focus below, while the red dashed line is a fourth isomorph marked 0, which is in the liquid-solid coexistence region. Note that the freezing and melting lines are both approximate isomorphs Gnan _et al._ (2009); Pedersen _et al._ (2016). Each isomorph was traced out starting from a “reference” state point of density 0.84. Isomorphs are often identified by integrating Eq. (2) using the simple first-order Euler integration scheme for density changes of order one percent Gnan _et al._ (2009); Schrøder _et al._ (2011); Ingebrigtsen _et al._ (2012a). The extreme variation of $\gamma$ found for the WCA system, however, means that Euler integration can only be used reliably for very small density changes and a more accurate integration scheme is called for. We used the fourth-order Runge-Kutta integration (denoted by RK4) as detailed in Appendix I, where it is demonstrated that RK4 is 10-100 times more computationally efficient than Euler integration for tracing out isomorphs with a given accuracy. Data for selected state points of the four isomorphs are listed in Appendix II.
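The RK4 tracing of a configurational adiabat amounts to integrating $d\ln T/d\ln\rho=\gamma$ (Eq. (2)). In the actual procedure $\gamma$ is measured from the fluctuation formula by a simulation at each Runge-Kutta stage; in the schematic sketch below (ours, not the Appendix I implementation), `gamma` is any supplied callable:

```python
import math

def trace_isomorph(T0, lnrho0, lnrho1, n_steps, gamma):
    """Integrate d(ln T)/d(ln rho) = gamma along an isomorph with RK4,
    starting from temperature T0 at log-density lnrho0; returns the
    temperature at lnrho1.  gamma(lnrho, lnT) is a user-supplied callable
    standing in for the fluctuation formula of Eq. (2)."""
    h = (lnrho1 - lnrho0) / n_steps
    x, y = lnrho0, math.log(T0)
    for _ in range(n_steps):
        k1 = gamma(x, y)
        k2 = gamma(x + h / 2, y + h * k1 / 2)
        k3 = gamma(x + h / 2, y + h * k2 / 2)
        k4 = gamma(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return math.exp(y)

# Sanity check: for constant gamma the isomorph is T proportional to rho**gamma,
# so doubling the density with gamma = 6 multiplies T by 2**6 = 64.
T1 = trace_isomorph(1.0, 0.0, math.log(2.0), 10, lambda x, y: 6.0)
```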
In order to investigate the degree of isomorph invariance of the reduced-unit structure and dynamics (Sec. V), for each isomorph we also performed simulations along an isotherm and an isochore, limiting all simulations to state points in the equilibrium liquid phase, though. Figure 1(b) shows the isomorphs and the melting and freezing lines in a diagram with logarithmic density and temperature axes. In this diagram the density-scaling exponent $\gamma$ is the slope of the isomorphs (compare Eq. (2)), which increases significantly along each isomorph as the density is lowered. Figure 2: The virial potential-energy Pearson correlation coefficient $R$ (Eq. (1)) for all state points studied (Fig. 1). There are strong correlations everywhere ($R>0.9$). The horizontal dashed-dotted lines mark the low-temperature, low-density limit of the mean-field-theory prediction, $R=\sqrt{8/(3\pi)}=0.921$ (Eq. (20)). (a) $R$ as a function of the density. (b) $R$ as a function of the temperature. We see that the correlations are mainly controlled by the temperature. A configurational adiabat is an isomorph only for state points with strong virial potential-energy correlations, i.e., when $R\gtrsim 0.9$ at the relevant state points, where $R$ is given by Eq. (1). This condition is validated in Fig. 2, which shows $R$ for all state points simulated. Figure 2(a) shows $R$ as a function of the density, while (b) shows $R$ as a function of the temperature. We see that $R$ increases with increasing density and temperature, approaching unity. This reflects the fact that the $(r/\sigma)^{-12}$ term of the pair potential dominates the interactions in these limits and that an inverse-power-law pair potential has $R=1$. An important observation from Fig. 2 is that strong correlations are maintained even at the lowest densities and temperatures studied. Comparing (a) to (b) reveals that $R$ is primarily controlled by the temperature.
This may be understood from a mean-field theory, which assumes that the interactions at low densities are dominated by single-pair interactions (Sec. IV). Figure 3: The density-scaling exponent $\gamma$ defined in Eq. (2) for the state points studied (Fig. 1). Full symbols are isomorph state-point data, half-open circles are isochore and isotherm data. The top row gives data for state points with $\gamma$ below 50, the bottom row gives data for all state points. (a) $\gamma$ as a function of the density; (b) $\gamma$ as a function of the pressure; (c) $\gamma$ as a function of the temperature; (d) $\gamma$ as a function of the density in a log-log plot; (e) $\gamma$ as a function of the pressure in a log-log plot; (f) $\gamma$ as a function of the temperature in a log-log plot. We see that $\gamma$ is primarily a function of the temperature. The dashed line in (f) marks the $T\to 0$ limit of the mean-field theory (Eq. (19)). Figure 3 gives data for the density-scaling exponent $\gamma$ at the state points simulated plotted in different ways, using the same symbols as in Fig. 2. We see that $\gamma$ increases monotonically as either density, pressure, or temperature is lowered, eventually reaching values above 500. Figure 3(a) shows $\gamma$ as a function of the density $\rho$. Clearly, knowledge of $\rho$ is not enough to determine $\gamma$, implying that Eq. (4) does not apply for the WCA system. It has been suggested that $\gamma$ is controlled by the pressure Casalini and Ransom (2020). This works better than the density for collapsing data, but there is still some scatter (Fig. 3(b)). Figure 3(c) plots $\gamma$ as a function of the temperature. We here observe quite a good collapse, and thus conclude that $\gamma$ is primarily controlled by the temperature. Figure 3(d), (e), and (f) show data for all the state points simulated in a logarithmic plot as functions of density, pressure, and temperature, respectively.
## IV Mean-field theory for $R$ and $\gamma$ at low densities This section presents a mean-field theory for estimating the virial potential-energy correlation coefficient $R$ and the density-scaling exponent $\gamma$. Along the lines of Refs. Bacher _et al._, 2019, 2018; Maimbourg and Kurchan, 2016; Maimbourg _et al._, 2020, we assume that the individual pair interactions are statistically independent; this is expected to be a good approximation at relatively low densities. In MD units the truncated WCA pair potential Eq. (6) is $v(r)=4r^{-12}-4r^{-6}+1\textrm{ for }r<r_{c}\equiv 2^{1/6}=1.122\ldots$ (7) and zero otherwise. The virial of the configuration $\mathbf{R}$ is given by $W(\mathbf{R})=\sum_{i>j}^{N}w(r_{ij})$ in which the pair virial is defined as $w(r)\equiv-(r/3)v^{\prime}(r)$ Allen and Tildesley (1987). Although the WCA potential is our primary focus, the arguments given below apply to any truncated purely repulsive potential. In general, the partition function of the configurational degrees of freedom is given by $Z\propto\int_{V^{N}}d\mathbf{r}_{1}...d\mathbf{r}_{N}\exp(-\sum_{i<j}v(r_{ij})/k_{B}T)$ in which $r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|$. At low densities it is reasonable to regard the pair distances as uncorrelated, i.e., to treat the interactions in a mean-field way. This leads to the approximation $Z\propto Z_{s}^{N}$ in which $Z_{s}=\int_{V}d\mathbf{r}\exp(-v_{s}(\mathbf{r})/k_{B}T)$ is the partition function of a single particle moving in the potential $v_{s}(\mathbf{r})$ of all other particles frozen in space. In the low-density limit, none of the frozen particles “overlap” and $Z_{s}$ has consequently two contributions, one for the positions at which $v_{s}(\mathbf{r})=0$ and one for the positions at which the particle interacts with one of the frozen particles. The former is the “free” volume that in the low-density limit approaches the entire volume $V$.
The latter is $N$ times the following integral (putting for simplicity $k_{B}=1$ in this section), $Z_{1}(T)=\int_{0}^{r_{c}}4\pi r^{2}\exp(-v(r)/T)dr\,.$ (8) In terms of $Z_{1}(T)$ the single-particle partition function is thus in the thermodynamic limit given by $Z_{s}(\rho,T)/N=Z_{1}(T)+1/\rho\,.$ (9) Based on the above, any pair-defined quantity $A(r)$ that is zero for $r>r_{c}$ has an expectation value that is computed as (in which $p(r)=4\pi r^{2}\exp(-v(r)/T)$ is the unnormalized probability) $\langle A\rangle=\int_{0}^{r_{c}}A(r)p(r)dr/Z_{s}(\rho,T)\,.$ (10) Based on Eq. (2) and Eq. (1) one gets $\gamma(\rho,T)=\frac{\langle wv\rangle-\langle w\rangle\langle v\rangle}{\langle v^{2}\rangle-\langle v\rangle^{2}}$ (11) and $R(\rho,T)=\frac{\langle wv\rangle-\langle w\rangle\langle v\rangle}{\sqrt{(\langle w^{2}\rangle-\langle w\rangle^{2})(\langle v^{2}\rangle-\langle v\rangle^{2})}}.$ (12) Figure 4 compares the predictions of the mean-field theory to data along isomorphs and isochores. There is good overall agreement. Systematic deviations are visible in (b) and (d), however, which focus on densities that are not low enough to avoid frozen-particle overlap. Figure 4: Comparing the predictions of the mean-field theory for $\gamma$ and $R$ as functions of the temperature (dashed lines) to simulation results. (a) and (c) show results along the three isomorphs. (b) and (d) show results along the three isochores, focusing on higher densities where the mean-field theory is not expected to be accurate. We proceed to discuss the low-density limit in which $Z_{s}\to\infty$. Terms that involve a single expectation value ($\langle v^{2}\rangle$, $\langle w^{2}\rangle$, and $\langle wv\rangle$) scale as $1/Z_{s}$ while terms that involve a multiplication of expectation values, i.e., $\langle v\rangle^{2}$, $\langle w\rangle^{2}$, and $\langle v\rangle\langle w\rangle$, scale as $1/Z_{s}^{2}$.
Consequently, at low densities one can neglect terms that involve multiplications of expectation values Maimbourg and Kurchan (2016); Bacher _et al._ (2019, 2018); Maimbourg _et al._ (2020), leading to $\gamma(T)=\langle wv\rangle/\langle v^{2}\rangle\textrm{ for }\rho\rightarrow 0$ (13) and $R(T)=\langle wv\rangle/\sqrt{\langle w^{2}\rangle\langle v^{2}\rangle}\textrm{ for }\rho\rightarrow 0.$ (14) Note that these averages do not depend on $Z_{s}$ since both numerators and denominators scale as $1/Z_{s}$. This implies that $\gamma$ and $R$ at low densities depend only on $T$, which explains the observation in Fig. 3. Consider now the further limit of low temperature. In that case the probability distribution $p(r)$ concentrates near $r_{c}$ and one can expand around $x\equiv r_{c}-r=0$ by writing the pair potential as $v(x)=k_{1}x+k_{2}x^{2}/2+k_{3}x^{3}/6+\ldots$ (15) The pair virial then becomes Bailey _et al._ (2008b) $w(x)=(r_{c}-x)(k_{1}/3+k_{2}x/3)\,+\,k_{3}r_{c}x^{2}/6+O(x^{3})\,.$ (16) For the WCA potential $k_{1}=0$ and $k_{2}=36\sqrt[3]{4}$. Since $p(x)$ is concentrated near $x=0$, the upper limit of the integral Eq. (10) may be extended to infinity, leading to $\langle A\rangle=\int_{0}^{\infty}A(x)p(x)dx/Z\,\,\,(T\rightarrow 0)$ (17) in which $p(x)=4\pi(r_{c}-x)^{2}\exp(-k_{2}x^{2}/(2T))\,.$ (18) The Gaussian integrals can be evaluated by hand or, e.g., using the SymPy Python library for symbolic mathematics. We find that $\gamma$ and $R$ in the low-density, low-temperature limit are given by $\gamma_{0}=\frac{4r_{c}\sqrt{2k_{2}}}{9\sqrt{\pi T}}=\frac{16}{3\sqrt{\pi T}}\,\,\,(T\rightarrow 0)$ (19) and $R_{0}=\sqrt{\frac{8}{3\pi}}=0.921\ldots\,\,\,(T\rightarrow 0)\,.$ (20) Figure 5 shows the mean-field predictions for $\gamma$ and $R$ at $T=0.01$ plotted as a function of the density. As expected, the theory works well at low densities, even though one is here still not quite at the $T\to 0$ limit marked by the horizontal lines.
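The closed-form limits Eqs. (19)-(20) can be checked against the full low-density averages Eqs. (13)-(14) by direct numerical quadrature at a small but finite temperature. The sketch below (ours; a simple rectangle rule rather than the symbolic SymPy route mentioned in the text) uses the MD-unit potential of Eq. (7), with a grid restricted to the neighborhood of $r_c$ where the Boltzmann weight is concentrated at low $T$:

```python
import numpy as np

rc = 2 ** (1 / 6)
T = 1e-4  # low but finite temperature; Eqs. (19)-(20) are exact for T -> 0

# Fine grid near r_c; the Boltzmann weight is negligible below 0.98*r_c at this T
r = np.linspace(0.98 * rc, rc, 200_001)
v = 4 * r**-12 - 4 * r**-6 + 1          # WCA pair potential, Eq. (7)
w = 16 * r**-12 - 8 * r**-6             # pair virial w(r) = -(r/3) v'(r)
p = 4 * np.pi * r**2 * np.exp(-v / T)   # unnormalized weight of Eq. (10)

def avg(A):
    # Rectangle-rule average; the normalization cancels in Eqs. (13)-(14)
    return np.sum(A * p) / np.sum(p)

gamma = avg(w * v) / avg(v * v)                    # Eq. (13)
R = avg(w * v) / np.sqrt(avg(w * w) * avg(v * v))  # Eq. (14)

gamma0 = 16 / (3 * np.sqrt(np.pi * T))  # Eq. (19): roughly 301 at this T
R0 = np.sqrt(8 / (3 * np.pi))           # Eq. (20): roughly 0.921
```

At this temperature the quadrature values agree with the $T\to 0$ formulas to within a few percent, the residual difference coming from the neglected $k_3$ term of Eq. (15).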
Figure 5: The density dependence of (a) $\gamma$ and (b) $R$ at $T=0.01$. Results are also shown for high-density samples that crystallized during the simulations. ## V Variation of structure and dynamics along isotherms, isochores, and isomorphs Figure 6: Reduced-unit radial distribution functions (RDF) for the three isotherms, isochores, and isomorphs (Fig. 1). The green curves give the lowest temperature/density, the orange curves give the mid temperature/density, and the blue curves give the highest temperature/density. Although the first-peak maximum is not entirely isomorph invariant, in comparison to isotherms and isochores we see a good RDF invariance along the isomorphs. This is the case even though the density variation of the isotherms and the temperature variation of the isochores are somewhat smaller than those of the isomorphs (compare Fig. 1). Isotherms: The green curves give data for $(\rho,T)=$ (0.56, 0.60), (0.82, 2.72), (0.81, 12.1), the orange curves for $(\rho,T)=$ (0.69, 0.60 ), (1.0, 2.72), (1.21, 12.1), and the blue curves for $(\rho,T)=$ (0.84, 0.60), (1.22, 2.72), (1.47, 12.1). Isochores: The green curves give data for $(\rho,T)=$ (0.84, 0.33), (1.00, 0.82), (1.21, 2.44), the orange curves for $(\rho,T)=$ (0.84, 1.99), (1.00, 3.32), (1.21, 6.64), and the blue curves for $(\rho,T)=$ (0.84, 14.72), (1.00, 13.46), (1.21, 14.78). Isomorphs: The green curves give data for the reference state points $(\rho,T)=$ (0.84, 0.60), (0.84, 1.00), (0.84, 2.00), the orange curves for $(\rho,T)=$ (1.06, 2.43), (1.04, 3.32), (0.94, 3.64), and the blue curves for $(\rho,T)=$ (1.57, 14.72), (1.40, 13.46), (1.26, 14.78). The considerable $\gamma$ variation of the WCA system means that it cannot be described approximately by an Euler-homogeneous potential-energy function. This section investigates to which degree the reduced-unit structure and dynamics are, nevertheless, invariant along isomorphs 1-3. 
Isomorph invariance is rarely exact, so in order to put the simulation results into perspective, we also present results for the variation of the reduced-unit structure and dynamics along isotherms and isochores (Fig. 1). As a measure of the structure, we look at the reduced radial distribution function (RDF) as a function of the reduced radial distance. As a measure of the dynamics, we look at the reduced mean-squared displacement (MSD) as a function of the reduced time, as well as at the reduced diffusion coefficient $\tilde{D}$ identified from the long-time MSD. Starting with structure, Fig. 6 shows reduced-unit RDF data along the three isotherms, isochores, and isomorphs. The isotherms span almost the same density range and the isochores span almost the same temperature range as the corresponding isomorphs (restricted to the equilibrium liquid phase, i.e., to data above the freezing line). Along the isomorphs the RDFs show some variation at the first peak maximum (lowest row), but in comparison to the isotherms and isochores, there is excellent overall isomorph invariance of the RDF. For all three isomorphs we find that the peak height increases as the temperature decreases. This is an effect of larger $\gamma$ resulting in a higher first peak, which may be understood as follows. Consider the IPL pair-potential system with $v(r)\propto r^{-n}$, which has $\gamma=n/3$ and perfect isomorphs Heyes _et al._ (2015). The larger $n$ is, the more harshly repulsive are the forces. From the Boltzmann probability of finding two particles at the distance $r$, $\propto\exp(-v(r)/k_{B}T)$, it follows that near encounters between particles become less likely as $n$ increases, thus suppressing the RDF at distances below the first peak. If there is isomorph invariance of the number of particles within the first coordination shell, as $n$ increases some of the RDF must therefore move from small $r$ to larger $r$ within the first coordination shell, resulting in a higher first peak.
This argument has recently been confirmed by the observation that the bridge function, a fundamental quantity of liquid-state theory Hansen and McDonald (2013), is isomorph invariant to a very good approximation Castello _et al._ (2021). A similar increase of the height of the first RDF peak with increasing $\gamma$ has been observed for the EXP system (Fig. 5 in Ref. Bacher _et al._, 2018). In that case it was a much less dramatic effect, however, because the EXP system’s $\gamma$ variation at the investigated state points covered less than a factor of 3 compared to more than a factor of 100 for the WCA state points studied here. Interestingly, for both systems the data imply that $\gamma\to\infty$ as $T\to 0$ along an isomorph, i.e., both systems become more and more hard-sphere like as the temperature is lowered. Figure 7: Reduced-unit radial mean-squared displacement (MSD) plotted against time for the three isotherms, isochores, and isomorphs (Fig. 1). The state points and color codings are the same as in Fig. 6. The dynamics is isomorph invariant to a very good approximation. Proceeding to investigate the dynamics, Fig. 7 shows data for the reduced-unit MSD as a function of the reduced time along the three isotherms, isochores, and isomorphs. There is invariance only along the isomorphs. Along the isotherms, the lowest density (green) gives rise to the largest reduced diffusion coefficient. This is because the mean collision length increases when density is decreased. Along the isochores, the lowest temperature (green) has the smallest reduced diffusion coefficient. This is because the effective hard-sphere radius increases when temperature is decreased, leading to a smaller mean collision length. In MD units, the MSDs are also not invariant along the isotherms or isochores (data not shown); thus the lack of invariance for the isotherms and isochores is not a consequence of the use of reduced units. In regard to the isomorph data, with Fig.
6 in mind we conclude that the non-invariant first-peak heights of the RDFs along the isomorphs have little influence on the dynamics. This is consistent with expectations from liquid-state quasiuniversality, according to which many systems have structure and dynamics similar to those of the EXP generic liquid system, which as mentioned also exhibits varying first-peak heights along its isomorphs Bacher _et al._ (2018). Figure 8: Diffusion coefficients along isomorphs 1-3 in MD units (upper row) and in reduced units (lower row), plotted as functions of the logarithm of the temperature. When given in MD units, the diffusion coefficients vary significantly along the isomorphs, while they are fairly constant in reduced units. This illustrates the importance of using reduced units when checking for isomorph invariance. From end point to end point of the isomorphs, the variation in the reduced diffusion coefficient $\tilde{D}$ is, respectively, 39%, 23%, and 14%. The corresponding numbers are 1000%, 880%, and 549% along the isochores, and 214%, 893%, and 305% along the isotherms. The reduced diffusion coefficient $\tilde{D}\equiv\rho^{1/3}\sqrt{m/k_{B}T}\,D$ is extracted from the data in Fig. 7 by making use of the fact that the long-time reduced MSD is $6\tilde{D}\tilde{t}$. Figure 8 shows how both $D$ and $\tilde{D}$ vary along the three isomorphs. The upper figures demonstrate a large variation in $D$ along each isomorph. The lower figures show $\tilde{D}$, which is rigorously invariant for a system with perfect isomorphs ($R=1$). This is not the case for the WCA system, but the variation is below 40% for all three isomorphs in situations where the temperature varies by more than four orders of magnitude. Thus the reduced diffusion coefficient is isomorph invariant to a good approximation.
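The extraction of $\tilde{D}$ described above can be sketched as follows (a minimal illustration with synthetic diffusive data; the function name and the fitting window are ours):

```python
import numpy as np

def reduced_diffusion(t, msd, rho, T, m=1.0, kB=1.0):
    """Reduced diffusion coefficient D~ = rho**(1/3) * sqrt(m/(kB*T)) * D,
    with D estimated from the long-time MD-unit MSD via MSD = 6*D*t
    (least-squares slope over the supplied long-time window)."""
    D = np.polyfit(t, msd, 1)[0] / 6.0   # slope of MSD vs t, divided by 6
    return rho ** (1 / 3) * np.sqrt(m / (kB * T)) * D

# Synthetic purely diffusive data with D = 0.25 (illustration only):
t = np.linspace(10.0, 100.0, 50)
msd = 6 * 0.25 * t
Dtilde = reduced_diffusion(t, msd, rho=1.0, T=4.0)  # 0.25 * sqrt(1/4) = 0.125
```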
Figure 9: The reduced diffusion coefficients at the lowest temperature and density for isomorphs 1-3, supplemented by data for isomorph 0, plotted versus the density of the lowest-temperature state point simulated on the isomorph in question. The points are fitted by a cubic spline function (dashed curve), which by construction goes through the random close-packing (rcp) density ($\rho=0.864$) marked by the black star on the y-axis. As rcp is approached, one expects $\tilde{D}\to 0$ because the system jams. This is consistent with our data. The rcp density is calculated as follows. With $r_{c}=2^{1/6}$ one finds $V_{\rm sphere}=\pi r_{c}^{3}/6=0.74048$. The rcp volume fraction is roughly 64%; putting this equal to $\rho V_{\rm sphere}$, one arrives at $\rho=0.864$. Figure 8 suggests that $\tilde{D}$ stabilizes as $T\to 0$, and for each isomorph one can tentatively identify this low-temperature limit. Figure 9 plots estimates of these limiting values obtained at the lowest density simulated on each isomorph. An obvious question is: which density corresponds to $\tilde{D}=0$? At very low temperatures, because $\gamma$ becomes very large, the WCA system behaves increasingly like a system of hard spheres (HS). The disordered HS system has a maximum density corresponding to the random close-packed (rcp) structure at roughly 64% packing fraction. In Fig. 9, the black star at $\tilde{D}=0$ marks the corresponding density. Our data are consistent with a convergence to this point. ## VI Discussion We have studied three isomorphs of the WCA system and showed that along them the density-scaling exponents vary by more than a factor of 100. This extreme variation means that the WCA system cannot be considered an effective IPL system Bailey _et al._ (2008b). In the LJ case, the pair potential may be approximated by the so-called extended IPL (eIPL) pair potential, which is a sum of an IPL term $\sim r^{-18}$, a constant, and a term proportional to $r$ Bailey _et al._ (2008b).
The latter two terms contribute little to the fluctuations of virial and potential energy Bailey _et al._ (2008b), which explains the strong correlations of the LJ system and why $\gamma$ is close to 6 (not to 4, as one might guess from the repulsive $r^{-12}$ term of the potential). The WCA situation is very different. Because the WCA system is purely repulsive, it has no liquid-gas phase transition and no liquid-gas coexistence region. This means that isomorphs may be studied over several orders of magnitude of temperature and, in particular, followed to very low temperatures. Interestingly, even here the strong-correlation property is maintained. At the same time, $\gamma$ increases in an unprecedented fashion. Despite this, the reduced-unit structure and dynamics are both invariant to a good approximation along the isomorphs. The significant difference between the LJ and WCA systems in regard to isomorph properties is also emphasized by the fact that the density-scaling exponent $\gamma$ of the LJ system is primarily a function of the density and well described by Eq. (3). This is explained by the above-mentioned approximate eIPL pair-potential argument Bailey _et al._ (2008b). The finding that $R$ and $\gamma$ of the WCA system are both primarily functions of the temperature is accounted for by a mean-field theory based on the assumption of statistically independent pair interactions. The same feature is observed for the EXP pair-potential system Bacher _et al._ (2018); here, too, both $R$ and $\gamma$ at low densities depend primarily on the temperature. Another situation where this is expected to apply is the repulsive Yukawa pair-potential system at low densities Veldhorst _et al._ (2015); Tolias and Castello (2019). In summary, the WCA system presents a striking case where the density-scaling exponent is very far from being constant throughout the thermodynamic phase diagram Sanz _et al._ (2019); Casalini and Ransom (2019).
Nevertheless, the system is R-simple and has good isomorph invariance of the structure and dynamics. ###### Acknowledgements. This work was supported by the VILLUM Foundation’s Matter grant (16515). ## Appendix I: Using the Runge-Kutta method for tracing out isomorphs efficiently The density-scaling exponent $\gamma$ is the slope of the lines of constant ${S}_{\rm ex}$ in the $(\ln T,\ln\rho)$ plane (Eq. (2)). From Eq. (2) one can compute by numerical integration the lines of constant ${S}_{\rm ex}$, the configurational adiabats, which are isomorphs for any R-simple system. The density-scaling exponents required for the integration are determined from the thermal equilibrium virial potential-energy fluctuations in an $NVT$ simulation (Eq. (2)). In the following we denote the _theoretical slope_ by $f$, i.e., the slope without the unavoidable statistical noise of any MD simulation. Let $(x,y)$ be $(\ln\rho,\ln T)$ (occasionally it is better to choose instead $(x,y)=(\ln T,\ln\rho)$). In this notation, let $\frac{dy}{dx}=f(x,y)$ (21) be the first-order differential equation to be integrated. Several methods have been developed to do this numerically Press _et al._ (2007). The simplest one is Euler’s method: imagine that one has estimated the slope at some point $(x_{i},y_{i})$ by computing $\gamma=f(x_{i},y_{i})$ from the virial potential-energy fluctuations by means of Eq. (2). The point $(x_{i+1},y_{i+1})$ is then calculated from $x_{i+1}=x_{i}+h$, $y_{i+1}=y_{i}+hf(x_{i},y_{i})+O(h^{2})$. (22) Here, $h$ is the size of the numerical integration step along $x$. The truncation error on the estimated $y_{i+1}$ scales as $h^{2}$. The statistical error on the numerical calculation of the slope $f$ scales as $1/\sqrt{\tau}$, where $\tau$ is the simulation time.
Thus, the statistical error on $y_{i+1}$ scales as $h/\sqrt{\tau}$ (rounding errors from the finite machine precision are not relevant for the $h$’s investigated here). The total error thus scales as $h^{2}+ch/\sqrt{\tau}$, in which $c$ is a constant. We are interested, however, in the “global” truncation error, i.e., the accumulated error over some integration length $\Delta x$. Let $N=\Delta x/h$ be the number of steps needed to complete the integration. The total simulation time is $t=N(\tau+\tau_{eq})$, where $\tau_{eq}$ is the time it takes for the system to come into equilibrium when temperature and density are changed. Thus $\tau=t/N-\tau_{eq}$, and with $h=\Delta x/N$ the statistical error on $y$ is $ch/\sqrt{\tau}=c\Delta x/\sqrt{Nt-N^{2}\tau_{eq}}$. The global error from truncation scales as $N$ since it is systematic, while the statistical error scales as $\sqrt{N}$ due to its randomness. Thus, the total global error is proportional to $(\Delta x)^{2}/N+c\Delta x/\sqrt{t-N\tau_{eq}}$. The first term is lowered by making $N$ large, while the second term favors small $N$’s and diverges as $N\rightarrow t/\tau_{eq}$. Thus, since $c$ is in general unknown, the optimal choice of $N$ for a given $t$ and $\Delta x$ is not straightforward to determine. We give below a recipe for the optimal parameter choice.
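The trade-off just described can be explored numerically. The following sketch minimizes the Euler-method error model $(\Delta x)^{2}/N+c\Delta x/\sqrt{t-N\tau_{eq}}$ by brute force; the constant $c$, the time budget $t$, and $\tau_{eq}$ are made-up illustrative values, since $c$ is in general unknown in practice.

```python
import math

def euler_global_error(N, dx, t, tau_eq, c):
    """Model of the total global error for Euler integration of a
    configurational adiabat: a truncation term dx^2/N plus a statistical
    term c*dx/sqrt(t - N*tau_eq).  Valid only for N < t/tau_eq."""
    return dx ** 2 / N + c * dx / math.sqrt(t - N * tau_eq)

def optimal_steps(dx, t, tau_eq, c):
    """Brute-force search for the number of steps N minimizing the model error."""
    n_max = int(t / tau_eq) - 1
    return min(range(1, n_max + 1),
               key=lambda N: euler_global_error(N, dx, t, tau_eq, c))

# Illustrative (made-up) numbers: dx = ln-density span, t = total time budget.
N_opt = optimal_steps(dx=0.4, t=1000.0, tau_eq=10.0, c=1.0)
print(N_opt, euler_global_error(N_opt, 0.4, 1000.0, 10.0, 1.0))
```

The truncation term pushes toward many short steps, the statistical term toward few long ones; the minimum sits in between and shifts with $c$.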
First, however, we show how to reduce the truncation error significantly by adopting a higher-order integration method, the often favored fourth-order Runge-Kutta (RK4) method. For a given point $(x_{i},y_{i})$, if one defines $k_{1}=hf(x_{i},y_{i})$, $k_{2}=hf(x_{i}+h/2,y_{i}+k_{1}/2)$, $k_{3}=hf(x_{i}+h/2,y_{i}+k_{2}/2)$, $k_{4}=hf(x_{i}+h,y_{i}+k_{3})$, (23) the next point $(x_{i+1},y_{i+1})$ is computed as $x_{i+1}=x_{i}+h$, $y_{i+1}=y_{i}+k_{1}/6+k_{2}/3+k_{3}/3+k_{4}/6+O(h^{5})$. (24) While the simple Euler method has a truncation error scaling as $O(h^{2})$, the truncation error of RK4 scales as $O(h^{5})$. This allows for significantly larger steps along $x$ and thus smaller $N$. From the same type of arguments as given above for the Euler method, the global error of the RK4 method scales approximately as $(\Delta x)^{5}/N^{4}+c\Delta x/\sqrt{t-N\tau_{eq}}$, where $c$ is a (new) unknown constant. To compare the Euler and RK4 methods, we use each of them for integrating from the initial state point $(\rho,T)=(0.84,0.694)$ to density 1.25 and back again to the initial density of 0.84, see Fig. 10. This involves a $\gamma$ variation from 6.825 at the initial density to 4.539 at $\rho=1.25$. The difference between the final temperature of the down integration and the initial temperature, denoted by $\Delta T$, provides a measure of the maximum temperature error. Ideally $\Delta T=0$. Since RK4 involves four simulations per step, we compare its accuracy using an $h$ four times larger than for the Euler method, which corresponds to approximately the same wall-clock time for the computation.
With this constraint, RK4 is still about two orders of magnitude more accurate: we find $\Delta T=0.186$ for the Euler algorithm and $\Delta T\cong 0.002$ for RK4. Figure 11 shows estimates of the maximum error $\Delta T$ for several values of $h$. To focus on the truncation error, we performed long-time simulations with $\tau\cong 650$. This analysis demonstrates that a significantly smaller $N$ (larger $h$) is allowed for with RK4. Figure 10: Configurational adiabat of the WCA system traced out in the thermodynamic phase diagram. (a) The Euler method; (b) the RK4 method. The Euler integration uses a log-density step of size $h=0.1$ (steps in density of $e^{0.1}-1\simeq 10$%), while the RK4 uses $h=0.4$, corresponding to a density variation of $e^{0.4}-1\simeq 50$%. The temperature difference $\Delta T$ of the combined forward-backward integration presented here provides a convenient measure of the maximum error of the predicted temperature. We find $\Delta T\cong 0.186$ for the Euler algorithm and $\Delta T\cong 0.002$ for the RK4 algorithm. The solid lines are interpolations using a cubic Hermite spline. Figure 11: (a) The temperature difference $\Delta T$ of the forward-backward integration in Fig. 10, for different step sizes $h$. The blue dots show results for Euler integration and the orange dots show results for RK4 integration. The temperature difference measures the maximum error in the integration interval $0.84\leq\rho\leq 1.30$. The RK4 is significantly more accurate than the Euler algorithm, which allows for larger $h$ steps. The dashed lines indicate the expected scaling of the global error from truncation; deviations stem from statistical errors on the estimated slopes (slopes are evaluated using simulation lengths of $\tau=655$). The arrow connects Euler and RK4 calculations with approximately the same computational cost (see Fig. 10). (b) Same analysis for the integration interval $0.58\leq\rho\leq 0.84$.
Since the RK4 algorithm allows for large $h$, it can be necessary to interpolate in order to identify additional state points on the isomorph. The solid lines in Fig. 10 show such interpolations using a cubic Hermite spline: define $x_{\phi}$ as a point between the two adjacent points $x_{i}$ and $x_{i+1}$, i.e., let $x_{i}\leq x_{\phi}<x_{i+1}$ where $x_{\phi}=x_{i}+\phi[x_{i+1}-x_{i}]$ and $0\leq\phi\leq 1$. The interpolated $y_{\phi}$ is given by a third-degree polynomial, written as $y_{\phi}=y_{i}+[y_{i+1}-y_{i}][a{\phi}^{3}+b{\phi}^{2}+c{\phi}]$. For simplicity we introduce the notation $y_{\phi}^{\prime}=[y_{\phi}-y_{i}]/[y_{i+1}-y_{i}]$ and write the polynomial as $y_{\phi}^{\prime}=a{\phi}^{3}+b{\phi}^{2}+c{\phi}$. The coefficients yielding a smooth first derivative are $a=f_{i}^{\prime}+f_{i+1}^{\prime}-2$, $b=3-2f_{i}^{\prime}-f_{i+1}^{\prime}$, and $c=f_{i}^{\prime}$, in which $f_{i}^{\prime}=f_{i}({x_{i+1}-x_{i}})/({y_{i+1}-y_{i}})$ and $f_{i+1}^{\prime}=f_{i+1}({x_{i+1}-x_{i}})/({y_{i+1}-y_{i}})$ are “reduced” slopes at the start and end points, respectively. The $f^{\prime}$ slopes are given by the known $\gamma$’s along the configurational adiabat; thus no extra simulations are needed to evaluate the interpolation. We investigated the local error by comparing a full $h$ step to two half steps of size $h/2$. The small black dot in the middle of Fig. 10(b) shows the result of two such half steps. The truncation error for the half-step approach is then of sixth order Press _et al._ (2007), one order higher than RK4 (the price is that one must perform twice as many simulations for each integration step). The triangles in Fig. 12 show the resulting $T_{i+1}$ starting from the reference state point $(\rho,T)=(0.84,0.694)$, using a full step with $h=0.4$ and varying $\tau$’s. For comparison, the dashed line results from long-time simulations using the half-step algorithm.
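The cubic Hermite construction above uses only quantities already available from the integration (the points and their $\gamma$ slopes), so it costs no extra simulations. A minimal sketch, in which the function name and the $y=x^{2}$ check are illustrative assumptions:

```python
def hermite_interpolate(x_i, x_ip1, y_i, y_ip1, f_i, f_ip1, phi):
    """Cubic Hermite interpolation between two adjacent adiabat points
    (x_i, y_i) and (x_{i+1}, y_{i+1}) with known slopes f_i and f_{i+1}
    (the gammas already computed during the integration).
    phi in [0, 1] parametrizes the interval."""
    dx, dy = x_ip1 - x_i, y_ip1 - y_i
    fpi = f_i * dx / dy        # "reduced" slope at the start point
    fpip1 = f_ip1 * dx / dy    # "reduced" slope at the end point
    a = fpi + fpip1 - 2
    b = 3 - 2 * fpi - fpip1
    c = fpi
    return y_i + dy * (a * phi ** 3 + b * phi ** 2 + c * phi)

# Check against y = x^2 on [0, 1]: the endpoint slopes dy/dx are 0 and 2,
# and the cubic Hermite form reproduces this parabola exactly.
mid = hermite_interpolate(0.0, 1.0, 0.0, 1.0, 0.0, 2.0, 0.5)
print(mid)  # 0.25
```

Note that the endpoints are reproduced exactly ($\phi=0$ gives $y_{i}$, $\phi=1$ gives $y_{i+1}$), as is required of an interpolant.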
The distance from the triangles to the dashed line provides an estimate of the total error. For short simulation times (small $\tau$’s) the statistical error dominates, as seen from the scatter. The truncation error dominates at long simulation times, as seen from the triangles’ systematic deviation from the dashed line. For efficient calculation we suggest choosing $h$ and $\tau$ such that the statistical and truncation errors are of the same order of magnitude. The red $\times$ in Fig. 12 indicates the simulation time $\tau$ used for the figures in the paper. Figure 12: The difference in temperature between using a full step of $h=0.4$ and two half steps of $h=0.2$ when integrating from $\rho=0.84$ up to $\rho=1.25$, plotted against the simulation time per slope evaluation. If the desired $h$ is changed, the simulation time changes accordingly. The error bar indicates the “bad statistics with few blocks” mentioned in the text, computed from Eq. (28) in Ref. Flyvbjerg and Petersen, 1989. The red $\times$ marks the simulation time used in the paper. Figure 13: Estimate of the statistical error on $\gamma$ from the blocking method. The analysis indicates that $N_{B}=128$ is a good choice for the number of blocks. This gives $\text{SE}(\gamma)=0.03$ on the estimated $\gamma=6.82$. The above analysis to arrive at the optimal computation time $\tau$ is tedious and involves computationally expensive simulations. We proceed to suggest an efficient optimization recipe that utilizes the fact that the local statistical error of the slopes can be estimated by dividing a given simulation into blocks. If the simulation time for each block is sufficiently long, the blocks are statistically independent. The 67% confidence standard error is then given by $\text{SE}(\gamma)=\sqrt{\text{VAR}(\gamma)/(N_{B}-1)}$, where $\text{VAR}(\gamma)$ is the variance of the $\gamma$’s using $N_{B}$ blocks Flyvbjerg and Petersen (1989).
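The blocking estimate $\text{SE}(\gamma)=\sqrt{\text{VAR}(\gamma)/(N_{B}-1)}$ is easily sketched. The synthetic uncorrelated data below are purely illustrative; real $\gamma$ time series are correlated, which is exactly why $N_{B}$ must be chosen with care.

```python
import numpy as np

def blocking_standard_error(samples, n_blocks):
    """Estimate the standard error of the mean of a time series by the
    blocking method: split the (equilibrated) series into n_blocks
    contiguous blocks and use the scatter of the block means,
    SE = sqrt(VAR(block means) / (n_blocks - 1))."""
    usable = len(samples) - len(samples) % n_blocks
    block_means = np.asarray(samples[:usable]).reshape(n_blocks, -1).mean(axis=1)
    return np.sqrt(block_means.var() / (n_blocks - 1))

# Illustration with uncorrelated Gaussian noise: the blocking estimate should
# agree with the textbook standard error sigma / sqrt(n).
rng = np.random.default_rng(0)
x = rng.normal(loc=6.8, scale=0.5, size=2 ** 14)
print(blocking_standard_error(x, 128))  # ~ 0.5 / sqrt(2**14) = 0.0039
```

For correlated data one would scan $N_{B}$ as in Fig. 13 and pick a value on the plateau where the estimate no longer depends on the block count.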
If the blocks are independent, $\text{VAR}(\gamma)$ scales as $N_{B}$ and $\text{SE}(\gamma)$ is independent of the number of blocks. If we divide the simulation into few blocks, $\text{VAR}(\gamma)$ may give a bad estimate of the underlying distribution’s theoretical variance. On the other hand, if one divides the simulation into many blocks, the simulation time for each block ($\tau/N_{B}$) may be too brief for the blocks to be independent; in effect, the above formula for $\text{SE}(\gamma)$ then gives an overestimate. The optimal $N_{B}$ is determined by testing several different $N_{B}$, as shown in Fig. 13 (the red $\times$ corresponds to the good choice $N_{B}=128$). The statistical error on $y_{i+1}$ can now be estimated as $\text{SE}(y_{i+1})=\text{SE}(\gamma)h/2$. Here, $2=\sqrt{4}$ enters since the RK4 algorithm includes four independent estimates of slopes (the factor is unity for the Euler algorithm and $\sqrt{8}$ for the double-step RK4). Based on the above analysis, we propose the following recipe for efficient and accurate computation of a configurational adiabat: 1. Make an $NVT$ simulation at a reference state point of temperature $T_{0}$ and density $\rho_{0}$. The simulation time $\tau$ should be sufficiently long that the equilibration time $\tau_{eq}$ can be determined using any standard method (e.g., as the time when the mean-squared displacement has reached the diffusive limit). Use the block method to determine $\text{SE}(\gamma)$, using only the equilibrated part of the trajectory. 2. Choose $h$. Make a full RK4 step and estimate the local statistical error using $\text{SE}(y_{i+1})=h\text{SE}(\gamma)/\sqrt{4}$. Use the RK4 two half-step approach to estimate the total local error. If the total local error is unacceptably large, then either (a) increase $\tau$ if the statistical error is of the same magnitude as the total error, or (b) decrease $h$ if the total error is larger than the statistical error.
If the statistical and total errors are of similar magnitude, $h$ may safely be increased or $\tau$ decreased to make the calculation more efficient. 3. Compute adiabatic state points using the RK4 algorithm with the parameters determined in the above steps. Based on these, a continuous curve can be produced by interpolation using a cubic spline. 4. Estimate the maximum error by integrating backwards. This error estimate quantifies the accuracy of the computed adiabat. As a consistency check of this recipe, Fig. 14 shows the excess entropy from the equation of state (EOS) of the single-component LJ system in Ref. Kolafa and Nezbeda, 1994. The agreement with the configurational adiabat of this EOS is excellent. Figure 14: The excess-entropy values plotted against the densities of the state points on the configurational adiabat traced out for the single-component LJ system starting from the triple point ($\rho=0.84$, $T=0.694$) using RK4 with $h=0.04$. The scale is zoomed in to show the deviations from the average value (black dotted line). ## Appendix II: State point data for isomorphs 0-3 Selected state points of isomorph 0 (Fig. 1).
$\rho\sigma^{3}$ | $k_{B}T/\varepsilon$ | $P\sigma^{3}/\varepsilon$ | $\gamma$ | $R$ | $U/(N\varepsilon)$ | $W/(N\varepsilon)$
---|---|---|---|---|---|---
1.714 | 13.41 | 464.2 | 4.288 | 0.9995 | 50.67 | 257.5
1.636 | 10.98 | 357.7 | 4.329 | 0.9993 | 39.92 | 207.7
1.493 | 7.360 | 211.9 | 4.435 | 0.9988 | 24.52 | 134.6
1.366 | 4.933 | 125.2 | 4.582 | 0.9978 | 14.84 | 86.70
1.254 | 3.307 | 73.86 | 4.787 | 0.9961 | 8.841 | 55.59
1.156 | 2.217 | 43.60 | 5.068 | 0.9936 | 5.190 | 35.49
1.071 | 1.486 | 25.81 | 5.445 | 0.9902 | 3.007 | 22.60
0.9985 | 0.9960 | 15.35 | 5.939 | 0.9860 | 1.721 | 14.38
0.9364 | 0.6677 | 9.200 | 6.571 | 0.9808 | 1.298 | 9.157
0.9091 | 0.5466 | 7.145 | 6.945 | 0.9782 | 0.7332 | 7.313
0.8610 | 0.3664 | 4.341 | 7.835 | 0.9726 | 0.4117 | 4.675
0.8400 | 0.3000 | 3.396 | 8.353 | 0.9698 | 0.3079 | 3.743
0.8207 | 0.2456 | 2.664 | 8.932 | 0.9671 | 0.2300 | 2.100
0.7592 | 0.1104 | 1.034 | 11.94 | 0.9566 | 0.0711 | 1.251
0.7168 | 0.04960 | 0.4159 | 16.42 | 0.9475 | 0.0218 | 0.5306
0.6877 | 0.02230 | 0.1725 | 23.09 | 0.9402 | 0.006653 | 0.2285
0.6680 | 0.009059 | 0.06587 | 33.06 | 0.9349 | 0.001742 | 0.08984
0.6546 | 0.004972 | 0.03509 | 47.83 | 0.9304 | 0.0007115 | 0.04379
0.6456 | 0.002021 | 0.01382 | 69.90 | 0.9277 | 0.0001853 | 0.01938
0.6353 | 0.0004081 | 0.002703 | 152.0 | 0.9243 | 0.00001690 | 0.003846

Selected state points of isomorph 1 (Fig. 1).
$\rho\sigma^{3}$ | $k_{B}T/\varepsilon$ | $P\sigma^{3}/\varepsilon$ | $\gamma$ | $R$ | $U/(N\varepsilon)$ | $W/(N\varepsilon)$
---|---|---|---|---|---|---
1.565 | 14.72 | 340.4 | 4.337 | 0.9993 | 39.48 | 202.9
1.495 | 12.05 | 262.7 | 4.385 | 0.9991 | 31.11 | 163.7
1.366 | 8.078 | 156.1 | 4.506 | 0.9983 | 19.14 | 106.2
1.252 | 5.415 | 92.69 | 4.671 | 0.9971 | 11.61 | 68.63
1.151 | 3.630 | 55.03 | 4.893 | 0.9954 | 6.956 | 44.18
1.024 | 1.992 | 25.27 | 5.366 | 0.9913 | 3.149 | 22.70
0.9527 | 1.335 | 15.12 | 5.799 | 0.9875 | 1.830 | 14.53
0.8916 | 0.8951 | 9.101 | 6.351 | 0.9831 | 1.053 | 9.310
0.8400 | 0.6000 | 5.520 | 7.041 | 0.9782 | 0.6015 | 5.972
0.7961 | 0.4022 | 3.377 | 7.899 | 0.9730 | 0.3412 | 3.840
0.7590 | 0.2696 | 2.085 | 8.955 | 0.9677 | 0.1925 | 2.477
0.7280 | 0.1807 | 1.299 | 10.25 | 0.9624 | 0.1081 | 1.603
0.7019 | 0.1211 | 0.8161 | 11.84 | 0.9574 | 0.06048 | 1.041
0.6708 | 0.06648 | 0.4129 | 14.88 | 0.9504 | 0.02517 | 0.5491
0.6543 | 0.04456 | 0.2646 | 17.48 | 0.9465 | 0.01399 | 0.3598
0.6245 | 0.01639 | 0.08940 | 26.75 | 0.9380 | 0.003198 | 0.1268
0.6060 | 0.006031 | 0.03111 | 42.01 | 0.9319 | 0.0007245 | 0.04532
0.5945 | 0.002219 | 0.01105 | 67.19 | 0.9282 | 0.0001632 | 0.01637
0.5787 | 0.00002724 | 0.0001290 | 579.6 | 0.9222 | 0.0000002251 | 0.0001957

Selected state points of isomorph 2 (Fig. 1).
$\rho\sigma^{3}$ | $k_{B}T/\varepsilon$ | $P\sigma^{3}/\varepsilon$ | $\gamma$ | $R$ | $U/(N\varepsilon)$ | $W/(N\varepsilon)$
---|---|---|---|---|---|---
1.403 | 13.46 | 219.9 | 4.415 | 0.9989 | 27.20 | 143.3
1.341 | 11.02 | 169.8 | 4.474 | 0.9985 | 21.39 | 115.6
1.228 | 7.389 | 101.2 | 4.620 | 0.9976 | 13.12 | 75.04
1.128 | 4.953 | 60.34 | 4.814 | 0.9961 | 7.946 | 48.53
1.040 | 3.320 | 36.01 | 5.070 | 0.9940 | 4.756 | 31.30
1.001 | 2.718 | 27.85 | 5.225 | 0.9926 | 3.664 | 25.11
0.9637 | 2.226 | 21.56 | 5.399 | 0.9919 | 2.814 | 20.14
0.8972 | 1.492 | 12.96 | 5.820 | 0.9876 | 1.648 | 12.95
0.8675 | 1.221 | 10.07 | 6.071 | 0.9856 | 1.256 | 10.39
0.8400 | 1.000 | 7.837 | 6.350 | 0.9834 | 0.9557 | 8.330
0.8146 | 0.8187 | 6.110 | 6.663 | 0.9811 | 0.7256 | 6.683
0.7494 | 0.4493 | 2.930 | 7.828 | 0.9737 | 0.3141 | 3.461
0.6987 | 0.2466 | 1.432 | 9.411 | 0.9659 | 0.1342 | 1.803
0.6295 | 0.07427 | 0.3613 | 14.45 | 0.9515 | 0.02378 | 0.4998
0.5945 | 0.02734 | 0.1204 | 21.64 | 0.9420 | 0.005505 | 0.1752
0.5694 | 0.008230 | 0.03360 | 36.68 | 0.9337 | 0.0009355 | 0.05084
0.5591 | 0.003698 | 0.01464 | 53.01 | 0.9300 | 0.0002851 | 0.02248
0.5507 | 0.001360 | 0.005242 | 85.02 | 0.9268 | 0.00006423 | 0.008158
0.5436 | 0.0002747 | 0.001034 | 185.1 | 0.9238 | 0.000005878 | 0.001628

Selected state points of isomorph 3 (Fig. 1).
$\rho\sigma^{3}$ | $k_{B}T/\varepsilon$ | $P\sigma^{3}/\varepsilon$ | $\gamma$ | $R$ | $U/(N\varepsilon)$ | $W/(N\varepsilon)$
---|---|---|---|---|---|---
1.261 | 14.79 | 160.7 | 4.468 | 0.9986 | 21.25 | 112.7
1.206 | 12.10 | 124.4 | 4.531 | 0.9982 | 91.04 | 16.73
1.106 | 8.110 | 74.47 | 4.687 | 0.9971 | 10.30 | 59.25
1.060 | 6.640 | 57.64 | 4.781 | 0.996 | 8.044 | 47.75
0.9766 | 4.451 | 34.57 | 5.011 | 0.9946 | 4.870 | 30.95
0.9389 | 3.644 | 26.80 | 5.148 | 0.9934 | 3.774 | 24.90
0.9036 | 2.984 | 20.79 | 5.304 | 0.9922 | 2.917 | 20.03
0.8400 | 2.000 | 12.55 | 5.675 | 0.9890 | 1.730 | 12.94
0.8114 | 1.638 | 9.771 | 5.892 | 0.9873 | 1.327 | 10.41
0.7603 | 1.098 | 5.947 | 6.410 | 0.9833 | 0.7760 | 6.725
0.6787 | 0.4932 | 2.248 | 7.836 | 0.9741 | 0.2592 | 2.820
0.6196 | 0.2216 | 0.8762 | 9.977 | 0.9641 | 0.08440 | 1.192
0.5776 | 0.0996 | 0.3520 | 13.15 | 0.9548 | 0.02690 | 0.5098
0.5481 | 0.04474 | 0.1453 | 17.89 | 0.9466 | 0.008466 | 0.2204
0.5276 | 0.02010 | 0.06135 | 24.92 | 0.9395 | 0.002630 | 0.09617
0.5135 | 0.009033 | 0.02636 | 35.42 | 0.9347 | 0.0008093 | 0.04231
0.5038 | 0.004059 | 0.01148 | 50.9571 | 0.9305 | 0.0002475 | 0.01873
0.4973 | 0.001824 | 0.005048 | 74.1566 | 0.9278 | 0.00007535 | 0.008328
0.4920 | 0.0006709 | 0.001824 | 119.9858 | 0.9254 | 0.00001656 | 0.003037

## References

* Alba-Simionesco _et al._ (2004) C. Alba-Simionesco, A. Cailliaux, A. Alegria, and G. Tarjus, “Scaling out the density dependence of the alpha relaxation in glass-forming polymers,” Europhys. Lett. 68, 58–64 (2004).
* Roland _et al._ (2005) C. M. Roland, S. Hensel-Bielowka, M. Paluch, and R. Casalini, “Supercooled dynamics of glass-forming liquids and polymers under hydrostatic pressure,” Rep. Prog. Phys. 68, 1405–1478 (2005).
* Lopez _et al._ (2012) E. R. Lopez, A. S. Pensado, J. Fernandez, and K. R. Harris, “On the density scaling of pVT data and transport properties for molecular and ionic liquids,” J. Chem. Phys. 136, 214502 (2012).
* Adrjanowicz _et al._ (2016) K.
Adrjanowicz, M. Paluch, and J. Pionteck, “Isochronal superposition and density scaling of the intermolecular dynamics in glass-forming liquids with varying hydrogen bonding propensity,” RSC Adv. 6, 49370 (2016).
* Gundermann _et al._ (2011) D. Gundermann, U. R. Pedersen, T. Hecksher, N. P. Bailey, B. Jakobsen, T. Christensen, N. B. Olsen, T. B. Schrøder, D. Fragiadakis, R. Casalini, C. M. Roland, J. C. Dyre, and K. Niss, “Predicting the density-scaling exponent of a glass-forming liquid from Prigogine–Defay ratio measurements,” Nat. Phys. 7, 816–821 (2011).
* Kivelson _et al._ (1996) D. Kivelson, G. Tarjus, X. Zhao, and S. A. Kivelson, “Fitting of viscosity: Distinguishing the temperature dependences predicted by various models of supercooled liquids,” Phys. Rev. E 53, 751–758 (1996).
* Gnan _et al._ (2009) N. Gnan, T. B. Schrøder, U. R. Pedersen, N. P. Bailey, and J. C. Dyre, “Pressure-energy correlations in liquids. IV. “Isomorphs” in liquid phase diagrams,” J. Chem. Phys. 131, 234504 (2009).
* Dyre (2014) J. C. Dyre, “Hidden scale invariance in condensed matter,” J. Phys. Chem. B 118, 10007–10024 (2014).
* Rosenfeld (1977) Y. Rosenfeld, “Relation between the transport coefficients and the internal entropy of simple systems,” Phys. Rev. A 15, 2545–2549 (1977).
* Dyre (2018) J. C. Dyre, “Perspective: Excess-entropy scaling,” J. Chem. Phys. 149, 210901 (2018).
* Schrøder and Dyre (2014) T. B. Schrøder and J. C. Dyre, “Simplicity of condensed matter at its core: Generic definition of a Roskilde-simple system,” J. Chem. Phys. 141, 204502 (2014).
* Pedersen _et al._ (2008) U. R. Pedersen, N. P. Bailey, T. B. Schrøder, and J. C. Dyre, “Strong pressure-energy correlations in van der Waals liquids,” Phys. Rev. Lett. 100, 015701 (2008).
* Bailey _et al._ (2008a) N. P. Bailey, U. R. Pedersen, N. Gnan, T. B. Schrøder, and J. C. Dyre, “Pressure-energy correlations in liquids. I. Results from computer simulations,” J. Chem. Phys. 129, 184507 (2008a).
* Heyes and Branka (2007) D. M. Heyes and A. C. Branka, “Physical properties of soft repulsive particle fluids,” Phys. Chem. Chem. Phys. 9, 5570–5575 (2007).
* Bailey _et al._ (2008b) N. P. Bailey, U. R. Pedersen, N. Gnan, T. B. Schrøder, and J. C. Dyre, “Pressure-energy correlations in liquids. II. Analysis and consequences,” J. Chem. Phys. 129, 184508 (2008b).
* Schrøder _et al._ (2011) T. B. Schrøder, N. Gnan, U. R. Pedersen, N. P. Bailey, and J. C. Dyre, “Pressure-energy correlations in liquids. V. Isomorphs in generalized Lennard-Jones systems,” J. Chem. Phys. 134, 164505 (2011).
* Yoon _et al._ (2019) T.-J. Yoon, M. Y. Ha, E. A. Lazar, W. B. Lee, and Y.-W. Lee, “Topological extension of the isomorph theory based on the Shannon entropy,” Phys. Rev. E 100, 012118 (2019).
* Bacher _et al._ (2019) A. K. Bacher, T. B. Schrøder, and J. C. Dyre, “The EXP pair-potential system. I. Fluid phase isotherms, isochores, and quasiuniversality,” J. Chem. Phys. 149, 114501 (2019).
* Bacher _et al._ (2020) A. K. Bacher, U. R. Pedersen, T. B. Schrøder, and J. C. Dyre, “The EXP pair-potential system. IV. Isotherms, isochores, and isomorphs in the two crystalline phases,” J. Chem. Phys. 152, 094505 (2020).
* Ingebrigtsen _et al._ (2012a) T. S. Ingebrigtsen, T. B. Schrøder, and J. C. Dyre, “Isomorphs in model molecular liquids,” J. Phys. Chem. B 116, 1018–1034 (2012a).
* Fragiadakis and Roland (2019) D. Fragiadakis and C. M. Roland, “Intermolecular distance and density scaling of dynamics in molecular liquids,” J. Chem. Phys. 150, 204501 (2019).
* Koperwas _et al._ (2020) K. Koperwas, A. Grzybowski, and M. Paluch, “Virial–potential-energy correlation and its relation to density scaling for quasireal model systems,” Phys. Rev. E 102, 062140 (2020).
* Ingebrigtsen and Tanaka (2015) T. S. Ingebrigtsen and H. Tanaka, “Effect of size polydispersity on the nature of Lennard-Jones liquids,” J. Phys. Chem. B 119, 11052–11062 (2015).
* Albrechtsen _et al._ (2014) D. E. Albrechtsen, A. E.
Olsen, U. R. Pedersen, T. B. Schrøder, and J. C. Dyre, “Isomorph invariance of the structure and dynamics of classical crystals,” Phys. Rev. B 90, 094106 (2014).
* Ingebrigtsen _et al._ (2013) T. S. Ingebrigtsen, J. R. Errington, T. M. Truskett, and J. C. Dyre, “Predicting how nanoconfinement changes the relaxation time of a supercooled liquid,” Phys. Rev. Lett. 111, 235901 (2013).
* Veldhorst _et al._ (2014) A. A. Veldhorst, J. C. Dyre, and T. B. Schrøder, “Scaling of the dynamics of flexible Lennard-Jones chains,” J. Chem. Phys. 141, 054904 (2014).
* Hummel _et al._ (2015) F. Hummel, G. Kresse, J. C. Dyre, and U. R. Pedersen, “Hidden scale invariance of metals,” Phys. Rev. B 92, 174116 (2015).
* Friedeheim _et al._ (2019) L. Friedeheim, J. C. Dyre, and N. P. Bailey, “Hidden scale invariance at high pressures in gold and five other face-centered-cubic metal crystals,” Phys. Rev. E 99, 022142 (2019).
* Veldhorst _et al._ (2015) A. A. Veldhorst, T. B. Schrøder, and J. C. Dyre, “Invariants in the Yukawa system’s thermodynamic phase diagram,” Phys. Plasmas 22, 073705 (2015).
* Tolias and Castello (2019) P. Tolias and F. L. Castello, “Isomorph-based empirically modified hypernetted-chain approach for strongly coupled Yukawa one-component plasmas,” Phys. Plasmas 26, 043703 (2019).
* Schrøder _et al._ (2009) T. B. Schrøder, U. R. Pedersen, N. P. Bailey, S. Toxvaerd, and J. C. Dyre, “Hidden scale invariance in molecular van der Waals liquids: A simulation study,” Phys. Rev. E 80, 041502 (2009).
* Ingebrigtsen _et al._ (2012b) T. S. Ingebrigtsen, L. Bøhling, T. B. Schrøder, and J. C. Dyre, “Thermodynamics of condensed matter with strong pressure-energy correlations,” J. Chem. Phys. 136, 061102 (2012b).
* Bacher _et al._ (2018) A. K. Bacher, T. B. Schrøder, and J. C. Dyre, “The EXP pair-potential system. II. Fluid phase isomorphs,” J. Chem. Phys. 149, 114502 (2018).
* Heyes _et al._ (2019) D. M. Heyes, D. Dini, L. Costigliola, and J. C.
Dyre, “Transport coefficients of the Lennard-Jones fluid close to the freezing line,” J. Chem. Phys. 151, 204502 (2019).
* Sanz _et al._ (2019) A. Sanz, T. Hecksher, H. W. Hansen, J. C. Dyre, K. Niss, and U. R. Pedersen, “Experimental evidence for a state-point-dependent density-scaling exponent of liquid dynamics,” Phys. Rev. Lett. 122, 055501 (2019).
* Casalini and Ransom (2019) R. Casalini and T. C. Ransom, “On the experimental determination of the repulsive component of the potential from high pressure measurements: What is special about twelve?” J. Chem. Phys. 151, 194504 (2019).
* Bøhling _et al._ (2012) L. Bøhling, T. S. Ingebrigtsen, A. Grzybowski, M. Paluch, J. C. Dyre, and T. B. Schrøder, “Scaling of viscous dynamics in simple liquids: Theory, simulation and experiment,” New J. Phys. 14, 113035 (2012).
* Ransom _et al._ (2019) T. C. Ransom, R. Casalini, D. Fragiadakis, and C. M. Roland, “The complex behavior of the ‘simplest’ liquid: Breakdown of density scaling in tetramethyl tetraphenyl trisiloxane,” J. Chem. Phys. 151, 174501 (2019).
* Weeks _et al._ (1971) J. D. Weeks, D. Chandler, and H. C. Andersen, “Role of repulsive forces in determining the equilibrium structure of simple liquids,” J. Chem. Phys. 54, 5237–5247 (1971).
* Chandler _et al._ (1983) D. Chandler, J. D. Weeks, and H. C. Andersen, “Van der Waals picture of liquids, solids, and phase transformations,” Science 220, 787–794 (1983).
* de Kuijper _et al._ (1990) A. de Kuijper, J. A. Schouten, and J. P. J. Michels, “The melting line of the Weeks–Chandler–Anderson Lennard-Jones reference system,” J. Chem. Phys. 93, 3515–3519 (1990).
* Bishop _et al._ (1999) M. Bishop, A. Masters, and J. H. R. Clarke, “Equation of state of hard and Weeks–Chandler–Andersen hyperspheres in four and five dimensions,” J. Chem. Phys. 110, 11449–11453 (1999).
* Ben-Amotz and Stell (2004) D. Ben-Amotz and G.
Stell, “Reformulaton of Weeks–Chandler–Andersen perturbation theory directly in terms of a hard-sphere reference system,” J. Phys. Chem. B 108, 6877–6882 (2004). * Nasrabad (2008) A. E. Nasrabad, “Thermodynamic and transport properties of the Weeks–Chandler–Andersen fluid: Theory and computer simulation,” J. Chem. Phys. 129, 244508 (2008). * Ahmed and Sadus (2009) A. Ahmed and R. J. Sadus, “Phase diagram of the Weeks-Chandler-Andersen potential from very low to high temperatures and pressures,” Phys. Rev. E 80, 061101 (2009). * Benjamin and Horbach (2015) Ronald Benjamin and Jürgen Horbach, “Crystal growth kinetics in Lennard-Jones and Weeks-Chandler-Andersen systems along the solid-liquid coexistence line,” J. Chem. Phys. 143, 014702 (2015). * Atreyee and Wales (2020) B. Atreyee and D. J. Wales, “Fragility and correlated dynamics in supercooled liquids,” J. Chem. Phys. 153, 124501 (2020). * Dawass _et al._ (2020) N. Dawass, P. Krüger, S. K. Schnell, O. A. Moultos, I. G. Economou, T. J. H. Vlugt, and J.-M. Simon, “Kirkwood-Buff integrals using molecular simulation: Estimation of surface effects,” Nanomaterials 10 (2020), 10.3390/nano10040771. * Gußmann _et al._ (2020) F. Gußmann, S. Dietrich, and R. Roth, “Toward a density-functional theory for the Jagla fluid,” Phys. Rev. E 102, 062112 (2020). * Mirzaeinia and Feyzi (2020) A. Mirzaeinia and F. Feyzi, “A perturbed-chain equation of state based on wertheim tpt for the fully flexible lj chains in the fluid and solid phases,” J. Chem. Phys. 152, 134502 (2020). * Nogueira _et al._ (2020) T. P. O. Nogueira, H. O. Frota, F. Piazza, and J. R. Bordin, “Tracer diffusion in crowded solutions of sticky polymers,” Phys. Rev. E 102, 032618 (2020). * Tong and Tanaka (2020) H. Tong and H. Tanaka, “Role of attractive interactions in structure ordering and dynamics of glass-forming liquids,” Phys. Rev. Lett. 124, 225501 (2020). 
* Ueda and Morita (2020) Shun Ueda and Kazuki Morita, “Theoretical calculation of the free energy of mixing of liquid transition-metal alloys using a bond-order potential and thermodynamic perturbation theory,” J. Non-Cryst. Solids 528, 119743 (2020). * Hansen and McDonald (2013) J.-P. Hansen and I. R. McDonald, _Theory of Simple Liquids: With Applications to Soft Matter_ , 4th ed. (Academic, New York, 2013). * Allen and Tildesley (1987) M. P. Allen and D. J. Tildesley, _Computer Simulation of Liquids_ (Oxford Science Publications (Oxford), 1987). * Bailey _et al._ (2017) N. P. Bailey, T. S. Ingebrigtsen, J. S. Hansen, A. A. Veldhorst, L. Bøhling, C. A. Lemarchand, A. E. Olsen, A. K. Bacher, L. Costigliola, U. R. Pedersen, H. Larsen, J. C. Dyre, and T. B. Schrøder, “RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles,” Scipost Phys. 3, 038 (2017). * Pedersen _et al._ (2016) U. R. Pedersen, L. Costigliola, N. P. Bailey, T. B Schrøder, and J. C. Dyre, “Thermodynamics of freezing and melting,” Nat. Commun. 7, 12386 (2016). * Casalini and Ransom (2020) R. Casalini and T. C. Ransom, “On the pressure dependence of the thermodynamical scaling exponent $\gamma$,” Soft Matter 16, 4625–4631 (2020). * Maimbourg and Kurchan (2016) T. Maimbourg and J. Kurchan, “Approximate scale invariance in particle systems: A large-dimensional justification,” EPL 114, 60002 (2016). * Maimbourg _et al._ (2020) T. Maimbourg, J. C. Dyre, and L. Costigliola, “Density scaling of generalized Lennard-Jones fluids in different dimensions,” SciPost Phys. 9, 90 (2020). * Heyes _et al._ (2015) D. M. Heyes, D. Dini, and A. C. Branka, “Scaling of Lennard-Jones liquid elastic moduli, viscoelasticity and other properties along fluid-solid coexistence,” Phys. Status Solidi (b) 252, 1514–1525 (2015). * Castello _et al._ (2021) F. L. Castello, P. Tolias, and J. C. 
Dyre, “Testing the isomorph invariance of the bridge functions of Yukawa one-component plasmas,” J. Chem. Phys. 154, 034501 (2021). * Press _et al._ (2007) W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, _Numerical recipes: the art of scientific computing_ , 3rd ed. (Cambridge University Press, 2007). * Flyvbjerg and Petersen (1989) H. Flyvbjerg and H. G. Petersen, “Error estimates on averages of correlated data,” J. Chem. Phys. 91, 461–466 (1989). * Kolafa and Nezbeda (1994) J. Kolafa and I. Nezbeda, “The Lennard-Jones fluid: An accurate analytic and theoretically-based equation of state,” Fluid Phase Equilib. 100, 1–36 (1994).
# Evolutionary Multi-objective Architecture Search Framework: Application to COVID-19 3D CT Classification Xin He 1, Guohao Ying 2, Jiyong Zhang 3†, Xiaowen Chu 4,1† (†Corresponding authors: <EMAIL_ADDRESS>; jzhang@hdu.edu.cn) 1: Hong Kong Baptist University, Hong Kong, China; 2: University of Southern California, CA, USA; 3: Hangzhou Dianzi University, Hangzhou, China; 4: The Hong Kong University of Science and Technology (Guangzhou), China ###### Abstract The COVID-19 pandemic has threatened global health. Many studies have applied deep convolutional neural networks (CNN) to recognize COVID-19 based on chest 3D computed tomography (CT). Recent works show that no model generalizes well across CT datasets from different countries, and manually designing models for specific datasets requires expertise; thus, neural architecture search (NAS), which aims to search for models automatically, has become an attractive solution. To reduce the search cost on large 3D CT datasets, most NAS-based works use the weight-sharing (WS) strategy to make all models share weights within a supernet; however, WS inevitably incurs search instability, leading to inaccurate model estimation. In this work, we propose an efficient Evolutionary Multi-objective ARchitecture Search (EMARS) framework. We propose a new objective, namely potential, which can help exploit promising models to indirectly reduce the number of models involved in weights training, thus alleviating search instability. We demonstrate that under the objectives of accuracy and potential, EMARS can balance exploitation and exploration, i.e., reducing search time and finding better models. Our searched models are small and perform better than prior works on three public COVID-19 3D CT datasets. 
###### Keywords: COVID-19 · Neural Architecture Search (NAS) · Weight-sharing · Evolutionary Algorithm (EA) · 3D Computed Tomography (CT) ## 1 Introduction The rapid spread of the coronavirus disease 2019 (COVID-19) pandemic has threatened global health. Isolating infected patients is an effective way to block the transmission of the virus. Thus, fast and accurate methods to detect infected patients are crucial. Chest CT is relatively easy to perform and has been proven to be an important complement to nucleic acid tests [7]. However, there is a serious lack of radiologists during the pandemic. Many researchers have applied deep learning (DL) techniques to assist CT diagnosis. For COVID-19 3D CT classification, there are two mainstream CNN-based methods: 1) multiview-based methods [22, 15] use a 2D CNN to extract features for each 2D CT slice and then fuse these features to make predictions; and 2) voxel-based methods [8, 32] feed 3D CNNs with 3D CT scans to make full use of the geometric information. He et al. [9] benchmark a series of hand-crafted 2D and 3D CNNs and demonstrate that 3D CNNs generally outperform 2D CNNs. Some recent works [8, 11] benchmark multiple COVID-19 datasets from different countries and find that no model can maintain an absolute advantage across datasets. Since it is difficult to design models manually for specific datasets, neural architecture search (NAS) [10, 6] has become an attractive solution to discover superior models without human assistance. Reinforcement learning [33, 21], gradient descent (GD) [18], and evolutionary algorithms (EA) [30, 23] are the three mainstream NAS methods. The comparative results of a recent survey [10] show that EA-based NAS can discover better networks than other types of NAS methods. However, the better performance of EA-based NAS comes at the cost of more computing resources, because all searched models must be retrained to compare their performance, e.g., AmoebaNet [23] took 3,150 GPU days to search. 
Thanks to the weight-sharing method [21, 29], any model can be evaluated without retraining, and Yang et al. [30] reduced the search time of EA-based NAS to 0.4 GPU days. NAS was originally proposed for large-scale 2D image tasks. Although some works [9, 8] have extended NAS to search for 3D models on COVID-19 3D datasets, they suffered from the search instability (analyzed in Sec. 3.1) incurred by weight-sharing, which leads to fluctuations in the search process and, in some cases, even worse results than random search. In this work, we propose an efficient Evolutionary Multi-objective ARchitecture Search framework, dubbed EMARS. We summarize our contributions below. 1. We propose a new objective, i.e., potential, which can help exploit promising models and indirectly reduce the number of models involved in weights training, thereby alleviating search instability. 2. We demonstrate that compared to conventional objective settings (e.g., only considering accuracy), EMARS with accuracy, potential, and small-size objectives can trade off between exploitation and exploration, reducing search time by 22% on average and discovering better models. 3. Our searched models are small in size and outperform prior works [9, 8] on three public datasets: CC-CCII [31], MosMed [20], and Covid-CTset [22]. ## 2 Preliminaries In this section, we describe the common basis of weight-sharing neural architecture search (NAS) [29]. NAS is formulated as a bi-level optimization problem: $\begin{array}{cl}\min_{\alpha}&L_{\text{val}}\left(w^{*},\alpha\right)\\ \text{s.t.}&w^{*}=\operatorname{argmin}_{w}L_{\text{train}}(w,\alpha)\end{array}$ (1) where $L_{\text{train}}$ and $L_{\text{val}}$ indicate the training and validation loss, and $w$ and $\alpha$ indicate the weights and architecture of a candidate model. Early NAS methods [33, 23] search and evaluate networks by retraining them from scratch, resulting in huge computational cost. 
To reduce this burden, the weight-sharing strategy [29] was proposed, in which the SuperNet $\mathcal{N}$ contains all possible architectures (subnets) and its weights $\mathcal{W}$ are shared among these subnets. The architecture and weights of each subnet are denoted by $\mathcal{N}(\alpha)$ and $\mathcal{W}(\alpha)$, respectively, where $\alpha$ is the subnet architecture, encoded by one-hot sequences (described in Sec. 3.3). The loss of a subnet is expressed as $L(\alpha)=L(\mathcal{N}(\alpha),\mathcal{W}(\alpha),X,Y)$, where $L,X,Y$ indicate the loss function, input data, and target, respectively, and the gradient of the subnet weights is $\nabla_{\mathcal{W}(\alpha)}=\frac{\partial L(\alpha)}{\partial\mathcal{W}}$. The gradient of the SuperNet weights $\mathcal{W}$ can then be calculated as the average gradient over all subnets, i.e., $\nabla_{\mathcal{W}}=\frac{1}{N}\sum_{i=1}^{N}\nabla_{\mathcal{W}(\alpha_{i})}=\frac{1}{N}\sum_{i=1}^{N}\frac{\partial L(\alpha_{i})}{\partial\mathcal{W}}$, where $N$ is the total number of subnets. Clearly, it is not practical to use all subnets to update the SuperNet weights at every step. Therefore, we use a mini-batch of subnets for training, as detailed in Eq. 2: $\nabla_{\mathcal{W}}\approx\frac{1}{M}\sum_{i=1}^{M}\nabla_{\mathcal{W}(\alpha_{i})}$ (2) where $M$ is the number of subnets sampled in a mini-batch and $M\ll N$. In our experiments, we find that $M=1$ works well, i.e., we can update $\mathcal{W}$ using the gradient from any single sampled subnet for each training batch. ## 3 Methodology ### 3.1 Potential Objective: Alleviating Search Instability By instability, we mean that the same subnet can produce completely different performance at different times during the search process. The instability is caused by the weight-sharing strategy: because the weights of all subnets are coupled, an update to any subnet’s weights is bound to affect (usually negatively) other subnets. 
Therefore, the performance of a subnet at a specific time does not necessarily represent its real performance, but may instead mislead the direction of the evolutionary search (described in Sec. 3.2). To mitigate the search instability caused by weight-sharing, a natural idea is to reduce the number of models involved in weights training (i.e., Eq. 2). For this reason, some works [13, 2] directly reduce the number of models by progressively shrinking the search space based on model performance, but this may eliminate promising models in the early stage of the search. To avoid this problem, we take an indirect approach in which we keep exploring various models in the early stage of the search and then spend more effort on training the promising models in the later stage. In this way, we can indirectly reduce the number of models involved in weights training without deliberately reducing the search space. How, then, do we determine whether a model is promising? Here, we propose a new objective, namely potential, to help find promising models. Specifically, for each sampled model, we maintain and update its historical performances $Z=(E,F)$, where $E=[e_{1},...,e_{m}]^{T}$ is a column vector recording the epochs at which the model is sampled, and $F=[f_{1},...,f_{m}]^{T}$ is a column vector recording the corresponding validation accuracies. Note that $Z$ is updated dynamically during the search, so its size $m$ varies across models. The potential $\mathcal{P}$ of a model is calculated by ordinary least squares (OLS): $\mathcal{P}=(E^{T}E)^{-1}E^{T}F$ (3) To some extent, $E$ alone can also reflect how promising a model is; e.g., if $E$ is densely distributed, the model has outperformed other models in multiple rounds of search and hence gained more chances to be sampled. However, considering only $E$ will exacerbate the Matthew effect, and the search may get trapped in a local optimum. 
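Concretely, Eq. 3 is an ordinary least-squares fit (through the origin) of validation accuracy against sampling epoch. The following NumPy sketch is our own illustration, not the authors' released code:

```python
import numpy as np

def potential(epochs, accuracies):
    """Potential P = (E^T E)^{-1} E^T F (Eq. 3): the through-origin
    OLS coefficient of validation accuracy F against the epochs E at
    which the model was sampled. Larger P means a more promising model."""
    E = np.asarray(epochs, dtype=float).reshape(-1, 1)      # column vector E
    F = np.asarray(accuracies, dtype=float).reshape(-1, 1)  # column vector F
    return (np.linalg.inv(E.T @ E) @ E.T @ F).item()

# For the same sampling epochs, a model whose accuracy is growing gets a
# higher potential than one whose accuracy is declining.
growing = potential([10, 20, 30], [0.5, 0.6, 0.7])
declining = potential([10, 20, 30], [0.7, 0.6, 0.5])
assert growing > declining
```

Since the fit has no intercept, $\mathcal{P}$ also weights later epochs more heavily, which matches the intended emphasis on the recent accuracy trend.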
Our proposed potential solves this problem by considering the coupling between sampling frequency $E$ and validation accuracy $F$, i.e., the growing trend of accuracy rather than the accuracy at a specific time. The larger the $\mathcal{P}$ value, the more promising the model. ### 3.2 Evolutionary Search The search algorithm (see Supplement Alg. 1) starts with a warm-up stage, followed by the evolutionary search stage. In the warm-up, the SuperNet is trained by uniformly sampling subnets, so that all candidate operations are trained equally. After the warm-up, the top-$P$ best-performing subnets form the initial population, i.e., $\mathcal{A}^{(0)}$, which is then evolved for multiple generations. Each generation comprises two sequential processes: 1) weights training, where each individual (i.e., subnet) is selected from the population and trained based on Eq. 2; and 2) architecture search, comprising selection, crossover, and mutation (see Fig. 1). Figure 1: Overview of the search space and search method. Upper-right: MBConv$3\_3$, where C, D, H, W indicate channels, depth, height, and width. Lower-right: An example of exploitation and exploration under different objectives. (best viewed in color) Selection. After weights training, we record multiple objectives for all individuals in the population. We adopt the NSGA-II [4] method to select Pareto-front individuals from the population under the recorded objectives. We compare different combinations of these objectives in Sec. 4.2 and find that searching with potential and accuracy can discover better models with less cost. Crossover & Mutation. The selection produces $K$ Pareto-front individuals, based on which we further generate $P-K$ new individuals. Each new individual is generated by randomly sampling from the SuperNet or by performing crossover and mutation (CM) with certain probabilities. The basic unit of CM is the one-hot sequence representing a candidate operation (see Fig. 1). Exploitation & Exploration. Fig. 
1 (lower-right) shows an example of two important issues in evolutionary algorithm (EA) based search: exploration and exploitation. Exploitation prefers the current optimal solution, which reduces search cost but may lead to a local optimum; exploration is more likely to find the global optimum but consumes more resources. The common view of EA is that crossover and mutation drive exploration, while exploitation is done by selection. However, our experiments in Sec. 4.2 show that setting different objectives in the selection step can also control the evolution direction. Specifically, accuracy and potential steer the evolution process towards exploration and exploitation, respectively, while combining accuracy and potential can balance the two. ### 3.3 Search Space SuperNet. The search space is represented by a SuperNet $\mathcal{N}$ containing all possible subnets. The SuperNet comprises two parts: 1) the searchable part, i.e., $N=6$ layers; and 2) the fixed part, i.e., a stem block, global average pooling [17], and a fully connected layer. The stem block is a standard $3\times 3\times 3$ 3D convolution followed by 3D batch normalization and a ReLU6 activation function [12]. Layer. The $i$-th layer comprises a calibration block and $B_{i}$ searchable blocks. The calibration block is a 3D $1\times 1\times 1$ point-wise convolution that resolves feature dimension mismatch; thus, all subsequent blocks have a stride of 1. The numbers of searchable blocks and the strides of the calibration blocks in the six layers are [4,4,4,4,4,1] and [2,2,2,1,2,1], respectively. The output channels of the stem block and the six layers are 32 and [24,40,80,96,192,320], respectively. Block. Each searchable block is a candidate operation, encoded by a one-hot sequence. 
We adopt eight candidate operations, including a skip-connection operation and seven mobile inverted bottleneck convolutions [24], denoted by MBConv$k\_e$, where $k\_e\in\{3\_3,3\_4,3\_6,5\_3,5\_4,7\_3,7\_4\}$, $k$ is the kernel size of the intermediate depth-wise convolution (DWConv), and $e$ is the expansion ratio between the input channel and the inner channel of the MBConv. ## 4 Experiments ### 4.1 Implementation Details Datasets. For a fair comparison, we use the same three datasets as prior works [9, 8]. CC-CCII [31] has 3,993 CT scans of three classes: novel coronavirus pneumonia (NCP), common pneumonia (CP), and normal case; MosMed [20] has 1,110 scans of NCP and normal classes; Covid-CTset [22] has 526 scans of NCP and normal classes. More details of the datasets can be found in the supplement. Search stage. We use four Nvidia V100 GPUs to search for 100 epochs, of which the warm-up stage takes 10 epochs. During each search epoch, a population of models is trained equally on the training set and evaluated on the validation set. The population size is 20, where 10 Pareto-front models are selected from the population using NSGA-II [4] under multiple objectives (e.g., validation accuracy, potential, and model size), and 10 new models are generated by crossover and mutation with probabilities of 0.3 and 0.2, respectively. To improve search efficiency, we set the input size ($width\times height\times depth$) to $64\times 64\times 16$. We use the Adam optimizer [14] with a weight decay of 3e-4 and an initial learning rate of 0.001. Retraining stage. After the search stage, we combine the training and validation sets and retrain the Pareto-front models on the combined set for 200 epochs. We use the same Adam settings as in the search stage. The 3D input sizes for the CC-CCII, MosMed, and Covid-CTset datasets are $128\times 128\times 32$, $256\times 256\times 40$, and $512\times 512\times 32$, respectively. 
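To make the one-hot block encoding and the crossover/mutation step concrete, here is a small illustrative sketch; the operation list and block counts follow Sec. 3.3, but all function and variable names are ours, not those of the released code, and the CM scheme shown is a simplified uniform variant:

```python
import random

# The eight candidate operations of Sec. 3.3 (skip + seven MBConv variants).
OPS = ["skip", "MBConv3_3", "MBConv3_4", "MBConv3_6",
       "MBConv5_3", "MBConv5_4", "MBConv7_3", "MBConv7_4"]
NUM_BLOCKS = sum([4, 4, 4, 4, 4, 1])  # searchable blocks per layer

def one_hot(op):
    """Encode a candidate operation as a one-hot sequence over OPS."""
    return [1 if o == op else 0 for o in OPS]

def random_subnet():
    """A subnet architecture: one one-hot sequence per searchable block."""
    return [one_hot(random.choice(OPS)) for _ in range(NUM_BLOCKS)]

def crossover(a, b):
    """Uniform crossover: each block's one-hot sequence comes from one parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(arch, p=0.2):
    """With probability p per block, resample the block's operation."""
    return [one_hot(random.choice(OPS)) if random.random() < p else blk
            for blk in arch]

child = mutate(crossover(random_subnet(), random_subnet()))
assert len(child) == NUM_BLOCKS and all(sum(blk) == 1 for blk in child)
```

Because the one-hot sequence is the basic unit of CM, any crossover or mutation result remains a valid architecture (exactly one operation per block).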
Our framework is based on NNI [19] and available at: https://github.com/marsggbo/MICCAI2022-EMARS. ### 4.2 Results and Analysis Figure 2: The model size-aware search results. X and Y axes indicate model size and validation accuracy (Acc). The purple and yellow points indicate the models sampled in the first and second halves of the search stage, respectively. (best viewed in color) Figure 3: The potential (P) aware search results. Differently colored points indicate the models sampled in different epoch periods. The solid and dashed lines in each period indicate the average and 25/75th-percentile accuracy, respectively. (best viewed in color) Model Size-aware Search. Fig. 2 presents model size-aware search results on the CC-CCII dataset. Fig. 2 (a) shows that searching under only validation accuracy (Acc) will explore both extremes of model size, but with no performance gain, while Fig. 2 (b)&(c) show that considering model size in addition to Acc helps find better models in the later stage, indicating that multiple objectives can facilitate the search process. Moreover, compared to Fig. 2 (b), searching under Acc and small model size in Fig. 2 (c) not only reduces search time from 9.31 hours to 8.46 hours but also discovers competitive models. Potential-aware Search. We further conduct three experiments on the CC-CCII dataset to validate the potential objective. Each sub-figure of Fig. 3 divides the search process into four periods based on the search epoch. Each period is shown in a different color and marked with the 25/50/75th-percentile accuracies. Fig. 3 (a) shows that searching under Acc alone tends to explore more models, regardless of whether their performance is good or bad, wasting time on unpromising models (lower-right points). On the contrary, in Fig. 
3 (b), the gap between the 25th and 75th percentiles and the number of sampled models are gradually reduced as the search proceeds, which implies that potential guides the evolution process in the later stage to exploit the promising models already discovered. Although this reduces search time, it yields lower Acc due to being trapped in local optima in the early stage. Fig. 3 (c) shows that searching under potential, Acc, and small size can reduce search time by 19% on average and balance exploitation and exploration. Specifically, the first two periods are dominated by exploration, as a wide accuracy range of models is explored, and models with an accuracy above 0.7 are found faster in the second period. On the other hand, the last two periods focus more on exploitation, as the number of unpromising models is significantly reduced and the 25/50/75th-percentile accuracies improve steadily. Comparison with Prior Works. Table 1 compares our searched models with prior works on four widely used metrics: accuracy, precision, sensitivity, and F1 score. Precision and sensitivity are a pair of negatively correlated metrics, so neither alone can fully describe model performance. The F1 score is the harmonic mean of precision and sensitivity; thus, it is a better metric. As can be seen, our models searched under the APS (accuracy, potential, and small model size) objectives have small sizes and outperform all prior hand-crafted and NAS-based models on the three datasets in terms of accuracy, precision, and F1 score. Moreover, MosMed is an imbalanced dataset, and the models searched without potential (e.g., CovidNet3D-S/L and EMARS-A) overfit to the positive class (i.e., NCP), as they have extremely high sensitivity but low precision. On the contrary, EMARS-P and EMARS-APS are searched with the potential objective, balancing precision and sensitivity well and achieving higher accuracy and F1 scores. 
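Since the F1 score is the harmonic mean of precision and sensitivity, the imbalance effect is easy to verify from Table 1. For example, CovidNet3D-S on MosMed has precision 78.82 and sensitivity 99.22, and the harmonic mean reproduces its reported F1 of 87.85 (a quick check in Python; the helper name is ours):

```python
def f1_score(precision, sensitivity):
    """Harmonic mean of precision and sensitivity."""
    return 2 * precision * sensitivity / (precision + sensitivity)

# CovidNet3D-S on MosMed: very high sensitivity but low precision, so the
# harmonic mean pulls F1 down toward the weaker metric.
f1 = f1_score(78.82, 99.22)
assert round(f1, 2) == 87.85  # matches Table 1
```

The same check on EMARS-P (precision 93.56, sensitivity 85.49) gives 89.34, again matching Table 1.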
More results can be found in the Supplement.

Table 1: Results on the CC-CCII [31], MosMed [20], and Covid-CTset [22] datasets. A, P, and S in our model names indicate accuracy, potential, and small model size; e.g., EMARS-A indicates the model searched under the accuracy objective.

Dataset | Model | Size (MB) | Type | Accuracy | Precision | Sensitivity | F1
---|---|---|---|---|---|---|---
CC-CCII [China] [31] | ResNet3D101 [26] | 325.21 | Manual | 85.54 | 89.62 | 77.15 | 82.92
 | DenseNet3D121 [5] | 43.06 | Manual | 87.02 | 88.97 | 82.78 | 85.76
 | MC3_18 [26] | 43.84 | Manual | 86.16 | 87.11 | 82.78 | 84.89
 | COVID-AL [28] | - | Manual | 86.60 | - | - | -
 | VGG16-Ensemble [16] | - | Manual | 88.12 | 84.04 | 89.19 | 86.54
 | CovidNet3D-S [8] | 11.48 | Auto | 88.55 | 88.78 | 91.72 | 90.23
 | CovidNet3D-L [8] | 53.26 | Auto | 88.69 | 90.48 | 88.08 | 89.26
 | MNas3DNet [9] | 22.91 | Auto | 87.14 | 88.44 | 86.09 | 87.25
 | EMARS-A | 5.93 | Auto | 89.67 | 89.26 | 89.22 | 89.23
 | EMARS-P | 5.63 | Auto | 88.78 | 88.81 | 88.22 | 88.51
 | EMARS-APS | 3.38 | Auto | 89.61 | 91.48 | 89.97 | 90.72
MosMed [Russia] [20] | ResNet3D101 [26] | 325.21 | Manual | 81.82 | 81.31 | 97.25 | 88.57
 | DenseNet3D121 [5] | 43.06 | Manual | 79.55 | 84.23 | 92.16 | 88.01
 | MC3_18 [26] | 43.84 | Manual | 80.40 | 79.43 | 98.43 | 87.92
 | DeCoVNet [27] | - | Manual | 82.43 | - | - | -
 | CovidNet3D-S [8] | 12.48 | Auto | 81.17 | 78.82 | 99.22 | 87.85
 | CovidNet3D-L [8] | 60.39 | Auto | 82.29 | 79.50 | 98.82 | 88.11
 | EMARS-A | 2.89 | Auto | 80.98 | 77.91 | 99.61 | 87.44
 | EMARS-P | 18.22 | Auto | 84.34 | 93.56 | 85.49 | 89.34
 | EMARS-APS | 10.69 | Auto | 88.09 | 93.52 | 90.59 | 92.03
Covid-CTset [Iran] [22] | ResNet3D101 [26] | 325.21 | Manual | 93.87 | 92.34 | 95.54 | 93.92
 | DenseNet3D121 [5] | 43.06 | Manual | 91.91 | 92.57 | 92.57 | 92.57
 | MC3_18 [26] | 43.84 | Manual | 92.57 | 90.95 | 94.55 | 92.72
 | CovCTx [3] | - | Manual | 96.37 | - | 97.00 | -
 | Vit-32$\times$32 [25] | - | Manual | 95.36 | - | 83.00 | -
 | CovidNet3D-S [8] | 8.36 | Auto | 94.27 | 92.68 | 90.48 | 91.57
 | CovidNet3D-L [8] | 62.82 | Auto | 96.88 | 97.50 | 92.86 | 95.12
 | AutoGluon model [1] | 93.00 | Auto | 89.00 | 90.00 | 88.00 | 88.00
 | EMARS-A | 8.36 | Auto | 95.16 | 95.77 | 95.16 | 95.46
 | EMARS-P | 14.41 | Auto | 92.87 | 92.73 | 92.74 | 92.74
 | EMARS-APS | 9.95 | Auto | 97.66 | 97.61 | 97.58 | 97.59

## 5 Conclusion and Future Work In this work, we introduce an EA-based neural architecture search (EMARS) framework, which can efficiently discover superior 3D models under multiple objectives for COVID-19 3D CT classification. We demonstrate that our proposed objective, i.e., potential, can effectively alleviate search instability and help exploit promising models. The models searched by EMARS under the accuracy and potential objectives are small in size and outperform previous work on three public datasets. We believe our framework can also be extended to other types of datasets and tasks (e.g., segmentation), which we leave as future work. ## 6 Acknowledgements This work was supported in part by Hong Kong Research Matching Grant RMGS2019_1_23, the Zhejiang Province Nature Science Foundation of China under Grant LZ22F020003, and the HDU-CECDATA Joint Research Center of Big Data Technologies under Grant KYH063120009. ## References * [1] Anwar, T.: Covid19 diagnosis using automl from 3d ct scans. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 503–507 (2021) * [2] Chen, M., Fu, J., Ling, H.: One-shot neural ensemble architecture search by diversity-guided search space shrinking. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 16530–16539 (2021) * [3] Chetoui, M., Akhloufi, M.A.: Efficient deep neural network for an automated detection of covid-19 using ct images. In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). pp. 1769–1774. IEEE (2021) * [4] Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. 
IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002) * [5] Diba, A., Fayyaz, M., Sharma, V., Karami, A.H., Arzani, M.M., Yousefzadeh, R., Van Gool, L.: Temporal 3d convnets: New architecture and transfer learning for video classification. arXiv preprint arXiv:1711.08200 (2017) * [6] Elsken, T., Metzen, J.H., Hutter, F.: Neural architecture search: A survey. arXiv preprint arXiv:1808.05377 (2018) * [7] Fu, Z., Tang, N., Chen, Y., Ma, L., Wei, Y., Lu, Y., Ye, K., Liu, H., Tang, F., Huang, G., et al.: Ct features of covid-19 patients with two consecutive negative rt-pcr tests after treatment. Scientific Reports 10(1), 1–6 (2020) * [8] He, X., Wang, S., Chu, X., Shi, S., Tang, J., Liu, X., Yan, C., Zhang, J., Ding, G.: Automated model design and benchmarking of deep learning models for covid-19 detection with chest ct scans. Proceedings of the AAAI Conference on Artificial Intelligence pp. 4821–4829 (May 2021) * [9] He, X., Wang, S., Shi, S., Chu, X., Tang, J., Liu, X., Yan, C., Zhang, J., Ding, G.: Benchmarking deep learning models and automated model design for covid-19 detection with chest ct scans. medRxiv (2020) * [10] He, X., Zhao, K., Chu, X.: Automl: A survey of the state-of-the-art. Knowledge-Based Systems 212, 106622 (2021) * [11] Horry, M.J., Chakraborty, S., Pradhan, B., Fallahpoor, M., Chegeni, H., Paul, M.: Factors determining generalization in deep learning models for scoring covid-ct images. Mathematical Biosciences and Engineering 18(6), 9264–9293 (2021) * [12] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017) * [13] Hu, Y., Liang, Y., Guo, Z., Wan, R., Zhang, X., Wei, Y., Gu, Q., Sun, J.: Angle-based search space shrinking for neural architecture search. In: European Conference on Computer Vision. pp. 119–134. 
# Ordinal Monte Carlo Tree Search Tobias Joppen Johannes Fürnkranz (Received: date / Accepted: date) ###### Abstract In many problem settings, most notably in game playing, an agent receives a possibly delayed reward for its actions. Often, those rewards are handcrafted and not naturally given. Even simple terminal-only rewards, such as winning equals $1$ and losing equals $-1$, cannot be seen as an unbiased statement, since these values are chosen arbitrarily, and the behavior of the learner may change with different encodings. It is hard to argue about good rewards, and the performance of an agent often depends on the design of the reward signal. In particular, in domains where states by nature only have an ordinal ranking and where meaningful distance information between game state values is not available, a numerical reward signal is necessarily biased. In this paper we take a look at Monte Carlo Tree Search (MCTS), a popular algorithm for solving MDPs, highlight a recurring problem concerning its use of rewards, and show that an ordinal treatment of the rewards overcomes this problem. Using the General Video Game Playing framework, we show the dominance of our newly proposed ordinal MCTS algorithm over other MCTS variants, based on a novel bandit algorithm that we also introduce and test against UCB. ## 1 Introduction In reinforcement learning, an agent solves a Markov decision process (MDP) by selecting actions that maximize its long-term reward. Most state-of-the-art algorithms assume numerical rewards. In domains like finance, real-valued rewards are naturally given, but many other domains do not have a natural numerical reward representation. In such cases, numerical values are often handcrafted by experts so that they optimize the performance of their algorithms. This process is not trivial, and it is hard to argue about good rewards. Hence, such handcrafted rewards may easily be erroneous and contain biases.
For special cases such as domains with true ordinal rewards, it has been shown that it is impossible to create numerical rewards that are not biased. For example, yannakakis2017ordinal argue that emotions need to be treated as ordinal information. In fact, it is often hard or impossible to tell whether domains are real-valued or ordinal by nature. Experts may even design handcrafted numerical rewards without thinking about alternatives, since using numerical rewards is the state of the art and most algorithms require them. In this paper we want to emphasize that numerical rewards do not have to be the ground truth, and it may be worthwhile for the machine learning community to take a closer look at other options, ordinal rewards being only one of them. A popular example where the use of numerical values fails is a minimalistic medicine treatment setting: Consider three possible outcomes of a treatment: healthy, unchanged and dead. In the process of reward shaping, one assigns a numerical value to each outcome. Those numbers define a trade-off: how many patients have to be healed until one patient may die, in comparison to no treatment (all patients unchanged). It is impossible to avoid this trade-off with numerical values. Using an ordinal score, one could define the outcomes to be ordered as $\textit{healthy}>\textit{unchanged}>\textit{dead}$ without an implicit trade-off. In this paper we present OH-UCB, an ordinal algorithm that is able to solve this treatment problem without trading off healed and dead patients. ##### MCTS Monte Carlo tree search (MCTS) is a popular algorithm for solving MDPs. MCTS is used in many successful AI systems, such as AlphaGo silver2017mastering or top-ranked algorithms in the general video game playing competitions perez2018general ; YOLOBOT .
A recurring problem of MCTS with limited time resources is its behavior in the face of danger: As a running example, we look at a generic platform game, where an agent has to jump over deadly gaps to eventually reach the goal at the right. Dying is very bad, and the further the agent proceeds to the right, the better. The problem occurs when comparing the actions _jump_ and _stand still_ : jumping either leads to a better state than before, because the agent proceeded to the right by successfully jumping a gap, or to the worst possible state (_death_) in case the jump attempt failed. Standing still, on the other hand, safely avoids death, but will never advance to a better game state. MCTS averages the rewards obtained by experience, which often lets it choose the safer action and therefore not progress in the game, because the (few) experiences ending with its death pull the average reward of _jump_ below the mediocre but steady reward of standing still. Because of this, the behavior of MCTS has also been called _cowardly_ in the literature jacobsen2014monte ; khalifa2016modifying . Transferring those platform game experiences into an ordinal scale eliminates the need for meaningful distances. In this paper, we present an algorithm that depends only on pairwise comparisons on an ordinal scale, and selects _jump_ over _stand still_ if it is more often better than worse. We call this algorithm Ordinal MCTS (O-MCTS) and compare it to different MCTS variants using the General Video Game AI (GVGAI) framework perez2016general . In the next section we introduce MABs and MDPs as our problem definitions, and UCB, MultiSBM, PB-MCTS and MCTS as known solutions to these problems. In Section 3, we present our novel algorithms, followed by experiments showing how our algorithms compare to the existing ones (Section 4).
## 2 Monte Carlo Tree Search In this section, we briefly recapitulate Monte Carlo tree search and some of its variants, which are commonly used for solving Markov decision processes (MDPs). Prior to this, we introduce the multi-armed bandit problem (MAB) and a commonly used algorithm to solve MABs: UCB1. ### 2.1 Multi-Armed Bandits A _Multi-Armed Bandit_ (MAB) is a common problem one often faces: One has a set of possible actions (or arms, $A$) from which to choose one. Once an arm $a$ is chosen, the player receives a reward $r$ sampled from an unknown distribution $X_{a}$. Often rewards are designed to be numerical ($r\in\mathbb{R}$). The task is to identify the optimal arm, which returns the best rewards. For numerical rewards, the _best arm_ $a^{*}$ can be defined as the arm with the highest reward on average, i.e., $a^{*}=\arg\max_{a}\mathbb{E}[X_{a}]$. For ordinal rewards, the reward need not lie in $\mathbb{R}$, but in $\mathbb{O}$, where all elements $o\in\mathbb{O}$ can be ordered via a given preference relation $o_{1}<o_{2}$. Notice that for $\mathbb{R}$ such a preference relation is given through the natural ordering. Since one cannot add together or average elements of $\mathbb{O}$, we need a different definition of the _best arm_. Unlike for numerical rewards, there is no consensus about the definition of optimality. In this paper, we are interested in the Borda winner black1976partial : the arm that has the highest chance of beating another randomly chosen arm. We present our algorithm to solve ordinal MABs in a later section. ### 2.2 Markov Decision Process Conventional Monte Carlo tree search assumes a scenario in which an agent moves through a state space by taking different actions. This cannot be modeled by a MAB, since the arms and their reward distributions are fixed.
A Markov Decision Process (MDP) MDP takes MABs to the next level: instead of a single fixed state with fixed arms, the state changes every time an action is played. An MDP can be formalized as follows: * – A (finite) set of _states_ $S$. * – A (finite) set of _actions_ $A$ the agent can perform. Sometimes, only a subset $A(s)\subset A$ of actions is applicable in a state $s$. * – A Markovian _state transition_ function $\delta(s^{\prime}\mid s,a)$ denoting the probability that invoking action $a$ in state $s$ leads the agent to state $s^{\prime}$. * – A _reward function_ $r(s)\in\mathbb{R}$ that defines the reward the agent receives in state $s$. * – A distribution of _start states_ $\mu(s)\in[0,1]$, defining the probability that the MDP starts in that state. We assume a single start state $s_{0}$, with $\mu(s_{0})=1$ and $\mu(s)=0\;\forall s\neq s_{0}$. * – A set of _terminal states_ for which $A(s)=\emptyset$. We assume that only terminal states are associated with a non-zero reward. The task is to learn a _policy_ $\pi(a\mid s)$ that defines the probability of selecting an action $a$ in state $s$. The optimal policy $\pi^{*}(a\mid s)$ maximizes the expected, cumulative reward $\displaystyle V(s_{t})$ $\displaystyle=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}r(s_{t+k})\right],$ (1) $\displaystyle=r(s_{t})+\gamma\int_{S}\int_{A}\delta(s_{t+1}\mid a_{t},s_{t})\pi(a_{t}\mid s_{t})V(s_{t+1}).$ The optimal policy maximizes $V(s_{t})$ for all time steps $t$. Here, $\gamma\in[0,1)$ is a discount factor, which dampens the influence of later events in the sequence. To find an optimal policy, one needs to solve the so-called exploration/exploitation problem. The state/action spaces are usually too large to sample exhaustively. Hence, it is necessary to trade off the improvement of the current best policy (exploitation) with an exploration of unknown parts of the state/action space.
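To make the formalization concrete, the following minimal sketch draws one Monte Carlo sample of the discounted return in Eq. (1) for a two-action toy MDP. The states, transition probabilities and rewards here are our own illustrative assumptions, not taken from the paper; only terminal states carry a reward, as assumed in the text.

```python
import random

# Hypothetical toy MDP: a risky "advance" action and a safe "wait" action.
GAMMA = 0.9
TERMINAL_REWARD = {"goal": 1.0, "pit": 0.0}  # only terminal states are rewarded

def step(state, action):
    """Markovian transition delta(s' | s, a): sample a successor state."""
    if action == "advance":
        # risky action: usually reaches the goal, sometimes falls into the pit
        return "goal" if random.random() < 0.8 else "pit"
    return "start"  # any other action keeps the agent in place

def discounted_return(policy, start="start", horizon=50):
    """One Monte Carlo sample of sum_k gamma^k r(s_{t+k}) from Eq. (1)."""
    s, discount = start, 1.0
    for _ in range(horizon):
        if s in TERMINAL_REWARD:
            return discount * TERMINAL_REWARD[s]
        s = step(s, policy(s))
        discount *= GAMMA
    return 0.0
```

Averaging many such samples for the policy that always plays `advance` estimates $V(s_{0})=0.8\cdot\gamma\cdot 1=0.72$, whereas always waiting yields $0$.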
We also investigate an ordinal variant of the classical MDP: the ordinal MDP (O-MDP) weng2011markov . Just as for the ordinal MAB, the ordinal MDP uses ordinal rewards, and thus numerical definitions of optimality and regret cannot be applied here. As described in the last section, we are interested in the Borda winner in the case of ordinal rewards. The Borda winner maximizes the chance of beating the other arms. The chance of an arm $a$ beating all other arms is called the Borda score ($B(a)$, see Section 3.1.1). Each non-optimal arm has a lower Borda score than the optimal arm. The regret of playing a non-optimal arm $a$ instead of $a^{*}$ is the difference in Borda score: $\textit{regret}_{a}=B(a^{*})-B(a)$. Obviously, $\textit{regret}_{a^{*}}=0$. Note that it is not possible to use a single numerical bandit to optimize the Borda score, since it is not the direct reward that is visible to the agent. Furthermore, the Borda score depends not only on the distribution of the current arm, but also on the reward distributions of all other arms (since the Borda score is defined via comparisons between them). ### 2.3 Bandit Algorithms After introducing the problem frameworks MAB and MDP, we introduce common algorithms to solve them. We start with a popular algorithm to solve MABs, where the task is to identify the arm (or action) with the highest return by repeatedly pulling one of the possible arms. In this setting, there is only one non-terminal state, and the task is to achieve the highest average reward over a theoretically infinite number of plays. Here the exploration/exploitation dilemma occurs because the player must play the best-known arm to maximize the average reward (exploitation), but also needs to search for the best arm among all alternatives (exploration). A well-known technique for resolving this dilemma in bandit problems is the _upper confidence bounds (UCB)_ strategy UCB .
UCB estimates upper bounds on the expected reward for each arm and chooses the action with the highest associated upper bound. We then observe the outcome and update the bound. The simplest UCB policy (2) adds a bonus of $\sqrt{\frac{2\ln n}{n_{j}}}$ to the average reward $\bar{X_{j}}$, which depends on the number of visits, thereby increasing the chance that arms that have not yet been frequently played are selected in subsequent iterations. $UCB1=\bar{X_{j}}+\sqrt{\frac{2\ln n}{n_{j}}}$ (2) The first term favors arms with high payoffs, while the second term guarantees exploration UCB . The reward is expected to be bounded by $[0,1]$. In Section 3.1, we introduce two algorithms that are able to solve ordinal MABs. ### 2.4 Dueling Bandits A topic related to ordinal bandits are dueling bandits, where at each time step two arms are pulled and the agent receives one reward indicating which arm won the direct comparison. In practice, repeated dueling of different pairs is often used to identify the best element of a set, as in most sports leagues, where different teams play against each other. The biggest downside of this approach (at least from an energy or optimization point of view) is that each team needs to play against every other team at least once to be able to identify a winner. If it were possible to measure teams independently, a single measurement per team would suffice to identify the winner. Sample efficiency is the biggest difference between dueling and ordinal bandits: In ordinal bandits, it is possible to measure the quality of a single action, whereas dueling bandits always need two actions to be compared against each other. It has been shown that it is possible to reduce dueling bandits to common numerical bandits ailon2014reducing . Hence it is possible to optimize a bandit using preference information by relying on numerical bandit algorithms.
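Before moving on, the UCB1 rule (2) can be sketched as follows. This is a minimal illustration; the sampler interface and function names are our own, and unvisited arms are played first so that every count $n_{j}$ is positive before the bound is evaluated.

```python
import math

def ucb1_choose(counts, means, total):
    """UCB1 (Eq. 2): play each arm once, then maximize mean + sqrt(2 ln n / n_j)."""
    for j, n_j in enumerate(counts):
        if n_j == 0:
            return j
    return max(range(len(counts)),
               key=lambda j: means[j] + math.sqrt(2.0 * math.log(total) / counts[j]))

def run_ucb1(arms, steps):
    """arms: list of callables, each returning a reward in [0, 1]."""
    counts = [0] * len(arms)
    means = [0.0] * len(arms)
    for t in range(1, steps + 1):
        j = ucb1_choose(counts, means, t)
        r = arms[j]()
        counts[j] += 1
        means[j] += (r - means[j]) / counts[j]  # incremental running average
    return counts, means
```

Note that the suboptimal arm is still pulled occasionally, since its exploration bonus grows with the total number of plays.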
In the following, we introduce the MultiSBM algorithm, which enables us to compare UCB and our ordinal bandit algorithm to a dueling bandits approach. ### 2.5 MultiSBM The MultiSBM algorithm is able to learn from preference feedback while not being restricted to the dueling bandit framework, where two arms are pulled at a time instead of one. The main idea is to maintain one numerical bandit per arm (for example using UCB). In each round $t$, the arm $a_{t-1}$ played in the previous round determines which bandit is used to select the arm $a_{t}$ to be played. The feedback signal for the current bandit is the preference information of how $a_{t-1}$ performed in comparison to $a_{t}$ ailon2014reducing . We use MultiSBM as a preference-learning baseline to compare against UCB and our ordinal algorithms. ### 2.6 Monte Carlo Tree Search Monte Carlo tree search (MCTS) is a method for approximating an optimal policy for an MDP. It builds a partial search tree, which is more detailed where the rewards are high. MCTS spends less time evaluating less promising action sequences, but does not avoid them entirely, in order to explore the state space. The algorithm iterates over four steps MCTSSurvey : 1. 1. _Selection:_ Starting from the root node $v_{0}$, a _tree policy_ traverses to deeper nodes $v_{k}$, until a state with unvisited successor states is reached. 2. 2. _Expansion:_ One successor state is added to the tree. 3. 3. _Simulation:_ Starting from the new state, a so-called _rollout_ is performed, i.e., random actions are played until a terminal state is reached or a depth limit is exceeded. 4. 4. _Backpropagation:_ The reward of the last state of the simulation is backed up through all selected nodes.
The UCT formula $a^{*}_{v}=\max_{a\in A_{v}}\bar{X}_{v}(a)+2C\sqrt{\frac{2\ln n_{v}}{n_{v}(a)}}$ (3) has been derived from the UCB1 algorithm (see Eq. 2) and is used to select the most interesting action $a^{*}_{v}$ in a node $v$ by trading off the expected reward, estimated as $\bar{X}_{v}(a)=$ $\sum_{i=1}^{n_{v}(a)}X_{v}^{(i)}(a)/n_{v}(a)$ from $n_{v}(a)$ samples $X_{v}^{(i)}(a)$, with an exploration term $\sqrt{2\ln(n_{v})/n_{v}(a)}$. The trade-off parameter $C$ is often set to $C=1/\sqrt{2}$, which has been shown to ensure convergence for rewards $\in[0,1]$ UCT . In the following, we will often omit the subscript $v$ when it is clear from the context. ### 2.7 Preference-Based Monte Carlo Tree Search Figure 1: Three nontransitive actions. The tree introduces a bias to solve nontransitivity. A version of MCTS that uses preference-based feedback (PB-MCTS) was recently introduced by joppen2018preference . In this setting, agents receive rewards in the form of preferences over states. Hence, feedback about a single state $s$ is not available; it can only be compared to another state $s^{\prime}$, i.e., $s\succ s^{\prime}$ (dominance), $s^{\prime}\succ s$, or $s\not\sim s^{\prime}$ (incomparable). An iteration of PB-MCTS contains the same abstract steps as MCTS, but their realization differs. First and foremost, it is impossible to use preference information in a vanilla MCTS iteration, since it only samples a single trajectory, whereas a second state is needed for a comparison. Hence, PB-MCTS does not select a single path per iteration but an entire subtree of the search tree. In each of its nodes, two actions are selected that can be compared to each other. For the selection step, a modified version of the dueling bandit algorithm RUCB zoghi14 is used to select actions. There are two main disadvantages of this approach: 1. 1. _No transitivity_ is used. Given ten actions, MCTS needs only $10$ iterations to obtain a first estimate of the quality of each action.
In the preference-based approach, each action has to be compared with every other action before a first complete estimate can be made. This requires $(10\cdot 9)/2=45$ iterations, i.e., in general the effort is quadratic in the number of actions. 2. 2. A _binary subtree_ is needed to learn in each node of the currently best trajectory. Instead of a path of length $n$ for vanilla MCTS, the subtree consists of $2^{n}-1$ nodes and $2^{n-1}$ trajectories instead of only one, causing an exponential blowup of PB-MCTS’s search tree. Hence, we believe that PB-MCTS does not make optimal use of available computing resources, since from a local perspective, transitivity information is lost, and from a global perspective, the desired asymmetric growth of the search tree is undermined by the need to select a binary tree. Note that even in the case of a non-transitive domain, PB-MCTS will nevertheless obtain a transitive policy, as illustrated in Figure 1, where the circular preferences between actions A, B, and C cannot be reflected in the resulting tree structure. ## 3 Ordinal Algorithms In this section we introduce three novel ordinal algorithms: two for the multi-armed bandit setting and one novel ordinal MCTS algorithm for Markov decision processes. ### 3.1 Borda Bandit Our first bandit algorithm, O-UCB, is the basis for the following algorithms and a core contribution of this paper. The main idea is to store all ordinal reward values per action and to evaluate how good an action is based on how probable it is that a random ordinal reward observed for that arm is better than a random reward observed for another action. In the following we describe this idea in more detail. #### 3.1.1 The Borda Score The Borda score is based on the Borda count, which has its origins in voting theory black1976partial . Essentially, it estimates the probability of winning against a random competitor.
In our case, $B(a)$ is the probability of action $a$ winning against any other available action $b\neq a$ (with tie correction): $B(a)=\frac{1}{|A|-1}\sum_{b\in A\setminus\\{a\\}}\Pr(a\succ b)$ where $\Pr(a\succ b)=\Pr(X_{a}>X_{b})+\nicefrac{{1}}{{2}}\Pr(X_{a}=X_{b})$ and $X_{i}$ is the random variable responsible for sampling the ordinal rewards of arm $i$. The true values of $\Pr(X_{a}>X_{b})$ and $\Pr(X_{a}=X_{b})$ are unknown but can be estimated empirically. In dueling bandits or PB-MCTS, one picks two arms $a_{i}$ and $a_{j}$ and receives a direct sample of $\Pr(X_{a_{i}}>X_{a_{j}})$. In this paper, we take a different approach by assuming the availability of ordinal rewards per arm. For each available action $a\in A$, we store the empirical density function $\hat{f}_{a}$ and the empirical cumulative density function $\hat{F}_{a}$ of all backpropagated ordinal rewards $o\in O$. Given those functions, we can estimate $\Pr(X_{a}\succ X_{b})$ vargha : $\displaystyle\hat{B}(a\succ b)$ (4) $\displaystyle=\int\Pr[X_{b}<o]\hat{f}_{a}(o)+\nicefrac{{1}}{{2}}\Pr[X_{b}=o]\hat{f}_{a}(o)do$ $\displaystyle=\sum_{n=1}^{|O|}\frac{\hat{F}_{b}(o_{n-1})+\hat{F}_{b}(o_{n})}{2}\hat{f}_{a}(o_{n})$ where $\hat{F}_{b}(o_{0})=0$. #### 3.1.2 Algorithm In comparison to UCB, O-UCB uses the Borda score $\hat{B}$ as the exploitation term to choose the arm to play: $a^{*}=\max_{a\in A}\hat{B}(a)+2C\sqrt{\frac{2\ln n}{n(a)}}.$ (5) To summarize, there are two differences to the UCB implementation: The selection of which arm to play uses a different exploitation term, $\hat{B}$; and instead of updating a running average per action, $\hat{f}_{a}$ and $\hat{F}_{a}$ are updated according to the new ordinal reward.
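The two steps just described can be sketched over per-arm histograms of ordinal rewards. The histogram representation and all function names below are our own illustrative choices, not the authors' implementation.

```python
import math

def borda_prob(counts_a, counts_b):
    """Estimate Pr(X_a > X_b) + 0.5 Pr(X_a = X_b), i.e. the pairwise term of
    the Borda score, from per-arm histograms over ordinals o_1 < ... < o_K."""
    n_a, n_b = sum(counts_a), sum(counts_b)
    prob, cdf_b = 0.0, 0.0
    for c_a, c_b in zip(counts_a, counts_b):
        f_a, f_b = c_a / n_a, c_b / n_b
        prob += f_a * (cdf_b + 0.5 * f_b)  # Pr[X_b < o] + 0.5 Pr[X_b = o]
        cdf_b += f_b
    return prob

def o_ucb_choose(hists, total, C=1 / math.sqrt(2)):
    """O-UCB selection (Eq. 5): Borda score plus a UCB-style exploration bonus.
    hists[a] is the reward histogram of arm a; all arms must have been played."""
    def borda(a):
        others = [b for b in range(len(hists)) if b != a]
        return sum(borda_prob(hists[a], hists[b]) for b in others) / len(others)
    return max(range(len(hists)),
               key=lambda a: borda(a)
               + 2 * C * math.sqrt(2 * math.log(total) / sum(hists[a])))
```

For two arms with identical histograms, `borda_prob` returns exactly $0.5$, reflecting the tie correction.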
We present bounds on the regret for this choice of actions in bandit algorithms: ###### Theorem 3.1 For all $K>1$ and $O>1$, the expected regret after any number $n$ of plays of an ordinal bandit using (5) is at most $\mathcal{O}(\sum_{i\in A^{-}}\big{(}\frac{\ln n}{\Delta_{i}}\big{)})$ where $A^{-}$ is the set of non-optimal arms. ###### Proof See appendix.111The proof can be found at http://tiny.cc/OMCTS_proof ### 3.2 Hierarchical Borda Bandit The Borda bandit O-UCB will favor the arm that beats the other arms most often. In the simplest setting of two arms, O-UCB will choose the arm that more often wins direct duels with the other arm, independent of how large the difference between the outcomes is. In a lottery setting where one can choose to play or not to play, O-UCB will choose to play only if the chances of winning the lottery are above $50\%$. The same holds for the medicine treatment: If one can choose between no treatment or treatment, O-UCB will choose treatment if the treatment was successful in more than $50\%$ of the cases. Having patients die from the treatment in $49\%$ of the cases would be acceptable to O-UCB, which is understandable, since every distance measure has been removed and there is no reason for O-UCB to argue that death is worse than a good treatment. But it might be desirable to define some notion of distance, such as death being much worse than everything else. The following bandit algorithm, _Hierarchical Ordinal UCB_ (OH-UCB), provides parameters to define hierarchical preferences and thus different distance measures. An OH-UCB is parameterized by a tuple $(\hat{z},H)$, where $\hat{z}$ is a critical value used to check for significance (needed later), and $H\in\mathcal{P}(O)^{d}$ is a $d$-level hierarchy of ordinal values, where $\mathcal{P}(O)$ refers to the power set of $O$. In each hierarchy level $h\in\mathcal{P}(O)$, each element of $O$ can be selected or not.
The function $M_{h}:O\rightarrow O$ maps each element of $O$ to the next larger or equal element of $O$ that is selected in $h$ or, in case no such selected element exists, to the largest element of $O$. The selected elements now define a notion of distance: Two elements $o$ and $o^{\prime}$ are close-by in a hierarchy $h$ if $M_{h}(o)=M_{h}(o^{\prime})$, or in other words, if there is no selected element between $o$ and $o^{\prime}$ in $h$. The main idea of OH-UCB is the use of one O-UCB agent per hierarchy level $h$, where each agent $a_{h}$ perceives an ordinal reward signal $o$ as $M_{h}(o)$. Selecting an arm $a\in A$ is done in an iterative manner, going from the first hierarchy level $i=1$ to the last, $i=d$. The set of valid arms is initialized with all possible arms, $\bar{A}=A$. The agent of the current hierarchy level $h_{i}$ is used to identify the best action $a^{i}\in\bar{A}$ among all valid arms (picking the one that maximizes the O-UCB criterion (5)). Now, each other valid action $\hat{a}\in\bar{A}\setminus\\{a^{i}\\}$ is tested for whether $a^{i}$ is significantly better than $\hat{a}$, using $\hat{z}$ as the critical value in a Mann-Whitney U test between those two arms, ignoring the exploration trade-off: With $n^{i}$ and $\hat{n}$ being the number of plays of arms $a^{i}$ and $\hat{a}$, the Mann-Whitney U values for $a^{i}$ and $\hat{a}$ can be derived using the Borda score $\hat{B}(a^{i}\succ\hat{a})$: $U_{a^{i}}=\hat{B}(a^{i}\succ\hat{a})\cdot n^{i}\hat{n}$ and $U_{\hat{a}}=\hat{B}(\hat{a}\succ a^{i})\cdot n^{i}\hat{n}$ sprinthall1990basic .
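Two ingredients of OH-UCB described here can be sketched compactly: the hierarchy mapping $M_{h}$ (under our reading that an ordinal value selected in $h$ maps to itself) and the significance filter based on the standard normal approximation of the Mann-Whitney U statistic. All names are hypothetical; this is our sketch, not the authors' code.

```python
import math

def make_mapping(ordinals, selected):
    """M_h: map each value (ordinals sorted ascending) to the next selected
    value at or above it; if none exists, to the maximum of O."""
    def M(o):
        for cand in ordinals[ordinals.index(o):]:
            if cand in selected:
                return cand
        return ordinals[-1]
    return M

def significantly_better(borda_ab, n_a, n_b, z_crit=0.65):
    """True if arm a beats arm b significantly: U = B(a > b) * n_a * n_b,
    compared against N(n_a*n_b/2, n_a*n_b*(n_a+n_b+1)/12)."""
    if n_a <= 3 or n_b <= 3 or n_a + n_b <= 20:
        return False  # sample too small for the normal approximation
    U = borda_ab * n_a * n_b
    z = (U - n_a * n_b / 2) / math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)
    return z > z_crit
```

With the medicine hierarchy that selects only the lowest value, $M_{h}$ collapses all surviving outcomes to a single perceived reward, so the level-one bandit distinguishes only dead from not dead.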
For large enough samples (we use $n^{i}>3$, $\hat{n}>3$ and $n^{i}+\hat{n}>20$), U is approximately normally distributed, and hence we can check for significance by testing whether $\displaystyle\hat{z}$ $\displaystyle<\frac{U_{a^{i}}-m_{U_{a^{i}}}}{\sigma_{U_{a^{i}}}}$ (6) $\displaystyle\hat{z}$ $\displaystyle<\frac{U_{a^{i}}-\frac{n^{i}\hat{n}}{2}}{\sqrt{\frac{n^{i}\hat{n}(n^{i}+\hat{n}+1)}{12}}}$ where $m_{U}$ and $\sigma_{U}$ are the mean and standard deviation of $U$ vargha ; sprinthall1990basic . If a valid arm $\hat{a}$ is significantly worse than the best arm $a^{i}$, it is removed from the set of valid arms: $\bar{A}=\bar{A}\setminus\\{\hat{a}\\}$. If, after every valid arm has been tested for significance, $a^{i}$ is the only valid arm left, it is returned as the arm to be played by OH-UCB. Otherwise, the procedure continues with the next hierarchy level $i+1$. If no hierarchy level is left ($i=d$), $a^{i}$ is returned as the arm to be played, too. After an arm $a$ has been played and an ordinal reward $o$ has been received, each of the bandits $a_{h}$ is updated with $M_{h}(o)$. To give further insight and motivation, we illustrate how OH-UCB could be used in the medicine treatment setting: The main problem for UCB in this setting is to avoid _dead patients_ without defining a clear trade-off. Here, we can use the first hierarchy level to do exactly that, by selecting only _dead patients_ in $h_{1}$. Hence, this bandit only perceives two rewards, _patient dead_ and _patient not dead_, and will therefore favor those actions that lead to the smallest number of dead patients. Depending on the samples seen and $\hat{z}$, this first hierarchy level filters out those arms with significantly more dead patients. The second and last hierarchy level would select the complete set of ordinal values to allow the most fine-grained level of optimization. ### 3.3 Ordinal Monte Carlo Tree Search In this section, we introduce O-MCTS, an MCTS variant which relies only on ordinal information to learn a policy.
We derive an ordinal MCTS algorithm by using O-UCB instead of UCB as the tree policy. _Ordinal Monte Carlo tree search_ (O-MCTS) proceeds like conventional MCTS as introduced in Section 2.6, but replaces the average value $\bar{X}_{v}(a)$ in (3) with the Borda score $B_{v}(a)$ of action $a$ in node $v$, where $v$ now denotes the current state or node in the tree. Here, each arm is rated according to its mean performance against the other arms. To our knowledge, the Borda score has not been used in MCTS before, even though several papers have investigated its use in dueling bandit algorithms DuelingBandits ; urvoy2013generic ; jamieson2015sparse . To calculate the Borda score for each action in a node, O-MCTS stores the backpropagated ordinal values and estimates pairwise preference probabilities $P_{v}(a\succ b)$ from these data. Hence, it is not necessary to perform multiple rollouts in the same iteration as in PB-MCTS, because current rollouts can be directly compared to previously observed ones. Note that $P_{v}(a\succ b)$ can only be estimated if each action has been visited at least once. Hence, similar to other MCTS variants, we enforce this by always selecting non-visited actions in a node first. ### 3.4 Discussion Although the changes from MCTS to O-MCTS are comparatively small, the algorithms have very different characteristics. In this section, we highlight some of the differences between O-MCTS and MCTS. ##### Different Bias As mentioned previously, MCTS has been blamed for behaving cowardly, preferring safe but unyielding actions over actions that carry some risk but will in the long run result in higher rewards. As an example, consider Figure 2, which shows in its bottom row the distribution of trajectory values for two actions over a range of possible rewards. One action (circles) has a mediocre quality with low deviation, whereas the other (stars) is sometimes worse but often better than the first one.
Since MCTS prioritizes the stars only if their average is above the average of the circles, MCTS would often choose the safe, mediocre action. In the literature one can find many ideas to tackle this problem, such as MixMax backups khalifa2016modifying or adding domain knowledge (e.g., by giving a direct bonus to actions that should be executed perez2018general ; YOLOBOT ). O-MCTS takes a different point of view, by not comparing average values but by comparing how often stars are the better option than circles and vice versa. As a result, it would prefer the star action, which is preferable in 70% of the games. Please note that the given example can be inverted such that MCTS makes the right choice instead of O-MCTS. Figure 2: Two actions with different distributions. ##### Hyperparameters and Reward Shaping When trying to solve a problem with MCTS (and other algorithms, too), rewards can be seen as hyperparameters that can be tuned manually to make an algorithm work as desired. In theory this can be beneficial, since the algorithm can be tweaked through many parameters. In practice it can be very painful, since there is often an overwhelming number of hyperparameters to tune this way. This tuning process is called _reward shaping_. In theory, one can shape the state rewards until a greedy search performs optimally on any problem. O-MCTS reduces the number of hyperparameters by only asking for ordinal rewards, which is like asking for a ranking of states instead of a real number for each state. This limits the possibilities of reward shaping, but induces a fixed bias through the Borda method. ##### Computational Complexity Clearly, a naïve computation of $\hat{B}$ is computationally more expensive than MCTS’ calculation of a running average. We hence want to point out that, once a new ordinal reward is seen, it is possible to incrementally update the current value of $\hat{B}$ instead of recalculating it from scratch.
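One way to realize such an incremental update (our sketch, not the authors' implementation) is to keep per-arm histograms plus a matrix of pairwise "win mass" and to touch only the affected entries when a new reward arrives, rather than rescanning all stored rewards.

```python
class IncrementalBorda:
    """Maintain S[a][b] = sum over all observed sample pairs of
    [x_a > x_b] + 0.5 * [x_a = x_b], updated per new reward. All names here
    are hypothetical; o is the index of the ordinal value (0 = lowest)."""

    def __init__(self, n_arms, n_ordinals):
        self.hist = [[0] * n_ordinals for _ in range(n_arms)]
        self.S = [[0.0] * n_arms for _ in range(n_arms)]

    def update(self, a, o):
        """Record that arm a received ordinal reward index o."""
        for b in range(len(self.hist)):
            if b == a:
                continue
            below = sum(self.hist[b][:o])      # samples of b the new reward beats
            ties = self.hist[b][o]             # samples of b it ties with
            above = sum(self.hist[b][o + 1:])  # samples of b that beat it
            self.S[a][b] += below + 0.5 * ties
            self.S[b][a] += above + 0.5 * ties
        self.hist[a][o] += 1

    def prob(self, a, b):
        """Current estimate of Pr(X_a > X_b) + 0.5 Pr(X_a = X_b)."""
        return self.S[a][b] / (sum(self.hist[a]) * sum(self.hist[b]))
```

Each update costs time linear in the number of arms and ordinal values, independent of how many rewards have been stored so far.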
In our experiments, updating the Borda score took $3$ to $20$ times longer than updating the average (depending on the size of $O$ and $A$). These values only reflect the difference between updating $\hat{B}$ and updating the running average, not between the complete algorithms (where the factor is much lower, depending mostly on the runtime of the forward model). ## 4 Experimental Setup The experiments are split into two major sections: bandits and tree search. To be able to compare numerical baseline algorithms with ordinal algorithms, we first define numerical rewards and derive ordinal rewards from them. ### 4.1 Bandit Setup For the bandit setup, we inspect the medicine treatment problem from above: Imagine patients can be dead or alive after a medical treatment. In case of being alive, a continuous scale of wellbeing can differentiate the quality of the treatment, while being dead is the worst possible outcome. This can be modeled as a numerical score by defining death as a reward of $0$ and everything else in $(0,1]$. The aim is to identify a good treatment with as few dead patients as possible. This is deliberately defined not to seek any dead-patient/wellbeing trade-off: There is a clear preference to first minimize dead patients and then maximize the wellbeing score. We want to emphasize that this optimization cannot be modeled as a numerical score but is of a higher dimension. Hence UCB1 will not be able to find that optimum, but will only be able to maximize the expected numerical reward. Furthermore, O-UCB and MultiSBM will not directly minimize the number of dead patients either, since they maximize the pull of the arm that beats the other arms on average. Nevertheless, this setup can prove two main statements: First, OH-UCB can maximize the wellbeing score while minimizing the number of deaths without defining a numerical trade-off between those targets, and second, O-UCB can optimize the wellbeing score directly.
This direct optimization of the wellbeing score can be compared to UCB and MultiSBM in terms of convergence speed. Notice that we use a parameterized version of all bandit algorithms with parameter $c$, which trades off exploration and exploitation (compare Formula 2 and Formula 3). In our bandit setting, we have four different treatments available (four actions):
* • A $20\%$ death (r=$0$) / $80\%$ maximal wellbeing (r=$1$) treatment (a good but possibly lethal treatment)
* • An $80\%$ death (r=$0$) / $20\%$ maximal wellbeing (r=$1$) treatment (a riskier and worse treatment than the one above)
* • A no-treatment baseline (r=$0.6$)
* • A treatment that slightly increases the level of wellbeing of patients (r=$0.7$)
Using this bandit, we test four different algorithms: O-UCB, OH-UCB, UCB and MultiSBM. Each agent has 500 action pulls available per experiment, with each experiment being averaged over 20 runs. We repeat each experiment with the parameter $c\in(0.1,0.2,0.4,0.6,0.8,1,1.2,1.4,1.6,1.8,2)$ and compare the best results of each agent. O-UCB, OH-UCB and MultiSBM can derive their preferences from the wellbeing score described above. As described in Section 3.2, two hierarchies are used for OH-UCB: a first one defining that _dead_ is worse than everything else (i.e., $v=0\prec v>0$), and a full second hierarchy, where all ordinal values are considered. Hence OH-UCB should be able to prioritize those actions that do not lead to death and optimize the remaining actions as O-UCB would do. We use $\hat{z}=0.65$ as the critical z-value to check for significance in OH-UCB. ### 4.2 Bandit Results The target is to minimize dead patients. In case of ties, the average wellbeing score is used as a tie-breaker. First, we inspect Figure 3(a), where the number of dead patients is shown over time for the best $c$ parameter per algorithm. It can clearly be seen that OH-UCB is the only algorithm that converges towards a low number of deaths.
This is no surprise, since the other algorithms maximize the wellbeing value directly. A plot showing the best wellbeing values over time is given in Figure 3(b) for the $c$ parameters that maximized this score. Here one can see that O-UCB as well as UCB are able to maximize the value. MultiSBM lags a bit behind, since it needs to learn from pairs of actions and cannot transfer information between arm pairs as well as UCB or O-UCB can. In contrast to MultiSBM, O-UCB is able to use a single feedback value of one arm and compare it to any other value of any other arm. Lastly, we present an overview of the influence of parameter $c$ in Table 1 by displaying the average wellbeing score and number of deaths per algorithm and parameter. For $c\geq 0.2$, an interesting difference between UCB and O-UCB emerges: O-UCB keeps the number of deaths at around 83 with a decrease in wellbeing when exploration is increased. In contrast, UCB reduces both the wellbeing score and the number of deaths.
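For reference, the kind of parameterized arm selection the baselines build on can be sketched as follows; here $c$ scales the exploration bonus. This is a generic UCB1-style sketch, not a transcription of the paper's Formulas 2 and 3 (which are not reproduced in this excerpt).

```python
import math

def ucb1_select(reward_sums, counts, c):
    """Pick the arm maximizing empirical mean plus a c-scaled exploration
    bonus (standard UCB1 shape). `reward_sums[i]` is the summed reward of
    arm i and `counts[i]` its number of pulls."""
    total = sum(counts)

    def score(i):
        if counts[i] == 0:
            return math.inf          # pull every arm at least once
        mean = reward_sums[i] / counts[i]
        return mean + c * math.sqrt(2 * math.log(total) / counts[i])

    return max(range(len(reward_sums)), key=score)
```

With $c=0$ the rule is purely greedy; larger $c$ forces more pulls of underexplored arms, which is the trade-off scanned over in Table 1.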
Figure 3: Bandit results. (a) Dead patients over pulls (MultiSBM $c=0.1$, OH-MAB $c=0.2$, O-MAB $c=0.1$, UCB $c=0.1$). (b) Expected value over pulls (MultiSBM $c=0.2$, OH-MAB $c=0.4$, O-MAB $c=0.4$, UCB $c=0.4$).
$c$ | UCB Value | UCB Deaths | OH-UCB Value | OH-UCB Deaths | O-UCB Value | O-UCB Deaths | MultiSBM Value | MultiSBM Deaths
---|---|---|---|---|---|---|---|---
0.1 | 0.739 | 42.4 | 0.699 | 2.6 | 0.737 | 39.75 | 0.725 | 55.15
0.2 | 0.75 | 56 | 0.698 | 2.4 | 0.754 | 61.3 | 0.73 | 67.15
0.4 | 0.78 | 80.45 | 0.699 | 4.6 | 0.784 | 84.45 | 0.709 | 67.4
0.6 | 0.766 | 78.3 | 0.696 | 4.75 | 0.776 | 82.75 | 0.683 | 72.1
0.8 | 0.761 | 75.3 | 0.695 | 7.6 | 0.773 | 83.8 | 0.659 | 82.1
1.0 | 0.737 | 73.75 | 0.693 | 12.8 | 0.755 | 83.65 | 0.641 | 89.3
1.2 | 0.732 | 71.8 | 0.695 | 16.6 | 0.753 | 81.45 | 0.645 | 87.25
1.4 | 0.727 | 69.8 | 0.689 | 20.55 | 0.74 | 81.65 | 0.628 | 95.85
1.6 | 0.713 | 70.5 | 0.686 | 25 | 0.729 | 82.8 | 0.624 | 97.25
1.8 | 0.703 | 71.5 | 0.685 | 29.85 | 0.722 | 82.65 | 0.623 | 98.6
2.0 | 0.7 | 71.45 | 0.68 | 33.3 | 0.714 | 82.85 | 0.617 | 100.85
Table 1: The average wellbeing value and number of deaths per algorithm and $c$ parameter, averaged over 20 runs. ### 4.3 Tree Search Setup We test the three algorithms described above (MCTS, O-MCTS and PB-MCTS) using the General Video Game AI (GVGAI) framework perez2016general . As additional benchmarks we add MixMax (Q parameter set to $0.25$), an MCTS variation suggested by khalifa2016modifying to tackle the cowardly behavior, and Yolobot, a state-of-the-art GVGAI agent YOLOBOT ; perez2018general . GVGAI implements a variety of different video games and provides playing agents with a unified interface to simulate moves using a forward model. Using this forward model is expensive, so simulations take a lot of time. We use the number of calls to this forward model as a computational budget.
In comparison to using the real computation time, this measure is independent of specific hardware, algorithm implementations, and side effects such as logging data. Our algorithms are given access to the following pieces of information provided by the framework: _Available actions_ : The actions the agent can perform in a given state. _Game score_ : The score of the given state $\in\mathbb{Z}$. Depending on the game, this ranges from $0$ to $1$ or from $-1000$ to $10000$. _Game result_ : The result of the game: _won_ , _lost_ or _running_. _Simulate action_ : The forward model. It is stochastic, e.g., regarding enemy moves or random object spawns. #### 4.3.1 Heuristic Monte Carlo Tree Search The games in GVGAI have a large search space, with $5$ actions and up to $2000$ turns. Using vanilla MCTS, one rollout may use a substantial amount of time, since up to $2000$ moves have to be made to reach a terminal state. To achieve a good estimate, many rollouts have to be simulated. Hence it is common to stop rollouts early at non-terminal states, using a heuristic to estimate the value of these states. In our experiments, we use this variation of MCTS, adding the maximal rollout length $RL$ as an additional parameter. The heuristic value at non-terminal nodes is computed in the same way as the terminal reward (i.e., it essentially corresponds to the score at this state of the game). #### 4.3.2 Mapping Rewards to $\mathbb{R}$ The objective function has two dimensions: on the one hand, the agent needs to win the game by achieving a certain goal; on the other hand, it also needs to maximize its score. Winning is more important than getting higher scores. Since MCTS needs its rewards to be in $\mathbb{R}$, or better in $[0,1]$, the two-dimensional target function needs to be mapped to one dimension, in our case into $[0,1]$ for comparability and ease of parameter tuning.
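In code, such a mapping might look like the following sketch: it normalizes the score into $[0,1]$ and offsets it by the game result, anticipating the formula given as Equation (7). Names are our own.

```python
def r_mcts(score, r_min, r_max, result):
    """Map the two-dimensional (result, score) feedback into [0, 1]:
    lost games land in [0, 1/3], running games in [1/3, 2/3],
    won games in [2/3, 1], so winning always dominates the score."""
    r_norm = (score - r_min) / (r_max - r_min)
    offset = {"lost": 0.0, "playing": 1.0 / 3.0, "won": 2.0 / 3.0}[result]
    return r_norm / 3.0 + offset
```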
Knowing the possible scores of a game, the score can be normalized as $r_{norm}=(r-r_{min})/(r_{max}-r_{min})$, with $r_{max}$ and $r_{min}$ being the highest and lowest possible score. To model the relation _lost_ $\prec$ _playing_ $\prec$ _won_, which must hold for all states, we split the interval $[0,1]$ into three equal parts (cf. also the axis of Figure 2): $r_{mcts}=\frac{r_{norm}}{3}+\begin{cases}0,&\text{if lost}\\ \frac{1}{3},&\text{if playing}\\ \frac{2}{3},&\text{if won}.\end{cases}$ (7) This is only one of many possibilities to map the rewards to $[0,1]$, but it is an obvious and straightforward approach. Naturally, the results for the MCTS techniques, which use this reward, will change when a different reward mapping is used, and their results can probably be improved by shaping the reward. In fact, one of the main points of our work is to show that for O-MCTS (as well as for PB-MCTS) no such reward shaping is necessary, because these algorithms do not rely on the numerical information. For them, the mapped linear function with $a\succ b\Leftrightarrow r_{mcts}(a)>r_{mcts}(b)$ is equivalent to the preferences induced by the two-dimensional feedback. #### 4.3.3 Selected Games GVGAI provides users with many games. Evaluating on all of them is not feasible. Furthermore, some results would exhibit erratic behavior, since the tested algorithms (except for Yolobot) are not suitable for solving some of the games. For example, true rewards are often very sparse, and the agent has to be guided in some way to reliably solve the game. For this reason, we manually played all the games and selected a variety of interesting and not too complex games with different characteristics, which we believed to be solvable for the tested algorithms:
* • Zelda: The agent can hunt monsters and slay them with its sword. It wins by finding the key and taking the door.
* • Chase: The agent has to catch all animals, which flee from the agent.
Once an animal finds a caught one, it gets angry and chases the agent. If the agent gets caught this way, the game is lost.
* • Whackamole: The agent can collect mushrooms which spawn randomly. A cat helps it in doing so. The game is won after 2000 time steps or lost if the agent and the cat collide.
* • Boulderchase: The agent can dig through sand to a door that opens after it has collected ten diamonds. Monsters chase it through the sand, turning sand into diamonds. It may be very hard for an MCTS agent to solve this game.
* • Surround: The agent can win the game at any time by taking a specific action, or collect points by moving while leaving a snake-like trail. A moving enemy also leaves a trail. The agent must not collide with trails.
* • Jaws: The agent controls a submarine, which is hunted by a shark. It can shoot fish, which gives points and leaves an item behind. Once 20 items are collected, a collision with the shark gives a large number of points; otherwise it loses the game. Colliding with fish always loses the game. The fish spawn randomly at 6 specific positions.
* • Aliens: The agent can only move from left to right and shoot upwards. Aliens come flying from top to bottom, throwing rocks at the agent. To increase the score, the agent can shoot the aliens or shoot disappearing blocks.
The number of iterations that can be performed by the algorithms depends on the computational budget of calls to the forward model. We tested the algorithms with $250$, $500$, $1000$ and $10000$ forward model calls (later called _budget_). Thus, in total, we experimented with $28$ problem settings ($7$ domains $\times$ $4$ different budgets). #### 4.3.4 Tuning Algorithms and Experiments All MCTS algorithms have two parameters in common, the _exploration trade-off_ $C$ and the _rollout length_ $RL$. For $RL$ we tested 4 different values: $5,10,25$ and $50$; for $C$ we tested 9 values from $0$ to $2$ in steps of size $0.25$. In total, these are 36 configurations per algorithm.
To reduce variance, we repeated each experiment 40 times. Overall, 4 algorithms with 36 configurations were run 40 times on 28 problems, resulting in 161280 games played for tuning. Additionally, we compare the algorithms to Yolobot, a highly competitive GVGAI agent that won several challenges YOLOBOT ; perez2018general . Yolobot is able to solve games none of the other algorithms can solve. Note that Yolobot is designed and tuned to act within a 20ms time limit; scaling or even increasing the budget might lead to worse and unexpected behavior. Still, it is added for the sake of comparison and interpretability of strength. For Yolobot, each of the $28$ problems is played $40$ times, which leads to $1120$ additional games, or $162400$ games in total.222Results are available at https://github.com/Muffty/OMCTS_Appendix We are mainly interested in how well the different algorithms perform on the problems, given optimal tuning per problem. To answer this, we show the performance of the algorithms per problem in terms of the percentage of wins and the obtained average score. We perform a Friedman test on the average ranks of these data with a post-hoc Wilcoxon signed-rank test to test for significance demvsar2006statistical . Additionally, we show and discuss the performance of all parameter configurations.
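As a sanity check, the size of the experimental grid described above can be reproduced with a few lines:

```python
from itertools import product

rollout_lengths = [5, 10, 25, 50]                    # tested RL values
exploration_cs = [0.25 * i for i in range(9)]        # 0, 0.25, ..., 2.0
configs = list(product(rollout_lengths, exploration_cs))

algorithms = 4            # MCTS, O-MCTS, PB-MCTS, MixMax
problems = 7 * 4          # 7 games x 4 budgets
repeats = 40

tuning_games = algorithms * len(configs) * problems * repeats  # 161280
total_games = tuning_games + problems * repeats                # + Yolobot runs
```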
Game | Time | O-MCTS | MCTS | Yolobot | PB-MCTS | MixMax
---|---|---|---|---|---|---
Jaws | $10^{4}$ | 100% / 1083.8 | 100% / 832.7 | 27.5% / 274.7 | 80% / 895.7 | 67.5% / 866.8
Jaws | $10^{3}$ | 92.5% / 1028.2 | 95% / 958.9 | 35% / 391.0 | 52.5% / 788.5 | 65% / 736.4
Jaws | 500 | 85% / 923.4 | 90% / 1023.1 | 65% / 705.7 | 50% / 577.6 | 52.5% / 629.0
Jaws | 250 | 85% / 1000.9 | 85% / 997.6 | 32.5% / 359.6 | 37.5% / 548.8 | 37.5% / 469.0
Surround | $10^{4}$ | 100% / 81.5 | 100% / 71.0 | 100% / 81.2 | 100% / 64.3 | 100% / 57.6
Surround | $10^{3}$ | 100% / 83.0 | 100% / 80.8 | 100% / 77.3 | 100% / 40.8 | 100% / 25.0
Surround | 500 | 100% / 84.6 | 100% / 61.8 | 100% / 83.3 | 100% / 26.3 | 100% / 17.3
Surround | 250 | 100% / 83.4 | 100% / 64.7 | 100% / 76.1 | 100% / 14.3 | 100% / 10.3
Aliens | $10^{4}$ | 100% / 82.4 | 100% / 81.6 | 100% / 81.5 | 100% / 81.8 | 100% / 77.0
Aliens | $10^{3}$ | 100% / 79.7 | 100% / 78.4 | 100% / 82.2 | 100% / 76.9 | 100% / 76.4
Aliens | 500 | 100% / 78.0 | 100% / 77.3 | 100% / 81.1 | 100% / 77.2 | 100% / 76.0
Aliens | 250 | 100% / 77.7 | 100% / 77.1 | 100% / 79.3 | 100% / 75.8 | 100% / 74.8
Chase | $10^{4}$ | 87.5% / 6.2 | 80% / 6.0 | 50% / 4.8 | 67.5% / 5.2 | 37.5% / 3.9
Chase | $10^{3}$ | 60% / 4.8 | 50% / 4.8 | 70% / 5.1 | 30% / 3.7 | 17.5% / 2.6
Chase | 500 | 55% / 4.9 | 45% / 4.5 | 90% / 5.5 | 27.5% / 2.9 | 12.5% / 2.1
Chase | 250 | 40% / 4.2 | 32.5% / 4.1 | 90% / 5.6 | 17.5% / 2.5 | 7.5% / 2.6
Boulderchase | $10^{4}$ | 62.5% / 23.7 | 75% / 22.1 | 45% / 18.8 | 82.5% / 27.3 | 30% / 20.1
Boulderchase | $10^{3}$ | 50% / 22.8 | 32.5% / 18.6 | 52.5% / 21.8 | 40% / 18.1 | 22.5% / 16.2
Boulderchase | 500 | 47.5% / 24.7 | 30% / 20.2 | 35% / 18.3 | 32.5% / 19.4 | 15% / 14.4
Boulderchase | 250 | 40% / 20.9 | 40% / 20.1 | 60% / 21.7 | 17.5% / 14.7 | 15% / 15.3
Whackamole | $10^{4}$ | 100% / 72.5 | 100% / 44.4 | 75% / 37.0 | 97.5% / 60.1 | 75% / 48.5
Whackamole | $10^{3}$ | 100% / 64.0 | 100% / 41.8 | 55% / 33.9 | 77.5% / 43.9 | 65% / 39.0
Whackamole | 500 | 100% / 59.5 | 100% / 50.0 | 57.5% / 29.0 | 70% / 38.1 | 52.5% / 35.4
Whackamole | 250 | 97.5% / 54.8 | 100% / 45.9 | 50% / 28.5 | 65% / 35.1 | 52.5% / 26.6
Zelda | $10^{4}$ | 97.5% / 8.3 | 87.5% / 7.4 | 95% / 3.8 | 90% / 9.6 | 70% / 8.1
Zelda | $10^{3}$ | 80% / 8.8 | 85% / 7.5 | 87.5% / 5.3 | 57.5% / 8.6 | 42.5% / 8.8
Zelda | 500 | 62.5% / 8.6 | 75% / 8.2 | 77.5% / 4.6 | 50% / 8.8 | 35% / 7.8
Zelda | 250 | 55% / 8.4 | 55% / 7.8 | 70% / 4.4 | 45% / 8.0 | 30% / 7.2
$\emptyset$ Rank | | 1.6 | 2.5 | 2.6 | 3.5 | 4.7
Table 2: The results (win rate / average score) of the algorithms, tuned per row. ### 4.4 Tree Search Results Table 2 shows the best win rate and the corresponding average score of each algorithm, averaged over $40$ runs for each of the $36$ different parameter settings. In each row, a ranking of the algorithms is computed from the best win rates and average scores. The resulting average ranks are shown in the last line. We use a Friedman test and a post-hoc Wilcoxon signed-rank test as an indication of significant differences in performance. The results of the latter (with a significance level of $99\%$) are shown in Figure 4(a). (a) All game runs. Data from Table 2 (b) Only won game runs Figure 4: Average ranks and the result of a Wilcoxon signed-rank test with $\alpha=0.01$. Directly connected algorithms do not differ significantly. Table 3: Results for different parameters for all algorithms except Yolobot (rank 15). In each cell, the overall rank over all games and budgets is shown. We can see that O-MCTS performed best, with an average rank of $1.6$ and significantly better performance than MCTS and PB-MCTS. Table 2 allows us to take a closer look at the individual domains. For games that are easy to win, such as Surround, Aliens and Whackamole, O-MCTS beats MCTS and PB-MCTS by winning with a higher score. In Chase, a deadly but more deterministic game, O-MCTS achieves a higher win rate. In deadly and stochastic games like Zelda, Boulderchase and Jaws, O-MCTS performs comparably to the other algorithms, without any of them performing significantly better than the others. Figure 4(b) summarizes the results when only won games are considered.
It can be seen that in this case, PB-MCTS is significantly better than MCTS. This implies that if PB-MCTS manages to win, it does so with a greater score than MCTS, but it wins less often. Yolobot falls behind because it is designed to primarily maximize the win rate, not the score. Inspecting the performance of MixMax, it can easily be seen that its added bias towards higher scores often results in death: looking at only won games (see Figure 4(b)), it achieves a higher rank than MCTS, but overall its performance is significantly worse. In conclusion, we found evidence that O-MCTS’s preference for actions that _maximize win rate_ works better than MCTS’s tendency to _maximize average performance_ in the tested domains. ##### Parameter Optimization In Table 3, the overall rank over all parameters is shown for all algorithms. It is clearly visible that a low rollout length $RL$ improves performance and is more important to tune correctly than the exploration-exploitation trade-off $C$. Since Yolobot has no parameters, it is not shown. Except for the extreme case of no exploration ($C=0$), O-MCTS with $RL=5$ is better than any other MCTS algorithm. The best configuration is O-MCTS with $RL=5$ and $C=1.25$. ##### Video Demonstrations For each algorithm and game, we have recorded a video where the agent wins333You can watch the videos at https://bit.ly/2ohbYb3. In those videos it can be seen that O-MCTS frequently plays actions that lead to a higher score, whereas MCTS plays more safely, often too cautiously and averse to risking any potentially deadly effect. ## 5 Conclusion In this paper we proposed O-MCTS, a modification of MCTS that handles rewards in an ordinal way: instead of averaging backpropagated values to obtain a value estimate, it estimates the winning probability of an action using the Borda score. By doing so, the magnitudes of distances between different reward signals are disregarded, which can be useful in ordinal domains.
In our experiments using the GVGAI framework, we compared O-MCTS to MCTS, PB-MCTS, MixMax and Yolobot, a specialized agent for this domain. Overall, O-MCTS achieved higher win rates and reached higher scores than the other algorithms, confirming that this approach can even be useful in domains where numeric reward information is available. O-MCTS is based on O-UCB, an ordinal variation of UCB which we also introduced and tested. Additionally, we have introduced a hierarchical version of O-UCB with which it is possible to define ordinal thresholds that are optimized first. ##### Acknowledgments This work was supported by the German Research Foundation (DFG project number FU 580/10). We gratefully acknowledge the use of the Lichtenberg high performance computer of the TU Darmstadt for our experiments. ## References * (1) Ailon, N., Karnin, Z., Joachims, T.: Reducing dueling bandits to cardinal bandits. In: International Conference on Machine Learning, pp. 856–864 (2014) * (2) Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Machine Learning 47(2-3), 235–256 (2002) * (3) Black, D.: Partial justification of the Borda count. Public Choice 28(1), 1–15 (1976) * (4) Browne, C.B., Powley, E., Whitehouse, D., Lucas, S.M., Cowling, P.I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., Colton, S.: A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games 4(1), 1–43 (2012). DOI 10.1109/tciaig.2012.2186810 * (5) Demšar, J.: Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research 7(Jan), 1–30 (2006) * (6) Jacobsen, E.J., Greve, R., Togelius, J.: Monte Mario: Platforming with MCTS. In: Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 293–300. ACM (2014) * (7) Jamieson, K.G., Katariya, S., Deshpande, A., Nowak, R.D.: Sparse dueling bandits. 
In: AISTATS (2015) * (8) Joppen, T., Moneke, M.U., Schröder, N., Wirth, C., Fürnkranz, J.: Informed hybrid game tree search for general video game playing. IEEE Transactions on Games 10(1), 78–90 (2018). DOI 10.1109/TCIAIG.2017.2722235 * (9) Joppen, T., Wirth, C., Fürnkranz, J.: Preference-based Monte Carlo tree search. In: Proceedings of the 41st German Conference on AI (KI-18) (2018) * (10) Khalifa, A., Isaksen, A., Togelius, J., Nealen, A.: Modifying MCTS for human-like general video game playing. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16), pp. 2514–2520 (2016) * (11) Kocsis, L., Szepesvári, C.: Bandit based Monte-Carlo planning. In: Proceedings of the 17th European Conference on Machine Learning (ECML-06), pp. 282–293 (2006) * (12) Perez-Liebana, D., Liu, J., Khalifa, A., Gaina, R.D., Togelius, J., Lucas, S.M.: General video game AI: A multi-track framework for evaluating agents, games and content generation algorithms. arXiv preprint arXiv:1802.10363 (2018) * (13) Perez-Liebana, D., Samothrakis, S., Togelius, J., Lucas, S.M., Schaul, T.: General video game AI: Competition, challenges and opportunities. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence, pp. 4335–4337 (2016) * (14) Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming, 2nd edn. Wiley (2005) * (15) Ramamohan, S.Y., Rajkumar, A., Agarwal, S., Agarwal, S.: Dueling bandits: Beyond condorcet winners to general tournament solutions. In: D.D. Lee, M. Sugiyama, U.V. Luxburg, I. Guyon, R. Garnett (eds.) Advances in Neural Information Processing Systems 29, pp. 1253–1261. Curran Associates, Inc. (2016). URL http://papers.nips.cc/paper/6337-dueling-bandits-beyond-condorcet-winners-to-general-tournament-solutions.pdf * (16) Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al.: Mastering the game of Go without human knowledge. 
Nature 550(7676), 354 (2017) * (17) Sprinthall, R.C., Fisk, S.T.: Basic statistical analysis. Prentice Hall, Englewood Cliffs, NJ (1990) * (18) Urvoy, T., Clerot, F., Féraud, R., Naamane, S.: Generic exploration and k-armed voting bandits. In: International Conference on Machine Learning, pp. 91–99 (2013) * (19) Vargha, A., Delaney, H.D.: A critique and improvement of the ”CL” common language effect size statistics of McGraw and Wong. Journal of Educational and Behavioral Statistics 25(2), 101–132 (2000). URL http://www.jstor.org/stable/1165329 * (20) Weng, P.: Markov decision processes with ordinal rewards: Reference point-based preferences. In: Proceedings of the 21st International Conference on Automated Planning and Scheduling (ICAPS-11), ICAPS (2011) * (21) Yannakakis, G.N., Cowie, R., Busso, C.: The ordinal nature of emotions. In: Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction (ACII-17) (2017) * (22) Zoghi, M., Whiteson, S., Munos, R., Rijke, M.: Relative upper confidence bound for the k-armed dueling bandit problem. In: Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 10–18 (2014). URL http://proceedings.mlr.press/v32/zoghi14.html
# LEVERAGING 3D INFORMATION IN UNSUPERVISED BRAIN MRI SEGMENTATION ###### Abstract Automatic segmentation of brain abnormalities is challenging, as they vary considerably from one pathology to another. Current methods are supervised and require numerous annotated images for each pathology, a strenuous task. To tackle anatomical variability, Unsupervised Anomaly Detection (UAD) methods have been proposed, detecting anomalies as outliers of a healthy model learned using a Variational Autoencoder (VAE). Previous work on UAD adopted a 2D approach, meaning that MRIs are processed as a collection of independent slices, which does not fully exploit the spatial information contained in MRI. Here, we propose to perform UAD in a 3D fashion and compare 2D and 3D VAEs. As a side contribution, we present a new loss function guaranteeing robust training. Learning is performed using a multicentric dataset of healthy brain MRIs, and segmentation performance is estimated on White-Matter Hyperintensities and tumor lesions. Experiments demonstrate the interest of 3D methods, which outperform their 2D counterparts. ††*Data used in preparation of this article were partially obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf Index Terms— Deep Learning, Variational Autoencoder, Anomaly Detection, Medical Imaging ## 1 Introduction Brain anomalies are widely used as biomarkers indicating the presence or progress of many neurological disorders. Magnetic Resonance Imaging (MRI) is today an essential modality to reveal these markers.
In recent years, a multitude of algorithms for the automatic detection of anomalies in brain MRI has been proposed, with promising results achieved by Deep Learning (DL) approaches. Anomalies can manifest in a wide range of shapes, locations and intensities. To automatically detect them, most state-of-the-art algorithms use a supervised approach, trained on a manually annotated dataset that is as large as possible [1]. Acquisition of such a dataset can be both time-consuming and expensive. Besides, these algorithms are specific to the task they are trained for and will therefore perform poorly when applied to images presenting unseen types of anomalies. To overcome these limitations, Unsupervised Anomaly Detection (UAD) methods have been proposed. They consist in the construction of a healthy model, which allows anomalies to be detected from their deviation from the model. Recent DL methods propose to learn a manifold of healthy patients by computing lower-dimensional latent representations of the scans. At inference, anomalous images are mapped to the learned healthy manifold. An anomaly map is then computed using the voxel-wise difference from its healthy projection. Although failing for the time being to meet the state-of-the-art results of supervised methods, UAD approaches have several advantages: they do not require any labeled data and can be adapted with very limited adjustments to different types of pathologies. Two types of neural networks are particularly investigated in UAD: Generative Adversarial Networks (GANs) [2] and Variational Autoencoders (VAEs) [3]. GANs use an adversarial training approach to construct the healthy manifold [4]. VAEs rather encode images into latent distributions and use a regularization term in their loss function to ensure consistency in the latent space [5]. Yet, the training of VAEs is known to be unstable [6]. As an attempt to alleviate this issue, we propose a robust loss function to ensure stable training.
To the best of our knowledge, all previous research related to UAD tackles the problem in a 2D fashion, meaning that the MRI volume is processed as a collection of independent 2D slices. Despite being computationally efficient, this approach has several limitations. First, it requires the selection of homogeneous slices. Top and bottom slices, which contain no or little brain tissue, are excluded, as they may compromise the training of 2D networks. This is a limiting step because it requires hand-tuning and prevents the detection of anomalies in the entire MRI. Second, the processing of 2D data does not fully exploit the spatial information available in the scans. This restriction to 2D may arise from the significant difficulty of 3D training, which suffers from the high dimensionality of the data and the reduced number of training samples. Another limitation of the state of the art is the restriction of most previous work on UAD to monocentric training datasets [5, 7, 8], which fails to ensure a clinically realistic setup. In this work, we present a 3D VAE framework for UAD. We train 2D and 3D models using our proposed robust loss function and compare them on multiple pathologies acquired in real clinical conditions. Experiments show that exploiting the full images in a 3D fashion leads to better performance than the usual 2D approaches. ## 2 Methods ### 2.1 UAD using Variational Autoencoders VAEs are composed of an encoder followed by a decoder. The encoder compresses the input image $X$ into a distribution over the latent space, which is regularized to be close to a prior distribution, conventionally set to be a multivariate Gaussian $N(z;0,I)$. A point $z$ is then sampled from this latent distribution and presented to the decoder, which produces a reconstruction $\hat{X}$ of the input. The bottleneck separating the encoder and the decoder distinguishes spatial and dense VAEs.
In the spatial configuration, VAEs are fully convolutional, meaning that the latent vector is a multi-dimensional tensor, $z\in\mathbb{R}^{N\times h\times w}$ in 2D and $z\in\mathbb{R}^{N\times d\times h\times w}$ in 3D, with $N$ the latent space dimension, $h$ the height, $w$ the width and $d$ the depth. In this configuration, spatial information is preserved during the encoding-decoding process. In the dense configuration, the image is encoded into a 1-dimensional tensor $z\in\mathbb{R}^{N}$ with the use of fully-connected layers, and as a consequence spatial information is discarded. After training on healthy subjects, we perform UAD using a reconstruction-based approach, the classical approach for DL-based UAD. It consists in the computation of anomaly maps, obtained by subtracting the reconstruction $\hat{X}$ from the input image $X$. As a result of the training, anomalies are badly reconstructed once projected onto the healthy manifold, and thus yield high values in the anomaly map. By binarizing the maps with an appropriate threshold, anomaly segmentations are obtained. ### 2.2 Collapsing-robust loss function Training of a VAE is performed by minimizing a two-term loss function, the evidence lower bound ($ELBO(X,\hat{X})$): $\pazocal{L}=|X-\hat{X}|_{d}+D_{KL}(q_{\phi}(z|X)\,\|\,P(z)),\quad d\in\{1,2\}$ (1) The left-hand term of this equation is the reconstruction term, which computes the distance between the image $X$ and its reconstruction $\hat{X}$. The second term is the Kullback-Leibler divergence (KL) between the posterior $q_{\phi}(z|X)$ and the defined prior $P(z)$, which acts as a regularization term in the latent space. Training a VAE using Equation (1) is prone to _posterior collapse_ , meaning that the network may learn to neglect a subset of latent variables to match the prior, and thus the generative power of the network degrades [6].
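For concreteness, Equation (1) can be written out for a diagonal-Gaussian posterior as follows (a numpy sketch; the variable names are our own):

```python
import numpy as np

def elbo_loss(x, x_hat, mu, log_var, d=1):
    """Equation (1): reconstruction distance plus KL(q(z|x) || N(0, I)),
    for a diagonal-Gaussian posterior with parameters (mu, log_var)."""
    if d == 1:
        rec = np.abs(x - x_hat).sum()            # l1 reconstruction
    else:
        rec = ((x - x_hat) ** 2).sum()           # l2 reconstruction
    # closed-form KL divergence against the standard normal prior
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return rec + kl
```

Posterior collapse manifests here as the KL term being driven to zero for some latent dimensions, at the cost of those dimensions carrying no information about the input.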
Solutions have been proposed to tackle the problem, including the addition of a hyperparameter $\beta$ to balance both terms of the equation [9]. Following the same motivation, KL annealing schedules have been presented, including the cyclical KL annealing schedule described in [10]. We build on this work and propose a custom loss function defined as follows: $\pazocal{L}_{T}=\dfrac{|X-\hat{X}|_{1}}{\Sigma}+\beta(t)D_{KL}(q_{\phi}(z|X)\,\|\,P(z))$ (2) with $\beta(t)=\begin{cases}\dfrac{2t}{T}&\text{for }t\in[0,\frac{T}{2})\\ 1&\text{otherwise}\end{cases}$ and $\beta(t+T)=\beta(t)$, where $\Sigma$ is the moving mean computed over the $L$ last values of the reconstruction term, $t$ is the current iteration and $T$ the cycle duration. $T$ and $L$ are two hyperparameters that we set to 50 and 10 respectively in our experiments. We choose the $\ell_{1}$ norm as the reconstruction term to reduce blurring of the reconstructed images. Normalizing this term by its moving mean keeps the reconstruction term near one, which encourages the network to keep learning throughout the entire training stage, even when the reconstruction becomes satisfactory. Coupled with the cyclical KL annealing schedule, we obtain a loss robust to collapsing which ensures stable training of VAEs. We use this loss without any additional parameter tuning in all our experiments. ### 2.3 Architecture Details Our encoders are composed of 6 layers: 4 convolutional layers compressing the input image, and 2 additional layers encoding the result into a normal distribution by computing its parameters $\mu$ and $\sigma$. These two last layers are convolutional in the spatial configuration, and fully-connected for dense VAEs. A latent vector $z$ is then sampled from this distribution and passed to the decoder. Decoders adopt a symmetric architecture, with 5 layers decompressing $z$ in order to reconstruct the input image.
Similarly, the first layer of the decoder is convolutional in the spatial configuration, and fully-connected otherwise. 3D adaptations of our architectures are obtained by replacing 2D convolutions with their 3D counterparts.

## 3 Experiments

We focus on FLAIR brain MRI. To train our networks, we gather 79 healthy scans from several open-source datasets: the ADNI dataset, IBC [11] (obtained from the OpenfMRI database, accession number ds000244) and Kirby [12]. Evaluation of our networks is performed on two datasets presenting various brain lesions. We use a collection of 196 scans with annotated White-Matter Hyperintensities (WMH) gathered from several open-source datasets: the MICCAI MSSEG Challenge [13], the ISBI MS Lesion Challenge [14], and the WMH Challenge [15]. Additionally, we select 100 brain tumor scans from the BraTS 2018 dataset [16]. These datasets are multi-centric and comprise various scanners and clinical protocols, ensuring a clinically realistic setup. Scans are rigidly registered on a $170\times 204\times 170$ template with an isotropic resolution of $1$ mm, corrected for bias using [17] and skull-stripped using the HD-BET algorithm [18]. Volumes are then cropped to $160\times 192\times 75$ to concentrate on the central slices, for which brain tissue is abundant. Finally, the intensity range is set to $[0,1]$. Segmentations are produced by binarizing anomaly maps, obtained as the difference between an image and its reconstruction. Binarization is performed using a threshold above which voxels are considered anomalous. To find its optimal value, we split each test set in half. On the first half, we compute Dice scores between ground truths and the segmentations obtained with 15 different thresholds, ranging from $0$ to $0.15$, which we found to be a relevant range in our experiments. The threshold which provided the best performance is then applied to the second half.
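The calibration step just described can be sketched as follows (helper names and toy values are my own; Dice is computed per scan and averaged over the calibration half):

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between a binary prediction and a binary ground truth."""
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def calibrate_threshold(anomaly_maps, ground_truths, thresholds):
    """Pick the threshold maximizing the mean Dice on the calibration half."""
    return max(thresholds,
               key=lambda t: np.mean([dice(m > t, g)
                                      for m, g in zip(anomaly_maps, ground_truths)]))

# Toy calibration set: anomaly values of 0.2 inside the lesion, 0.05 elsewhere.
gt = np.zeros((8, 8), dtype=bool); gt[2:4, 2:4] = True
amap = np.where(gt, 0.2, 0.05)
best = calibrate_threshold([amap], [gt], np.linspace(0.0, 0.15, 15))
print(best)  # a threshold separating lesion values from the background
```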
The obtained segmentations are then multiplied by a slightly eroded brain mask in order to remove false positives occurring near the brain contour. Finally, a median filter is applied to remove regions with fewer than 10 voxels.

Fig. 1: Reconstructions and segmentations of our networks. Left: WMH lesions. Right: tumor lesions.

## 4 Results

We evaluate the segmentation performance of our networks using Dice, sensitivity and specificity scores. Reconstruction performance is estimated using the voxel-wise mean absolute error (MAE) between the input scan and its reconstruction. To compare 2D and 3D networks fairly, they are all evaluated on the same 75 central slices of each test scan. The performance of the Spatial 2D and Dense 2D models, as well as their 3D adaptations, is reported in Table 1. Illustrative reconstructions and segmentations can be found in Figure 1 for both WMH and tumor lesions. The 3D adaptations demonstrate a boost in performance compared to 2D. On both the WMH and Tumors datasets, the best network is the Dense 3D VAE, with Dice scores of $0.463\pm 0.259$ and $0.650\pm 0.190$, a specificity of $0.996$ and $0.981$, and a sensitivity of $0.511$ and $0.711$, respectively.

| Model | Dice (WMH) | Spe (WMH) | Sen (WMH) | MAE (WMH) | Dice (Tumors) | Spe (Tumors) | Sen (Tumors) | MAE (Tumors) |
|---|---|---|---|---|---|---|---|---|
| Spatial 2D | $0.280\pm 0.174$ | $0.986$ | $0.547$ | $0.049$ | $0.260\pm 0.121$ | $0.912$ | $0.376$ | $0.079$ |
| Spatial 3D | $0.336\pm 0.203$ | $0.989$ | $0.605$ | $0.056$ | $0.386\pm 0.160$ | $0.898$ | $0.655$ | $0.065$ |
| Dense 2D | $0.460\pm 0.262$ | $0.994$ | $0.604$ | $0.102$ | $0.618\pm 0.167$ | $0.978$ | $0.676$ | $0.133$ |
| Dense 3D | $\textbf{0.463}\pm\textbf{0.259}$ | $0.996$ | $0.511$ | $0.104$ | $\textbf{0.650}\pm\textbf{0.190}$ | $0.981$ | $0.711$ | $0.151$ |

Table 1: Performance metrics obtained on the test datasets. Spe = specificity, Sen = sensitivity, MAE = mean absolute error.

## 5 Discussion and Conclusion

In this work, the interest of 3D methods for UAD in brain MRI was evaluated.
Using our proposed collapsing-robust loss function, 2D and 3D models were compared on two pathological datasets, showing an overall increase in performance in the 3D setting. For spatial VAEs, this gain was more significant than for dense networks. In the dense configuration, spatial information is not preserved during the encoding-decoding scheme; thus, the additional spatial context brought by 3D did not significantly increase segmentation performance. Additionally, we observed that dense networks outperformed spatial networks in both the 2D and 3D cases. They indeed apply a significantly higher compression of the scan than spatial networks. Consequently, their reconstructions contain very little detail and anomalies are entirely discarded, which makes the anomalies easier to detect. Although satisfactory, the performance of our approaches remains lower than that of state-of-the-art supervised methods: a Dice score above $0.9$ is obtained on the tumor segmentation task in [19], and a score of $0.56$ is achieved for WMH lesion segmentation in [20]. Although UAD methods fail to reach these standards, they are very promising, as they detect anomalies in a generic fashion and without the use of annotated training samples. One interesting application of UAD approaches is the generation of raw segmentations, which can be used as a starting point for manual segmentation, saving time for the rater. In future work, advanced 3D architectures could be tested with the use of deeper networks. Extension to multi-sequence MRI data is also of interest.

## 6 Compliance with ethical standards

This research study was conducted retrospectively using human subject data made available by the following sources: ADNI, OpenfMRI, NITRC, the MICCAI MSSEG Challenge, the ISBI MS Lesion Challenge, the WMH Challenge and BraTS 2018. Ethical approval was not required, as confirmed by the license attached to the data.
## 7 Acknowledgments BL, ML, SD and AT are employees of the Pixyl Company. MD and FF serve on Pixyl advisory board. ## References * [1] Z. Akkus et al., “Deep learning for brain MRI segmentation: state of the art and future directions,” J Dig Im, vol. 30, no. 4, 2017. * [2] Ian Goodfellow et al., “Generative adversarial nets,” Adv Neural Inf Proc Sys, 2014. * [3] D. P. Kingma et al., “Auto-encoding variational Bayes,” Int Conf Learn Repres, 2013. * [4] F. Di Mattia et al., “A survey on GANs for anomaly detection,” CoRR, 2019. * [5] C. Baur et al., “Autoencoders for unsupervised anomaly segmentation in brain MR images: A comparative study,” CoRR, 2020. * [6] J. Lucas et al., “Understanding posterior collapse in generative latent variable models,” Int Conf Learn Repres, 2019. * [7] X. Chen et al., “Unsupervised detection of lesions in brain MRI using constrained adversarial auto-encoders,” CoRR, 2018. * [8] D. Zimmerer et al., “Context-encoding variational autoencoder for unsupervised anomaly detection,” CoRR, 2018. * [9] I. Higgins et al., “beta-VAE: Learning basic visual concepts with a constrained variational framework,” Int Conf Learn Repres, 2016. * [10] H. Fu et al., “Cyclical annealing schedule: A simple approach to mitigating KL vanishing,” NAACL-HLT, vol. 1, 2019. * [11] A. L. Pinho et al., “Individual brain charting, a high-resolution fmri dataset for cognitive mapping,” Scientific data, vol. 5, 2018. * [12] B. A. Landman et al., “Multi-parametric neuroimaging reproducibility: a 3T resource study,” NeuroIm, 2011. * [13] Olivier Commowick et al., “Objective evaluation of multiple sclerosis lesion segmentation using a data management and processing infrastructure,” Scientific reports, vol. 8, no. 1, 2018. * [14] Aaron Carass et al., “Longitudinal multiple sclerosis lesion segmentation: resource and challenge,” NeuroImage, vol. 148, 2017. * [15] Hugo J. 
Kuijf et al., “Standardized assessment of automatic segmentation of white matter hyperintensities and results of the WMH segmentation challenge,” IEEE Transactions on Medical Imaging, vol. 38, no. 11, 2019. * [16] B. H. Menze et al., “The multimodal brain tumor image segmentation benchmark (BRATS),” TMI, vol. 34, no. 10, 2014. * [17] N. J. Tustison et al., “N4ITK: improved N3 bias correction,” TMI, vol. 29, no. 6, 2010. * [18] F. Isensee et al., “Automated brain extraction of multisequence MRI using artificial neural networks,” HBM, vol. 40, no. 17, 2019. * [19] D. Lachinov et al., “Glioma segmentation with cascaded UNet,” in BrainLes. Springer, 2018. * [20] S. Valverde et al., “Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach,” NeuroIm, vol. 155, 2017.
# Discrete Adaptive Control Allocation

S. S. Tohidi Mechanical Engineering Department Bilkent University Ankara, 06800 Turkey <EMAIL_ADDRESS> Y. Yildiz Mechanical Engineering Department Bilkent University Ankara, 06800 Turkey <EMAIL_ADDRESS>

###### Abstract

The main purpose of a control allocator is to distribute a total control effort among redundant actuators. This paper proposes a discrete adaptive control allocator for over-actuated sampled-data systems in the presence of actuator uncertainty. The proposed method does not require uncertainty estimation or persistency of excitation. Furthermore, the presented algorithm employs a closed loop reference model, which provides fast convergence without introducing excessive oscillations. To generate the total control signal, an LQR controller with reference tracking is used to guarantee the outer loop asymptotic stability. The discretized version of the Aerodata Model in Research Environment (ADMIRE) is used as an over-actuated system to demonstrate the efficacy of the proposed method.

## 1 Introduction

Increasing the number of actuators in dynamical systems is one of the ways to improve maneuverability and fault tolerance [1, 2]. Thanks to advances in microprocessors and progress in actuator miniaturization, which have reduced actuator costs, over-actuated systems are becoming ubiquitous in engineering applications. Aerial vehicles [3, 4, 5, 6, 7, 8, 9], marine vehicles [10, 11], and automobiles [12, 13, 14, 15] can be counted as examples of systems where redundant actuators are employed. Allocating control signals among redundant actuators can be achieved via several different control allocation methods, which can be grouped as follows: pseudo-inverse-based, optimization-based and dynamic control allocation.
Pseudo-inverse-based methods [16, 2], which have the lowest computational complexity among the three, are implemented by manipulating the null space of the control input matrix. Optimization-based methods [17, 18, 19, 20, 21] minimize a cost function that penalizes the difference between the desired and achieved total control inputs. Dynamic control allocation methods [22, 23, 1], on the other hand, are based on solving differential equations that model the control allocation goals. These methods can be extended to consider actuator limits [24, 25]. A survey of control allocation methods can be found in [26]. Actuator effectiveness uncertainty is an inevitable problem in several dynamical systems, and in over-actuated systems it can be handled by control allocation methods. However, most of the control allocation methods proposed to solve this problem require uncertainty estimation and persistently exciting input signals [27, 2]. Adaptive control allocation [28, 1, 29], on the other hand, is able to manage actuator redundancy and uncertainty without these requirements. The majority of control algorithms are implemented using digital technology. Therefore, before their application, continuous-time controllers need an intermediate discretization step, which may lead to a loss of stability margins [30, 31]. In this paper, a discrete adaptive control allocation method is introduced for sampled-data systems. The approach is inspired by the continuous control allocation algorithm recently proposed in [1]. It requires neither uncertainty estimation nor a persistency of excitation assumption. Furthermore, the method is implemented using a closed loop reference model [32], which has been shown to speed up the system response without causing excessive oscillations [33]. To the best of our knowledge, a discrete adaptive control allocator with these properties is not available in the prior literature.
This paper is organized as follows: Section 2 introduces the notation and definitions employed throughout the paper. Section 3 presents the control allocation problem statement. The discrete adaptive control allocation is presented in Section 4. The controller design is presented in Section 5. Section 6 illustrates the effectiveness of the proposed method in a simulation environment. Finally, Section 7 summarizes the paper.

## 2 NOTATIONS

In this section, we collect several definitions and basic results which are used in the following sections. Throughout this paper, $\lambda_{\text{min}}(.)$, $\lambda_{\text{max}}(.)$ and $\lambda_{i}(.)$ refer to the minimum, maximum and the $i^{\text{th}}$ eigenvalue of a matrix, respectively. $I_{r}$ is the identity matrix of dimension $r\times r$, and $0_{r\times m}$ is the zero matrix of dimension $r\times m$. $\text{tr}(.)$ refers to the trace operation and $\text{diag}([.])$ symbolizes a diagonal matrix with the elements of a vector $[.]$. $\mathbb{R}$, $\mathbb{R}^{n}$ and $\mathbb{R}^{n\times m}$ denote the set of real numbers, real column vectors with $n$ elements, and $n\times m$ real matrices, respectively. $\mathbb{Z}^{+}$ denotes the set of non-negative integers. In discrete time, the $\mathcal{L}_{2}$ and $\mathcal{L}_{\infty}$ signal norms are defined as $\displaystyle||x(k)||=||x(k)||_{2}$ $\displaystyle=\sqrt{\sum_{k=0}^{\infty}\left(x_{1}^{2}(k)+...+x_{n}^{2}(k)\right)},$ (1) $\displaystyle||x(k)||_{\infty}$ $\displaystyle=\sup_{k\in\mathbb{Z}^{+}}\max_{1\leq i\leq n}|x_{i}(k)|,$ (2) where $x_{1},...,x_{n}$ are the elements of $x$. We write $x(k)\in\mathcal{L}_{2}$ if $||x(k)||_{2}<\infty$, and $x(k)\in\mathcal{L}_{\infty}$ if $||x(k)||_{\infty}<\infty$.
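For a signal truncated to a finite horizon, these two norms can be computed directly; a small illustration in plain Python (the toy signal is my own):

```python
import math

def l2_norm(x):
    """Discrete-time L2 norm of a vector-valued signal x[k], finite-horizon truncation."""
    return math.sqrt(sum(sum(xi * xi for xi in xk) for xk in x))

def linf_norm(x):
    """Discrete-time L-infinity norm: largest component magnitude over all k."""
    return max(max(abs(xi) for xi in xk) for xk in x)

signal = [(1.0, 0.0), (0.0, -2.0), (0.5, 0.5)]
print(l2_norm(signal))   # ≈ 2.345 (sqrt of 1 + 4 + 0.25 + 0.25)
print(linf_norm(signal)) # → 2.0
```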
## 3 PROBLEM STATEMENT

Consider the following discretized plant dynamics $\displaystyle{x}(k+1)$ $\displaystyle=Ax(k)+B_{u}u(k)$ $\displaystyle=Ax(k)+B_{v}Bu(k),$ (3) where $k\in\mathbb{Z}^{+}$ is the sampling instant, $x\in\mathbb{R}^{n}$ is the state vector, $u\in\mathbb{R}^{m}$ is the control input vector, $A\in\mathbb{R}^{n\times n}$ is the known state matrix and $B_{u}\in\mathbb{R}^{n\times m}$ is the known control input matrix.

###### Remark 1

It is noted that in the development of the control allocator, whether the matrix $A$ is known or unknown is irrelevant. This matrix plays a role in the controller (not control allocator) design, and since the contribution of this paper is a novel discrete-time adaptive control allocator, the matrix $A$ is taken to be known to facilitate the controller development.

Redundancy of actuators causes $B_{u}$ in (3) to be rank deficient, that is, $\text{rank}(B_{u})=r<m$. Therefore, $B_{u}$ can be decomposed into the known matrices $B_{v}\in\mathbb{R}^{n\times r}$ and $B\in\mathbb{R}^{r\times m}$ with $\text{rank}(B_{v})=\text{rank}(B)=r$. To model actuator degradation, a diagonal matrix $\Lambda\in\mathbb{R}^{m\times m}$ with uncertain positive elements belonging to $(0,1]$ is introduced into the system dynamics as $\displaystyle{x}(k+1)$ $\displaystyle=Ax(k)+B_{v}B\Lambda u(k)$ $\displaystyle=Ax(k)+B_{v}v(k),$ (4) where $v\in\mathbb{R}^{r}$ denotes the bounded control input produced by the controller. The boundedness of the control input can be guaranteed by applying a soft saturation to the control signal $v$ before feeding it to the control allocator. An example of this can be seen in [1]. The control allocation problem is to achieve $B\Lambda u(k)=v(k),$ (5) without using any matrix identification methods. Since $\Lambda$ is unknown, conventional control allocation methods do not apply.
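One way to obtain such a full-rank factorization $B_u = B_v B$ numerically is a truncated SVD; a sketch with a hypothetical rank-deficient $B_u$ of my choosing (note the factorization is not unique):

```python
import numpy as np

# Hypothetical rank-deficient input matrix: n = 2 states, m = 3 actuators, rank r = 1
B_u = np.array([[1.0, 2.0, 1.0],
                [2.0, 4.0, 2.0]])

U, s, Vt = np.linalg.svd(B_u)
r = int(np.sum(s > 1e-10))      # numerical rank
B_v = U[:, :r] * s[:r]          # n x r, full column rank
B = Vt[:r, :]                   # r x m, full row rank

assert np.allclose(B_v @ B, B_u)
print(r)  # → 1
```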
## 4 DISCRETE ADAPTIVE CONTROL ALLOCATION

Consider the following dynamics ${\xi}(k+1)=A_{m}\xi(k)+B\Lambda u(k)-v(k),$ (6) where $A_{m}\in\mathbb{R}^{r\times r}$ is a stable matrix, that is, the eigenvalues of $A_{m}$ are inside the unit circle. A reference model dynamics is chosen as ${\xi}_{m}(k+1)=A_{m}\xi_{m}(k).$ (7) Defining the control input as a mapping from $v$ to $u$, $u(k)={\theta}_{v}^{T}(k)v(k),$ (8) where $\theta_{v}\in\mathbb{R}^{r\times m}$ represents the adaptive parameter matrix to be determined, and substituting (8) into (6), it is obtained that ${\xi}(k+1)=A_{m}\xi(k)+(B\Lambda{\theta}_{v}^{T}(k)-I_{r})v(k).$ (9) Defining $\theta_{v}(k)=\theta_{v}^{*}+\tilde{\theta}_{v}(k)$, where ${\theta_{v}^{*}}=((B\Lambda)^{T}(B\Lambda\Lambda B^{T})^{-1})^{T}$ is the ideal value of $\theta_{v}$, whose transpose is the pseudo-inverse of $B\Lambda$, and $\tilde{\theta}_{v}$ is the deviation of $\theta_{v}$ from its ideal value, equation (9) can be rewritten as $\xi(k+1)=A_{m}\xi(k)+B\Lambda\tilde{\theta}_{v}^{T}(k)v(k).$ (10) Defining the error $e(k)=\xi(k)-\xi_{m}(k)$, and using (7) and (10), the error dynamics is obtained as ${e}(k+1)=A_{m}e(k)+B\Lambda\tilde{\theta}_{v}^{T}(k)v(k).$ (11)

###### Assumption 1

The design matrix $A_{m}$ is chosen such that [34]: (i) $|\lambda_{i}(A_{m})|\leq 1,\ i=1,...,r$, (ii) all controllable modes of $(A_{m},B\Lambda)$ are inside the unit circle, (iii) the eigenvalues of $A_{m}$ on the unit circle have a Jordan block of size one.

###### Theorem 1

Consider the system $x(k+1)=\hat{A}x(k)+\hat{B}u(k)$, which satisfies Assumption 1. There exist positive constants $m_{1}$ and $m_{2}$, independent of $k$ and $N$, such that $\displaystyle||x(k)||\leq m_{1}+m_{2}\max_{0\leq\tau\leq N}||u(\tau)||,$ (12) for all $k$ such that $0\leq k\leq N$.

###### Proof 1

The proof can be found in [34]. $\blacksquare$

###### Theorem 2

Consider the error dynamics (11), which satisfies Assumption 1.
If the update law $\displaystyle{\theta}_{v}(k+1)={\theta}_{v}(k)+\Gamma v(k)\epsilon^{T}(k)B$ (13) is used, where $0<\Gamma=\Gamma^{T}\in\mathbb{R}^{r\times r}$ is the adaptation rate matrix, and $\epsilon(k)\in\mathbb{R}^{r}$ is defined as $\displaystyle\epsilon(k)=\frac{v(k)-B\Lambda u(k)}{\sigma^{2}(k)},$ (14) with $\sigma(k)\equiv\sqrt{1+\lambda_{\text{max}}(B\Lambda B^{T})v^{T}(k)\Gamma v(k)}$, then the adaptive parameter $\theta_{v}(k)$, the error signal $e(k)$ and all signals remain bounded. Furthermore, $\lim_{k\rightarrow\infty}e(k)=0$. ###### Proof 2 Consider the scalar positive definite function $V(k)=\text{tr}\left\\{\tilde{\theta}_{v}^{T}(k)\Gamma^{-1}\tilde{\theta}_{v}(k)\Lambda\right\\},$ (15) where $\Gamma=\Gamma^{T}>0$. The time increment of (15) can be calculated as $\displaystyle V(k+1)-V(k)$ $\displaystyle=\text{tr}\left\\{\tilde{\theta}_{v}^{T}(k+1)\Gamma^{-1}\tilde{\theta}_{v}(k+1)\Lambda\right\\}-\text{tr}\left\\{\tilde{\theta}_{v}^{T}(k)\Gamma^{-1}\tilde{\theta}_{v}(k)\Lambda\right\\}.$ (16) Using (13) and the fact that $\theta_{v}(k)=\theta_{v}^{*}+\tilde{\theta}_{v}(k)$, it is obtained that $\displaystyle\tilde{\theta}_{v}(k+1)$ $\displaystyle=\theta_{v}(k+1)-\theta_{v}^{*}$ $\displaystyle=\theta_{v}(k)+\Gamma v(k)\epsilon^{T}(k)B-\theta_{v}^{*}$ $\displaystyle=\tilde{\theta}_{v}(k)+\Gamma v(k)\epsilon^{T}(k)B.$ (17) Substituting (2) in (16), and using the trace property, $\text{tr}\left\\{A+B\right\\}=\text{tr}\left\\{A\right\\}+\text{tr}\left\\{B\right\\}$ for two square matrices $A$ and $B$, we have $\displaystyle V(k+1)-V(k)$ $\displaystyle=\text{tr}\Biggl{\\{}\tilde{\theta}_{v}^{T}(k)\Gamma^{-1}\tilde{\theta}_{v}(k)\Lambda+\tilde{\theta}_{v}^{T}(k)v(k)\epsilon^{T}(k)B\Lambda+B^{T}\epsilon(k)v^{T}(k)\tilde{\theta}_{v}(k)\Lambda$ $\displaystyle+B^{T}\epsilon(k)v^{T}(k)\Gamma v(k)\epsilon^{T}(k)B\Lambda\Bigg{\\}}-\text{tr}\left\\{\tilde{\theta}_{v}^{T}(k)\Gamma^{-1}\tilde{\theta}_{v}(k)\Lambda\right\\}$ 
$\displaystyle=\text{tr}\Biggl{\\{}2B^{T}\epsilon(k)v^{T}(k)\tilde{\theta}_{v}(k)\Lambda+B^{T}\epsilon(k)v^{T}(k)\Gamma v(k)\epsilon^{T}(k)B\Lambda\Bigg{\\}}.$ (18) Since $u(k)=\theta_{v}^{T}v(k)$ and $v(k)=B\Lambda\theta_{v}^{*^{T}}v(k)$, (14) can be rewritten as $\displaystyle\epsilon(k)=\frac{-B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)}{\sigma^{2}(k)}.$ (19) By substituting (19) in (2) and using the trace properties, $\text{tr}\left\\{cA\right\\}=c\times\text{tr}\left\\{A\right\\}$, for a square matrix $A$ and a scalar $c$, and $a^{T}b=\text{tr}\left\\{ba^{T}\right\\}$, for two column vectors $a$ and $b$, (2) can be rewritten as $\displaystyle V(k+1)-V(k)$ $\displaystyle=\text{tr}\Biggl{\\{}\frac{-2B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)v^{T}(k)\tilde{\theta}_{v}(k)\Lambda}{\sigma^{2}(k)}+\frac{B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)v^{T}(k)\Gamma v(k)v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda}{\sigma^{4}(k)}\Biggr{\\}}$ $\displaystyle=\frac{1}{\sigma^{2}}\text{tr}\Biggl{\\{}-2B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)v^{T}(k)\tilde{\theta}_{v}(k)\Lambda+\frac{B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)v^{T}(k)\Gamma v(k)v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda}{\sigma^{2}(k)}\Biggr{\\}}$ $\displaystyle=\frac{1}{\sigma^{2}(k)}\text{tr}\Biggl{\\{}-2v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)$ $\displaystyle+\frac{v^{T}(k)\Gamma v(k)v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)}{\sigma^{2}(k)}\Biggr{\\}}.$ (20) Using the inequality $a^{T}Aa\leq\lambda_{\text{max}}(A)a^{T}a$ for a symmetric matrix $A$ and a column vector $a$, an upper bound for (2) can be obtained as $\displaystyle V(k+1)-V(k)$ $\displaystyle\leq\frac{1}{\sigma^{2}(k)}\text{tr}\Biggl{\\{}-2v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)$ $\displaystyle+\lambda_{\text{max}}(B\Lambda B^{T})\frac{v^{T}(k)\Gamma v(k)}{\sigma^{2}(k)}v^{T}(k)\tilde{\theta}_{v}(k)\Lambda 
B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)\Biggr{\\}}$ $\displaystyle=\frac{1}{\sigma^{2}(k)}\text{tr}\Biggl{\\{}v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)\times\left(-2+\lambda_{\text{max}}(B\Lambda B^{T})\frac{v^{T}(k)\Gamma v(k)}{\sigma^{2}(k)}\right)\Biggr{\\}}.$ (21) Considering the definition of $\sigma(k)$, which is given after (14), it can be obtained that $-2\leq\left(-2+\lambda_{\text{max}}(B\Lambda B^{T})\frac{v^{T}(k)\Gamma v(k)}{\sigma^{2}(k)}\right)<-1$. Therefore, an upper bound for (2) can be written as $\displaystyle V(k+1)-V(k)<\frac{-1}{\sigma^{2}(k)}v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)\leq 0.$ (22) This shows that $V(k)\in\mathcal{L}_{\infty}$ and therefore $\tilde{\theta}_{v}(k)\in\mathcal{L}_{\infty}$, which implies that ${\theta}_{v}(k)\in\mathcal{L}_{\infty}$. In addition, since $V(k)$ is decreasing and positive definite, it has a limit as $k\rightarrow\infty$, that is, $\lim_{k\rightarrow\infty}V(k)=V_{\infty}$. Furthermore, using Theorem 1, the error dynamics (11), and the boundedness of $\tilde{\theta}_{v}$ and $v$, it can be shown that $e(t)\in\mathcal{L}_{\infty}$. Finally, since $\xi_{m}$ in (7) is bounded, $\xi$ is also bounded. 
Summing both sides of (22) from $k=0$ to $\infty$, it is obtained that $\displaystyle\sum_{k=0}^{\infty}\frac{1}{\sigma^{2}(k)}\left(v^{T}(k)\tilde{\theta}_{v}(k)\Lambda B^{T}B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)\right)\leq V(0)-V_{\infty}\leq\infty,$ $\displaystyle\Rightarrow\lambda_{\text{min}}(\Lambda B^{T}B\Lambda)\sum_{k=0}^{\infty}\frac{1}{\sigma^{2}(k)}\left(v^{T}(k)\tilde{\theta}_{v}(k)\tilde{\theta}_{v}^{T}(k)v(k)\right)\leq V(0)-V_{\infty}\leq\infty,$ $\displaystyle\Rightarrow\sum_{k=0}^{\infty}\frac{v^{T}(k)\tilde{\theta}_{v}(k)\tilde{\theta}_{v}^{T}(k)v(k)}{\sigma^{2}(k)}\leq\frac{V(0)-V_{\infty}}{\lambda_{\text{min}}(\Lambda B^{T}B\Lambda)}\leq\infty.$ (23) This leads to the conclusion that $\frac{\tilde{\theta}_{v}^{T}(k)v(k)}{\sigma(k)}\in\mathcal{L}_{2}$. Therefore [35], $\displaystyle\lim_{k\rightarrow\infty}\frac{\tilde{\theta}_{v}^{T}(k)v(k)}{\sigma(k)}=0.$ (24) Since $v(k)$ is bounded, $\sigma(k)$ is also bounded. Furthermore, $\sigma(k)\geq 1$ by definition. Therefore, using (24) it is obtained that $\displaystyle\lim_{k\rightarrow\infty}\tilde{\theta}_{v}^{T}(k)v(k)=0_{m\times 1}.$ (25) Using (25) and (19), it can be concluded that $\epsilon(k)$ converges to zero. Considering (13) and the convergence of $\epsilon(k)$ to zero, we deduce that $\theta_{v}(k)$ converges to a constant value as $k\rightarrow\infty$. Finally, (25) and the error dynamics (11) lead to the conclusion that $\lim_{k\rightarrow\infty}e(k)=0$.$\blacksquare$ ###### Remark 2 To realize (6), the signal $B\Lambda u(k)$ is required. In motion control applications, this signal corresponds to the net external forces and moments [36], which can be obtained via an inertial measurement unit (IMU). Examples of measuring/estimating this signal, without introducing delay or noise amplification, and employing it in real applications can be found in [37] and [38]. 
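As a small numerical check of the update law (13)-(14), the following simulation uses toy matrices of my own choosing; since $\Lambda$ is "unknown", $\lambda_{\text{max}}(B\Lambda B^{T})$ is replaced by its computable upper bound obtained with $\Lambda=I$, which only enlarges $\sigma$ and therefore preserves the bound used in the proof:

```python
import numpy as np

B = np.array([[1.0, 0.5, 0.0],     # r = 2 virtual inputs, m = 3 actuators (toy values)
              [0.0, 1.0, 1.0]])
Lam = np.diag([0.7, 0.9, 0.5])     # "unknown" effectiveness, elements in (0, 1]
Gamma = 0.5 * np.eye(2)
lam_max = np.max(np.linalg.eigvalsh(B @ B.T))  # upper bound on lambda_max(B Lam B^T)

theta = np.zeros((2, 3))           # adaptive parameter theta_v, initialized at zero
errs = []
for k in range(2000):
    v = np.array([np.sin(0.05 * k), np.cos(0.03 * k)])  # bounded commanded input
    u = theta.T @ v                                     # Eq. (8)
    achieved = B @ Lam @ u                              # B Lam u, assumed measurable (Remark 2)
    sigma2 = 1.0 + lam_max * (v @ Gamma @ v)
    eps = (v - achieved) / sigma2                       # Eq. (14)
    theta = theta + np.outer(Gamma @ v, eps @ B)        # Eq. (13): Gamma v eps^T B
    errs.append(np.linalg.norm(v - achieved))

print(errs[0], errs[-1])  # the allocation error shrinks toward zero
```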
###### Remark 3

To calculate $\sigma$, whose definition is given after (14), $\lambda_{\text{max}}(B\Lambda B^{T})$ needs to be computed. Although $\Lambda$ is an unknown matrix, the range of its elements is known, namely $(0,1]$. Therefore, an upper bound on the maximum eigenvalue of the matrix product $B\Lambda B^{T}$ can be computed.

To obtain fast convergence without introducing excessive oscillations, the open loop reference model (7) is modified into a closed loop reference model [32] as follows: $\displaystyle{\xi}_{m1}(k+1)$ $\displaystyle=A_{m}\xi_{m1}(k)+l\left(\xi(k)-\xi_{m1}(k)\right)$ (26) $\displaystyle=A_{m}^{c}\xi_{m1}(k)+l\xi(k),$ (27) where $A_{m}^{c}\equiv A_{m}-lI_{r}$ and $l$ is a scalar design parameter. Defining the error $e_{1}(k)=\xi(k)-\xi_{m1}(k)$, and using (26) and (10), the error dynamics is obtained as $\displaystyle{e}_{1}(k+1)$ $\displaystyle=A_{m}e_{1}(k)+B\Lambda\tilde{\theta}_{v}^{T}(k)v(k)-le_{1}(k)$ $\displaystyle=\bar{A}_{m}e_{1}(k)+B\Lambda\tilde{\theta}_{v}^{T}(k)v(k),$ (28) where $\bar{A}_{m}\equiv A_{m}-lI_{r}$. It is noted that the design parameter $l$ needs to be chosen such that both $A_{m}^{c}$ in (27) and $\bar{A}_{m}$ in (28) satisfy Assumption 1.

###### Theorem 3

Consider the error dynamics (28), with the update law (13). The adaptive parameter $\theta_{v}(k)$, the error signal $e_{1}(k)$ and all other signals remain bounded, and $\lim_{k\rightarrow\infty}e_{1}(k)=0$.

###### Proof 3

The proof is similar to that of Theorem 2 and is therefore omitted for brevity.$\blacksquare$

## 5 CONTROLLER DESIGN

In this section, a discrete Linear Quadratic Regulator (LQR) is designed to generate the total control input for reference tracking. Consider the discrete dynamical system given in (3), together with the output vector $y\in\mathbb{R}^{r}$, as $\displaystyle x(k+1)$ $\displaystyle=Ax(k)+B_{v}v(k),$ $\displaystyle y(k)$ $\displaystyle=Cx(k),$ (29) where $C\in\mathbb{R}^{r\times n}$ is the output matrix.
In order to design a controller for reference tracking, we define a new state as $\displaystyle x_{new}(k+1)=x_{new}(k)+\Delta t(ref(k)-y(k)),$ (30) where $x_{new}\in\mathbb{R}^{r}$ is created by integrating the tracking error, $ref\in\mathbb{R}^{r}$ is the reference input and $\Delta t\in\mathbb{R}^{+}$ is the sampling interval. Augmenting (30) with (29), we have $\displaystyle\begin{bmatrix}x(k+1)\\ x_{new}(k+1)\end{bmatrix}$ $\displaystyle=\begin{bmatrix}A&0_{n\times r}\\ -\Delta tC&I_{r}\end{bmatrix}\begin{bmatrix}x(k)\\ x_{new}(k)\end{bmatrix}+\begin{bmatrix}B_{v}\\ 0_{r\times r}\end{bmatrix}v(k)+\begin{bmatrix}0_{n\times r}\\ \Delta tI_{r}\end{bmatrix}ref(k).$ (31) Defining $z(k)\equiv\left[x^{T}(k)\ x_{new}^{T}(k)\right]^{T}\in\mathbb{R}^{n+r}$ as the state vector of the augmented system, the cost function of the LQR problem becomes $\displaystyle J=\sum_{k=0}^{\infty}\left(z^{T}(k)Qz(k)+v^{T}(k)Rv(k)\right),$ (32) where $Q\in\mathbb{R}^{(n+r)\times(n+r)}$ is a positive semi-definite matrix and $R\in\mathbb{R}^{r\times r}$ is a positive definite matrix. Solving the standard LQR problem for discrete systems with the cost (32) yields the optimal gain matrix $K\in\mathbb{R}^{r\times(n+r)}$, and the control signal is obtained as $v(k)=-Kz(k)$. The structure of the controller is shown in Figure 1.

Figure 1: Block diagram of the closed loop system.

## 6 SIMULATION RESULTS

### 6.1 ADMIRE model

The ADMIRE model is an over-actuated aircraft model introduced in [39] and [36]. We use a version of this model that is linearized at Mach 0.22 and an altitude of 3 km.
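The augmented LQR design of Section 5 can be sketched numerically; the fixed-point Riccati iteration below is one simple way to obtain $K$, demonstrated on a toy single-state plant of my own choosing (not the ADMIRE model):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # K, so that v = -K z

# Toy 1-state plant augmented with the integral-of-tracking-error state of Eq. (30)
A, B_v, C, dt = np.array([[0.9]]), np.array([[1.0]]), np.array([[1.0]]), 0.1
A_aug = np.block([[A, np.zeros((1, 1))], [-dt * C, np.eye(1)]])
B_aug = np.vstack([B_v, np.zeros((1, 1))])
K = dlqr(A_aug, B_aug, np.eye(2), np.eye(1))
rho = np.max(np.abs(np.linalg.eigvals(A_aug - B_aug @ K)))
print(rho < 1.0)  # the augmented closed loop is stable
```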
Considering the actuator loss of effectiveness matrix $\Lambda$, and discretizing the continuous model using a $0.1$ s sampling interval, the discrete-time dynamics is obtained as $\displaystyle x(k+1)$ $\displaystyle=Ax(k)+B_{u}u(k)$ $\displaystyle=Ax(k)+B_{v}Bu(k)$ $\displaystyle=Ax(k)+B_{v}v(k),$ (33) where $v=Bu$ is the control input and $x=[\alpha\ \beta\ p\ q\ r]^{T}$ is the state vector, with $\alpha,\beta,p,q$ and $r$ denoting the angle of attack, sideslip angle, roll rate, pitch rate and yaw rate, respectively. The vector $u=[u_{c}\ u_{re}\ u_{le}\ u_{r}]^{T}$ represents the control surface deflections of the canard wings, the right and left elevons, and the rudder. The state and control matrices are given as $\displaystyle A=\begin{bmatrix}1.0214&0.0054&0.0003&0.4176&-0.0013\\ 0&0.6307&0.0821&0&-0.3792\\ 0&-3.4485&0.3979&0&1.1569\\ 1.1199&0.0024&0.0001&1.0374&-0.0003\\ 0&0.3802&-0.0156&0&0.8062\end{bmatrix},$ (34) $\displaystyle B_{u}=\begin{bmatrix}0.1823&-0.1798&-0.1795&0.0008\\ 0&-0.0639&0.0639&0.1396\\ 0&-1.584&1.584&0.2937\\ 0.8075&-0.6456&-0.6456&0.0013\\ 0&-0.1005&0.1005&-0.4113\end{bmatrix}.$ (35) The uncertainty, modeled as actuator loss of effectiveness, occurs at $t=100$ s and reduces the actuator effectiveness by $30\%$. It is noted that for the design of the control allocator, the first two rows of the matrix $B_{u}$ are taken to be zero, which makes the control surfaces pure moment generators [18]. However, the original $B_{u}$ matrix given in (35) is used for the plant dynamics in the simulations.

### 6.2 Design parameters

The design parameters of the controller are the matrices $R$ and $Q$, which are selected as $R=\text{diag}([1,\ 1,\ 0.1])$ and $Q=I_{8}$. The control allocation design parameters are $\Gamma$ and $A_{m}$, which are chosen as $\Gamma=\text{diag}([1,\ 1,\ 0.1])$ and $A_{m}=\text{diag}([0.5,\ 0.5,\ 0.5])$.
To improve the transient response, the design parameter of the closed loop reference model approach is chosen as $l=0.1$.

Figure 2: System states and reference tracking. Figure 3: Control surface deflections. Figure 4: Control input signal, $v$, tracking. Figure 5: Adaptive parameters.

### 6.3 Simulation results

Figure 2 illustrates the aircraft states together with the three references. It is seen that the first two states ($\alpha$ and $\beta$) remain bounded while the other three states ($p$, $q$ and $r$) track their references. The effect of the actuator effectiveness uncertainty, which is introduced at $t=100$ s, can also be observed in this figure. Figure 3 shows the time evolution of the control surfaces, where no excessive deflections are observed. It is seen in Figure 4 that the total control signals $v_{i},\ i=1,2,3$, are realized by the control allocator. The figure shows that $B\Lambda u$ converges to $v$, which implies that the control allocation error converges to zero. The time evolution of the adaptive parameters is shown in Figure 5. The elements of $\theta_{v}$ remain bounded throughout the simulation and eventually converge to constant values.

## 7 Summary

A discrete adaptive control allocation method is proposed in this paper. The method distributes the total control signal of a sampled-data system among redundant actuators in the presence of actuator effectiveness uncertainty. The proposed control allocation method does not require uncertainty estimation or persistency of excitation. Simulation results demonstrate the effectiveness of the method.

## References

* [1] Seyed Shahabaldin Tohidi, Yildiray Yildiz, and Ilya Kolmanovsky. Adaptive control allocation for constrained systems. Automatica, 121:109161, 2020. * [2] Seyed Shahabaldin Tohidi, Ali Khaki Sedigh, and David Buzorgnia. Fault tolerant control design using adaptive control allocation based on the pseudo inverse along the null space.
International Journal of Robust and Nonlinear Control, 26(16):3541–3557, 2016. * [3] Guillaume JJ Ducard. Fault-tolerant flight control and guidance systems: Practical methods for small unmanned aerial vehicles. Springer Science & Business Media, 2009. * [4] Iman Sadeghzadeh, Abbas Chamseddine, Youmin Zhang, and Didier Theilliol. Control allocation and re-allocation for a modified quadrotor helicopter against actuator faults. IFAC Proceedings Volumes, 45(20):247–252, 2012. * [5] Qiang Shen, Danwei Wang, Senqiang Zhu, and Eng Kee Poh. Inertia-free fault-tolerant spacecraft attitude tracking using control allocation. Automatica, 62:114–121, 2015. * [6] Qiang Shen, Danwei Wang, Senqiang Zhu, and Eng Kee Poh. Robust control allocation for spacecraft attitude tracking under actuator faults. IEEE Transactions on Control Systems Technology, 25(3):1068–1075, 2017. * [7] Diana M Acosta, Yildiray Yildiz, Robert W Craun, Steven D Beard, Michael W Leonard, Gordon H Hardy, and Michael Weinstein. Piloted evaluation of a control allocation technique to recover from pilot-induced oscillations. Journal of Aircraft, 52(1):130–140, 2014. * [8] Yildiray Yildiz and Ilya Kolmanovsky. Stability properties and cross-coupling performance of the control allocation scheme CAPIO. Journal of Guidance, Control, and Dynamics, 34(4):1190–1196, 2011. * [9] Seyed Shahabaldin Tohidi, Yildiray Yildiz, and Ilya Kolmanovsky. Pilot induced oscillation mitigation for unmanned aircraft systems: An adaptive control allocation approach. In IEEE Conference on Control Technology and Applications, pages 343–348, 2018. * [10] Mou Chen, Shuzhi Sam Ge, Bernard Voon Ee How, and Yoo Sang Choo. Robust adaptive position mooring control for marine vessels. IEEE Transactions on Control Systems Technology, 21(2):395–409, 2013. * [11] Tor A Johansen, Thomas P Fuglseth, Petter Tøndel, and Thor I Fossen. Optimal constrained control allocation in marine surface vessels with rudders. 
Control Engineering Practice, 16(4):457–464, 2008. * [12] Johannes Tjønnås and Tor A Johansen. Stabilization of automotive vehicles using active steering and adaptive brake control allocation. IEEE Transactions on Control Systems Technology, 18(3):545–558, 2010. * [13] Ozan Temiz, Melih Cakmakci, and Yildiray Yildiz. A fault tolerant vehicle stability control using adaptive control allocation. In Dynamic Systems and Control Conference, volume 51890, page V001T09A002. American Society of Mechanical Engineers, 2018. * [14] Ozan Temiz, Melih Cakmakci, and Yildiray Yildiz. A fault-tolerant integrated vehicle stability control using adaptive control allocation. arXiv preprint arXiv:2008.05697, 2020. * [15] Seyed Shahabaldin Tohidi and Ali Khaki Sedigh. Adaptive fault tolerance in automotive vehicle using control allocation based on the pseudo inverse along the null space for yaw stabilization. In The 3rd International Conference on Control, Instrumentation, and Automation, pages 174–179. IEEE, 2013. * [16] Halim Alwi and Christopher Edwards. Fault tolerant control using sliding modes with on-line control allocation. Automatica, 44(7):1859–1866, 2008. * [17] Marc Bodson. Evaluation of optimization methods for control allocation. Journal of Guidance, Control, and Dynamics, 25(4):703–711, 2002. * [18] Ola Härkegård and S. Torkel Glad. Resolving actuator redundancy-optimal control vs. control allocation. Automatica, 41(1):137–144, 2005. * [19] Yildiray Yildiz and Ilya V Kolmanovsky. A control allocation technique to recover from pilot-induced oscillations (CAPIO) due to actuator rate limiting. In American Control Conference, pages 516–523, 2010. * [20] Yildiray Yildiz, Ilya V Kolmanovsky, and Diana Acosta. A control allocation system for automatic detection and compensation of phase shift due to actuator rate limiting. In American Control Conference, pages 444–449, 2011. * [21] Yuanchao Yang and Zichen Gao. 
A new method for control allocation of aircraft flight control system. IEEE Transactions on Automatic Control, 65(4):1413–1428, 2020. * [22] Guillermo P Falconí and Florian Holzapfel. Adaptive fault tolerant control allocation for a hexacopter system. In American Control Conference (ACC), 2016, pages 6760–6766. IEEE, 2016. * [23] Sergio Galeani and Mario Sassano. Data-driven dynamic control allocation for uncertain redundant plants. In IEEE Conference on Decision and Control, pages 5494–5499, 2018. * [24] Seyed Shahabaldin Tohidi, Yildiray Yildiz, and Ilya Kolmanovsky. Adaptive control allocation for over-actuated systems with actuator saturation. volume 50, pages 5492–5497. Elsevier, 2017. * [25] Seyed Shahabaldin Tohidi and Yildiray Yildiz. Handling actuator magnitude and rate saturation in uncertain over-actuated systems: A modified projection algorithm approach. International Journal of Control, pages 1–24, 2020. * [26] Tor A Johansen and Thor I Fossen. Control allocation—a survey. Automatica, 49(5):1087–1103, 2013. * [27] Alessandro Casavola and Emanuele Garone. Fault-tolerant adaptive control allocation schemes for overactuated systems. International Journal of Robust and Nonlinear Control, 20(17):1958–1980, 2010. * [28] Johannes Tjønnås and Tor A Johansen. Adaptive control allocation. Automatica, 44(11):2754–2765, 2008. * [29] Seyed Shahabaldin Tohidi, Yildiray Yildiz, and Ilya Kolmanovsky. Fault tolerant control for over-actuated systems: An adaptive correction approach. In American Control Conference, 2016, pages 2530–2535, 2016. * [30] Mario A Santillo and Dennis S Bernstein. Adaptive control based on retrospective cost optimization. Journal of Guidance, Control, and Dynamics, 33(2):289–304, 2010. * [31] K Merve Dogan, Tansel Yucelen, Wassim M Haddad, and Jonathan A Muse. Improving transient performance of discrete-time model reference adaptive control architectures. International Journal of Adaptive Control and Signal Processing, 2020. 
* [32] Travis E Gibson, Zheng Qu, Anuradha M Annaswamy, and Eugene Lavretsky. Adaptive output feedback based on closed-loop reference models. IEEE Transactions on Automatic Control, 60(10):2728–2733, 2015. * [33] Anil Alan, Yildiray Yildiz, and Umit Poyraz. High-performance adaptive pressure control in the presence of time delays: Pressure control for use in variable-thrust rocket development. IEEE Control Systems Magazine, 38(5):26–52, 2018. * [34] Petros Ioannou and Bariş Fidan. Adaptive control tutorial. SIAM, 2006. * [35] Gang Tao. Adaptive control design and analysis, volume 37. John Wiley & Sons, 2003. * [36] Ola Härkegård. Backstepping and control allocation with applications to flight control. PhD thesis, Linköpings universitet, 2003. * [37] Ali Kutay, John Culp, Jonathan Muse, Daniel Brzozowski, Ari Glezer, and Anthony Calise. A closed-loop flight control experiment using active flow control actuators. In 45th AIAA Aerospace Sciences Meeting and Exhibit, page 114, 2007. * [38] S Sieberling, QP Chu, and JA Mulder. Robust flight control using incremental nonlinear dynamic inversion and angular acceleration prediction. Journal of Guidance, Control, and Dynamics, 33(6):1732–1742, 2010. * [39] Declan Bates and Martin Hagström. Nonlinear analysis and synthesis techniques for aircraft control, volume 365. Springer, 2007.
$^{a}$Physical Research Laboratory, Ahmedabad - 380009, Gujarat, India; $^{b}$Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur, 741246, India; $^{c}$Regional Centre for Accelerator-based Particle Physics, Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhusi, Prayagraj 211019, India # Probing non-standard $b\bar{b}h$ interaction at the LHC at $\sqrt{s}=13$ TeV Partha Konar$^{a}$, Biswarup Mukhopadhyaya$^{b}$, Rafiqul Rahaman$^{c}$, and Ritesh K. Singh$^{b}$ <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract In the detailed probe of Higgs boson properties at the Large Hadron Collider, and in looking for new physics signatures in the electroweak symmetry breaking sector, the bottom quark Yukawa coupling has a crucial role. We investigate possible departure from the standard model value of the $b\bar{b}h$ coupling, phenomenologically expressed in terms of a modification factor $\alpha_{b}$, in $b\bar{b}$-associated production of the $125$-GeV scalar at the high-luminosity LHC. In a next-to-leading order estimate, we make use of a gradient boosting algorithm to improve the statistical significance over a cut-based analysis. It is found possible to probe down to $\alpha_{b}=3$ with more than $5~{}\sigma$ significance, with ${\cal L}=3000$ fb-1 and $\sqrt{s}$ = 13 TeV, while the achievable limit at $95\%$ C.L. is $\pm 1.95$. ###### Keywords: Higgs coupling to $b$, Large Hadron Collider, gradient boosting ## 1 Introduction Whether the 125 GeV scalar discovered in 2012 Chatrchyan:2012xdj ; Aad:2012tfa is ‘the Higgs’ or ‘a Higgs’ is still an unresolved issue. Most importantly, its interaction strengths with relatively heavy fermions are not yet known precisely enough, in contrast to the interaction with gauge boson pairs, where the uncertainty is much smaller Aad:2019mbh . 
For example, the signal strength defined as $\mu_{b}=\frac{\sigma(b\bar{b})}{\sigma(b\bar{b})_{SM}}$, where the denominator corresponds to the rate predicted by the standard model (SM), lies in the range $0.84$ – $1.24$ Sirunyan:2018kst . Thus there is considerable scope for variation with respect to the standard model value. Here, we propose one way of reducing this uncertainty, by taking a fresh look at $h$-production associated with $b\bar{b}$ at the high-luminosity Large Hadron Collider (HL-LHC). The $b\bar{b}$-associated production of the Higgs has already been studied Balazs:1998nt ; Harlander:2003ai ; Dittmaier:2003ej ; Dawson:2003kb ; Campbell:2004pu ; Dawson:2005vi ; Wiesemann:2014ioa ; Forte:2015hba ; Jager:2015hka ; Bonvini:2016fgf ; Deutschmann:2018avk , and the conclusion is that the rates are too small to make any difference, as far as the SM interaction is concerned. However, the rather large error-bar keeps alive the possibility of enhancement in the presence of new physics. This is reflected, for example, in two Higgs doublet models (2HDM), where regions in the parameter space with a large $b$-coupling of the $125$ GeV scalar are still consistent with all experiments Fontes:2015mea . It is important, therefore, to look for clear signatures of such enhancement as the stamp of new physics. Taking a model-independent standpoint, let us parametrize the modification factor for the $b\bar{b}h$ interaction strength by $\alpha_{b}$, treated here as real, as $\alpha_{b}=\frac{y_{b}}{(y_{b})_{\text{SM}}}.$ (1) Here $(y_{b})_{\text{SM}}=\sqrt{2}m_{b}/v$ is the SM bottom Yukawa coupling ($v=246$ GeV is the vacuum expectation value), and $y_{b}$ is the bottom Yukawa coupling in a new physics model. The analysis of Higgs-$p_{T}$ data with $\int{\cal{L}}\,dt=35.9~\text{fb}^{-1}$ Sirunyan:2018sgc already restricts $\alpha_{b}$ and $\alpha_{c}$ (its analogue for the charm) as $-1.1\leq\alpha_{b}\leq+1.1,~{}-4.9\leq\alpha_{c}\leq+4.8$ at $95\%$ C.L. 
However, no other non-standard Higgs interaction is allowed in such an analysis, and thus the contributions to $h\rightarrow ZZ,\gamma\gamma$ bring in stringent constraints. In the absence of this restrictive assumption, allowing for ‘nuisance parameters’ relaxes the corresponding ranges to $\left[-8.5,18.0\right]$ for $\alpha_{b}$ and $\left[-33.0,38.0\right]$ for $\alpha_{c}$. A more recent study Cepeda:2019klc in the context of the high-luminosity LHC, running up to 3000 $fb^{-1}$, yields the projected limits $-2.0\lesssim\alpha_{b}\leq+4.0,~{}-10.0\leq\alpha_{c}\leq+10.0$ at $95\%$ C.L., when other non-standard interactions are not forbidden and the branching ratios in the $ZZ$ and $\gamma\gamma$ channels are not used as prior constraints. We show here that $\alpha_{b}$ can be pinned down to an even narrower range by considering $pp\rightarrow b\bar{b}h$ in the high-luminosity run. In this channel, significant enhancement takes place at the production level itself for large $\alpha_{b}$. This is advantageous, since the level of enhancement does not saturate with increasing $\alpha_{b}$, unlike the effect on the branching ratio in the $b\bar{b}$ channel when the anomalous $b\bar{b}h$ coupling shows up in decays alone. The resulting signal, where one looks for four $b$-jets with two of them close to the $h$-peak, is enhanced substantially for $\alpha_{b}\rightarrow 3.0$. However, it is also plagued by backgrounds, including four $b$-jets from QCD, $b\bar{b}Z$ production, and also QCD production of $2b2c$, with two $c$-quark jets faking $b$’s. The backgrounds receive larger next-to-leading order (NLO) QCD corrections than the signal, making the signal significance smaller at NLO than the leading-order (LO) values. Our analysis reveals how the resulting loss in signal significance due to the NLO QCD effects can be ameliorated by adopting an algorithm based on Boosted Decision Trees (BDT), in particular the gradient boosting technique. 
In section 2, we provide an outline of the framework to operate within, with $\alpha_{b}$ (and $\alpha_{c}$) taken as purely phenomenological parameters, with no bar prima facie on other non-standard interactions. We also discuss the signal and all major irreducible background processes involved in the present analysis. Sections 3 and 4 report on the cut-based and BDT-based machine learning analyses, respectively. We summarise and conclude in section 5. ## 2 The parametrisation of anomalous couplings Figure 1: Representative Feynman diagram for $b\bar{b}h$ production at the LHC at leading order (LO). The non-standard $b\bar{b}h$ coupling is shown by the blobs. The Higgs decays further through $h\to b\bar{b}$; the blob at this decay vertex is not shown in the diagrams. Figure 2: The enhancement factor of the signal cross section over the SM, as defined in Eq. (3), shown as a function of the modification factor $\alpha_{b}$ of the $b\bar{b}h$ interaction strength. We are interested in $b\bar{b}$-associated Higgs production followed by the Higgs decaying to a pair of $b$'s at the LHC, thus resulting in a $4b$ final state. The representative Feynman diagrams for $b\bar{b}h$ production at the LHC are shown in Figure 1. The $h\to b\bar{b}$ decay is not shown in the diagrams. $\alpha_{b}$ also appears in the $h\to b\bar{b}$ decay vertex, apart from the production process. The total cross section of the signal for a given $\alpha_{b}$ is then $\sigma_{b\bar{b}h\to 4b}(\alpha_{b})=\alpha_{b}^{2}\,\sigma_{b\bar{b}h}(SM)\times\dfrac{\alpha_{b}^{2}\,\Gamma(h\to b\bar{b})}{\Gamma_{h}(\alpha_{b})}$ (2) enhancing the SM cross section by a factor of $k_{\alpha_{b}}=\dfrac{\sigma_{b\bar{b}h\to 4b}(\alpha_{b})}{\sigma_{b\bar{b}h\to 4b}(SM)}=\alpha_{b}^{2}\,\dfrac{\alpha_{b}^{2}\Gamma_{h}(SM)}{\Gamma_{h}(\alpha_{b})}.$ (3) Figure 3: Representative Feynman diagram for the topologically very similar background process $b\bar{b}Z$ at the LHC at LO. 
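Under the assumption that $\alpha_{b}$ rescales only the $h\to b\bar{b}$ partial width, the total width becomes $\Gamma_{h}(\alpha_{b})=\Gamma_{h}(SM)\left[1-B+\alpha_{b}^{2}B\right]$ with $B\equiv B(h\to b\bar{b})\approx 0.60$, so that Eq. (3) reduces to $k_{\alpha_{b}}=\alpha_{b}^{4}/(1-B+\alpha_{b}^{2}B)$. A minimal sketch under this assumption:

```python
def k_enhancement(alpha_b, br_bb=0.60):
    # Eq. (3), assuming Gamma_h(alpha_b) = Gamma_h(SM) * (1 - B + alpha_b**2 * B),
    # i.e. that only the h -> b bbar partial width is rescaled by alpha_b**2.
    return alpha_b**4 / (1.0 - br_bb + alpha_b**2 * br_bb)

# Production-only enhancement (solid line in Figure 2) is simply alpha_b**2;
# with the decay included, alpha_b = 3 gives roughly a 14-fold enhancement.
print(round(k_enhancement(3.0), 1))  # -> 14.0
```

Note that $k_{\alpha_{b}}\to\alpha_{b}^{2}/B$ for large $\alpha_{b}$, i.e. the growth never saturates, unlike the branching ratio itself.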
The enhancement factor for the signal cross section over the SM is shown in Figure 2 as a function of $\alpha_{b}$, for $B(h\to b\bar{b})\approx 60\%$ Tanabashi:2018oca . The solid/blue line represents the factor when the new physics effect is accounted for only in the production part; the dashed/green line represents the enhancement factor when the new physics is accounted for in both production and decay.

Process | LO cross section (pb)
---|---
$bbh\to 4b$ ($\alpha_{b}=1$) | $5.386_{-21.6\%}^{+29.5\%}(\text{scale})\pm 12.4\%(\text{PDF})\times 10^{-3}$
$bbZ\to 4b$ | $2.082_{-20.1\%}^{+26.9\%}(\text{scale})\pm 11.5\%(\text{PDF})$
QCD-$4b$ | $118.194_{-35.5\%}^{+60.9\%}(\text{scale})\pm 12.4\%(\text{PDF})$
QCD-$2b2c$ | $636.098_{-35.4\%}^{+60.6\%}(\text{scale})\pm 12.6\%(\text{PDF})$
$hZ\to 4b$ | $0.01764_{-5.8\%}^{+4.7\%}(\text{scale})\pm 6.1\%(\text{PDF})$

Table 1: Parton-level cross sections of the signal and the major background processes at leading order (LO), after applying the generator-level event selection cuts $p_{T}(b)>30$ GeV, $\Delta R(b,b)>0.4$, $|\eta_{b}|<2.5$, using the scale choice $\mu_{R}=\mu_{F}=\mu_{0}=(2m_{b}+m_{h})/2$.

The major backgrounds to the $4b$ final state come from QCD $4b$-jets, QCD $2b2c$ with the $c$-quarks faking $b$-jets, $b\bar{b}Z$ production, and $hZ$ production. We ignore the QCD $4j$ process, as the probability of four light jets faking four $b$-jets is negligibly small. The $b\bar{b}Z$ background, whose representative Feynman diagram is shown in Figure 3, has the same topology as the signal $b\bar{b}h$ and is thus expected to be irreducible. The QCD and $hZ$ backgrounds, however, are expected to be reducible, having a different topology from the signal. 
The leading order cross sections for the signal and the backgrounds for the $4b$ final state, estimated with the MadGraph5_aMC@NLO v2.6.4 (mg5_aMC) Alwall:2014hca package with generator-level cuts of $p_{T}(b)>30$ GeV, $\Delta R(b,b)>0.4$, and $|\eta_{b}|<2.5$, are presented in Table 1. We use a fixed renormalization ($\mu_{R}$) and factorization ($\mu_{F}$) scale of $\mu_{R}=\mu_{F}=\mu_{0}=(2m_{b}+m_{h})/2$ for the signal as well as for the backgrounds, motivated by the $b\bar{b}h$ production topology. The scale uncertainties, shown in Table 1, are estimated by varying $\mu_{R}$ and $\mu_{F}$ in the range $0.5\mu_{0}\leq\mu_{R},\mu_{F}\leq 2\mu_{0}$, with the constraint $0.5\leq\mu_{R}/\mu_{F}\leq 2$. We use the NNPDF3.0 Ball:2014uwa sets of parton distribution functions (PDFs) with $\alpha_{s}(m_{Z})=0.118$ for our calculations. A branching ratio of $60\%$ is used for the $h\to b\bar{b}$ decay Tanabashi:2018oca with $m_{h}=125$ GeV. ## 3 Cut-based analysis We generated events for the signal and the backgrounds in mg5_aMC at LO and NLO with the chosen renormalization and factorization scales. The QCD-$2b2c$ background, however, is generated only at LO, and it is used for the NLO analysis with a $k$-factor of $1.4$ taken from the QCD-$4b$ background. The showering and hadronization of the events are performed with PYTHIA8 Sjostrand:2014zea , followed by detector simulation with Delphes-3.4.2 deFavereau:2013fsa . We estimated the expected number of events with four $b$-tagged jets for the signal with $\alpha_{b}=3$ and for the backgrounds after detector simulation, at an integrated luminosity of ${\cal L}=3000$ fb-1, for the following two kinematical regions: Event selection (cut1) $\displaystyle:~{}~{}p_{T}(b)>20~{}\text{GeV},~{}\Delta R(b,b)>0.5,~{}|\eta_{b}|<2.5,$ (4) Event selection (cut2) $\displaystyle:~{}~{}\textsf{cut1}+\text{ at least one }m_{bb}\in[100,150]~{}\text{GeV}$ (5) and present them in Table 2 for $\mu_{R}=\mu_{F}=\mu_{0}=(2m_{b}+m_{h})/2$. 
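The scale envelopes quoted in Table 1 follow the variation just described, which amounts to the standard 7-point prescription; as a small sketch, the admissible $(\mu_{R},\mu_{F})$ combinations can be enumerated as:

```python
from itertools import product

def seven_point_scales():
    # All (mu_R/mu_0, mu_F/mu_0) pairs with each factor in {0.5, 1, 2},
    # subject to the constraint 0.5 <= mu_R/mu_F <= 2; the two pairs with
    # ratio 1/4 or 4 are dropped, leaving 7 of the 9 combinations.
    return [(r, f) for r, f in product([0.5, 1.0, 2.0], repeat=2)
            if 0.5 <= r / f <= 2.0]

scales = seven_point_scales()
```

The uncertainty band is then the envelope of the cross sections evaluated at these seven scale points around the central choice $\mu_{0}$.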
 | No. of events @ LO | No. of events @ NLO
---|---|---
Signal & background process | cut1 | cut2 | cut1 | cut2
$S$: $bbh\to 4b$ | $33511$ | $30867$ | $38895$ | $34946.8$
$B_{1}$: $bbZ\to 4b$ | $846715$ | $682871$ | $1.67229\times 10^{6}$ | $1.33163\times 10^{6}$
$B_{2}$: QCD-$4b$ | $4.24088\times 10^{7}$ | $3.36035\times 10^{7}$ | $6.81198\times 10^{7}$ | $5.07642\times 10^{7}$
$B_{3}$: QCD-$2b2c$ | $1.53389\times 10^{7}$ | $1.1986\times 10^{7}$ | $2.15198\times 10^{7}$ | $1.68568\times 10^{7}$
$B_{4}$: $hZ\to 4b$ | $7817$ | $7168$ | $20177$ | $18244$
Significance ($\frac{S}{\sqrt{B}}$) | $4.38$ | $4.54$ | $4.07$ | $4.21$

Table 2: Expected number of events for the signal and the backgrounds, and the signal significance ($S/\sqrt{B}$), at ${\cal L}=3000$ fb-1 with $\alpha_{b}=3$, at LO and at NLO for $\mu_{R}=\mu_{F}=\mu_{0}=(2m_{b}+m_{h})/2$, for the two cut regions given in Eqs. (4) and (5).

For cut2, we select events with at least one $b$-pair with invariant mass in the range $[100,150]$ GeV, thus emulating a Higgs candidate. We calculate the signal significance, defined by $S/\sqrt{B}$ with $S$ the number of signal events and $B$ the total number of background events, for the two cut regions; they are shown in the last row of Table 2. A $b\bar{b}h$ signal with $\alpha_{b}=3$ can be observed with a significance of $4.54$ ($4.21$) at LO (NLO) in the cut2 region at an integrated luminosity of ${\cal L}=3000$ fb-1 for the renormalization and factorization scale $\mu_{R}=\mu_{F}=\mu_{0}=(2m_{b}+m_{h})/2$. The signal significance for other scale choices, namely $\mu_{R}=\mu_{F}=\mu_{0}/2,~{}2\mu_{0}$, is also shown in the next section for comparison. Since the QCD corrections to the signal are much smaller than those to the QCD backgrounds, while the shapes of the distributions of the variables are similar at LO and NLO, the signal significance is smaller at NLO than at LO. 
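The significances in the last row of Table 2 follow directly from the quoted event counts; for instance, using the cut2 columns:

```python
from math import sqrt

def significance(s, backgrounds):
    # S / sqrt(B), with B the sum of all background yields
    return s / sqrt(sum(backgrounds))

# cut2 yields from Table 2 (B1..B4 in order)
sig_nlo = significance(34946.8, [1.33163e6, 5.07642e7, 1.68568e7, 18244])
sig_lo = significance(30867, [682871, 3.36035e7, 1.1986e7, 7168])
print(round(sig_lo, 2), round(sig_nlo, 2))  # -> 4.54 4.21
```

The dominant background is QCD-$4b$, so the significance is driven almost entirely by that single yield.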
Beyond the cut2 selection, additional cuts on the $b$-jets, such as on $p_{T}$, $H_{T}$, $m_{4b}$ and $\cancel{E}_{T}$, do not improve the signal significance. These variables, however, in certain combinations, may improve the signal significance, which we explore with the gradient boosting technique in the next section. ## 4 Analysis based on the gradient boosting technique After estimating the maximally achievable signal significance with a simple cut-based analysis (cut2 in Eq. (5)), we further explore the possibility of improving the significance with a machine learning technique, namely Gradient Boosted Decision Trees (gradient BDT) Chen:2016btl , by employing various kinematical variables. We use the package XGBoost Chen:2016btl as a toolkit for the gradient boosting. We construct the following kinematical features as input for the gradient boosting: * • Transverse momentum of each of the four leading $b$-tagged jets $p_{T}(b_{i})$ (4 variables), * • Total invariant mass of all four leading $b$-tagged jets $m_{4b}$, and inclusive variables such as the missing transverse momentum $\cancel{E}_{T}$ and the global mass scale variable $H_{T}$ (3 variables), * • The set of all $b$-jet pair invariant masses $m_{b_{i}b_{j}}$ and $b$-pair transverse momenta $p_{T}(b_{i}b_{j})$ from all four leading $b$-tagged jets (12 variables), * • $\theta$-angle (measured w.r.t. 
the boost of the $4b$-system) and pseudo-rapidity of each $b$-tagged jet $\cos\theta(b_{i})$, $\eta_{b_{i}}$ (8 variables), * • Angular and azimuthal angle separations between all $b$-jet pairs $\Delta R(b_{i}b_{j})$, $\Delta\phi(b_{i}b_{j})$ (12 variables), * • Angular and azimuthal angle separations between the ‘invariant-mass based reconstructed’ Higgs candidate (composed of two $b$-tagged jets, labelled $b_{1}$ and $b_{2}$) and the other two $b$-tagged jet candidates $\Delta R(hb_{3})$, $\Delta R(hb_{4})$, $\Delta\phi(hb_{3})$, $\Delta\phi(hb_{4})$ (4 variables), * • Angular and azimuthal angle separations between the ‘angular-separation based reconstructed’ Higgs candidate (composed of two $b$-tagged jets, labelled $b_{1}^{\prime}$ and $b_{2}^{\prime}$) and the other two $b$-tagged jet candidates $\Delta R(h^{\prime}b_{3}^{\prime})$, $\Delta R(h^{\prime}b_{4}^{\prime})$, $\Delta R(b_{3}^{\prime}b_{4}^{\prime})$, $\Delta\phi(h^{\prime}b_{3}^{\prime})$, $\Delta\phi(h^{\prime}b_{4}^{\prime})$, $\Delta\phi(b_{3}^{\prime}b_{4}^{\prime})$ (6 variables). The number in parentheses at the end of each item gives the number of features in that item, for a total of $49$ features. The features are reconstructed as follows. The Higgs candidate ($h$) is reconstructed from the $b$-pair with invariant mass closest to $125$ GeV. These two $b$’s are labelled $b_{1}$ and $b_{2}$, ordered by their $p_{T}$. The other two $b$’s are labelled $b_{3}$ and $b_{4}$, also ordered by their $p_{T}$. The primed Higgs candidate ($h^{\prime}$), on the other hand, is reconstructed from the $b$-pair with the lowest $\Delta R$. The $b_{i}^{\prime}$ are labelled in a similar way as in the unprimed case. Figure 4: Normalised distributions of the BDT response (XGBoost score) for the signal and background processes, representing their relative separability, for LO (top-left) and NLO (top-right). The corresponding performance in terms of the ROC (receiver operating characteristic) curve is shown in the bottom-left panel. 
The bottom-right panel shows the significance enhancement factor as a function of the XGBoost score cut, for $\mu_{R}=\mu_{F}=\mu_{0}=(2m_{b}+m_{h})/2$. We use an equal number of signal and background events to classify them using the train module of XGBoost. The backgrounds are mixed in the ratio of their corresponding rates in the cut2 region, given in Table 2. We use $80\%$ of the total dataset for training and the remaining $20\%$ for testing. We first vary the XGBoost parameters to find the combination giving maximum accuracy in classifying the signal and the backgrounds. We obtain a maximum accuracy of $69.13\%\pm 0.44\%~{}(1\sigma)$ for LO events and $68.06\%\pm 0.23\%~{}(1\sigma)$ for NLO events at $\mu_{R}=\mu_{F}=\mu_{0}$ for the following combination of parameter values xgboost_param : * • Step size shrinkage: $\eta=0.1$, * • Maximum depth of a tree: max_depth = 50, * • Subsample ratio of the training instances: subsample = 0.9, * • Subsample ratio of columns when constructing each tree: colsample_bytree = 0.3, * • Minimum loss reduction required to make a further partition on a leaf node of the tree: $\gamma=1.0$, * • L2 regularization term on weights: $\lambda=50.0$, * • L1 regularization term on weights: $\alpha=1.0$, * • Number of parallel trees constructed during each iteration: num_parallel_tree = 8.

 | @LO | @NLO
---|---|---
Scale choice ($\mu_{R}=\mu_{F}=\mu_{0}$) | $\mu_{0}$ | $\mu_{0}/2$ | $\mu_{0}$ | $2\mu_{0}$
Cut-based significance ($\frac{S}{\sqrt{B}}$) | $4.54$ | $2.92$ | $4.21$ | $5.06$
XGBoost signal efficiency ($\epsilon_{S}$) | $67.7\%$ | $74.3\%$ | $70.8\%$ | $67.3\%$
XGBoost background rejection ($\bar{\epsilon}_{B}$) | $70.7\%$ | $63.1\%$ | $65.9\%$ | $68.5\%$
Enhancement factor ($\frac{\epsilon_{S}}{\sqrt{1-\bar{\epsilon}_{B}}}$) | $1.25$ | $1.22$ | $1.2$ | $1.2$
Maximum significance ($\frac{S}{\sqrt{B}}\times\frac{\epsilon_{S}}{\sqrt{1-\bar{\epsilon}_{B}}}$) | $5.67$ | $3.56$ | $5.05$ | $6.07$

Table 3: Comparison of signal efficiency $\epsilon_{S}$, background rejection $\bar{\epsilon}_{B}\equiv(1-\epsilon_{B})$, enhancement factor and significance between the leading order (LO) and the next-to-leading order (NLO) calculations. Both the cut-based and the XGBoost analysis results are shown at the luminosity ${\cal L}=3000$ fb-1 after the final event selection as in cut2, for a moderate value of the modification factor, $\alpha_{b}=3$. The effect of varying the renormalization and factorization scale choice at NLO is shown in the additional columns. Note that for the XGBoost analysis, results are given for the XGBoost score cut at which the maximum significance is achievable.

The final probability distributions of the BDT output (XGBoost score) for the signal and the total background are shown in the top row of Figure 4, demonstrating their separability at LO (left) and NLO (right). The signal efficiency versus background rejection curves for LO and NLO are shown in the bottom-left panel of Figure 4. The larger the area under such a curve, the better the separability between signal and background. Compared to LO, the NLO events are less distinguishable, which reduces the background rejection. The XGBoost score cut is varied to obtain the signal efficiency $\epsilon_{S}$ and background rejection $\bar{\epsilon}_{B}\equiv(1-\epsilon_{B})$ that maximize the factor by which the significance can be improved. 
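This maximization over the score cut can be sketched generically (assuming only arrays of classifier scores in $[0,1]$ for signal and background events, not tied to XGBoost itself):

```python
import numpy as np

def best_enhancement(scores_sig, scores_bkg, n_cuts=101):
    # Scan candidate score cuts; for each, compute the signal efficiency
    # eps_S and background efficiency eps_B of the cut, and keep the cut
    # maximizing eps_S / sqrt(eps_B), the factor by which S/sqrt(B) changes.
    best_f, best_cut = 0.0, None
    for cut in np.linspace(0.0, 1.0, n_cuts):
        eps_s = float(np.mean(scores_sig >= cut))  # signal efficiency
        eps_b = float(np.mean(scores_bkg >= cut))  # background efficiency
        if eps_b == 0.0:
            continue  # no background survives; factor undefined
        f = eps_s / np.sqrt(eps_b)
        if f > best_f:
            best_f, best_cut = f, cut
    return best_f, best_cut
```

As a cross-check against Table 3 (LO at $\mu_{0}$): $\epsilon_{S}=67.7\%$ with background rejection $\bar{\epsilon}_{B}=70.7\%$ gives $0.677/\sqrt{0.293}\approx 1.25$, the quoted enhancement factor.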
The significance enhancement factor ($\frac{\epsilon_{S}}{\sqrt{1-\bar{\epsilon}_{B}}}$) as a function of the XGBoost score cut is shown in the bottom-right panel of Figure 4. For an XGBoost score cut of $0.52$ ($0.50$), $67.7\%$ ($70.8\%$) of the signal remains while a total of $70.7\%$ ($65.9\%$) of the background is rejected for LO (NLO) events, thus enhancing the signal significance by a maximum factor of $1.25$ ($1.2$). The total significance after the BDT analysis therefore becomes $4.54\times 1.25=5.67$ ($4.21\times 1.2\approx 5.05$) at LO (NLO). The combined cut-based and XGBoost results are shown in the second and fourth columns of Table 3 for LO and NLO, respectively. Table 3 also contains results for the renormalization and factorization scale choices $\mu_{0}/2$ and $2\mu_{0}$ at NLO; the reason is discussed below. Figure 5: Achievable total significance as a function of the modification factor $\alpha_{b}$ of the $hb\bar{b}$ interaction strength, at LO and NLO in the left panel, with renormalization and factorization scales at $\mu_{0}=(2m_{b}+m_{h})/2$, for ${\cal L}=3000$ fb-1. The right panel shows the scale variation of this significance at NLO, considering the two extreme cases of changing both scales by a factor of half and a factor of two. The QCD correction strengths and the shapes of the distributions of the kinematical variables change as the renormalization and factorization scales are changed, for the signal as well as for the backgrounds. As a result, our results in the cut-based as well as the BDT-based analysis are expected to differ for different $\mu_{R}$ and $\mu_{F}$. We therefore repeat all analyses for the two extreme cases $\mu_{R}=\mu_{F}=\mu_{0}/2$ and $2\mu_{0}$, in addition to $\mu_{0}$. The results are shown in Table 3 for $\mu_{R}=\mu_{F}=\mu_{0}/2,~{}\mu_{0},~{}2\mu_{0}$; the results for $\mu_{0}$ are repeated for comparison. 
The QCD correction strength increases for the signal as the scale choices are doubled to $2\mu_{0}$, while it decreases as the scale choices are reduced to $\mu_{0}/2$. However, the QCD corrections remain roughly the same for the backgrounds, especially for the dominant $4b$ QCD background. As a result, as the scale choices are doubled, the signal significance in the cut-based analysis improves by $25\%$, but it decreases when the scale choices are halved. In the XGBoost result, the significance enhancement factor, however, increases slightly, due to the increase in signal efficiency for the lower scale choice. So far we have shown the results for $\alpha_{b}=3$, i.e., for a fixed value of the new physics parameter. The total signal significance, including the cut-based selection and XGBoost, is computed for varying $\alpha_{b}$; it is shown in the left panel of Figure 5 for $\mu_{R}=\mu_{F}=\mu_{0}$ at LO and NLO, at ${\cal L}=3000$ fb-1. The right panel of Figure 5 compares the signal significance for three different scale choices, namely $\mu_{0}/2$, $\mu_{0}$, and $2\mu_{0}$, at NLO for the same luminosity ${\cal L}=3000$ fb-1. The limit on $\alpha_{b}$ is found to be $\pm 1.95$ at $95\%$ C.L. at NLO for $\mu_{R}=\mu_{F}=\mu_{0}$; see the left panel of Figure 5. The limit on $\alpha_{b}$ is tighter for higher scale choices and weaker for lower scale choices, as can be seen in the right panel. It appears that the QCD corrections to the backgrounds are always larger than those to the signal over the range of renormalization and factorization scales considered, making the signal significance smaller at NLO than at LO for both the cut-based and the XGBoost analyses. A $5\sigma$ discovery significance is achievable for a moderate value of $\alpha_{b}=3$ at a projected luminosity of $3000$ fb-1 at the LHC. ## 5 Summary and conclusions While the LHC is emphatically looking for any indication of elusive new physics, hints may already be hidden in the Higgs data. 
Precision measurements of the Higgs couplings to third-generation quarks are thus crucial as indirect probes of physics beyond the standard model. In the present work, we probe the non-standard $hb\bar{b}$ coupling, parametrized from a model-independent standpoint, in $b\bar{b}$-associated production of the Higgs. We point out the importance and effectiveness of this channel in uncovering the modification factor of the $hb\bar{b}$ interaction strength. With a detailed detector-level simulation, we devised a phase space region to emulate a Higgs peak in the signal and also to regulate the background processes. We obtain a moderate signal significance, showing the outcome both at LO and at NLO for a choice of modification factor $\alpha_{b}=3$ at the high-luminosity LHC. This cut-based significance is then further improved using gradient boosting techniques. Overall, the NLO result is slightly weaker than that of LO. We also investigate the effect of (renormalization and factorization) scale variation on the results at NLO and observe a significant variation, with better results at relatively higher scale values. The limit on $\alpha_{b}$, which is $\pm 1.95$ at $95\%$ C.L. for ${\cal L}=3000$ fb-1, surpasses the existing results in the literature. During the concluding stage of this study, we came across reference Grojean:2020ech , where an anomalous $hb\bar{b}$ interaction has been investigated via the Higgs decay channel $h\to\gamma\gamma$, in the context of higher energies and luminosities than those envisioned currently for the HL-LHC. Our study differs from theirs in several ways. First, by concentrating on the $b\bar{b}$ decay mode, one expects considerably larger event rates. Secondly, it also entails more severe backgrounds. Thirdly, next-to-leading order QCD effects are more non-trivial, not only for the backgrounds but for the signal as well. 
We have shown how to overcome the second and third issues, especially with the help of gradient-boosting techniques, and thus improve upon hitherto estimated levels of constraining $\alpha_{b}$ at the HL-LHC, with $\int{\cal L}dt=3000$ fb-1. ## 6 Acknowledgement The work of PK is supported by Physical Research Laboratory (PRL), Department of Space, Government of India. The work of RR is partially supported by funding available from the Department of Atomic Energy, Government of India, for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute. The work of RKS is partially supported by SERB, DST, Government of India through the project EMR/2017/002778. ## References * (1) CMS Collaboration, S. Chatrchyan et al., Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC, Phys. Lett. B716 (2012) 30–61, arXiv:1207.7235 [hep-ex]. * (2) ATLAS Collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys. Lett. B716 (2012) 1–29, arXiv:1207.7214 [hep-ex]. * (3) ATLAS Collaboration, G. Aad et al., Combined measurements of Higgs boson production and decay using up to $80$ fb-1 of proton-proton collision data at $\sqrt{s}=$ 13 TeV collected with the ATLAS experiment, Phys. Rev. D 101 no. 1, (2020) 012002, arXiv:1909.02845 [hep-ex]. * (4) CMS Collaboration, A. M. Sirunyan et al., Observation of Higgs boson decay to bottom quarks, Phys. Rev. Lett. 121 no. 12, (2018) 121801, arXiv:1808.08242 [hep-ex]. * (5) C. Balazs, J. L. Diaz-Cruz, H. J. He, T. M. P. Tait, and C. P. Yuan, Probing Higgs bosons with large bottom Yukawa coupling at hadron colliders, Phys. Rev. D59 (1999) 055016, arXiv:hep-ph/9807349 [hep-ph]. * (6) R. V. Harlander and W. B. Kilgore, Higgs boson production in bottom quark fusion at next-to-next-to leading order, Phys. Rev. D 68 (2003) 013001, arXiv:hep-ph/0304035. * (7) S. Dittmaier, M. Krämer, and M.
Spira, Higgs radiation off bottom quarks at the Tevatron and the CERN LHC, Phys. Rev. D70 (2004) 074010, arXiv:hep-ph/0309204 [hep-ph]. * (8) S. Dawson, C. Jackson, L. Reina, and D. Wackeroth, Exclusive Higgs boson production with bottom quarks at hadron colliders, Phys. Rev. D 69 (2004) 074027, arXiv:hep-ph/0311067. * (9) J. M. Campbell, S. Dawson, S. Dittmaier, C. Jackson, M. Kramer, F. Maltoni, L. Reina, M. Spira, D. Wackeroth, and S. Willenbrock, Higgs boson production in association with bottom quarks, in Physics at TeV colliders. Proceedings, Workshop, Les Houches, France, May 26–June 3, 2003, 2004, arXiv:hep-ph/0405302 [hep-ph]. * (10) S. Dawson, C. Jackson, L. Reina, and D. Wackeroth, Higgs production in association with bottom quarks at hadron colliders, Mod. Phys. Lett. A 21 (2006) 89–110, arXiv:hep-ph/0508293. * (11) M. Wiesemann, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, and P. Torrielli, Higgs production in association with bottom quarks, JHEP 02 (2015) 132, arXiv:1409.5301 [hep-ph]. * (12) S. Forte, D. Napoletano, and M. Ubiali, Higgs production in bottom-quark fusion in a matched scheme, Phys. Lett. B 751 (2015) 331–337, arXiv:1508.01529 [hep-ph]. * (13) B. Jager, L. Reina, and D. Wackeroth, Higgs boson production in association with b jets in the POWHEG BOX, Phys. Rev. D 93 no. 1, (2016) 014030, arXiv:1509.05843 [hep-ph]. * (14) M. Bonvini, A. S. Papanastasiou, and F. J. Tackmann, Matched predictions for the $b\overline{b}H$ cross section at the 13 TeV LHC, JHEP 10 (2016) 053, arXiv:1605.01733 [hep-ph]. * (15) N. Deutschmann, F. Maltoni, M. Wiesemann, and M. Zaro, Top-Yukawa contributions to bbH production at the LHC, JHEP 07 (2019) 054, arXiv:1808.01660 [hep-ph]. * (16) D. Fontes, J. C. Romão, R. Santos, and J. P. Silva, Large pseudoscalar Yukawa couplings in the complex 2HDM, JHEP 06 (2015) 060, arXiv:1502.01720 [hep-ph]. * (17) CMS Collaboration, A. M.
Sirunyan et al., Measurement and interpretation of differential cross sections for Higgs boson production at $\sqrt{s}=$ 13 TeV, Phys. Lett. B792 (2019) 369–396, arXiv:1812.06504 [hep-ex]. * (18) M. Cepeda et al., Report from Working Group 2, CERN Yellow Rep. Monogr. 7 (2019) 221–584, arXiv:1902.00134 [hep-ph]. * (19) Particle Data Group Collaboration, M. Tanabashi et al., Review of Particle Physics, Phys. Rev. D 98 no. 3, (2018) 030001. * (20) J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, JHEP 07 (2014) 079, arXiv:1405.0301 [hep-ph]. * (21) NNPDF Collaboration, R. D. Ball et al., Parton distributions for the LHC Run II, JHEP 04 (2015) 040, arXiv:1410.8849 [hep-ph]. * (22) T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, An introduction to PYTHIA 8.2, Comput. Phys. Commun. 191 (2015) 159–177, arXiv:1410.3012 [hep-ph]. * (23) DELPHES 3 Collaboration, J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi, DELPHES 3, A modular framework for fast simulation of a generic collider experiment, JHEP 02 (2014) 057, arXiv:1307.6346 [hep-ex]. * (24) T. Chen and C. Guestrin, XGBoost: A Scalable Tree Boosting System, arXiv:1603.02754 [cs.LG]. * (25) XGBoost Parameters. https://xgboost.readthedocs.io/en/latest/parameter.html. * (26) C. Grojean, A. Paul, and Z. Qian, Resurrecting $b\bar{b}h$ with kinematic shapes, arXiv:2011.13945 [hep-ph].
# Subgame-perfect Equilibria in Mean-payoff Games

Léonard Brice (Université Gustave Eiffel), Jean-François Raskin (Université Libre de Bruxelles), Marie van den Bogaard (Université Gustave Eiffel)

Classification: Software and its engineering: Formal methods; Theory of computation: Logic and verification; Theory of computation: Solution concepts in game theory.

###### Abstract In this paper, we provide an effective characterization of all the subgame-perfect equilibria in infinite duration games played on finite graphs with mean-payoff objectives. To this end, we introduce the notion of requirement, and the notion of negotiation function. We establish that the plays that are supported by SPEs are exactly those that are consistent with the least fixed point of the negotiation function. Finally, we show that the negotiation function is piecewise linear, and can be analyzed using the linear-algebraic toolbox. As a corollary, we prove the decidability of the SPE constrained existence problem, whose status was left open in the literature. ###### keywords: Games on graphs, subgame-perfect equilibria, mean-payoff objectives. ## 1 Introduction The notion of Nash equilibrium (NE) is one of the most important and most studied solution concepts in game theory. A profile of strategies is an NE when no rational player has an incentive to change their strategy unilaterally, i.e. while the other players keep their strategies. Thus an NE models a stable situation. Unfortunately, it is well known that, in sequential games, NEs suffer from the problem of non-credible threats, see e.g. [18]. In those games, some NEs exist only when some players do not play rationally in subgames, and so use non-credible threats to force the NE.
This is why, in sequential games, the stronger notion of subgame-perfect equilibrium is used instead: a profile of strategies is a subgame-perfect equilibrium (SPE) if it is an NE in all the subgames of the sequential game. Thus an SPE imposes rationality even after a deviation has occurred. In this paper, we study sequential games that are infinite-duration games played on graphs with mean-payoff objectives, and focus on SPEs. While NEs are guaranteed to exist in infinite-duration games played on graphs with mean-payoff objectives, this is known not to be the case for SPEs, see e.g. [19, 4]. We provide in this paper a constructive characterization of the entire set of SPEs, which allows us to decide, among other problems, the SPE (constrained) existence problem. This problem was left open in previous contributions on the subject. More precisely, our contributions are described in the next paragraphs. Contributions. First, we introduce two important new notions that allow us to capture NEs, and more importantly SPEs, in infinite-duration games played on graphs with mean-payoff objectives (a large part of our results applies to the larger class of games with prefix-independent objectives; for the sake of readability, we focus here on mean-payoff games, but the technical results in the paper usually cover broader classes of games): the notion of requirement and the notion of negotiation function. A requirement $\lambda$ is a function that assigns to each vertex $v\in V$ of a game graph a value in $\mathbb{R}\cup\\{-\infty,+\infty\\}$. The value $\lambda(v)$ represents a requirement on any play $\rho=\rho_{0}\rho_{1}\dots\rho_{n}\dots$ that traverses this vertex: if we want the player who controls the vertex $v$ to follow $\rho$ and to give up deviating from $\rho$, then the play must offer a payoff to this player that is at least $\lambda(v)$.
An infinite play $\rho$ is $\lambda$-consistent if, for each player $i$, the payoff of $\rho$ for player $i$ is larger than or equal to the largest value of $\lambda$ on vertices occurring along $\rho$ and controlled by player $i$. We first use those notions to rephrase a classical result about NEs: if $\lambda$ maps a vertex $v$ to the largest value that the player who controls $v$ can secure against a fully adversarial coalition of the other players, i.e. if $\lambda(v)$ is the zero-sum worst-case value, then the set of plays that are $\lambda$-consistent is exactly the set of plays that are supported by an NE (Theorem 1). As SPEs force players to play rationally in all subgames, we cannot rely on the zero-sum worst-case value to characterize them. Indeed, when considering the worst-case value, we allow adversaries to play fully adversarially after a deviation, and so potentially in an irrational way w.r.t. their own objective. In fact, in an SPE, a player is deterred from deviating when opposed by a coalition of rational adversaries. To characterize this relaxation of the notion of worst-case value, we rely on our notion of _negotiation function_. The negotiation function $\mathrm{nego}$ operates from the set of requirements into itself. To understand the purpose of the negotiation function, let us consider its application to the requirement $\lambda$ that maps every vertex $v$ to its worst-case value as above. Now, we can naturally formulate the following question: given $v$ and $\lambda$, can the player who controls $v$ improve the value that they can ensure against all the other players, if only plays that are consistent with $\lambda$ are proposed by the other players? In other words, can this player enforce a better value when playing against the other players if those players are not willing to give away their own worst-case value? Clearly, securing this worst-case value can be seen as a minimal goal for any rational adversary.
So $\mathrm{nego}(\lambda)(v)$ returns this value; and this reasoning can be iterated. One of the contributions of this paper is to show that the least fixed point $\lambda^{*}$ of the negotiation function exactly characterizes the set of plays supported by SPEs (Theorem 2). To turn this fixed-point characterization of SPEs into algorithms, we additionally draw links between the negotiation function and two classes of zero-sum games, called abstract and concrete negotiation games (see Theorem 3). We show that the latter can be solved effectively and allow, given $\lambda$, to compute $\mathrm{nego}(\lambda)$ (Lemma 3). While solving concrete negotiation games allows us to compute $\mathrm{nego}(\lambda)$ for any requirement $\lambda$, and even though the function $\mathrm{nego}(\cdot)$ is monotone and Scott-continuous, a direct application of the Kleene-Tarski fixed-point theorem is not sufficient to obtain an effective algorithm to compute $\lambda^{*}$. Indeed, we give examples that require a transfinite number of iterations to converge to the least fixed point. To provide an algorithm to compute $\lambda^{*}$, we show that the function $\mathrm{nego}(\cdot)$ is piecewise linear and we provide an effective representation of this function (Theorem 4). This effective representation can then be used to extract all its fixed points, and in particular its least fixed point, using linear-algebraic techniques; hence the decidability of the SPE (constrained) existence problem (Theorem 6). Finally, all our results are also shown to extend to $\varepsilon$-SPEs, which are quantitative relaxations of SPEs. Related works. Non-zero-sum infinite-duration games have attracted considerable attention in recent years, with applications targeting reactive synthesis problems. We refer the interested reader to the following survey papers [2, 6] and their references for the relevant literature. We detail below contributions more closely related to the work presented here.
In [5], Brihaye et al. offer a characterization of NEs in quantitative games for cost-prefix-linear reward functions based on the worst-case value. The mean-payoff is cost-prefix-linear. In their paper, the authors do not consider the stronger notion of SPE, which is the central solution concept studied in our paper. In [7], Bruyère et al. study secure equilibria that are a refinement of NEs. Secure equilibria are not subgame-perfect and are, as classical NEs, subject to non-credible threats in sequential games. In [20], Ummels proves that there always exists an SPE in games with $\omega$-regular objectives and defines algorithms based on tree automata to decide constrained SPE problems. Strategy logics, see e.g. [11], can be used to encode the concept of SPE in the case of $\omega$-regular objectives with application to the rational synthesis problem [15] for instance. In [12], Flesch et al. show that the existence of $\varepsilon$-SPEs is guaranteed when the reward function is lower-semicontinuous. The mean-payoff reward function is neither $\omega$-regular, nor lower-semicontinuous, and so the techniques defined in the papers cited above cannot be used in our setting. Furthermore, as already recalled above, see e.g. [23, 4], contrary to the $\omega$-regular case, SPEs in games with mean-payoff objectives may fail to exist. In [4], Brihaye et al. introduce and study the notion of weak subgame-perfect equilibria, which is a weakening of the classical notion of SPE. This weakening is equivalent to the original SPE concept on reward functions that are continuous. This is the case for example for the quantitative reachability reward function, on which Brihaye et al. solve the problem of the constrained existence of SPEs in [3]. On the contrary, the mean-payoff cost function is not continuous and the techniques used in [4], and generalized in [9], cannot be used to characterize SPEs for the mean-payoff reward function. 
In [17], Meunier develops a method based on Prover-Challenger games to solve the problem of the existence of SPEs in games with a finite number of possible outcomes. This method is not applicable to the mean-payoff reward function, as the number of outcomes in this case is uncountably infinite. In [13], Flesch and Predtetchinski present another characterization of SPEs in games with finitely many possible outcomes, based on a game structure that we will present here under the name of _abstract negotiation game_. Our contributions differ from that paper in two fundamental aspects. First, we lift the restriction to finitely many possible outcomes. This is crucial, as mean-payoff games violate this restriction. Instead, we identify a class of games, which we call games with steady negotiation, that encompasses mean-payoff games and for which some of the conceptual tools introduced in that paper can be generalized. Second, the procedure developed by Flesch and Predtetchinski is not an algorithm in the computer-science sense: it needs to solve infinitely many games that are not represented effectively, and furthermore it needs a transfinite number of iterations. On the contrary, our procedure is effective and leads to a complete algorithm in the classical sense: with a guarantee of termination in finite time, applied to effective representations of games. Structure of the paper. In Sect. 2, we introduce the necessary background. Sect. 3 defines the notion of requirement and the negotiation function. Sect. 4 shows that the plays that are supported by an SPE are exactly those that are $\lambda^{*}$-consistent, where $\lambda^{*}$ is the least fixed point of the negotiation function. Sect. 5 draws a link between the negotiation function and negotiation games. Sect. 6 establishes that the negotiation function is effectively piecewise linear. Finally, Sect.
7 applies those results to prove the decidability of the SPE constrained existence problem on mean-payoff games, and adds some complexity considerations. All the detailed proofs of our results can be found in a well-identified appendix, and a large number of examples are provided in the main part of the paper to illustrate the main ideas behind our new concepts and constructions. ## 2 Background In all that follows, we use the word _game_ for the infinite-duration turn-based quantitative games on finite graphs with complete information. ###### Definition 1 (Game). A _game_ is a tuple $G=\left(\Pi,V,(V_{i})_{i\in\Pi},E,\mu\right)$, where: * $\Pi$ is a finite set of _players_ ; * $(V,E)$ is a finite directed graph, whose vertices are sometimes called _states_ and whose edges are sometimes called _transitions_ , and in which every state has at least one outgoing transition. For simplicity of writing, a transition $(v,w)\in E$ will often be written $vw$. * $(V_{i})_{i\in\Pi}$ is a partition of $V$, in which $V_{i}$ is the set of states _controlled_ by player $i$; * $\mu:V^{\omega}\to\mathbb{R}^{\Pi}$ is an _outcome function_ that maps each infinite word $\rho$ to the tuple $\mu(\rho)=(\mu_{i}(\rho))_{i\in\Pi}$ of the players’ _payoffs_. ###### Definition 2 (Initialized game). An _initialized game_ is a tuple $(G,v_{0})$, often written $G_{\upharpoonright v_{0}}$, where $G$ is a game and $v_{0}\in V$ is a state called the _initial state_. Moreover, the game $G_{\upharpoonright v_{0}}$ is _well-initialized_ if every state of $G$ is accessible from $v_{0}$ in the graph $(V,E)$. ###### Definition 3 (Play, history). A _play_ (resp. history) in the game $G$ is an infinite (resp. finite) path in the graph $(V,E)$. It is also a play (resp. history) in the initialized game $G_{\upharpoonright v_{0}}$, where $v_{0}$ is its first vertex. The set of plays (resp. histories) in the game $G$ (resp.
the initialized game $G_{\upharpoonright v_{0}}$) is denoted by $\mathrm{Plays}G$ (resp. $\mathrm{Plays}G_{\upharpoonright v_{0}},\mathrm{Hist}G,\mathrm{Hist}G_{\upharpoonright v_{0}}$). We write $\mathrm{Hist}_{i}G$ (resp. $\mathrm{Hist}_{i}G_{\upharpoonright v_{0}}$) for the set of histories in $G$ (resp. $G_{\upharpoonright v_{0}}$) of the form $hv$, where $v$ is a vertex controlled by player $i$. ###### Remark. In the literature, the word _outcome_ is sometimes used for what we call plays, and the word _payoff_ for what we call outcomes. Here, the word _payoff_ refers to an outcome seen from the point of view of a given player – in other words, an _outcome_ is the collection of all players’ payoffs. ###### Definition 4 (Strategy, strategy profile). A _strategy_ for player $i$ in the initialized game $G_{\upharpoonright v_{0}}$ is a function $\sigma_{i}:\mathrm{Hist}_{i}G_{\upharpoonright v_{0}}\to V$, such that $v\sigma_{i}(hv)$ is an edge of $(V,E)$ for every $hv$. A history $h$ is _compatible_ with a strategy $\sigma_{i}$ if and only if $h_{k+1}=\sigma_{i}(h_{0}\dots h_{k})$ for all $k$ such that $h_{k}\in V_{i}$. A play $\rho$ is compatible with $\sigma_{i}$ if all its prefixes are. A _strategy profile_ for $P\subseteq\Pi$ is a tuple $\bar{\sigma}_{P}=(\sigma_{i})_{i\in P}$, where for each $i$, $\sigma_{i}$ is a strategy for player $i$ in $G_{\upharpoonright v_{0}}$. A _complete_ strategy profile, usually written $\bar{\sigma}$, is a strategy profile for $\Pi$. A play or a history is _compatible_ with $\bar{\sigma}_{P}$ if it is compatible with every $\sigma_{i}$ for $i\in P$. When $i$ is a player and the context is clear, we will often write $-i$ for the set $\Pi\setminus\\{i\\}$. We will often refer to $\Pi\setminus\\{i\\}$ as the _environment_ against player $i$.
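Definitions 3 and 4 are easy to mirror in code. The sketch below (vertex names, owners, and the strategy are hypothetical illustrations, not taken from the paper) represents a strategy as a map from histories ending in a vertex of $V_{i}$ to a chosen successor, and checks the compatibility condition $h_{k+1}=\sigma_{i}(h_{0}\dots h_{k})$:

```python
# Sketch of Definition 4: compatibility of a finite history with a strategy.
# The two-vertex game and the strategy below are purely illustrative.

def compatible(history, strategy, owner, player):
    """True iff `history` (a list of vertices) is compatible with `strategy`:
    at every position controlled by `player`, the next vertex in the history
    is the one the strategy prescribes for the prefix played so far."""
    for k in range(len(history) - 1):
        if owner[history[k]] == player:
            if strategy[tuple(history[:k + 1])] != history[k + 1]:
                return False
    return True

owner = {'u': 1, 'w': 2}                             # player 1 controls u, player 2 controls w
strategy_1 = {('u',): 'w', ('u', 'w', 'u'): 'w'}     # player 1 always moves from u to w

print(compatible(['u', 'w'], strategy_1, owner, 1))  # True
print(compatible(['u', 'u'], strategy_1, owner, 1))  # False
```

Since a play is compatible with a strategy iff all its finite prefixes are, the same check extends to ultimately periodic plays by testing their prefixes.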
When $\bar{\tau}_{P}$ and $\bar{\tau}^{\prime}_{Q}$ are two strategy profiles with $P\cap Q=\emptyset$, $(\bar{\tau}_{P},\bar{\tau}^{\prime}_{Q})$ denotes the strategy profile $\bar{\sigma}_{P\cup Q}$ such that $\sigma_{i}=\tau_{i}$ for $i\in P$, and $\sigma_{i}=\tau^{\prime}_{i}$ for $i\in Q$. Before moving on to SPEs, let us recall the notion of Nash equilibrium. ###### Definition 5 (Nash equilibrium). Let $G_{\upharpoonright v_{0}}$ be an initialized game. The strategy profile $\bar{\sigma}$ is a _Nash equilibrium_ — or _NE_ for short — in $G_{\upharpoonright v_{0}}$ if and only if for each player $i$ and for every strategy $\sigma^{\prime}_{i}$, called _deviation of $\sigma_{i}$_, we have the inequality $\mu_{i}\left(\langle\sigma^{\prime}_{i},\bar{\sigma}_{-i}\rangle_{v_{0}}\right)\leq\mu_{i}\left(\langle\bar{\sigma}\rangle_{v_{0}}\right)$. To define SPEs, we need the notion of subgame. ###### Definition 6 (Subgame, substrategy). Let $hv$ be a history in the game $G$. The _subgame_ of $G$ after $hv$ is the initialized game $\left(\Pi,V,(V_{i})_{i},E,\mu_{\upharpoonright hv}\right)_{\upharpoonright v}$, where $\mu_{\upharpoonright hv}$ maps each play to its payoff in $G$, assuming that the history $hv$ has already been played: formally, for every $\rho\in\mathrm{Plays}G_{\upharpoonright hv}$, we have $\mu_{\upharpoonright hv}(\rho)=\mu(h\rho)$. If $\sigma_{i}$ is a strategy in $G_{\upharpoonright v_{0}}$, its _substrategy_ after $hv$ is the strategy $\sigma_{i\upharpoonright hv}$ in $G_{\upharpoonright hv}$, defined by $\sigma_{i\upharpoonright hv}(h^{\prime})=\sigma_{i}(hh^{\prime})$ for every $h^{\prime}\in\mathrm{Hist}_{i}G_{\upharpoonright hv}$. ###### Remark. The initialized game $G_{\upharpoonright v_{0}}$ is also the subgame of $G$ after the one-state history $v_{0}$. ###### Definition 7 (Subgame-perfect equilibrium). Let $G_{\upharpoonright v_{0}}$ be an initialized game. 
The strategy profile $\bar{\sigma}$ is a _subgame-perfect equilibrium_ — or _SPE_ for short — in $G_{\upharpoonright v_{0}}$ if and only if for every history $h$ in $G_{\upharpoonright v_{0}}$, the strategy profile $\bar{\sigma}_{\upharpoonright h}$ is a Nash equilibrium in the subgame $G_{\upharpoonright h}$. The notion of subgame-perfect equilibrium can be seen as a refinement of Nash equilibrium: it is a stronger equilibrium which excludes players resorting to non-credible threats. ###### Example 1. In the game represented in Figure 1(a), where the square state is controlled by player $\Box$ and the round states by player $\Circle$, if both players get the payoff $1$ by reaching the state $d$ and the payoff $0$ in the other cases, there are actually two NEs: one, in blue, where $\Box$ goes to the state $b$ and then player $\Circle$ goes to $d$, and both win, and one, in red, where player $\Box$ goes to the state $c$ because player $\Circle$ was planning to go to $e$. However, only the blue one is an SPE, as moving from $b$ to $e$ is irrational for player $\Circle$ in the subgame $G_{\upharpoonright ab}$. An $\varepsilon$-SPE is a strategy profile which is _almost_ an SPE: if a player deviates after some history, they will not be able to improve their payoff by more than a quantity $\varepsilon\geq 0$. ###### Definition 8 ($\varepsilon$-SPE). Let $G_{\upharpoonright v_{0}}$ be an initialized game, and $\varepsilon\geq 0$. A strategy profile $\bar{\sigma}$ from $v_{0}$ is an $\varepsilon$-SPE if and only if for every history $hv$, for every player $i$ and every strategy $\sigma^{\prime}_{i}$, we have $\mu_{i}(\langle\bar{\sigma}_{-i\upharpoonright hv},\sigma^{\prime}_{i\upharpoonright hv}\rangle_{v})\leq\mu_{i}(\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v})+\varepsilon$. Note that a $0$-SPE is an SPE, and conversely. Hereafter, we focus on _prefix-independent_ games, and in particular _mean- payoff_ games. ###### Definition 9 (Mean-payoff game). 
A _mean-payoff game_ is a game $G=\left(\Pi,V,(V_{i})_{i},E,\mu\right)$, where $\mu$ is defined from a function $\pi:E\to\mathbb{Q}^{\Pi}$, called the _weight function_ , by, for each player $i$: $\mu_{i}:\rho\mapsto\liminf_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\pi_{i}\left(\rho_{k}\rho_{k+1}\right).$ In a mean-payoff game, the weight given by the function $\pi$ represents the immediate reward that each action gives to each player. The final payoff of each player is their average payoff along the play, classically defined as the limit inferior over $n$ (since the limit may not exist) of the average payoff after $n$ steps. ###### Definition 10 (Prefix-independent game). A game $G$ is _prefix-independent_ if, for every history $h$ and for every play $\rho$, we have $\mu(h\rho)=\mu(\rho)$. We also say, in that case, that the outcome function $\mu$ is prefix-independent. Figure 1: Two examples of games. (a) A game with states $a$ to $g$, admitting two NEs of which only one is an SPE. (b) A game without SPE, with states $a$, $b$, $c$, $d$: the cycle $ab$ carries the weights $\stackrel{{\scriptstyle{\scriptsize{\Circle}}}}{{0}}\stackrel{{\scriptstyle\Box}}{{3}}$, the self-loop at $d$ the weights $\stackrel{{\scriptstyle{\scriptsize{\Circle}}}}{{2}}\stackrel{{\scriptstyle\Box}}{{2}}$, and the self-loop at $c$ the weights $\stackrel{{\scriptstyle{\scriptsize{\Circle}}}}{{1}}\stackrel{{\scriptstyle\Box}}{{1}}$. Below the states $a$, $b$, $c$, $d$ (in that order) are listed the successive requirements $(\lambda_{0})$: $-\infty,-\infty,-\infty,-\infty$; $(\lambda_{1})$: $1,2,1,2$; $(\lambda_{2})$: $2,2,1,2$; $(\lambda_{3})$: $2,3,1,2$; $(\lambda_{4})$: $+\infty,+\infty,1,2$. Mean-payoff games are prefix-independent. We now recall a classical result about two-player zero-sum games. ###### Definition 11 (Zero-sum game). A game $G$, with $\Pi=\\{1,2\\}$, is _zero-sum_ if $\mu_{2}=-\mu_{1}$. ###### Definition 12 (Borel game). A game $G$ is _Borel_ if the function $\mu$, from the set $V^{\omega}$ equipped with the product topology to the Euclidean space $\mathbb{R}^{\Pi}$, is Borel, i.e. if, for every Borel set $B\subseteq\mathbb{R}^{\Pi}$, the set $\mu^{-1}(B)$ is Borel. ###### Definition 13 (Determinacy).
Let $G_{\upharpoonright v_{0}}$ be an initialized zero-sum Borel game, with $\Pi=\\{1,2\\}$. The game $G_{\upharpoonright v_{0}}$ is _determined_ if we have the following equality: $\sup_{\sigma_{1}}\ \inf_{\sigma_{2}}\ \mu_{1}(\langle\bar{\sigma}\rangle_{v_{0}})=\inf_{\sigma_{2}}\ \sup_{\sigma_{1}}\ \mu_{1}(\langle\bar{\sigma}\rangle_{v_{0}}).$ That quantity is called the _value_ of $G_{\upharpoonright v_{0}}$, denoted by $\mathrm{val}_{1}(G_{\upharpoonright v_{0}})$; _solving_ the game $G$ means computing its value. ###### Proposition 1 (Determinacy of two-player zero-sum Borel games [16]). Zero-sum Borel games are determined. The following examples illustrate the SPE existence problem in mean-payoff games. ###### Example 2. Let $G$ be the mean-payoff game of Figure 1(b), where each edge is labelled by its weights $\pi_{\scriptsize{\Circle}}$ and $\pi_{\Box}$. No weight is given for the edges $ac$ and $bd$, since they can be used only once and therefore do not influence the final payoff. For now, the reader should not pay attention to the requirement values $(\lambda_{0}),\dots,(\lambda_{4})$ listed below the states. As shown in [8], this game has no SPE, from either the state $a$ or the state $b$. Indeed, the only NE plays from the state $b$ are the plays in which player $\Box$ eventually leaves the cycle $ab$ and goes to $d$: if he stays in the cycle $ab$, then player $\Circle$ would be better off leaving it, and if she does, player $\Box$ would be better off leaving it before her. From the state $a$, if player $\Circle$ knows that player $\Box$ will leave, she has no incentive to do so before him: there is no NE where $\Circle$ leaves the cycle and $\Box$ plans to do it if ever she does not. Therefore, there is no SPE where $\Circle$ leaves the cycle. But then, after a history that terminates in $b$, player $\Box$ actually has no incentive to leave if player $\Circle$ never plans to do it afterwards: contradiction.
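The role of the limit inferior in Definition 9 can be seen numerically. The sketch below uses a plain 0/1 reward sequence, not tied to any particular game from the paper: it alternates ever-longer blocks whose lengths square at each step, as in the play $a^{2}b^{4}a^{16}b^{256}\dots$ of Example 3 below. Each new block dominates the whole history, so the running average keeps oscillating and its liminf equals the smaller block value:

```python
# Running averages of a reward sequence made of alternating blocks of 0s and
# 1s whose lengths square each time (2, 4, 16, 256, 65536). After every
# 1-block the average is close to 1; after every 0-block it collapses towards
# 0, so the liminf of the running averages is 0 while the limsup is 1.

def block_rewards(lengths):
    rewards, value = [], 0
    for length in lengths:
        rewards += [value] * length
        value = 1 - value          # alternate 0-blocks and 1-blocks
    return rewards

lengths = [2, 4, 16, 256, 65536]
rewards = block_rewards(lengths)

averages, prefix = [], 0
for length in lengths:
    prefix += length               # position just after each block
    averages.append(sum(rewards[:prefix]) / prefix)

print([round(a, 3) for a in averages])   # [0.0, 0.667, 0.182, 0.935, 0.004]
```

In a mean-payoff game the same mechanism lets a play that alternates ever-longer stays in two cycles give each player the minimum of their two cycle values, which is how Example 3 reaches the outcome $00$ outside the convex hull of the cycle outcomes.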
Figure 2: A game with infinitely many SPEs. (a) The game $G$, with states $a$ (controlled by $\Circle$) and $b$ (controlled by $\Box$): the self-loop at $a$ carries the weights $\stackrel{{\scriptstyle{\scriptsize{\Circle}}}}{{0}}\stackrel{{\scriptstyle\Box}}{{1}}$, the self-loop at $b$ the weights $\stackrel{{\scriptstyle{\scriptsize{\Circle}}}}{{1}}\stackrel{{\scriptstyle\Box}}{{0}}$, and the edges between $a$ and $b$ the weights $\stackrel{{\scriptstyle{\scriptsize{\Circle}}}}{{2}}\stackrel{{\scriptstyle\Box}}{{2}}$. (b) The outcomes of plays and of SPE plays in $G$. ###### Example 3. Let us now study the game of Figure 2(a). Using techniques from [10], we can represent the outcomes of possible plays in that game as in Figure 2(b) (gray and blue areas). Following exclusively one of the three simple cycles $a$, $ab$ and $b$ of the game graph during a play yields the outcomes $01$, $22$ and $10$, respectively. By combining those cycles with well-chosen frequencies, one can obtain any outcome in the convex hull of those three points. Now, it is also possible to obtain the point $00$ by using the properties of the limit inferior: it is for instance the outcome of the play $a^{2}b^{4}a^{16}b^{256}\dots a^{2^{2^{n}}}b^{2^{2^{n+1}}}\dots$. In fact, one can construct a play that yields any outcome in the convex hull of the four points $00,10,01$, and $22$. We claim that the outcomes of SPE plays correspond to the entire blue area in Figure 2(b): there exists an SPE $\bar{\sigma}$ in $G_{\upharpoonright a}$ with $\langle\bar{\sigma}\rangle_{a}=\rho$ if and only if $\mu_{\Box}(\rho),\mu_{\scriptsize{\Circle}}(\rho)\geq 1$. That statement will be a direct consequence of the results we show in the remaining sections, but let us give a first intuition: a play with such an outcome necessarily uses both states infinitely often. It is an NE play because neither player can get a better payoff by looping forever on their own state, and each can force the other to follow that play by threatening to loop forever on their own state whenever they can. But such a strategy profile is clearly not an SPE.
It can be transformed into an SPE as follows: when a player deviates, say player $\Box$, then player $\Circle$ can punish him by looping on $a$, not forever, but a great number of times, until player $\Box$’s mean-payoff gets very close to $1$. Afterwards, both players follow again the play that was initially planned. Since that threat is temporary, it does not affect player $\Circle$’s payoff in the long term, but it really punishes player $\Box$ if he tries to deviate infinitely often. ## 3 Requirements and negotiation We will now see that SPEs are strategy profiles that respect some _requirements_ about the payoffs, depending on the states they traverse. In this part, we develop the notions of _requirement_ and _negotiation_. ### 3.1 Requirement In the method we will develop further, we will need to analyze the players’ behaviour when they have some _requirement_ to satisfy. Intuitively, one can see requirements as _rationality constraints_ for the players, that is, threshold payoff values under which a player will not accept to follow a play. In all that follows, $\overline{\mathbb{R}}$ denotes the set $\mathbb{R}\cup\\{\pm\infty\\}$. ###### Definition 14 (Requirement). A _requirement_ on the game $G$ is a function $\lambda:V\to\overline{\mathbb{R}}$. For a given state $v$, the quantity $\lambda(v)$ represents the minimal payoff that the player controlling $v$ will require in a play beginning in $v$. ###### Definition 15 ($\lambda$-consistency). Let $\lambda$ be a requirement on a game $G$. A play $\rho$ in $G$ is _$\lambda$ -consistent_ if and only if, for all $i\in\Pi$ and $n\in\mathbb{N}$ with $\rho_{n}\in V_{i}$, we have $\mu_{i}(\rho_{n}\rho_{n+1}\dots)\geq\lambda(\rho_{n})$. The set of the $\lambda$-consistent plays from a state $v$ is denoted by $\lambda\mathrm{Cons}(v)$. ###### Definition 16 ($\lambda$-rationality). Let $\lambda$ be a requirement on a mean-payoff game $G$. Let $i\in\Pi$.
A strategy profile $\bar{\sigma}_{-i}$ is _$\lambda$-rational_ if and only if there exists a strategy $\sigma_{i}$ such that, for every history $hv$ compatible with $\bar{\sigma}_{-i}$, the play $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}$ is $\lambda$-consistent. We then say that the strategy profile $\bar{\sigma}_{-i}$ is $\lambda$-rational _assuming_ $\sigma_{i}$. The set of $\lambda$-rational strategy profiles in $G_{\upharpoonright v}$ is denoted by $\lambda\mathrm{Rat}(v)$. Note that $\lambda$-rationality is a property of a strategy profile for all the players but one, player $i$. Intuitively, their rationality is justified by the fact that they collectively assume that player $i$ will eventually play according to the strategy $\sigma_{i}$: if player $i$ does so, then every player’s requirement is satisfied. Finally, let us define a particular requirement: the _vacuous requirement_, which requires nothing and with which every play is consistent. ###### Definition 17 (Vacuous requirement). In any game, the _vacuous requirement_, denoted by $\lambda_{0}$, is the requirement constantly equal to $-\infty$. ### 3.2 Negotiation We will show that SPEs in prefix-independent games are characterized by the fixed points of a function on requirements. That function can be seen as a _negotiation_: when a player has a requirement to satisfy, another player may hope for a better payoff than what they could otherwise secure, and therefore update their own requirement. ###### Definition 18 (Negotiation function). Let $G$ be a game. The _negotiation function_ is the function that transforms any requirement $\lambda$ on $G$ into a requirement $\mathrm{nego}(\lambda)$ on $G$, such that for each $i\in\Pi$ and $v\in V_{i}$, with the convention $\inf\emptyset=+\infty$, we have: $\mathrm{nego}(\lambda)(v)=\underset{\bar{\sigma}_{-i}\in\lambda\mathrm{Rat}(v)}{\inf}\underset{\sigma_{i}}{\sup}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}\rangle_{v}).$ ###### Remarks.
There exists a $\lambda$-rational strategy profile from $v$ against the player controlling $v$ if and only if $\mathrm{nego}(\lambda)(v)\neq+\infty$. The negotiation function is monotone: if $\lambda\leq\lambda^{\prime}$ (for the pointwise order, i.e. if $\lambda(v)\leq\lambda^{\prime}(v)$ for each $v$), then $\mathrm{nego}(\lambda)\leq\mathrm{nego}(\lambda^{\prime})$. The negotiation function is also non-decreasing: for every $\lambda$, we have $\lambda\leq\mathrm{nego}(\lambda)$. In the general case, the quantity $\mathrm{nego}(\lambda)(v)$ represents the worst-case value that the player controlling $v$ can ensure, assuming that the other players play $\lambda$-rationally. ###### Example 4. Let us consider the game of Example 2: in Figure 1(b), on the first two lines below the states, we present the requirements $\lambda_{0}$ and $\lambda_{1}=\mathrm{nego}(\lambda_{0})$. The latter is easy to compute, since any strategy profile is $\lambda_{0}$-rational: for each $v$, $\lambda_{1}(v)$ is the classical _worst-case value_ or _antagonistic value_ of $v$, i.e. the best value the player controlling $v$ can enforce against a fully hostile environment. Let us now compute the requirement $\lambda_{2}=\mathrm{nego}(\lambda_{1})$. From $c$, there exists exactly one $\lambda_{1}$-rational strategy profile $\bar{\sigma}_{-{\scriptsize{\Circle}}}=\sigma_{\Box}$, namely the empty strategy, since player $\Box$ never has to choose anything. Against that strategy, the only payoff player $\Circle$ can get is $1$, hence $\lambda_{2}(c)=1$. For the same reasons, $\lambda_{2}(d)=2$. From $b$, player $\Circle$ can force $\Box$ to get the payoff $2$ or less, with the strategy profile $\sigma_{\scriptsize{\Circle}}:h\mapsto c$. Such a strategy is $\lambda_{1}$-rational, assuming the strategy $\sigma_{\Box}:h\mapsto d$. Therefore, $\lambda_{2}(b)=2$.
Finally, from $a$, player $\Box$ can force $\Circle$ to get the payoff $2$ or less, with the strategy profile $\sigma_{\Box}:h\mapsto d$. Such a strategy is $\lambda_{1}$-rational, assuming the strategy $\sigma_{\scriptsize{\Circle}}:h\mapsto c$. However, he cannot force her to get less than the payoff $2$, because she can force access to the state $b$, and the only $\lambda_{1}$-consistent plays from $b$ are the plays of the form $(ba)^{k}bd^{\omega}$. Therefore, $\lambda_{2}(a)=2$. ### 3.3 Steady negotiation In what follows, we will often need a game to be _with steady negotiation_, i.e. such that there always exists a worst $\lambda$-rational behaviour of the environment against a given player. ###### Definition 19 (Game with steady negotiation). A game $G$ is _with steady negotiation_ if and only if for every player $i$, for every vertex $v$, and for every requirement $\lambda$, the set $\left\\{\left.\sup_{\sigma_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}_{-i},\sigma_{i}\rangle_{v})\leavevmode\nobreak\ \right|\leavevmode\nobreak\ \bar{\sigma}_{-i}\in\lambda\mathrm{Rat}(v)\right\\}$ is either empty, or has a minimum. ###### Remark. In particular, when a game is with steady negotiation, the infimum in the definition of the negotiation function is always reached. It will be proved in Section 5 that mean-payoff games are with steady negotiation. ### 3.4 Link with Nash equilibria Requirements and the negotiation function are able to capture Nash equilibria. Indeed, if $\lambda_{0}$ is the vacuous requirement, then $\mathrm{nego}(\lambda_{0})$ characterizes the plays that are supported by a Nash equilibrium (NE plays, for short), in the following formal sense: ###### Theorem 1 (App. A). Let $G$ be a game with steady negotiation. Then, a play $\rho$ in $G$ is an NE play if and only if $\rho$ is $\mathrm{nego}(\lambda_{0})$-consistent. ###### Example 5. Let us consider again the game of Example 2, with the requirement $\lambda_{1}$ given in Figure 1(b).
The only $\lambda_{1}$-consistent plays in this game, starting from the state $a$, are $ac^{\omega}$, and $(ab)^{k}d^{\omega}$ with $k\geq 1$. One can check that those plays are exactly the NE plays in that game. In the following section, we will prove that, just as $\mathrm{nego}(\lambda_{0})$ characterizes the NEs, the requirement that is the least fixed point of the negotiation function characterizes the SPEs. ## 4 Link between negotiation and SPEs The notion of negotiation will enable us to find the SPEs, but also more generally the $\varepsilon$-SPEs, in a game. For that purpose, we need the notion of $\varepsilon$-fixed points of a function. ###### Definition 20 ($\varepsilon$-fixed point). Let $\varepsilon\geq 0$, let $D$ be a finite set and let $f:\overline{\mathbb{R}}^{D}\to\overline{\mathbb{R}}^{D}$ be a mapping. A tuple $\bar{x}\in\mathbb{R}^{D}$ is an _$\varepsilon$-fixed point_ of $f$ if for each $d\in D$, for $\bar{y}=f(\bar{x})$, we have $y_{d}\in[x_{d}-\varepsilon,x_{d}+\varepsilon]$. ###### Remark. A $0$-fixed point is a fixed point, and conversely. The set of requirements, equipped with the componentwise order, is a complete lattice. Since the negotiation function is monotone, Tarski’s fixed point theorem states that it has a least fixed point. That result can be generalized to $\varepsilon$-fixed points: ###### Lemma 1 (App. B). Let $\varepsilon\geq 0$. On each game, the function $\mathrm{nego}$ has a least $\varepsilon$-fixed point. Intuitively, the $\varepsilon$-fixed points of the negotiation function are the requirements $\lambda$ such that, from every vertex $v$, the player $i$ controlling $v$ cannot enforce a payoff greater than $\lambda(v)+\varepsilon$ against a $\lambda$-rational behaviour. Therefore, the $\lambda$-consistent plays are such that if one player tries to deviate, it is possible for the other players to prevent them from improving their payoff by more than $\varepsilon$, while still playing rationally.
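Before the formal statement, note that $\lambda$-consistency (Definition 15) is effectively checkable for ultimately periodic (lasso) plays: by prefix-independence of the mean payoff, every suffix has the mean payoff of the cycle, so only finitely many requirement checks are needed. A minimal sketch in Python, on a hypothetical two-state game whose owners, weights and requirement are illustrative, not those of the running examples:

```python
from fractions import Fraction

# Hypothetical mean-payoff game: owner[v] is the player controlling v,
# weight[(u, v)][i] is player i's payoff on edge (u, v).
owner = {"a": 0, "b": 1}
weight = {("a", "b"): (2, 2), ("b", "a"): (2, 2),
          ("a", "a"): (0, 1), ("b", "b"): (1, 0)}

def cycle_mean(cycle, player):
    """Mean payoff for `player` of the play cycle^omega."""
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    return Fraction(sum(weight[e][player] for e in edges), len(cycle))

def is_lambda_consistent(prefix, cycle, lam):
    """Check Definition 15 for the lasso play prefix . cycle^omega.

    By prefix-independence, the suffix payoff from every position equals the
    cycle's mean payoff, so each traversed state is checked only once."""
    for v in prefix + cycle:
        i = owner[v]
        if cycle_mean(cycle, i) < lam[v]:
            return False
    return True

lam = {"a": 1, "b": 1}
print(is_lambda_consistent([], ["a", "b"], lam))  # alternating cycle: both payoffs 2
print(is_lambda_consistent(["b"], ["a"], lam))    # looping on a: payoffs (0, 1)
```

The first lasso satisfies both requirements; the second violates the requirement at $a$, where the owner only gets payoff $0$.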
Formally: ###### Theorem 2 (App. C). Let $G_{\upharpoonright v_{0}}$ be an initialized prefix-independent game, and let $\varepsilon\geq 0$. Let $\lambda^{*}$ be the least $\varepsilon$-fixed point of the negotiation function. Let $\xi$ be a play starting in $v_{0}$. If there exists an $\varepsilon$-SPE $\bar{\sigma}$ such that $\langle\bar{\sigma}\rangle_{v_{0}}=\xi$, then $\xi$ is $\lambda^{*}$-consistent. The converse is true if the game $G$ is with steady negotiation. ## 5 Negotiation games We have now proved that SPEs are characterized by the requirements that are fixed points of the negotiation function; but we need to know how to compute, in practice, the quantity $\mathrm{nego}(\lambda)$ for a given requirement $\lambda$. In other words, we need an algorithm that computes, given a state $v_{0}$ controlled by a player $i$ in the game $G$ and a requirement $\lambda$, the value player $i$ can ensure in $G_{\upharpoonright v_{0}}$ if the other players play $\lambda$-rationally. ### 5.1 Abstract negotiation game We first define an _abstract negotiation game_, which is conceptually simple but not directly usable for algorithmic purposes, because it is defined on an uncountably infinite state space. A similar definition was given in [13], as a tool in a general method to compute SPE plays in games whose payoff functions have finite range, which is not the case for mean-payoff games. Here, linking that game with our concepts of requirements, negotiation function and steady negotiation enables us to present an effective algorithm in the case of mean-payoff games, first by constructing a finite version of the abstract negotiation game, the _concrete negotiation game_, and then by analyzing the negotiation function with linear algebra tools.
The abstract negotiation game from a state $v_{0}$, with regard to a player $i$ and a requirement $\lambda$, is denoted by $\mathrm{Abs}_{\lambda i}(G)_{\upharpoonright[v_{0}]}$ and opposes two players, _Prover_ and _Challenger_, as follows: * • Prover proposes a $\lambda$-consistent play $\rho$ from $v_{0}$ (or loses, if she has no play to propose). * • Then, either Challenger accepts the play and the game terminates; or he chooses an edge $\rho_{k}\rho_{k+1}$, with $\rho_{k}\in V_{i}$, from which he can make player $i$ deviate, using another edge $\rho_{k}v$ with $v\neq\rho_{k+1}$: then, the game starts again from $v$ instead of $v_{0}$. * • In the resulting play (either eventually accepted by Challenger, or constructed through infinitely many deviations), Prover wants player $i$’s payoff to be low, and Challenger wants it to be high. That game gives us the basis of a method to compute $\mathrm{nego}(\lambda)$ from $\lambda$: the maximal outcome that Challenger — or $\mathbb{C}$ for short — can ensure in $\mathrm{Abs}_{\lambda i}(G)_{\upharpoonright[v_{0}]}$, with $v_{0}\in V_{i}$, is also the maximal payoff that player $i$ can ensure in $G_{\upharpoonright v_{0}}$ against a $\lambda$-rational environment; hence the equality $\mathrm{val}_{\mathbb{C}}\left(\mathrm{Abs}_{\lambda i}(G)_{\upharpoonright[v_{0}]}\right)=\mathrm{nego}(\lambda)(v_{0}).$ A proof of that statement, with a complete formalization of the abstract negotiation game, is presented in Appendix D. ###### Example 6. Let us consider again the game of Example 2: the requirement $\lambda_{2}=\mathrm{nego}(\lambda_{1})$, computed in Section 3.2, is also presented on the third line below the states in Figure 1(b). Let us use the abstract negotiation game to compute the requirement $\lambda_{3}=\mathrm{nego}(\lambda_{2})$. From $a$, Prover can propose the play $abd^{\omega}$, and the only deviation Challenger can make is to go to $c$; he has, of course, no incentive to do so. Therefore, $\lambda_{3}(a)=2$.
From $b$, whatever Prover proposes at first, Challenger can deviate and go to $a$. Then, from $a$, Prover cannot propose the play $ac^{\omega}$, which is not $\lambda_{2}$-consistent: she has to propose a play beginning with $ab$, and to let Challenger deviate once more. Challenger can deviate infinitely often in that way, generating the play $(ba)^{\omega}$: therefore, $\lambda_{3}(b)=3$. The other states keep the same values. Note that there exists no $\lambda_{3}$-consistent play from $a$ or $b$, hence $\mathrm{nego}(\lambda_{3})(a)=\mathrm{nego}(\lambda_{3})(b)=+\infty$. This proves that there is no SPE in that game. The interested reader will find other such examples in Appendix N. ### 5.2 Concrete negotiation game In the abstract negotiation game, Prover has to propose complete plays, which may be assumed to be $\lambda$-consistent. In practice, there will often be infinitely many such plays, and therefore the abstract game cannot be used directly for algorithmic purposes. Instead, those plays can be given edge by edge, in a finite-state game. Its definition is more technical, but it can be shown to be equivalent to the abstract one. In order to make the definition as clear as possible, we give it only when the original game is a mean-payoff game. However, one could easily adapt this definition to other classes of prefix-independent games. ###### Definition 21 (Concrete negotiation game). Let $G_{\upharpoonright v_{0}}$ be an initialized mean-payoff game, and let $\lambda$ be a requirement on $G$, with either $\lambda(V)\subseteq\mathbb{R}$, or $\lambda=\lambda_{0}$.
The _concrete negotiation game_ of $G_{\upharpoonright v_{0}}$ for player $i$ is the two-player zero-sum game $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}\leavevmode\nobreak\ =\leavevmode\nobreak\ \left(\\{\mathbb{P},\mathbb{C}\\},S,(S_{\mathbb{P}},S_{\mathbb{C}}),\Delta,\nu\right)_{\upharpoonright s_{0}}$, defined as follows: * • The set of states controlled by Prover is $S_{\mathbb{P}}=V\times 2^{V}$, where the state $s=(v,M)$ contains the information of the current state $v$ on which Prover has to define the strategy profile, and the _memory_ $M$ of the states that have been traversed so far since the last deviation, and that define the requirements Prover has to satisfy. The initial state is $s_{0}=(v_{0},\\{v_{0}\\})$. * • The set of states controlled by Challenger is $S_{\mathbb{C}}=E\times 2^{V}$, where in the state $s=(uv,M)$, the edge $uv$ is the edge proposed by Prover. * • The set $\Delta$ contains three types of transitions: _proposals_ , _acceptations_ and _deviations_. 
* – The proposals are transitions in which Prover proposes an edge of the game $G$: $\mathrm{Prop}=\left\\{(v,M)(vw,M)\leavevmode\nobreak\ \left|\leavevmode\nobreak\ vw\in E,M\in 2^{V}\right.\right\\};$ * – the acceptations are transitions in which Challenger accepts to follow the edge proposed by Prover (it is in particular his only possibility when that edge begins on a state that is not controlled by player $i$) — note that the memory is updated: $\mathrm{Acc}=\left\\{(vw,M)\left(w,M\cup\\{w\\}\right)\leavevmode\nobreak\ \left|\leavevmode\nobreak\ j\in\Pi,w\in V_{j}\right.\right\\};$ * – the deviations are transitions in which Challenger refuses to follow the edge proposed by Prover, as he can if that edge begins in a state controlled by player $i$ — the memory is erased, and only the new state the deviating edge leads to is memorized: $\mathrm{Dev}=\left\\{(uv,M)(w,\\{w\\})\leavevmode\nobreak\ \left|\leavevmode\nobreak\ u\in V_{i},w\neq v,uw\in E\right.\right\\}.$ * • On those transitions, we define a multidimensional weight function $\hat{\pi}:\Delta\to\mathbb{R}^{\Pi\cup\\{\star\\}}$, with one dimension per player (_non-main_ dimensions) plus one special dimension (_main_ dimension) denoted by the symbol $\star$. For each non-main dimension $j\in\Pi$, we define: * – on proposals: $\hat{\pi}_{j}\left((v,M)(vw,M)\right)=0$; * – on acceptations and deviations: $\hat{\pi}_{j}\left((uv,M)(w,N)\right)=2\left(\pi_{j}(uw)-\underset{v_{j}\in M\cap V_{j}}{\max}\lambda(v_{j})\right)$; and on the main dimension: * – on proposals: $\hat{\pi}_{\star}\left((v,M),(vw,M)\right)=0$; * – on acceptations and deviations: $\hat{\pi}_{\star}\left((uv,M),(w,N)\right)=2\pi_{i}(uw)$. 
For each dimension $d$, we write $\hat{\mu}_{d}$ the corresponding mean-payoff function: $\hat{\mu}_{d}(\rho)=\liminf_{n\in\mathbb{N}}\frac{1}{n}\sum_{k=0}^{n-1}\hat{\pi}_{d}(\rho_{k}\rho_{k+1}).$ Thus, the mean-payoff along the main dimension corresponds to player $i$’s payoff, while the mean-payoff along a non-main dimension $j$ corresponds to player $j$’s payoff… minus the maximal requirement player $j$ has to satisfy. * • Then, the outcome function $\nu_{\mathbb{C}}=-\nu_{\mathbb{P}}$ measures player $i$’s payoff, with a winning condition if the constructed strategy profile is not $\lambda$-rational, that is to say if after finitely many player $i$’s deviations, it can generate a play which is not $\lambda$-consistent: * – $\nu_{\mathbb{C}}(\eta)=+\infty$ if after some index $n\in\mathbb{N}$, the play $\eta_{n}\eta_{n+1}\dots$ contains no deviation, and if $\hat{\mu}_{j}(\eta)<0$ for some $j\in\Pi$; * – $\nu_{\mathbb{C}}(\eta)=\hat{\mu}_{\star}(\eta)$ otherwise. Like in the abstract negotiation game, the goal of Challenger is to find a $\lambda$-rational strategy profile that forces the worst possible payoff for player $i$, and the goal of Prover is to find a possibly deviating strategy for player $i$ that gives them the highest possible payoff. A play or a history in the concrete negotiation game has a projection in the game on which that negotiation game has been constructed, defined as follows: ###### Definition 22 (Projection of a history, of a play). Let $G$ be a prefix-independent game. Let $\lambda$ be a requirement and $i$ a player, and let $\mathrm{Conc}_{\lambda i}(G)$ be the corresponding concrete negotiation game. Let $H=(h_{0},M_{0})(h_{0}h^{\prime}_{0},M_{0})\dots(h_{n}h^{\prime}_{n},M_{n})$ be a history in $\mathrm{Conc}_{\lambda i}(G)$: the _projection_ of the history $H$ is the history $\dot{H}=h_{0}\dots h_{n}$ in the game $G$. That definition is naturally extended to plays. ###### Remark. 
For a play $\eta$ without deviations, we have $\hat{\mu}_{j}(\eta)\geq 0$ for each $j\in\Pi$ if and only if $\dot{\eta}$ is $\lambda$-consistent. The concrete negotiation game is equivalent to the abstract one: the only differences are that the plays proposed by Prover are proposed edge by edge, and that their $\lambda$-consistency is not written in the rules of the game but in its outcome function. ###### Theorem 3 (App. E). Let $G_{\upharpoonright v_{0}}$ be an initialized mean-payoff game. Let $\lambda$ be a requirement and $i$ a player. Then, we have: $\mathrm{val}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}\right)=\inf_{\bar{\sigma}_{-i}\in\lambda\mathrm{Rat}(v_{0})}\leavevmode\nobreak\ \sup_{\sigma_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}\rangle_{v_{0}}).$ An example of concrete negotiation game is given in Appendix F. ### 5.3 Solving the concrete negotiation game We now know that $\mathrm{nego}(\lambda)(v)$, for a given requirement $\lambda$, a given player $i$ and a given state $v\in V_{i}$, is the value of the concrete negotiation game $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright(v,\\{v\\})}$. Let us now show how, in the mean-payoff case, that value can be computed. ###### Definition 23 (Memoryless strategy). A strategy $\sigma_{i}$ in a game $G$ is _memoryless_ if for all vertices $v\in V_{i}$ and for all histories $h$ and $h^{\prime}$, we have $\sigma_{i}(hv)=\sigma_{i}(h^{\prime}v)$. For any game $G$ and any memoryless strategy $\sigma_{i}$, $G[\sigma_{i}]$ denotes the graph _induced_ by $\sigma_{i}$, that is the graph $(V,E^{\prime})$, with $E^{\prime}=\left\\{vw\in E\leavevmode\nobreak\ |\leavevmode\nobreak\ v\not\in V_{i}\mathrm{\leavevmode\nobreak\ or\leavevmode\nobreak\ }w=\sigma_{i}(v)\right\\}.$ For any finite set $D$ and any set $X\subseteq\mathbb{R}^{D}$, $\mathrm{Conv}X$ denotes the convex hull of $X$. 
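The finite state space and the three transition families of Definition 21 can be enumerated directly. A minimal sketch in Python, on a hypothetical two-state arena of the same shape as Example 3 (states $a$ and $b$, each with a self-loop and a switching edge; the deviating player $i$ is assumed to control $a$; weights are omitted to focus on the arena):

```python
from itertools import chain, combinations

# Hypothetical game arena (illustrative, not the data of the running examples).
V = ["a", "b"]
E = [("a", "a"), ("a", "b"), ("b", "b"), ("b", "a")]
V_i = {"a"}  # states controlled by the deviating player i

# All memories M in 2^V, as frozensets.
memories = [frozenset(s) for s in chain.from_iterable(
    combinations(V, r) for r in range(len(V) + 1))]

# Prover states (v, M) and Challenger states (uv, M), as in Definition 21.
S_P = [(v, M) for v in V for M in memories]
S_C = [(e, M) for e in E for M in memories]

# Proposals: Prover proposes an outgoing edge; the memory is unchanged.
Prop = [((v, M), ((v, w), M)) for (v, w) in E for M in memories]
# Acceptations: Challenger follows the proposed edge; the memory grows.
Acc = [(((u, w), M), (w, M | {w})) for (u, w) in E for M in memories]
# Deviations: Challenger picks another edge from a state of player i;
# the memory is reset to the new state only.
Dev = [(((u, v), M), (w, frozenset({w})))
       for (u, v) in E for M in memories
       for (u2, w) in E if u2 == u and u in V_i and w != v]

print(len(S_P), len(S_C), len(Prop), len(Acc), len(Dev))
```

On this arena, each of the four edges exists for each of the four memories, and only the two edges leaving $a$ admit a deviation, which makes the exponential blow-up in $|V|$ visible even at this tiny size.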
We can now prove that in the concrete negotiation game constructed from a mean-payoff game, Challenger has an optimal strategy that is memoryless. ###### Lemma 2 (App. G). Let $G_{\upharpoonright v_{0}}$ be an initialized mean-payoff game, let $i$ be a player, let $\lambda$ be a requirement and let $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$ be the corresponding concrete negotiation game. There exists a memoryless strategy $\tau_{\mathbb{C}}$ that is optimal for Challenger, i.e. such that: $\underset{\tau_{\mathbb{P}}}{\inf}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})=\mathrm{val}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}\right).$ For every game $G_{\upharpoonright v_{0}}$ and each player $i$, $\mathrm{ML}_{i}\left(G_{\upharpoonright v_{0}}\right)$, or $\mathrm{ML}\left(G_{\upharpoonright v_{0}}\right)$ when the context is clear, denotes the set of memoryless strategies for player $i$ in $G_{\upharpoonright v_{0}}$. When $(V,E)$ is a graph, $\mathrm{SC}(V,E)$ denotes the set of its simple cycles, and $\mathrm{SConn}(V,E)$ the set of its strongly connected components. For any closed set $C\subseteq\mathbb{R}^{\Pi\cup\\{\star\\}}$, the quantity $\min\\!^{\star}C=\min\left\\{x_{\star}\leavevmode\nobreak\ |\leavevmode\nobreak\ \bar{x}\in C,\forall j\in\Pi,x_{j}\geq 0\right\\}$ is the _$\star$ -minimum_ of $C$: it will capture, in the concrete negotiation game, the least payoff that can be imposed on player $i$ while keeping every player’s payoff above their requirements, among a set of possible outcomes. With Lemma 2, we can now solve the concrete negotiation game. ###### Lemma 3 (App. H). Let $G_{\upharpoonright v_{0}}$ be an initialized mean-payoff game, and let $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$ be its concrete negotiation game for some $\lambda$ and some $i$. 
Then, the value of the game $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$ is given by the formula: $\max_{\tau_{\mathbb{C}}\in\mathrm{ML}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)\right)}\leavevmode\nobreak\ \min_{\scriptsize{\begin{matrix}K\in\mathrm{SConn}\left(\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }s_{0}\end{matrix}}}\mathrm{opt}(K),$ where $\mathrm{opt}(K)$ is the minimal value $\nu_{\mathbb{C}}(\rho)$ over the infinite paths $\rho$ in $K$. If $K$ contains a deviation, then Prover can choose among its simple cycles the one that minimizes player $i$’s payoff: $\mathrm{opt}(K)=\underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega}).$ If $K$ does not contain a deviation, then Prover must choose a combination of its simple cycles that minimizes the main dimension while keeping the other dimensions above $0$: $\mathrm{opt}(K)=\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\leavevmode\nobreak\ \hat{\mu}(c^{\omega}).$ ###### Corollary 1. For each player $i$ and every state $v\in V_{i}$, the value $\mathrm{nego}(\lambda)(v)$ can be computed with the formula given in Lemma 3, applied to the game $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright(v,\\{v\\})}$. Another corollary of that result is that there always exists a best play that Prover can choose, i.e. Prover has an optimal strategy; by Theorem 3, this is equivalent to saying that: ###### Corollary 2. Mean-payoff games are games with steady negotiation. ## 6 Analysis of the negotiation function in mean-payoff games When one wants to compute the least fixed point of a function, the usual method is to iterate it from the minimal element of the considered lattice until that fixed point is reached. That approach suffices in many simple examples.
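The iteration just described, together with the $\varepsilon$-fixed-point test of Definition 20, can be sketched generically. The map `f` below is a toy monotone, non-decreasing stand-in for $\mathrm{nego}$, not the negotiation function of any of the examples, and in general the loop may not terminate, hence the step bound:

```python
def is_eps_fixed_point(f, x, eps):
    """Definition 20: every coordinate of f(x) lies within eps of x."""
    y = f(x)
    return all(abs(y[d] - x[d]) <= eps for d in x)

def iterate_from_bottom(f, bottom, eps, max_steps=100):
    """Iterate a monotone, non-decreasing map f from the least element,
    stopping at the first eps-fixed point reached (the iteration may not
    terminate in general, hence the step bound)."""
    x = bottom
    for _ in range(max_steps):
        if is_eps_fixed_point(f, x, eps):
            return x
        x = f(x)
    return None

# Toy monotone map on requirements over two states; its least fixed point
# is (1, 1).  This is NOT the nego function of the running examples.
f = lambda x: {"a": max(1.0, min(x["b"], 2.0)),
               "b": max(1.0, min(x["a"], 2.0))}
bottom = {"a": float("-inf"), "b": float("-inf")}
print(iterate_from_bottom(f, bottom, eps=0.0))
```

Starting from the vacuous requirement (both coordinates $-\infty$), one application of `f` already yields the least fixed point here; the next paragraph explains why such finite convergence cannot be relied upon for $\mathrm{nego}$ itself.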
In Appendix I, we present its technical details, and an example on which it fails to reach the least fixed point in a finite number of iterations, which is why another approach is necessary. In this section, we will show that, in the case of mean-payoff games, the negotiation function is a piecewise linear function from the vector space of requirements into itself, which can therefore be computed and analyzed using classical linear algebra techniques. It then becomes possible to search for the fixed points or the $\varepsilon$-fixed points of such a function, and to decide whether SPEs or $\varepsilon$-SPEs exist in the game under study. ###### Theorem 4 (App. K). Let $G$ be a mean-payoff game. Let us identify any requirement $\lambda$ on $G$ with finite values with the tuple $\bar{\lambda}=(\lambda(v))_{v\in V}$, an element of the vector space $\mathbb{R}^{V}$. Then, for each player $i$ and every vertex $v_{0}\in V_{i}$, the quantity $\mathrm{nego}(\lambda)(v_{0})$ is a piecewise linear function of $\bar{\lambda}$, and an effective expression of that function can be computed in 2-ExpTime. ###### Example 7. Let us consider the game of Example 3. If a requirement $\lambda$ is represented by the tuple $(\lambda(a),\lambda(b))$, the function $\mathrm{nego}:\mathbb{R}^{2}\to\mathbb{R}^{2}$ can be represented by Figure 3(a), where in each region delimited by the dashed lines we give a formula for the couple $(\mathrm{nego}(\lambda)(a),\mathrm{nego}(\lambda)(b))$. The orange area indicates the fixed points of the function, and the yellow area the other $\frac{1}{2}$-fixed points.
Figure 3: The negotiation function on the games of Examples 3 and 2. (a) Example 3: the regions carry the formulas $(1,1)$, $(1,\lambda(b))$, $(\lambda(a),1)$, $(2\lambda(b)-2,\lambda(b))$, $(\lambda(a),2\lambda(a)-2)$, $(\lambda(a),\lambda(b))$ and $(+\infty,+\infty)$. (b) Example 2: the regions carry the values $(1,2)$, $(2,2)$, $(2,3)$, $(1,3)$ and $(+\infty,+\infty)$. ###### Example 8. Now, let us consider the game of Example 2. Let us fix $\lambda(c)=1$ and $\lambda(d)=2$, and represent the requirements $\lambda$ by the tuples $(\lambda(a),\lambda(b))$, as in the previous example. Then, the negotiation function can be represented as in Figure 3(b). One can check that there is no fixed point here, and even no $\frac{1}{2}$-fixed point — except $(+\infty,+\infty)$. ## 7 Conclusion: algorithm and complexity Thanks to all the previous results, we are now able to compute the least fixed point, or the least $\varepsilon$-fixed point, of the negotiation function on every mean-payoff game, and to use it as a characterization of all the SPEs or all the $\varepsilon$-SPEs. A direct application is an algorithm that solves the _$\varepsilon$-SPE constrained existence problem_, i.e. that decides, given an initialized mean-payoff game $G_{\upharpoonright v_{0}}$, two thresholds $\bar{x},\bar{y}\in\mathbb{Q}^{\Pi}$, and a rational number $\varepsilon\geq 0$, whether there exists an $\varepsilon$-SPE $\bar{\sigma}$ such that $\bar{x}\leq\mu(\langle\bar{\sigma}\rangle_{v_{0}})\leq\bar{y}$. We leave the optimal complexity of that problem for future work. However, we can easily prove that it cannot be solved in polynomial time, unless $\mathbf{P=NP}$. ###### Theorem 5 (App. L). The $\varepsilon$-SPE constrained existence problem is NP-hard. Given $G_{\upharpoonright v_{0}}$, by Theorem 4, computing a general expression of the negotiation function as a piecewise linear function can be done in time double exponential in the size of $G$.
Then, for each linear piece of $\mathrm{nego}$, computing its set of $\varepsilon$-fixed points is a polynomial problem. Since the number of pieces is at most double exponential in the size of $G$, computing its entire set of fixed points, and thus its least $\varepsilon$-fixed point $\lambda$, can be done in double exponential time. Then, from the requirement $\lambda$ and the thresholds $\bar{x}$ and $\bar{y}$, we can construct a multi-mean-payoff automaton $\mathcal{A}_{\lambda}$ of exponential size that accepts an infinite word $\rho\in V^{\omega}$ if and only if $\rho$ is a $\lambda$-consistent play of $G_{\upharpoonright v_{0}}$ and $\bar{x}\leq\mu(\rho)\leq\bar{y}$ — see Appendix M for the construction of $\mathcal{A}_{\lambda}$. Finally, by Theorem 2, there exists an $\varepsilon$-SPE $\bar{\sigma}$ in $G_{\upharpoonright v_{0}}$ with $\bar{x}\leq\mu(\langle\bar{\sigma}\rangle_{v_{0}})\leq\bar{y}$ if and only if the language of the automaton $\mathcal{A}_{\lambda}$ is nonempty, which can be decided in time polynomial in the size of $\mathcal{A}_{\lambda}$ (see for example [1]), i.e. in time exponential in the size of $G$. We can therefore conclude with the following result: ###### Theorem 6. The $\varepsilon$-SPE constrained existence problem is decidable and 2-ExpTime-easy. ## References * [1] Rajeev Alur, Aldric Degorre, Oded Maler, and Gera Weiss. On omega-languages defined by mean-payoff conditions. In Luca de Alfaro, editor, Foundations of Software Science and Computational Structures, 12th International Conference, FOSSACS 2009, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2009, York, UK, March 22-29, 2009. Proceedings, volume 5504 of Lecture Notes in Computer Science, pages 333–347. Springer, 2009. doi:10.1007/978-3-642-00596-1\\_24. * [2] Romain Brenguier, Lorenzo Clemente, Paul Hunter, Guillermo A. Pérez, Mickael Randour, Jean-François Raskin, Ocan Sankur, and Mathieu Sassolas.
The following appendices provide detailed proofs of all our results. They are not necessary for understanding the results, but give full formalization and rigorous proofs. They also provide further intuition through additional examples for the interested reader.
To improve readability, we recall each statement from the main body of the paper before giving its detailed proof. ## Appendix A Proof of Theorem 1 Theorem 1. _Let $G$ be a game with steady negotiation. Then, a play $\rho$ in $G$ is an NE play if and only if $\rho$ is $\mathrm{nego}(\lambda_{0})$-consistent._ ###### Proof. * • Let $\bar{\sigma}$ be a Nash equilibrium in $G_{\upharpoonright v_{0}}$, for some state $v_{0}$, and let $\rho=\langle\bar{\sigma}\rangle_{v_{0}}$: let us prove that the play $\rho$ is $\mathrm{nego}(\lambda_{0})$-consistent. Let $k\in\mathbb{N}$, let $i\in\Pi$ be such that $\rho_{k}\in V_{i}$, and let us prove that $\mu_{i}\left(\rho_{k}\rho_{k+1}\dots\right)\geq\mathrm{nego}(\lambda_{0})(\rho_{k})$. For any deviation $\sigma^{\prime}_{i}$ of $\sigma_{i\upharpoonright\rho_{0}\dots\rho_{k}}$, by definition of NEs, $\mu_{i}\left(\langle\bar{\sigma}_{-i\upharpoonright\rho_{0}\dots\rho_{k}},\sigma^{\prime}_{i}\rangle_{\rho_{k}}\right)\leq\mu_{i}(\rho)$. Therefore: $\mu_{i}(\rho)\geq\sup_{\sigma^{\prime}_{i}}\leavevmode\nobreak\ \mu_{i}\left(\langle\bar{\sigma}_{-i\upharpoonright\rho_{0}\dots\rho_{k}},\sigma^{\prime}_{i}\rangle_{\rho_{k}}\right)$ hence: $\mu_{i}(\rho)\geq\inf_{\bar{\tau}_{-i}}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}\left(\langle\bar{\tau}_{-i\upharpoonright\rho_{0}\dots\rho_{k}},\tau_{i}\rangle_{\rho_{k}}\right)$ i.e.: $\mu_{i}(\rho)\geq\mathrm{nego}(\lambda_{0})(\rho_{k}).$ * • Let $\rho$ be a $\mathrm{nego}(\lambda_{0})$-consistent play from a state $v_{0}$. Let us define a strategy profile $\bar{\sigma}$ such that $\langle\bar{\sigma}\rangle_{v_{0}}=\rho$, by: * – $\langle\bar{\sigma}\rangle_{v_{0}}=\rho$; * – for all histories of the form $\rho_{0}\dots\rho_{k}v$ with $v\neq\rho_{k+1}$, let $i$ be the player controlling $\rho_{k}$.
Since the game $G$ is with steady negotiation, the infimum: $\inf_{\bar{\tau}_{-i}\in\lambda_{0}\mathrm{Rat}(\rho_{k})}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{\rho_{k}})$ is a minimum. Let $\bar{\tau}^{k}_{-i}$ be a $\lambda_{0}$-rational strategy profile from $\rho_{k}$ realizing that minimum, and let $\tau^{k}_{i}$ be some strategy from $\rho_{k}$ such that $\tau^{k}_{i}(\rho_{k})=v$. Then, we define: $\langle\bar{\sigma}_{\upharpoonright\rho_{0}\dots\rho_{k}v}\rangle_{v}=\langle\bar{\tau}^{k}_{\upharpoonright\rho_{k}v}\rangle_{v};$ * – for every other history $h$, $\bar{\sigma}(h)$ is defined arbitrarily. Let us prove that $\bar{\sigma}$ is an NE: let $\sigma^{\prime}_{i}$ be a deviation of $\sigma_{i}$, let $\rho^{\prime}=\langle\bar{\sigma}_{-i},\sigma^{\prime}_{i}\rangle_{v_{0}}$ and let $\rho_{0}\dots\rho_{k}$ be the longest common prefix of $\rho$ and $\rho^{\prime}$. Let $v=\rho^{\prime}_{k+1}$. Then, we have: $\mu_{i}(\rho^{\prime})\leq\sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}\left(\langle\bar{\tau}^{k}_{-i},\tau_{i}\rangle_{\rho_{k}}\right)=\mathrm{nego}(\lambda_{0})(\rho_{k}),$ and since $\rho$ is $\mathrm{nego}(\lambda_{0})$-consistent, $\mathrm{nego}(\lambda_{0})(\rho_{k})\leq\mu_{i}(\rho)$, hence $\mu_{i}(\rho^{\prime})\leq\mu_{i}(\rho)$. ∎ ## Appendix B Proof of Lemma 1 Lemma 1. _Let $G$ be a game, and let $\varepsilon\geq 0$. The negotiation function has a least $\varepsilon$-fixed point._ ###### Proof. The following proof is a generalization of a classical proof of Tarski’s fixed point theorem. Let $\Lambda$ be the set of the $\varepsilon$-fixed points of the negotiation function. The set $\Lambda$ is not empty, since it contains at least the requirement $v\mapsto+\infty$.
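For intuition, the construction that follows can be checked exhaustively in a toy setting. The sketch below (not part of the proof) replaces the negotiation function by a hypothetical monotone operator on two states with values in a small finite grid, enumerates the set $\Lambda$ of $\varepsilon$-fixed points, and verifies that the pointwise infimum of $\Lambda$ is again an $\varepsilon$-fixed point:

```python
from itertools import product

STATES = ["u", "v"]
VALUES = [0, 1, 2, 3]  # finite stand-in for the real value domain

def nego(lam):
    # Hypothetical monotone operator standing in for the negotiation
    # function (lam <= lam' pointwise implies nego(lam) <= nego(lam')).
    return {"u": min(lam["v"] + 1, 3), "v": lam["u"]}

def is_eps_fixed_point(lam, eps):
    # lam is an eps-fixed point when nego(lam)(s) <= lam(s) + eps everywhere.
    return all(nego(lam)[s] <= lam[s] + eps for s in STATES)

def least_eps_fixed_point(eps):
    # Mirror the proof: collect the set Lambda of all eps-fixed points
    # and take its pointwise infimum.
    Lambda = [lam
              for vals in product(VALUES, repeat=len(STATES))
              for lam in [dict(zip(STATES, vals))]
              if is_eps_fixed_point(lam, eps)]
    assert Lambda  # never empty: the top requirement always belongs to it
    lam_star = {s: min(lam[s] for lam in Lambda) for s in STATES}
    assert is_eps_fixed_point(lam_star, eps)  # the lemma's conclusion
    return lam_star

print(least_eps_fixed_point(0))  # {'u': 3, 'v': 3}
print(least_eps_fixed_point(1))  # {'u': 0, 'v': 0}
```

In this toy instance the least $0$-fixed point is the top requirement, while the least $1$-fixed point is the bottom one, illustrating how relaxing $\varepsilon$ enlarges $\Lambda$.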
Let $\lambda^{*}$ be the requirement defined by: $\lambda^{*}:v\mapsto\inf_{\lambda\in\Lambda}\lambda(v).$ For every $\varepsilon$-fixed point $\lambda$ of the negotiation function, we then have, for each $v$, $\lambda^{*}(v)\leq\lambda(v)$, and $\mathrm{nego}(\lambda^{*})(v)\leq\mathrm{nego}(\lambda)(v)$ since $\mathrm{nego}$ is monotone; and therefore, $\mathrm{nego}(\lambda^{*})(v)\leq\lambda(v)+\varepsilon$. As a consequence, we have: $\mathrm{nego}(\lambda^{*})(v)\leq\inf_{\lambda\in\Lambda}\lambda(v)+\varepsilon=\lambda^{*}(v)+\varepsilon.$ The requirement $\lambda^{*}$ is an $\varepsilon$-fixed point of the negotiation function, and is therefore the least $\varepsilon$-fixed point of the negotiation function. ∎ ## Appendix C Proof of Theorem 2 Theorem 2. _Let $G_{\upharpoonright v_{0}}$ be an initialized prefix-independent game, and let $\varepsilon\geq 0$. Let $\lambda^{*}$ be the least $\varepsilon$-fixed point of the negotiation function. Let $\xi$ be a play starting in $v_{0}$. If there exists an $\varepsilon$-SPE $\bar{\sigma}$ such that $\langle\bar{\sigma}\rangle_{v_{0}}=\xi$, then $\xi$ is $\lambda^{*}$-consistent. The converse is true if the game $G$ is with steady negotiation._ ###### Proof. First, let us recall that $\lambda^{*}$ exists by Lemma 1. Then, our proof can be decomposed into two lemmas: ###### Lemma 4. Let $G_{\upharpoonright v_{0}}$ be a well-initialized prefix-independent game, and let $\varepsilon\geq 0$. Let $\bar{\sigma}$ be an $\varepsilon$-SPE in $G_{\upharpoonright v_{0}}$. Then, there exists an $\varepsilon$-fixed point $\lambda$ of the negotiation function such that for every history $hv$ starting in $v_{0}$, the play $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}$ is $\lambda$-consistent. ###### Proof.
Let us define the requirement $\lambda$ by, for each $i\in\Pi$ and $v\in V_{i}$: $\lambda(v)=\inf_{hv\in\mathrm{Hist}G_{\upharpoonright v_{0}}}\mu_{i}(\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}).$ Note that the set $\left\\{\mu_{i}(\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v})\leavevmode\nobreak\ |\leavevmode\nobreak\ hv\in\mathrm{Hist}G_{\upharpoonright v_{0}}\right\\}$ is never empty, since the game $G_{\upharpoonright v_{0}}$ is well-initialized. Then, for every history $hv$ starting in $v_{0}$, the play $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}$ is $\lambda$-consistent. Let us prove that $\lambda$ is an $\varepsilon$-fixed point of $\mathrm{nego}$: let $i\in\Pi$, let $v\in V_{i}$, and let us assume towards contradiction (since the negotiation function is non-decreasing) that $\mathrm{nego}(\lambda)(v)>\lambda(v)+\varepsilon$, that is to say: $\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(v)}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{v})>\inf_{hv\in\mathrm{Hist}G_{\upharpoonright v_{0}}}\mu_{i}(\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v})+\varepsilon.$ Then, since all the plays generated by the strategy profile $\bar{\sigma}$ are $\lambda$-consistent, and therefore since any strategy profile of the form $\bar{\sigma}_{-i\upharpoonright hv}$ is $\lambda$-rational, we have: $\inf_{hv}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}_{-i\upharpoonright hv},\tau_{i}\rangle_{v})>\inf_{hv}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v})+\varepsilon.$ Therefore, there exists a history $hv$ such that: $\sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}_{-i\upharpoonright hv},\tau_{i}\rangle_{v})>\mu_{i}(\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v})+\varepsilon,$ which is impossible if the strategy profile $\bar{\sigma}$ is an $\varepsilon$-SPE. 
Therefore, there is no such $v$, and the requirement $\lambda$ is an $\varepsilon$-fixed point of the negotiation function. ∎ ###### Lemma 5. Let $G_{\upharpoonright v_{0}}$ be a well-initialized prefix-independent game with steady negotiation, and let $\varepsilon\geq 0$. Let $\lambda$ be an $\varepsilon$-fixed point of the function $\mathrm{nego}$. Then, for every $\lambda$-consistent play $\xi$ starting in $v_{0}$, there exists an $\varepsilon$-SPE $\bar{\sigma}$ such that $\langle\bar{\sigma}\rangle_{v_{0}}=\xi$. ###### Proof. * • _Particular case: if there exists $v$ such that $\lambda(v)=+\infty$._ In that case, for each $u$ such that $uv\in E$, if the player controlling $u$ chooses to go to $v$, no $\lambda$-consistent play can be proposed to them from there, hence there is no $\lambda$-rational strategy profile against that player from $u$, and $\mathrm{nego}(\lambda)(u)=+\infty$. Since $\varepsilon$ is finite and since $\lambda$ is an $\varepsilon$-fixed point of the negotiation function, it follows that $\lambda(u)=+\infty$. Since $G_{\upharpoonright v_{0}}$ is well-initialized, we can repeat this argument and show that $\lambda(v_{0})=+\infty$; in that case, there is no $\lambda$-consistent play $\xi$ from $v_{0}$, and the proof is done. Therefore, for the rest of the proof, we assume that for all $v$, we have $\lambda(v)\neq+\infty$. As a consequence, since $\lambda$ is an $\varepsilon$-fixed point of the function $\mathrm{nego}$, for all $v$, we have $\mathrm{nego}(\lambda)(v)\neq+\infty$; and so finally, for each such $v$, there exists a $\lambda$-consistent play starting from $v$.
* • _Preliminary result: a game with steady negotiation is also with subgame-steady negotiation._ Recall that since $G$ is a game with steady negotiation, for every requirement $\lambda$, for every player $i$ and for every state $v$, there exists a $\lambda$-rational strategy profile $\bar{\tau}^{v}$ such that: $\sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}^{v}_{-i},\tau_{i}\rangle_{v})=\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(v)}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{v})$ i.e. there exists a worst $\lambda$-rational strategy profile against player $i$ from the state $v$, with regard to player $i$’s payoff. Our goal in this part of the proof is to show that $G$ is then also with _subgame-steady negotiation_, that is to say, for every requirement $\lambda$, for every player $i$ and for every state $v$, there exists a $\lambda$-rational strategy profile $\bar{\tau}^{v*}_{-i}$ such that for every history $hw$ starting from $v$ compatible with $\bar{\tau}^{v*}_{-i}$, we have: $\sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}^{v*}_{-i\upharpoonright hw},\tau_{i}\rangle_{w})=\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(w)}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{w}),$ i.e. there exists a $\lambda$-rational strategy profile against player $i$ from the state $v$ that is the worst with regard to player $i$’s payoff in any subgame, in other words a _subgame-worst_ strategy profile. Let us construct inductively the strategy profile $\bar{\tau}^{v*}_{-i}$ and the strategy $\tau^{v*}_{i}$ assuming which it is $\lambda$-rational. We define them only on histories that are compatible with $\bar{\tau}^{v*}_{-i}$, since they can be defined arbitrarily on any other histories.
We proceed by assembling the strategy profiles of the form $\bar{\tau}^{w}$, and the histories after which we follow a new $\bar{\tau}^{w}$ will be called the _resets_ of $\bar{\tau}^{v*}_{-i}$. * – First, $\langle\bar{\tau}^{v*}\rangle_{v}=\langle\bar{\tau}^{v}\rangle_{v}$: the one-state history $v$ is then the first reset of $\bar{\tau}^{v*}_{-i}$; * – then, for every history $hw$ from $v$ such that $h$ is compatible with $\bar{\tau}^{v*}_{-i}$ and ends in $V_{i}$, and such that $w\neq\tau^{v*}_{i}(h)$: let us write $hw=h^{\prime}uh^{\prime\prime}$ so that $h^{\prime}u$ is the longest reset of $\bar{\tau}^{v*}_{-i}$ among the prefixes of $h$, and therefore so that the strategy profile $\bar{\tau}^{v*}_{\upharpoonright h^{\prime}u}$ has been defined as equal to $\bar{\tau}^{u}$ over the prefixes of $h^{\prime\prime}$ until $w$. Then, we have: $\sup_{\tau_{i}}\mu_{i}(\langle\bar{\tau}^{w}_{-i},\tau_{i}\rangle_{w})\leq\sup_{\tau_{i}}\mu_{i}(\langle\bar{\tau}^{u}_{-i\upharpoonright uh^{\prime\prime}},\tau_{i}\rangle_{w})$ by prefix-independence of $G$ and since, by its definition, the strategy profile $\bar{\tau}^{w}_{-i}$ minimizes the quantity $\sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}^{w}_{-i},\tau_{i}\rangle_{w})$. Let us distinguish two cases. * * Suppose first that: $\sup_{\tau_{i}}\mu_{i}(\langle\bar{\tau}^{w}_{-i},\tau_{i}\rangle_{w})=\sup_{\tau_{i}}\mu_{i}(\langle\bar{\tau}^{u}_{-i\upharpoonright uh^{\prime\prime}},\tau_{i}\rangle_{w}).$ Then, $\langle\bar{\tau}^{v*}_{\upharpoonright hw}\rangle_{w}=\langle\bar{\tau}^{u}_{\upharpoonright uh^{\prime\prime}}\rangle_{w}$: the coalition of players against player $i$ keeps following their strategy profile so that player $i$ will have no more than the payoff they can ensure.
* * Suppose now that: $\sup_{\tau_{i}}\mu_{i}(\langle\bar{\tau}^{w}_{-i},\tau_{i}\rangle_{w})<\sup_{\tau_{i}}\mu_{i}(\langle\bar{\tau}^{u}_{-i\upharpoonright uh^{\prime\prime}},\tau_{i}\rangle_{w}).$ Then, $\langle\bar{\tau}^{v*}_{\upharpoonright hw}\rangle_{w}=\langle\bar{\tau}^{w}\rangle_{w}$: player $i$ has done something that lowers the payoff they can ensure, and therefore the other players have to update their strategy profile in order to enforce that new minimum. The history $hw$ is a reset of $\bar{\tau}^{v*}_{-i}$. All the plays constructed are $\lambda$-consistent, hence $\bar{\tau}^{v*}_{-i}$ is indeed $\lambda$-rational assuming $\tau^{v*}_{i}$. Let us now prove that $\bar{\tau}^{v*}_{-i}$ is a subgame-worst $\lambda$-rational strategy profile against player $i$. Let $hw$ be a history starting in $v$ compatible with $\bar{\tau}^{v*}_{-i}$, let $\tau^{\prime}_{i}$ be a strategy from the state $w$, let $\eta=\langle\bar{\tau}^{v*}_{-i\upharpoonright hw},\tau^{\prime}_{i}\rangle_{w}$ and let us prove that: $\mu_{i}(\eta)\leq\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(w)}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{w}).$ Let us consider the sequence $(\alpha_{n})_{n\in\mathbb{N}}$, defined by: $\alpha_{n}=\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(\eta_{n})}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{\eta_{n}}).$ That sequence is non-increasing. Indeed, for all $n$: * * If $\eta_{n}\in V_{i}$, then no action of player $i$ can improve the payoff player $i$ themself can secure against a $\lambda$-rational environment. * * If $\eta_{n}\not\in V_{i}$, then: $\eta_{n+1}=\bar{\tau}^{v*}_{-i}(h\eta_{0}\dots\eta_{n})=\bar{\tau}^{\eta_{k}}_{-i}(\eta_{k}\dots\eta_{n})$ for some $k$ such that, by construction of $\bar{\tau}^{v*}_{-i}$, $\alpha_{k}=\dots=\alpha_{n}$.
Since the strategy profile $\bar{\tau}^{\eta_{k}}_{-i}$ is defined to realize the payoff $\alpha_{k}=\alpha_{n}$, we have $\alpha_{n+1}=\alpha_{n}$. Moreover, that sequence can only take a finite number of values (at most $\mathrm{card}V$). Therefore, it is stationary: there exists $n_{0}\in\mathbb{N}$ such that $(\alpha_{n})_{n\geq n_{0}}$ is constant, and there are no resets of $\bar{\tau}^{v*}_{-i}$ among the prefixes of $\eta$ of length greater than $n_{0}$. Therefore, if we choose $n_{0}$ minimal (i.e., $n_{0}$ is the index of the last reset in $\eta$), then the play $\eta_{n_{0}}\eta_{n_{0}+1}\dots$ is compatible with the strategy profile $\bar{\tau}^{\eta_{n_{0}}}_{-i}$. Then, we have: $\mu_{i}(\eta)\leq\alpha_{n_{0}}\leq\alpha_{0},$ and: $\alpha_{0}=\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(w)}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{w}),$ which proves that $\bar{\tau}^{v*}_{-i}$ is a subgame-worst $\lambda$-rational strategy profile against player $i$ from the state $v$, and therefore that the game $G$ is a game with subgame-steady negotiation. * • _Construction of $\bar{\sigma}$._ Let $\mathcal{H}_{0}=\mathrm{Hist}G_{\upharpoonright v_{0}}$. Let us construct inductively $\bar{\sigma}$ by defining all the plays $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}$, for $hv\in\mathcal{H}_{0}$, keeping the hypothesis that at any step $n$, the set $\mathcal{H}_{n}$ contains exactly the histories $hv$ such that the play $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}$ has not been defined yet, and that every play already defined is $\lambda$-consistent: this will define a $\lambda$-rational strategy profile, and we will then prove it is an $\varepsilon$-SPE. * – First, $\langle\bar{\sigma}\rangle_{v_{0}}=\xi$, which satisfies the induction hypothesis. We then remove all the finite prefixes of $\xi$ from $\mathcal{H}_{0}$ to obtain $\mathcal{H}_{1}$. Note that the only history of length $1$ has been removed.
* – At the $n$-th step, with $n>0$: let us choose $hv\in\mathcal{H}_{n}$ of minimal length, and therefore minimal for the prefix order: the strategy profile $\bar{\sigma}$ has been defined on all the strict prefixes of $hv$, but not on $hv$ itself, and $v\neq\bar{\sigma}(h)$. Then let $i$ be the player controlling the last state of $h$ (which exists since all the histories of $\mathcal{H}_{n}$ have length at least $2$). Let $\bar{\tau}^{v*}_{-i}$ be a subgame-worst $\lambda$-rational strategy profile against player $i$ from $v$, whose existence has been proved in the previous point, and let $\tau^{v*}_{i}$ be a strategy assuming which it is $\lambda$-rational. Then, we define $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}=\langle\bar{\tau}^{v*}\rangle_{v}$, and inductively, for every history $h^{\prime}w$ starting from $v$ and compatible with $\bar{\sigma}_{-i\upharpoonright hv}$ as it has been defined so far, we define $\langle\bar{\sigma}_{\upharpoonright hh^{\prime}w}\rangle_{w}=\langle\bar{\tau}^{v*}_{\upharpoonright h^{\prime}w}\rangle_{w}$. The strategy profile $\bar{\sigma}_{\upharpoonright hv}$ is then equal to $\bar{\tau}^{v*}$ on any history compatible with $\bar{\tau}^{v*}_{-i}$. We remove all such histories from $\mathcal{H}_{n}$ to obtain $\mathcal{H}_{n+1}$. All the plays we built are $\lambda$-consistent, which was our induction hypothesis. Since each step removes from $\mathcal{H}_{n}$ a history of minimal length, and since there are finitely many histories of any given length, we have $\bigcap_{n}\mathcal{H}_{n}=\emptyset$, and this process completely defines $\bar{\sigma}$. * • _Such $\bar{\sigma}$ is an $\varepsilon$-SPE._ Let $h^{(0)}w\in\mathrm{Hist}G_{\upharpoonright v_{0}}$, let $i\in\Pi$, let $\sigma^{\prime}_{i}$ be a deviation of $\sigma_{i}$.
Let $\rho=h^{(0)}\langle\bar{\sigma}_{\upharpoonright h^{(0)}w}\rangle_{w}$ and let $\rho^{\prime}=h^{(0)}\langle\sigma^{\prime}_{i\upharpoonright h^{(0)}w},\bar{\sigma}_{-i\upharpoonright h^{(0)}w}\rangle_{w}$. We prove that $\mu_{i}(\rho^{\prime})\leq\mu_{i}(\rho)+\varepsilon$. If $\rho^{\prime}$ is compatible with $\sigma_{i}$, then $\rho^{\prime}=\rho$ and the proof is immediate. If it is not, we let $huv$ denote the shortest prefix of $\rho^{\prime}$ such that $u\in V_{i}$ and $v\neq\sigma_{i}(hu)$. The transition $uv$ can be considered as the first deviation of player $i$, but note that $hu$ can be either longer or shorter than $h^{(0)}$: player $i$ may have already deviated in $h^{(0)}$. Be that as it may, the history $hu$ is a common prefix of the plays $\rho$ and $\rho^{\prime}$, and if $\bar{\tau}^{v*}_{-i}$ denotes a subgame-worst strategy profile against player $i$ from the state $v$, $\lambda$-rational assuming a strategy $\tau^{v*}_{i}$, then $\bar{\sigma}_{\upharpoonright huv}$ has been defined as equal to $\bar{\tau}^{v*}$ on any history compatible with $\bar{\sigma}_{-i\upharpoonright huv}$. * – If $huv$ is a prefix of $\rho$: let $huh^{\prime}w^{\prime}$ be the longest common prefix of $\rho$ and $\rho^{\prime}$. Necessarily, $w^{\prime}\in V_{i}$. Then, by definition of $\bar{\tau}^{v*}_{-i}$, we have: $\mu_{i}(\rho^{\prime})\leq\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(w^{\prime})}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{w^{\prime}})=\mathrm{nego}(\lambda)(w^{\prime}),$ and since $\lambda$ is an $\varepsilon$-fixed point of $\mathrm{nego}$: $\mu_{i}(\rho^{\prime})\leq\lambda(w^{\prime})+\varepsilon.$ On the other hand, the play $\langle\bar{\sigma}_{\upharpoonright h^{\prime}w^{\prime}}\rangle_{w^{\prime}}$, which is a suffix of $\rho$, is $\lambda$-consistent, hence $\mu_{i}(\rho)\geq\lambda(w^{\prime})$. Therefore, $\mu_{i}(\rho^{\prime})\leq\mu_{i}(\rho)+\varepsilon$.
* – If $huv$ is not a prefix of $\rho$: then, $\rho=h\langle\bar{\sigma}_{\upharpoonright hu}\rangle_{u}$. Since $u\in V_{i}$, we have: $\mathrm{nego}(\lambda)(u)=\sup_{uv^{\prime}\in E}\leavevmode\nobreak\ \inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(v^{\prime})}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{v^{\prime}}).$ In particular, we have: $\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(v)}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{v})\leq\mathrm{nego}(\lambda)(u)\leq\lambda(u)+\varepsilon.$ Then, for the same reason as above, we know that: $\mu_{i}(\rho^{\prime})\leq\inf_{\bar{\tau}_{-i}\in\lambda\mathrm{Rat}(v)}\leavevmode\nobreak\ \sup_{\tau_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\tau}\rangle_{v}).$ Finally, since the suffix $\langle\bar{\sigma}_{\upharpoonright hu}\rangle_{u}$ of $\rho$ is $\lambda$-consistent, we have $\mu_{i}(\rho)\geq\lambda(u)\geq\mathrm{nego}(\lambda)(u)-\varepsilon\geq\mu_{i}(\rho^{\prime})-\varepsilon$, hence $\mu_{i}(\rho^{\prime})\leq\mu_{i}(\rho)+\varepsilon$. The strategy profile $\bar{\sigma}$ is an $\varepsilon$-SPE. ∎ If $\bar{\sigma}$ is an $\varepsilon$-SPE, then by Lemma 4, there exists an $\varepsilon$-fixed point $\lambda$ of the negotiation function such that all the plays generated by $\bar{\sigma}$ after some history are $\lambda$-consistent; in particular, the play $\xi$ is $\lambda$-consistent, and therefore $\lambda^{*}$-consistent since $\lambda^{*}\leq\lambda$. Conversely, if the game $G$ is with steady negotiation, and if the play $\xi$ is $\lambda^{*}$-consistent, then by Lemma 5, there exists an $\varepsilon$-SPE $\bar{\sigma}$ such that $\langle\bar{\sigma}\rangle_{v_{0}}=\xi$. ∎ ## Appendix D Abstract negotiation game ###### Definition 24 (Abstract negotiation game). Let $G_{\upharpoonright v_{0}}$ be an initialized game, let $i\in\Pi$, and let $\lambda$ be a requirement on $G$.
The _abstract negotiation game_ of $G_{\upharpoonright v_{0}}$ for player $i$ with requirement $\lambda$ is the two-player zero-sum initialized game: $\mathrm{Abs}_{\lambda i}(G)_{\upharpoonright[v_{0}]}=\left(\\{\mathbb{P},\mathbb{C}\\},S,(S_{\mathbb{P}},S_{\mathbb{C}}),\Delta,\nu\right)_{\upharpoonright[v_{0}]},$ where: * • $\mathbb{P}$ denotes the player _Prover_ and $\mathbb{C}$ the player _Challenger_ ; * • the states of $S_{\mathbb{C}}$ are written $[\rho]$, where $\rho$ is a $\lambda$-consistent play in $G$; * • the states of $S_{\mathbb{P}}$ are written $[hv]$, where $hv$ is a history in $G$, with $h\in\mathrm{Hist}_{i}(G)$, or $[v]$ with $v\in V$, plus two additional states $\top$ and $\bot$; * • the set $\Delta$ contains the transitions of the form: * – $[hv][v\rho]$, where $[hv]\in S_{\mathbb{P}}$ and $[v\rho]\in S_{\mathbb{C}}$ (Prover proposes a play); * – $[\rho][\rho_{0}...\rho_{n}v]$, where $[\rho]\in S_{\mathbb{C}},n\in\mathbb{N},\rho_{n}\in V_{i}$, and $v\neq\rho_{n+1}$ (Challenger makes player $i$ deviate); * – $[\rho]\top$, where $[\rho]\in S_{\mathbb{C}}$ (Challenger accepts the proposed play); * – $\top\top$ (the game is over); * – $[hv]\bot$ (Prover has no more play to propose); * – $\bot\bot$ (the game is over). 
* • $\nu$ is the outcome function defined by, for all $\rho^{(0)},\rho^{(1)},\dots,h^{(1)}v_{1},h^{(2)}v_{2},\dots,k,H$: $\begin{aligned}\nu_{\mathbb{C}}\left([v_{0}]\left[\rho^{(0)}\right]\left[h^{(1)}v_{1}\right]\left[\rho^{(1)}\right]\dots\left[h^{(k)}v_{k}\right]\left[\rho^{(k)}\right]\top^{\omega}\right)&=\mu_{i}\left(h^{(1)}\dots h^{(k)}\rho^{(k)}\right),\\ \nu_{\mathbb{C}}\left([v_{0}]\left[\rho^{(0)}\right]\left[h^{(1)}v_{1}\right]\left[\rho^{(1)}\right]\dots\left[h^{(n)}v_{n}\right]\left[\rho^{(n)}\right]\dots\right)&=\mu_{i}\left(h^{(1)}h^{(2)}\dots\right),\\ \nu_{\mathbb{C}}\left(H\bot^{\omega}\right)&=+\infty,\end{aligned}$ and by $\nu_{\mathbb{P}}=-\nu_{\mathbb{C}}$. ###### Proposition 2. Let $G_{\upharpoonright v_{0}}$ be an initialized Borel game, let $\lambda$ be a requirement on $G$ and let $i\in\Pi$. Then, the corresponding abstract negotiation game satisfies: $\inf_{\tau_{\mathbb{P}}}\leavevmode\nobreak\ \sup_{\tau_{\mathbb{C}}}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{[v_{0}]})=\inf_{\bar{\sigma}_{-i}\in\lambda\mathrm{Rat}(v_{0})}\leavevmode\nobreak\ \sup_{\sigma_{i}}\leavevmode\nobreak\ \mu_{i}\left(\langle\bar{\sigma}\rangle_{v_{0}}\right).$ ###### Proof. Let $\alpha\in\mathbb{R}$, and let us prove that the following statements are equivalent: 1. 1. there exists a strategy $\tau_{\mathbb{P}}$ such that for every strategy $\tau_{\mathbb{C}}$, $\nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{[v_{0}]})<\alpha$; 2. 2. there exists a $\lambda$-rational strategy profile $\bar{\sigma}_{-i}$ in the game $G_{\upharpoonright v_{0}}$ such that for every strategy $\sigma_{i}$, we have $\mu_{i}\left(\langle\bar{\sigma}\rangle_{v_{0}}\right)<\alpha$. * • _(1) implies (2)._ Let $\tau_{\mathbb{P}}$ be such that for every strategy $\tau_{\mathbb{C}}$, $\nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{[v_{0}]})<\alpha$.
In what follows, any history $h$ compatible with an already defined strategy profile $\bar{\sigma}_{-i}$ in $G_{\upharpoonright v_{0}}$ will be decomposed as: $h=v_{0}h^{(0)}v_{1}h^{(1)}\dots h^{(n-1)}v_{n}h^{(n)},$ so that there exist plays $\rho^{(0)},\dots,\rho^{(n-1)},\eta$ and a history: $[v_{0}]\left[\rho^{(0)}\right]\left[v_{1}h^{(1)}v_{2}\right]\dots\left[v_{n-1}h^{(n-1)}v_{n}\right]\left[v_{n}h^{(n)}\eta\right]$ in the game $\mathrm{Abs}_{\lambda i}(G)$ compatible with $\tau_{\mathbb{P}}$: the existence and uniqueness of this decomposition can be proved by induction. Intuitively, the history $h$ is cut into histories which are prefixes of plays that can be proposed by Prover. Then, let us define inductively the strategy profile $\bar{\sigma}_{-i}$ by, for every $h$ such that $\bar{\sigma}_{-i}$ has been defined on the prefixes of $h$, and such that the last state of $h$ is not controlled by player $i$, $\bar{\sigma}_{-i}(h)=\eta_{0}$ with $\eta$ defined from $h$ as above. Let us prove that $\bar{\sigma}_{-i}$ is the desired strategy profile. * – _The strategy profile $\bar{\sigma}_{-i}$ is $\lambda$-rational._ Let us define $\sigma_{i}$ so that for every history $hv$ compatible with $\bar{\sigma}_{-i}$, the play $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}$ is $\lambda$-consistent. For any history: $h=v_{0}h^{(0)}v_{1}h^{(1)}\dots h^{(n-1)}v_{n}h^{(n)}$ compatible with $\bar{\sigma}_{-i}$ and ending in $V_{i}$, let $\sigma_{i}(h)=\eta_{0}$ with $\eta$ corresponding to the decomposition of $h$, so that by induction: $\langle\bar{\sigma}_{\upharpoonright v_{0}h^{(0)}v_{1}h^{(1)}\dots h^{(n-1)}v_{n}}\rangle_{v_{n}}=v_{n}h^{(n)}\eta.$ Let now $hv$ be a history in $G_{\upharpoonright v_{0}}$, and let us show that the play $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}$ is $\lambda$-consistent.
If we decompose: $hv=v_{0}h^{(0)}v_{1}h^{(1)}\dots h^{(n-1)}v_{n}h^{(n)}$ with the same definition of $\eta$ (note that the vertex $v$ is now included in the decomposition), then $\langle\bar{\sigma}_{\upharpoonright hv}\rangle_{v}=v\eta$, and by definition of the abstract negotiation game, $v_{n}h^{(n)}\eta$ is a $\lambda$-consistent play, and therefore so is $v\eta$. * – _The strategy profile $\bar{\sigma}_{-i}$ keeps player $i$’s payoff under the value $\alpha$._ Let $\sigma_{i}$ be a strategy for player $i$, and let $\rho=\langle\bar{\sigma}\rangle_{v_{0}}$. We want to prove that $\mu_{i}(\rho)<\alpha$. Let us define two finite or infinite sequences $\left(\rho^{(k)}\right)_{k\in K}$ and $\left(h^{(k)}v_{k}\right)_{k\in K}$, where $K=\\{1,\dots,n\\}$ or $K=\mathbb{N}\setminus\\{0\\}$, by, for every $k\in K$: $\left[\rho^{(k)}\right]=\tau_{\mathbb{P}}\left([v_{0}]\left[\rho^{(0)}\right]\dots\left[\rho^{(k-1)}\right]\left[h^{(k)}v_{k}\right]\right)$ and so that for every $k$, the history $h^{(1)}\dots h^{(k)}v_{k}$ is the shortest prefix of $\rho$ that is not a prefix of $h^{(1)}\dots h^{(k-1)}\rho^{(k-1)}$ (or equivalently, $h^{(1)}\dots h^{(k)}$ is the longest common prefix of $\rho$ and $h^{(1)}\dots h^{(k-1)}\rho^{(k-1)}$). Then, the length of the longest common prefix of $h^{(1)}\dots h^{(k)}\rho^{(k)}$ and $\rho$ increases with $k$, and the set $K$ is finite if and only if there exists $n$ such that $h^{(1)}\dots h^{(n)}\rho^{(n)}=\rho$. In the infinite case, let: $\chi=[v_{0}]\left[\rho^{(0)}\right]\left[h^{(1)}v_{1}\right]\dots\left[h^{(k)}v_{k}\right]\left[\rho^{(k)}\right]\dots.$ The play $\chi$ is compatible with $\tau_{\mathbb{P}}$, hence $\nu_{\mathbb{C}}(\chi)<\alpha$, that is to say: $\mu_{i}\left(h^{(1)}h^{(2)}\dots\right)<\alpha,$ i.e. $\mu_{i}(\rho)<\alpha$.
In the finite case, let: $\chi=[v_{0}]\left[\rho^{(0)}\right]\left[h^{(1)}v_{1}\right]\dots\left[\rho^{(n)}\right]\top^{\omega}.$ For the same reason, $\nu_{\mathbb{C}}(\chi)<\alpha$, that is to say $\mu_{i}\left(h^{(1)}\dots h^{(n)}\rho^{(n)}\right)=\mu_{i}(\rho)<\alpha$. * • _(2) implies (1)._ Let $\bar{\sigma}_{-i}$ be a strategy profile keeping player $i$’s payoff below $\alpha$, $\lambda$-rational assuming a strategy $\sigma_{i}$. Let us define a strategy $\tau_{\mathbb{P}}$ for Prover in the abstract negotiation game. Let $H=[v_{0}]\left[\rho^{(0)}\right]\left[h^{(1)}v_{1}\right]\left[\rho^{(1)}\right]\dots\left[h^{(n)}v_{n}\right]$ be a history in the abstract game, ending in $S_{\mathbb{P}}$. Then, we define: $\tau_{\mathbb{P}}(H)=\left[\langle\bar{\sigma}_{\upharpoonright h^{(1)}\dots h^{(n)}v_{n}}\rangle_{v_{n}}\right].$ If $H$ is a history ending in $\top$, then $\tau_{\mathbb{P}}(H)=\top$, and in the same way if $H$ ends in $\bot$, then $\tau_{\mathbb{P}}(H)=\bot$. Let us show that $\tau_{\mathbb{P}}$ is the strategy we were looking for. Let $\chi$ be a play compatible with $\tau_{\mathbb{P}}$, and let us note that the state $\bot$ does not appear in $\chi$. Then, the play $\chi$ can only have two forms: * – If $\chi=[v_{0}]\left[\rho^{(0)}\right]\left[h^{(1)}v_{1}\right]\dots\left[\rho^{(n)}\right]\top^{\omega}$, then we have: $\rho^{(n)}=\langle\bar{\sigma}_{\upharpoonright h^{(1)}\dots h^{(n)}v_{n}}\rangle_{v_{n}},$ and the history $h^{(1)}\dots h^{(n)}v_{n}$ in the game $G_{\upharpoonright v_{0}}$ is compatible with $\bar{\sigma}_{-i}$. By hypothesis, we have: $\mu_{i}\left(h^{(1)}\dots h^{(n)}\rho^{(n)}\right)<\alpha,$ hence $\nu_{\mathbb{C}}(\chi)<\alpha$. * – If $\chi=[v_{0}]\left[\rho^{(0)}\right]\dots\left[h^{(n)}v_{n}\right]\left[\rho^{(n)}\right]\dots$, then the play $\rho=h^{(1)}h^{(2)}\dots$ is compatible with $\bar{\sigma}_{-i}$, and by hypothesis $\mu_{i}(\rho)<\alpha$, hence $\nu_{\mathbb{C}}(\chi)<\alpha$. ∎ ###### Remark.
We have proven the equality: $\inf_{\tau_{\mathbb{P}}}\leavevmode\nobreak\ \sup_{\tau_{\mathbb{C}}}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{\upharpoonright v_{0}})=\inf_{\bar{\sigma}_{-i}\in\lambda\mathrm{Rat}(v_{0})}\leavevmode\nobreak\ \sup_{\sigma_{i}}\leavevmode\nobreak\ \mu_{i}\left(\langle\bar{\sigma}\rangle_{v_{0}}\right).$ To be absolutely rigorous, the left-hand side can be written $\mathrm{val}_{\mathbb{C}}\left(\mathrm{Abs}_{\lambda i}(G)_{\upharpoonright[v_{0}]}\right)$ only if we prove that the abstract negotiation game is determined. That will be a consequence of its equivalence with the corresponding concrete negotiation game, which is Borel and therefore determined. ## Appendix E Proof of Theorem 3 Theorem 3. _Let $G_{\upharpoonright v_{0}}$ be an initialized mean-payoff game. Let $\lambda$ be a requirement and $i$ a player. Then, we have:_ $\mathrm{val}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}\right)=\inf_{\bar{\sigma}_{-i}\in\lambda\mathrm{Rat}(v_{0})}\leavevmode\nobreak\ \sup_{\sigma_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}\rangle_{v_{0}}).$ ###### Proof. First, let us define: $A=\left\\{\left.\sup_{\sigma_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}\rangle_{v_{0}})\leavevmode\nobreak\ \right|\leavevmode\nobreak\ \bar{\sigma}_{-i}\in\lambda\mathrm{Rat}(v_{0})\right\\}$ and: $B=\left\\{\left.\sup_{\tau_{\mathbb{C}}}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})\leavevmode\nobreak\ \right|\leavevmode\nobreak\ \tau_{\mathbb{P}}\right\\}\setminus\\{+\infty\\}.$ It suffices to prove that $A=B$.
* • _$B\subseteq A$._ Let $\tau_{\mathbb{P}}$ be a strategy such that: $\sup_{\tau_{\mathbb{C}}}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})<+\infty,$ and let $\bar{\sigma}$ be the strategy profile defined by: $\bar{\sigma}(\dot{H})=w$ for every history $H$ compatible with $\tau_{\mathbb{P}}$ (by induction, the localized projection is injective on the histories compatible with $\tau_{\mathbb{P}}$) with $\tau_{\mathbb{P}}(H)=(vw,\cdot)$, and defined arbitrarily on all other histories. * – _The strategy profile $\bar{\sigma}_{-i}$ is $\lambda$-rational_, assuming the strategy $\sigma_{i}$. Indeed, let us assume it is not. Then, there exists a history $h=h_{0}\dots h_{n}$ in $G_{\upharpoonright v_{0}}$ compatible with $\bar{\sigma}_{-i}$ such that the play $\dot{\rho}=\langle\bar{\sigma}_{\upharpoonright h}\rangle_{h_{n}}$ is not $\lambda$-consistent. Then, let: $Hs=\left(h_{0},M_{0}\right)\left(h_{0}\bar{\sigma}(h_{0}),M_{0}\right)\dots\left(h_{n},M_{n}\right)$ be the only history in $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$ compatible with $\tau_{\mathbb{P}}$ such that $\dot{H}=h$. Let $\tau_{\mathbb{C}}$ be a strategy constructing the history $h$, defined by: $\tau_{\mathbb{C}}\left(H_{0}\dots H_{2k-1}\right)=H_{2k}$ for every $k$, and: $\tau_{\mathbb{C}}\left(H^{\prime}(vw,M)\right)=(w,M\cup\\{w\\})$ for any other history $H^{\prime}(vw,M)$. Then, the play $\eta=\langle\bar{\tau}\rangle_{s_{0}}$ contains finitely many deviations (Challenger stops the deviations after having drawn the history $h$), and the play $\dot{\eta}=h_{0}\dots h_{n-1}\dot{\rho}$ is not $\lambda$-consistent, i.e. there exists a dimension $j\in\Pi$ such that: $\mu_{j}(\dot{\eta})-\max_{v\in M_{n}\cap V_{j}}\lambda(v)<0$ i.e.: $\hat{\mu}_{j}(\eta)<0$ and therefore $\nu_{\mathbb{C}}(\eta)=+\infty$, contradicting the hypothesis.
* – Now, let us prove the equality: $\sup_{\sigma^{\prime}_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}_{-i},\sigma^{\prime}_{i}\rangle_{v_{0}})=\sup_{\tau_{\mathbb{C}}}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}}).$ For that purpose, let us prove the equality of sets: $\left\\{\mu_{i}(\langle\bar{\sigma}_{-i},\sigma^{\prime}_{i}\rangle_{v_{0}})\leavevmode\nobreak\ |\leavevmode\nobreak\ \sigma^{\prime}_{i}\right\\}=\left\\{\nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})\leavevmode\nobreak\ |\leavevmode\nobreak\ \tau_{\mathbb{C}}\right\\}.$ * * Let $\tau_{\mathbb{C}}$ be a strategy for Challenger, and let $\rho=\langle\bar{\tau}\rangle_{s_{0}}$. Since $\nu_{\mathbb{C}}(\rho)\neq+\infty$ by hypothesis, we have $\nu_{\mathbb{C}}(\rho)=\hat{\mu}_{\star}(\rho)=\mu_{i}(\dot{\rho})$, which is an element of the left-hand set. * * Conversely, if $\sigma^{\prime}_{i}$ is a strategy for player $i$ and if $\eta=\langle\bar{\sigma}_{-i},\sigma^{\prime}_{i}\rangle_{v_{0}}$, let $\tau_{\mathbb{C}}$ be a strategy such that for every $k$: $\tau_{\mathbb{C}}\left((\eta_{0},\cdot)(\eta_{0}\cdot,\cdot)\dots(\eta_{k}\cdot,\cdot)\right)=(\eta_{k+1},\cdot),$ i.e. a strategy forcing $\eta$. Then, since $\nu_{\mathbb{C}}(\rho)\neq+\infty$ by hypothesis on $\tau_{\mathbb{P}}$, we have $\mu_{i}(\eta)=\nu_{\mathbb{C}}(\rho)$, which is an element of the right-hand set.
* • _$A\subseteq B$._ Let $\bar{\sigma}_{-i}$ be a $\lambda$-rational strategy profile from $v_{0}$, assuming the strategy $\sigma_{i}$; let us define a strategy $\tau_{\mathbb{P}}$ by, for every history $H$ and for every $v\in V$: $\tau_{\mathbb{P}}(H(v,\cdot))=\left(v\bar{\sigma}(\dot{H}v),\cdot\right).$ Let us prove the equality: $\sup_{\sigma^{\prime}_{i}}\leavevmode\nobreak\ \mu_{i}(\langle\bar{\sigma}_{-i},\sigma^{\prime}_{i}\rangle_{v_{0}})=\sup_{\tau_{\mathbb{C}}}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}}).$ For that purpose, let us prove the equality of sets: $\left\\{\mu_{i}(\langle\bar{\sigma}_{-i},\sigma^{\prime}_{i}\rangle_{v_{0}})\leavevmode\nobreak\ |\leavevmode\nobreak\ \sigma^{\prime}_{i}\right\\}=\left\\{\nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})\leavevmode\nobreak\ |\leavevmode\nobreak\ \tau_{\mathbb{C}}\right\\}.$ * – Let $\tau_{\mathbb{C}}$ be a strategy for Challenger, and let $\rho=\langle\bar{\tau}\rangle_{s_{0}}$. If $\nu_{\mathbb{C}}(\rho)=+\infty$, then $\dot{\rho}$ is compatible with $\bar{\sigma}$ and not $\lambda$-consistent after finitely many steps, which is impossible. Therefore, $\nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})\neq+\infty$, and as a consequence we have $\nu_{\mathbb{C}}(\rho)=\hat{\mu}_{\star}(\rho)=\mu_{i}(\dot{\rho})$, which is an element of the left-hand set. * – Conversely, if $\sigma^{\prime}_{i}$ is a strategy for player $i$ and if $\eta=\langle\bar{\sigma}_{-i},\sigma^{\prime}_{i}\rangle_{v_{0}}$, let $\tau_{\mathbb{C}}$ be a strategy such that for all $k$: $\tau_{\mathbb{C}}\left((\eta_{0},\cdot)(\eta_{0}\cdot,\cdot)\dots(\eta_{k}\cdot,\cdot)\right)=(\eta_{k+1},\cdot),$ i.e. a strategy forcing $\eta$. Then, either $\nu_{\mathbb{C}}(\rho)=+\infty$, in which case $\eta$ is compatible with $\bar{\sigma}$ after finitely many steps and not $\lambda$-consistent, which is impossible; or $\mu_{i}(\eta)=\nu_{\mathbb{C}}(\rho)$, which is an element of the right-hand set.
∎ ## Appendix F An example of concrete negotiation game Let us consider again the game from Example 2. Figure 4 represents the game $\mathrm{Conc}_{\lambda_{1}{\scriptsize{\Circle}}}(G)$ (with $\lambda_{1}(a)=1$ and $\lambda_{1}(b)=2$), where the dashed states are controlled by Challenger, and the other ones by Prover. Figure 4: A concrete negotiation game The dotted arrows indicate the deviations, and the transitions that are not labelled are either zero for the three coordinates, or meaningless since they cannot be used more than once. The red arrows indicate a (memoryless) optimal strategy for Challenger. Against that strategy, the lowest outcome Prover can ensure is $2$. Therefore, $\mathrm{nego}(\lambda_{1})(v_{0})=2$, in line with the abstract game in Example 6. ## Appendix G Proof of Lemma 2 Lemma 2. _Let $G_{\upharpoonright v_{0}}$ be an initialized mean-payoff game, let $i$ be a player, let $\lambda$ be a requirement and let $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$ be the corresponding concrete negotiation game. There exists a memoryless strategy $\tau_{\mathbb{C}}$ that is optimal for Challenger, i.e. such that:_ $\underset{\tau_{\mathbb{P}}}{\inf}\leavevmode\nobreak\ \nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})=\mathrm{val}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}\right).$ ###### Proof. The structure of this proof is inspired by the proof of Lemma 14 in [22]. Let $\alpha\in\mathbb{R}$, and let $\Phi$ be the set of the plays $\rho$ in $\mathrm{Conc}_{\lambda i}(G)$ such that: * • $\underset{n\to\infty}{\liminf}\frac{1}{n}\underset{k=0}{\overset{n-1}{\sum}}\left(-\hat{\pi}_{\star}(\rho_{k}\rho_{k+1})\right)\geq-\alpha$; * • and either: * – $\rho$ contains infinitely many deviations; * – or for each $j\in\Pi$, $\hat{\mu}_{j}(\rho)\geq 0$. Note that the set of the plays $\rho$ such that $\nu_{\mathbb{C}}(\rho)\leq\alpha$ could be defined almost the same way, but with a limit superior instead of the limit inferior.
By [14], if the objective $\Phi$ is _prefix-independent_ and _convex_, then whenever Challenger can falsify it, he can falsify it with a memoryless strategy. _Convex_ objectives are defined as follows: the objective $\Phi$ is convex if for all $\rho,\eta\in\Phi$ and for any decomposition: $\rho_{0}\dots\rho_{k_{1}}\dots\rho_{k_{2}}\dots$ and: $\eta_{0}\dots\eta_{\ell_{1}}\dots\eta_{\ell_{2}}\dots$ with $\eta_{\ell_{p}}=\rho_{k_{q}}$ for all $p,q\in\mathbb{N}$, we have: $\chi=\rho_{0}\dots\rho_{k_{1}}\eta_{1}\dots\eta_{\ell_{1}}\rho_{k_{1}+1}\dots\rho_{k_{2}}\eta_{\ell_{1}+1}\dots\in\Phi.$ Let $\rho$ and $\eta$ be two such plays with such decompositions, and let us prove that $\chi\in\Phi$. Let us write $\Phi=\Psi\cap(\mathrm{X}\cup\Xi)$, where: * • $\Psi$ is the set of the plays $\rho$ such that: $\underset{n\to\infty}{\liminf}\frac{1}{n}\underset{k=0}{\overset{n-1}{\sum}}\left(-\hat{\pi}_{\star}(\rho_{k}\rho_{k+1})\right)\geq-\alpha;$ * • $\mathrm{X}$ is the set of the plays containing infinitely many deviations; * • $\Xi$ is the set of the plays $\rho$ such that for each $j\in\Pi$, $\hat{\mu}_{j}(\rho)\geq 0$. As shown in [22], a mean-payoff objective defined with a limit inferior is convex: therefore, we can already say that $\chi\in\Psi$. Let us now prove that $\chi\in\mathrm{X}\cup\Xi$. * • _If $\rho\in\mathrm{X}$ or $\eta\in\mathrm{X}$._ Then, $\chi$ contains the deviations of $\rho$ and $\eta$, hence $\chi\in\mathrm{X}$. * • _If $\rho,\eta\in\Xi$._ Then, since mean-payoff objectives are convex, we have $\chi\in\Xi$. In both cases, $\chi\in\mathrm{X}\cup\Xi$, so $\chi\in\Phi$: the objective $\Phi$ is convex. Therefore, if Challenger has some strategy to falsify the objective $\Phi$, he has a memoryless one: let us write it $\tau_{\mathbb{C}}$. Now, we want to prove that the memoryless strategy $\tau_{\mathbb{C}}$ is also efficient when we replace the limit inferior of the definition of $\Phi$ by a limit superior, even though this new objective is no longer convex.
By definition of $\tau_{\mathbb{C}}$, for every strategy $\tau_{\mathbb{P}}$, we have $\langle\bar{\tau}\rangle_{s_{0}}\not\in\Phi$. Let us prove that $\nu_{\mathbb{C}}(\langle\bar{\tau}\rangle_{s_{0}})>\alpha$. In other words, let us prove that for every infinite path $\rho$ from $s_{0}$ in the graph $\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]$, we have $\nu_{\mathbb{C}}(\rho)>\alpha$. Since $\rho\not\in\Phi$, we have either $\rho\not\in\mathrm{X}\cup\Xi$ or $\rho\not\in\Psi$. In the first case, we have $\nu_{\mathbb{C}}(\rho)=+\infty$, which ends the proof. In the second case, we have: $\underset{n\to\infty}{\limsup}\frac{1}{n}\underset{k=0}{\overset{n-1}{\sum}}\hat{\pi}_{\star}(\rho_{k}\rho_{k+1})>\alpha.$ We want to prove that $\nu_{\mathbb{C}}(\rho)>\alpha$, that is, since we assume $\rho\in\mathrm{X}\cup\Xi$: $\hat{\mu}_{\star}(\rho)=\underset{n\to\infty}{\liminf}\frac{1}{n}\underset{k=0}{\overset{n-1}{\sum}}\hat{\pi}_{\star}(\rho_{k}\rho_{k+1})>\alpha.$ Here, the play $\rho$ is an infinite path in the graph $\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]$: by the description of the possible outcomes in a mean-payoff game given in [10], the mean payoff $\hat{\mu}_{\star}(\rho)$ is then at least the minimal mean payoff $\hat{\mu}_{\star}$ obtained by looping on a simple cycle $c$ of that graph accessible from the state $s_{0}$. Intuitively, a play can be seen as a combination of those cycles.
That is to say: $\hat{\mu}_{\star}(\rho)\geq\underset{\tiny{\begin{matrix}c\in\mathrm{SC}\left(\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }s\end{matrix}}}{\min}\hat{\mu}_{\star}(c^{\omega}).$ For each such cycle, since $c^{\omega}$ is a play compatible with $\tau_{\mathbb{C}}$, we have: $\underset{n\to\infty}{\limsup}\frac{1}{n}\underset{k=0}{\overset{n-1}{\sum}}\hat{\pi}_{\star}(c_{k}c_{k+1})>\alpha$ where the indices are taken in $\mathbb{Z}/|c|\mathbb{Z}$, i.e.: $\underset{n\to\infty}{\lim}\frac{1}{n}\underset{k=0}{\overset{n-1}{\sum}}\hat{\pi}_{\star}(c_{k}c_{k+1})>\alpha,$ and therefore: $\underset{n\to\infty}{\liminf}\frac{1}{n}\underset{k=0}{\overset{n-1}{\sum}}\hat{\pi}_{\star}(c_{k}c_{k+1})>\alpha,$ that is to say: $\hat{\mu}_{\star}(c^{\omega})>\alpha,$ hence $\hat{\mu}_{\star}(\rho)>\alpha$. ∎ ## Appendix H Proof of Lemma 3 Lemma 3. _Let $G_{\upharpoonright v_{0}}$ be an initialized mean-payoff game, and let $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$ be its concrete negotiation game for some $\lambda$ and some $i$._ _Then, the value of the game $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$ is given by the formula:_ $\max_{\tau_{\mathbb{C}}\in\mathrm{ML}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)\right)}\leavevmode\nobreak\ \min_{\scriptsize{\begin{matrix}K\in\mathrm{SConn}\left(\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }s_{0}\end{matrix}}}\mathrm{opt}(K),$ _where $\mathrm{opt}(K)$ is the minimal value $\nu_{\mathbb{C}}(\rho)$ for $\rho$ among the infinite paths in $K$._ * • _If $K$ contains a deviation, then Prover can simply choose the simple cycle of $K$ that minimizes player $i$’s payoff:_ $\mathrm{opt}(K)=\underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega}).$ * • _If $K$ does not contain a deviation, then Prover must choose a combination 
of the simple cycles of $K$ that minimizes player $i$’s payoff while keeping the non-main dimensions above $0$:_ $\mathrm{opt}(K)=\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\leavevmode\nobreak\ \hat{\mu}(c^{\omega}).$ ###### Proof. By Lemma 2, there exists a memoryless strategy $\tau_{\mathbb{C}}$ which is optimal for Challenger among all his possible strategies. It follows from Theorem 3 that the highest value player $i$ can get against a hostile $\lambda$-rational environment is the minimal payoff of Challenger in a path in the graph $\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]$. For any such path $\rho$, there exists a strongly connected component $K$ of $\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]$ accessible from $s_{0}$ such that after a finite number of steps, $\rho$ is a path in $K$. The least payoff of Challenger in such a path, for a given $K$, is $\mathrm{opt}(K)$; let us prove that it is given by the desired formula. There are, then, two cases to distinguish: * • _If there is at least one deviation in $K$._ Then, for every play $\rho$ in $K$, it is possible to transform $\rho$ into a play $\rho^{\prime}$ with $\hat{\mu}(\rho^{\prime})=\hat{\mu}(\rho)$, which contains infinitely many deviations: it suffices to add round trips to a deviation, endlessly, but less and less often. Therefore, the outcomes $\nu_{\mathbb{C}}(\rho)$ of plays in $K$ are exactly the mean-payoffs $\hat{\mu}_{\star}(\rho)$ of plays in $K$, and possibly $+\infty$; and in particular, the lowest outcome Prover can get in $K$ is the quantity: $\underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega}),$ the least value of a simple cycle in $K$. 
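The first case above can be illustrated concretely: when $K$ contains a deviation, $\mathrm{opt}(K)$ reduces to the least mean weight over the simple cycles of $K$. The sketch below is an illustrative brute-force enumeration on a toy component (the graph, weights, and function names are assumptions, not the paper's notation), practical only for small graphs:

```python
# Illustrative sketch: when K contains a deviation, opt(K) is the least mean
# payoff over the simple cycles of K. Brute-force enumeration for a tiny
# component with integer-labelled vertices (toy data, not from the paper).

def simple_cycles(graph):
    """Enumerate the simple cycles of a small digraph {vertex: successors}.

    Each cycle is reported exactly once, starting from its smallest vertex."""
    cycles = []

    def dfs(path, visited):
        for w in graph[path[-1]]:
            if w == path[0]:
                cycles.append(list(path))
            elif w not in visited and w > path[0]:
                dfs(path + [w], visited | {w})

    for start in graph:
        dfs([start], {start})
    return cycles

def min_cycle_mean(graph, weight):
    """min over simple cycles c of the mean edge weight along c."""
    best = float("inf")
    for c in simple_cycles(graph):
        edges = list(zip(c, c[1:] + c[:1]))
        best = min(best, sum(weight[e] for e in edges) / len(edges))
    return best

# Component with two simple cycles: 0 -> 1 -> 0 (mean 3) and 0 -> 1 -> 2 -> 0 (mean 1).
graph = {0: [1], 1: [0, 2], 2: [0]}
weight = {(0, 1): 2, (1, 0): 4, (1, 2): 1, (2, 0): 0}
assert min_cycle_mean(graph, weight) == 1.0
```

For components of realistic size one would use Johnson's cycle-enumeration algorithm or a minimum cycle mean algorithm (e.g. Karp's) instead of this exhaustive search.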
* • _If there is no deviation in $K$._ Let us first introduce a notation: for any finite set $D$ and any set $X\subseteq\mathbb{R}^{D}$, $X^{\llcorner}$ denotes the set: $X^{\llcorner}=\left\\{\left.\left(\min_{\bar{y}\in Y}y_{d}\right)_{d\in D}\leavevmode\nobreak\ \right|\leavevmode\nobreak\ Y\subseteq X\mathrm{\leavevmode\nobreak\ finite\leavevmode\nobreak\ non\mbox{-}empty}\right\\}.$ For example, in $\mathbb{R}^{2}$, if $X$ is the blue area in Figure 5, then $X^{\llcorner}$ is the union of the blue area and the gray area. Figure 5: An example for the operator $\cdot^{\llcorner}$ Let us first note that for every $X\subseteq\mathbb{R}^{\Pi\cup\\{\star\\}}$, $\min\\!^{\star}X^{\llcorner}=\min\left\\{x_{\star}\leavevmode\nobreak\ \left|\leavevmode\nobreak\ \bar{x}\in X^{\llcorner},\leavevmode\nobreak\ \forall j\in\Pi,\leavevmode\nobreak\ x_{j}\geq 0\right.\right\\}=\min\left\\{\underset{\bar{y}\in Y}{\min}\leavevmode\nobreak\ y_{\star}\leavevmode\nobreak\ \left|\leavevmode\nobreak\ Y\subseteq X\mathrm{\leavevmode\nobreak\ finite\leavevmode\nobreak\ non\mbox{-}empty},\leavevmode\nobreak\ \forall\bar{y}\in Y,\forall j\in\Pi,\leavevmode\nobreak\ y_{j}\geq 0\right.\right\\}=\min\left\\{y_{\star}\leavevmode\nobreak\ \left|\leavevmode\nobreak\ \bar{y}\in X,\leavevmode\nobreak\ \forall j\in\Pi,\leavevmode\nobreak\ y_{j}\geq 0\right.\right\\}=\min\\!^{\star}X.$ Then, it has been proved in [10] that the set of possible values of $\hat{\mu}(\rho)$ for all plays $\rho$ in $K$ is exactly the set: $X=\left(\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})\right)^{\llcorner}.$ Since all the plays in $K$ contain finitely many deviations (actually none), for every $\bar{x}=\hat{\mu}(\rho)\in X$, we have $\nu_{\mathbb{C}}(\rho)=+\infty$ if and only if there exists $j\in\Pi$ such that $x_{j}<0$.
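The identity $\min\!^{\star}X^{\llcorner}=\min\!^{\star}X$ used above can be checked numerically on finite sets. The toy sketch below is an illustrative assumption (it handles only finite $X$, whereas the proof applies the operator to polytopes); coordinate $0$ plays the role of the main dimension $\star$:

```python
# Toy check of min* X^llcorner = min* X for a finite X in R^3: taking
# coordinatewise minima of finite subsets never changes the least main
# coordinate among points that are nonnegative on the non-main dimensions.
from itertools import combinations

def lower_closure(X):
    """X^llcorner: coordinatewise minima over all non-empty finite subsets of X."""
    pts = list(X)
    closed = set()
    for r in range(1, len(pts) + 1):
        for Y in combinations(pts, r):
            closed.add(tuple(min(y[d] for y in Y) for d in range(len(pts[0]))))
    return closed

def min_star(X):
    """Least main coordinate (index 0) among points whose other coordinates are >= 0."""
    feasible = [x[0] for x in X if all(c >= 0 for c in x[1:])]
    return min(feasible) if feasible else float("inf")

X = {(3, 1, 0), (1, -1, 2), (2, 0, 5)}
assert min_star(lower_closure(X)) == min_star(X) == 2
```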
Then, the lowest outcome Prover can get in $K$ is: $\min\left\\{x_{\star}\leavevmode\nobreak\ |\leavevmode\nobreak\ \bar{x}\in X,\forall j\in\Pi,x_{j}\geq 0\right\\},$ that is to say: $\min\\!^{\star}\left(\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})\right)^{\llcorner},$ i.e. $\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})$. Theorem 3 then yields the desired formula. ∎ ## Appendix I The negotiation sequence In this appendix, we assume that $G$ is a game on which the negotiation function is _Scott-continuous_ , i.e. such that for every non-decreasing sequence $(\lambda_{n})_{n}$ of requirements on $G$, we have: $\mathrm{nego}\left(\sup_{n}\lambda_{n}\right)=\sup_{n}\mathrm{nego}(\lambda_{n}).$ By the Kleene–Tarski fixed-point theorem, the least fixed point of the negotiation function is then the limit of the _negotiation sequence_ , defined as the sequence $(\lambda_{n})_{n\in\mathbb{N}}=(\mathrm{nego}^{n}(\lambda_{0}))_{n}$. In mean-payoff games, in particular, the hypothesis above holds: ###### Proposition 3. In mean-payoff games, the negotiation function is Scott-continuous. A proof of that statement is given in Appendix J. In many cases, the negotiation sequence is stationary, and in such a case it is possible to compute its limit: whenever a term is equal to the previous one, we know that the limit has been reached. However, the negotiation sequence is not always stationary. The game of Figure 6 is a counter-example. Indeed, for all $n$, we have: $\lambda_{n}(a)=\lambda_{n}(b)=2-\frac{1}{2^{n-1}},$ which converges to $2$ but never reaches it.
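This closed form can be checked numerically. The sketch below (plain Python, illustrative only) iterates the recurrence $\lambda_{n+1}(a)=1+\lambda_{n}(a)/2$ derived at the end of this appendix, starting from $\lambda_{1}(a)=1$:

```python
# Sanity check of the non-stationary negotiation sequence on the
# counter-example game: lambda_{n+1}(a) = 1 + lambda_n(a)/2, lambda_1(a) = 1,
# whose terms are 2 - 1/2^(n-1) and approach 2 without ever reaching it.

def negotiation_sequence(n_terms):
    """Iterate lambda_{n+1} = 1 + lambda_n / 2 starting from lambda_1 = 1."""
    lam = 1.0  # lambda_1(a) = lambda_1(b) = 1
    seq = [lam]
    for _ in range(n_terms - 1):
        lam = 1.0 + lam / 2.0
        seq.append(lam)
    return seq

seq = negotiation_sequence(10)
closed_form = [2.0 - 1.0 / 2 ** (n - 1) for n in range(1, 11)]
assert all(abs(a - b) < 1e-12 for a, b in zip(seq, closed_form))
assert all(x < 2.0 for x in seq)  # the limit 2 is never reached
```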
Figure 6: A game where the negotiation sequence is not stationary Let us give some details. Since all the $\Diamond$ weights are equal to $0$, for all $n>0$, we have $\lambda_{n}(d)=\lambda_{n}(f)=0$. It follows that for all $n>0$, we also have $\lambda_{n}(c)=\lambda_{n}(e)=0$. Moreover, by symmetry of the game, we always have $\lambda_{n}(a)=\lambda_{n}(b)$. Therefore, to compute the negotiation sequence, it suffices to compute $\lambda_{n+1}(a)$ as a function of $\lambda_{n}(b)$, knowing that $\lambda_{1}(a)=\lambda_{1}(b)=1$, and therefore that for all $n>0$, $\lambda_{n}(a)=\lambda_{n}(b)\geq 1$. From $a$, the worst play that player $\Box$ could propose to player $\Circle$ would be a combination of the cycles $cd$ and $d$ giving her exactly $1$. But then, player $\Circle$ will deviate and go to $b$; from there, if player $\Box$ proposes plays in the strongly connected component containing $c$ and $d$, player $\Circle$ will always deviate, generating the play $(ab)^{\omega}$ and thus getting the payoff $2$. Then, in order to give her a payoff lower than $2$, player $\Box$ has to go to the state $e$.
Since player $\Circle$ does not control any state in that strongly connected component, any play that is proposed will be accepted: the proposal will then be the worst possible combination of the cycles $ef$ and $f$ for player $\Circle$, such that player $\Box$ gets at least his requirement $\lambda_{n}(b)$. The payoff $\lambda_{n+1}(a)$ is then the minimal solution of the system: $\left\\{\begin{matrix}\lambda_{n+1}(a)=x+2(1-x)\\\ 2(1-x)\geq\lambda_{n}(b)\\\ 0\leq x\leq 1\end{matrix}\right.$ that is to say $\lambda_{n+1}(a)=1+\frac{\lambda_{n}(b)}{2}=1+\frac{\lambda_{n}(a)}{2}$, and by induction, for all $n>0$: $\lambda_{n}(a)=\lambda_{n}(b)=2-\frac{1}{2^{n-1}}.$ ## Appendix J Proof of Proposition 3 Proposition 3. _In mean-payoff games, the negotiation function is Scott- continuous._ ###### Proof. Let $(\lambda_{n})_{n}$ be a non-decreasing sequence of requirements on a mean-payoff game $G$, and let $\lambda=\sup_{n}\lambda_{n}$. We want to prove that $\mathrm{nego}(\lambda)=\sup_{n}\mathrm{nego}(\lambda_{n})$. Since the negotiation function is monotone, we already have $\mathrm{nego}(\lambda)\geq\sup_{n}\mathrm{nego}(\lambda_{n})$. Let us prove that $\mathrm{nego}(\lambda)\leq\sup_{n}\mathrm{nego}(\lambda_{n})$. Let $\delta>0$: we want to find $n$ such that $\mathrm{nego}(\lambda_{n})(v)\geq\mathrm{nego}(\lambda)(v)-\delta$ for each $v\in V$. Let: $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}=\left(\\{\mathbb{P},\mathbb{C}\\},S,(S_{\mathbb{P}},S_{\mathbb{C}}),\Delta,\nu\right)_{\upharpoonright s_{0}}$ be the concrete negotiation game of $G$ for $\lambda$ and player $i$ controlling $v$, and let: $\mathrm{Conc}_{\lambda_{n}i}(G)_{\upharpoonright s_{0}}=\left(\\{\mathbb{P},\mathbb{C}\\},S,(S_{\mathbb{P}},S_{\mathbb{C}}),\Delta,\nu^{\prime}\right)_{\upharpoonright s_{0}}$ be the concrete negotiation game of $G$ for the requirement $\lambda_{n}$ and the same player $i$.
Let us note that both have the same underlying graph, and that the only difference lies in the weight functions $\hat{\pi}$ and $\hat{\pi}^{\prime}$ on the non-main dimensions. By Lemma 3, we have: $\mathrm{nego}(\lambda)(v)=\max_{\tau_{\mathbb{C}}\in\mathrm{ML}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}\right)}\leavevmode\nobreak\ \min_{\scriptsize{\begin{matrix}K\in\mathrm{SConn}\left(\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }s_{0}\end{matrix}}}\mathrm{opt}(K)$ with: $\mathrm{opt}(K)=\left\\{\begin{matrix}\mathrm{if\leavevmode\nobreak\ }K\mathrm{\leavevmode\nobreak\ contains\leavevmode\nobreak\ a\leavevmode\nobreak\ deviation}:\\\ \underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega})\\\\[14.22636pt] \mathrm{otherwise}:\\\ \min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\leavevmode\nobreak\ \hat{\mu}(c^{\omega}),\end{matrix}\right.$ and identically: $\mathrm{nego}(\lambda_{n})(v)=\max_{\tau_{\mathbb{C}}\in\mathrm{ML}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda_{n}i}(G)_{\upharpoonright s_{0}}\right)}\leavevmode\nobreak\ \min_{\scriptsize{\begin{matrix}K\in\mathrm{SConn}\left(\mathrm{Conc}_{\lambda_{n}i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }s_{0}\end{matrix}}}\mathrm{opt}^{\prime}(K)$ with: $\mathrm{opt}^{\prime}(K)=\left\\{\begin{matrix}\mathrm{if\leavevmode\nobreak\ }K\mathrm{\leavevmode\nobreak\ contains\leavevmode\nobreak\ a\leavevmode\nobreak\ deviation}:\\\ \underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega})\\\\[14.22636pt] \mathrm{otherwise}:\\\ \min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\leavevmode\nobreak\ \hat{\mu}^{\prime}(c^{\omega}).\end{matrix}\right.$ Let $\tau_{\mathbb{C}}$ be a memoryless strategy for Challenger in the game $\mathrm{Conc}_{\lambda i}(G)_{\upharpoonright s_{0}}$; it can also be
considered as a memoryless strategy in the game $\mathrm{Conc}_{\lambda_{n}i}(G)_{\upharpoonright s_{0}}$. Let us now define: $\gamma_{n}=\sup_{v\in V}(\lambda(v)-\lambda_{n}(v)).$ Then, the sequence $(\gamma_{n})_{n}$ is non-increasing and converges to $0$. Moreover, since $\lambda_{n}\leq\lambda$, for each transition $st\in\Delta$, we have: $\hat{\pi}^{\prime}_{j}(st)\in[\hat{\pi}_{j}(st),\hat{\pi}_{j}(st)+\gamma_{n}].$ Let: $\Gamma_{n}=\left\\{\left.\bar{x}\in\mathbb{R}^{\Pi\cup\\{\star\\}}\leavevmode\nobreak\ \right|\leavevmode\nobreak\ x_{\star}=0\mathrm{\leavevmode\nobreak\ and\leavevmode\nobreak\ }\forall j\in\Pi,x_{j}\in[0,\gamma_{n}]\right\\}.$ Then, let $K$ be a strongly connected component of the graph $\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]$, without deviation, accessible from $s_{0}$; we have: $\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}^{\prime}(c^{\omega})\subseteq\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})+\Gamma_{n}.$ Let $R=\left\\{\left.\bar{x}\in\mathbb{R}^{\Pi\cup\\{\star\\}}\leavevmode\nobreak\ \right|\leavevmode\nobreak\ \forall j\in\Pi,x_{j}\geq 0\right\\}$. * • If $\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})\cap R=\emptyset$, since $\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})$ and $R$ are closed sets, if $\gamma_{n}$ is small enough, we have $\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}^{\prime}(c^{\omega})\cap R=\emptyset$.
Therefore, if: $\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})=+\infty,$ then, for $n$ large enough: $\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}^{\prime}(c^{\omega})=+\infty.$ * • Otherwise, we have: $\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}^{\prime}(c^{\omega})\geq\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})-\gamma_{n}\max_{\tiny\begin{matrix}c\in\mathrm{SC}(K)\\\ d\in\mathrm{SC}(K)\end{matrix}}\sum_{\tiny\begin{matrix}j\in\Pi,\\\ \hat{\mu}_{j}(c^{\omega})>\\\ \hat{\mu}_{j}(d^{\omega})\end{matrix}}\frac{\hat{\mu}_{\star}(c^{\omega})-\hat{\mu}_{\star}(d^{\omega})}{\hat{\mu}_{j}(c^{\omega})-\hat{\mu}_{j}(d^{\omega})}$ and if $\gamma_{n}$ is small enough, we have: $\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}^{\prime}(c^{\omega})\geq\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})-\delta.$ In both cases, we find that there exists $\gamma_{n}$ small enough, i.e. $n$ large enough, to ensure: $\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}^{\prime}(c^{\omega})\geq\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\hat{\mu}(c^{\omega})-\delta.$ We can find such $n$ for each strongly connected component $K$ without deviation, and there are only finitely many such components. Moreover, when $K$ is a strongly connected component with a deviation, the quantity: $\underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega})$ is the same in $\mathrm{Conc}_{\lambda i}(G)$ and in $\mathrm{Conc}_{\lambda_{n}i}(G)$.
Therefore, there exists $n\in\mathbb{N}$ such that: $\min_{\scriptsize{\begin{matrix}K\in\mathrm{SConn}\left(\mathrm{Conc}_{\lambda_{n}i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }s_{0}\end{matrix}}}\leavevmode\nobreak\ \mathrm{opt}^{\prime}(K)\geq\min_{\scriptsize{\begin{matrix}K\in\mathrm{SConn}\left(\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }s_{0}\end{matrix}}}\leavevmode\nobreak\ \mathrm{opt}(K)-\delta.$ We can find such $n$ for every memoryless strategy $\tau_{\mathbb{C}}$, and there are only finitely many such strategies. Therefore, there exists $n\in\mathbb{N}$ such that: $\mathrm{nego}(\lambda_{n})(v)\geq\mathrm{nego}(\lambda)(v)-\delta.$ Finally, since there are finitely many states $v\in V$, we can conclude that there exists $n\in\mathbb{N}$ such that for each $v\in V$, we have: $\mathrm{nego}(\lambda_{n})(v)\geq\mathrm{nego}(\lambda)(v)-\delta.$ The negotiation function is Scott-continuous. ∎ ## Appendix K Proof of Theorem 4 Theorem 4. _Let $G$ be a mean-payoff game. Let us identify any requirement $\lambda$ on $G$ with finite values with the tuple $\lambda\mspace{-10.0mu}\bar{\phantom{v}}=(\lambda(v))_{v\in V}$, an element of the finite-dimensional vector space $\mathbb{R}^{V}$. Then, for each player $i$ and every vertex $v_{0}\in V_{i}$, the quantity $\mathrm{nego}(\lambda)(v_{0})$ is a piecewise linear function of $\lambda\mspace{-10.0mu}\bar{\phantom{v}}$, and an effective expression of that function can be computed in time double exponential in the size of $G$._ ###### Proof.
By Lemma 3, we have the formula: $\mathrm{nego}(\lambda)(v_{0})=\max_{\tiny{\tau_{\mathbb{C}}\in\mathrm{ML}_{\mathbb{C}}\left(\mathrm{Conc}_{\lambda i}(G)\right)}}\leavevmode\nobreak\ \min_{\tiny{\begin{matrix}K\in\mathrm{SConn}(\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}])\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }(v_{0},\\{v_{0}\\})\end{matrix}}}\mathrm{opt}(K).$ Let $\tau_{\mathbb{C}}$ be a memoryless strategy of Challenger, and let $K$ be a strongly connected component of the graph $\mathrm{Conc}_{\lambda i}(G)[\tau_{\mathbb{C}}]$. Let us prove that the quantity: $\mathrm{opt}(K)=\left\\{\begin{matrix}\mathrm{if\leavevmode\nobreak\ }K\mathrm{\leavevmode\nobreak\ contains\leavevmode\nobreak\ a\leavevmode\nobreak\ dev.}:&\underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega})\\\ \mathrm{otherwise}:&\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\leavevmode\nobreak\ \hat{\mu}(c^{\omega})\end{matrix}\right.$ is a piecewise linear function of $\lambda\mspace{-10.0mu}\bar{\phantom{v}}$. When $K$ contains a deviation, the quantity: $\underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega})$ is independent of $\lambda$, and the result is then immediate. Let us now study the case where $K$ does not contain any deviation, i.e. let us prove that the quantity: $f(\lambda)=\min\\!^{\star}\underset{c\in\mathrm{SC}(K)}{\mathrm{Conv}}\leavevmode\nobreak\ \hat{\mu}(c^{\omega})$ is a piecewise linear function of $\lambda$. Let $M$ be the common memory of the states of $K$ (since $K$ does not contain deviations). We know that for each $j\in\Pi$ and for every cycle $c\in\mathrm{SC}(K)$, we have: $\hat{\mu}_{j}(c^{\omega})=\mu_{j}(\dot{c}^{\omega})-\max_{v\in V_{j}\cap M}\lambda(v).$ Let $C=\left\\{\dot{c}\leavevmode\nobreak\ |\leavevmode\nobreak\ c\in\mathrm{SC}(K)\right\\}$. Since there is no deviation in $K$, any cycle in $C$ is a simple cycle of $G$. 
Then, the quantity $f(\lambda)$ is the minimal $x_{i}$ for $\bar{x}$ in the set: $X=\underset{c\in C}{\mathrm{Conv}}\leavevmode\nobreak\ \mu(c^{\omega})\cap\bigcap_{v\in M}\left\\{\bar{x}\leavevmode\nobreak\ |\leavevmode\nobreak\ x_{j}\geq\lambda(v)\mathrm{\leavevmode\nobreak\ with\leavevmode\nobreak\ }v\in V_{j}\right\\}.$ The set $X$, intersection of a polyhedron and a polytope, is a polytope: therefore, there exists a vertex $\bar{x}$ of that polytope which minimizes $x_{i}$ for $\bar{x}\in X$. That vertex is the intersection between a face of the greater polytope $P=\underset{c\in C}{\mathrm{Conv}}\leavevmode\nobreak\ \mu(c^{\omega})$, and some of the hyperplanes $H_{v}$ (possibly zero), defined as the hyperplanes of equation $x_{j}=\lambda(v)$ for $j\in\Pi$ controlling $v$, such that $\lambda(v)=\underset{w\in M\cap V_{j}}{\max}\lambda(w)$. ###### Example 9. With three cycles and two players against player $i$, each controlling one vertex $v$ such that $\lambda(v)=0$, the vertex $\bar{x}$ is the red point in Figure 7 and Figure 8. 
Figure 7: The intersection between a $0$-dimensional face and zero hyperplanes

Figure 8: The intersection between a $2$-dimensional face and two hyperplanes

The set of vertices of the polytope $X$ is included in the finite set: $Y=\left\\{\bar{y}_{WD}\in\mathbb{R}^{\Pi\cup\\{\star\\}}\leavevmode\nobreak\ \left|\leavevmode\nobreak\ \begin{matrix}W\subseteq M,\leavevmode\nobreak\ D\subseteq C,\\\ \underset{c\in D}{\mathrm{Conv}}\leavevmode\nobreak\ \mu(c^{\omega})\cap\underset{w\in W}{\bigcap}H_{w}=\\{\bar{y}_{WD}\\}\\\ \mathrm{and\leavevmode\nobreak\ }\forall j,\leavevmode\nobreak\ \forall v\in M\cap V_{j},\\\ y_{WDj}\geq\lambda(v)\end{matrix}\right.\right\\},$ where the tuple $\bar{y}_{WD}$ is the intersection of the face of $P$ delimited by the values of the cycles of $D$ with the hyperplanes $H_{v}$ for $v\in W$, as stated by the condition $\underset{c\in D}{\mathrm{Conv}}\leavevmode\nobreak\ \mu(c^{\omega})\cap\underset{w\in W}{\bigcap}H_{w}=\\{\bar{y}_{WD}\\}$. The condition $\forall j,\forall v\in M\cap V_{j},y_{WDj}\geq\lambda(v)$ states that the vertex $\bar{y}_{WD}$ is, moreover, the outcome of a $\lambda$-consistent play of $G$, which guarantees that the set $Y$ is itself included in $X$. We have, therefore: $\mathrm{opt}(K)=\min\\!^{\star}\underset{c\in C}{\mathrm{Conv}}\leavevmode\nobreak\ \mu(c^{\omega})=\min_{\bar{y}\in Y}y_{i}.$ Now, let $\bar{y}\in Y$, and let $W$ and $D$ be such that $\bar{y}=\bar{y}_{WD}$. Let us choose $D$ and $W$ minimal, so that each player $j\in\Pi$ controls at most one state $w\in W$, and so that there exists only one decomposition: $\bar{y}=\underset{c\in D}{\sum}\alpha_{c}\mu(c^{\omega})$ with $0<\alpha_{c}<1$ for each $c$, and $\sum_{c}\alpha_{c}=1$.
Furthermore, $\bar{y}$ is the only such solution of the system of equations: $\forall j\in\Pi,\forall w\in W\cap V_{j},y_{j}=\lambda(w).$ Therefore, the vector $\bar{\alpha}=(\alpha_{c})_{c\in D}$ is the only solution of the system: $\left\\{\begin{matrix}\underset{c\in D}{\sum}\alpha_{c}=1\\\ \forall j\in\Pi,\forall w\in W\cap V_{j},\underset{c\in D}{\sum}\alpha_{c}\mu_{j}(c^{\omega})=\lambda(w)\\\ \forall c\in D,\alpha_{c}>0.\end{matrix}\right.$ Then, if $\oplus$ is a fresh symbol and $A_{WD}$ is the matrix: $A_{WD}=\left(\left\\{\begin{matrix}1&\mathrm{if\leavevmode\nobreak\ }w=\oplus\\\ \mu_{j}(c^{\omega})&\mathrm{else,\leavevmode\nobreak\ with\leavevmode\nobreak\ }w\in V_{j}\end{matrix}\right.\right)_{\tiny{\begin{matrix}w\in W\cup\\{\oplus\\},\\\ c\in D\end{matrix}}}$ then $A_{WD}$ is invertible and: $\bar{\alpha}=A_{WD}^{-1}\left(\left\\{\begin{matrix}1&\mathrm{if\leavevmode\nobreak\ }w=\oplus\\\ \lambda(w)&\mathrm{otherwise}\end{matrix}\right.\right)_{w\in W\cup\\{\oplus\\}},$ with $\alpha_{c}>0$ for all $c\in D$. Let us write: $\bar{\beta}_{\lambda W}=\left(\left\\{\begin{matrix}1&\mathrm{if\leavevmode\nobreak\ }w=\oplus\\\ \lambda(w)&\mathrm{otherwise}\end{matrix}\right.\right)_{w\in W\cup\\{\oplus\\}}.$ We have, thus, $\bar{\alpha}=A_{WD}^{-1}\bar{\beta}_{\lambda W}$. Let us write, for each player $j$, $\bar{\gamma}_{D}^{j}=(\mu_{j}(c^{\omega}))_{c\in D}$.
Then, we can write: $\begin{matrix}y_{i}&=&\sum_{c}\alpha_{c}\mu_{i}(c^{\omega})\\\ &=&\\!{}^{\mathrm{t}}\bar{\gamma}_{D}^{i}\leavevmode\nobreak\ \bar{\alpha}\\\ &=&\\!{}^{\mathrm{t}}\bar{\gamma}_{D}^{i}\leavevmode\nobreak\ A_{WD}^{-1}\leavevmode\nobreak\ \bar{\beta}_{\lambda W}.\end{matrix}$ Finally, if we write: $B_{W}=\left(\left\\{\begin{matrix}1&\mathrm{if\leavevmode\nobreak\ }w=v\\\ 0&\mathrm{otherwise}\end{matrix}\right.\right)_{w\in W\cup\\{\oplus\\},v\in V}$ and: $\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W}=\left(\left\\{\begin{matrix}1&\mathrm{if\leavevmode\nobreak\ }w=\oplus\\\ 0&\mathrm{otherwise}\end{matrix}\right.\right)_{w\in W\cup\\{\oplus\\}}$ we have $\bar{\beta}_{\lambda W}=B_{W}\lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W}$, and therefore: $y_{i}=\\!^{\mathrm{t}}\bar{\gamma}_{D}^{i}\leavevmode\nobreak\ A_{WD}^{-1}\leavevmode\nobreak\ (B_{W}\leavevmode\nobreak\ \lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W}).$ Conversely, the tuple $\bar{y}$ defined by, for each $j\in\Pi$, $y_{j}=\\!^{\mathrm{t}}\bar{\gamma}_{D}^{j}\leavevmode\nobreak\ A_{WD}^{-1}\leavevmode\nobreak\ (B_{W}\leavevmode\nobreak\ \lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W})$ for given $W\subseteq M$ and $D\subseteq C$, is an element of the set $Y$ if and only if: * • the intersection $\underset{c\in D}{\mathrm{Conv}}\leavevmode\nobreak\ \mu(c^{\omega})\cap\underset{w\in W}{\bigcap}H_{w}$ is a singleton, i.e. the matrix $A_{WD}$ is invertible (otherwise the matrix $A_{WD}^{-1}$ is not defined); * • $\bar{y}\in\underset{c\in D}{\mathrm{Conv}}\leavevmode\nobreak\ \mu(c^{\omega})$, i.e. 
the tuple $\bar{\alpha}=A_{WD}^{-1}\leavevmode\nobreak\ (B_{W}\leavevmode\nobreak\ \lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W})$ has only non-negative coordinates (actually positive if $D$ is minimal); * • for each player $j$, for each vertex $v\in M\cap V_{j}$, we have $y_{j}\geq\lambda(v)$, i.e. $\\!{}^{\mathrm{t}}\gamma_{D}^{j}A_{WD}^{-1}(B_{W}\lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W})\geq\lambda(v)$. We finally find the formula: $\mathrm{nego}(\lambda)(v_{0})=\max_{\tiny{\tau_{\mathbb{C}}\in\mathrm{ML}\left(\mathrm{Conc}_{\lambda_{0}i}(G)\right)}}\leavevmode\nobreak\ \min_{\tiny{\begin{matrix}K\in\mathrm{SConn}\left(\mathrm{Conc}_{\lambda_{0}i}(G)[\tau_{\mathbb{C}}]\right)\\\ \mathrm{accessible\leavevmode\nobreak\ from\leavevmode\nobreak\ }(v_{0},\\{v_{0}\\})\end{matrix}}}\left\\{\begin{matrix}\mathrm{if\leavevmode\nobreak\ }K\mathrm{\leavevmode\nobreak\ contains\leavevmode\nobreak\ dev.}:&\underset{c\in\mathrm{SC}(K)}{\min}\leavevmode\nobreak\ \hat{\mu}_{\star}(c^{\omega})\\\ \mathrm{otherwise}:&\min S_{K},\end{matrix}\right.$ where $S_{K}$ is the set of real numbers of the form: $\\!{}^{\mathrm{t}}\bar{\gamma}_{D}^{i}A^{-1}_{WD}(B_{W}\lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W})$ with: * • $W\subseteq M$, common memory of the states of $K$; * • $D\subseteq C$, set of the cycles of the form $\dot{c}$, where $c$ is a simple cycle of $K$; * • the matrix $A_{WD}$ is invertible; * • the vector $A_{WD}^{-1}(B_{W}\lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W})$ has only positive coordinates; * • and for each $j\in\Pi$, for each $v\in M_{K}\cap V_{j}$, we have $\\!{}^{\mathrm{t}}\bar{\gamma}_{D}^{j}A^{-1}_{WD}(B_{W}\lambda\mspace{-10.0mu}\bar{\phantom{v}}+\delta\mspace{-9.0mu}\bar{\phantom{v}}_{W})\geq\lambda(v)$. This is, indeed, the expression of a piecewise linear function. 
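As a sanity check on the linear-algebra step above, here is a minimal numeric sketch (the numbers and function names are ours, not the paper's) of the system $\bar{\alpha}=A_{WD}^{-1}\bar{\beta}_{\lambda W}$ in the smallest non-trivial case: two cycles in $D$ and a single constrained vertex $w$, controlled by an opponent $j$. It illustrates that the resulting $y_{i}$ is affine in $\lambda(w)$ on the region where $\bar{\alpha}$ stays positive.

```python
from fractions import Fraction

def solve_alpha(mu_j, lam):
    """Solve the 2x2 instance of A_WD @ alpha = beta:
        alpha_1 + alpha_2                 = 1     (row for the symbol +)
        mu_j[0]*alpha_1 + mu_j[1]*alpha_2 = lam   (row for the vertex w)
    by Cramer's rule; requires mu_j[0] != mu_j[1] (A_WD invertible)."""
    det = mu_j[1] - mu_j[0]
    a2 = Fraction(lam - mu_j[0], det)
    return 1 - a2, a2

def y(mu_i, mu_j, lam):
    """Convex combination y_i = sum_c alpha_c * mu_i(c^omega)."""
    a1, a2 = solve_alpha(mu_j, lam)
    return a1 * mu_i[0] + a2 * mu_i[1]
```

With $\bar{\gamma}_{D}^{j}=(0,4)$ and $\bar{\gamma}_{D}^{i}=(3,1)$, one gets $y_{i}=3-\lambda(w)/2$, a linear function of $\lambda(w)$, valid as long as both coordinates of $\bar{\alpha}$ remain positive, i.e. for $0<\lambda(w)<4$.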
Computing a complete and effective expression of that function can be done by constructing all the concrete games $\mathrm{Conc}_{\lambda_{0}i}(G)_{\upharpoonright(v_{0},\\{v_{0}\\})}$ for $i\in\Pi$ and $v_{0}\in V_{i}$ (there are as many of them as there are vertices in $G$, and their size is exponential in the size of $G$, hence so is the time needed for their construction), and then applying the formula above for each memoryless strategy of Challenger (their number is exponential in the size of the concrete game, i.e. double exponential in the size of $G$) and each strongly connected component of the induced graph (their number is bounded by the size of the concrete game). For each $K$, the computation of $\mathrm{opt}(K)$ with the formula given above requires only elementary operations on matrices and vectors, which can all be performed in polynomial time: therefore, the effective construction of $\mathrm{nego}$ as a piecewise linear function requires time double exponential in the size of $G$. ∎ ## Appendix L Proof of Theorem 5 Theorem 5. _The SPE constrained existence problem is NP-hard._ ###### Proof. We proceed by reduction from the NP-complete problem SAT. This proof is largely inspired by Michael Ummels' proof of the NP-hardness of the NE constrained existence problem in co-Büchi games, in [21]. Let $\varphi=\bigwedge_{i=1}^{n}\bigvee_{j=1}^{m}L_{ij}$ be a propositional formula, written in conjunctive normal form, over the finite variable set $X$. We construct a mean-payoff game $G^{\varphi}_{\upharpoonright v_{0}}$ that admits an SPE in which player $\mathbb{P}$ gets the payoff $1$ if and only if $\varphi$ is satisfiable. First, we define the set of players $\Pi=\\{\mathbb{P}\\}\cup X$: every variable of $\varphi$ is a player, and there is an additional special player $\mathbb{P}$, called _Prover_, who wants to prove that $\varphi$ is satisfiable.
Then, let us define the state space: for each clause $C_{i}$ of $\varphi$, with $i\in\mathbb{Z}/n\mathbb{Z}$ (so that clause indices are taken modulo $n$), we define a state $C_{i}$ that is controlled by Prover, and for each literal $L_{ij}$ of $C_{i}$ we define a state $(C_{i},L_{ij})$, controlled by the player $x$ such that $L_{ij}=x$ or $\neg x$. We add a transition from $C_{i}$ to $(C_{i},L_{ij})$, and another one from $(C_{i},L_{ij})$ to $C_{i+1}$. Moreover, we add a sink state $\bot$, with a transition from it to itself, and transitions from all the states of the form $(C,\neg x)$ to it. We define the weight function $\pi$ on this game as follows: * • $\pi_{\mathbb{P}}(\bot\bot)=0$, and $\pi_{\mathbb{P}}(uv)=1$ for any other transition $uv$; * • for each player $x$, we have $\pi_{x}(uv)=0$ for every transition leading to a state of the form $v=(C,x)$, and $\pi_{x}(uv)=1$ for any other transition. Note that Prover can only get the payoff $0$ (in a play that reaches $\bot$) or $1$ (in any other play). Another player $x$ gets the payoff $1$ in a play that visits vertices of the form $(C,x)$ never, only finitely often, or infinitely often but with negligible frequency. Otherwise, he may get any payoff between $0.5$ and $1$, depending on the frequency with which such a state is visited. Finally, we initialize the game in $v_{0}=C_{1}$. ###### Example 10. The game $G^{\varphi}$, when $\varphi$ is the tautology $(x_{1}\vee\neg x_{1})\wedge\dots\wedge(x_{6}\vee\neg x_{6})$, is represented in Figure 9. When the weights of an edge are not written, they are equal to $1$ for all players.
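As an illustration only, the construction above can be sketched programmatically; the encoding below (state tuples, the names `build_edges` and `prover_weight`) is our own, and the weights of the variable players are omitted.

```python
def build_edges(clauses):
    """Edge list of a simplified model of G^phi.

    `clauses` is a list of clauses; a clause is a list of literals
    (var_name, positive).  States: ("C", i) for clause C_i, owned by
    Prover; ("C", i, lit) for each literal occurrence, owned by lit's
    variable; and a sink "bot".  Clause indices are taken modulo n."""
    n = len(clauses)
    edges = []
    for i, clause in enumerate(clauses):
        for lit in clause:
            edges.append((("C", i), ("C", i, lit)))
            edges.append((("C", i, lit), ("C", (i + 1) % n)))
            if not lit[1]:                    # negative literal: may defect
                edges.append((("C", i, lit), "bot"))
    edges.append(("bot", "bot"))              # sink self-loop
    return edges

def prover_weight(edge):
    """Prover's weight pi_P: 0 on the sink self-loop, 1 on every other edge."""
    return 0 if edge == ("bot", "bot") else 1
```

For $\varphi=(x\vee\neg y)\wedge y$ this yields 8 edges, exactly one of which (the sink loop) has Prover-weight $0$.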
Figure 9: The game $G^{\varphi}$

Now, let us prove that there is an SPE in $G^{\varphi}_{\upharpoonright v_{0}}$ in which Prover gets the payoff $1$ if and only if the formula $\varphi$ is satisfiable. * • If such an SPE exists: let us write it $\bar{\sigma}$, and let $\rho=\langle\bar{\sigma}\rangle_{C_{1}}$. Since $\mu_{\mathbb{P}}(\rho)=1$, the sink state $\bot$ is never visited. Let us define a valuation $\nu$ on $X$ as follows: for each variable $x$, we have $\nu(x)=1$ if and only if $\mu_{x}(\rho)<1$. Now, let $C$ be a clause of $\varphi$: since $C$, as a state, is necessarily visited infinitely often and with a fixed frequency in the play $\rho$ (because no player ever goes to the sink state $\bot$), one of its successors, say $(C,L)$, is visited with a non-negligible frequency (more formally, the time between two occurrences of $(C,L)$ is bounded). If $L$ is a positive literal, say $x$, then $\mu_{x}(\rho)<1$, hence by definition of $\nu$ we have $\nu(x)=1$ and the clause $C$ is satisfied. If $L$ has the form $\neg x$, then each time the state $(C,\neg x)$ is traversed, player $x$ has the possibility to deviate and to go to the sink state $\bot$, where he is sure to get the payoff $1$. Since $\bar{\sigma}$ is an SPE, it means that he already gets the payoff $1$ in the play $\rho$. By definition of $\nu$, we then have $\nu(x)=0$, hence the literal $\neg x$ is satisfied, and hence so is the clause $C$. The valuation $\nu$ satisfies all the clauses of $\varphi$, and therefore satisfies the formula $\varphi$ itself. * • If $\varphi$ is satisfied by some valuation $\nu$: let us define a strategy profile $\bar{\sigma}$ by: * – for each history $hC$ where $C$ is a clause of $\varphi$, $\sigma_{\mathbb{P}}(hC)=(C,L)$ where $L$ is a literal of $C$ that is satisfied by the valuation $\nu$; * – and for each history $h(C,\neg x)$ where $C$ is a clause of $\varphi$ and $x$ is a variable, $\sigma_{x}(h(C,\neg x))=\bot$ if and only if $\nu(x)=1$.
Any other state has only one successor, hence we have now completely defined a strategy profile. Now, let us prove that it is an SPE in which Prover gets the payoff $1$. Let $hC$ be a history, where $C$ is a clause of $\varphi$. We want to prove that $\bar{\sigma}_{\upharpoonright hC}$ is a Nash equilibrium in which Prover gets the payoff $1$. Let $\rho=\langle\bar{\sigma}_{\upharpoonright hC}\rangle_{C}$. If $\mu_{\mathbb{P}}(\rho)<1$, i.e. if $\rho$ is of the form $hD(D,\neg x)\bot^{\omega}$, then by definition of $\sigma_{x}$ we have $\nu(x)=1$; but then the literal $\neg x$ is not satisfied by $\nu$, and we cannot have $\sigma_{\mathbb{P}}(hD)=(D,\neg x)$: contradiction. Hence the play $\rho$ never reaches the state $\bot$, Prover gets the payoff $1$, and as a consequence she does not have any profitable deviation. Now, if another player $x$ has a profitable deviation, it means that he does not get the payoff $1$ in $\rho$, and therefore that some state of the form $(D,x)$ is visited infinitely often (with non-negligible frequency). But then, since Prover chooses to go to the state $(D,x)$, the literal $x$ is satisfied by $\nu$, i.e. $\nu(x)=1$. In that case, if some clause $D^{\prime}$ contains the literal $\neg x$, that literal is not satisfied by $\nu$, and therefore the strategy $\sigma_{\mathbb{P}}$, as we defined it, never chooses the transition to the state $(D^{\prime},\neg x)$, the only kind of state where player $x$ would have the possibility to deviate from his strategy: contradiction.
Finally, after a history of the form $h(C,L)$, either: * – $L=\neg x$ with $\nu(x)=1$, and in that case, we have $\langle\bar{\sigma}_{\upharpoonright h(C,L)}\rangle_{(C,L)}=(C,L)\bot^{\omega}$, player $x$ gets the payoff $1$, and no player has a profitable deviation; * – or $L$ is a positive literal, and then there exists only one transition from the state $(C,L)$, to another clause $D$, and we go back to the previous case; * – or $L=\neg x$ with $\nu(x)=0$, and in that case, we have $\sigma_{x}(C,L)=D$ where $D$ is the following clause, and by the first case the strategy profile $\bar{\sigma}_{\upharpoonright h(C,\neg x)D}$ is a Nash equilibrium. Moreover, since the literal $\neg x$ is not satisfied by $\nu$, the play $\langle\bar{\sigma}_{\upharpoonright h(C,\neg x)D}\rangle_{D}$ never traverses any state of the form $(D^{\prime},\neg x)$ again, hence player $x$ gets the payoff $1$, and therefore has no profitable deviation: the strategy profile $\bar{\sigma}_{\upharpoonright h(C,\neg x)}$ is a Nash equilibrium. The constrained SPE existence problem is NP-hard in mean-payoff games. ∎ ## Appendix M Construction of the automaton solving the SPE constrained existence problem In the proof of Theorem 6, we invoked a multi-mean-payoff automaton $\mathcal{A}_{\lambda}$, defined from a requirement $\lambda$ and two thresholds $\bar{x},\bar{y}\in\mathbb{Q}^{\Pi}$, that recognizes the language of the plays $\rho\in V^{\omega}$ that are $\lambda$-consistent and satisfy $\bar{x}\leq\mu(\rho)\leq\bar{y}$. We give here the details of its construction. * • The state space of $\mathcal{A}_{\lambda}$ is: $Q=V\times 2^{V},$ where a state $(v,M)$ must be interpreted as follows: we are currently in the vertex $v$, and we have already traversed the states of the set $M$. The initial state is $(v_{0},\\{v_{0}\\})$. * • The automaton $\mathcal{A}_{\lambda}$ is $2\mathrm{card}\Pi$-dimensional, and its dimension set is $\Pi\times\\{\lambda,0\\}$.
* • The transitions of $\mathcal{A}_{\lambda}$ are all the transitions of the form: $(v,M)(w,M\cup\\{w\\})$ where $vw\in E$. Each such transition is labelled by the vertex $v$ as a letter of the alphabet $V$, and is weighted by: * – $\pi_{i}(vw)-\underset{u\in M}{\max}\leavevmode\nobreak\ \lambda(u)$ on each dimension $(i,\lambda)$; * – $\pi_{i}(vw)$ on each dimension $(i,0)$. * • A run $\rho$ of $\mathcal{A}_{\lambda}$ is accepting if its mean-payoff is nonnegative along each dimension $(i,\lambda)$, and its mean-payoff along each dimension $(i,0)$ belongs to the interval $[x_{i},y_{i}]$. ## Appendix N Some examples of negotiation sequences We gather in this section some examples that may be of interest to the reader who wants a complete overview of the behaviour of the negotiation function on mean-payoff games. For all of them, we computed the negotiation sequence, as defined in Appendix I. For some of them, we simply give the negotiation sequence; for the most important ones, we give a complete explanation of how we computed it, using the abstract negotiation game, as defined in Appendix D. ###### Example 11. Let us take again the game of Example 2, and let us give (in red) the values of $\lambda_{1}=\mathrm{nego}(\lambda_{0})$, which correspond to the antagonistic values. (Figure: the game, annotated with the values of $\lambda_{1}$: $1$, $2$, $1$, $2$.) At the second step, let us execute the abstract game on the state $a$, with the requirement $\lambda_{1}$: whatever Prover proposes at first, Challenger has the possibility to deviate and to reach the state $b$. Then, Prover has to propose a $\lambda_{1}$-consistent play from the state $b$, i.e.
a play in which player $\Circle$ gets at least the payoff $2$: such a play necessarily ends in the state $d$, and gives player $\Box$ the payoff $2$. The other states keep the same values. (Figure: the game, annotated with the values of $\lambda_{2}$: $2$, $2$, $1$, $2$.) But then, at the third step, from the state $b$: whatever Prover proposes at first, Challenger can deviate to reach the state $a$. Then, Prover has to propose a $\lambda_{2}$-consistent play from $a$, i.e. a play in which player $\Circle$ gets at least the payoff $2$: such a play necessarily ends in the state $d$, i.e. after possibly some prefix, Prover proposes the play $abd^{\omega}$. But then, Challenger can always deviate to go back to the state $a$; and the play which is thus created is $(ab)^{\omega}$, which gives player $\Box$ the payoff $3$. (Figure: the game, annotated with the values of $\lambda_{3}$: $2$, $3$, $1$, $2$.) Finally, from the states $a$ and $b$, there exists no $\lambda_{3}$-consistent play, and therefore no $\lambda_{3}$-rational strategy profile. (Figure: the game, annotated with the values of $\lambda_{4}$: $+\infty$, $+\infty$, $1$, $2$.) We then have $\lambda_{n}=\lambda_{4}$ for all $n\geq 4$. ###### Example 12.
In this example, we show a game that can be turned into a family of games where the negotiation function needs as many steps as there are states to reach its limit: when the requirement changes in some state, it opens new possibilities from the neighbouring states, and so on. (Figure: the game, annotated with the values of $\lambda_{1}$: $1$, $0$, $0$, $0$, $2$.) (Figure: the same game with the values of $\lambda_{2}$: $1$, $1$, $0$, $2$, $2$.) (Figure: the same game with the values of $\lambda_{3}$: $1$, $1$, $2$, $2$, $2$.)
(Figure: the same game with the values of $\lambda_{4}$: $1$, $2$, $2$, $2$, $2$.) (Figure: the same game with the values of $\lambda_{5}$: $2$, $2$, $2$, $2$, $2$.) The requirement $\lambda_{5}$ is a fixed point of the negotiation function. ###### Example 13. In all the previous examples, all the games whose underlying graphs were strongly connected contained SPEs. Here is an example of a game with a strongly connected underlying graph that does not contain any SPE.
(Figure: the game of Example 13, annotated with the values of $\lambda_{1}$: $1$, $1$, $3$, $2$, $2$, $4$.)
(Figure: the same game with the values of $\lambda_{2}$: $2$, $1$, $3$, $2$, $2$, $4$.)
(Figure: the same game with the values of $\lambda_{3}$: $2$, $1$, $3$, $3$, $2$, $4$.)
(Figure: the same game with the values of $\lambda_{4}$: $+\infty$ in every state.) ###### Example 14. This example shows how a new requirement can emerge from the combination of several cycles.
Let $G$ be the following game: (Figure: the game $G$, annotated with the values of $\lambda_{1}$: $1$, $1$, $0$, $0$, $0$, $0$, $3$.) At the first step, the requirement $\lambda_{1}$ captures the antagonistic values. Then, from the state $c$, if player $\Box$ forces access to the state $b$, then player $\Circle$ must get at least $1$: the worst play that can be proposed to player $\Box$ is then $(babc)^{\omega}$, which gives player $\Box$ the payoff $\frac{3}{2}$. From the state $f$, if player $\Box$ forces access to the state $g$, then the worst play that can be proposed to them is $g^{\omega}$.
(Figure: the game $G$ with the values of $\lambda_{2}$: $1$, $1$, $\frac{3}{2}$, $0$, $0$, $2$, $3$.) Then, from the state $d$, if player $\Circle$ forces access to the state $c$, then player $\Box$ must get at least $\frac{3}{2}$: the worst play that can be proposed to player $\Circle$ is then $(cccd)^{\omega}$, which gives player $\Circle$ the payoff $\frac{1}{2}$. At the same time, from the state $e$, player $\Circle$ can now force access to the state $f$: then, the worst play that can be proposed to them is $fg^{\omega}$.
[Game graph with updated requirement $\lambda_{3}$, assigning the values $1,1,\frac{3}{2},\frac{1}{2},3,2,3$ to the states $a,\dots,g$.] But then, from the state $c$, player $\Box$ can now force the access to the state $e$: then, the worst play that can be proposed to them is $efg^{\omega}$.
[Game graph with updated requirement $\lambda_{4}$, assigning the values $1,1,2,\frac{1}{2},3,2,3$ to the states $a,\dots,g$.] And finally, from that point, if from the state $d$ player $\Circle$ forces the access to the state $c$, then player $\Box$ must get at least the payoff $2$; and therefore, the worst play that can be proposed to player $\Circle$ is now $(ccd)^{\omega}$, which gives her the payoff $\frac{2}{3}$.
[Game graph with updated requirement $\lambda_{5}$, assigning the values $1,1,2,\frac{2}{3},3,2,3$ to the states $a,\dots,g$.] The requirement $\lambda_{5}$ is a fixed point of the negotiation function.
# A Distributed Implementation of Steady-State Kalman Filter Jiaqi Yan, Xu Yang, Yilin Mo∗, and Keyou You The authors are with the Department of Automation, BNRist, Tsinghua University. $*$: Corresponding author. ###### Abstract This paper studies distributed state estimation in a sensor network, where $m$ sensors are deployed to infer the $n$-dimensional state of a Linear Time-Invariant (LTI) Gaussian system. Via a lossless decomposition of the optimal steady-state Kalman filter, we show that the problem of distributed estimation can be reformulated as the synchronization of homogeneous linear systems. Based on this decomposition, a distributed estimator is proposed, in which each sensor node runs a local filter using only its own measurement, along with a consensus algorithm to fuse the local estimates of all nodes. We prove that the average of the estimates from all sensors coincides with the optimal Kalman estimate, and that under certain conditions on the graph Laplacian matrix and the system matrix, the covariance of the estimation error is bounded and the asymptotic error covariance is derived. As a result, the distributed estimator is stable at each single node. We further show that the proposed algorithm has a low message complexity of $\min\{m,n\}$. Numerical examples are provided at the end to illustrate the efficiency of the proposed algorithm. ###### Index Terms: Distributed estimation, Kalman filter, Linear system synchronization, Consensus algorithm. ## I Introduction The past decades have witnessed remarkable research interest in multi-sensor networked systems. As one of its important focuses, distributed estimation has been widely studied in various applications including robot formation control, environment monitoring, and spacecraft navigation (see [1, 2, 3, 4, 5]). Compared with a centralized architecture, it provides better robustness, flexibility and reliability.
One fundamental problem in distributed estimation is to estimate the state of an LTI Gaussian system using multiple sensors, where the well-known Kalman filter provides the optimal solution in a centralized manner [6]. Thus, many research efforts have been devoted to the distributed implementation of the Kalman filter. For example, in an early work [7], the authors suggest a fusion algorithm for two-sensor networks, where the local estimate of the first sensor is treated as a pseudo measurement by the second one. Due to its ease of implementation, this approach has since inspired sequential fusion in multi-sensor networks [8, 9, 10], where multiple nodes repeatedly perform the two-sensor fusion in a sequential manner. As a result of this serial operation, these algorithms require a special communication topology which must be sequentially connected as a ring/chain. In [11], Olfati-Saber et al. consider a more general network topology. They introduce consensus algorithms into distributed estimation and propose the Kalman-Consensus Filter (KCF), where average consensus is performed on the local estimates. Since then, various consensus-based distributed estimators have been proposed in the literature [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. For example, instead of performing consensus on local estimates, [14] suggests achieving consensus respectively on noisy measurements and inverse-covariance matrices. On the other hand, Battistelli et al. [25] find that, by performing consensus on the Kullback-Leibler average of the local probability density functions, estimation stability is also guaranteed. They further prove that, if a single consensus step is used, this approach reduces to the well-known covariance intersection fusion rule [26, 27]. Since the consensus-based estimators usually perform multiple consensus steps during each sampling period, they achieve better estimation performance. In Fig.
1, we present the general information flow of the existing consensus-based estimation algorithms, where $\Delta_{i}(k)$ is the information transmitted by sensor $i$ to be fused by the consensus algorithm, which could be the local estimates ([12, 11]), measurements ([13, 15, 14, 28]), or information matrices ([25, 29]). It is noticed from the figure that the consensus/synchronization process is usually coupled with the local filter in these works, making it hard to analyze the performance of the local estimates. Due to this fact, while the aforementioned algorithms succeed in distributing the fusion task over multiple nodes and providing stable local estimates, i.e., the error covariance is proved to be bounded at each sensor, the exact error covariance can hardly be obtained. Moreover, global optimality (namely, whether the performance of the algorithm can converge to that of the centralized Kalman filter) is also difficult to analyze and guarantee in some works. Figure 1: The information flow of most existing algorithms, where sensors $i$ and $j$ are immediate neighbors. It is worth noticing that, in theory, the gain of the Kalman filter converges to a steady-state gain exponentially fast [30], and this gain can be calculated offline. Moreover, in practice, a fixed-gain estimator is usually implemented, which has the same asymptotic performance as the time-varying Kalman filter. Hence, this paper focuses on the distributed implementation of the centralized steady-state Kalman filter. In contrast to most existing algorithms, we decouple the local filter from the consensus process. Such decoupling enables us to provide a new framework for designing distributed estimators, by reformulating the problem of distributed state estimation as one of linear system synchronization.
We are hence able to leverage methodologies from the latter field to propose solutions for distributed estimation. To be specific, in the synchronization of linear systems, the dynamics of each agent are governed by an LTI system, whose control input is generated using the local information within its neighborhood, in order to achieve asymptotic consensus on the local states of the agents. Over the past years, much research effort has been devoted to this area (see [31, 32, 33, 34, 35, 36] for examples), designing synchronization algorithms that can handle various network constraints. Exploiting the results therein, the distributed estimator in this work is designed in two phases: 1) (Local measurement processing) A lossless decomposition of the steady-state Kalman filter is proposed, where each sensor node runs a local estimator based on this decomposition using solely its own measurement. 2) (Information fusion via consensus) Each sensor infers the local estimates of all the others via a modified consensus algorithm designed for achieving linear system synchronization. The contributions of this paper are summarized as follows: 1) By removing assumptions on the eigenvalues of the system matrix, this paper extends, in a non-trivial way, the results in [37], and thus develops local filters that losslessly decompose the Kalman filter for general systems. (Lemma 3) 2) Through the decomposition of the Kalman filter, this paper bridges two different fields and makes it possible to leverage a general class of algorithms designed for achieving the synchronization of linear systems to solve the problem of distributed state estimation. By doing so, we can propose stable distributed estimators under different communication constraints, such as time delays, switching topologies, random link failures, etc. (Theorem 4) 3) For a particular synchronization algorithm, e.g., that of [31], the stability criterion of the proposed estimator is established.
Moreover, in contrast to the existing literature, the covariance of the estimation error can be exactly derived by solving Lyapunov equations. (Theorem 2, Theorem 3, and Corollary 1) 4) The designed estimator enjoys a low communication cost, where the size of the message sent by each sensor is $\min\{m,n\}$, with $n$ and $m$ being the dimensions of the state and measurement, respectively. (Remark 6) Some preliminary results were reported in our previous work [38], where most of the proofs are omitted. This paper further extends the results in [38] by computing the exact asymptotic error covariance, instead of only showing the stability of the proposed algorithms. The extension to the more general random communication topology is also added. Moreover, a model reduction method is further proposed in this work to reduce the message complexity from $m$ to $\min\{m,n\}$. Notations: For vectors $v_{i}\in\mathbb{R}^{m_{i}}$, the vector $\left[v_{1}^{T},\ldots,v_{N}^{T}\right]^{T}$ is denoted by $\operatorname{col}(v_{1},\ldots,v_{N})$. Moreover, $A\otimes B$ denotes the Kronecker product of the matrices $A$ and $B$. Throughout this paper, we call a stochastic signal "stable" if its covariance is bounded at all times. The remainder of this paper is organized as follows. Section II introduces the preliminaries and formulates the problem of interest. A lossless decomposition of the optimal Kalman filter is given in Section III, where a model reduction approach is further proposed to reduce the system order. With the aim of realizing the optimal Kalman filter, distributed solutions for state estimation are given and analyzed in Section IV. We then discuss some extensions in Section V and validate the performance of the developed estimator through numerical examples in Section VI. Finally, Section VII concludes the paper.
## II Problem Formulation In this paper, we consider the LTI system given below: $x(k+1)=Ax(k)+w(k),$ (1) where $x(k)\in\mathbb{R}^{n}$ is the system state and $w(k)\sim\mathcal{N}(0,Q)$ is independent and identically distributed (i.i.d.) Gaussian noise with zero mean and covariance matrix $Q\geq 0$. The initial state $x(0)$ is also assumed to be Gaussian with zero mean and covariance matrix $\Sigma\geq 0$, and is independent of the process noise $\{w(k)\}$. A network consisting of $m$ sensors monitors the above system. The measurement of each sensor $i\in\{1,\cdots,m\}$ is given by (the results in this paper readily generalize to sensors with vector-valued outputs, by treating each entry independently as a scalar measurement): $y_{i}(k)=C_{i}x(k)+v_{i}(k),$ (2) where $y_{i}(k)\in\mathbb{R}$ is the output of sensor $i$, $C_{i}$ is an $n$-dimensional row vector, and $v_{i}(k)\in\mathbb{R}$ is Gaussian measurement noise. By stacking the measurement equations, one gets $y(k)=Cx(k)+v(k),$ (3) where $y(k)\triangleq\begin{bmatrix}y_{1}(k)\\ \vdots\\ y_{m}(k)\end{bmatrix},\;C\triangleq\begin{bmatrix}C_{1}\\ \vdots\\ C_{m}\end{bmatrix},\;v(k)\triangleq\begin{bmatrix}v_{1}(k)\\ \vdots\\ v_{m}(k)\end{bmatrix},$ (4) and $v(k)$ is zero-mean i.i.d. Gaussian noise with covariance $R\geq 0$, independent of $w(k)$ and $x(0)$. Throughout this paper, we assume that $(A,C)$ is observable. On the other hand, $(A,C_{i})$ need not be observable, i.e., a single sensor may not be able to observe the whole state space. ### II-A Preliminaries: the centralized Kalman filter If all measurements are collected at a single fusion center, the centralized Kalman filter is optimal for state estimation purposes, and provides a fundamental limit for all other estimation schemes.
For this reason, this part briefly reviews the centralized solution given by the Kalman filter. Let us denote by $P(k)$ the error covariance of the estimate given by the Kalman filter at time $k$. Since $(A,C)$ is observable, it is well known that the error covariance converges to a steady state [6]: $\displaystyle P=\lim_{k\rightarrow\infty}P(k).$ (5) Since the operation of a typical sensor network lasts for an extended period of time, we assume that the Kalman filter is in the steady state, or equivalently $\Sigma=P$ (notice that even if $\Sigma\neq P$, the Kalman estimate converges to the steady-state Kalman filter, i.e., the steady-state estimator is asymptotically optimal), which results in a steady-state Kalman filter with fixed gain $\displaystyle K=PC^{T}\left(CPC^{T}+R\right)^{-1}.$ (6) Accordingly, the optimal Kalman estimate is computed recursively as $\begin{split}\hat{x}(k+1)&=A\hat{x}(k)+K(y(k+1)-CA\hat{x}(k))\\ &=(A-KCA)\hat{x}(k)+Ky(k+1).\end{split}$ (7) It is clear that the optimal estimate (7) requires information from all sensors. However, in a distributed framework, each sensor can only communicate with its immediate neighbors, rendering the centralized solution impractical. Therefore, this paper is devoted to the implementation of the Kalman filter in a distributed fashion. ## III Decomposition of Kalman Filter In this section, we provide a local decomposition of the Kalman filter (7), in which the Kalman estimate is recovered as a linear combination of the estimates from local filters. This section extends, in a non-trivial way, the results in [37] by removing the assumptions on the eigenvalues of the system matrix therein, and thus proposes local filters for estimating general systems. The results in this part will further help us design distributed estimation algorithms in the subsequent sections.
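As a quick numerical sketch of the steady-state quantities (5)–(7), the steady-state covariance and gain can be obtained from SciPy's discrete-time algebraic Riccati equation solver (the system matrices below are illustrative, not taken from this paper's examples):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative system: two states, two scalar sensors stacked into C.
A = np.array([[0.95, 0.1],
              [0.0,  0.8]])
C = np.eye(2)
Q = 0.01 * np.eye(2)   # process noise covariance
R = 0.01 * np.eye(2)   # measurement noise covariance

# P solves the DARE for the (prediction) error covariance, i.e. the limit (5).
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # steady-state gain (6)
M = A - K @ C @ A                              # closed-loop matrix in (7)
print(np.max(np.abs(np.linalg.eigvals(M))))    # spectral radius of A - KCA
```

The closed-loop matrix $A-KCA$ being Schur stable (spectral radius below one) is exactly the property that the decomposition in this section relies on.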
Without loss of generality, let the system matrix be $A=\begin{bmatrix}A^{u}&\\ &A^{s}\end{bmatrix},$ (8) where $A^{u}\in\mathbb{R}^{n^{u}\times n^{u}}$ and $A^{s}\in\mathbb{R}^{n^{s}\times n^{s}}$, such that every eigenvalue of $A^{u}$ lies on or outside the unit circle while the eigenvalues of $A^{s}$ are strictly within the unit circle. It thus follows from (1) that $x^{s}(k+1)=A^{s}x^{s}(k)+Jw(k),$ (9) where $J=\begin{bmatrix}0&I_{n^{s}}\end{bmatrix}\in\mathbb{R}^{n^{s}\times n}$ and $x(k)=\operatorname{col}(x^{u}(k),x^{s}(k))$. Accordingly, $C_{i}$ is partitioned as $\displaystyle C_{i}=\begin{bmatrix}C_{i}^{u}&C_{i}^{s}\end{bmatrix},$ (10) where $C_{i}^{u}\in\mathbb{R}^{1\times n^{u}}$ and $C_{i}^{s}\in\mathbb{R}^{1\times n^{s}}$. ### III-A Local decomposition of Kalman filter To locally decompose the Kalman filter, we first introduce the following results, the proofs of which are given in the appendix: ###### Proposition 1. If $\Lambda$ is a non-derogatory (a matrix is non-derogatory if every one of its eigenvalues has geometric multiplicity $1$) Jordan matrix, then both $(\Lambda,\,\mathbf{1})$ and $(\Lambda^{T},\mathbf{1})$ are controllable. ###### Lemma 1. Let $(X,p)$ be controllable, where $X\in\mathbb{R}^{n\times n}$ and $p\in\mathbb{R}^{n}$. For any $q\in\mathbb{R}^{n}$, if $X+pq^{T}$ and $X$ do not share any eigenvalues, then $(X+pq^{T},q^{T})$ is observable, or equivalently $(X^{T}+qp^{T},q)$ is controllable. ###### Lemma 2. Let $(X,p)$ be controllable, where $X\in\mathbb{R}^{n\times n}$ and $p\in\mathbb{R}^{n}$. Denote the characteristic polynomial of $X$ by $\varphi(s)=\det(sI-X)$. Let $Y\in\mathbb{R}^{m\times m}$ and $q\in\mathbb{R}^{m}$. Suppose that $\varphi(Y)q=0;$ (11) then there exists $T\in\mathbb{R}^{m\times n}$ which solves the equations below: $TX=YT,\;Tp=q.$ (12) With the above preparations, let us consider the optimal Kalman estimate in (7).
For simplicity, we denote by $K_{j}$ the $j$-th column of the Kalman gain $K$; namely, $K=[K_{1},\cdots,K_{m}]$. Accordingly, (7) can be rewritten as $\hat{x}(k+1)=(A-KCA)\hat{x}(k)+\sum_{i=1}^{m}K_{i}y_{i}(k+1).$ (13) Notice that $A-KCA$ is stable. It is clear that we can always find a Jordan matrix $\Lambda\in\mathbb{R}^{n\times n}$ such that $\Lambda$ is strictly stable, non-derogatory, and has the same characteristic polynomial as $A-KCA$. In view of Proposition 1, we conclude that $(\Lambda,\mathbf{1})$ is controllable. Therefore, by Lemma 2, we can always find matrices $F_{i}$ such that the following equalities hold for $i=1,\cdots,m$: $F_{i}\Lambda=(A-KCA)F_{i},\;F_{i}\mathbf{1}_{n}=K_{i}.$ (14) Suppose each sensor $i$ runs the following local filter based solely on its own measurements: $\hat{\xi}_{i}(k+1)=\Lambda\hat{\xi}_{i}(k)+\operatorname{\textbf{1}}_{n}y_{i}(k+1),$ (15) where $\hat{\xi}_{i}(k)$ is the output of the local filter of sensor $i$, and $\operatorname{\textbf{1}}_{n}\in\mathbb{R}^{n}$ is the vector of all ones. It then holds that the optimal Kalman filter can be decomposed as a weighted sum of the local estimates $\hat{\xi}_{i}(k)$, as stated below: ###### Lemma 3. Suppose each sensor performs the local filter (15). The optimal Kalman estimate (7) can be recovered from the local estimates $\hat{\xi}_{i}(k),i=1,2,\cdots,m$ as $\hat{x}(k)=\sum_{i=1}^{m}F_{i}\hat{\xi}_{i}(k),$ (16) where $F_{i}$ is defined in (14). ###### Proof. Multiplying both sides of the recursion (15) by $F_{i}$, we arrive at $F_{i}\hat{\xi}_{i}(k+1)=F_{i}\Lambda\hat{\xi}_{i}(k)+F_{i}\operatorname{\textbf{1}}_{n}y_{i}(k+1).$ (17) It then follows from (14) that $F_{i}\hat{\xi}_{i}(k+1)=(A-KCA)F_{i}\hat{\xi}_{i}(k)+K_{i}y_{i}(k+1).$ (18) Summing the above equation over all $i=1,\cdots,m$ and comparing with (13), we conclude that (16) holds. ∎ Notice that the equality in Lemma 3 holds exactly.
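The exactness of the decomposition (16) can be checked numerically. The sketch below uses an illustrative diagonal system (chosen so that $A-KCA$ has distinct real eigenvalues, letting $\Lambda$ be taken diagonal) and solves the constraints (14) as a stacked linear system; none of the matrices come from the paper's examples:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, m, T = 2, 2, 50

A = np.diag([1.2, 0.5])     # one unstable and one stable mode (illustrative)
C = np.eye(m)               # sensor i observes y_i = C_i x + v_i
Q = 0.01 * np.eye(n)
R = 0.01 * np.eye(m)

# Steady-state Kalman gain (6) and closed-loop matrix of (7).
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
M = A - K @ C @ A

# Here M has distinct real eigenvalues, so a diagonal (Jordan) Lambda works
# and (Lambda, 1) is controllable.
Lam = np.diag(np.linalg.eigvals(M).real)
ones = np.ones(n)

def solve_F(Ki):
    """Solve F Lam = M F and F 1 = K_i, i.e. the linear constraints (14)."""
    lhs = np.vstack([np.kron(Lam.T, np.eye(n)) - np.kron(np.eye(n), M),
                     np.kron(ones[None, :], np.eye(n))])
    rhs = np.concatenate([np.zeros(n * n), Ki])
    vecF = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
    return vecF.reshape(n, n, order="F")

F = [solve_F(K[:, i]) for i in range(m)]

# Run the centralized filter (7) and the local filters (15) side by side.
x = np.zeros(n); xhat = np.zeros(n)
xi = [np.zeros(n) for _ in range(m)]
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(m), R)
    xhat = M @ xhat + K @ y                      # centralized update (7)
    for i in range(m):
        xi[i] = Lam @ xi[i] + ones * y[i]        # local filter (15)

recovered = sum(F[i] @ xi[i] for i in range(m))  # weighted sum (16)
print(np.allclose(recovered, xhat))
```

Since both recursions start from zero and satisfy the same update once (14) holds, the weighted sum matches the centralized estimate up to floating-point precision.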
That means the Kalman filter can be perfectly recovered by (16). We hence call (16) a lossless decomposition of the optimal Kalman filter. To better illustrate the idea, the information flows of the centralized Kalman filter and of the local decomposition (16) are compared in Fig. 2. Figure 2: The information flow of the centralized Kalman filter (left), and of the local decomposition (16) of the Kalman filter (right). ### III-B A reformulation of (15) with stable inputs It is noted that the system matrix $A$ may be unstable, which implies that the covariance of the measurement $y(k)$ is not necessarily bounded. As a result, we redesign (15) using the stable residual $z_{i}(k)$ as input instead of the raw measurement $y_{i}(k)$. The main reason for this reformulation is to make the consensus algorithm feasible and to develop stable distributed estimators, as will be further discussed in the proof of Theorem 3. To this end, notice that $(\Lambda,\operatorname{\textbf{1}}_{n})$ is controllable, $\Lambda$ is stable, and every eigenvalue of $A^{u}$ is unstable. Hence, we can always find a non-zero $\beta\in\mathbb{R}^{n}$ and compute $S=\Lambda+\operatorname{\textbf{1}}_{n}\beta^{T},$ (19) such that 1. the characteristic polynomial of $A^{u}$ divides $\phi(s)$, where $\phi(s)$ is the characteristic polynomial of $S$, and $\phi(s)/\det(sI-A^{u})$ has only strictly stable roots; 2. $S$ does not share eigenvalues with $\Lambda$. Hence, by virtue of Lemma 1, $(S^{T},\beta)$ is controllable. ###### Remark 1. Notice that through $\beta$, we place the eigenvalues of $S$ at locations consisting of two parts: the unstable ones, which coincide with the eigenvalues of $A^{u}$, and the stable ones, which are freely assigned but must not be eigenvalues of $\Lambda$. This is feasible since $(\Lambda,\operatorname{\textbf{1}}_{n})$ is controllable.
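The eigenvalue placement described in Remark 1 is a standard single-input pole-placement problem; a small sketch with an illustrative $\Lambda$ and an illustrative target spectrum (one "unstable" eigenvalue standing in for those of $A^{u}$):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative stable, non-derogatory Lambda (diagonal, distinct entries).
Lam = np.diag([0.4, 0.2, -0.3])
b = np.ones((3, 1))                       # the all-ones input vector

# Target spectrum for S: one unstable eigenvalue (standing in for A^u)
# plus freely chosen stable ones, none of them an eigenvalue of Lambda.
desired = np.array([1.2, 0.5, 0.1])

# place_poles returns K with eig(Lam - b K) = desired, so beta^T = -K.
fb = place_poles(Lam, b, desired)
beta = -fb.gain_matrix.ravel()
S = Lam + np.outer(np.ones(3), beta)      # S = Lambda + 1 beta^T, as in (19)
print(np.sort(np.linalg.eigvals(S).real))
```

Controllability of $(\Lambda,\operatorname{\textbf{1}}_{n})$ is exactly what guarantees that such a $\beta$ exists for any desired (distinct) spectrum.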
Next, let us consider the filter below: $\begin{split}z_{i}(k)&=y_{i}(k+1)-\beta^{T}\hat{\xi}_{i}(k),\\ \hat{\xi}_{i}(k+1)&=S\hat{\xi}_{i}(k)+\operatorname{\textbf{1}}_{n}z_{i}(k),\end{split}$ (20) where $\beta$ and $S$ are given by (19). In the following lemma, we show that (20) also losslessly decomposes the Kalman filter; moreover, the covariance of $z_{i}(k)$ is bounded at all times. ###### Lemma 4. Consider the local filter (20). The following statements hold at every instant $k$: 1. (20) has the same input-output relationship as (15): given the same input $y_{i}(k)$, they yield the same output $\hat{\xi}_{i}(k)$; 2. $z_{i}(k)$ is stable, i.e., the covariance of $z_{i}(k)$ is always bounded. ###### Proof. The proof is given in Appendix-C. ∎ ###### Remark 2. If $A$ has unstable modes, the previous discussion shows that (15) can be seen as a linear system with a stable system matrix $\Lambda$ but an unstable input $y_{i}(k+1)$. In contrast, (20) has an unstable system matrix $S$ but a stable input $z_{i}(k)$. This formulation is essential to guarantee the stability of the local estimators, as will be seen in the proof of Theorem 4. ### III-C A reduced-order decomposition of Kalman filter when $n<m$ To simplify notation, we define the following aggregated matrices: $\tilde{S}\triangleq I_{m}\otimes S,\;\tilde{L}_{i}\triangleq e_{i}\otimes\operatorname{\textbf{1}}_{n},\;\tilde{L}\triangleq[\tilde{L}_{1},\cdots,\tilde{L}_{m}]=I_{m}\otimes\operatorname{\textbf{1}}_{n},$ (21) where $I_{m}$ is the $m$-dimensional identity matrix and $e_{i}$ is the $i$-th canonical basis vector in $\mathbb{R}^{m}$.
We thus collect (16) and (20) in matrix form as $\begin{split}\begin{bmatrix}\hat{\xi}_{1}(k+1)\\ \vdots\\ \hat{\xi}_{m}(k+1)\end{bmatrix}&=\tilde{S}\begin{bmatrix}\hat{\xi}_{1}(k)\\ \vdots\\ \hat{\xi}_{m}(k)\end{bmatrix}+\tilde{L}\begin{bmatrix}z_{1}(k)\\ \vdots\\ z_{m}(k)\end{bmatrix},\\ \hat{x}(k)&=F\begin{bmatrix}\hat{\xi}_{1}(k)\\ \vdots\\ \hat{\xi}_{m}(k)\end{bmatrix},\end{split}$ (22) where $F\triangleq\left[F_{1},F_{2},\cdots,F_{m}\right]$. By Lemmas 3 and 4, (22) represents a lossless decomposition of the Kalman filter. Notice that the order of system (22) is $mn$. In this part, we show that by performing model reduction, this order can be further reduced to $n^{2}$ when the state dimension is smaller than the number of sensors, namely $n<m$. This will be useful for achieving a low communication complexity in the distributed framework. To proceed, we regard the input and output of (22) as $z(k)$ and $\hat{x}(k)$, respectively, where $z(k)\triangleq\begin{bmatrix}z_{1}(k),\cdots,z_{m}(k)\end{bmatrix}^{T}.$ (23) Let us introduce the lemma below, the proof of which is given in Appendix-D: ###### Lemma 5. Any matrix $W\in\mathbb{R}^{n\times n}$ can be decomposed as $W=H_{1}\varphi_{1}(S)+H_{2}\varphi_{2}(S)+\cdots+H_{n}\varphi_{n}(S),$ (24) where $H_{i}\triangleq e_{i}\beta^{T}$, $\{\varphi_{j}(S)\}$ are certain polynomials of $S$, and $S$ and $\beta$ are given in (19). As a direct consequence of Lemma 5, any $F_{i}$ in (16) can be rewritten using polynomials of $S$, i.e., $\{p_{ij}(S)\}$: $F_{i}=\sum_{j=1}^{n}H_{j}p_{ij}(S).$ (25) For simplicity, we also denote $T_{i}\triangleq[(p_{i1}(S)\operatorname{\textbf{1}}_{n})^{T},\cdots,(p_{in}(S)\operatorname{\textbf{1}}_{n})^{T}]^{T}.$ (26) It is then proved in the theorem below that system (22) can be reduced to a lower order: ###### Theorem 1.
Consider the following system: $\begin{split}\begin{bmatrix}\theta_{1}(k+1)\\ \vdots\\ \theta_{n}(k+1)\end{bmatrix}&=(I_{n}\otimes S)\begin{bmatrix}\theta_{1}(k)\\ \vdots\\ \theta_{n}(k)\end{bmatrix}+T\begin{bmatrix}z_{1}(k)\\ \vdots\\ z_{m}(k)\end{bmatrix},\\ \tilde{x}(k)&=H\begin{bmatrix}\theta_{1}(k)\\ \vdots\\ \theta_{n}(k)\end{bmatrix},\end{split}$ (27) where $T=[T_{1},T_{2},\cdots,T_{m}],\;H=[H_{1},H_{2},\cdots,H_{n}].$ (28) It holds that system (27) shares the same transfer function as (22). ###### Proof. The proof is presented in Appendix-E. ∎ Therefore, by performing model reduction, we obtain system (27), which shares the same transfer function as (22) but has a reduced order. As proved previously, the output of (22) is the optimal Kalman estimate; consequently, (27) also has the Kalman estimate as its output, and the Kalman filter can be perfectly recovered by (27) as well. We hereby refer to both (22) and (27) as lossless decompositions of the Kalman filter. Depending on the sizes of $m$ and $n$, one should use the system with the smaller dimension to represent the centralized Kalman filter. ## IV Local Implementation of Kalman filter From Fig. 2, it is clear that the local decomposition proposed in Section III is still centralized, as a fusion center is required to compute the weighted sum. In this section, we provide distributed algorithms for implementing it, where each sensor node performs local filtering using the results from Section III, and global fusion by exchanging information with its neighbors and running a consensus algorithm. Depending on whether $n$ is greater than $m$, different algorithms will be presented to achieve a low communication complexity.
We use a weighted undirected graph $\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{A}\}$ to model the interaction among the nodes, where $\mathcal{V}=\{1,2,...,m\}$ is the set of sensors, $\mathcal{E}\subset\mathcal{V}\times\mathcal{V}$ is the set of edges, and $\mathcal{A}=\left[a_{ij}\right]$ is the weighted adjacency matrix. It is assumed that $a_{ij}\geq 0$ and $a_{ij}=a_{ji}$ for all $i,j\in\mathcal{V}$. An edge between sensors $i$ and $j$ is denoted by $e_{ij}\in\mathcal{E}$, indicating that these two agents can communicate directly with each other. Note that $e_{ij}\in\mathcal{E}$ if and only if $a_{ij}>0$. Denoting the degree matrix as $\mathcal{D}\triangleq\operatorname{diag}\left(\operatorname{deg}_{1},\ldots,\operatorname{deg}_{m}\right)$ with $\operatorname{deg}_{i}=\sum_{j=1}^{m}a_{ij}$, the Laplacian matrix of $\mathcal{G}$ is defined as $\mathcal{L}_{\mathcal{G}}\triangleq\mathcal{D}-\mathcal{A}$. In this paper, a connected network is considered. We can therefore arrange the eigenvalues of the Laplacian matrix as $0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{m}$. ### IV-A Description of the distributed estimator In light of (16), the optimal estimate fuses $\hat{\xi}_{i}(k)$ from all sensors. However, in a distributed framework, each sensor can only access the information in its neighborhood. Hence, each sensor $i$ needs, through communication over the network, to infer $\hat{\xi}_{j}(k)$ for all $j\in\mathcal{V}$ in order to achieve a stable local estimate. Let us denote by $\eta_{i,j}(k)$ the inference of sensor $i$ about sensor $j$. As will be proved later in this section, by running a synchronization algorithm, $\eta_{i,j}(k)$ can track $\frac{1}{m}\hat{\xi}_{j}(k)$ with bounded error. Hence, every sensor $i$ can make a decent inference about $\hat{\xi}_{j}(k)$.
By collecting its inferences about all sensors, each sensor $i$ keeps a local state $\eta_{i}(k)\triangleq\begin{bmatrix}\eta_{i,1}(k)\\ \vdots\\ \eta_{i,m}(k)\end{bmatrix}\in\mathbb{R}^{mn},$ (29) which is updated by the synchronization algorithm. Since $\eta_{i}(k)$ contains a fair inference about every $\hat{\xi}_{j}(k)$, $j\in\mathcal{V}$, sensor $i$ finally uses it to compute a stable local estimate. To be concrete, let us define the message sent by agent $i$ at time $k$ as $\Delta_{i}(k)\triangleq\tilde{\Gamma}\eta_{i}(k)\in\mathbb{R}^{m}$, where $\tilde{\Gamma}=I_{m}\otimes\Gamma$ and $\Gamma$ is a design parameter to be given later. We are now ready to present the main algorithm. Suppose each node $i$ is initialized with $\hat{x}_{i}(0)=0$ and $\eta_{i}(0)=0$. At any instant $k>0$, its update is outlined in Algorithm 1, the information flow of which is shown in Fig. 3. Compared with Fig. 2, the proposed algorithm operates in a distributed manner. ###### Remark 3. Instead of transmitting the raw estimate $\eta_{i}(k)\in\mathbb{R}^{mn}$, each agent sends a “coded” vector $\Delta_{i}(k)$ of smaller size $m$. 1: Using the latest measurement from itself, sensor $i$ computes the local residual and updates the local estimate by $\begin{split}z_{i}(k)&=y_{i}(k+1)-\beta^{T}\hat{\xi}_{i}(k),\\ \hat{\xi}_{i}(k+1)&=S\hat{\xi}_{i}(k)+\operatorname{\textbf{1}}_{n}z_{i}(k).\end{split}$ (30) 2: Compute $\Delta_{i}(k)=\tilde{\Gamma}\eta_{i}(k)$, collect $\Delta_{j}(k)$ from the neighbors, and fuse the neighboring information with the consensus algorithm as $\eta_{i}(k+1)=\tilde{S}\eta_{i}(k)+\tilde{L}_{i}z_{i}(k)+\tilde{B}\sum_{j=1}^{m}a_{ij}(\Delta_{j}(k)-\Delta_{i}(k)),$ (31) where $\tilde{S}$ and $\tilde{L}_{i}$ are given in (21), and $\tilde{B}\triangleq I_{m}\otimes\operatorname{\textbf{1}}_{n}$.
3: Update the fused estimate of the system state as $\breve{x}_{i}(k+1)=mF\eta_{i}(k+1).$ (32) 4: Transmit the new message $\Delta_{i}(k+1)$ to the neighbors. Algorithm 1: Distributed estimation algorithm for sensor $i$. Figure 3: The information flow of Algorithm 1, where nodes $i$ and $j$ are immediate neighbors. ### IV-B Performance analysis This part is devoted to the performance analysis of Algorithm 1. We first provide the following theorem: ###### Theorem 2. With Algorithm 1, the average of the fused estimates from all sensors coincides with the optimal Kalman estimate at every instant $k$. That is, $\frac{1}{m}\sum_{i=1}^{m}\breve{x}_{i}(k)=\hat{x}(k),\;\forall k\geq 0.$ (33) ###### Proof. Summing (31) over all $i=1,2,...,m$ yields $\sum_{i=1}^{m}\eta_{i}(k+1)=\tilde{S}\sum_{i=1}^{m}\eta_{i}(k)+\sum_{i=1}^{m}\tilde{L}_{i}z_{i}(k),$ (34) where we use the fact that $a_{ij}=a_{ji}$ for all $i,j\in\mathcal{V}$. Comparing this with (20), it holds for every instant $k$ and every $j\in\mathcal{V}$ that $\hat{\xi}_{j}(k)=\sum_{i=1}^{m}\eta_{i,j}(k).$ (35) Therefore, the following equation is satisfied for all $k\geq 0$: $\begin{split}\frac{1}{m}\sum_{i=1}^{m}\breve{x}_{i}(k)&=\sum_{i=1}^{m}F\eta_{i}(k)=\sum_{i=1}^{m}\sum_{j=1}^{m}F_{j}\eta_{i,j}(k)\\ &=\sum_{j=1}^{m}F_{j}\Big{[}\sum_{i=1}^{m}\eta_{i,j}(k)\Big{]}=\sum_{j=1}^{m}F_{j}\hat{\xi}_{j}(k)=\hat{x}(k).\end{split}$ (36) This completes the proof. ∎ On the other hand, to establish the stability of the proposed estimator, it remains to prove the boundedness of the error covariance.
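Before turning to stability, the average property of Theorem 2 can be verified numerically. The sketch below assumes a stable $A$ (so that all signals remain bounded without designing a stabilizing $\Gamma$), an illustrative $\beta$ and $\Gamma$, and a two-sensor graph; the identity (33) only requires the symmetry $a_{ij}=a_{ji}$:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, m, T = 2, 2, 60

A = np.diag([0.9, 0.5])       # stable A: every signal stays bounded
C = np.eye(m)
Q = 0.01 * np.eye(n)
R = 0.01 * np.eye(m)

P = solve_discrete_are(A.T, C.T, Q, R)           # steady-state covariance (5)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)     # gain (6)
M = A - K @ C @ A

Lam = np.diag(np.linalg.eigvals(M).real)         # distinct real eigenvalues
beta = np.array([0.05, -0.03])                   # illustrative beta; S stable
S = Lam + np.outer(np.ones(n), beta)             # (19)
ones = np.ones(n)

def solve_F(Ki):                                 # constraints (14)
    lhs = np.vstack([np.kron(Lam.T, np.eye(n)) - np.kron(np.eye(n), M),
                     np.kron(ones[None, :], np.eye(n))])
    rhs = np.concatenate([np.zeros(n * n), Ki])
    return np.linalg.lstsq(lhs, rhs, rcond=None)[0].reshape(n, n, order="F")

F = np.hstack([solve_F(K[:, i]) for i in range(m)])   # F = [F_1, ..., F_m]

Adj = np.array([[0., 1.], [1., 0.]])     # two sensors, one edge
Gamma = np.array([[0.1, 0.2]])           # arbitrary gain for this check
S_t = np.kron(np.eye(m), S)              # S tilde
B_t = np.kron(np.eye(m), ones[:, None])  # B tilde
G_t = np.kron(np.eye(m), Gamma)          # Gamma tilde

x = np.zeros(n); xhat = np.zeros(n)
xi = np.zeros((m, n)); eta = np.zeros((m, m * n))
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(m), R)
    xhat = M @ xhat + K @ y                               # centralized (7)
    Delta = eta @ G_t.T                                   # messages Delta_i
    new_eta = np.empty_like(eta)
    for i in range(m):
        z = y[i] - beta @ xi[i]                           # residual, step 1
        coupling = sum(Adj[i, j] * (Delta[j] - Delta[i]) for j in range(m))
        new_eta[i] = S_t @ eta[i] + np.kron(np.eye(m)[i], ones) * z \
            + B_t @ coupling                              # consensus, step 2
        xi[i] = S @ xi[i] + ones * z                      # local filter, step 1
    eta = new_eta

xbr = m * eta @ F.T                                       # fused estimates (32)
print(np.allclose(xbr.mean(axis=0), xhat))                # Theorem 2
```

Boundedness of each individual $\breve{x}_{i}(k)$, by contrast, depends on the choice of $\Gamma$ and on the graph, which is the subject of the analysis that follows.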
Towards this end, we introduce the following lemma, the condition of which is characterized in terms of a certain relation between the Mahler measure (the absolute product of unstable eigenvalues of $S$) and the graph condition number (the ratio of the maximum and minimum nonzero eigenvalues of the Laplacian matrix): ###### Lemma 6. Suppose that the product of all unstable eigenvalues of matrix $S$ meets the following condition: $\prod_{j}|\lambda_{j}^{u}(S)|<\frac{1+\mu_{2}/\mu_{m}}{1-\mu_{2}/\mu_{m}},$ (37) where $\lambda_{j}^{u}(S)$ represents the $j$th unstable eigenvalue of $S$. Let $\Gamma=\frac{2}{\mu_{2}+\mu_{m}}\frac{\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}S}{\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}\operatorname{\textbf{1}}_{n}}\in\mathbb{R}^{1\times n},$ (38) where $\mu_{2}$ and $\mu_{m}$ are, respectively, the second smallest and largest eigenvalues of $\mathcal{L}_{\mathcal{G}}$. Moreover, $\mathcal{P}>0$ is the solution to the following modified algebraic Riccati inequality $\mathcal{P}-S^{T}\mathcal{P}S+\left(1-\zeta^{2}\right)\frac{S^{T}\mathcal{P}\operatorname{\textbf{1}}_{n}\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}S}{\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}\operatorname{\textbf{1}}_{n}}>0,$ (39) with $\zeta$ satisfying $\prod_{j}\left|\lambda_{j}^{u}(S)\right|<\zeta^{-1}\leq\frac{1+\mu_{2}/\mu_{m}}{1-\mu_{2}/\mu_{m}}.$ Then for any $j\in\\{2,...,n\\}$, it holds that $\rho(S-\mu_{j}\operatorname{\textbf{1}}_{n}\Gamma)<1.$ (40) ###### Proof. For any $j\in\\{2,...,n\\}$, let us denote $\zeta_{j}=1-2\mu_{j}/(\mu_{2}+\mu_{m})$ and note that $|\zeta_{j}|\leq\zeta$. Since $(S,\operatorname{\textbf{1}}_{n})$ is controllable, there exists some $\mathcal{P}>0$ which solves (39).
Together with (38), it holds that $\begin{split}&(S-\mu_{j}\operatorname{\textbf{1}}_{n}\Gamma)^{T}\mathcal{P}(S-\mu_{j}\operatorname{\textbf{1}}_{n}\Gamma)-\mathcal{P}\\\ =&S^{T}\mathcal{P}S-(1-\zeta_{j}^{2})\frac{S^{T}\mathcal{P}\operatorname{\textbf{1}}_{n}\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}S}{\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}\operatorname{\textbf{1}}_{n}}-\mathcal{P}\\\ \leq&S^{T}\mathcal{P}S-(1-\zeta^{2})\frac{S^{T}\mathcal{P}\operatorname{\textbf{1}}_{n}\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}S}{\operatorname{\textbf{1}}_{n}^{T}\mathcal{P}\operatorname{\textbf{1}}_{n}}-\mathcal{P}<0.\end{split}$ (41) This completes the proof. ∎ ###### Remark 4. Note that, if all the eigenvalues of $S$ lie on or outside the unit circle, You et al. [31] prove that (40) holds if and only if (37) is satisfied. In Lemma 6, we further show that (37) is still a sufficient condition for (40) if $S$ has stable modes. ###### Remark 5. Invoking Remark 1, each $\lambda_{j}^{u}(S)$ corresponds to a root of the characteristic polynomial of $A^{u}$. Thus, the condition (37) can be rewritten using the system matrix $A^{u}$: $\prod_{j}|r_{j}(A^{u})|<\frac{1+\mu_{2}/\mu_{m}}{1-\mu_{2}/\mu_{m}},$ (42) where $r_{j}(A^{u})$ is a root of the characteristic polynomial of $A^{u}$. With the above preparations, we are now ready to analyze the error covariance of the local estimator as below: ###### Theorem 3. Suppose that the Mahler measure of $S$ meets condition (37), and $\Gamma$ is designed based on (38)–(39). With Algorithm 1, the error covariance of each local estimate $\breve{x}_{i}(k)$ is bounded at any instant $k$. ###### Proof. Due to space limitations, the proof is given in Appendix-F. ∎ The proof of Theorem 3 implies that we present a distributed estimation scheme with quantifiable performance. ###### Corollary 1. Suppose that the Mahler measure of $S$ meets condition (37), and $\Gamma$ is designed based on (38)–(39).
Let $\breve{W}$ be the asymptotic error covariance of the local estimates. Namely, $\breve{W}\triangleq\lim_{k\to\infty}\operatorname*{cov}(\breve{e}(k)),$ where $\breve{e}(k)\triangleq\operatorname*{col}[(\breve{x}_{1}(k)-x(k)),\cdots,(\breve{x}_{m}(k)-x(k))]$. By using Algorithm 1, it holds that $\breve{W}=\bar{W}+(\operatorname{\textbf{1}}_{m}\operatorname{\textbf{1}}_{m}^{T})\otimes P,$ (43) where $\bar{W}$ is the asymptotic error covariance between the local estimates and the Kalman estimate, and $P$ is the error covariance of the Kalman filter as defined in (5). Moreover, $\breve{W}$ can be exactly calculated. As seen from the calculation, $\bar{W}$, i.e., the performance gap between our estimator and the optimal Kalman filter, is purely caused by the consensus error. Therefore, if infinite consensus steps are allowed between two consecutive sampling instants, the consensus error vanishes and the performance of the proposed estimator coincides with that of the Kalman filter. Combining Theorems 2 and 3, we conclude that the local estimator at each sensor is stable. Therefore, by applying the algorithm designed for linear system synchronization, i.e., (31), the problem of distributed state estimation is resolved. ###### Remark 6. Note that Algorithm 1 requires each agent to send out an $m$-dimensional vector $\Delta_{i}(k)$ at any time. Therefore, in a network with a large number of sensors, i.e., $n<m$, this solution causes a high communication cost. To address this issue, this remark, by leveraging the reduced-order estimator (27) in Theorem 1, modifies Algorithm 1 to reduce the communication cost. To be specific, we aim to implement the reduced-order system (27) with distributed estimators.
As before, any agent $i$ stores its estimate on all the others in a variable $\vartheta_{i}(k)$, where $\vartheta_{i}(k)\triangleq{\left[\begin{array}[]{c}\vartheta_{i,1}(k)\\\ \vdots\\\ \vartheta_{i,n}(k)\end{array}\right]\in\mathbb{R}^{n^{2}}.}$ (44) Each sensor $i$ is initialized with $\hat{x}_{i}(0)=0$ and $\vartheta_{i}(0)=0$. For the case of $n<m$, the estimation algorithm works as in Algorithm 2. Following similar arguments, the local estimator at each sensor side is proved to be stable. Combining it with Algorithm 1, we conclude that the size of the message sent by each sensor at any time is $\min\\{m,n\\}$. Compared with the existing solutions in distributed estimation, e.g., [12, 13, 14, 15, 16], our algorithm enjoys lower message complexity. ###### Remark 7. Notice that sensor node $i$ has perfect information of its own local estimate $\hat{\xi}_{i}(k)$. Therefore, instead of using $\eta_{i,i}(k)$ to infer $\hat{\xi}_{i}(k)/m$, node $i$ can simply use $\hat{\xi}_{i}(k)/m$ to replace $\eta_{i,i}(k)$ in (32), which potentially improves the performance of the estimators. 1: Using the latest measurement from itself, sensor $i$ computes the local residual and updates the local estimate by $\begin{split}z_{i}(k)=y_{i}(k+1)-\beta^{T}\hat{\xi}_{i}(k),\\\ \hat{\xi}_{i}(k+1)=S\hat{\xi}_{i}(k)+\operatorname{\textbf{1}}_{n}z_{i}(k).\end{split}$ 2: Compute $\Delta_{i}(k)=(I_{n}\otimes\Gamma)\vartheta_{i}(k)$, where $\Gamma$ is calculated by (38). Collect $\Delta_{j}(k)$ from neighbors and fuse the neighboring information with the consensus algorithm as $\begin{split}\vartheta_{i}(k+1)=&(I_{n}\otimes S)\vartheta_{i}(k)+T_{i}z_{i}(k)\\\ &+(I_{n}\otimes\operatorname{\textbf{1}}_{n})\sum_{j=1}^{m}a_{ij}(\Delta_{j}(k)-\Delta_{i}(k)),\end{split}$ (45) where $T_{i}$ is defined in (26).
3: Update the fused estimate on the system state as: $\breve{x}_{i}(k+1)=mH\vartheta_{i}(k+1),$ (46) where $H$ is given in (28). 4: Transmit the new message $\Delta_{i}(k+1)$ to the neighbors. Algorithm 2 Distributed estimation algorithm for sensor $i$ ## V Extensions of Proposed Solutions In the previous sections, we leverage the linear system synchronization algorithm proposed in [31] to solve the problem of distributed state estimation. In this section, we aim to extend such a result and show that any control strategy which can facilitate linear system synchronization can be modified to yield a stable distributed estimator. As a result, we bridge the fields of distributed state estimation and linear system synchronization. Let us consider the synchronization of the following homogeneous LTI system: $\displaystyle\eta_{i}(k+1)$ $\displaystyle=\tilde{S}\eta_{i}(k)+\tilde{B}u_{i}(k),\;\forall i\in\mathcal{V},$ (47) where $u_{i}(k)$ is the control input of agent $i$. In the literature, a large variety of synchronization algorithms have been proposed within the following framework: $\begin{split}&\omega_{i}(k+1)=\mathcal{A}\omega_{i}(k)+\mathcal{B}\eta_{i}(k+1),\\\ &\Delta_{i}(k)=\tilde{\Gamma}\omega_{i}(k),\\\ &u_{i}(k)=\sum_{j=1}^{m}a_{ij}\gamma_{ij}(k)(\Delta_{j}(k)-\Delta_{i}(k)),\end{split}$ (48) where $\omega_{i}(k)$ is the “hidden state” that is necessary for agent $i$ to yield the communication state $\Delta_{i}(k)$ and input $u_{i}(k)$, and $\tilde{\Gamma}$ refers to the control gain. Notice that (48) can be used to model a controller with memory. Moreover, $\gamma_{ij}(k)\in[0,1]$ models the fading or lossy effect of the communication channel from agent $j$ to agent $i$. At every time step, each agent collects the available information in its neighborhood and synthesizes its communication state and control signal via (48). For simplicity, we denote $\mathcal{U}$ as the control strategy that can be represented by (48).
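The template (48) can be made concrete with a small schematic implementation (not from the paper; scalars stand in for the matrices $\mathcal{A}$, $\mathcal{B}$, $\tilde{\Gamma}$, and all numeric values are made up):

```python
# Schematic of the controller template (48): each agent keeps a hidden state
# omega_i, emits a communication state Delta_i = Gamma * omega_i, and forms its
# input from neighbor differences weighted by a_ij and channel gains gamma_ij.

class Agent:
    def __init__(self, A_mem, B_mem, Gamma):
        self.A_mem, self.B_mem, self.Gamma = A_mem, B_mem, Gamma
        self.omega = 0.0

    def update_memory(self, eta_next):
        # omega_i(k+1) = A * omega_i(k) + B * eta_i(k+1)
        self.omega = self.A_mem * self.omega + self.B_mem * eta_next

    def message(self):
        # Delta_i(k) = Gamma * omega_i(k)
        return self.Gamma * self.omega

def control_input(i, agents, a, gamma):
    # u_i(k) = sum_j a_ij * gamma_ij(k) * (Delta_j(k) - Delta_i(k))
    d_i = agents[i].message()
    return sum(a[i][j] * gamma[i][j] * (agents[j].message() - d_i)
               for j in range(len(agents)))

# The memoryless choice A = 0, B = 1 recovers Delta_i = Gamma * eta_i,
# i.e., the static protocol (51) below.
agents = [Agent(0.0, 1.0, 0.5), Agent(0.0, 1.0, 0.5)]
agents[0].update_memory(2.0)     # eta_1 = 2
agents[1].update_memory(-1.0)    # eta_2 = -1
a = [[0, 1], [1, 0]]
gamma = [[1, 1], [1, 1]]         # ideal (lossless) channels
u0 = control_input(0, agents, a, gamma)
print(u0)
```

With the memoryless choice shown, the hidden state simply tracks $\eta_i$, which is why (51) is a special case of (48).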
Let the average of the local states at time $k$ be $\bar{\eta}(k)=\frac{1}{m}\sum_{i=1}^{m}\eta_{i}(k).$ The network of subsystems (47) reaches strong synchronization under $\mathcal{U}$, if the following statements hold at any time: 1. 1. Consistency: the average of local states remains consistent throughout the execution, i.e., $\bar{\eta}(k+1)=\tilde{S}\bar{\eta}(k).$ (49) 2. 2. Exponential Stability: agents exponentially reach consensus in the mean square sense, i.e., there exist $c>0$ and $\rho\in(0,1)$ such that $\mathbb{E}[||\eta_{i}(k)-\bar{\eta}(k)||^{2}]\leq c\rho^{k},\;\forall i\in\mathcal{V}.$ (50) We now review several existing strategies which facilitate strong synchronization and show that they can be represented by (48): 1. 1. Let $\Delta_{i}(k)=\tilde{\Gamma}\eta_{i}(k)$ be the communication state defined in Section IV-A. To facilitate the synchronization of homogeneous linear systems over an undirected communication topology, You et al. [31] design the following control law: $u_{i}(k)=\sum_{j=1}^{m}a_{ij}(\Delta_{j}(k)-\Delta_{i}(k)),$ (51) which coincides with (48). 2. 2. Another example is the filtered consensus protocol given in [34]. By designing the hidden state as $\omega_{i}(k)=F(q)\eta_{i}(k),$ (52) where $q$ is the unit advance operator, i.e., $q^{-1}s(k)=s(k-1)$, and $F(z)$ is the transfer function of a square stable filter, the synchronization of linear systems is achieved by (48) under a more relaxed condition than (37), that is: $\prod_{j}|\lambda_{j}^{u}(S)|<\frac{1+\sqrt{\mu_{2}/\mu_{m}}}{1-\sqrt{\mu_{2}/\mu_{m}}}.$ 3. 3. Instead of focusing on perfect communication channels, the authors in [32] and [33] develop control protocols to account for random failures on communication links and Markovian switching topologies, respectively. By modeling the packet loss with the Bernoulli random variable $\gamma_{ij}(k)\in\\{0,1\\}$, these works complement the results in [31] and prove mean square stability under the control strategy (48).
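The two properties (49) and (50) can be observed directly in a toy simulation of the static protocol (51) (assumed scalar subsystems with $\tilde{S}=1$ and a made-up gain; this is an illustration, not the paper's experiment):

```python
# Toy simulation of protocol (51) on a 4-node ring: the average is preserved
# (consistency (49), with S-tilde = identity) and the disagreement decays
# geometrically (exponential stability (50)).
m = 4
# Ring graph with unit weights (each node talks to its two neighbors).
a = [[1 if abs(i - j) in (1, m - 1) else 0 for j in range(m)] for i in range(m)]
Gamma = 0.2                      # assumed scalar gain
eta = [3.0, -1.0, 0.5, 2.5]

avg0 = sum(eta) / m
disagreements = []
for _ in range(50):
    Delta = [Gamma * e for e in eta]
    eta = [eta[i] + sum(a[i][j] * (Delta[j] - Delta[i]) for j in range(m))
           for i in range(m)]
    avg = sum(eta) / m
    disagreements.append(max(abs(e - avg) for e in eta))

print(abs(sum(eta) / m - avg0) < 1e-12)   # consistency (49)
print(disagreements[-1] < 1e-6)           # exponential consensus (50)
```

The disagreement modes here decay like $|1-\Gamma\mu_j|^k$, which is the scalar analogue of the spectral radius condition (40).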
Notice that Algorithms 1 and 2 utilize (51) for achieving synchronization and producing stable distributed estimators. In what follows, we argue that the optimal Kalman estimate can indeed be distributively implemented using any linear system synchronization algorithm facilitating (49)-(50). To be specific, Algorithm 1 should be modified (similarly, in the case of $n<m$, one can also derive the general form of Algorithm 2 with any linear system synchronization strategy $\mathcal{U}$) by replacing (31) with $\eta_{i}(k+1)=\tilde{S}\eta_{i}(k)+\tilde{B}u_{i}(k)+\tilde{L}_{i}z_{i}(k),$ (53) where $u_{i}(k)$ is generated by $\mathcal{U}$ that facilitates (49)-(50). We then state the stability of the local estimators as below: ###### Theorem 4. Consider any algorithm $\mathcal{U}$ which facilitates the statements (49) and (50). At any time $k$, suppose each $\gamma_{ij}(k)$ is independent of the noise $\\{w(k)\\}$ and $\\{v(k)\\}$. Then (53) yields a stable estimator for each sensor node. Specifically, the following statements hold for any $k\geq 0$: 1. 1. the average of the local estimates from all sensors coincides with the optimal Kalman estimate; 2. 2. the error covariance of each local estimate is bounded. ###### Proof. The proof is given in Appendix-H. ∎ ###### Remark 8. Theorem 4 assumes the independence of the communication topology and the system/measurement noises. Therefore, as for event-based synchronization algorithms, where the communication relies on the agents’ states, we cannot analyze their effectiveness in solving the distributed estimation problem by directly resorting to Theorem 4. In future work, we will continue to investigate this topic. In contrast with Fig. 1, this work, by using the lossless decomposition of the Kalman filter, decouples the local filter from the consensus process, as shown in Fig. 3.
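Statement 1 of Theorem 4 can be illustrated with a toy multi-step simulation of (53) (scalar stand-ins, made-up numbers; a sketch, not the authors' implementation). A memory-based protocol of the form (48) is used here: with symmetric weights and symmetric channel gains, the inputs $u_i$ sum to zero, so the sum of the local states follows the same recursion as the centralized quantity (20):

```python
# Multi-step check of Theorem 4, statement 1: the sum of local states driven by
# (53), under a memory-based protocol (48) with symmetric a_ij and gamma_ij,
# tracks the centralized recursion exactly.
m = 3
a = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]                  # symmetric topology
gamma = [[0, 0.7, 0.4], [0.7, 0, 0.9], [0.4, 0.9, 0]]  # symmetric channel gains
S, Gamma = 0.95, 0.3
L = [0.5, -0.3, 0.8]

eta = [1.0, -2.0, 0.5]
omega = [0.0, 0.0, 0.0]
central = sum(eta)                        # plays the role of sum_i eta_i

for k in range(20):
    z = [0.1 * k, -0.2, 0.05 * k]         # arbitrary residual sequence
    Delta = [Gamma * w for w in omega]
    u = [sum(a[i][j] * gamma[i][j] * (Delta[j] - Delta[i]) for j in range(m))
         for i in range(m)]               # inputs sum to zero by symmetry
    eta = [S * eta[i] + u[i] + L[i] * z[i] for i in range(m)]
    omega = [0.5 * omega[i] + eta[i] for i in range(m)]  # hidden state of (48)
    central = S * central + sum(L[i] * z[i] for i in range(m))

print(abs(sum(eta) - central) < 1e-9)
```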
The decoupling enables us to leverage the rich results in linear system synchronization to analyze the performance of local estimators, as proved in Theorem 4. Moreover, following similar arguments to the proof of Theorem 3, we can show that with our framework, the error covariance of each local estimate actually consists of two orthogonal parts: the inherent estimation error of the Kalman filter and the distance from the local estimate to the Kalman estimate, namely: $\displaystyle\operatorname*{cov}(\breve{e}_{i}(k))=\operatorname*{cov}(\breve{x}_{i}(k)-x(k))$ $\displaystyle=$ $\displaystyle\operatorname*{cov}(\breve{x}_{i}(k)-\hat{x}(k)+\hat{x}(k)-x(k))$ $\displaystyle=$ $\displaystyle\operatorname*{cov}(\breve{x}_{i}(k)-\hat{x}(k))+\operatorname*{cov}(\hat{x}(k)-x(k))$ $\displaystyle=$ $\displaystyle\operatorname*{cov}\Big{(}\breve{x}_{i}(k)-\frac{1}{m}\sum_{i=1}^{m}\breve{x}_{i}(k)\Big{)}+\operatorname*{cov}(\hat{x}(k)-x(k))$ $\displaystyle=$ $\displaystyle m^{2}F\operatorname*{cov}\Big{(}\eta_{i}(k)-\frac{1}{m}\sum_{i=1}^{m}\eta_{i}(k)\Big{)}F^{T}+\operatorname*{cov}(\hat{x}(k)-x(k)),$ where the third equality holds due to the optimality of the Kalman filter, and the last equality holds by (32). Notice that the second term on the RHS is the error covariance of the Kalman filter, while the first term, the error between the local estimate and the Kalman estimate, is purely determined by the consensus process. Therefore, by choosing a proper strategy $\mathcal{U}$, extensive results on achieving strong synchronization can be applied to (53) to deal with the consensus error in various settings, such as directed graphs, time-varying topologies, etc. Particularly, if infinite consensus steps are allowed between two consecutive sampling instants, the consensus error vanishes, i.e., $\eta_{i}(k)-\frac{1}{m}\sum_{i=1}^{m}\eta_{i}(k)=0$, and the performance of the proposed estimator is optimal since it coincides with that of the Kalman filter. That means global optimality can be guaranteed.
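The orthogonality argument above rests on the Pythagoras-style identity that the covariance of a sum of uncorrelated error components splits. A finite-sample illustration (synthetic numbers, not the paper's data):

```python
# When two error components are uncorrelated, cov(a + b) = cov(a) + cov(b) --
# the scalar analogue of how cov(x_i - x) separates into the consensus error
# and the Kalman error in the derivation above.

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

a = [1.0, -1.0, 1.0, -1.0]     # stands in for x_i - x_hat (consensus error)
b = [1.0, 1.0, -1.0, -1.0]     # stands in for x_hat - x (Kalman error)
assert cov(a, b) == 0.0        # the two components are uncorrelated

total = var([ai + bi for ai, bi in zip(a, b)])
print(total, var(a) + var(b))
```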
## VI Numerical Example In this section, we present numerical examples to verify the theoretical results obtained in previous sections. ### VI-A Numerical example when $n<m$ Let us consider the case where four sensors cooperatively estimate the system state. The system parameters are listed below: $\begin{split}&A=\begin{bmatrix}0.9&0\\\ 0&1.1\end{bmatrix},\;C=\begin{bmatrix}1&0&1&1\\\ 0&1&1&-1\end{bmatrix}^{T},\\\ &Q=0.25I_{2},\;R=4I_{4}.\end{split}$ (54) In this example, the number of states is smaller than that of sensors, i.e., $n<m$. We therefore choose Algorithm 2. Moreover, notice that the system is unstable, and sensor $1$ cannot observe the unstable state. Suppose that the topology of these four sensors is a ring with weight $1$ for each edge. The Laplacian matrix is thus: $\mathcal{L_{G}}=\begin{bmatrix}2&-1&0&-1\\\ -1&2&-1&0\\\ 0&-1&2&-1\\\ -1&0&-1&2\end{bmatrix}.$ (55) It is not difficult to check that the second smallest and the largest eigenvalues of $\mathcal{L_{G}}$ are respectively $\mu_{2}=2$, $\mu_{4}=4$. To fulfill the sufficient condition in Lemma 6, let us choose $\zeta=0.5$. We set the initial state $x(0)\sim\mathcal{N}(0,I)$ and the initial local estimate $\breve{x}_{i}(0)=0$ for each sensor $i\in\\{1,2,3,4\\}$. It can be seen that the mean squared local estimation error $e_{i}(k)$ enters steady state and is stable after a few steps (see Fig. 4). Figure 4: Average mean square estimation error of system states under the Kalman filter and the local estimators in 10000 experiments. ### VI-B Numerical example when $n>m$ In the second example, we simulate the heat transfer process (state estimation in diffusion processes has wide applications in sensor networks, e.g., urban CO2 emission monitoring [39] and temperature monitoring in data centers [40])
in a planar closed region discussed in [41] and [42]: $\frac{\partial u}{\partial t}=\alpha\Big{(}\frac{\partial^{2}u}{\partial x_{1}^{2}}+\frac{\partial^{2}u}{\partial x_{2}^{2}}\Big{)},$ (56) with boundary conditions $\frac{\partial u}{\partial x_{1}}\Big{|}_{t,0,x_{2}}=\frac{\partial u}{\partial x_{1}}\Big{|}_{t,l,x_{2}}=\frac{\partial u}{\partial x_{2}}\Big{|}_{t,x_{1},0}=\frac{\partial u}{\partial x_{2}}\Big{|}_{t,x_{1},l}=0,$ (57) where $x_{1}$ and $x_{2}$ are the coordinates in the region; $u(t,x_{1},x_{2})$ indicates the temperature at time $t$ at position $(x_{1},x_{2})$, $l$ is the side length of the square region and $\alpha$ adjusts the speed of the diffusion process. With an $N\times N$ grid and a sampling frequency of $1$ Hz, the diffusion process can be discretized as: $\begin{split}u&(k+1,i,j)-u(k,i,j)=\frac{\alpha}{h^{2}}[u(k,i-1,j)+u(k,i,j-1)\\\ &+u(k,i+1,j)+u(k,i,j+1)-4u(k,i,j)],\end{split}$ (58) where $h=\frac{l}{N-1}$ denotes the size of each grid cell and $u(k,i,j)$ indicates the temperature at time $k$ at location $(ih,jh)$. By collecting all the temperature values of each grid cell, we define the state variable $U(k)=[u(k,0,0),\cdots,u(k,0,N-1),u(k,1,0),\cdots,u(k,N-1,N-1)]^{T}$. Further, by introducing process noise into (58), one derives the following system equation: $U(k+1)=AU(k)+w(k),$ (59) where $w(k)\sim\mathcal{N}(0,Q)$ is Gaussian noise. As shown in Fig. 5, $m$ sensors are randomly deployed in this region to monitor the temperature, where the measurement of each sensor is a linear combination of the temperatures of the grid cells around it. Specifically, suppose the location of sensor $s$ is $(\hat{x}_{1},\hat{x}_{2})$ such that $\hat{x}_{1}\in[i,i+1)$, $\hat{x}_{2}\in[j,j+1)$; we define $\Delta\hat{x}_{1}=\hat{x}_{1}-i$ and $\Delta\hat{x}_{2}=\hat{x}_{2}-j$.
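The discretization (58) can be assembled into the state matrix $A$ of (59). The sketch below (an illustration, not the paper's code) uses one standard way to impose the zero-flux boundary conditions (57): a missing neighbor simply contributes no exchange term, so heat is conserved and every row of $A$ sums to 1. The constants follow the values given later in this section ($\alpha=0.2$, $N=5$, $h=1$):

```python
# Assemble the state matrix A of the discretized diffusion (58)-(59) on an
# N x N grid; zero-flux boundaries (57) are imposed by dropping exchange terms
# for out-of-grid neighbors, which preserves the total heat (row sums = 1).
alpha, N, h = 0.2, 5, 1.0
c = alpha / h ** 2

def idx(i, j):
    return i * N + j

n = N * N
A = [[0.0] * n for _ in range(n)]
for i in range(N):
    for j in range(N):
        A[idx(i, j)][idx(i, j)] = 1.0
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:      # interior neighbor exchange
                A[idx(i, j)][idx(ni, nj)] += c
                A[idx(i, j)][idx(i, j)] -= c
            # else: boundary face has zero flux, no exchange term

row_sums = [sum(row) for row in A]
print(all(abs(s - 1.0) < 1e-12 for s in row_sums))
```

For an interior cell this reproduces (58) exactly: diagonal $1-4\alpha/h^2$ and four neighbor weights $\alpha/h^2$.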
We assume that the measurement of sensor $s$ at time $k$ is $\begin{split}y_{s}(k)=&\frac{1}{h^{2}}\big{[}(1-\Delta\hat{x}_{1})(1-\Delta\hat{x}_{2})u(k,i,j)\\\ &+\Delta\hat{x}_{1}(1-\Delta\hat{x}_{2})u(k,i+1,j)\\\ &+(1-\Delta\hat{x}_{1})\Delta\hat{x}_{2}u(k,i,j+1)\\\ &+\Delta\hat{x}_{1}\Delta\hat{x}_{2}u(k,i+1,j+1)\big{]}+v_{s}(k).\end{split}$ (60) We collect the measurements of all sensors at time $k$ and denote them as $Y(k)$; then it follows $Y(k)=CU(k)+v(k),$ (61) where $v(k)\sim\mathcal{N}(0,R)$ is the measurement noise and the measurement matrix $C$ can be derived from (60). The parameters for the simulation are listed below: * • $\alpha=0.2$; * • $l=4$ and $N=5$, thus the grid size $h=1$; * • $n=N^{2}=25$ and $m=15$. Therefore, $n>m$, which is different from our first example; * • $Q=0.2I_{25}$ and $R=3I_{15}$. As discussed in Remark 7, we replace $\eta_{i,i}(k)$ with the estimates given by the local Kalman filters. The results are shown in Fig. 5. Our algorithm achieves better performance compared with the local Kalman filters, which merely use the measurements of the sensor itself. The improvement of each sensor can be found in TABLE I. Specifically, for each sensor $i$, we respectively define the performance of the local Kalman filter and our algorithm in terms of: $\varrho_{i1}\triangleq\frac{\operatorname*{tr}(\hat{P}_{i})}{\operatorname*{tr}(P)},\varrho_{i2}\triangleq\frac{\operatorname*{tr}(\breve{P}_{i})}{\operatorname*{tr}(P)},$ (62) where $\hat{P}_{i}$, $\breve{P}_{i}$ and $P$ are the steady-state error covariances of the local Kalman filter, our estimator and the centralized Kalman filter, respectively. We see that the proposed scheme outperforms the local Kalman filter by at least $50\%$ for each sensor. Figure 5: (a) The position and topology of $m$ sensors in the $N\times N$ grid lines; (b) The estimate variance of the centralized Kalman filter; (c) The estimate variance of the local Kalman filter; (d) The estimate variance of our estimators in 10000 experiments.
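The bilinear interpolation weights in (60) are easy to compute for any sensor position; the sketch below (hypothetical helper, not from the paper) builds the four weights for a sensor inside the cell $[i,i+1)\times[j,j+1)$ and shows they form a convex combination (scaled by $1/h^2$; here $h=1$ as in the example):

```python
# Bilinear interpolation weights of the measurement model (60): a sensor at
# (x1, x2) reads a weighted combination of the four surrounding grid cells.
import math

def measurement_weights(x1, x2, h=1.0):
    """Return {(grid_i, grid_j): weight} for a sensor at (x1, x2)."""
    i, j = math.floor(x1), math.floor(x2)
    d1, d2 = x1 - i, x2 - j                     # Delta x1-hat, Delta x2-hat
    return {(i,     j):     (1 - d1) * (1 - d2) / h**2,
            (i + 1, j):     d1 * (1 - d2) / h**2,
            (i,     j + 1): (1 - d1) * d2 / h**2,
            (i + 1, j + 1): d1 * d2 / h**2}

w = measurement_weights(2.25, 3.5)
print(sum(w.values()))                          # weights sum to 1/h^2
print(measurement_weights(2.0, 3.0)[(2, 3)])    # on a grid point: all weight there
```

Stacking one such weight row per sensor yields the measurement matrix $C$ of (61).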
TABLE I: Performance Improvement in Comparison with the Local Kalman Filter Sensor index $i$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 ---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- Local KF performance $\varrho_{i1}$ | 1.94 | 1.94 | 1.96 | 1.96 | 1.94 | 1.93 | 1.97 | 1.94 | 1.95 | 1.94 | 1.92 | 1.94 | 1.95 | 1.94 | 1.95 Our estimator performance $\varrho_{i2}$ | 1.26 | 1.35 | 1.31 | 1.31 | 1.26 | 1.13 | 1.22 | 1.21 | 1.23 | 1.44 | 1.12 | 1.22 | 1.18 | 1.35 | 1.18 Improvement $\varrho_{i1}-\varrho_{i2}$ | 68% | 59% | 65% | 65% | 68% | 80% | 75% | 73% | 71% | 50% | 80% | 72% | 76% | 59% | 77% ### VI-C Comparison with existing algorithms We further compare the performance of Algorithm 1 with those of existing algorithms: 1) the centralized Kalman filter (CKF), 2) KCF2009 [13], and 3) CMKF2018 [43], through a numerical example on an inverted pendulum. Notice that an inverted pendulum has $n=4$ states: $x=[p;\ \dot{p};\ \theta;\ \dot{\theta}]$, namely, the cart position, cart velocity, pendulum angle from vertical and pendulum angular velocity, respectively. We consider the system linearized at $\theta=\dot{\theta}=0$ and discretized with sampling interval $T=0.01$ s, where the detailed system equation can be found in [44], with system noise $w(k)\sim\mathcal{N}(0,0.05^{2}I_{n})$. In the example, $m=4$ sensors are connected as a ring to infer the system state. Let the measurement equation be $\displaystyle y(k)$ $\displaystyle=\left[\begin{array}[]{cccc}1&0&0&0\\\ 1&0&0&0\\\ 1&0&0&0\\\ 0&0&1&0\end{array}\right]x(k)+v(k),$ (63) where $v(k)\sim\mathcal{N}(0,0.3^{2}I_{m})$. Notice that sensor $4$ cannot fully observe the state space. Fig. 6 illustrates the mean square error (MSE) of its estimates of $p$ and $\theta$, respectively. The results show that our algorithm yields better estimation performance. Figure 6: Comparison of the mean square error of the estimates provided by different algorithms in 10000 experiments.
### VI-D Experiment when the global knowledge on the system matrix is unavailable Finally, notice that the proposed distributed estimator is based on a lossless decomposition of the Kalman filter as developed in Section III, which requires global knowledge of 1) the system matrix $A$, 2) the measurement matrix $C$, and 3) the noise covariance matrices $Q$ and $R$. In the case that certain parts of $A$, $C$, $Q$ and $R$ are unknown, before running Algorithm 1 or 2, each sensor can broadcast its local parameters. In this way, every sensor can obtain the system parameters it needs within finitely many steps. To quantify the overhead incurred by this initialization, i.e., broadcasting the parameters, in the third example, we conduct an experiment using $m=15$ Raspberry Pis equipped with temperature sensors which run the proposed distributed estimation algorithm every minute. In our experiment, it is assumed that the sensors do not have global information on $C$ and $R$. Thus, each of them broadcasts its $C_{i}$ and $R_{i}$ at the starting phase so that every sensor can obtain the system parameters it needs. The mean traffic of a sensor with $3$ neighbors is shown in Fig. 7. It turns out that, compared with the centralized Kalman filter, our solution induces a lower communication burden even with the additional effort of the initial broadcast. Obviously, the merits become more apparent with the increasing scale of the sensor network. Figure 7: Mean network traffic v.s. time. ## VII Conclusion In this paper, the problem of distributed state estimation has been studied for an LTI Gaussian system. We investigate both cases where $m>n$ and $m\leq n$, and propose distributed estimators for both cases with low communication cost. The local estimator is proved to be stable at each sensor side, in the sense that the covariance of the estimation error is bounded, and the asymptotic error covariance can also be derived.
Our major contribution lies in reformulating the problem of distributed estimation as that of linear system synchronization. ## Appendix A Proof of Lemma 1 We prove this by contradiction. If $(X^{T}+qp^{T},q)$ is not controllable, then we can find some $s$ such that the rank of $\begin{bmatrix}X^{T}+qp^{T}-sI&q\end{bmatrix}$ is strictly less than $n$. Therefore, there exists a non-zero $v$, such that $v^{T}\begin{bmatrix}X^{T}+qp^{T}-sI&q\end{bmatrix}=0,$ which implies that $(X+pq^{T})v-sv=0,\,q^{T}v=0.$ Therefore $(X+pq^{T})v-sv=0$ and $Xv-sv=0$, implying that $s$ is an eigenvalue of both $X$ and $X+pq^{T}$, which contradicts the assumption that $X$ and $X+pq^{T}$ do not share eigenvalues. We thus complete the proof. ## Appendix B Proof of Lemma 2 We will prove this lemma by construction. Towards this end, let us consider the following equation: $T[p,Xp,\cdots,X^{n-1}p]=TR_{X}=[q,Yq,\cdots,Y^{n-1}q]=R_{Y},$ (64) where $R_{X}=[p,Xp,\cdots,X^{n-1}p]$ and $R_{Y}=[q,Yq,\cdots,Y^{n-1}q]$. Since $(X,p)$ is controllable, $R_{X}$ is full rank and thus invertible, and $T=R_{Y}R_{X}^{-1}$ solves (64). Clearly $Tp=q$. In what follows, we shall prove that $TX=YT$. To this end, let us denote the characteristic polynomial of $X$ as $\varphi(s)=s^{n}+\alpha_{n-1}s^{n-1}+\ldots+\alpha_{0}$. It is noted that $\begin{split}TX^{n}p&=T(-\alpha_{n-1}X^{n-1}-\alpha_{n-2}X^{n-2}-\cdots-\alpha_{0}I)p\\\ &=(-\alpha_{n-1}Y^{n-1}q-\alpha_{n-2}Y^{n-2}q-\cdots-\alpha_{0}q)=Y^{n}q,\end{split}$ (65) where the first and last equalities are due to the Cayley–Hamilton theorem and the second equality follows from the fact that $TR_{X}=R_{Y}$. As a result $TXR_{X}=T[Xp,\cdots,X^{n}p]=[Yq,\cdots,Y^{n}q]=YR_{Y}.$ Hence, $TX=YR_{Y}R_{X}^{-1}=YT$, which finishes the proof.
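The construction in the proof of Lemma 2 is easy to verify numerically. Below is a $2\times 2$ sketch with made-up matrices $X$, $Y$ sharing the same characteristic polynomial and with $(X,p)$, $(Y,q)$ controllable: $T=R_{Y}R_{X}^{-1}$ indeed maps $p$ to $q$ and intertwines $X$ and $Y$.

```python
# Verify the proof of Lemma 2: T = R_Y * R_X^{-1} satisfies Tp = q and TX = YT.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

X = [[0, 1],
     [2, 1]]          # characteristic polynomial s^2 - s - 2
p = [[0], [1]]
Y = [[0, 2],
     [1, 1]]          # = X^T, same characteristic polynomial
q = [[1], [0]]

# Controllability matrices R_X = [p, Xp] and R_Y = [q, Yq].
Xp, Yq = matmul(X, p), matmul(Y, q)
R_X = [[p[0][0], Xp[0][0]], [p[1][0], Xp[1][0]]]
R_Y = [[q[0][0], Yq[0][0]], [q[1][0], Yq[1][0]]]

# Explicit 2 x 2 inverse of R_X.
det = R_X[0][0] * R_X[1][1] - R_X[0][1] * R_X[1][0]
R_X_inv = [[ R_X[1][1] / det, -R_X[0][1] / det],
           [-R_X[1][0] / det,  R_X[0][0] / det]]

T = matmul(R_Y, R_X_inv)
print(matmul(T, p))                    # should equal q
print(matmul(T, X) == matmul(Y, T))    # the intertwining relation T X = Y T
```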
## Appendix C Proof of Lemma 4 1) From (20), it is easy to verify that $\begin{split}&S\hat{\xi}_{i}(k)+\operatorname{\textbf{1}}_{n}z_{i}(k)\\\ =&(\Lambda+\operatorname{\textbf{1}}_{n}\beta^{T})\hat{\xi}_{i}(k)+\operatorname{\textbf{1}}_{n}[y_{i}(k+1)-\beta^{T}\hat{\xi}_{i}(k)]\\\ =&\Lambda\hat{\xi}_{i}(k)+\operatorname{\textbf{1}}_{n}y_{i}(k+1).\end{split}$ (66) As a result, the local filter (20) has the same input-output relationship as (15). 2) By Lemma 2, we know that for any $i\in\mathcal{V}$, we can find $G_{i}^{u}\in\mathbb{R}^{n\times n^{u}}$, such that $(G_{i}^{u})^{T}S^{T}=\left(A^{u}\right)^{T}(G_{i}^{u})^{T},\,(G_{i}^{u})^{T}\beta=(C_{i}^{u}A^{u})^{T},$ which implies that $\displaystyle G_{i}^{u}A^{u}-\mathbf{1}_{n}C_{i}^{u}A^{u}$ $\displaystyle=SG_{i}^{u}-\operatorname{\textbf{1}}_{n}\beta^{T}G_{i}^{u}$ (67) $\displaystyle=(\Lambda+\operatorname{\textbf{1}}_{n}\beta^{T})G_{i}^{u}-\operatorname{\textbf{1}}_{n}\beta^{T}G_{i}^{u}=\Lambda G_{i}^{u},$ $\displaystyle\beta^{T}G_{i}^{u}$ $\displaystyle=C_{i}^{u}A^{u}.$ Furthermore, $\displaystyle\begin{bmatrix}G_{i}^{u}&0\end{bmatrix}A-\mathbf{1}_{n}C_{i}A$ $\displaystyle=\begin{bmatrix}G_{i}^{u}A^{u}&0\end{bmatrix}-\operatorname{\textbf{1}}_{n}\begin{bmatrix}C_{i}^{u}A^{u}&C_{i}^{s}A^{s}\end{bmatrix}$ (68) $\displaystyle=\Lambda\begin{bmatrix}G_{i}^{u}&0\end{bmatrix}-\mathbf{1}_{n}\begin{bmatrix}0&C_{i}^{s}A^{s}\end{bmatrix},$ $\displaystyle\beta^{T}\begin{bmatrix}G_{i}^{u}&0\end{bmatrix}$ $\displaystyle=\begin{bmatrix}C_{i}^{u}A^{u}&0\end{bmatrix}=C_{i}A-\begin{bmatrix}0&C_{i}^{s}A^{s}\end{bmatrix},$ where $A$ and $C_{i}$ are given in (8) and (10), respectively.
For simplicity, we denote $G_{i}\triangleq\begin{bmatrix}G_{i}^{u}&0\end{bmatrix}\in\mathbb{R}^{n\times n}.$ (69) Moreover, let $\epsilon_{i}(k)\triangleq G_{i}x(k)-\hat{\xi}_{i}(k).$ (70) It follows from (15) that $\begin{split}&\quad\;\;\epsilon_{i}(k+1)=G_{i}x(k+1)-\hat{\xi}_{i}(k+1)\\\ &=G_{i}Ax(k)+G_{i}w(k)-\Lambda\hat{\xi}_{i}(k)-\operatorname{\textbf{1}}_{n}y_{i}(k+1)\\\ &=(G_{i}-\operatorname{\textbf{1}}_{n}C_{i})Ax(k)-\Lambda\hat{\xi}_{i}(k)+(G_{i}-\operatorname{\textbf{1}}_{n}C_{i})w(k)\\\ &\quad-\operatorname{\textbf{1}}_{n}v_{i}(k+1)\\\ &=\big{(}\Lambda G_{i}-\operatorname{\textbf{1}}_{n}\begin{bmatrix}0&C_{i}^{s}A^{s}\end{bmatrix}\big{)}x(k)-\Lambda\hat{\xi}_{i}(k)+(G_{i}-\operatorname{\textbf{1}}_{n}C_{i})w(k)\\\ &\quad-\operatorname{\textbf{1}}_{n}v_{i}(k+1)\\\ &=\Lambda\epsilon_{i}(k)-\operatorname{\textbf{1}}_{n}C_{i}^{s}A^{s}x^{s}(k)+(G_{i}-\operatorname{\textbf{1}}_{n}C_{i})w(k)-\operatorname{\textbf{1}}_{n}v_{i}(k+1),\end{split}$ (71) where the second to last equality holds by (68). Due to the fact that $\Lambda$ and $A^{s}$ are stable, we conclude that $\epsilon_{i}(k)$ is stable, i.e., $\operatorname*{cov}(\epsilon_{i}(k))$ is bounded. One thus has $\begin{split}z_{i}(k)&=y_{i}(k+1)-\beta^{T}\hat{\xi}_{i}(k)\\\ &=y_{i}(k+1)-\beta^{T}(G_{i}x(k)-\epsilon_{i}(k))\\\ &=C_{i}(Ax(k)+w(k))+v_{i}(k+1)+\beta^{T}\epsilon_{i}(k)\\\ &\qquad-(C_{i}A-\begin{bmatrix}0&C_{i}^{s}A^{s}\end{bmatrix})x(k)\\\ &=\beta^{T}\epsilon_{i}(k)+C_{i}^{s}A^{s}x^{s}(k)+C_{i}w(k)+v_{i}(k+1).\end{split}$ (72) As proved in (71), $\operatorname*{cov}(\epsilon_{i}(k))$ is bounded. Moreover, it follows from (9) that $C_{i}^{s}A^{s}x^{s}(k)$ is a linear combination of the stable parts in $x(k)$. Also, the covariances of $w(k)$ and $v_{i}(k+1)$ are bounded by $Q$ and $R_{i}$, respectively. We thus conclude that $z_{i}(k)$ is stable, i.e., the covariance of $z_{i}(k)$ is always bounded. ## Appendix D Proof of Lemma 5 For the proof of Lemma 5, we need the following result: ###### Lemma 7.
Given any vector $w\in\mathbb{R}^{n}$, suppose $(S^{T},v)$ is controllable. Then there exists a polynomial $\varphi$ of degree at most $n-1$ such that $w$ can be decomposed as $w^{T}=v^{T}\varphi(S).$ (73) ###### Proof. Suppose $\varphi(S)=\alpha_{0}I+\alpha_{1}S+\cdots+\alpha_{n-1}S^{n-1}$. We thus rewrite (73) as $w=\begin{bmatrix}v&S^{T}v&\cdots&\left(S^{n-1}\right)^{T}v\end{bmatrix}\begin{bmatrix}\alpha_{0}\\\ \vdots\\\ \alpha_{n-1}\end{bmatrix}.$ (74) Since $(S^{T},v)$ is controllable, the first matrix on the RHS of the equation has full column rank $n$, and hence the above equation is always solvable. This completes the proof. ∎ Now we are ready to prove Lemma 5. Notice that any matrix $W$ can be decomposed as $W=\begin{bmatrix}w_{1}^{T}\\\ \vdots\\\ w_{n}^{T}\end{bmatrix}=e_{1}w_{1}^{T}+e_{2}w_{2}^{T}+\cdots+e_{n}w_{n}^{T}.$ (75) Since $(S^{T},\beta)$ is controllable, (24) can be concluded by applying Lemma 7 to (75). ## Appendix E Proof of Theorem 1 To begin with, we note that the following relation holds true at any $k\geq 0$: $F_{i}S^{k}=\Big{[}\sum_{j=1}^{n}H_{j}p_{ij}(S)\Big{]}S^{k}=\sum_{j=1}^{n}H_{j}S^{k}p_{ij}(S),$ (76) where the last equality holds since $S$ commutes with any polynomial of itself. Then let us consider the output of system (22): $\displaystyle\hat{x}(k+1)$ $\displaystyle=\sum_{t=0}^{k}F(I_{m}\otimes S)^{t}(I_{m}\otimes\textbf{1}_{n})z(k-t)$ (77) $\displaystyle=\sum_{t=0}^{k}\Big{(}\sum_{i=1}^{m}F_{i}S^{t}\textbf{1}_{n}z_{i}(k-t)\Big{)}$ $\displaystyle=\sum_{t=0}^{k}\Big{(}\sum_{i=1}^{m}\sum_{j=1}^{n}H_{j}S^{t}p_{ij}(S)\textbf{1}_{n}z_{i}(k-t)\Big{)}$ $\displaystyle=\sum_{t=0}^{k}\Big{(}\sum_{j=1}^{n}H_{j}S^{t}\big{[}\sum_{i=1}^{m}p_{ij}(S)\textbf{1}_{n}z_{i}(k-t)\big{]}\Big{)}$ $\displaystyle=\sum_{t=0}^{k}H(I_{n}\otimes S)^{t}Tz(k-t)=\tilde{x}(k+1).$ Notice that (27) has $z(k)$ as its input and $\tilde{x}(k)$ as its output. As proved in (77), given any $z(k)$, (22) and (27) yield the same output, i.e., $\tilde{x}(k)=\hat{x}(k)$.
Hence, we conclude that the two systems have identical transfer functions. The proof is thus completed. ## Appendix F Proof of Theorem 3 For simplicity, we first define aggregated vectors and matrices as below: $\begin{split}\eta(k)&\triangleq\begin{bmatrix}\eta_{1}(k)\\\ \vdots\\\ \eta_{m}(k)\end{bmatrix},\hat{\xi}(k)\triangleq\begin{bmatrix}\hat{\xi}_{1}(k)\\\ \vdots\\\ \hat{\xi}_{m}(k)\end{bmatrix},\\\ L_{\eta}&\triangleq\begin{bmatrix}\tilde{L}_{1}&&\\\ &\ddots&\\\ &&\tilde{L}_{m}\end{bmatrix}.\\\ \end{split}$ (78) Then, we can rewrite (31) in matrix form as: $\begin{split}&\eta(k+1)\\\ =&(I_{m}\otimes\tilde{S})\eta(k)+L_{\eta}z(k)-[I_{m}\otimes(\tilde{B}\tilde{\Gamma})](\mathcal{L}_{\mathcal{G}}\otimes I_{n})\eta(k)\\\ =&[I_{m}\otimes\tilde{S}-\mathcal{L}_{\mathcal{G}}\otimes(\tilde{B}\tilde{\Gamma})]\eta(k)+L_{\eta}z(k).\end{split}$ (79) Next, let us denote the average state of all agents as $\bar{\eta}(k)\triangleq\frac{1}{m}\sum_{i=1}^{m}\eta_{i}(k)=\frac{1}{m}(\operatorname{\textbf{1}}_{m}^{T}\otimes I_{mn})\eta(k).$ (80) Since $\operatorname{\textbf{1}}_{m}^{T}\mathcal{L}_{\mathcal{G}}=0$, it holds that $\begin{split}&\bar{\eta}(k+1)\\\ =&\frac{1}{m}(\operatorname{\textbf{1}}_{m}^{T}\otimes I_{mn})\Big{(}[I_{m}\otimes\tilde{S}-\mathcal{L}_{\mathcal{G}}\otimes(\tilde{B}\tilde{\Gamma})]\eta(k)+L_{\eta}z(k)\Big{)}\\\ =&\tilde{S}\bar{\eta}(k)+\frac{1}{m}(\operatorname{\textbf{1}}_{m}^{T}\otimes I_{mn})L_{\eta}z(k).\end{split}$ (81) Furthermore, we define the state deviation of each sensor as $\delta_{i}(k)\triangleq\eta_{i}(k)-\bar{\eta}(k)$ and then stack them into an aggregated vector $\delta(k)\triangleq\operatorname*{col}(\delta_{1}(k),\cdots,\delta_{m}(k))$.
Combining (79) and (81) yields the dynamic equation of $\delta(k)$: $\begin{split}\delta(k+1)&=[I_{m}\otimes\tilde{S}-\mathcal{L}_{\mathcal{G}}\otimes(\tilde{B}\tilde{\Gamma})]\delta(k)+L_{\delta}z(k),\end{split}$ (82) where $L_{\delta}\triangleq[(I_{m}-\frac{1}{m}\operatorname{\textbf{1}}_{m}\operatorname{\textbf{1}}_{m}^{T})\otimes I_{mn}]L_{\eta}.$ (83) Recall that the Laplacian matrix of an undirected graph is symmetric. Therefore, we can always find a unitary matrix $\Phi\triangleq[\frac{1}{\sqrt{m}}\operatorname{\textbf{1}}_{m},\phi_{2},\cdots,\phi_{m}]$, such that $\mathcal{L}_{\mathcal{G}}$ is diagonalized as $\diag(0,\mu_{2},\cdots,\mu_{m})=\Phi^{T}\mathcal{L}_{\mathcal{G}}\Phi.$ (84) Using properties of the Kronecker product yields $\begin{split}(\Phi\otimes I_{mn})^{T}[I_{m}\otimes\tilde{S}-\mathcal{L}_{\mathcal{G}}\otimes(\tilde{B}\tilde{\Gamma})](\Phi\otimes I_{mn})\\\ =\diag(\tilde{S},\tilde{S}-\mu_{2}\tilde{B}\tilde{\Gamma},\cdots,\tilde{S}-\mu_{m}\tilde{B}\tilde{\Gamma}).\end{split}$ (85) Denote $\tilde{\delta}(k)\triangleq(\Phi\otimes I_{mn})^{T}\delta(k).$ (86) One has $\begin{split}\tilde{\delta}(k+1)=A_{\tilde{\delta}}\tilde{\delta}(k)+L_{\tilde{\delta}}z(k),\end{split}$ (87) where $A_{\tilde{\delta}}\triangleq\diag(\tilde{S},\tilde{S}-\mu_{2}\tilde{B}\tilde{\Gamma},\cdots,\tilde{S}-\mu_{m}\tilde{B}\tilde{\Gamma})$ and $L_{\tilde{\delta}}\triangleq[(\Phi^{T}-\frac{1}{m}\Phi^{T}\operatorname{\textbf{1}}_{m}\operatorname{\textbf{1}}_{m}^{T})\otimes I_{mn}]L_{\eta}$. We next study the stability of the above system.
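The block-diagonalization (85) is a pure Kronecker-product identity and can be verified numerically. The sketch below uses the Laplacian of a 3-node path graph and random stand-ins for $\tilde{S}$ and $\tilde{B}\tilde{\Gamma}$; these are illustrative assumptions, not matrices from the paper.

```python
import numpy as np

# Check the identity (85): an orthogonal Phi diagonalizing the Laplacian L
# block-diagonalizes I (x) S - L (x) (B Gamma).
m, p = 3, 2
# Laplacian of an undirected path graph on m = 3 nodes (symmetric).
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
mu, Phi = np.linalg.eigh(L)            # mu[0] = 0, columns of Phi orthonormal

rng = np.random.default_rng(1)
S_t = rng.standard_normal((p, p))      # stands in for S-tilde
BG = rng.standard_normal((p, p))       # stands in for B-tilde Gamma-tilde

A = np.kron(np.eye(m), S_t) - np.kron(L, BG)
T = np.kron(Phi, np.eye(p)).T @ A @ np.kron(Phi, np.eye(p))

# Expected block diagonal: diag(S - mu_1 BG, ..., S - mu_m BG), with mu_1 = 0.
expected = np.zeros_like(T)
for i in range(m):
    expected[i * p:(i + 1) * p, i * p:(i + 1) * p] = S_t - mu[i] * BG
assert np.allclose(T, expected)
assert abs(mu[0]) < 1e-9               # smallest Laplacian eigenvalue is 0
```

The identity holds for any orthogonal $\Phi$ diagonalizing $L$, since $(\Phi^{T}\otimes I)(L\otimes BG)(\Phi\otimes I)=(\Phi^{T}L\Phi)\otimes BG$.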
To proceed, let us partition the state into two parts, i.e., $\tilde{\delta}(k)=[\tilde{\delta}^{T}_{1}(k),\tilde{\delta}^{T}_{2}(k)]^{T}$, where $\tilde{\delta}_{1}(k)\in\mathbb{R}^{mn}$ is a vector consisting of the first $mn$ entries of $\tilde{\delta}(k)$ and satisfies $\tilde{\delta}_{1}(k)=\frac{1}{\sqrt{m}}\sum_{i=1}^{m}\delta_{i}(k)=\frac{1}{\sqrt{m}}\sum_{i=1}^{m}(\eta_{i}(k)-\bar{\eta}(k))=0.$ (88) Therefore, $\tilde{\delta}_{1}(k)$ is stable. Moreover, it holds that $\tilde{\delta}_{2}(k+1)=\diag(\tilde{S}-\mu_{2}\tilde{B}\tilde{\Gamma},\cdots,\tilde{S}-\mu_{m}\tilde{B}\tilde{\Gamma})\tilde{\delta}_{2}(k)+\tilde{L}_{\tilde{\delta}}z(k),$ (89) where $\tilde{L}_{\tilde{\delta}}$ consists of the last $(m^{2}n-mn)$ rows of $L_{\tilde{\delta}}$. In view of Lemma 6, $\diag(\tilde{S}-\mu_{2}\tilde{B}\tilde{\Gamma},\cdots,\tilde{S}-\mu_{m}\tilde{B}\tilde{\Gamma})$ is Schur. Recalling Lemma 4, $z(k)$ is also stable. We therefore conclude that (89) is stable, which further implies the stability of (87).
On the other hand, one derives from (72) that $z(k)=Cw(k)+v(k+1)+(I_{m}\otimes\beta^{T})\epsilon(k)+C^{s}A^{s}x^{s}(k),$ (90) where $\epsilon(k)\triangleq\operatorname*{col}(\epsilon_{1}(k),\cdots,\epsilon_{m}(k))$ and $C^{s}={\left[\begin{array}[]{c}C_{1}^{s}\\\ \vdots\\\ C_{m}^{s}\end{array}\right]}.$ Recalling (71), it follows that $\begin{split}\epsilon(k+1)=(I_{m}\otimes\Lambda)\epsilon(k)&+\begin{bmatrix}G_{1}-\operatorname{\textbf{1}}_{n}C_{1}\\\ \vdots\\\ G_{m}-\operatorname{\textbf{1}}_{n}C_{m}\end{bmatrix}w(k)\\\ -(I_{m}\otimes\operatorname{\textbf{1}}_{n})&v(k+1)-(I_{m}\otimes\operatorname{\textbf{1}}_{n})C^{s}A^{s}x^{s}(k)\\\ =(I_{m}\otimes\Lambda)\epsilon(k)&+W_{\epsilon}w(k)+V_{\epsilon}v(k+1)+A_{\epsilon}x^{s}(k),\end{split}$ where $\begin{split}W_{\epsilon}&\triangleq\begin{bmatrix}G_{1}-\operatorname{\textbf{1}}_{n}C_{1}\\\ \vdots\\\ G_{m}-\operatorname{\textbf{1}}_{n}C_{m}\end{bmatrix},\\\ V_{\epsilon}&\triangleq-(I_{m}\otimes\operatorname{\textbf{1}}_{n}),A_{\epsilon}\triangleq-(I_{m}\otimes\operatorname{\textbf{1}}_{n})C^{s}A^{s}.\end{split}$ (91) By combining the above dynamics with (9), one derives that $\begin{split}\begin{bmatrix}\tilde{\delta}(k+1)\\\ \epsilon(k+1)\\\ x^{s}(k+1)\end{bmatrix}=&\begin{bmatrix}A_{\tilde{\delta}}&L_{\tilde{\delta}}(I_{m}\otimes\beta^{T})&L_{\tilde{\delta}}C^{s}A^{s}\\\ &I_{m}\otimes\Lambda&A_{\epsilon}\\\ &&A^{s}\end{bmatrix}\begin{bmatrix}\tilde{\delta}(k)\\\ \epsilon(k)\\\ x^{s}(k)\end{bmatrix}\\\ &+\begin{bmatrix}L_{\tilde{\delta}}C\\\ W_{\epsilon}\\\ J\end{bmatrix}w(k)+\begin{bmatrix}L_{\tilde{\delta}}\\\ V_{\epsilon}\\\ 0\end{bmatrix}v(k+1).\end{split}$ (92) Notice that the above system is stable. Hence, we can calculate the covariance on both sides in steady state.
It holds that $W_{r}$, the steady-state covariance, is the unique solution of the following Lyapunov equation: $\begin{split}W_{r}=A_{r}W_{r}A_{r}^{T}+\begin{bmatrix}L_{\tilde{\delta}}C\\\ W_{\epsilon}\\\ J\end{bmatrix}Q\begin{bmatrix}L_{\tilde{\delta}}C\\\ W_{\epsilon}\\\ J\end{bmatrix}^{T}+\begin{bmatrix}L_{\tilde{\delta}}\\\ V_{\epsilon}\\\ 0\end{bmatrix}R\begin{bmatrix}L_{\tilde{\delta}}\\\ V_{\epsilon}\\\ 0\end{bmatrix}^{T},\end{split}$ (93) where $A_{r}=\begin{bmatrix}A_{\tilde{\delta}}&L_{\tilde{\delta}}(I_{m}\otimes\beta^{T})&L_{\tilde{\delta}}C^{s}A^{s}\\\ &I_{m}\otimes\Lambda&A_{\epsilon}\\\ &&A^{s}\end{bmatrix}.$ In view of (86), it holds that $\delta(k)=\begin{bmatrix}\Phi\otimes I_{mn}&0&0\end{bmatrix}\begin{bmatrix}\tilde{\delta}(k)\\\ \epsilon(k)\\\ x^{s}(k)\end{bmatrix}=\Phi_{\delta}\begin{bmatrix}\tilde{\delta}(k)\\\ \epsilon(k)\\\ x^{s}(k)\end{bmatrix},$ (94) where $\Phi_{\delta}\triangleq\begin{bmatrix}\Phi\otimes I_{mn}&0&0\end{bmatrix}.$ (95) Moreover, let us denote $\bar{e}_{i}(k)\triangleq\breve{x}_{i}(k)-\hat{x}(k),$ (96) which is the deviation of the local estimate $\breve{x}_{i}(k)$ from the optimal Kalman estimate. Combining (16) and (35) yields $\hat{x}(k)=F\sum_{i=1}^{m}\eta_{i}(k).$ (97) One thus has $\bar{e}_{i}(k)=mF(\eta_{i}(k)-\bar{\eta}(k))=mF\delta_{i}(k).$ (98) Stacking such errors from all sensors together yields $\begin{split}\bar{e}(k)=(I_{m}\otimes mF)\delta(k)=(I_{m}\otimes mF)\Phi_{\delta}\begin{bmatrix}\tilde{\delta}(k)\\\ \epsilon(k)\\\ x^{s}(k)\end{bmatrix}.\end{split}$ (99) Therefore, in steady state, the covariance of $\bar{e}(k)$ can be calculated as $\bar{W}=[(I_{m}\otimes mF)\Phi_{\delta}]W_{r}[(I_{m}\otimes mF)\Phi_{\delta}]^{T}.$ (100) Finally, for any sensor $i$, let us denote its estimation error as $\begin{split}\breve{e}_{i}(k)&=\breve{x}_{i}(k)-x(k)\\\ &=(\breve{x}_{i}(k)-\hat{x}(k))+(\hat{x}(k)-x(k))\\\ &=\bar{e}_{i}(k)+\hat{e}(k),\end{split}$ (101) where $\hat{e}(k)$ is the estimation error of the Kalman filter.
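A discrete Lyapunov equation of the form (93) can be solved by simple fixed-point iteration whenever the system matrix is Schur stable. The sketch below uses small illustrative stand-ins for $A_{r}$ and the aggregated noise term, not the actual matrices of the paper.

```python
import numpy as np

# Solve W = A W A^T + Q by iterating W_{k+1} = A W_k A^T + Q,
# which converges to the unique solution when A is Schur stable.
A = np.array([[0.5, 0.1], [0.0, 0.3]])   # illustrative Schur-stable matrix
Q = np.array([[1.0, 0.2], [0.2, 2.0]])   # illustrative noise covariance

W = np.zeros_like(Q)
for _ in range(500):
    W = A @ W @ A.T + Q

# The fixed point satisfies the Lyapunov equation.
assert np.allclose(W, A @ W @ A.T + Q)
```

In practice one would use a dedicated solver (e.g., a Schur-based method) for large systems; the iteration above is just the shortest self-contained illustration.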
Since the Kalman filter is optimal, $\bar{e}_{i}(k)$ is orthogonal to $\hat{e}(k)$. By defining $\breve{e}(k)\triangleq\operatorname*{col}(\breve{e}_{1}(k),\cdots,\breve{e}_{m}(k))$, we therefore have $\breve{e}(k)=\bar{e}(k)+\operatorname{\textbf{1}}_{m}\otimes\hat{e}(k).$ (102) Calculating the covariance of both sides yields $\breve{W}=\bar{W}+(\operatorname{\textbf{1}}_{m}\operatorname{\textbf{1}}_{m}^{T})\otimes P,$ (103) where $\breve{W}$ is the steady-state covariance of $\breve{e}(k)$ and $P$ is given in (5). Notice that the above calculation also indicates the boundedness of $\operatorname*{cov}(\breve{e}(k))$ at any time. ## Appendix G Proof of Corollary 1 As proved in Appendix F, one can exactly calculate $\bar{W}$ by solving the Lyapunov equation (93) and applying (100). The result then follows by invoking (103). ## Appendix H Proof of Theorem 4 To proceed, let us introduce the following lemma: ###### Lemma 8. Given any random variables $\kappa_{1},\cdots,\kappa_{\tau}$, it follows that $\mathbb{E}\Big{[}\Big{|}\Big{|}\sum_{i=1}^{\tau}\kappa_{i}\Big{|}\Big{|}^{2}\Big{]}\leq\Big{(}\sum_{i=1}^{\tau}\sqrt{\mathbb{E}[||\kappa_{i}||^{2}]}\Big{)}^{2}.$ (104) ###### Proof. Proving (104) is equivalent to showing that $\sum_{i=1}^{\tau}\sum_{j=1}^{\tau}\mathbb{E}[\kappa_{i}^{T}\kappa_{j}]\leq\sum_{i=1}^{\tau}\sum_{j=1}^{\tau}\sqrt{\mathbb{E}[\kappa_{i}^{T}\kappa_{i}]}\sqrt{\mathbb{E}[\kappa_{j}^{T}\kappa_{j}]}.$ (105) By the Cauchy-Schwarz inequality, it holds for any $i,j$ that $\mathbb{E}[\kappa_{i}^{T}\kappa_{j}]\leq\sqrt{\mathbb{E}[\kappa_{i}^{T}\kappa_{i}]}\sqrt{\mathbb{E}[\kappa_{j}^{T}\kappa_{j}]}.$ (106) The proof is thus completed. ∎ We next prove Theorem 4. Applying arguments similar to those for Theorem 2, it is easy to see from the consistency condition (49) that the average of the local estimates coincides with the optimal Kalman filter. We hence focus on the analysis of the estimation error covariance.
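Lemma 8 can be illustrated by a Monte Carlo experiment with deliberately correlated random vectors; the particular distribution below is an arbitrary illustrative choice.

```python
import numpy as np

# Monte Carlo illustration of Lemma 8 (a consequence of Cauchy-Schwarz):
# E[||sum_i kappa_i||^2] <= (sum_i sqrt(E[||kappa_i||^2]))^2.
rng = np.random.default_rng(2)
tau, dim, N = 3, 2, 200_000
base = rng.standard_normal((N, dim))   # shared component -> correlation
kappas = [base * (i + 1) + rng.standard_normal((N, dim)) for i in range(tau)]

lhs = np.mean(np.sum(sum(kappas) ** 2, axis=1))
rhs = sum(np.sqrt(np.mean(np.sum(k ** 2, axis=1))) for k in kappas) ** 2
assert lhs <= rhs                      # inequality (104)
```

For this distribution the gap between the two sides is large, so sampling noise cannot flip the inequality.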
Let us denote $\delta_{i}(k)\triangleq\eta_{i}(k)-\frac{1}{m}\sum_{j=1}^{m}\eta_{j}(k)$ and $\varpi_{i}(k)\triangleq\omega_{i}(k)-\frac{1}{m}\sum_{j=1}^{m}\omega_{j}(k)$. Moreover, we define $\displaystyle\delta(k)\triangleq\operatorname*{col}(\delta_{1}(k),\cdots,\delta_{m}(k)),$ $\displaystyle\varpi(k)\triangleq\operatorname*{col}(\varpi_{1}(k),\cdots,\varpi_{m}(k)).$ It hence follows from (48) that $\begin{bmatrix}\delta(k+1)\\\ \varpi(k)\end{bmatrix}=\begin{bmatrix}\mathcal{D}(k)&\mathcal{J}(k)\\\ \widetilde{\mathcal{B}}&\widetilde{\mathcal{A}}\end{bmatrix}\begin{bmatrix}\delta(k)\\\ \varpi(k-1)\end{bmatrix}+\begin{bmatrix}L_{\delta}\\\ 0\end{bmatrix}z(k),$ (107) where $L_{\delta}$ is defined in (83), and $\displaystyle\mathcal{D}(k)\triangleq I_{m}\otimes\tilde{S}-\mathcal{L}(k)\otimes(\tilde{B}\tilde{\Gamma}\mathcal{B}),$ $\displaystyle\mathcal{J}(k)\triangleq-\mathcal{L}(k)\otimes(\tilde{B}\tilde{\Gamma}\mathcal{A}),\widetilde{\mathcal{A}}\triangleq I_{m}\otimes\mathcal{A},\;\widetilde{\mathcal{B}}\triangleq I_{m}\otimes\mathcal{B},$ with $\mathcal{L}(k)\triangleq\{\mathcal{L}_{i,j}(k)\}$ being the (random) Laplacian matrix with respect to the weights $\{a_{ij}\gamma_{ij}(k)\}$.
Namely, $\mathcal{L}_{i,j}(k)\triangleq\left\{\begin{array}[]{l}{\sum_{l=1}^{m}a_{il}\gamma_{il}(k),\;\quad j=i}\\\ {-a_{ij}\gamma_{ij}(k),\qquad\quad\;j\neq i}\end{array}\right..$ (108) For simplicity, let $\displaystyle\mathcal{Q}(k)\triangleq\begin{bmatrix}\mathcal{D}(k)&\mathcal{J}(k)\\\ \widetilde{\mathcal{B}}&\widetilde{\mathcal{A}}\end{bmatrix}.$ (109) Since $\delta_{i}(0)=0$ and $\varpi_{i}(0)=0$ hold for any $i$, it follows that $\begin{bmatrix}\delta(k+1)\\\ \varpi(k)\end{bmatrix}=\sum_{t=0}^{k}\bigg{(}\mathcal{Q}(k,t+1)\begin{bmatrix}L_{\delta}\\\ 0\end{bmatrix}z(t)\bigg{)},$ (110) where the transition matrix is defined as $\mathcal{Q}(k,s)=\left\{\begin{array}[]{cc}\mathcal{Q}(k)\mathcal{Q}(k-1)\cdots\mathcal{Q}(s),&k\geq s,\\\ I,&k<s.\end{array}\right.$ Then consider the update of any agent $i$. From the above equation, we conclude that $\delta_{i}(k+1)=\sum_{t=0}^{k}\Pi_{i}(k,t+1)z(t),$ (111) where $\Pi_{i}(k,t+1)$ refers to the $i$-th block row of the matrix $\mathcal{Q}(k,t+1)[L_{\delta}\quad 0]^{T}.$ Namely, the consensus error of agent $i$, i.e., $\delta_{i}(k+1)$, is caused by the sequence of residuals $\{z(t)\}$, where $t\leq k$. For simplicity, we denote $\kappa_{i}(k,t)\triangleq\Pi_{i}(k,t+1)z(t).$ Since $\operatorname*{cov}(z(t))$ is bounded at any time, in view of (50), the following statement holds for any $t\leq k$: $\mathbb{E}[||\kappa_{i}(k,t)||^{2}]\leq c\rho^{k-t}.$ (112) Therefore, one has that $\begin{split}\operatorname*{cov}(\delta_{i}(k+1))&=\mathbb{E}[||\delta_{i}(k+1)||^{2}]=\mathbb{E}\Big{[}\Big{|}\Big{|}\sum_{t=0}^{k}\kappa_{i}(k,t)\Big{|}\Big{|}^{2}\Big{]}\\\ &\leq\Big{(}\sum_{t=0}^{k}\sqrt{\mathbb{E}[||\kappa_{i}(k,t)||^{2}]}\Big{)}^{2}\leq\bigg{(}\sum_{t=0}^{k}\sqrt{c\rho^{k-t}}\bigg{)}^{2}\\\ &=\frac{c(1-\sqrt{\rho}^{\,k+1})^{2}}{(1-\sqrt{\rho})^{2}},\end{split}$ (113) where the first inequality holds by Lemma 8. Since $\rho\in(0,1)$, combining the above results with (98) and (101) yields that the estimation error is stable.
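The geometric series bound used in (113) is easy to confirm numerically: for $\rho\in(0,1)$ the partial sums $\sum_{t=0}^{k}\sqrt{c\rho^{k-t}}$ have a closed form and stay uniformly bounded in $k$. The constants below are arbitrary illustrative choices.

```python
import math

# Check: sum_{t=0}^{k} sqrt(c rho^{k-t})
#      = sqrt(c) (1 - sqrt(rho)^{k+1}) / (1 - sqrt(rho)),
# which is uniformly bounded by sqrt(c) / (1 - sqrt(rho)).
c, rho, k = 4.0, 0.64, 10
s = sum(math.sqrt(c * rho ** (k - t)) for t in range(k + 1))
closed = math.sqrt(c) * (1 - math.sqrt(rho) ** (k + 1)) / (1 - math.sqrt(rho))

assert math.isclose(s, closed)
assert s ** 2 <= c / (1 - math.sqrt(rho)) ** 2   # uniform-in-k bound
```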
###### Remark 9. It is noted that the reformulation (20) with stable input $z_{i}(k)$ is essential to establishing the stability of the local estimators. To be concrete, the stability of (111) is guaranteed under a bounded input, which is key to proving the boundedness of the estimation error covariance, as can be observed from (98)-(103). On the other hand, if an unstable input, e.g., $y_{i}(k)$ as in (15), is applied, we cannot conclude the stability of the local estimator, even when using exponentially convergent synchronization algorithms.
# Conditional Action and imperfect Erasure of Qubits Heinz-Jürgen Schmidt Universität Osnabrück, Fachbereich Physik, D - 49069 Osnabrück, Germany ###### Abstract We consider state changes in quantum theory due to “conditional action” and relate these to the discussion of entropy decrease due to interventions of “intelligent beings” and the principles of Szilard and Landauer/Bennett. The mathematical theory of conditional actions is a special case of the theory of “instruments”, which describes changes of state due to general measurements and will therefore be briefly outlined in the present paper. As a detailed example we consider the imperfect erasure of a qubit that can also be viewed as a conditional action and will be realized by the coupling of a spin to another small spin system in its ground state. ## I Introduction According to a widespread opinion, there are two types of state change in quantum mechanics: time evolution in closed systems and state changes due to measurements. The mathematical description of these two processes is known in principle: 1. (i) Time evolution in closed systems can be described by means of unitary operators $U(t)$ according to $\rho\mapsto U(t)\,\rho\,U(t)^{\ast}\;,$ (1) where $U(t)$ is obtained by solving the time-dependent Schrödinger equation and $\rho$ denotes any statistical operator. 2. (ii) Conditional state changes according to the outcome of a measurement of an observable $A$ will be described, in the simplest case, by maps of the form $\rho\mapsto P_{n}\,\rho\,P_{n}\;,$ (2) where $\left(P_{n}\right)_{n\in{\mathcal{N}}}$ is the family of eigenprojections of a self-adjoint operator $A=\sum_{n}a_{n}P_{n}$.
Without selection according to the outcomes of the measurement, the total state change will be $\rho\mapsto\sum_{n\in{\mathcal{N}}}P_{n}\,\rho\,P_{n}\;.$ (3) In a recent article S20 we have suggested a third type of state change, called “conditional action”, that combines the two aforementioned ones insofar as it describes a state change depending on the result of a preceding measurement. 1. (iii) In the simplest case a conditional action is mathematically described by maps of the form $\rho\mapsto U_{n}\,P_{n}\,\rho\,P_{n}\,U_{n}^{\ast}\;,$ (4) with the same notation as in (2) and a family of unitary operators $\left(U_{n}\right)_{n\in{\mathcal{N}}}$. Without selection according to the outcomes of the measurement, the total state change will be $\rho\mapsto\sum_{n\in{\mathcal{N}}}U_{n}\,P_{n}\,\rho\,P_{n}\,U_{n}^{\ast}\;.$ (5) Before explaining the details of this suggestion and suitable generalizations, we will fix some general notation used in the present paper. A measurement leading to the state change (2) is called a “Lüders measurement” in accordance with BLM96, sometimes also called a “projective measurement” in the literature. To avoid technical intricacies, the quantum system $\Sigma$ will be described by a finite-dimensional Hilbert space ${\mathcal{H}}$. In this case the index set ${\mathcal{N}}$ will also be finite. Let $B({\mathcal{H}})$ denote the real linear space of Hermitean operators $A:{\mathcal{H}}\rightarrow{\mathcal{H}}$, and $B_{1}^{+}({\mathcal{H}})$ the convex subset of statistical operators, i.e., Hermitean operators $\rho$ with non-negative eigenvalues and $\mbox{Tr}\rho=1$.
The state changes considered above in (2) and (3) can be viewed as a map $\mathfrak{L}:{\mathcal{N}}\times B({\mathcal{H}})\rightarrow B({\mathcal{H}})$ defined by $\mathfrak{L}(n)(\rho):=P_{n}\,\rho\,P_{n}\;,$ (6) that will be called a “Lüders instrument”, and the corresponding map $\mathfrak{L}({\mathcal{N}})(\rho):=\sum_{n\in{\mathcal{N}}}P_{n}\,\rho\,P_{n}\;,$ (7) the “total Lüders operation”. A difference between the two state changes (i) and (ii) arises when we consider the change of the von Neumann entropy $S(\rho):=-\mbox{Tr}\left(\rho\,\log\rho\right),\quad\mbox{for }\rho\in B_{1}^{+}({\mathcal{H}})\;.$ (8) Under unitary time evolutions (i) the entropy remains constant, $S(\rho)=S\left(U(t)\,\rho\,U(t)^{\ast}\right)\;,$ (9) whereas for Lüders measurements (ii) the entropy may increase, and we can only state that $S(\rho)\leq S(\mathfrak{L}({\mathcal{N}})(\rho))\;,$ (10) see vN32 – SG20, in accordance with the second law of thermodynamics. In contrast to closed systems, time evolution in open systems can take a more general form. An obvious model to account for the time evolution in open systems is to consider the extension of the system $\Sigma$ with Hilbert space ${\mathcal{H}}$ by another, auxiliary system $E$ (environment, heat bath, measurement apparatus, …) with Hilbert space ${\mathcal{K}}$ and the unitary time evolution $V$ of the total system $\Sigma+E$. If the total system is initially in the state $\rho\otimes\sigma$, it will generally evolve into an entangled state $V\left(\rho\otimes\sigma\right)V^{\ast}$. In the end, we again consider the system $\Sigma$ and find its reduced state $\rho_{1}$ given by the partial trace $\rho_{1}=\mbox{Tr}_{\mathcal{K}}\left(V\left(\rho\otimes\sigma\right)V^{\ast}\right)\;.$ (11) The corresponding state change $\rho\mapsto\rho_{1}$ will, in general, not be of unitary type (1), but represents a natural extension (ie) of the state changes according to (i).
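The entropy inequality (10) can be illustrated for a single qubit; the initial state and measurement basis below are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) = -Tr(rho log rho), cf. (8), in nats."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

# Eigenprojections of a non-degenerate observable (here: sigma_z basis).
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

psi = np.array([[np.cos(0.3)], [np.sin(0.3)]])
rho = psi @ psi.T                                  # pure state, S(rho) = 0

rho_after = sum(Pn @ rho @ Pn for Pn in P)         # total Lueders operation (7)
assert np.isclose(np.trace(rho_after), 1.0)        # trace preserving
assert entropy(rho) <= entropy(rho_after) + 1e-12  # inequality (10)
```

Here the off-diagonal terms of $\rho$ are removed, so the pure input state becomes a mixed state and the entropy strictly increases.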
In general, the entropy balance for these state changes is ambivalent: $S(\rho_{1})$ can be smaller or larger than $S(\rho)$. In fact, the initial entropy of the total system is $S(\rho)+S(\sigma)$ and the unitary time evolution $V$ leaves this invariant. But the separation of the total system into its parts $\rho_{1}$ according to (11) and $\rho_{2}=\mbox{Tr}_{\mathcal{H}}\left(V\left(\rho\otimes\sigma\right)V^{\ast}\right)\;,$ (12) increases the entropy (or leaves it constant) according to “subadditivity” of $S$, see NC00, 11.3.4, and hence $S(\rho)+S(\sigma)\leq S(\rho_{1})+S(\rho_{2})\;.$ (13) But $S(\rho_{1})-S(\rho)$ may assume positive or negative values. This can be physically understood as the phenomenon that, apart from a possible increase of the total entropy according to (13), there may be an entropy flow from the system $\Sigma$ into the environment $E$ or vice versa. An analogous extension of the system $\Sigma$ to $\Sigma+E$ can also be considered for Lüders measurements. We again start with an initial total state $\rho\otimes\sigma$, where $\sigma\in B_{1}^{+}({\mathcal{K}})$, and a unitary time evolution $V$ of the total system.
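The subadditivity inequality (13) can be checked numerically for two qubits; the initial states and the random unitary below are arbitrary illustrative choices.

```python
import numpy as np

def entropy(r):
    """von Neumann entropy in nats, cf. (8)."""
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

rng = np.random.default_rng(3)
rho = np.diag([0.8, 0.2])                 # system state
sigma = np.diag([0.6, 0.4])               # environment state

M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
V, _ = np.linalg.qr(M)                    # a generic 4x4 unitary (assumption)
total = V @ np.kron(rho, sigma) @ V.conj().T

# Reduced states (11) and (12) via partial traces.
T = total.reshape(2, 2, 2, 2)             # indices: (i, k; j, l)
rho1 = np.einsum('ikjk->ij', T)           # trace over the environment
rho2 = np.einsum('kikj->ij', T)           # trace over the system

# Subadditivity (13): S(rho) + S(sigma) = S(total) <= S(rho1) + S(rho2).
assert entropy(rho) + entropy(sigma) <= entropy(rho1) + entropy(rho2) + 1e-9
```

For a generic entangling $V$ the inequality is strict, reflecting the entanglement created between $\Sigma$ and $E$.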
Then a Lüders measurement corresponding to a complete family $\left(Q_{n}\right)_{n\in{\mathcal{N}}}$ of mutually orthogonal projections of the auxiliary system is performed, and the post-measurement total state is reduced to the system $\Sigma$, leading to the final state $\mathfrak{I}(n)(\rho):=\mbox{Tr}_{\mathcal{K}}\left(\left(\mathbbm{1}\otimes Q_{n}\right)V\left(\rho\otimes\sigma\right)V^{\ast}\left(\mathbbm{1}\otimes Q_{n}\right)\right)\;,$ (14) or, without selection, to $\mathfrak{I}({\mathcal{N}})(\rho):=\mbox{Tr}_{\mathcal{K}}\left(\sum_{n}\left(\mathbbm{1}\otimes Q_{n}\right)V\left(\rho\otimes\sigma\right)V^{\ast}\left(\mathbbm{1}\otimes Q_{n}\right)\right)\;.$ (15) Thus we obtain extensions (iie) of the state changes (ii) due to Lüders measurements by maps $\mathfrak{I}:{\mathcal{N}}\times B({\mathcal{H}})\rightarrow B({\mathcal{H}})$ of the form (14), which are called “instruments” in the literature; see BLPY16 and Section II for more precise mathematical definitions. Lüders instruments are idealized special cases of general instruments that, in some sense, minimize the perturbation of the $\Sigma$ system by the measurement, but “real measurements” are better described by general instruments. Analogous remarks as in the case of open systems apply to the entropy balance: it is well known, see L73 or NC00, Exercise 11.15, that general measurements may decrease the system’s entropy. The latter observation has led us to the suggestion S20 that the entropy decrease of systems due to the “intervention of intelligent beings” such as, e.g., Maxwell’s demon can be explained by the same mechanism. Originally, the notion of “conditional action” was developed to describe the intervention of Maxwell’s demon in the energy distribution of a gas with two chambers: depending on the result of an energy measurement on a gas molecule approaching the partition between the two chambers, a door is opened or shut.
Thus, the further time evolution of the gas depends on the result of the measurement. Similarly, the result of measuring whether a single molecule is in the left or right chamber can be used to trigger an isothermal expansion to the right or left (Szilard’s engine). Szilard argues S29 that the entropy decrease of the system is compensated by the entropy costs of acquiring information about the position of the gas particle (“Szilard’s principle”). His arguments are formulated within classical physics and are not easy to understand; see also the analysis and reconstruction of Szilard’s reasoning in LR94, EN98 and EN99. Nevertheless, it seems possible that the entropy decrease due to such external interventions is a special case of the well-understood entropy decrease due to state changes described by general instruments. In fact, it can easily be confirmed that the maps of the form (4) describing “conditional action” are special cases of instruments and hence are called “Maxwell instruments” in S20. The mathematical notion of state changes described by instruments is sufficiently general to cover not only changes due to inevitable measurement disturbances but also “deliberate” state changes depending on the result of a measurement. This notion of “conditional action” will be slightly generalized in the present paper, and then comprises not only interventions of Maxwell’s demon or cycles of Szilard’s engine S29, Z84, S20, but also quantum teleportation NC00 Ch. 1.3.7, quantum error correction NC00 Ch. 10, and the erasure of qubits S20. Relative to the choice of a suitable basis, a qubit has two values, “0” or “1”. Consider a “yes-no” measurement corresponding to said basis. If the result is “1”, the two states are swapped, hence “1” $\mapsto$ “0”. If the result is “0”, nothing is done, hence “0” $\mapsto$ “0”.
This constitutes the conditional action which sets the qubit state to its default value “0" in any case and hence can be legitimately considered as an “erasure of a qubit". The latter example contains an ironic punch line in that the erasure of memory contents with measurement results and the corresponding entropy costs are usually considered to resolve the apparent contradiction between the actions of Maxwell’s demon and the second law (Landauer’s principle). If memory erasure itself were taken as a conditional action, we would seem to enter an infinite regress of creating and erasing new memory contents. The obvious resolution to this problem is the observation that the entropy decrease in the system $\Sigma$ is due to some flow of entropy from $\Sigma$ to the auxiliary system $E$ as described above. If $E$ can be viewed as a “memory device" then, at the end of the conditional action, it already contains the missing entropy. It is not necessary to erase the content of the memory. The latter would not create the missing entropy, but only make it visible. As in S20 , it seems sensible to distinguish between the principle that erasure of memory produces entropy L61 (“Landauer’s principle" in the narrow sense) and the position that this effect constitutes the solution of the apparent paradox of Maxwell’s demon B82 (henceforward called “Landauer/Bennett principle"). Moreover, we will substantiate our critique of the Landauer/Bennett principle (not of the Landauer principle) outlined above by means of a more realistic model of qubit erasure than that given in S20 . To this end we realize the qubit (the system $\Sigma$) by a single spin with spin quantum number $s=1/2$ described by a Hilbert space ${\mathcal{H}}\cong{\mathbbm{C}}^{2s+1}={\mathbbm{C}}^{2}$ and model the erasure of the qubit by the coupling of the single spin with a “heat bath" $E$ consisting of $N=6$ spins such that the time evolution of the total system can be analytically calculated.
The quotation marks refer to the fact that the “heat bath" is pretty small and not macroscopic, as usually required, and that, moreover, it is rather a “cold bath". This is due to the choice of the default value “0" of the qubit as the ground state $\downarrow$ of the single spin. Thus, erasing the qubit is physically equivalent to cooling the system $\Sigma$ to the temperature $T=0$. Although this is, strictly speaking, impossible due to the third law of thermodynamics, it can be approximately accomplished by coupling the single spin to a system of $N=6$ spins in its ground state. Here we ignore the physical impossibility of preparing a system in its ground state and consider the ground state of the “heat bath" as a suitable approximation to a state of very low temperature. This approximation has the advantage of providing fairly simple expressions for the relevant quantities considered in this paper. The corresponding calculations are presented in Section V. As a side effect, this account makes it necessary to define the concept of “conditional action" somewhat more generally than in S20 . This is done in Section II where we also recapitulate the basic notions of quantum measurement theory required for the present work. A critical account of the Szilard principle in the realm of quantum theory is given in Section III, where we also formulate an upper bound for the entropy decrease due to conditional action that is compatible with Szilard’s reasoning but only valid under certain restrictions. A similar bound is derived in Section IV where the connections of the present theory with the OLR approach A17 are considered. The proofs are moved to the appendix, as is the explicit construction of a “standard" measurement dilation for a general instrument. This measurement dilation is well-known but nevertheless reproduced here since some arguments given in this paper depend on its details. We close with a Summary and Outlook in Section VI.
## II General definitions and results In the following we will heavily rely upon the mathematical notions of operations and instruments. Although these notions are well-known, see, e. g., K83 – P13b and BLPY16 , it is in order to recall the pertinent definitions adapted to the present purposes and their interpretations in the context of measurement theory. For readability, we will sometimes repeat definitions already presented in the Introduction (Section I). Let ${\mathcal{H}}$ be a $d$-dimensional Hilbert space, let $B({\mathcal{H}})$ denote the space of Hermitean operators $A:{\mathcal{H}}\longrightarrow{\mathcal{H}}$, and let $B^{+}({\mathcal{H}})$ be the cone of positively semi-definite operators, i. e., operators having only non-negative eigenvalues. The convex subset $B_{1}^{+}({\mathcal{H}})\subset B^{+}({\mathcal{H}})$ consists of statistical operators $\rho$ with $\mbox{Tr}\rho=1$. Such operators physically describe (mixed) states. Pure states are represented by one-dimensional projectors $P_{\psi}$, where $\psi\in{\mathcal{H}}$ with $\|\psi\|=1$. According to NC00 , 8.2.1, there are three equivalent ways to define operations: * • By considering the system $\Sigma$ coupled to an environment $E$, * • by an operator-sum representation, or * • via physically motivated axioms. Here we follow the second approach and define an “operation" to be a map $A:B({\mathcal{H}})\longrightarrow B({\mathcal{H}})$ of the form $A(\rho)=\sum_{i\in{\mathcal{I}}}A_{i}\,\rho\,A_{i}^{\ast}\;,$ (16) with the Kraus operators $A_{i}:{\mathcal{H}}\rightarrow{\mathcal{H}}$ and a finite index set ${\mathcal{I}}$, see K83 . It follows that an operation is linear and maps $B^{+}({\mathcal{H}})$ into itself. It may be trace-preserving or not. Operations are intended to describe state changes due to measurements. For example, the total Lüders operation (7) is a trace-preserving operation in the above sense with ${\mathcal{I}}={\mathcal{N}}$ and $A_{n}=P_{n}$ for all $n\in{\mathcal{N}}$.
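As a small numerical illustration of the operator-sum form (16) (an illustrative sketch, not part of the paper; the state is chosen arbitrarily), the following applies the total Lüders operation on ${\mathbbm{C}}^{2}$, whose Kraus operators are the projections onto the standard basis:

```python
import numpy as np

def apply_operation(kraus_ops, rho):
    """A(rho) = sum_i A_i rho A_i^*, the operator-sum form (16)."""
    return sum(A @ rho @ A.conj().T for A in kraus_ops)

# Total Lüders operation on C^2: Kraus operators are the two orthogonal
# projections onto the standard basis (I = N, A_n = P_n).
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.array([[0.7, 0.3],
                [0.3, 0.3]])   # a state with off-diagonal coherences

rho_out = apply_operation(P, rho)
print(np.isclose(np.trace(rho_out), 1.0))          # True: trace-preserving
print(np.allclose(rho_out, np.diag([0.7, 0.3])))   # True: coherences removed
```

As expected, the Lüders operation keeps the diagonal of the state and removes the off-diagonal terms.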
An operation $A:B({\mathcal{H}})\rightarrow B({\mathcal{H}})$ will be called pure iff the representation (16) of $A$ can be reduced to a single Kraus operator, i. e., $A(\rho)=A_{1}\,\rho\,A_{1}^{\ast}\;.$ (17) Physically, this means that a pure operation maps pure states onto pure states, up to a positive factor. There exists a so-called statistical duality between states and observables, see BLPY16 , chapter 23.1. In the finite-dimensional case $B({\mathcal{H}})$ can be identified with its dual space $B({\mathcal{H}})^{\ast}$ by means of the Euclidean scalar product $\mbox{Tr}\,(A\,B)$. Physically, we may distinguish between the two spaces in the sense that $B({\mathcal{H}})$ is spanned by the subset of statistical operators representing states and $B({\mathcal{H}})^{\ast}$ is spanned by the subset of operators with eigenvalues in the interval $[0,1]$ representing effects. Effects describe yes-no-measurements including the subset of projectors, which are the extremal points of the convex set of effects, see BLPY16 . Every operation $A:B({\mathcal{H}})\rightarrow B({\mathcal{H}})$, viewed as a transformation of states (Schrödinger picture) gives rise to the dual operation $A^{\ast}:B({\mathcal{H}})^{\ast}\longrightarrow B({\mathcal{H}})^{\ast}$ viewed as a transformation of effects (Heisenberg picture). Reconsider the representation (16) of the operation $A$ by means of the Kraus operators $A_{i}$. Then the dual operation $A^{\ast}$ has the corresponding representation $A^{\ast}(X)=\sum_{i\in{\mathcal{I}}}A_{i}^{\ast}\,X\,A_{i}\;,$ (18) for all $X\in B({\mathcal{H}})^{\ast}$. Let ${\mathcal{N}}$ be a finite set of outcomes. Then the map ${\mathfrak{I}}:{\mathcal{N}}\times B({\mathcal{H}})\longrightarrow B({\mathcal{H}})$ will be called an instrument iff * • ${\mathfrak{I}}(n)$ is an operation for all $n\in{\mathcal{N}}$, and * • $\mbox{Tr}\left(\sum_{n\in{\mathcal{N}}}{\mathfrak{I}}(n)(\rho)\right)=\mbox{Tr}\rho$ for all $\rho\in B({\mathcal{H}})$. 
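The duality between (16) and (18) w. r. t. the scalar product $\mbox{Tr}\,(A\,B)$, i. e., $\mbox{Tr}\left(A(\rho)\,X\right)=\mbox{Tr}\left(\rho\,A^{\ast}(X)\right)$, can be checked numerically. The sketch below (an illustration only; the Kraus operators are randomly chosen and carry no physical meaning) verifies it:

```python
import numpy as np

rng = np.random.default_rng(0)
dagger = lambda M: M.conj().T

# Two randomly chosen Kraus operators on C^2 (illustration only)
K = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
     for _ in range(2)]

A      = lambda rho: sum(k @ rho @ dagger(k) for k in K)   # Eq. (16), Schrödinger picture
A_dual = lambda X:   sum(dagger(k) @ X @ k for k in K)     # Eq. (18), Heisenberg picture

rho = np.diag([0.6, 0.4])                  # a state
X = np.array([[0.3, 0.1],
              [0.1, 0.7]])                 # an effect-like Hermitean operator

# Duality w.r.t. Tr(AB): Tr(A(rho) X) = Tr(rho A*(X))
print(np.isclose(np.trace(A(rho) @ X), np.trace(rho @ A_dual(X))))  # True
```

The identity follows from cyclicity of the trace, applied term by term to the Kraus sum.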
The first condition can be re-written as ${\mathfrak{I}}(n)(\rho)=\sum_{i\in{\mathcal{I}}_{n}}A_{ni}\,\rho\,A_{ni}^{\ast}\quad\mbox{for all }n\in{\mathcal{N}},$ (19) with suitable Kraus operators $A_{ni}:{\mathcal{H}}\rightarrow{\mathcal{H}}$. The second condition can be rephrased by saying that the total operation ${\mathfrak{I}}({\mathcal{N}})$ defined by ${\mathfrak{I}}({\mathcal{N}})(\rho)\equiv\sum_{n\in{\mathcal{N}}}{\mathfrak{I}}(n)(\rho)$ (20) is trace-preserving. An instrument ${\mathfrak{I}}$ will be called “pure" iff each operation ${\mathfrak{I}}(n),\;n\in{\mathcal{N}},$ is pure. Examples of pure instruments are given by Lüders instruments (6) and “Maxwell instruments" (4). As for operations, every instrument ${\mathfrak{I}}$ gives rise to a dual instrument ${\mathfrak{I}}^{\ast}:{\mathcal{N}}\times B({\mathcal{H}})^{\ast}\longrightarrow B({\mathcal{H}})^{\ast}$ defined by ${\mathfrak{I}}^{\ast}(n)(X):={\mathfrak{I}}(n)^{\ast}(X)$ (21) for all $n\in{\mathcal{N}}$ and $X\in B({\mathcal{H}})^{\ast}$. The condition that the total operation (20) is trace-preserving translates into ${\mathfrak{I}}^{\ast}({\mathcal{N}})({\mathbbm{1}})=\sum_{n\in{\mathcal{N}}}{\mathfrak{I}}^{\ast}(n)({\mathbbm{1}})=\sum_{n\in{\mathcal{N}}}\sum_{i\in{\mathcal{I}}_{n}}A_{ni}^{\ast}\,A_{ni}={\mathbbm{1}}\;.$ (22) Thus every dual instrument yields a resolution of the identity by means of effects $F_{n}:={\mathfrak{I}}^{\ast}(n)({\mathbbm{1}})=\sum_{i\in{\mathcal{I}}_{n}}A_{ni}^{\ast}\,A_{ni}\;,$ (23) and hence gives rise to a generalized observable in the sense of a positive operator-valued measure $F=\left(F_{n}\right)_{n\in{\mathcal{N}}}$, see BLPY16 . Note, however, that compared to the general definition in BLPY16 we will have to consider generalized observables only in the discrete, finite-dimensional case.
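A minimal numerical illustration of (22) and (23) (a hypothetical toy instrument, not from the paper): for a two-outcome Lüders instrument on ${\mathbbm{C}}^{3}$ the effects $F_{n}$ indeed resolve the identity and hence form a POVM:

```python
import numpy as np

# Lüders instrument on C^3 with outcomes N = {1, 2}: one Kraus operator per
# outcome, namely the projections P1 and P2 + P3.
kraus = {1: [np.diag([1.0, 0.0, 0.0])],
         2: [np.diag([0.0, 1.0, 1.0])]}

# Effects F_n = sum_i A_ni^* A_ni, Eq. (23)
effects = {n: sum(A.conj().T @ A for A in ops) for n, ops in kraus.items()}

# Resolution of the identity, Eq. (22)
total = sum(effects.values())
print(np.allclose(total, np.eye(3)))  # True
```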
The traditional notion of “sharp" observables represented by self-adjoint operators corresponds to the special case of a projection-valued measure $\left(P_{n}\right)_{n\in{\mathcal{N}}}$ satisfying $\sum_{n\in{\mathcal{N}}}P_{n}={\mathbbm{1}}$. From now on, “observables" will always be understood in the generalized sense. It can be shown S20 that “Maxwell instruments" are just pure instruments corresponding to sharp observables. The example of imperfect erasure of qubits considered in Section V suggests that the class of “Maxwell instruments" is too narrow to describe realistic conditional actions. First, imperfect erasure cannot be described by a pure instrument, since the initial state of the “heat bath" is not pure. Moreover, the measurement of a sharp “heat bath" observable does not give rise to a sharp qubit observable. Hence it seems sensible to use general instruments to describe conditional action. Fortunately, the main results on the entropy balance of conditional action in S20 can be easily generalized to general instruments. To this end we reconsider the map $\mathfrak{I}$ defined in (14) by means of coupling the system $\Sigma$ to some environment $E$. Recall that the environment $E$ is described by some Hilbert space ${\mathcal{K}}$ and an initial state $\sigma\in B_{1}^{+}\left({\mathcal{K}}\right)$. Moreover, $V$ denotes the unitary time evolution of the total system and $\left(Q_{n}\right)_{n\in{\mathcal{N}}}$ a sharp environment observable. It can be shown that (i) (14) defines an instrument in the above sense and (ii) every instrument can be obtained in this way, see Theorem 7.14 of BLPY16 , Exercise 8.9 of NC00 , or Appendix A. The special instrument defined in (14) will be referred to as a “measurement dilation" $\mathfrak{D}_{{\mathcal{K},\sigma,V,Q}}$ of a given instrument $\mathfrak{I}$.
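As a toy illustration of a measurement dilation (14)–(15) (a hypothetical example, not taken from the paper): take ${\mathcal{K}}={\mathcal{H}}={\mathbbm{C}}^{2}$, a pure environment ground state, $V$ the swap unitary, and a Lüders measurement of the environment basis. The sketch verifies that the total operation is trace-preserving and that the system’s entropy decrease is matched by the entropy gained by the environment:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) = -Tr(rho log rho), natural logarithm."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def ptrace(rho, dS, dE, keep):
    """Partial trace over one tensor factor of a state on a dS*dE-dim space."""
    r = rho.reshape(dS, dE, dS, dE)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('ijil->jl', r)

# Dilation ingredients: K = H = C^2, environment in |0><0|, V = SWAP,
# Lüders measurement of the environment basis (Q_0, Q_1).
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
rho = np.eye(2) / 2                       # system: maximally mixed, S = log 2
sigma = np.diag([1.0, 0.0])               # environment: pure ground state
Q = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Post-measurement total state without selection, cf. Eq. (15)
post = sum(np.kron(np.eye(2), q) @ SWAP @ np.kron(rho, sigma) @ SWAP @ np.kron(np.eye(2), q)
           for q in Q)
rho1 = ptrace(post, 2, 2, keep=0)         # final system state
rho2 = ptrace(post, 2, 2, keep=1)         # final environment state

print(np.isclose(np.trace(post), 1.0))                       # True: trace-preserving
dS = vn_entropy(rho) - vn_entropy(rho1)                      # entropy decrease, here log 2
print(np.isclose(dS, vn_entropy(rho2) - vn_entropy(sigma)))  # True: balanced by the environment
```

Here the entropy balance holds with equality: the swap moves exactly $\log 2$ of entropy from the system to the environment.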
If the initial state $\sigma$ of the environment is pure, $\sigma=P_{\phi}$, the measurement dilation will also be denoted by $\mathfrak{D}_{{\mathcal{K},\phi,V,Q}}$. A measurement dilation of a given instrument $\mathfrak{I}$ is hence a physical realization of $\mathfrak{I}$ by a Lüders instrument of the extended system $\Sigma+E$ and a subsequent reduction to $\Sigma$. Let a conditional action be described by the instrument $\mathfrak{I}$ with measurement dilation $\mathfrak{I}=\mathfrak{D}_{{\mathcal{K},\sigma,V,Q}}$. W. r. t. this measurement dilation we define the two reduced states $\rho_{1}:=\mbox{Tr}_{\mathcal{K}}\left(\sum_{n\in{\mathcal{N}}}\left({\mathbbm{1}}\otimes Q_{n}\right)V\left(\rho\otimes\sigma\right)V^{\ast}\left({\mathbbm{1}}\otimes Q_{n}\right)\right)\;,$ (24) and $\rho_{2}:=\mbox{Tr}_{\mathcal{H}}\left(\sum_{n\in{\mathcal{N}}}\left({\mathbbm{1}}\otimes Q_{n}\right)V\left(\rho\otimes\sigma\right)V^{\ast}\left({\mathbbm{1}}\otimes Q_{n}\right)\right)\;.$ (25) Then the analogous arguments leading to the entropy balance (13) also prove: ###### Proposition 1 Under the preceding conditions the following holds: $\Delta S:=S(\rho)-S(\rho_{1})\leq S(\rho_{2})-S(\sigma)\;.$ (26) In connection with the Szilard principle discussed in the next Section the following proposition will be of some interest: ###### Proposition 2 The total operation $\mathfrak{I}({\mathcal{N}})$ of an instrument with measurement dilation $\mathfrak{I}=\mathfrak{D}_{{\mathcal{K},\sigma,V,Q}}$ is independent of the environment observable $Q$. This means in particular that $\mathfrak{I}({\mathcal{N}})$ could even be realized by a coupling of $\Sigma$ to some environment $E$, unitary time evolution and final state reduction, without any measurement at all. The proof of Proposition 2 can be found in Appendix B.1. ## III The Szilard principle revisited As mentioned in the Introduction, the ideas of L. 
Szilard S29 to resolve the apparent contradiction between the results of the “intervention of intelligent beings" and the second law were published more than nine decades ago and are confined to classical physics. Therefore, the reconstruction of “Szilard’s principle" for quantum mechanics may appear somewhat daring. In this section, nevertheless, we will reconsider what we understand by “Szilard’s principle” from the point of view developed in the present article. According to this principle the entropy decrease of the system is compensated by the entropy costs of acquiring information about the system’s state. Recall that we have considered a so-called measurement dilation of the instrument $\mathfrak{I}$ describing state changes due to conditional action which extends the system $\Sigma$ by an auxiliary system $E$ (environment). For the “standard dilation" given in Appendix A the dimension of the Hilbert space ${\mathcal{K}}$ corresponding to the auxiliary system $E$ equals the number of outcomes $\left|{\mathcal{N}}\right|$ of the Lüders measurement of $Q$ if the instrument $\mathfrak{I}$ is pure. It is therefore tempting to consider the auxiliary system $E$ as a “memory” that holds the information about the result of the measurement and to interpret the “entropy cost of information acquisition” as the entropy $S(\rho_{2})$ of the final state $\rho_{2}$ of $E$ after the measurement. The probabilities $p_{n},\,n\in{\mathcal{N}},$ of the various outcomes are given by $p_{n}=\mbox{Tr}\left(\rho\,F_{n}\right)$, where $F=\left(F_{n}\right)_{n\in{\mathcal{N}}}$ is the observable (23) corresponding to the conditional action and $\rho$ is the initial state of the system $\Sigma$.
The Shannon entropy $H(p)$ of the probability distribution $\left(p_{n}\right)_{n\in{\mathcal{N}}}$ is independent of any measurement dilation and will be called the “Shannon entropy of the experiment" $H(\rho,F):=H(p):=-\sum_{n\in{\mathcal{N}}}p_{n}\,\log p_{n}\;.$ (27) Then we can prove the following inequality, which confirms Szilard’s principle in the version given above: ###### Theorem 1 (Szilard’s principle - quantum case) The entropy decrease $\Delta S=S(\rho)-S(\rho_{1})$ of a conditional action corresponding to a pure instrument $\mathfrak{I}$ is bounded by the Shannon entropy of the experiment, i. e., $\Delta S\leq H(\rho,F)\;.$ (28) For the proof see Appendix B.2. It is worth noting that the bound in (28) is independent of the pure instrument describing the conditional action and depends only on the probabilities $p_{n}=\mbox{Tr}\left(\rho F_{n}\right)$. The theorem is trivially satisfied if the conditional action leads to an increase of entropy, i. e., $\Delta S\leq 0$, as in the case of a Lüders measurement without any conditional action. Another trivial case is given if the observable $F$ is sharp and the projections $F_{n}$ are one-dimensional. In this case $S(\rho)\leq S\left(\sum_{n}F_{n}\rho F_{n}\right)=H(\rho,F)$ and the theorem holds since $S(\rho_{1})\geq 0$. In other words: Entropy cannot fall below the value of zero. Otherwise the bound (28) is non-trivial. Consider the example of ${\mathcal{H}}={\mathbb{C}}^{3}$ with three mutually orthogonal one-dimensional projections $P_{1},P_{2},P_{3}$, $\rho={\textstyle\frac{1}{2}}P_{1}+{\textstyle\frac{3}{10}}P_{2}+{\textstyle\frac{1}{5}}P_{3}\;,$ (29) and $F=(P_{1},P_{2}+P_{3})$.
It follows that $S(\rho)=-{\textstyle\frac{1}{2}}\log\left({\textstyle\frac{1}{2}}\right)-{\textstyle\frac{3}{10}}\log\left({\textstyle\frac{3}{10}}\right)-{\textstyle\frac{1}{5}}\log\left({\textstyle\frac{1}{5}}\right)\approx 1.02965>H(\rho,F)=\log 2\approx 0.693147\;,$ (30) and hence, in this example, Theorem 1 says more than just that the entropy of $\rho$ cannot drop to negative values. It is straightforward to construct a Maxwell instrument corresponding to the observable $F$ such that $\rho_{1}={\textstyle\frac{4}{5}}\,P_{1}+{\textstyle\frac{1}{5}}\,P_{2}$ and hence $\Delta S\approx 0.529251<\log 2\approx 0.693147$ in accordance with Theorem 1. A slight generalization of Theorem 1 is the following: ###### Corollary 1 The upper bound (28) also holds if the instrument $\mathfrak{I}$ can be written as a convex linear combination of pure instruments with the same set of outcomes ${\mathcal{N}}$. For the proof see Appendix B.3. A pure instrument has a standard dilation with one-dimensional projections $Q_{n},\;n\in{\mathcal{N}},$ and a pure initial state $\sigma=P_{\phi}$ of $E$. If we extend this standard dilation by considering a properly mixed initial state $\sigma$ of $E$, we obtain a convex combination of pure instruments for which Corollary 1 holds. But not every instrument is a convex linear combination of pure ones. Actually, there exist instruments where the bound on the entropy decrease given in (28) is violated. To provide an example we consider the (perfect) erasure of two qubits. Thus ${\mathcal{H}}={\mathbbm{C}}^{2}\otimes{\mathbbm{C}}^{2}\cong{\mathbbm{C}}^{4}$ and we consider an orthonormal basis of ${\mathcal{H}}$ denoted by $\left(\psi_{1}=\uparrow\uparrow,\,\psi_{2}=\uparrow\downarrow,\,\psi_{3}=\downarrow\uparrow,\,\psi_{4}=\downarrow\downarrow\right)$. The conditional action maps all these four basis states onto the default state $\psi_{4}=\downarrow\downarrow$.
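The numerical values quoted in the example (29)–(30) are easy to reproduce; the following sketch recomputes $S(\rho)$, $H(\rho,F)$ and $\Delta S$ (natural logarithms throughout):

```python
import numpy as np

def H(ps):
    """Shannon/von Neumann entropy of a probability vector, natural log."""
    ps = np.asarray(ps)
    return float(-(ps * np.log(ps)).sum())

# rho = 1/2 P1 + 3/10 P2 + 1/5 P3 on C^3, observable F = (P1, P2 + P3)
S_rho = H([0.5, 0.3, 0.2])       # von Neumann entropy of rho
H_exp = H([0.5, 0.5])            # Shannon entropy of the experiment: p = (1/2, 3/10 + 1/5)
S_rho1 = H([0.8, 0.2])           # entropy of rho_1 = 4/5 P1 + 1/5 P2

print(round(S_rho, 5))           # 1.02965  > log 2, so the bound (28) is non-trivial
print(round(H_exp, 6))           # 0.693147
print(round(S_rho - S_rho1, 6))  # 0.529251  < log 2, in accordance with Theorem 1
```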
It has the following measurement dilation: ${\mathcal{K}}={\mathcal{H}}$, initial auxiliary state $\phi=\downarrow\downarrow$, unitary time evolution $V$ of the total system defined by $V(\Phi\otimes\Psi)=\Psi\otimes\Phi$ and Lüders measurement of the auxiliary observable $Q=\left(Q_{\nu}\right)_{\nu=1,\ldots,4}=\left(|\psi_{\nu}\rangle\langle\psi_{\nu}|\right)_{\nu=1,\ldots,4}$. The corresponding instrument $\mathfrak{I}=\mathfrak{D}_{{\mathcal{K}},\phi,V,Q}$ is pure and hence satisfies (28). Then we consider another instrument $\widetilde{\mathfrak{I}}$ by changing the auxiliary observable to $\widetilde{Q}=\left(Q_{1}+Q_{2},Q_{3}+Q_{4}\right)=\left(|\uparrow\rangle\langle\uparrow|\otimes{\mathbbm{1}},|\downarrow\rangle\langle\downarrow|\otimes{\mathbbm{1}}\right)$. The corresponding system observable $\widetilde{F}=\left(|\uparrow\rangle\langle\uparrow|\otimes{\mathbbm{1}},|\downarrow\rangle\langle\downarrow|\otimes{\mathbbm{1}}\right)$ is also two-valued and corresponds to a measurement of the first qubit w. r. t. the considered basis. All other components of $\mathfrak{D}_{{\mathcal{K}},\phi,V,Q}$ are left unchanged. Consider the initial state $\rho={\textstyle\frac{1}{4}}{\mathbbm{1}}$ of the system with $S(\rho)=\log 4$ and $H(\rho,\widetilde{F})=\log 2$. It follows that $V(\rho\otimes P_{\phi})V^{\ast}=P_{\phi}\otimes\rho$ and hence $\rho_{1}=P_{\phi}$ and $S(\rho_{1})=0$. Consequently, $\Delta S=S(\rho)-S(\rho_{1})=\log 4>\log 2=H(\rho,\widetilde{F})$, violating the bound (28). Similar examples abound: Whenever the entropy decrease $\Delta S$ due to a conditional action is larger than $\log 2$, a corresponding measurement dilation can be modified to yield $H(\rho,\widetilde{F})=\log 2$ without changing $\Delta S$, due to Proposition 2. By Corollary 1, the modified instrument $\widetilde{\mathfrak{I}}$ cannot be written as a convex linear combination of pure instruments.
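The counterexample can also be checked by direct computation. The sketch below (a numerical transcription of the example above) builds the swap dilation for ${\mathcal{H}}={\mathcal{K}}={\mathbbm{C}}^{4}$, coarse-grains the auxiliary observable to $\widetilde{Q}$, and confirms $\Delta S=\log 4>\log 2$:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy, natural logarithm."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def ptrace_env(rho, dS, dE):
    """Trace out the environment (second tensor factor)."""
    return np.einsum('ijkj->ik', rho.reshape(dS, dE, dS, dE))

d = 4                                      # two qubits
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[d * j + i, d * i + j] = 1.0   # V(Phi ⊗ Psi) = Psi ⊗ Phi

rho = np.eye(d) / d                        # initial system state, S(rho) = log 4
phi = np.zeros(d); phi[3] = 1.0            # auxiliary state psi_4 = down-down
P_phi = np.outer(phi, phi)

# Coarse-grained auxiliary observable Q~ = (Q1 + Q2, Q3 + Q4)
Qt = [np.diag([1.0, 1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0, 1.0])]

post = sum(np.kron(np.eye(d), q) @ SWAP @ np.kron(rho, P_phi) @ SWAP.T @ np.kron(np.eye(d), q)
           for q in Qt)
rho1 = ptrace_env(post, d, d)

dS_drop = vn_entropy(rho) - vn_entropy(rho1)
print(np.isclose(dS_drop, np.log(4)))      # True: Delta S = log 4 > log 2 = H(rho, F~)
```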
As a conclusion of our evaluation of Szilard’s principle, we can state that there are examples of conditional actions where the entropy decrease in the system can be explained by an entropy increase at least as large in a memory, as well as counterexamples. In the counterexamples, however, we have no violation of the second law, but only the impossibility of reducing the auxiliary system to its function as a memory. This is especially true for the limiting case of an entropy reduction without measurement. The example of Section V, see Figure 3, shows that the upper bound (28) of the entropy decrease holds for a larger class of conditional actions than given by Theorem 1 or Corollary 1. It remains an open task to determine this class more precisely. ## IV Connections to the OLR approach There exists a vast amount of literature on Maxwell’s demon and related questions footnote . Among them is an article that comes rather close to the results of the present work, namely A13 , which deals with Szilard’s engine and in whose abstract we read: > In this paper, Maxwell’s Demon is analyzed within a “referential" approach > to physical information that defines and quantifies the Demon’s information > via correlations between the joint physical state of the confined molecule > and that of the Demon’s memory. On this view […] information is erased not > during the memory reset step of the Demon’s cycle, but rather during the > expansion step, when these correlations are destroyed. The mentioned notion of “observer-local referential (OLR) information" is further outlined in A17 . A detailed comparison of the “conditional action approach" and the “OLR approach" is beyond the scope of this paper. Arguably, a key difference is that we could not model the formation of a correlation by a measurement and the subsequent destruction of that correlation by a conditional action as a sequence of state changes and instead had to use measurement dilation as a surrogate, see Section VI.
To illustrate the nevertheless existing connections between the two approaches, we will derive another upper bound for the entropy decrease analogous to Theorem 1 using the notion of OLR information. We will restrict ourselves to the case of conditional actions described by Maxwell instruments $\mathfrak{M}$, i. e., instruments of the form (4). Let $\mathfrak{L}$ denote the corresponding Lüders instrument of the form (2) such that $\mathfrak{M}$ and $\mathfrak{L}$ share the same Hilbert space ${\mathcal{H}}$ and the same sharp observable $P=\left(P_{n}\right)_{n\in{\mathcal{N}}}$. Further consider the standard measurement dilations $\mathfrak{L}=\mathfrak{D}_{{\mathcal{K}},\phi^{\prime},V^{\prime},Q^{\prime}},\quad\mbox{and}\quad\mathfrak{M}=\mathfrak{D}_{{\mathcal{K}},\phi,V,Q}\;,$ (31) as explicitly constructed in Appendix A but specialized to the case of pure instruments, see also S20 . W. r. t. these measurement dilations we further define $\displaystyle\rho_{12}$ $\displaystyle:=$ $\displaystyle\sum_{n\in{\mathcal{N}}}\left(\mathbbm{1}-Q_{n}\right)\,V\,\left(\rho\otimes P_{\phi}\right)\,V^{\ast}\,\left(\mathbbm{1}-Q_{n}\right)\,,$ (32) $\displaystyle\rho_{1}$ $\displaystyle:=$ $\displaystyle\mbox{Tr}_{\mathcal{K}}\,\rho_{12},\quad\mbox{and}\quad\rho_{2}:=\mbox{Tr}_{\mathcal{H}}\,\rho_{12}\;,$ (33) and analogously for the primed quantities: $\displaystyle\rho_{12}^{\prime}$ $\displaystyle:=$ $\displaystyle\sum_{n\in{\mathcal{N}}}\left(\mathbbm{1}-Q^{\prime}_{n}\right)\,V^{\prime}\,\left(\rho\otimes P_{\phi^{\prime}}\right)\,V^{\prime\ast}\,\left(\mathbbm{1}-Q^{\prime}_{n}\right)\,,$ (34) $\displaystyle\rho_{1}^{\prime}$ $\displaystyle:=$ $\displaystyle\mbox{Tr}_{\mathcal{K}}\,\rho_{12}^{\prime},\quad\mbox{and}\quad\rho_{2}^{\prime}:=\mbox{Tr}_{\mathcal{H}}\,\rho_{12}^{\prime}\;.$ (35) In accordance with A17 we define the OLR information $\displaystyle{\mathcal{I}}$ $\displaystyle:=$ $\displaystyle S(\rho_{1})+S(\rho_{2})-S(\rho_{12})=:S_{1}+S_{2}-S_{12}\;,$ 
(36) $\displaystyle{\mathcal{I}}^{\prime}$ $\displaystyle:=$ $\displaystyle S(\rho_{1}^{\prime})+S(\rho_{2}^{\prime})-S(\rho_{12}^{\prime})=:S_{1}^{\prime}+S_{2}^{\prime}-S_{12}^{\prime}\;,$ (37) $\displaystyle\Delta{\mathcal{I}}$ $\displaystyle:=$ $\displaystyle{\mathcal{I}}^{\prime}-{\mathcal{I}}\;.$ (38) Further, let $S_{0}:=S(\rho)=S(\rho\otimes P_{\phi})=S(\rho\otimes P_{\phi^{\prime}})\;,$ (39) and $\Delta S:=S_{0}-S_{1}\;.$ (40) Then we can prove the following ###### Proposition 3 Under the preceding conditions the entropy decrease $\Delta S$ is bounded from above by $\Delta S\leq\Delta{\mathcal{I}}\;.$ (41) The proof of Proposition 3 can be found in Appendix B.4. Of course, this is only a first step towards analyzing the mentioned relations, since the conditions of Proposition 3 are rather limited, e. g., by the fact that only the standard dilation is considered and not an arbitrary measurement dilation. ## V Imperfect erasure of a qubit The role of the detailed example considered in this Section is twofold: First, we can explain and illustrate the definitions of the previous sections using a non-trivial but still computable example. Second, this reasonably realistic case also demonstrates the viability of the general theory. ### V.1 Definition of the model We consider a system of $N$ spins with spin quantum number $s=1/2$ equipped with a uniform anti-ferromagnetic Heisenberg coupling and a Zeeman term. This leads to a Hamiltonian $H_{N}=J\,\sum_{1\leq\mu<\nu\leq N}\mathbf{s}_{\mu}\cdot\mathbf{s}_{\nu}\,+B\sum_{\mu=1}^{N}s_{\mu}^{z}\;,$ (42) where $J>0$ and $B>0$ are dimensionless physical parameters characterizing the spin system.
$\mathbf{s}_{\mu}=\left(s_{\mu}^{x},s_{\mu}^{y},s_{\mu}^{z}\right)$ represents the vector of spin operators at the site $\mu$. It is well-known that the corresponding time evolution can be analytically calculated since we may write the Hamiltonian in the form $H_{N}=\frac{J}{2}\left(\mathbf{S}^{2}-\frac{3N}{4}\mathbbm{1}\right)+B\,S^{z}\;,$ (43) where $\mathbf{S}:=\sum_{\mu=1}^{N}\mathbf{s}_{\mu}$ denotes the total vector of spin operators and $S^{z}$ its $z$-component.
Since $\mathbf{S}^{2}$ and $S^{z}$ commute they possess a system of common eigenvectors $\left|\alpha;S,M\right\rangle$ satisfying the eigenvalue equations $\mathbf{S}^{2}\,\left|\alpha;S,M\right\rangle=S(S+1)\,\left|\alpha;S,M\right\rangle\;,$ (44) $S^{z}\,\left|\alpha;S,M\right\rangle=M\,\left|\alpha;S,M\right\rangle\;,$ (45) and hence $H_{N}\,\left|\alpha;S,M\right\rangle=\left({\textstyle\frac{J}{2}}\left(S(S+1)-{\textstyle\frac{3N}{4}}\right)+B\,M\right)\left|\alpha;S,M\right\rangle=:E_{N}(S,M)\,\left|\alpha;S,M\right\rangle\;.$ (46) The theory of coupling angular momenta treated in many textbooks yields that the quantum number $S$ assumes the values $\frac{1}{2},1,\frac{3}{2},\ldots,\frac{N}{2}$ for odd $N$ and $0,1,2,\ldots,\frac{N}{2}$ for even $N$, and $M=-S,-S+1,\ldots,S-1,S$. The symbol “$\alpha$" stands for further quantum numbers that allow for the degeneracy $D_{N}(S)$ of the eigenspaces with common eigenvalues $S(S+1)$ and $M$ of $\mathbf{S}^{2}$ and $S^{z}$, resp., such that the normalized eigenvectors $\left|\alpha;S,M\right\rangle$ will be unique up to a phase. For a selection of such degeneracies see Figure 1. Figure 1: “Half Galton Board": The degeneracies $D_{N}(S)$ of states $\left|S,M\right\rangle$ generated by coupling $N$ spins with $s=1/2$ can be obtained by the (red) number of coupling paths that start at $(N=0,S=0)$ and terminate at $(N,S)$.
We will consider a single spin with $s=1/2$ and Hilbert space ${\mathcal{H}}\cong{\mathbbm{C}}^{2}$ representing one qubit and try to realize the erasure of the qubit by coupling the single spin to a “heat bath" consisting of $N=6$ uniformly coupled spins such that the total Hamiltonian $H$ will be of the form $H_{7}$. Moreover, we choose $J=B=1$, thereby fixing a natural energy unit and corresponding physical units of time and temperature by setting $\hbar=k_{B}=1$. The choice of the “heat bath" with $N=6$ spins has the pleasant consequence that all relevant quantities can be directly calculated by computer-algebraic means without resorting to the theory of coupling angular momenta. The “heat bath" with Hilbert space ${\mathcal{K}}\cong{\mathbbm{C}}^{64}$ has a ground state with energy $E_{6}(0,0)=E_{6}(1,-1)=-\frac{9}{4}$ that is $14$-fold degenerate. This follows from the degeneracies $D_{6}(S=0)=5$ and $D_{6}(S=1)=9$, see Figure 1. Let $Q_{0}$ denote the projector onto the corresponding eigenspace and $Q_{1}$ the complementary projector such that $Q_{0}+Q_{1}={\mathbbm{1}}_{\mathcal{K}}$. We will assume that initially the “heat bath" is in its ground state $\sigma:={\textstyle\frac{1}{14}}Q_{0}$ corresponding to the temperature $T=0$ whereas the single spin is in an arbitrary mixed state $\rho$. Then a unitary time evolution $U_{t}:=\exp\left(-{\sf i}\,t\,H\right)$ takes place followed by a Lüders measurement of the sharp heat bath observable $(Q_{0},Q_{1})$.
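The quoted ground-state data of the “heat bath" can be reproduced by brute-force diagonalization, independently of the computer-algebraic calculation mentioned above. The following sketch builds $H_{6}$ from (42) with $J=B=1$ and confirms the ground energy $-\frac{9}{4}$ with $14$-fold degeneracy:

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, mu, N):
    """Single-site operator op acting on site mu of an N-spin system."""
    mats = [np.eye(2)] * N
    mats[mu] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_zeeman(N, J=1.0, B=1.0):
    """H_N of Eq. (42): uniform Heisenberg coupling plus Zeeman term."""
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for mu in range(N):
        for nu in range(mu + 1, N):
            for op in (sx, sy, sz):
                H += J * site_op(op, mu, N) @ site_op(op, nu, N)
        H += B * site_op(sz, mu, N)
    return H

ev = np.linalg.eigvalsh(heisenberg_zeeman(6))
print(round(ev[0], 6))                          # -2.25, i.e. E_6(0,0) = E_6(1,-1) = -9/4
print(int((np.abs(ev - ev[0]) < 1e-8).sum()))   # 14  (= D_6(0) + D_6(1) = 5 + 9)
```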
After this measurement we consider the two reduced states $\rho_{1}=\mbox{Tr}_{\mathcal{K}}\left(\sum_{n=0,1}\left({\mathbbm{1}}\otimes Q_{n}\right)U_{t}\left(\rho\otimes\sigma\right)U_{t}^{\ast}\left({\mathbbm{1}}\otimes Q_{n}\right)\right)\;,$ (47) and $\rho_{2}=\mbox{Tr}_{\mathcal{H}}\left(\sum_{n=0,1}\left({\mathbbm{1}}\otimes Q_{n}\right)U_{t}\left(\rho\otimes\sigma\right)U_{t}^{\ast}\left({\mathbbm{1}}\otimes Q_{n}\right)\right)\;.$ (48) Obviously, $\rho_{1}$ is the result of the total operation $\rho_{1}=\mathfrak{I}({\mathcal{N}})(\rho)$ corresponding to the instrument $\mathfrak{I}(n)(\rho):=\mbox{Tr}_{\mathcal{K}}\left(\left({\mathbbm{1}}\otimes Q_{n}\right)U_{t}\left(\rho\otimes\sigma\right)U_{t}^{\ast}\left({\mathbbm{1}}\otimes Q_{n}\right)\right)\;,$ (49) where $n\in{\mathcal{N}}=\{0,1\}$. It turns out that, for the special model we have considered, the matrix elements of $\rho_{n}$ are $4\pi$-periodic functions of $t$. Instead of entering into a debate on how to cope with these oscillating terms we simply make the choice $t=2\pi$, i.e., we consider the time evolution of a half period before performing the final measurement. This choice gives reasonable results which suffice for constructing an example of the general theory outlined in this paper. Now all parameters of our model for imperfect erasure are fixed and we proceed by presenting the relevant results without explicating the further details of the computer-algebraic calculation. ### V.2 Results on the instrument $\mathfrak{I}$ The first results concern the calculation and visualization of the total trace-preserving operation $\rho\mapsto\rho_{1}=\mathfrak{I}({\mathcal{N}})(\rho)$.
Recall that $\rho\in{\mathcal{B}}({\mathcal{H}})$ where the latter is a $4$-dimensional space spanned by the four Pauli matrices $\sigma_{0}=\left(\begin{array}[]{cc}1&0\\\ 0&1\end{array}\right),\;\sigma_{1}=\left(\begin{array}[]{cc}0&1\\\ 1&0\end{array}\right),\;\sigma_{2}=\left(\begin{array}[]{cc}0&-{\sf i}\\\ {\sf i}&0\end{array}\right),\;\sigma_{3}=\left(\begin{array}[]{cc}1&0\\\ 0&-1\end{array}\right)\;.$ (50) They are mutually orthogonal with respect to the Euclidean scalar product $(A,B)\mapsto\mbox{Tr}\left(AB\right)$ of Hermitian $2\times 2$ matrices and have length $\sqrt{2}$. With respect to this basis the total operation $\mathfrak{I}({\mathcal{N}})$ can be represented by the $4\times 4$ matrix ${\mathbf{I}}={\textstyle\frac{1}{7}}\left(\begin{array}[]{cccc}7&0&0&0\\\ 0&1&0&0\\\ 0&0&1&0\\\ -4&0&0&3\\\ \end{array}\right)\;.$ (51) Note that $\mathfrak{I}({\mathcal{N}})$ being trace preserving is equivalent to the property $\mathfrak{I}^{\ast}({\mathcal{N}})(\sigma_{0})=\sigma_{0}$ of the dual instrument $\mathfrak{I}^{\ast}$. The latter is represented by the transposed matrix ${\mathbf{I}}^{\top}$ and hence the first row of ${\mathbf{I}}$, which corresponds to the first column of ${\mathbf{I}}^{\top}$, must necessarily be of the form $(1,0,0,0)$. The density matrices $\rho$ in the Hilbert space ${\mathcal{H}}\cong{\mathbbm{C}}^{2}$ can be represented by the points $(x_{1},x_{2},x_{3})^{\top}$ of the unit ball in ${\mathbbm{R}}^{3}$ such that the pure states corresponding to one-dimensional projectors form its surface ${\mathcal{S}}^{2}$, the so-called “Bloch sphere”. This representation is given by $\rho={\textstyle\frac{1}{2}}\left(\sigma_{0}+\sum_{i=1}^{3}x_{i}\,\sigma_{i}\right)\;.$ (52) Figure 2: Visualization of the total operation $\mathfrak{I}({\mathcal{N}})$ of imperfect erasure of a qubit as the affine mapping of the Bloch sphere ${\mathcal{S}}^{2}$ onto an ellipsoid ${\mathcal{E}}$ that touches ${\mathcal{S}}^{2}$ at the south pole.
Under the total operation $\mathfrak{I}({\mathcal{N}})$, which is an affine map on states, the Bloch sphere is mapped onto an ellipsoid ${\mathcal{E}}$ lying inside ${\mathcal{S}}^{2}$, see Figure 2. Due to $\det({\mathbf{I}})={\textstyle\frac{1}{7}}\times{\textstyle\frac{1}{7}}\times{\textstyle\frac{3}{7}}={\textstyle\frac{3}{343}}$ the volume of ${\mathcal{S}}^{2}$ is compressed to less than one percent of its original value. This volume compression is typical of conditional action. The invariance of ${\mathcal{E}}$ under rotations about the $3$-axis is due to the azimuthal symmetry of the Hamiltonian (42) and of the initial state of the “heat bath”. Some properties of the mapping $\mathfrak{I}({\mathcal{N}})$ can be read off the matrix (51): The south pole of ${\mathcal{S}}^{2}$ is mapped onto itself, and this is the only point where ${\mathcal{E}}$ touches the Bloch sphere. Since the south pole corresponds to the state $\rho={\textstyle\frac{1}{2}}\left(\sigma_{0}-\sigma_{3}\right)$, its invariance under $\mathfrak{I}({\mathcal{N}})$ is reflected by the equation $\mathbf{I}(1,0,0,-1)^{\top}=(1,0,0,-1)^{\top}$. Physically, the south pole represents the ground state of the qubit corresponding to the Gibbs state with temperature $T=0$. Its invariance under $\mathfrak{I}({\mathcal{N}})$ hence means that the qubit remains in its ground state if it is coupled to a “heat bath” of temperature $T=0$, which is very plausible. The orientation relative to the coordinate system and the semi-axes $({\textstyle\frac{1}{7}},{\textstyle\frac{1}{7}},{\textstyle\frac{3}{7}})$ of the ellipsoid ${\mathcal{E}}$ can be read off the lower right $3\times 3$-submatrix of ${\mathbf{I}}$, see (51).
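The affine action of the matrix (51) on Bloch vectors is easy to check directly. The following minimal sketch (assuming NumPy) verifies the fixed south pole, the images of the north pole and of the maximally mixed state, and the volume compression factor.

```python
import numpy as np

# Matrix representation (51) of the total operation in the Pauli basis.
I_mat = np.array([[7, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [-4, 0, 0, 3]]) / 7.0

def image(x):
    """Image of the Bloch vector x = (x1, x2, x3) under I({N})."""
    return (I_mat @ np.array([1.0, *x]))[1:]

south = image([0, 0, -1])   # ground state of the qubit
north = image([0, 0, 1])    # excited state
center = image([0, 0, 0])   # maximally mixed state

assert np.allclose(south, [0, 0, -1])      # south pole is a fixed point
assert np.allclose(north, [0, 0, -1/7])    # north pole lands at x3 = -1/7
assert np.allclose(center, [0, 0, -4/7])   # center of the ellipsoid E
# Volume compression: det of the Bloch-vector part is (1/7)(1/7)(3/7) = 3/343.
assert np.isclose(np.linalg.det(I_mat[1:, 1:]), 3/343)
```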
The center of ${\mathcal{E}}$ lies at $x_{3}=-{\textstyle\frac{4}{7}}$ corresponding to the state $\rho_{1}^{\prime}={\textstyle\frac{1}{2}}\left(\sigma_{0}-{\textstyle\frac{4}{7}}\sigma_{3}\right)={\textstyle\frac{1}{14}}\left(\begin{array}[]{cc}3&0\\\ 0&11\\\ \end{array}\right)\;,$ (53) and the north pole of ${\mathcal{S}}^{2}$ is mapped onto $\rho_{1}^{\prime\prime}={\textstyle\frac{1}{2}}\left(\sigma_{0}-{\textstyle\frac{1}{7}}\sigma_{3}\right)={\textstyle\frac{1}{7}}\left(\begin{array}[]{cc}3&0\\\ 0&4\\\ \end{array}\right)\;.$ (54) Under a perfect erasure of a qubit, the Bloch sphere would be mapped entirely onto the south pole; a more realistic scenario corresponds to a mapping onto a small ellipsoid close to the south pole. The present example may not yield the best possible result; however, its virtue lies in the fact that ${\mathcal{E}}$ can be calculated analytically and is rather simple in form. After having analyzed the total operation $\mathfrak{I}({\mathcal{N}})$ we proceed by considering the two components $\mathfrak{I}(n),\;n=0,1,$ of the instrument $\mathfrak{I}$. We will determine the corresponding Kraus operators $A_{nmj}$ such that $\mathfrak{I}(n)(\rho)=\sum_{mj}A_{nmj}\,\rho\,A_{nmj}^{\ast},\quad n\in{\mathcal{N}}\;.$ (55) The index $i$ occurring in (19) has been replaced here by a multi-index $i=(m,j)$. According to the general theory the Kraus operators $A_{nmj}$ can be derived from the measurement dilation $\mathfrak{I}={\mathfrak{D}}_{{\mathcal{K}},\sigma,V,Q}$ by means of the equation $\left\langle a\right|A_{nmj}\left|b\right\rangle=\sqrt{q_{j}}\left\langle a\otimes\phi_{m}\right|V\left|b\otimes\psi_{j}\right\rangle,\quad a,b\in{\mathcal{H}},\;m\in{\mathcal{M}}_{n}\;,$ (56) see NC00 , 8.35.
Here we have used the spectral decomposition of the initial state $\sigma$ of the auxiliary system $\sigma=\sum_{j}q_{j}\,\left|\psi_{j}\right\rangle\left\langle\psi_{j}\right|\;,$ (57) and that of the projector $Q_{n}$ $Q_{n}=\sum_{m\in{\mathcal{M}}_{n}}\left|\phi_{m}\right\rangle\left\langle\phi_{m}\right|\;.$ (58) The latter is defined w. r. t. a suitable partition ${\mathcal{M}}=\biguplus_{n}{\mathcal{M}}_{n}$ of the index set ${\mathcal{M}}$ corresponding to an orthonormal basis $\left(\phi_{m}\right)_{m\in{\mathcal{M}}}$ of ${\mathcal{K}}$ adapted to the sharp observable $\left(Q_{n}\right)_{n\in{\mathcal{N}}}$. Figure 3: Plot of the final entropy $S_{1}$ versus the initial one $S_{0}$ for an imperfect erasure of a qubit. The blue dots correspond to $10,000$ randomly chosen initial states $\rho$ of the single spin. The enveloping green curve is analogously calculated for the one-dimensional family $\rho(p)$ according to (62,63) and reaches its maximum of $S_{0}=\log 2$ for $p={\textstyle\frac{1}{2}}$ (dashed green line). The corresponding value $S_{1/2}$ of $S_{1}$ is given by (64). Only for points below the (red) line $S_{0}=S_{1}$ does a decrease of entropy occur due to the conditional action. For the green curve this will happen if $0<p<p_{1}$, where $p_{1}$ is given by (65). The dashed blue curve represents $S_{0}-H(\rho(p),F)$ according to (27). Therefore, the bound (28) holds although the conditions of Theorem 1 or Corollary 1 are not satisfied. For the present example it appears at first sight that we would need $14\times 64=896$ Kraus operators. Fortunately, only $94$ Kraus operators do not vanish.
Also, they can be combined and simplified so that only the following three operators remain: $A_{1}=\left(\begin{array}[]{cc}\sqrt{\textstyle\frac{5}{14}}&0\\\ 0&\sqrt{\textstyle\frac{5}{14}}\\\ \end{array}\right),\quad A_{2}=\left(\begin{array}[]{cc}-{\textstyle\frac{1}{\sqrt{14}}}&0\\\ 0&{\textstyle\frac{3}{\sqrt{14}}}\\\ \end{array}\right),\quad A_{3}=\left(\begin{array}[]{cc}0&0\\\ {\textstyle\frac{2}{\sqrt{7}}}&0\\\ \end{array}\right)\;.$ (59) Here the first two operators $A_{1}$ and $A_{2}$ belong to $\mathfrak{I}(0)$ and $A_{3}$ to $\mathfrak{I}(1)$. Hence the observable $F=(F_{0},F_{1})$ given by the instrument $\mathfrak{I}$ is obtained as $F_{0}=A_{1}^{\ast}\,A_{1}+A_{2}^{\ast}\,A_{2}=\left(\begin{array}[]{cc}{\textstyle\frac{3}{7}}&0\\\ 0&1\\\ \end{array}\right)\quad\mbox{and}\quad F_{1}=A_{3}^{\ast}\,A_{3}=\left(\begin{array}[]{cc}{\textstyle\frac{4}{7}}&0\\\ 0&0\\\ \end{array}\right)\;,$ (60) satisfying $F_{0}+F_{1}={\mathbbm{1}}$, as required for $F$ to be an observable. The fact that $F$ is not a sharp observable means that, despite energy conservation, there is no perfect correlation between the energy of the individual spin and that of the “heat bath”. The latter would be expected only on the basis of time-dependent perturbation theory (Fermi’s “Golden Rule”) and does not hold for a finite interaction between the spin and the “heat bath”. ### V.3 Results on the entropy balance Figure 4: Plot of the initial total entropy $S_{0}+S_{i}$ (red curve) and the final entropy $S_{1}+S_{2}$ (blue curve) calculated for the one-parameter family $\rho(p)$ according to (61) such that $S_{0}=S(\rho(p))$. Obviously, $S_{0}+S_{i}<S_{1}+S_{2}$ for $0<p\leq 1$. $S_{i}=\log 14$ denotes the initial entropy of the “heat bath” and hence $S_{0}+S_{i}$ assumes its maximum $\log 2+\log 14=\log 28\approx 3.3322$ at $p=1/2$. At $p=1$ the final entropy $S_{1}+S_{2}$ assumes the value $S_{f}\approx 3.54621$ according to (66). We now turn to the entropy balance.
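As a numerical consistency check of the Kraus representation (59) and the effects (60), the following sketch (assuming NumPy) verifies completeness, reproduces $F_0$ and $F_1$, and confirms that the channel $\rho\mapsto\sum_i A_i\rho A_i^{\ast}$ reproduces the Pauli-basis matrix (51).

```python
import numpy as np

s14, s7 = np.sqrt(14), np.sqrt(7)
A1 = np.sqrt(5/14) * np.eye(2)             # belongs to I(0)
A2 = np.array([[-1/s14, 0], [0, 3/s14]])   # belongs to I(0)
A3 = np.array([[0, 0], [2/s7, 0]])         # belongs to I(1)
kraus = [A1, A2, A3]

# Completeness: sum_i A_i* A_i = 1 (trace preservation).
assert np.allclose(sum(A.conj().T @ A for A in kraus), np.eye(2))

# Effects (60) of the observable F = (F_0, F_1).
F0 = A1.conj().T @ A1 + A2.conj().T @ A2
F1 = A3.conj().T @ A3
assert np.allclose(F0, np.diag([3/7, 1.0]))
assert np.allclose(F1, np.diag([4/7, 0.0]))

# The channel reproduces the Pauli-basis matrix (51); the matrix element is
# I_ab = (1/2) Tr( sigma_a * E(sigma_b) ) since Tr(sigma_a sigma_b) = 2 delta_ab.
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]
I_mat = np.array([[np.trace(s_a @ sum(A @ s_b @ A.conj().T for A in kraus)).real / 2
                   for s_b in paulis] for s_a in paulis])
expected = np.array([[7, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [-4, 0, 0, 3]]) / 7
assert np.allclose(I_mat, expected)
```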
First we plot $S_{1}:=S(\rho_{1})$ versus $S_{0}:=S(\rho)$, see Figure 3. For the one-parameter family of states $\rho(p):=\left(\begin{array}[]{cc}p&0\\\ 0&1-p\end{array}\right)\;,$ (61) where $0\leq p\leq 1$, we obtain a curve with parametric representation $\displaystyle S_{0}(\rho(p))$ $\displaystyle=$ $\displaystyle-p\log(p)-(1-p)\log(1-p),$ (62) $\displaystyle S_{1}(\rho(p))$ $\displaystyle=$ $\displaystyle{\textstyle\frac{1}{7}}\left((3p-7)\log\left(1-{\textstyle\frac{3p}{7}}\right)-3p\log\left({\textstyle\frac{3p}{7}}\right)\right)\;,$ (63) see the green curve in Figure 3. The value $p=1/2$ corresponds to the maximum $\log 2$ of $S_{0}$, and the corresponding value $S_{1/2}={\textstyle\frac{1}{7}}\left({\textstyle\frac{11}{2}}\log\left({\textstyle\frac{14}{11}}\right)+{\textstyle\frac{3}{2}}\log\left({\textstyle\frac{14}{3}}\right)\right)$ (64) of $S_{1}$ is the entropy of the state at the center of the ellipsoid ${\mathcal{E}}$. This curve is the envelope of the set of all points $\left(S_{0}(\rho),S_{1}(\rho)\right)$, as can be seen as follows. The surfaces with constant entropy (“adiabatic surfaces”) are the concentric spheres ${\mathcal{S}}$ inside the Bloch sphere (together with the center considered as a degenerate sphere). The set of states $\rho$ corresponding to such a concentric sphere ${\mathcal{S}}$ is mapped under $\mathfrak{I}({\mathcal{N}})$ onto an ellipsoid ${\mathcal{E}}^{\prime}\subset{\mathcal{E}}$ that is also invariant under rotations about the $3$-axis. The north pole $N$ of ${\mathcal{S}}$ corresponding to the state $\rho(p)$ is mapped onto the north pole $N^{\prime}$ of ${\mathcal{E}}^{\prime}$. Similarly, the south pole $S$ of ${\mathcal{S}}$ corresponding to the state $\rho(1-p)$ is mapped onto the south pole $S^{\prime}$ of ${\mathcal{E}}^{\prime}$.
The total ellipsoid ${\mathcal{E}}^{\prime}$ is bounded by the two concentric spheres through $N^{\prime}$ and $S^{\prime}$, and hence the entropy of all states corresponding to ${\mathcal{E}}^{\prime}$ is bounded by $S_{1}(\rho(p))$ and $S_{1}(\rho(1-p))$. Decrease of entropy, i. e., $S(\rho_{1})<S(\rho)$, will not always occur. For example, the north pole of ${\mathcal{S}}^{2}$ corresponds to a pure state of vanishing entropy and is mapped onto a mixed state with positive entropy. For the states $\rho(p)$ decrease of entropy is equivalent to $0<p<p_{1}:={\textstyle\frac{7}{10}}\;,$ (65) where $p_{1}$ is the solution of $3p_{1}/7=1-p_{1}$, see Figure 3. For the value $p=p_{1}$ the initial state $\rho(p_{1})$ is mapped under $\mathfrak{I}({\mathcal{N}})$ onto $\rho(1-p_{1})$, which has the same entropy. Recall that within the family $\rho(p),\;0\leq p\leq p_{1}$, only states with $0<p<1/2$ have a positive temperature, and hence for these an entropy decrease is guaranteed. Although the instrument $\mathfrak{I}$ does not satisfy the conditions of Theorem 1 or Corollary 1, its entropy decrease satisfies the same bound given in (28), see Figure 3. According to the considerations of Section II and Proposition 1 it is clear that a possible decrease of entropy will be compensated by an equal or larger increase of entropy of the auxiliary system, i. e., of the “heat bath”. Nevertheless, it is instructive to check this result for the considered example, see Figure 4. We have plotted the initial total entropy $S_{0}+S_{i}$ (red curve) and the final entropy $S_{1}+S_{2}$ (blue curve) calculated for the one-parameter family $\rho(p)$ according to (61) such that $S_{0}=S(\rho(p))$. Obviously, $S_{0}+S_{i}<S_{1}+S_{2}$ for $0<p\leq 1$, in accordance with the second law.
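The parametric entropy curve (62)–(63) is easy to evaluate numerically. The following sketch (assuming NumPy) reproduces the value (64) at $p=1/2$ and illustrates that the entropy decreases for small $p$ but increases near $p=1$.

```python
import numpy as np

def H2(x):
    """Binary entropy -x log x - (1-x) log(1-x), in nats."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x*np.log(x) - (1 - x)*np.log(1 - x)

# Parametric curve (62)-(63): rho(p) is mapped onto rho_1 = diag(3p/7, 1-3p/7),
# so S_0(p) = H2(p) and S_1(p) = H2(3p/7).
S0 = lambda p: H2(p)
S1 = lambda p: H2(3*p/7)

# Maximum of S_0 at p = 1/2, with the value (64) for S_1.
assert np.isclose(S0(0.5), np.log(2))
S_half = (5.5*np.log(14/11) + 1.5*np.log(14/3)) / 7
assert np.isclose(S1(0.5), S_half)

# Entropy decreases for small p but increases near p = 1.
assert S1(0.3) < S0(0.3)
assert S1(0.99) > S0(0.99)
```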
$S_{i}=\log 14$ denotes the initial entropy of the “heat bath” due to the $14$-fold degeneracy of the ground state of $H_{6}$. At $p=1$ the final entropy $S_{1}+S_{2}$ assumes the value $S_{f}={\textstyle\frac{4}{7}}\log\left({\textstyle\frac{7}{4}}\right)+{\textstyle\frac{3}{7}}\log\left({\textstyle\frac{7}{3}}\right)+{\textstyle\frac{5\log(14)}{14}}+{\textstyle\frac{4}{7}}\ \log\left({\textstyle\frac{63}{4}}\right)+{\textstyle\frac{\log(126)}{14}}\approx 3.54621\;,$ (66) see Figure 4. ## VI Summary and Outlook In this paper we have elaborated a recent proposal S20 to describe the “intervention of intelligent beings” in quantum systems in terms of “conditional action”. This is a genuinely physical concept. Mathematically, the notion of general “instruments”, originally intended to explain state changes due to measurements, is already broad enough to include conditional action. A fundamental assumption here is that it is not necessary to describe the inner life of “intelligent beings” in more detail; it is sufficient to analyze the workings of apparatuses built to realize measurements and conditional actions. Ideally, such an analysis includes the original measurement on the system $\Sigma$, the storage of the measurement result in a classical memory, and the subsequent unitary time evolution of $\Sigma$ conditioned by the memory contents. But the construction of such a complete model of a conditional action would, in my opinion, require a solution of the quantum measurement problem and hence is impossible at present. We must therefore confine ourselves to considering physical realizations of conditional actions restricted to so-called “measurement dilations”. These are well-known mathematical constructions BLPY16 that reduce general instruments acting on $\Sigma$ to special Lüders instruments acting on a larger system $\Sigma+E$.
These tools also open the way to understanding the (possible) entropy decrease in $\Sigma$ due to the conditional action as an entropy flow from $\Sigma$ to the auxiliary system $E$, in the same way as the (possible) entropy decrease due to a general measurement can be explained. The latter explanation can also be related to existing approaches to resolving the apparent contradiction of said entropy decrease with a tentative second law of quantum thermodynamics. Among such approaches are the Szilard principle, the Landauer-Bennett principle, and the recent OLR approach. According to the Szilard principle, the entropy decrease in $\Sigma$ is at least compensated by the entropy production associated with the measurement of the system’s state. This principle has been confirmed by the present conditional action approach in the special case where the auxiliary system $E$ can be conceived as a memory device, see Theorem 1, but not in general. A partial compatibility with the OLR approach has also been shown, insofar as, in special cases, the entropy decrease in $\Sigma$ is bounded by the loss of mutual information due to conditional action, see Proposition 3. Similarly, the approach based on the entropy costs of memory erasure (Landauer-Bennett principle) is compatible with our approach, but cannot be viewed as the ultimate solution of the apparent paradox. We have analyzed the imperfect erasure of a qubit by means of a physical model. This model describes the cooling of a single spin by coupling it to a “cold bath” consisting of six other spins such that the total time evolution can be analytically calculated. This model thus represents a more or less realistic measurement dilation of imperfect erasure conceived as a conditional action, and as such motivates the slight generalization of this concept compared to S20 .
At the same time this example reveals some problems of the mentioned principles based on acquisition or deletion of information, since imperfect erasure of a qubit can be achieved without any measurement at all. This is even more plausible if one considers the physical interpretation of the erasure as a cooling of a single spin. The OLR approach appears to avoid this problem because it relies on an information concept that is independent of possible measurements, see A08 for a corresponding treatment of imperfect memory erasure. For future investigations it seems to be a desirable goal to extend the conditional action approach to the field of classical physics. First steps toward this goal restricted to discrete state spaces have been made in S20 . The role of measurement is different in classical theories because, unlike in quantum theory, there are always idealized measurements that do not change the state of the system. However, there exist non-trivial instruments describing conditional action even in classical theories, and it should be possible to realize them by extending the system analogously to the quantum case. ## Appendix A Construction of the standard measurement dilation for a general instrument Let an instrument of the form (19) be given, i. e., $\mathfrak{J}(n)(\rho)=\sum_{i\in{\mathcal{I}}_{n}}A_{ni}\,\rho A_{ni}^{\ast},\quad n\in{\mathcal{N}}\;,$ (67) such that the corresponding observable $F=\left(F_{n}\right)_{n\in{\mathcal{N}}}$ is given by $F_{n}=\sum_{i\in{\mathcal{I}}_{n}}A_{ni}^{\ast}A_{ni},\quad\mbox{for all }n\in{\mathcal{N}}\;,$ (68) satisfying $\sum_{n\in{\mathcal{N}}}F_{n}={\mathbbm{1}}_{\mathcal{H}}$. Following NC00 we want to explicitly construct a measurement dilation of $\mathfrak{J}$ of the form (15), see also the analogous construction for a Maxwell instrument in S20 . 
To this end we define ${\mathcal{N}}^{\prime}:=\\{(n,i)\left|\right.n\in{\mathcal{N}}\mbox{ and }i\in{\mathcal{I}}_{n}\\}$ and choose ${\mathcal{K}}={\mathbbm{C}}^{{\mathcal{N}}^{\prime}}$ and an orthonormal basis $\left(|ni\rangle\right)_{n\in{\mathcal{N}},\,i\in{\mathcal{I}}_{n}}$ in ${\mathcal{K}}$. Let $\phi\in{\mathcal{K}}$ be one of these basis vectors, say, $\phi=|11\rangle$. Further, let $\left(Q_{n}\right)_{n\in{\mathcal{N}}}$ denote the complete family of projectors in the Hilbert space ${\mathcal{K}}$ defined by $Q_{n}=\sum_{i\in{\mathcal{I}}_{n}}|ni\rangle\langle ni|\;,\quad\mbox{for all }n\in{\mathcal{N}}\;.$ (69) Moreover, let $\check{Q}_{ni}$ be the subspace of ${\mathcal{H}}\otimes{\mathcal{K}}$ formed by vectors of the form $\psi\otimes|ni\rangle$ for all $\psi\in{\mathcal{H}}$ and fixed $n\in{\mathcal{N}},\,i\in{\mathcal{I}}_{n}$ and define a linear map $V_{11}:\check{Q}_{11}\rightarrow{\mathcal{H}}\otimes{\mathcal{K}}$ by $V_{11}\left|\psi 11\right\rangle:=V_{11}\left(\psi\otimes|11\rangle\right):=\sum_{n\in{\mathcal{N}},i\in{\mathcal{I}}_{n}}A_{ni}\psi\otimes|ni\rangle\;,$ (70) for all $\psi\in{\mathcal{H}}$. ###### Lemma 1 The map $V_{11}:\check{Q}_{11}\rightarrow{\mathcal{H}}\otimes{\mathcal{K}}$ is a partial isometry, i. e., satisfies $V_{11}^{\ast}\,V_{11}={\mathbbm{1}}_{\check{Q}_{11}}$.
Proof: Let $\varphi,\,\psi$ be two arbitrary vectors of ${\mathcal{H}}$ and consider the scalar products $\displaystyle\left\langle\varphi 11\right|V_{11}^{\ast}V_{11}\left|\psi 11\right\rangle$ $\displaystyle\stackrel{{\scriptstyle(\ref{E2})}}{{=}}$ $\displaystyle\sum_{nimj}\left\langle\varphi\right|A_{ni}^{\ast}\,A_{mj}\left|\psi\right\rangle\underbrace{\left\langle ni\right|\left.mj\right\rangle}_{\delta_{nm}\delta_{ij}}$ (71) $\displaystyle=$ $\displaystyle\sum_{ni}\left\langle\varphi\right|A_{ni}^{\ast}\,A_{ni}\left|\psi\right\rangle$ (72) $\displaystyle\stackrel{{\scriptstyle(\ref{Eobs})}}{{=}}$ $\displaystyle\left\langle\varphi\right|\underbrace{\sum_{n}F_{n}}_{{\mathbbm{1}}_{\mathcal{H}}}\left|\psi\right\rangle=\left\langle\varphi 11\right|\left.\psi 11\right\rangle\;,$ (73) which completes the proof of Lemma 1. $\Box$ Next we extend the partial isometry $V_{11}$ to a unitary operator $V:{\mathcal{H}}\otimes{\mathcal{K}}\rightarrow{\mathcal{H}}\otimes{\mathcal{K}}$. This completes the definition of the quantities ${\mathcal{K}},\phi,V,Q$ required for the measurement dilation. It remains to show that $\mathfrak{J}={\mathfrak{D}}_{{\mathcal{K}},\phi,V,Q}$. 
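Lemma 1 can also be checked numerically for a concrete instrument. The following sketch (assuming NumPy) builds the map (70) from the Kraus operators (59) of the erasure example of Sec. V, represented as a $(d_{\mathcal H}d_{\mathcal K})\times d_{\mathcal H}$ matrix on the subspace ${\mathcal H}\otimes|\phi\rangle$, and verifies the partial-isometry property.

```python
import numpy as np

# Kraus operators (59) of the erasure instrument of Sec. V.
s14, s7 = np.sqrt(14), np.sqrt(7)
A = [np.sqrt(5/14) * np.eye(2),
     np.array([[-1/s14, 0], [0, 3/s14]]),
     np.array([[0, 0], [2/s7, 0]])]
dH, dK = 2, len(A)

# Eq. (70): V_11 (psi (x) phi) = sum_i A_i psi (x) |i>.  On the subspace
# H (x) |phi> this acts as the matrix W = sum_i kron(A_i, |i>).
W = np.zeros((dH*dK, dH))
for i, Ai in enumerate(A):
    e = np.zeros((dK, 1)); e[i, 0] = 1.0
    W += np.kron(Ai, e)

# Lemma 1: W* W = sum_i A_i* A_i = 1_H, i.e. V_11 is a partial isometry.
assert np.allclose(W.T @ W, np.eye(dH))
```

Any unitary extension of this partial isometry to all of ${\mathcal H}\otimes{\mathcal K}$ then serves as the dilation unitary $V$, as in the construction above.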
To this end we introduce an orthonormal basis $\left(\left|\ell\right\rangle\right)_{\ell=1,\ldots,d}$ in ${\mathcal{H}}$ and write $\rho=\sum_{k\ell}|k\rangle\langle k|\rho|\ell\rangle\langle\ell|\;.$ (74) Hence $\rho\otimes P_{\phi}=\sum_{k\ell}|k11\rangle\langle k|\rho|\ell\rangle\langle\ell 11|\;,$ (75) and, further, $\displaystyle V\left(\rho\otimes P_{\phi}\right)V^{\ast}$ $\displaystyle\stackrel{{\scriptstyle(\ref{E5})}}{{=}}$ $\displaystyle\sum_{k\ell}V\,|k11\rangle\langle k|\rho|\ell\rangle\langle\ell 11|\,V^{\ast}$ $\displaystyle\stackrel{{\scriptstyle(\ref{E2})}}{{=}}$ $\displaystyle\sum_{k\ell nimj}A_{ni}|k\rangle\langle k|\rho|\ell\rangle\langle\ell|A_{mj}^{\ast}\otimes|ni\rangle\langle mj|$ $\displaystyle\stackrel{{\scriptstyle(\ref{E4})}}{{=}}$ $\displaystyle\sum_{nimj}A_{ni}\,\rho\,A_{mj}^{\ast}\otimes|ni\rangle\langle mj|\;.$ (77) Using $Q_{r}|ni\rangle\langle mj|Q_{r}=\delta_{rn}\,\delta_{rm}\,|ri\rangle\langle rj|\;,$ (78) for all $r\in{\mathcal{N}}$, (77) implies $\displaystyle\left({\mathbbm{1}}\otimes Q_{r}\right)V\left(\rho\otimes P_{\phi}\right)V^{\ast}\left({\mathbbm{1}}\otimes Q_{r}\right)$ $\displaystyle=$ $\displaystyle\sum_{ij}\left(A_{ri}\,\rho\,A_{rj}^{\ast}\right)\otimes|ri\rangle\langle rj|\;,$ (79) and $\displaystyle{\mathcal{D}}_{{\mathcal{K}},\phi,V,Q}(r)(\rho)$ $\displaystyle=$ $\displaystyle\mbox{Tr}_{\mathcal{K}}\left(\left({\mathbbm{1}}\otimes Q_{r}\right)V\left(\rho\otimes P_{\phi}\right)V^{\ast}\left({\mathbbm{1}}\otimes Q_{r}\right)\right)$ (80) $\displaystyle\stackrel{{\scriptstyle(\ref{E8})}}{{=}}$ $\displaystyle\sum_{ij}\left(A_{ri}\,\rho\,A_{rj}^{\ast}\right)\mbox{Tr}_{\mathcal{K}}\left(|ri\rangle\langle rj|\right)$ (81) $\displaystyle=$ $\displaystyle\sum_{i}A_{ri}\,\rho\,A_{ri}^{\ast}\;,$ (82) since $\mbox{Tr}_{\mathcal{K}}\left(|ri\rangle\langle rj|\right)=\delta_{ij}$ for all $r\in{\mathcal{N}}$. 
The latter expression equals $\mathfrak{J}(r)(\rho)\stackrel{{\scriptstyle(\ref{E1})}}{{=}}\sum_{i\in{\mathcal{I}}_{r}}A_{ri}\,\rho\,A_{ri}^{\ast}\;,$ thereby proving that the above construction is a correct measurement dilation of ${\mathfrak{J}}$. ## Appendix B Proofs ### B.1 Proof of Proposition 2 We define $\rho_{1}:=\mathfrak{I}({\mathcal{N}})(\rho)=\sum_{n\in{\mathcal{N}}}\mbox{Tr}_{\mathcal{K}}\left(\left({\mathbbm{1}}\otimes Q_{n}\right)\rho^{\prime}\left({\mathbbm{1}}\otimes Q_{n}\right)\right)\;,$ (83) where $\rho^{\prime}:=V\left(\rho\otimes\sigma\right)V^{\ast}\;,$ (84) and will prove Proposition 2 by showing that $\rho_{1}=\mbox{Tr}_{\mathcal{K}}\left(\rho^{\prime}\right)$. Assume some orthonormal basis $\left(\ldots,|\alpha\rangle,\ldots,|\beta\rangle,\ldots\right)$ in ${\mathcal{H}}$ and another orthonormal basis $\left(|m\rangle\right)_{m\in{\mathcal{M}}}$ in ${\mathcal{K}}$ that is adapted to the environment observable $Q$ in the sense that $Q_{n}=\sum_{m\in{\mathcal{M}}_{n}}|m\rangle\langle m|\;,\quad\mbox{for all }n\in{\mathcal{N}},$ (85) w. r. t. a partition ${\mathcal{M}}=\biguplus_{n\in{\mathcal{N}}}{\mathcal{M}}_{n}$ of the index set ${\mathcal{M}}$. It follows that $\left({\mathbbm{1}}\otimes Q_{n}\right)\left|\beta m\right\rangle=|\beta\rangle\otimes\left\\{\begin{array}[]{r@{\quad: \quad} l}|m\rangle&m\in{\mathcal{M}}_{n},\\\ 0&\mbox{else},\end{array}\right.,$ (86) for all $m\in{\mathcal{M}}$ and any base vector $\left|\beta\right\rangle$ and analogously for $\left\langle\alpha m\right|\left({\mathbbm{1}}\otimes Q_{n}\right)$. 
Hence an arbitrary matrix element of $\rho_{1}$ assumes the form $\displaystyle\left\langle\alpha\right|\rho_{1}\left|\beta\right\rangle$ $\displaystyle\stackrel{{\scriptstyle(\ref{rhodef})}}{{=}}$ $\displaystyle\sum_{n\in{\mathcal{N}}}\sum_{m\in{\mathcal{M}}}\left\langle\alpha m\right|\left({\mathbbm{1}}\otimes Q_{n}\right)\rho^{\prime}\left({\mathbbm{1}}\otimes Q_{n}\right)\left|\beta m\right\rangle$ (87) $\displaystyle\stackrel{{\scriptstyle(\ref{betam})}}{{=}}$ $\displaystyle\sum_{n\in{\mathcal{N}}}\sum_{m\in{\mathcal{M}}_{n}}\left\langle\alpha m\right|\rho^{\prime}\left|\beta m\right\rangle$ (88) $\displaystyle=$ $\displaystyle\sum_{m\in{\mathcal{M}}}\left\langle\alpha m\right|\rho^{\prime}\left|\beta m\right\rangle$ (89) $\displaystyle=$ $\displaystyle\left\langle\alpha\right|\mbox{Tr}_{\mathcal{K}}\left(\rho^{\prime}\right)\left|\beta\right\rangle\;,$ (90) thereby completing the proof of Proposition 2. $\Box$ ### B.2 Proof of Theorem 1 For a pure instrument, the construction of a standard dilation given in Appendix A is simplified by omitting the indices $i,j\in{\mathcal{I}}_{n}$ and the corresponding sums. In particular, we obtain $\rho_{12}:=\sum_{r}\left({\mathbbm{1}}\otimes Q_{r}\right)V\left(\rho\otimes P_{\phi}\right)V^{\ast}\left({\mathbbm{1}}\otimes Q_{r}\right)\stackrel{{\scriptstyle(\ref{E8})}}{{=}}\sum_{r}\left(A_{r}\,\rho\,A_{r}^{\ast}\right)\otimes|r\rangle\langle r|\;.$ (91) It follows that $S(\rho)\leq S(\rho_{12})\;,$ (92) since the von Neumann entropy vanishes for pure states like $P_{\phi}$, is additive for tensor products and invariant under unitary transformations. Moreover, it is non-decreasing under Lüders measurements.
Further we consider the two reduced states of $\rho_{12}$: $\displaystyle\rho_{1}$ $\displaystyle=$ $\displaystyle\mbox{Tr}_{\mathcal{K}}\,\rho_{12}\stackrel{{\scriptstyle(\ref{rho12})}}{{=}}\sum_{r}A_{r}\,\rho\,A_{r}^{\ast},$ (93) $\displaystyle\rho_{2}$ $\displaystyle=$ $\displaystyle\mbox{Tr}_{\mathcal{H}}\,\rho_{12}\stackrel{{\scriptstyle(\ref{rho12})}}{{=}}\sum_{r}\mbox{Tr}\left(A_{r}\,\rho\,A_{r}^{\ast}\right)\,|r\rangle\langle r|\stackrel{{\scriptstyle(\ref{eff1})}}{{=}}\sum_{r}\mbox{Tr}\left(\rho\,F_{r}\right)\,|r\rangle\langle r|=\sum_{r}p_{r}\,|r\rangle\langle r|\;.$ (94) Due to the subadditivity of the von Neumann entropy, see NC00 , 11.3.4, we conclude $S(\rho)\stackrel{{\scriptstyle(\ref{Srho12})}}{{\leq}}S(\rho_{12})\leq S(\rho_{1})+S(\rho_{2})\;,$ (95) see also Proposition 1, and hence $\Delta S=S(\rho)-S(\rho_{1})\leq S(\rho_{2})=H(\rho,F)\;.$ (96) The latter equation follows from the spectral decomposition of $\rho_{2}$ due to (48), which implies $S(\rho_{2})=-\sum_{r}p_{r}\,\log p_{r}\stackrel{{\scriptstyle(\ref{ShannonExp})}}{{=}}H(\rho,F)\;,$ (97) thereby completing the proof of Theorem 1. $\Box$ ### B.3 Proof of Corollary 1 Turning to the proof of Corollary 1, we assume that the instrument $\mathfrak{I}$ can be written as a convex sum of pure instruments, i. e., $\mathfrak{I}(n)(\rho)=\sum_{i\in I}\lambda_{i}\,\mathfrak{I}^{(i)}(n)(\rho)\;,$ (98) for all $n\in{\mathcal{N}}$ and $\rho\in B_{1}^{+}({\mathcal{H}})$, such that $\lambda_{i}>0\quad\mbox{for all }i\in I\quad\mbox{and}\quad\sum_{i\in I}\lambda_{i}=1\;.$ (99) We will apply Theorem 1 for each pure instrument $\mathfrak{I}^{(i)}$ and obtain $\Delta S^{(i)}=S(\rho)-S\left(\rho_{1}^{(i)}\right)\leq H\left(p^{(i)}\right)\;,$ (100) in self-explanatory notation.
In particular, the Shannon entropy $H\left(p^{(i)}\right)$ is calculated for the probability distribution $p_{n}^{(i)}=\mbox{Tr}\left(\mathfrak{I}^{(i)}({n})(\rho)\right)\;,$ (101) satisfying $\sum_{n}p_{n}^{(i)}=1\;.$ (102) Moreover, $\sum_{i}\lambda_{i}\,p_{n}^{(i)}\stackrel{{\scriptstyle(\ref{PR1},\ref{PR6})}}{{=}}\mbox{Tr}\left(\mathfrak{I}(n)(\rho)\right)=p_{n}\;,$ (103) for all $n\in{\mathcal{N}}$. Due to concavity of the Shannon entropy, see NC00 , Ex. 11.21, (103) implies $H(\rho,F)=H(p)\geq\sum_{i}\lambda_{i}\,H\left(p^{(i)}\right)\;.$ (104) Similarly, concavity of the von Neumann entropy, see NC00 , 11.3.5., yields $S(\rho_{1})=S\left(\mathfrak{I}({\mathcal{N}})(\rho)\right)\stackrel{{\scriptstyle(\ref{PR1})}}{{=}}S\left(\sum_{i}\lambda_{i}\mathfrak{I}^{(i)}({\mathcal{N}})(\rho)\right)\geq\sum_{i}\lambda_{i}S\left(\mathfrak{I}^{(i)}({\mathcal{N}})(\rho)\right)=\sum_{i}\lambda_{i}S\left(\rho_{1}^{(i)}\right)\;.$ (105) Finally, $\Delta S=S(\rho)-S\left(\rho_{1}\right)\stackrel{{\scriptstyle(\ref{PR10})}}{{\leq}}S(\rho)-\sum_{i}\lambda_{i}\,S\left(\rho_{1}^{(i)}\right){=}\sum_{i}\lambda_{i}\,\left(S(\rho)-S\left(\rho_{1}^{(i)}\right)\right)\stackrel{{\scriptstyle(\ref{PR5})}}{{\leq}}\sum_{i}\lambda_{i}\,H\left(p^{(i)}\right)\stackrel{{\scriptstyle(\ref{PR9})}}{{\leq}}H(p)\;,$ (106) thereby completing the proof of Corollary 1. $\Box$ ### B.4 Proof of Proposition 3 By setting $A_{n}=U_{n}\,P_{n}$, resp. 
$A_{n}=P_{n}$, for all $n\in{\mathcal{N}}$, we obtain from (91): $\rho_{12}=\sum_{n}\left(U_{n}P_{n}\rho P_{n}U_{n}^{\ast}\right)\otimes|n\rangle\langle n|=:\sum_{n}p_{n}\,\rho_{n}\otimes|n\rangle\langle n|\;,$ (107) where $p_{n}:=\mbox{Tr}\left(\rho\,P_{n}\right)$, and $\rho_{12}^{\prime}=\sum_{n}\left(P_{n}\rho P_{n}\right)\otimes|n\rangle\langle n|\;.$ (108) Moreover, by means of (48), $\rho_{2}=\rho_{2}^{\prime}=\sum_{n}p_{n}|n\rangle\langle n|\;,$ (109) and hence $S_{2}=S_{2}^{\prime}=H(p)=H(\rho,P)\;.$ (110) Since the $\rho_{n}\otimes|n\rangle\langle n|$ in (107) as well as the $\left(P_{n}\rho P_{n}\right)\otimes|n\rangle\langle n|$ in (108) have orthogonal support, theorem 11.8 (4) of NC00 can be applied and yields: $\displaystyle S_{12}$ $\displaystyle=$ $\displaystyle S\left(\rho_{12}\right)=\sum_{n}p_{n}S\left(\rho_{n}\otimes|n\rangle\langle n|\right)+H(p)$ (111) $\displaystyle=$ $\displaystyle\sum_{n}p_{n}S\left(\frac{1}{p_{n}}U_{n}P_{n}\rho P_{n}U_{n}^{\ast}\right)+H(p)$ (112) $\displaystyle=$ $\displaystyle\sum_{n}p_{n}S\left(\frac{1}{p_{n}}P_{n}\rho P_{n}\right)+H(p)$ (113) $\displaystyle=$ $\displaystyle S_{12}^{\prime}\;,$ (114) using the invariance of von Neumann entropy under unitary transformations in (113). Finally, $\displaystyle\Delta{\mathcal{I}}$ $\displaystyle\stackrel{{\scriptstyle(\ref{OLRa}-\ref{OLRc})}}{{=}}$ $\displaystyle\left(S_{1}^{\prime}+S_{2}^{\prime}-S_{12}^{\prime}\right)-\left(S_{1}+S_{2}-S_{12}\right)$ (115) $\displaystyle\stackrel{{\scriptstyle(\ref{S12d})}}{{=}}$ $\displaystyle S_{1}^{\prime}+S_{2}^{\prime}-S_{1}-S_{2}\stackrel{{\scriptstyle(\ref{S2p})}}{{=}}S_{1}^{\prime}-S_{1}$ (116) $\displaystyle\geq$ $\displaystyle S_{0}-S_{1}=\Delta S\;,$ (117) where we have used $S_{1}^{\prime}\geq S_{0}$ in (117) since a total Lüders operation never decreases entropy. This completes the proof of Proposition 3. $\Box$ ###### Acknowledgements. 
I thank all members of the DFG Research Unit FOR 2692 as well as Thomas Bröcker for stimulating and insightful discussions. ## References * (1) H.-J. Schmidt, Conditional action and quantum versions of Maxwell’s demon, Found. Phys. 50, 1480 – 1508, (2020) * (2) P. Busch, P. J. Lahti, and P. Mittelstädt, The Quantum Theory of Measurement, $2^{nd}$ revised ed., Springer-Verlag, Berlin, 1996. * (3) J. von Neumann, Mathematische Grundlagen der Quantenmechanik, Springer-Verlag, Berlin, 1932; English translation: Mathematical Foundations of Quantum Mechanics, Princeton University Press, Princeton, 1955. * (4) M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, 2000. * (5) H.-J. Schmidt and J. Gemmer, Sequential measurements and entropy, J. Phys.: Conf. Ser. 1638, 012007 (2020). * (6) P. Busch, P. J. Lahti, J.-P. Pellonpää and K. Ylinen, Quantum Measurement, Springer-Verlag, Berlin, 2016. * (7) L. Szilard, Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen (On the reduction of entropy in a thermodynamic system by the intervention of intelligent beings), ZS. f. Phys. 53 (11–12), 840 – 856, 1929 * (8) H. S. Leff and A. F. Rex, Entropy of Measurement and Erasure: Szilard’s Membrane Model Revisited, Am. J. Phys. 62, 994 – 1000, (1994) * (9) J. Earman and J. D. Norton, Exorcist XIV: The Wrath of Maxwell’s Demon. Part I. From Maxwell to Szilard, Stud. Hist. Phil. Mod. Phys. 29 (4), 435 – 471, (1998) * (10) J. Earman and J. D. Norton, Exorcist XIV: The Wrath of Maxwell’s Demon. Part II. From Szilard to Landauer and Beyond, Stud. Hist. Phil. Mod. Phys. 30 (1), 1 – 40, (1999) * (11) W. H. Zurek, Maxwell’s demon, Szilard’s engine and quantum measurements, in G. T. Moore and M. O. Scully (eds), Frontiers of Nonequilibrium Statistical Mechanics, Plenum Press, New York, 1984, pp 151 – 161 * (12) G. Lindblad, Entropy, Information and Quantum Measurements, Commun. math. Phys.
33, 305 – 322, (1973) * (13) R. Landauer, Irreversibility and heat generation in the computing process, IBM J. Res. Dev. 5, 183-191, (1961) * (14) C. H. Bennett, The Thermodynamics of Computation–a Review, Int. J. Theor. Phys. 21, No. 12, 905 – 940, (1982) * (15) N. G. Anderson, Conditioning, Correlation and Entropy Generation in Maxwell’s Demon, Entropy 15, 4243 – 4265, (2013) * (16) K. Kraus, States, Effects, and Operations - Fundamental Notions of Quantum Theory, Lecture Notes in Physics 190, Springer-Verlag, Berlin, 1983. * (17) J.-P. Pellonpää, Quantum instruments: I. Extreme instruments, J. Phys. A: Math. Theor. 46, 025302, (2013) * (18) J.-P. Pellonpää, Quantum instruments: II. Measurement theory, J. Phys. A: Math. Theor. 46, 025303, (2013) * (19) For an overview of work on Maxwell’s demon see LR03 and EN98 , EN99 . * (20) H. Leff and A. Rex (Eds.), Maxwell’s Demon 2: Entropy, classical and quantum information, computing, Institute of Physics, Bristol, 2003. * (21) N. G. Anderson, Information as a physical quantity, Inf. Sci. 415 - 416, 397 – 413, (2017) * (22) N. G. Anderson, Information erasure in quantum systems, Phys. Lett. A 372, 5552 – 5555, (2008)
# Aalto-1, multi-payload CubeSat: design, integration and launch

J. Praks M. Rizwan Mughal (also associated with the Electrical Engineering Department, Institute of Space Technology, Islamabad, Pakistan; Correspondence<EMAIL_ADDRESS>) R. Vainio P. Janhunen J. Envall P. Oleynik A. Näsilä H. Leppinen P. Niemelä A. Slavinskis J. Gieseler P. Toivanen T. Tikka T. Peltola A. Bosser G. Schwarzkopf N. Jovanovic B. Riwanto A. Kestilä A. Punkkinen R. Punkkinen H.-P. Hedman T. Säntti J.-O. Lill J.M.K. Slotte H. Kettunen A. Virtanen

Department of Electronics and Nanoengineering, Aalto University School of Electrical Engineering, 02150 Espoo, Finland Department of Physics and Astronomy, 20014 University of Turku, Finland Finnish Meteorological Institute, Space and Earth Observation Centre, Helsinki, Finland VTT Technical Research Centre of Finland Ltd, Espoo, Finland Tartu Observatory, University of Tartu, Observatooriumi 1, 61602 Tõravere, Estonia Department of Future Technologies, 20014 University of Turku, Finland Accelerator Laboratory, Turku PET Centre, Åbo Akademi University, 20500 Turku, Finland Physics, Faculty of Science and Technology, Åbo Akademi University, 20500 Turku, Finland Department of Physics, P.O.Box 35, 40014 University of Jyvaskyla, Finland

###### Abstract The design, integration, testing and launch of the first Finnish satellite, Aalto-1, are briefly presented in this paper. Aalto-1, a three-unit CubeSat launched into a Sun-synchronous polar orbit at an altitude of approximately 500 km, has been operational since June 2017. It carries three experimental payloads: the Aalto Spectral Imager (AaSI), the Radiation Monitor (RADMON) and the Electrostatic Plasma Brake (EPB). AaSI is a hyperspectral imager in the visible and near-infrared (NIR) wavelength bands, RADMON is an energetic particle detector and EPB is a de-orbiting technology demonstration payload.
The platform was designed to accommodate multiple payloads while providing sufficient data, power, radio, mechanical and electrical interfaces. The design strategy for the platform and payload subsystems combines in-house development with commercial subsystems. The CubeSat Assembly, Integration & Test (AIT) followed a Flatsat $-$ Engineering Qualification Model (EQM) $-$ Flight Model (FM) philosophy for qualification and acceptance. The paper briefly describes the design approach of the platform and payload subsystems, their integration and test campaigns, and the spacecraft launch. The paper also describes the ground segment and services that were developed by the Aalto-1 team.

###### keywords: Aalto-1 , CubeSat , hyperspectral , radiation , Aalto Spectral Imager , Radiation Monitor , Electrostatic Plasma Brake ††journal: Acta Astronautica

## 1 Introduction

There is increasing interest in small satellite missions due to advances in Commercial Off-The-Shelf (COTS) technology miniaturization. Traditionally, the classification of small satellites has been based only on their mass, but the CubeSat standard also takes the volume into consideration [1]. Over the past decade, the applications of small satellites in general, and CubeSats in particular, have increased manifold due to the availability of low-cost design, testing and launch possibilities [2, 3, 4, 5]. Initially conceived for training and educational activities, CubeSats have expanded into a wide range of application areas in the past few years [6]. Example application areas include remote sensing, Earth observation, disaster management, science, astronomy, space weather and technology demonstration [7, 8, 9, 10, 11, 12, 13]. The abundant availability of COTS components with faster development cycles has led to the NewSpace movement [14].
This approach has driven the transformation of CubeSat missions from educational and technology demonstration projects to real missions with potentially risky but higher commercial and science return [15, 16, 17]. A large number of commercial applications using CubeSats have evolved in the past few years, with a promising future scope [2, 3, 4, 5]. To date, more than one thousand CubeSats have been launched into space [18], and forecasts suggest an exponential increase in nanosatellite launches every year [19]. There has also been great advancement in technology development for nano- and microsatellites [20]. A number of innovative platforms have been designed and demonstrated in space [21, 22, 23, 24]. Due to technology miniaturization, the capability of CubeSat platforms has been ever increasing [25, 26]. Current CubeSat missions are able to demonstrate innovative platforms with high power generation, precise attitude pointing and high data downlink capability, with the potential to compete with their bigger satellite counterparts. During the past decade, the worldwide trend has been for each university or Small & Medium Enterprise (SME) to design and launch a relatively simple single-unit (1U) CubeSat as its first satellite, for capability demonstration. In contrast, we at Aalto University followed a more challenging approach: designing a multi-payload CubeSat with student teams. The mission objective was to build and launch a spacecraft focused on science, imaging and de-orbiting technology demonstration while also providing hands-on educational training. This paper presents detailed design aspects of the Aalto-1 CubeSat with a description of the capabilities of the payloads and the platform to accomplish the mission objectives. The in-orbit demonstration and lessons learned are presented in an accompanying paper [27].
This paper is organized as follows: Section 2 briefly introduces the mission objectives and requirements, Section 3 presents the mission design, project implementation and educational outcomes, Section 4 presents the space segment design and implementation, Section 5 presents all payloads, their specifications and designs, Section 6 introduces the design approach of the platform subsystems, Section 7 presents the integration and testing, Section 8 focuses on the ground segment, and Section 9 concludes the paper.

## 2 Mission objectives

The Aalto-1 satellite project was initiated from Aalto University students’ aspiration to carry out the first satellite mission in Finland. The idea was supported by teachers and developed, in the form of a feasibility study, during a special assignment in the Space Technology course in the spring semester of 2010. The goal of the course was to develop a realistic satellite concept that could be implemented (at least partly) by students. It was required that the main payload be developed in Finland and be connected to the Aalto University curriculum. During the feasibility study this was translated into the goal of building the first Finnish satellite with an Earth Observation payload. For the university, the main driver for the Aalto-1 project was to provide hands-on education in space engineering, science and entrepreneurship, while taking advantage of the NewSpace movement [14, 2, 3, 4, 5] and harnessing the enthusiasm of building the first national satellite. It was envisioned that, in addition to satellite development, students would also learn to work with experienced space scientists and develop connections to industrial partners. The mission was largely financed and led by Aalto University and integrated into the Aalto space technology curriculum. Beyond the main goal of building, launching and operating the first national satellite, the proposed payload selection introduced complex technology demonstration and science goals.
The first feasibility study built the satellite concept around four payload candidates and established a consortium for building a 3U CubeSat. The study also derived the main mission requirements. As an outcome of the feasibility study, the satellite mission and platform were to be developed by Aalto University students, and the payloads were to be contributed by partner organisations. The main payload candidate was a spectral Earth Observation imager, AaSI, based on technology developed by VTT Technical Research Centre of Finland (VTT). This later led to a wider offering of spectral devices for space applications by VTT. Another payload candidate, a radiation monitoring device, later called RADMON, was proposed by a team from the University of Turku and the University of Helsinki. The third payload candidate selected by the study was an E-sail experiment device, EPB, which was already in development at the Finnish Meteorological Institute (FMI) for the ESTCube-1 CubeSat mission [28, 29]. In the original feasibility study, a vibration monitoring system was also proposed; however, the idea was later abandoned as impractical for a rather monolithic nanosatellite. None of the selected payloads had flight heritage. Moreover, the core technologies of AaSI and EPB had never before been demonstrated in space for their proposed purposes: Earth Observation with a tunable Fabry–Pérot Interferometer (FPI) was a novel concept, and de-orbiting a satellite by electrostatic force using a tether had never been attempted. This provided technical challenge and scientific novelty for the project. The project consortium, which consisted of Aalto University, the University of Turku, the University of Helsinki, VTT and the Finnish Meteorological Institute, decided to build a multi-payload mission. In retrospect, one can say that this decision lengthened the project significantly and also forced several compromises in the design due to contradicting requirements.
It took slightly over five years from the first idea to FM completion. The overarching Aalto-1 mission objective was to build a satellite to carry out in-orbit demonstration of the AaSI, RADMON and EPB experiments, each with specific mission objectives. AaSI’s main objective was to demonstrate the operation of a tunable FPI-based spectral imager for Earth Observation in the space environment. The FPI technology developed at VTT allowed, for the first time, a freely tunable spectral Earth Observation camera to be built in a nanosatellite form factor. As a minimum, the instrument was required to take wavelength calibration measurements and record a spectrum of at least six wavelengths of cloud-free land. For a full demonstration, the instrument was required to take measurements to investigate wavelength stability, thermal effects and long-term degradation of the filters, optics, sensor and other components, along with demonstrating various operation modes. RADMON’s main objective was to operate and calibrate a CubeSat-compatible radiation detector which registers protons in nine energy channels with threshold energies of 10 – 40 MeV and electrons in five energy channels with threshold energies of 1.5 – 12 MeV. EPB’s main objective was to deploy a tether and then charge it to estimate the force exerted by the Coulomb drag between the tether’s electric field and the Earth’s ionosphere, as well as to demonstrate de-orbiting by keeping the tether charged for an extended period of time. This novel propulsion concept has not yet been demonstrated in space. So far, two launched nanosatellites, ESTCube-1 and Aalto-1, have attempted to deploy this system, and in the near future AuroraSat-1, Foresail-1 and ESTCube-2 are heading towards similar goals using the same technology [30]. Detailed mission requirements kept developing throughout the project and were not documented in detail.
Therefore, it can be said that the mission was technology driven, as often happens in CubeSat missions. However, the proper feasibility study at the very beginning and a well-established consortium helped to keep the focus on results. The satellite as finally launched followed the original plan of payloads and functionality closely, but the requirements had been seriously underestimated given the many constraints, including time and resources.

## 3 Mission design and implementation

In order to satisfy the payload in-orbit demonstration requirements, Aalto-1 had to be launched into a polar orbit with an altitude of at least 500 km. A polar orbit provides sufficient conditions to estimate the Coulomb drag force [31] and allows RADMON to measure at various latitudes, including the South Atlantic Anomaly. A polar orbit also provides coverage of Finland and good opportunities for Earth Observation. The attitude requirements were set by all payloads, but dominated by the EPB requirements. The lower altitude limit was required by EPB: at lower altitudes, the atmospheric drag might dominate the de-orbiting effect, which would make electrostatic drag estimation difficult. The upper altitude limit was set by the 25-year orbital decay requirement for space debris mitigation. The EPB experiment requires the satellite to spin at hundreds of degrees per second in order to provide the centrifugal force for tether deployment [32]. The angular momentum was to be provided in steps: spin up the satellite, deploy the tether, spin up again, etc. AaSI requires nadir pointing during image acquisition, and RADMON requires attitude knowledge, although this requirement was not critical. Another notable mission requirement was the surface conductivity required by EPB to maintain the spacecraft potential during the Coulomb drag experiment. The satellite was designed for two years in orbit, which was estimated to be sufficient time to carry out all experiments.
The mission design in terms of energy and thermal budget was flexible, as it was decided that payload duty cycles could be adjusted in-orbit according to need. Operating the satellite from Aalto University was one of the key mission requirements, and the ground segment was developed for this purpose. The ground station includes UHF, VHF and S-band steerable antennas and associated transceivers. The mission operation software was designed and implemented by Aalto students. The product tree of the Aalto-1 mission, with ground segment, space segment and launch segment, is shown in Fig. 1.

Figure 1: Aalto-1 product tree

### 3.1 Project Implementation

After the successful feasibility study in spring 2010, the satellite project was quickly funded and supported by Aalto MIDE (Multidisciplinary Institute of Digitalisation and Energy) [33]. The project was also formally organized by establishing posts for a responsible professor and a project coordinator, and by forming a Steering Group and a Scientific Advisory Board. Under the official project umbrella, student groups developed their own organization for building the satellite and the ground segment. Thematic student teams were in most cases oriented towards the development of a single subsystem or a single topic. Quality assurance was maintained as a separate independent branch, as is practiced in bigger satellite projects. During the semester, student teams had weekly meetings, and decisions were made in team-leader meetings. Thanks to the available funding, it was possible to hire a few doctoral students and to provide summer trainee positions and occasional salaried master thesis positions (the usual practice in Finnish universities). The doctoral students formed the backbone of the project, which helped to accumulate and retain knowledge. Many subsystems were developed as master thesis projects. Payload teams formed separate project structures in their home organizations, and their team leaders were part of the Scientific Advisory Board.
The satellite bus and payload developments were financially independent and applied for funds separately. Several satellite project students wrote their master theses with a payload team. The project schedule was built to mimic larger space projects, where the main project phases are separated by milestone reviews. The Preliminary Design Review was arranged in November 2011, the Critical Design Review in May 2013, the Test Readiness Review in May 2015 and the Flight Readiness Review in January 2016. Several smaller reviews were arranged along the way. The flight model of the satellite was delivered to the Netherlands in May 2016. Review panels were assembled from space technology professionals and CubeSat team members from other universities. Both a documentation-based review format (at the beginning of the project) and a presentation-based review format (towards the end of the project) were used. A Flatsat, EQM and FM model policy with a fast iterative development model was implemented in the project. A single Flight Qualification Model (FQM) approach was considered at the beginning of the project, but it proved to be impractical. The students were inexperienced and learned most efficiently by making prototypes and hardware versions; therefore rapid iterations and frequent hardware models proved more efficient than a waterfall design. The main challenge for the project was to find and keep knowledge in the team over a multi-year project. Student teams were volatile and documentation often incomplete, despite documentation being required to obtain study credits. The fact that key persons were hired and committed to the project helped the project to continue. Constant support by the university and project organization was also highly important. The project was also well aligned with university goals: it produced degrees, research papers and positive publicity.
Aalto University, along with a few sponsoring partners, also procured and financed the satellite launch, the first in Finland.

### 3.2 Educational outcomes

The student work was incorporated into students’ individual studies mainly via special assignments, bachelor and master thesis projects, and also as part of doctoral studies. The main challenge was to align project needs and project documentation with teaching and outcome assessment in a situation where most of the work was done in groups. During the project, a documentation and reporting approach evolved in which project documentation was used for grading and individual contributions were assessed by self-evaluation and peer reviews. Assessment was done on the basis of a snapshot of the evolving documentation, and it was required that the documentation be available for the entire project. The final grade was assigned by the supervising professor [34]. Far more than 100 students contributed to the design and development of the satellite, although the contributions varied from single-semester participation in meetings to several years of design and implementation. Around 12 master-level and 28 bachelor-level theses were completed on satellite design and development activities during the course of the project. By now, three doctoral dissertations have also been defended based mainly on Aalto-1 satellite related topics [35, 36, 37], and several more are on the way. Additionally, more than 10 master-level theses related to the design and development of the payloads were completed at partner institutes. The outcomes and results were also published in many scientific conferences and journals in the field. The project gathered a lot of media attention, which raised awareness of space technology and small satellites widely [38]. Many of the Aalto-1 students became space engineers and scientists at partner institutions.
A subgroup of Aalto-1 students established the ICEYE company, which builds and operates a fleet of Synthetic-Aperture Radar (SAR) satellites. Another group formed the Reaktor Space Lab company, which specializes in nanosatellite missions.

## 4 Space Segment design and implementation

The feasibility study and preliminary design analysis proposed a 3U CubeSat platform to carry out the mission. The CubeSat platform was selected because it provided affordable access to space and commercially available subsystems for an inexperienced team. The payloads were designed concurrently with the satellite platform; AaSI and RADMON were entirely new designs, whereas EPB development had already started for the ESTCube-1 satellite [28]. The 3U satellite platform was designed and manufactured mainly by students of Aalto University. However, at an early stage of the project it was decided that the electrical power system and the attitude system should be procured from commercial providers, mainly because of reliability concerns with fresh designs. The satellite design, as shown in Fig. 2, features a 3U CubeSat body, 3-axis stabilization, body-mounted solar panels, deployable UHF antennas, several cameras and openings for payloads. The electronics of the satellite are accommodated in two electronics stacks connected by cabling. The Long Stack features all the avionics and the AaSI payload; the Short Stack accommodates RADMON and EPB. The main reason for this separation was the design decision to align the EPB reel motor rotation axis with the satellite rotation axis in spinning mode.

Figure 2: The structure and subsystems of the Aalto-1 satellite.
The highlighted subsystems are: 1) Radiation Monitor (RADMON), 2) Electrostatic Plasma Brake (EPB), 3) Global Positioning System (GPS) antenna and stack interface board, 4) Attitude Determination and Control System (ADCS), 5) GPS and S-band radio, 6) Aalto Spectral Imager (AaSI), 7) Electrical Power System (EPS), 8) On-Board Computer (OBC), 9) Ultra High Frequency (UHF) radios, 10) solar panels, 11) electron guns for EPB, 12) S-band antenna, 13) debug connector, and 14) UHF antennas. The student-designed Aalto-1 platform consists of an in-house developed cold-redundant On-Board Computer (OBC) running Linux, Ultra High Frequency (UHF) and S-band radios, a navigation system based on the Global Positioning System (GPS), an aluminium structure, solar panels, Sun sensors, an Antenna Deployment System (ADS), and a commercially procured Electrical Power Subsystem (EPS) and Attitude Determination & Control Subsystem (ADCS).

Figure 3: A block diagram of digital, RF and power interfaces

An overview of the Aalto-1 power, data and Radio Frequency (RF) interfaces is presented in Fig. 3. The power interface provides regulated voltage levels (3.3 V, 5 V and 12 V) to the satellite avionics and payloads. Several digital interfaces, including Inter-Integrated Circuit (I2C), Serial Peripheral Interface (SPI) and Universal Asynchronous Receiver-Transmitter (UART), have been implemented and are controlled by the OBC.

## 5 Payloads

The instrument design technique, mass, volume, electrical and mechanical interfaces, and key design challenges of the Aalto-1 payloads are described in detail in this and subsequent sections.

### 5.1 Radiation monitor

The RADMON instrument [39] is a compact low-power radiation monitor. It has envelope dimensions of about 4$\times$9$\times$10 $\text{cm}^{3}$, a mass of 360 g and a power consumption of 920 mW. The spacecraft supplies both +5 V and +12 V to the instrument.
The instrument consists of a detector assembly inside a brass casing, a signal processing board, a digital board, and an electrical power board. The three boards are connected by a 52-pin internal bus running through all of them (see Fig. 4(a)). The instrument is integrated into the short stack of the satellite with another bus connector as well as with four spacers placed in the corners of the PCB stack. The bus connector also provides the electrical interface to the satellite. (a) The RADMON radiation monitor assembly. (b) The detector unit. Figure 4: The assembly comprises three printed circuit boards and a brass container with a detector unit inside. An aluminum entrance window in front of the brass container covers the detector unit. A scintillator with a readout photodiode and a silicon detector are housed within the brass container. The detector unit consists of a rectangular 2.1$\times$2.1$\times$0.35 $\text{mm}^{3}$ silicon detector and a 10$\times$10$\times$10 $\text{mm}^{3}$ CsI(Tl) scintillation detector, enclosed by the brass casing which determines the acceptance aperture (see Fig. 4(b)). The casing has an aluminum entrance window that protects the detector stack from low-energy charged particles and photons. The scintillator has a thin polytetrafluoroethylene (PTFE) wrapping on five sides and a readout photodiode on the sixth side. We have used a Hamamatsu S3590-08 PIN silicon photodiode with dimensions of 10$\times$10 $\text{mm}^{2}$ and a depletion thickness of about 0.3 mm. The silicon detector has one biased guard ring and one floating guard ring. The passive area of the silicon detector extends about 0.7 mm around the active spot. The two detectors produce electrical signals for a standard $\Delta E$ – $E$ analysis aimed at determining the particle species and the energy deposited in the detector. Coincidence logic prevents the registration of particles coming from outside the aperture and of bremsstrahlung X-rays generated in the brass container.
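The coincidence and $\Delta E$ – $E$ logic described above can be sketched as follows. This is an illustrative reconstruction only: the function `classify_hit`, its thresholds and the ratio cut are assumptions for demonstration, not the flight firmware's actual criteria.

```python
def classify_hit(dE_si_mev, e_csi_mev, si_threshold=0.1, csi_threshold=0.2):
    """Toy Delta-E / E classifier: require a coincident signal in both the
    thin silicon (Delta-E) and CsI(Tl) (E) detectors, then separate species
    by the fraction of energy deposited in the silicon layer."""
    # No coincidence: out-of-aperture particle or bremsstrahlung X-ray.
    if dE_si_mev < si_threshold or e_csi_mev < csi_threshold:
        return "rejected"
    total = dE_si_mev + e_csi_mev
    # Protons deposit a much larger fraction of their energy in the thin
    # silicon layer than electrons of the same total energy; the 0.05 cut
    # below is purely illustrative.
    return "proton" if dE_si_mev / total > 0.05 else "electron"
```

For example, under these assumed cuts a hit depositing 3 MeV in silicon and 27 MeV in the scintillator would be labelled a proton, while 0.15 MeV / 5 MeV would be labelled an electron, and a hit with no silicon signal would be rejected.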
The aluminum window sets thresholds for electron detection at about 1 MeV and for proton detection at about 10 MeV. The brass case becomes transparent to protons at about 55 MeV, approximately the same energy at which protons incident through the aperture start to penetrate the scintillator. RADMON registers protons in nine energy channels with threshold energies of 10 – 40 MeV and electrons in five energy channels with threshold energies of 1.5 – 12 MeV. A detailed analysis of the instrument response to electrons and protons is given in [40]. The data rate can be adjusted by changing the polling frequency of the instrument. Nominally, science data is collected every 15 seconds and housekeeping data every 60 seconds. This gives a data rate of about 25.4 kBytes per hour, including the packet overhead. Testing and ground calibration of RADMON were performed using radioactive sources and a proton beam from the MGC-20 cyclotron at Åbo Akademi University, Turku, Finland. The maximum beam energy available from the cyclotron was about 17 MeV. The beam was scattered at about 60 degrees from a thin tantalum foil to lower the beam intensity and achieve a flux low enough for the calibrations. The proton beam energy was decreased step-wise by adding absorbers between the foil and the detector. This setup allowed the instrument to be calibrated successfully for the low-energy proton response, and Geant4 [41, 42] simulations were used to extend the proton response over the full energy range. The electron response was monitored using beta particles from different radioactive decay sources. The radiation tolerance of the RADMON electronics was tested at the RADiation Effects Facility (RADEF) of the University of Jyvaskyla, Finland. The device was tested in a 50-MeV proton beam to a total dose of up to 10 krad, which it survived without observable degradation [39].
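The quoted data rate of about 25.4 kBytes per hour can be reproduced from the polling intervals with a back-of-envelope calculation. Only the 15 s and 60 s intervals come from the text; the packet sizes below are assumptions chosen for illustration.

```python
# Back-of-envelope check of RADMON's ~25.4 kB/h data rate.
SCIENCE_PERIOD_S = 15      # science data polled every 15 s (from the text)
HK_PERIOD_S = 60           # housekeeping polled every 60 s (from the text)
SCIENCE_PACKET_B = 100     # assumed packet size, incl. overhead
HK_PACKET_B = 24           # assumed packet size, incl. overhead

per_hour_bytes = (3600 // SCIENCE_PERIOD_S) * SCIENCE_PACKET_B \
               + (3600 // HK_PERIOD_S) * HK_PACKET_B
print(per_hour_bytes / 1000)   # ~25.4 kB/h under these assumed packet sizes
```

Any packet-size pair satisfying 240 science packets plus 60 housekeeping packets per hour totalling roughly 25 400 bytes would match the quoted figure equally well.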
As the device relies on a commercial version of the Xilinx Virtex-4 field-programmable gate array, we have implemented a triple-redundant memory with active scrubbing running in parallel with the normal operations of the instrument [43]. The system was shown in RADEF tests to cope with a 50-MeV proton flux of $10^{6}$ cm-2 s-1, above which the rate of double bit errors became significant [39]. The instrument, being integrated into the satellite short stack, is also sensitive to electromagnetic interference. The scintillator detector signal path in particular is affected by the electromagnetic emission of other spacecraft subsystems. This has increased the noise levels in this signal and prevented detection at the smallest signal levels, raising the threshold of the electron measurements from the nominal 0.7 MeV [39] to the 1.5 MeV achieved in space [40].

### 5.2 Electrostatic Plasma Brake

The plasma brake payload is based on the Coulomb drag principle, which is the driving phenomenon behind the Electric Solar Wind Sail (E-sail) invention [44, 45]. The brake itself consists of a 100 meter long tether; a storage reel; a vacuum-qualified piezo motor and control electronics for tether deployment; a high voltage source; and four electron guns. Once the tether has been deployed, it can be charged to a voltage of either +1 kV or $-$1 kV with respect to the surrounding ionospheric plasma. As the satellite moves through the plasma at its orbital speed, the electrostatic interaction between the tether and the plasma introduces a force opposite in direction to the satellite’s velocity, thus slowly reducing the orbital speed.

Figure 5: Tether Reel FM board from both sides. On the left, the tether reel lock Kieku, ready and locked. The black object to the left of Kieku is the optical feedback device Kyylä. The electron guns are located on satellite side panel X+, next to the payload, which deploys from the Z$-$ end of the satellite.

Fig.
5 shows the tether reel Printed Circuit Board (PCB) (left panel) and the high voltage board (right panel). These two boards were stacked together, with overall dimensions of 5$\times$9$\times$10 cm. The spacecraft EPS supplied 3.3 V, 5 V, and 12 V to EPB. The power consumption of EPB depends on the operation mode: the launch locks use 1.25 W each for about 20 s (the locks are not released at the same time), the high voltage tether system consumes a few hundred mW depending on the ambient ionospheric plasma density, and the reeling system draws 2.3 W during deployment. None of these tasks are executed simultaneously, and the tether is not deployed all at once. The data rate is low throughout the mission, as only the tether voltage and current are sampled, at a frequency of 10 Hz. The tether itself is constructed of four aluminum filaments and is based on the Heytether geometry [46]. The tether is deployed with the help of centrifugal force; the satellite must therefore be spinning around a suitable axis. Once the proper spin mode is reached, the tether reel motor is activated and the tether is slowly unreeled out into space. At the tip of the tether there is an aluminum tip mass, whose task is to assist in tether deployment by increasing the pull force experienced by the tether. On the bottom side of the reel there is a slip ring serving as the contact point for the high voltage source through two cantilever spring sliders, the only mechanically redundant subsystem of EPB. When a positive voltage is applied, one or more electron guns are activated in order to eject excess electrons and thus maintain the positive voltage as the surrounding plasma attempts to neutralize it. In negative tether voltage mode, the tether gathers positive ions from the plasma, while the conducting parts of the satellite surface collect the same flux of thermal electrons from the plasma to maintain current balance. The plasma brake payload was tested prior to the system level tests.
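The pull force that the spin provides for deployment follows from simple rigid-rotation mechanics, $F = m\,\omega^{2}\,r$ for the tip mass. The sketch below uses hypothetical numbers: the text gives only the 100 m tether length and a spin rate of "hundreds of degrees per second"; the tip mass value is an assumption.

```python
from math import pi

def centrifugal_force(tip_mass_kg, spin_deg_per_s, tether_length_m):
    """Centrifugal pull on the tip mass at the end of a deployed tether,
    F = m * omega^2 * r (rigid-rotation approximation; ignores the
    tether's own mass and the spin-down during deployment)."""
    omega = spin_deg_per_s * pi / 180.0   # convert to rad/s
    return tip_mass_kg * omega**2 * tether_length_m

# Hypothetical 2 g tip mass, 200 deg/s spin, fully deployed 100 m tether:
force_n = centrifugal_force(0.002, 200.0, 100.0)   # about 2.4 N
```

In practice the available pull force shrinks as the tether unreels, since deployment transfers angular momentum outwards and slows the spin, which is why the mission plan alternates deployment with spin-up phases.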
Vibration tests were carried out to qualify the mechanical components, the PCBs and, especially, the reel motor, as the motor had been designed for laboratory use. It was noted that the high voltage sliders dug two dents into the slip ring that were able to stop the reel rotation. Simple resistor-based launch locks were therefore introduced on the bottom side of the reel PCB to keep the sliders away from the slip ring during the launch. The functionality of the payload was successfully tested in thermal vacuum. Furthermore, specific to the EPB payload, high voltage tests were made, and the tether outreeling was tested to determine the minimum centrifugal force required for tether deployment.

### 5.3 Aalto-1 Spectral Imager AaSI

The miniaturized spectral imager, AaSI, as shown in Fig. 6, is the main payload of the Aalto-1 nanosatellite. The imager is based on a tunable FPI, which is used as an adjustable passband filter. This enables the imager to acquire images at freely selectable wavelengths. The operational range is 500–900 nm and the spectral resolution is 10–20 nm. In addition to the spectral imager, a visible (VIS) spectrum Red–Green–Blue (RGB) camera is included in the instrument [47, 48, 49]. Table 1 introduces the main parameters of AaSI.

Table 1: Main parameters of AaSI.
Wavelength range: 500–900 nm
Spectral resolution: 10–15 nm
Field of view: 10∘ $\times$ 10∘ (SPE), 15∘ $\times$ 10∘ (VIS)
Spectral image size: 512 $\times$ 512 pixels
VIS image size: 2048 $\times$ 1280 pixels
Number of spectral bands: 6, 25 or 75
Size: 97 $\times$ 97 $\times$ 48 mm3
Mass: 600 g

Figure 6: The Aalto-1 Spectral Imager AaSI. The size of the instrument is ca. 0.5 U and it is compatible with the PC104 interface. The instrument has two cameras: a visible spectrum RGB camera (left) and a spectral imager (right).
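The tuning principle of the FPI can be illustrated with the standard étalon resonance condition: at normal incidence and unit refractive index, an air gap $d$ transmits the wavelengths $\lambda_{m} = 2d/m$ for integer order $m$, so changing the gap shifts the passband. The gap value below is hypothetical; AaSI's actual gap range and order-handling arrangement are not given in the text.

```python
def fpi_passbands(gap_nm, lo_nm=500.0, hi_nm=900.0):
    """Wavelengths 2*d/m of a Fabry-Perot etalon (air gap d, normal
    incidence, refractive index 1) that fall inside AaSI's 500-900 nm
    operational range."""
    return [2 * gap_nm / m
            for m in range(1, int(2 * gap_nm / lo_nm) + 1)
            if lo_nm <= 2 * gap_nm / m <= hi_nm]

print(fpi_passbands(900.0))   # a 900 nm gap passes orders at 900 and 600 nm
```

Because several orders can fall inside the operational range simultaneously, FPI imagers in general must combine the étalon with order-sorting filters or multi-band detection to disentangle them.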
## 6 Platform

The Aalto-1 platform subsystems include an EPS [50], an ADCS [51], a GPS-based navigation system [52], UHF [53, 54] and S-band [55] radios for Telemetry, Telecommand & Communication (TT&C), and a Linux-based OBC [56, 57, 58, 59]. The electronic subsystems are placed in two circuit board stacks, the Long Stack and the Short Stack, which are connected using a stack interface board. The electronics followed the CubeSat electronics format, and the bus pin layout followed the PC-104 standard. The design philosophy of the CubeSat platform is a hybrid combination of subsystems developed in-house and commercial products. The satellite structure, solar panels, Sun sensors, TT&C and OBC were fully designed in-house, whereas the ADCS and the EPS were procured from commercial partners. The CubeSat structure, antenna and antenna deployment system were also developed in-house. The in-house developed subsystems were fully designed, integrated and tested by student teams. The PCBs were manufactured by a commercial PCB provider, whereas the component soldering and stuffing were performed in our facility. Special consideration was given to the design of the critical subset of subsystems, consisting of the EPS, OBC and UHF. Redundant parts, fault detection and recovery procedures were added to increase their reliability and fault tolerance. Agile development approaches were followed in the design and verification of the satellite. The development process of the subsystems was iterative, with the prototype of each subsystem developed and qualified in quick iterations. A waterfall verification approach was followed in the Flatsat, EQM and FM integration [60]. The detailed design of each platform subsystem is presented in the subsequent subsections. The in-orbit performance of the major platform subsystems is reported in [27].
### 6.1 Electrical Power Subsystem

The EPS ensures power generation, conditioning, storage and distribution to each subsystem and payload [61]. The EPS was procured from the commercial partner Clyde Space (currently ÅAC Clyde). The solar panels were designed in-house in order to accommodate the conductive surface requirements of the EPB. A block diagram of the Aalto-1 EPS is presented in Fig. 7.

Figure 7: Block diagram of the EPS.

The incident solar radiation is converted to electrical power by the solar panels, developed at Aalto University [62]. The solar panel design featured a thermally conductive PCB. The in-house design provided freedom in sensor and payload placement and in the satellite structure design. The power from the panels is transferred to the Electrical Power System Control Board (EPSCB), where Battery Charge Regulators (BCRs) convert the input voltage from the solar panels to the battery charging voltage (6.2 V to 8.4 V). The Power Conditioning Modules (PCMs) are responsible for regulating the voltages produced by the solar panels and the battery unit. The dc-dc converters convert the voltage levels to those used by the subsystems and distribute the power to the Satellite Bus (SB). The major subsystems have dedicated power lines, which are controlled by switches located on the battery board and accessible through the stack connector. The operating voltages, standby and peak power consumption of the platform avionics and payloads are provided in Table 2.

Table 2: Aalto-1 Power budget.
| System | Details | Operating voltage (V) | Standby power (W) | Peak power (W) |
|---|---|---|---|---|
| TT&C | UHF | 12 | 0.2 | 1.55 |
| | S-band | 3.3 | 0 | 3.5 |
| | ADS | 12 | 0 | 7 |
| GPS | Active antenna | | 0.03 | 0.03 |
| | GPS receiver | | 0.015 | 0.1 |
| OBC | | | 0.25 | 0.55 |
| ADCS | Coils and electronics | 5 | 0.5 | 1.8 |
| | Sun sensors | 5 | 0 | 0.06 |
| Payloads | RADMON | 12, 5 | 0 | 1 |
| | EPB | 12, 5, 3.3 | 2.3 | 3 |
| | AaSI | 12, 5 | 0 | 4 |
| Total | | | 3.295 | 22.59 |

The EPS has several safety features implemented for increased reliability of the platform. It monitors the I2C bus lines for inactivity and erroneous behaviour (see Fig. 3), which, if detected, triggers a power cycle of the whole platform. The battery power level is monitored as well, and a low power mode is activated if the battery charge drops below a critical value. In this mode only the EPS is active, operating the battery charging circuits. Lastly, a timer feature, which starts a 30-minute countdown after the first EPS power-up, was set as a redundant antenna deployment trigger, in addition to the main dedicated countdown timers.

### 6.2 Attitude determination & control subsystem

The ADCS is the most critical subsystem for ensuring the pointing and spin modes required by the payloads. The Aalto-1 ADCS (iADCS100), provided by the commercial partners Berlin Space Technologies (BST) and Hyperion Technologies, is an integrated solution of attitude determination sensors and attitude control actuators. The attitude sensors include gyroscopes, magnetometers and a star tracker. The Sun sensors were developed in-house at Aalto University and integrated into the solar panels [63]. The attitude actuators include magnetorquer rods and reaction wheels. Aalto-1 was the first satellite carrying the iADCS100 attitude system, and Aalto students participated in its development. The FM of the Aalto-1 ADCS is shown in Fig. 8.

Figure 8: FM of the ADCS iADCS100 with star tracker, by Berlin Space Technologies GmbH and Hyperion Technologies.
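As a sanity check, the column totals of the Table 2 power budget can be reproduced by summing the per-line figures. The values below are transcribed directly from the table; the script itself is purely illustrative:

```python
# Cross-check of the Table 2 power budget.
# Each entry: (system, detail, standby W, peak W); values from the table.
BUDGET = [
    ("TT&C",    "UHF",                  0.2,   1.55),
    ("TT&C",    "S-band",               0.0,   3.5),
    ("TT&C",    "ADS",                  0.0,   7.0),
    ("GPS",     "Active antenna",       0.03,  0.03),
    ("GPS",     "GPS receiver",         0.015, 0.1),
    ("OBC",     "",                     0.25,  0.55),
    ("ADCS",    "Coils and electronics",0.5,   1.8),
    ("ADCS",    "Sun sensors",          0.0,   0.06),
    ("Payloads","RADMON",               0.0,   1.0),
    ("Payloads","EPB",                  2.3,   3.0),
    ("Payloads","AaSI",                 0.0,   4.0),
]

standby = round(sum(row[2] for row in BUDGET), 3)
peak = round(sum(row[3] for row in BUDGET), 3)
print(standby, peak)  # 3.295 22.59, matching the table totals
```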
### 6.3 Global positioning system subsystem

The GPS subsystem of Aalto-1, shown in Fig. 9, contains a Fastrax IT03 GPS receiver and an Adactus ADA-15S patch antenna. When operated, the GPS subsystem consumes approximately 160 mW of power [64, 52]. The main purpose of the subsystem has been to provide more accurate positioning than Two Line Element (TLE)-based solutions, for example during plasma brake operations. The Fastrax receiver was selected because the manufacturer was willing to provide the receiver without the usual altitude and velocity restrictions [65]. In the early 2010s, when a GPS subsystem was included in the Aalto-1 design, few GNSS subsystems for nanosatellites were available as commercial off-the-shelf products.

Figure 9: Aalto-1 S-band transmitter and GPS subsystem.

### 6.4 Telemetry, tracking & communication subsystem

A UHF transceiver was used as the primary radio on the Aalto-1 satellite. The UHF radio supports a transmission power of up to 1.2 W. The unit, shown in Fig. 12, is fully redundant, equipped with two cold-redundant TI CC1125-based transceivers and an MSP430 microcontroller (MSP430FR5729). It is capable of half-duplex bidirectional communication at 437.220 MHz. The UHF communication system is equipped with two dipole UHF antennas, each connected to one of the two redundant radios [53]. The OBC software and the arbiter can perform the switching from the active to the redundant radio. The UHF antenna deployment system, shown in Fig. 10, consists of a timer control board for antenna release and two L-shaped doors that keep the antennas stowed during launch. After the spacecraft is deployed, the antenna release mechanism burns the Dyneema strings, thereby deploying the antennas. Additionally, the redundant timer on the EPS can trigger the antenna deployment. The deployed antenna configuration is shown in Fig. 11.
The UHF beacon, containing a static Morse code message, is transmitted every two minutes by default.

Figure 10: Aalto-1 antenna deployment system and UHF-band antenna in stowed configuration.

Figure 11: Aalto-1 UHF antenna in deployed configuration.

Figure 12: Aalto-1 UHF-band cold-redundant transceiver.

Along with the UHF radio, Aalto-1 has an S-band transmitter used for high speed telemetry downlink. Because of regulations, the S-band transmission can be active only above the Aalto Ground Station. The S-band transmitter, featuring a single transceiver (TI CC2500) and a microcontroller (MSP430FR5739), is shown in Fig. 9. The communication frequency is 2.402 GHz with a design data rate of 500 kbps. The S-band communication system uses an in-house designed single circular polarization patch antenna and forms a secondary downlink channel [55].

### 6.5 Onboard computer

The Aalto-1 OBC consists of two cold-redundant 32-bit AT91RM9200 microcontrollers from Microchip. The architecture hosts a 256-Mbit Synchronous Dynamic Random Access Memory (SDRAM) volatile memory (AS4C16M16S), a parallel/NOR flash (S29JL064J), a dataflash (AT45DB642D), and a NAND flash (S34ML02G1). These different memories are used to store boot-loaders, kernel images and file systems. The architecture uses three different bus interfaces: I2C, UART and SPI. The UART, SPI and USB interfaces are supported by the processor itself, while I2C is handled by an external controller (PCA9665).

Figure 13: Aalto-1 On-Board Computer's flight spare.

The OBC includes several components that can be classified as watchdogs, the most important being the arbiter [57]. The MSP430-based arbiter selects which of the two processors is powered, thereby preventing mission failure due to a hard failure of one of the OBCs. In the arbitration logic, a full reboot is required to switch from the active to the redundant OBC.
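The cold-redundancy arbitration described here, where the arbiter switches to the redundant OBC when no heartbeat is received from the active one at power-up, can be sketched in a few lines. The class and names below are illustrative only and do not reflect the actual MSP430 firmware:

```python
# Minimal sketch of the arbiter's cold-redundancy logic as described in the
# text: since switching requires a full reboot, the selection decision is
# made at arbiter power-up, based on whether the active OBC's heartbeat was
# seen. All identifiers here are hypothetical, not flight code.

class Arbiter:
    def __init__(self):
        self.active = "OBC-A"  # one of the two cold-redundant processors

    def on_power_up(self, heartbeat_seen: bool) -> str:
        """Select which OBC to power at (re)boot time."""
        if not heartbeat_seen:
            # Hard failure of the active OBC: power the redundant one.
            self.active = "OBC-B" if self.active == "OBC-A" else "OBC-A"
        return self.active

arb = Arbiter()
print(arb.on_power_up(heartbeat_seen=True))   # OBC-A stays active
print(arb.on_power_up(heartbeat_seen=False))  # failover to OBC-B
```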
The switching procedure executes when the arbiter powers up and does not receive a heartbeat signal from the active OBC. Further details on the system description, arbitration logic, Failure Mode and Effects Analysis (FMEA) and error handling procedures can be found in [56]. A detailed block diagram of the OBC showing the data interfaces with the payloads and platform subsystems is shown in Fig. 3, whereas the flight spare model of the OBC is shown in Fig. 13. The OBC runs the Linux operating system and uses bash for scripting command sequences. At the time of its selection in 2010, Linux was not a common choice for satellite OBCs, but it has since become popular in small spacecraft [66, 59]. Due to the high complexity of the Linux operating system, the OBC software was thoroughly analysed and additionally strengthened against various identified failure scenarios [57]. Version control was used throughout software development, with regular commits to the GitHub repository.

### 6.6 Software

Several on-board data handling tools have been used in existing CubeSat designs [67]. The Aalto-1 on-board data handling and flight software is built around applications running on Linux, which was quite a new choice for nanosatellites at the design selection stage. The applications utilise a set of libraries to communicate with the satellite subsystems and the satellite internal data bus. A number of libraries were developed for the subsystems, the most prominent being libarbiter for the arbiter, libeps for communication with the EPS, libicp for communication with subsystems on I2C, and libradio and libsband for radio communication. A detailed description of the design choices and lessons learned from the Aalto-1 software design approach can be found in [59]. Linux is a feature-rich operating system with a mature, stable and time-proven core code base, which simplified the development of the flight logic and utilities.
The well-known Linux library ecosystem and APIs also ensured a proper separation of concerns. Nonetheless, Linux is complex software, which necessitated a thorough analysis to ensure reliability on the OBC [57]. During the system's boot procedure there is no possibility for intervention, so the boot procedure needed to be made fault tolerant by adding an emergency boot mode with reduced functionality and fewer dependencies. The memory storage was divided into sections with primary and recovery file systems. The Unsorted Block Images File System (UBIFS) was used; it supports wear leveling and, owing to its use of journals, is tolerant of power loss. On the overall system level, a number of software and hardware watchdog timers are used in conjunction with the arbiter heartbeat output. The bus and radio communication libraries are strengthened with appropriate checksums and non-blocking procedures.

### 6.7 Thermo mechanical subsystem

A long and a short standard PC-104 stack route power and data signals among the platform subsystems and payloads. The long stack, required by a few subsystems, is 2U long, whereas the short stack is 1U long. As evident from Fig. 2, the orientation of the subsystems on one unit differs from those on the other two units; therefore a stack interface board was used. The in-house built structure is compatible with the standard dimensions and provides the mechanical interface to the internal subsystems, solar panels and antenna deployment system.
The breakdown of the total spacecraft mass is provided in Table 3.

Table 3: Aalto-1 mass budget.

| System | Details | Quantity | Unit mass (g) | Total mass (g) |
|---|---|---|---|---|
| Structure | 3U and harness | 1 | 1180 | 1180 |
| TT&C | UHF antenna | 4 | 1 | 4 |
| | UHF transceiver | 1 | 90 | 90 |
| | S-band & GPS board | 1 | 75 | 75 |
| | S-band antenna | 1 | 50 | 50 |
| EPS | Solar panels | 4 | 130 | 520 |
| | Control board | 1 | 83 | 83 |
| | Batteries | 1 | 258 | 258 |
| OBC | | 1 | 75 | 75 |
| ADCS | Coils and electronics | 1 | 360 | 360 |
| | Sun sensors | 6 | 10 | 60 |
| Payloads | RADMON | 1 | 360 | 360 |
| | EPB | 1 | 300 | 300 |
| | AaSI | 1 | 600 | 600 |
| Total | | | 3572 | 4015 |

The spacecraft used a passive thermal control system [68]. The structure rails were anodized black, since this provides an optimal emissivity/absorptivity ratio. The electrically conductive surfaces were masked before anodization and later chromate coated. The unused areas of the solar cell PCBs were gold plated. In order to increase the thermal conductivity from the solar cells to the structure, indium foil washers were placed in the screw joints. For better thermal conductivity, many grounding vias were also placed in the solar cell footprints. The telemetry data of Aalto-1 reveals that the equilibrium temperature is well maintained inside the spacecraft.

## 7 Satellite integration & testing

The model philosophy of the project followed a Flatsat, EQM and FM approach. This approach was selected mainly because all subsystems and payloads were newly developed items, and early verification of them was seen as highly beneficial. Additionally, lessons learned from CubeSat projects at other universities often highlighted the importance of leaving a significant amount of time for the system level integration and testing campaign prior to launch.
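The totals of the Table 3 mass budget can likewise be cross-checked by summing quantity times unit mass. The entries below are transcribed from the table; the script is purely illustrative:

```python
# Cross-check of the Table 3 mass budget.
# Each entry: (item, quantity, unit mass in grams); values from the table.
MASS = [
    ("3U structure and harness",    1, 1180),
    ("UHF antenna",                 4, 1),
    ("UHF transceiver",             1, 90),
    ("S-band & GPS board",          1, 75),
    ("S-band antenna",              1, 50),
    ("Solar panels",                4, 130),
    ("EPS control board",           1, 83),
    ("Batteries",                   1, 258),
    ("OBC",                         1, 75),
    ("ADCS coils and electronics",  1, 360),
    ("Sun sensors",                 6, 10),
    ("RADMON",                      1, 360),
    ("EPB",                         1, 300),
    ("AaSI",                        1, 600),
]

unit_total_g = sum(unit for _, _, unit in MASS)
total_g = sum(qty * unit for _, qty, unit in MASS)
print(unit_total_g, total_g)  # 3572 4015, matching the table totals
```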
Figure 14: Aalto-1 Flatsat model integrating (starting from top left, continuing clockwise) AaSI, EPB, UHF, S-band with patch antenna and GPS, OBC and the commercial ADCS.

Most of the testing performed prior to system level integration was done at the subsystem level by each development team. Usually, development kits and other test equipment were used rather than other satellite subsystems, as their development was performed concurrently by separate teams. The first full interface tests were performed on the Flatsat model, which is shown in Fig. 14. A number of interface mismatches were identified and resolved at this stage. While this is rather typical for any project with many concurrent developments, earlier testing of the system as a whole, had it been possible, would probably have reduced the redesign and manufacturing effort required at the EQM level. As most of the satellite subsystems were developed in-house, making small modifications was, however, relatively easy at this point in the project.

A full environmental qualification test campaign was performed with the satellite EQM. No major issues were found during these tests, which raised the confidence in the system level design. However, not all satellite functionalities had been implemented at this point, and thus they were not fully reference tested prior to and during the environmental testing campaign. This left some uncertainties to be verified later in the FM test campaign, and highlighted the importance of thorough reference testing and reporting prior to environmental tests.

The satellite FM was built soon after the EQM environmental test campaign, incorporating some necessary minor modifications. Testing with the EQM also continued throughout the FM campaign, which allowed simpler software development and testing on the system level, as well as tests that could have caused unnecessary stress to the FM, for example long duration durability testing, outdoor long-range testing and magnetic testing.
Some issues were still found using the EQM, and it was possible to implement the necessary fixes in the FM. One such issue was related to a component in the Telemetry/Telecommand (TM/TC) radio; it might not have been noticed without the long duration durability testing and could possibly have caused mission failure soon after launch. A full acceptance test campaign was performed with the satellite FM. The FM testing consisted of pre-built test scripts used to command the subsystems and receive the respective telemetries. No major issues were found during these tests, as was expected thanks to the successful EQM tests.

Due to the late readiness of the third-party ADCS and its flight software, it was not possible to perform a sufficiently thorough functional or performance test campaign for it. Testing of the ADCS algorithms was planned to be performed using a hardware-in-the-loop approach, which did not work as expected through the satellite main communication bus due to communication delays. Instead, access to the ADCS internal sensor and actuator bus would have been required, but this was not possible at that point. This highlighted the importance of early delivery of third-party systems with final and fully tested flight software, and any new development by a third party should be considered a high risk.

The Aalto-1 launch campaign started after the assembly, integration and verification stage. Several issues had to be addressed even when the satellite was already in the launch pod. For example, the batteries had drained, and charging them was difficult because no charging interface was provided on the access port. The FM batteries were eventually charged with a solar lamp transported to the launch pod. A photograph of the integration of the FM into the commercial orbital deployer is shown in Fig. 15.
Figure 15: Aalto-1 integration with the deployer.

In the end, the selected model philosophy proved to be a very suitable approach for a project with a significant number of new development items. As in many other CubeSat projects, schedule issues made it difficult to perform system level testing in the most ideal and thorough way possible. Emphasizing system level testing from the very beginning of development can avoid some of the issues otherwise encountered at later stages of a project. A minimum-viable-product approach, typical in software engineering, has been followed and found beneficial in projects after Aalto-1: the highest priority functions of the satellite are implemented and tested as early as possible on the system level and later incremented with additional features in order of priority towards full satellite integration and testing. Such an approach, however, requires agile in-house development and close cooperation with third parties. Ultimately, the most suitable development approach for any CubeSat project is highly dependent on many aspects, such as the available resources, experience, the number of development items and the use of in-house or third-party systems. The development approach should be carefully planned only after such aspects have been identified.

## 8 Ground segment & services

The ground segment originally used an Icom IC-910H radio transceiver and a relay-based pre-amplifier that was designed for voice communication. Reception was implemented using an RTL-SDR. Five months after launch, the setup was updated with a newly developed solid state switched pre-amplifier due to problems with the relay-switched pre-amplifier. In 2018, the transceiver was changed to a USRP B200 Software Defined Radio (SDR). To ease the operation of multiple missions from the ground station, the digitized radio signal is distributed to multiple programs through a shared memory buffer.
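The idea of distributing digitized samples to multiple programs through shared memory can be illustrated with a toy single-producer buffer. Everything below (names, the interleaved 16-bit I/Q sample format, the buffer size, and the use of Python's `multiprocessing.shared_memory`) is invented for illustration and is unrelated to the actual ground station software:

```python
# Toy sketch of a shared-memory sample ring, loosely modelled on the idea of
# one SDR producer feeding several consumer programs. Assumes Python 3.8+.
from multiprocessing import shared_memory
import struct

BUF_SAMPLES = 1024             # illustrative ring size
SAMPLE = struct.Struct("<hh")  # interleaved 16-bit I/Q, an assumed format

def create_buffer():
    """Producer side: allocate a shared ring of BUF_SAMPLES I/Q pairs."""
    return shared_memory.SharedMemory(create=True,
                                      size=BUF_SAMPLES * SAMPLE.size)

def write_sample(shm, index, i, q):
    """Producer: place one I/Q pair at a ring position."""
    SAMPLE.pack_into(shm.buf, (index % BUF_SAMPLES) * SAMPLE.size, i, q)

def read_sample(shm, index):
    """Consumer: read back an I/Q pair (no locking in this toy version)."""
    return SAMPLE.unpack_from(shm.buf, (index % BUF_SAMPLES) * SAMPLE.size)

shm = create_buffer()
try:
    write_sample(shm, 0, 100, -42)
    print(read_sample(shm, 0))  # (100, -42)
finally:
    shm.close()
    shm.unlink()
```

A real implementation would add synchronization (e.g. a write cursor and per-consumer read cursors) so that slow consumers do not read torn samples.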
With the OpenWebRX software, the spectrum between 431 MHz and 439 MHz can be monitored using a web browser. Brushed motors in the antenna rotator caused strong, broadband interference close to the antenna while rotating during a satellite pass. The issue was mitigated with upgraded rotators and an upgraded controller. A block diagram of the relevant parts of the currently operational ground station is presented in Fig. 16.

Figure 16: A block diagram of the ground station.

The ground segment is controlled using the Mission Control Center (MCC) software developed by the Aalto-1 team. The back-end is based on a PostgreSQL database that stores every received packet with a timestamp. Furthermore, the housekeeping system stores every housekeeping value separately with a timestamp, which has proven cumbersome and ineffective due to the storage of a large set of values in the database. For upcoming missions, we plan to use a database structure with one table per subsystem, where one row holds an entire housekeeping package of that subsystem. This should reduce query times substantially, as not every value has to be queried independently. We have developed a Graphical User Interface (GUI) for the MCC based on Qt. Qt was chosen because its application programming interface (API) does not change very quickly, which ensures long-term compatibility. The GUI shows information about the current position of the satellite, satellite passes, received and transmitted packets, and the most recent housekeeping data. In addition, the history of a single housekeeping value can be plotted.

## 9 Conclusion

The Aalto-1 project's key scientific, technological and educational objectives were achieved. The platform and payloads were successfully designed, developed and integrated, with many student teams getting hands-on learning experience. The integration, testing, verification and launch activities were successfully accomplished.
The subsystems and payloads demonstrated partial mission success, with many lessons learned which are summarized in an accompanying paper. This project started a new era of space activities in Finland. A number of new space start-ups were founded as an outcome of this project, and (former) Aalto satellites group members have started and joined a number of new missions, such as the ICEYE SAR satellite constellation, Aalto-3, Reaktor Hello World, FORESAIL [69] and Comet Interceptor [70]. The Aalto-1 design has been beneficial in the space technology curriculum and a source of inspiration for new students in the space technology lab.

## Acknowledgements

The RADMON team thanks P.-O. Eriksson and S. Johansson at the Accelerator Laboratory, Åbo Akademi University, for operating the cyclotron. Testing work at the University of Jyvaskyla has been supported by the Academy of Finland under the Finnish Centre of Excellence Programmes 2006-2011 and 2012-2017 (Project Nos. 213503 and 2513553, Nuclear and Accelerator Based Physics), and by the European Space Agency (ESA/ESTEC Contract 18197/04/NL/CP). Aalto University and its Multidisciplinary Institute of Digitalisation and Energy are thanked for Aalto-1 project funding, as are Aalto University, Nokia, SSF, the University of Turku and RUAG Space for supporting the launch of Aalto-1.

## References

* CalPoly [2009] CalPoly, Cubesat design specification, The CubeSat Program, California Polytechnic State … 8651 (2009) 22.
* Frischauf et al. [2018] N. Frischauf, R. Horn, T. Kauerhoff, M. Wittig, I. Baumann, E. Pellander, O. Koudelka, NewSpace: New Business Models at the Interface of Space and Digital Economy: Chances in an Interconnected World, New Space (2018).
* Peters [2015] G. Peters, Utilizing commercial best practices for success in NewSpace, Microwave Journal (2015).
* Tkatchova [2018] S. Tkatchova, Emerging Space Markets, 2018.
* Salt [2013] D. Salt, NewSpace - delivering on the dream, Acta Astronautica (2013).
* Bouwmeester and Guo [2010] J. Bouwmeester, J. Guo, Survey of worldwide pico- and nanosatellite missions, distributions and subsystem technology, Acta Astronautica 67 (2010) 854–862.
* Das et al. [????] A. Das, R. Cobb, M. Stallard, Techsat 21 - A revolutionary concept in distributed space based sensing, p. 1.
* Pang et al. [2015] C. K. Pang, A. Kumar, C. H. Goh, C. V. Le, Nano-satellite swarm for sar applications: design and robust scheduling, IEEE Transactions on Aerospace and Electronic Systems 51 (2015) 853–865.
* Selva and Krejci [2012] D. Selva, D. Krejci, A survey and assessment of the capabilities of cubesats for earth observation, Acta Astronautica 74 (2012) 50–68.
* Chaudhry and Mishra [2015] V. Chaudhry, I. Mishra, Zenith: A nano-satellite for atmospheric monitoring, 2015.
* Santilli et al. [2018] G. Santilli, C. Vendittozzi, C. Cappelletti, S. Battistini, P. Gessini, CubeSat constellations for disaster management in remote areas, Acta Astronautica 145 (2018).
* Seyedabadi et al. [2020] M. E. Seyedabadi, M. Falanga, M. Azam, N. Baresi, R. Fleron, V. Jantarachote, v. A. Juarez Ortiz, J. J. Julca Yaya, M. Langer, S. Manuthasna, N. Martinod, M. R. Mughal, M. Noman, J. Park, A. Pimnoo, J. Praks, L. Reyneri, A. Sanna, T. Sisman, J. Some, T. Ulambayar, Y. Xiaozhou, D. Xiaolong, L. Baldis, Science Missions Using CubeSats, Chinese Journal of Space Science 40 (2020) 443.
* Foust [2011] J. Foust, The evolving ecosystem of NewSpace, https://www.thespacereview.com/article/1906/1, 2011. [Online; accessed 21-Feb-2020].
* Sweeting [2018] M. N. Sweeting, Modern small satellites-changing the economics of space, Proceedings of the IEEE 106 (2018) 343–361.
* Poghosyan and Golkar [2017] A.
Poghosyan, A. Golkar, CubeSat evolution: Analyzing CubeSat capabilities for conducting science missions, 2017. * Straub et al. [2019] J. Straub, T. Villela, C. A. Costa, A. M. Brandão, F. T. Bueno, R. Leonardi, Towards the Thousandth CubeSat: A Statistical Overview, International Journal of Aerospace Engineering (2019) 13. * Crusan and Galica [2019] J. Crusan, C. Galica, NASA’s CubeSat Launch Initiative: Enabling broad access to space, Acta Astronautica (2019). * Ali et al. [2020] A. Ali, H. Ali, J. Tong, M. R. Mughal, S. U. Rehman, Modular design and thermal modeling techniques for the power distribution module (pdm) of a micro satellite, IEEE Access 8 (2020) 160723–160737. * Rizwan Mughal et al. [2012] M. Rizwan Mughal, J. De Los Rios, L. Reyneri, A. Ali, Scalable plug and play tiles for modular nanosatellites, Proceedings of the International Astronautical Congress, IAC 6 (2012) 4631–4638. * Mughal [2014] M. Mughal, Student research highlight smart panel bodies for modular small satellites, IEEE Aerospace and Electronic Systems Magazine 29 (2014) 38–41. * Mughal et al. [2019] M. R. Mughal, A. Ali, J. Praks, L. M. Reyneri, Intra-spacecraft optical communication solutions using discrete transceiver, International Journal of Satellite Communications and Networking 37 (2019) 588–600. * Ali et al. [2018] A. Ali, S. A. Khan, M. Usman Khan, H. Ali, M. Rizwan Mughal, J. Praks, Design of modular power management and attitude control subsystems for a microsatellite, International Journal of Aerospace Engineering (2018). * Mughal et al. [2014] M. R. Mughal, A. Ali, L. M. Reyneri, Plug-and-play design approach to smart harness for modular small satellites, Acta Astronautica 94 (2014) 754–764. * Ali et al. [2014] A. Ali, M. R. Mughal, H. Ali, L. Reyneri, Innovative power management, attitude determination and control tile for cubesat standard nanosatellites, Acta Astronautica 96 (2014) 116–127. * Jaan Praks [tion] M. R. M. e. 
Jaan Praks, Aalto-1, multipayload cubesat: design, integration and launch, Acta Astronautica (2020 (submitted for publication)). * Slavinskis et al. [2015] A. Slavinskis, M. Pajusalu, H. Kuuste, E. Ilbis, T. Eenmäe, I. Sünter, K. Laizans, H. Ehrpais, P. Liias, E. Kulu, et al., ESTCube-1 in-orbit experience and lessons learned, IEEE Aerospace and Electronic Systems Magazine 30 (2015) 12–22. * Kestilä et al. [2013] A. Kestilä, T. Tikka, P. Peitso, J. Rantanen, A. Näsilä, K. Nordling, H. Saari, R. Vainio, P. Janhunen, J. Praks, , M. Hallikainen, Aalto-1 nanosatellite technical description and mission objectives, Geoscientific Instrumentation, Methods and Data Systems 32 (2013) 71–92. * Iakubivskyi et al. [2019] I. Iakubivskyi, P. Janhunen, J. Praks, V. Allik, K. Bussov, B. Clayhills, J. Dalbins, T. Eenmäe, H. Ehrpais, J. Envall, S. Haslam, E. Ilbis, N. Jovanovic, E. Kilpua, J. Kivastik, J. Laks, P. Laufer, M. Merisalu, M. Meskanen, R. Märk, A. Nath, P. Niemelä, M. Noorma, M. R. Mughal, S. Nyman, M. Pajusalu, M. Palmroth, A. S. Paul, T. Peltola, M. Plans, J. Polkko, Q. S. Islam, A. Reinart, B. Riwanto, J. Sate, I. Sünter, M. Tajmar, E. Tanskanen, H. Teras, P. Toivanen, R. Vainio, M. Väänänen, A. Slavinskis, Coulomb drag propulsion experiments of ESTCube-2 and FORESAIL-1, Acta Astronautica (2019). * Slavinskis et al. [2014] A. Slavinskis, U. Kvell, E. Kulu, I. Sünter, H. Kuuste, S. Lätt, K. Voormansik, M. Noorma, High spin rate magnetic controller for nanosatellites, Acta Astronautica 95 (2014) 218–226. * Khurshid et al. [2014] O. Khurshid, T. Tikka, J. Praks, M. Hallikainen, Accommodating the plasma brake experiment on-board the Aalto-1 satellite, Proc. Estonian Acad. Sci. 63(2S) (2014) 258–266. * Aalto-yliopisto and Multidisciplinary Institute of Digitalisation and Energy (2013) [MIDE] Aalto-yliopisto, Multidisciplinary Institute of Digitalisation and Energy (MIDE), Mide - multidisciplinary institute of digitalisation and energy 2008-2013, 2013\. * Praks et al. 
# The Langevin Monte Carlo algorithm in the non-smooth log-concave case Joseph Lehec ###### Abstract We prove non-asymptotic polynomial bounds on the convergence of the Langevin Monte Carlo algorithm in the case where the potential is a convex function which is globally Lipschitz on its domain, typically the maximum of a finite number of affine functions on an arbitrary convex set. In particular the potential is not assumed to be gradient Lipschitz, in contrast with most existing works on the topic. Keywords: Statistical Sampling, Markov Chain Monte Carlo, Convexity. MS Classification: 62D05 (68W20, 65C05, 52A23) ## 1 Introduction ### Setting. Sampling from a high-dimensional log-concave probability measure is a problem dating back to the early nineties and the seminal work of Dyer, Frieze and Kannan [13], and it has many applications in various fields such as Bayesian statistics, convex optimization and statistical inference. This problem is typically addressed via Markov Chain Monte Carlo methods, but there is a large variety of those: Metropolis-Hastings type random walks (ball walk), Glauber-like dynamics (hit and run) or Hamiltonian Monte Carlo. In this article, we will consider the so-called Langevin algorithm, which is defined as follows. Given a probability measure $\mu$ on $\mathbb{R}^{n}$ we let $\varphi$ be its potential, namely $\mu$ has density $\mathrm{e}^{-\varphi}$ with respect to the Lebesgue measure. The Langevin diffusion associated to $\mu$ is the solution $(X_{t})$ of the following stochastic differential equation $dX_{t}=dB_{t}-\frac{1}{2}\nabla\varphi(X_{t})\,dt,$ (1) where $(B_{t})$ is a standard $n$-dimensional Brownian motion. The Langevin algorithm is the Euler scheme associated to this diffusion: Given a time step parameter $\eta$ we let $(\xi_{k})_{k\geq 1}$ be a sequence of i.i.d.
centered Gaussian vectors with covariance $\eta\,\text{Id}$ and set $x_{k+1}=x_{k}+\xi_{k+1}-\frac{\eta}{2}\nabla\varphi(x_{k}).$ (2) We shall focus on the _log-concave_ case, namely the case where the potential $\varphi$ is convex. One originality of this work is that we will consider the _constrained_ case, allowing the measure $\mu$ to be supported on a set $K$ different from $\mathbb{R}^{n}$. In other words the potential $\varphi$ is allowed to take the value $+\infty$ outside some set $K$. Notice that the log-concavity assumption implies that $K$ is convex. In the constrained case the Langevin diffusion (1) becomes $dX_{t}=dB_{t}-\frac{1}{2}\nabla\varphi(X_{t})\,dt-d\Phi_{t},$ (3) where $(\Phi_{t})$ is a process that repels $(X_{t})$ inward when it reaches the boundary of $K$, see the next section for a precise definition. The discretization then becomes $x_{k+1}=\mathcal{P}\left(x_{k}+\xi_{k+1}-\frac{\eta}{2}\nabla\varphi(x_{k})\right),$ (4) where $\mathcal{P}$ is the projection on $K$: For $x\in\mathbb{R}^{n}$ the point $\mathcal{P}(x)$ is the closest point to $x$ in $K$. This is the algorithm we will study throughout the article. It was first introduced in our joint paper with Bubeck and Eldan [5] and to the best of our knowledge it has not been investigated since. The second originality of this work is that we do not assume the potential $\varphi$ to be smooth. More precisely we will assume that the gradient of $\varphi$ (or rather its subdifferential) is uniformly bounded on $K$, but we do not assume it to be Lipschitz or even continuous. Let us point out that this is by no means an exotic situation: the reader could think for instance of $\varphi$ being the maximum of a finite number of affine functions on $K$. We do not make any assumption whatsoever on the convex set $K$. The drawback of this very generic situation and of our approach is that we are only able to get convergence estimates in Wasserstein distance.
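The projected scheme (4) is short to implement. Below is a minimal NumPy sketch (our own illustration, not from the paper) for exactly the situation highlighted above: $\varphi$ a maximum of finitely many affine functions, so that its subgradient is discontinuous, with $K$ a Euclidean ball. The subgradient is taken along a maximizing affine piece; at ties this picks an arbitrary element of the subdifferential, not necessarily the minimal-norm one used later in the paper. All function names and parameter values are ours.

```python
import numpy as np

def project_ball(x, R):
    """Euclidean projection onto the ball of radius R (the map P in (4),
    here with K a ball)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= R else (R / nrm) * x

def subgrad_max_affine(x, A, b):
    """A subgradient of phi(x) = max_i (<A[i], x> + b[i]): the slope of a
    maximizing affine piece.  At ties np.argmax picks the first one, which
    is a valid (though not necessarily minimal-norm) subgradient."""
    i = np.argmax(A @ x + b)
    return A[i]

def projected_langevin(x0, A, b, R, eta, n_steps, rng):
    """Iterates (4): x_{k+1} = P(x_k + xi_{k+1} - (eta/2) grad phi(x_k)),
    with xi_{k+1} ~ N(0, eta * Id)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        xi = rng.normal(scale=np.sqrt(eta), size=x.shape)
        x = project_ball(x + xi - 0.5 * eta * subgrad_max_affine(x, A, b), R)
        traj.append(x.copy())
    return np.array(traj)

rng = np.random.default_rng(0)
# phi(x) = max(x_1, -x_1, x_2, -x_2) = ||x||_inf, a non-smooth convex potential.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.zeros(4)
traj = projected_langevin(np.zeros(2), A, b, R=1.0, eta=0.01, n_steps=1000, rng=rng)
```

By construction every iterate lies in $K$; the projection step never lets the chain leave the ball.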
Recall that the Wasserstein distance $W_{2}$ between two probability measures $\mu$ and $\nu$ is defined as $W_{2}^{2}(\mu,\nu)=\inf_{X\sim\mu,Y\sim\nu}\left\{\mathbb{E}[|X-Y|^{2}]\right\}.$ By a slight abuse of notation, if $X,Y$ are random vectors we also write $W_{2}(X,Y)$ for the Wasserstein distance between the law of $X$ and that of $Y$. ### Main results. Our main result is the following bound between the Langevin algorithm (4) after $k$ steps and its corresponding point $X_{k\eta}$ in the true Langevin diffusion (3). ###### Theorem 1. Assume that $\mu$ is log-concave, with globally Lipschitz potential $\varphi$ on its support $K$, and let $L$ be the Lipschitz constant. Assume that the time step $\eta$ satisfies $\eta<nL^{-2}$ and suppose that the Langevin algorithm and diffusion are initiated at the same point $x_{0}$. Then for every integer $k$ we have $\frac{1}{n}W_{2}^{2}(X_{k\eta},x_{k})\leq A\,k\,\eta^{3/2}$ (5) where $A=(2\mathrm{e}^{1/2}+1)\frac{(1+\sigma_{0})(n+2\log k)^{1/2}}{r_{0}}+\frac{7}{6}\,\frac{L}{n^{1/2}},$ (6) and $r_{0}=d(x_{0},\partial K)\quad\text{and}\quad\sigma_{0}=\frac{1}{n}\left(\varphi(x_{0})-\inf_{y\in K}\{\varphi(y)\}\right).$ (7) ###### Remark. The transport cost $W_{2}^{2}$ behaves additively when taking tensor products, so the Wasserstein distance between any two probability measures on $\mathbb{R}^{n}$ is typically of order $\sqrt{n}$. Therefore $\frac{1}{n}W_{2}^{2}$ is of order $1$, which explains why we wrote the theorem this way. The reader should thus have in mind that the theorem provides some non-trivial information as soon as the right-hand side of (5) is smaller than some small constant $\varepsilon$. The result depends on the starting point via the parameters $r_{0}$ and $\sigma_{0}$. In order to get a meaningful bound the algorithm should not be initiated too close to the boundary of $K$ or at a point where the potential $\varphi$ is too large.
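To get a feel for the statement, the bound (5) is purely explicit and easy to evaluate. The helper below (illustrative only; the function name is ours) computes $A$ from (6) and the right-hand side of (5) from the quantities $n$, $L$, $k$, $\eta$, $r_{0}$, $\sigma_{0}$ defined above.

```python
import math

def theorem1_bound(n, L, k, eta, r0, sigma0):
    """Right-hand side of (5): A * k * eta^{3/2}, with A as in (6) and
    r0, sigma0 as in (7).  Theorem 1 assumes eta < n * L^{-2}."""
    assert eta < n / L**2, "Theorem 1 requires eta < n * L^{-2}"
    A = ((2 * math.e**0.5 + 1) * (1 + sigma0) * (n + 2 * math.log(k))**0.5 / r0
         + (7.0 / 6.0) * L / n**0.5)
    return A * k * eta**1.5
```

Note that the bound degrades (roughly like $k\,\eta^{3/2}$, up to the mild $\log k$ factor inside $A$) as the number of steps grows, which is why it must later be balanced against the mixing of the continuous diffusion.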
Let us also point out that the theorem remains valid when there is no support constraint, namely when $K=\mathbb{R}^{n}$. One just replaces $r_{0}$ by $+\infty$, so that $A=O(Ln^{-1/2})$ in this case. Let us also comment on the parameter $\sigma_{0}$. Obviously $\sigma_{0}=0$ if the potential is minimal at $x_{0}$. Fradelizi’s theorem [16, Theorem 4] asserts that if $\mu$ is log-concave on $\mathbb{R}^{n}$ with density $f$ and has its barycenter at $x_{0}$ then $\sup_{x\in\mathbb{R}^{n}}\{f(x)\}\leq \mathrm{e}^{n}f(x_{0}).$ In terms of the parameter $\sigma_{0}$ this means that if $\mu$ has its barycenter at $x_{0}$ then $\sigma_{0}\leq 1$. Since $\varphi$ is assumed to be Lipschitz with constant $L$, if $x_{0}$ is at $O(nL^{-1})$ distance from the barycenter then again $\sigma_{0}$ is of order $1$. In general we shall think of $\sigma_{0}$ as a parameter of order $1$. Also we are never going to take more than $\mathrm{poly}(n)$ steps so $\log k$ will always be negligible compared to $n$. Under the previous assumptions the parameter $A$ thus satisfies $A=O\left(\max\left(\frac{n^{1/2}}{r_{0}};\frac{L}{n^{1/2}}\right)\right).$ In order to estimate the complexity of the Langevin algorithm, we need to combine the previous theorem with some estimate on the speed of convergence of the Langevin diffusion $(X_{t})$ towards its equilibrium measure $\mu$. For this we shall use two functional inequalities, the Poincaré inequality and the logarithmic Sobolev inequality.
Recall that the measure $\mu$ is said to satisfy the logarithmic Sobolev inequality if for every probability measure $\nu$ on $\mathbb{R}^{n}$ we have $D(\nu\mid\mu)\leq\frac{C_{LS}}{2}\,I(\nu\mid\mu)$ (8) where $D(\nu\mid\mu)$ and $I(\nu\mid\mu)$ denote respectively the relative entropy and the relative Fisher information of $\nu$ with respect to $\mu$: $D(\nu\mid\mu)=\int_{\mathbb{R}^{n}}\log\left(\frac{d\nu}{d\mu}\right)\,d\nu\quad\text{and}\quad I(\nu\mid\mu)=\int_{\mathbb{R}^{n}}\left|\nabla\log\left(\frac{d\nu}{d\mu}\right)\right|^{2}\,d\nu.$ The smallest constant $C_{LS}$ for which (8) holds true is called the log-Sobolev constant of $\mu$. The factor $\frac{1}{2}$ is just a matter of convention: with this normalization the log-Sobolev constant of the standard Gaussian measure is $1$, in any dimension. It is well-known that the log-Sobolev inequality is stronger than the Poincaré inequality. More precisely, letting $C_{P}$ be the best constant in the Poincaré inequality: $\mathrm{var}_{\mu}(f)\leq C_{P}\int_{\mathbb{R}^{n}}|\nabla f|^{2}\,d\mu,$ we have $C_{P}\leq C_{LS}$. ###### Theorem 2. Again assume that $\mu$ is log-concave with globally Lipschitz potential on its support, with Lipschitz constant $L$. Let $x_{0}$ be a point in the support of $\mu$ and recall the definition (7) of $\sigma_{0}$ and $r_{0}$. Assume in addition that the measure $\mu$ satisfies the log-Sobolev inequality with constant $C_{LS}$. Then after $k$ steps of the Langevin algorithm started at $x_{0}$ with time step parameter $\eta<nL^{-2}$ we have $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq 2B\,\mathrm{e}^{-k\eta/2C_{LS}}+2A\,k\eta^{3/2}$ where again $A$ is given by (6) and $B=4C_{LS}\left(1+\log\left(\frac{\max(C_{LS},1)\,n}{\min(r_{0},1)}\right)+\sigma_{0}+\frac{L}{n}\right).$ ###### Remark. Note that we initiate the Langevin algorithm at a Dirac point mass, we do not need any _warm start_ hypothesis. The starting point only plays a role through the parameters $r_{0}$ and $\sigma_{0}$.
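Since Theorem 2 is stated in the Wasserstein metric, one may wonder how such a bound could be checked empirically. In dimension one this is simple: the optimal $W_{2}$ coupling between two empirical measures with the same number of equally weighted atoms is the monotone one, so it suffices to sort both samples and match them in order. A small sketch of this standard fact (ours, not part of the paper):

```python
import numpy as np

def w2_empirical_1d(xs, ys):
    """W2 distance between two 1-d empirical measures with len(xs) == len(ys)
    atoms of equal weight.  In dimension one the optimal transport plan is
    the monotone (sorted) matching, so no optimization is needed."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.sqrt(np.mean((xs - ys) ** 2)))
```

For instance, shifting a sample by a constant $c$ moves it by exactly $|c|$ in $W_{2}$, which the sorted matching recovers.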
Let us describe what the theorem gives in terms of the complexity of the Langevin algorithm. Say we want $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\varepsilon$ for some small $\varepsilon$. The first term of the right-hand side is a little bit intricate, so let us assume that all parameters of the problem are at most polynomial in the dimension. Then that term is just $\mathrm{poly}(n)\exp(-k\eta/2C_{LS})$, which is negligible as soon as $k\eta=\Omega(C_{LS}\log n)$. Let us also assume that $\sigma_{0}=O(1)$ (see the discussion above). Then the theorem shows that choosing $\eta=\Theta^{*}\left(\frac{\varepsilon^{2}}{C_{LS}^{2}}\min\left(\frac{r_{0}^{2}}{n},\frac{n}{L^{2}}\right)\right)$ and running the algorithm for $k=\Theta^{*}\left(\frac{C_{LS}^{3}}{\varepsilon^{2}}\max\left(\frac{n}{r_{0}^{2}};\frac{L^{2}}{n}\right)\right)$ steps produces a point $x_{k}$ satisfying $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\varepsilon$. The notation $\Theta^{*}$ hides universal constants as well as possible $\mathrm{polylog}(n)$ dependencies; this is a common practice in this field. Note in particular that if we treat all parameters other than the dimension as constants, then we already get non-trivial information after a number of steps of the algorithm which is nearly linear in the dimension. Of course not every log-concave measure satisfies log-Sobolev, simply because log-Sobolev implies sub-Gaussian tails. However there are a number of interesting cases in which the log-Sobolev inequality is known to hold true, which we list below. 1. If the potential $\varphi$ is $\alpha$-uniformly convex for some $\alpha>0$, in the sense that $x\mapsto\varphi(x)-\frac{\alpha}{2}|x|^{2}$ is convex, then $\mu$ satisfies log-Sobolev with constant $1/\alpha$. This is the celebrated Bakry-Émery criterion, see [1]. See also [3] for an alternate proof based on the Prékopa-Leindler inequality. 2.
If $\mu$ is log-concave and is supported on a ball of radius $R$, then $\mu$ satisfies log-Sobolev with constant $R^{2}$, up to a universal constant. This follows trivially from E. Milman’s result that, within the class of log-concave measures, Gaussian concentration and the log-Sobolev inequality are equivalent, see [21, Theorem 1.2.], or [18, Theorem 2]. 3. If $\mu$ is log-concave, supported on a ball of radius $R$ and isotropic, in the sense that its covariance matrix is the identity matrix, then Lee and Vempala [19] have shown that $\mu$ satisfies log-Sobolev with constant $R$, up to a universal factor. Note that the isotropy condition implies that $R\geq\sqrt{n}$, so this improves greatly upon the previous result in the isotropic case. In the first case, notice that since the potential is at the same time globally Lipschitz and uniformly convex, the support of $\mu$ must be bounded. Actually if the potential is globally Lipschitz it cannot grow fast enough at infinity to ensure log-Sobolev. So if we insist on assuming that the potential is Lipschitz and on using log-Sobolev then we have to assume that the support is bounded. One way around this issue is to use the Poincaré inequality rather than log-Sobolev. Indeed every log-concave measure satisfies the Poincaré inequality. Kannan, Lovász and Simonovits [17] proved that the Poincaré constant of an isotropic log-concave measure on $\mathbb{R}^{n}$ is $O(n)$ and conjectured that it should actually be bounded. This conjecture, which was the major open problem in the field of asymptotic convex geometry, was recently nearly solved by Yuansi Chen [8], who proved an $n^{o(1)}$ bound for the Poincaré constant of an isotropic log-concave vector in dimension $n$. The result of Chen relies on a technique invented by Eldan [14] which was also used by Lee and Vempala [19] to prove an $O(\sqrt{n})$ bound for the KLS constant, as well as the aforementioned log-Sobolev result.
Recall that if $\nu$ is a probability measure, absolutely continuous with respect to $\mu$, the chi-square divergence of $\nu$ with respect to $\mu$ is defined as $\chi^{2}(\nu\mid\mu)=\int_{\mathbb{R}^{n}}\left(\frac{d\nu}{d\mu}-1\right)^{2}\,d\mu.$ Our next theorem then reads as follows. ###### Theorem 3. Assume that $\mu$ is a log-concave probability measure with globally Lipschitz potential $\varphi$ on its domain, with constant $L$. Then after $k$ steps of the Langevin algorithm initiated at a random point $x_{0}$ taking values in the domain, and with time step parameter $\eta$ satisfying $\eta\leq nL^{-2}$, we have $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\frac{4}{n}\,C_{P}\chi^{2}(x_{0}\mid\mu)\mathrm{e}^{-k\eta/C_{P}}+2\,Ak\eta^{3/2},$ where $C_{P}$ is the Poincaré constant of $\mu$ and where $A=(2\mathrm{e}^{1/2}+1)(n+2\log k)^{1/2}\,\mathbb{E}\left[\frac{1+\sigma_{0}}{r_{0}}\right]+\frac{7}{6}\,\frac{L}{n^{1/2}}.$ Note that $r_{0}$ and $\sigma_{0}$ are random here. So the price we have to pay for using Poincaré rather than log-Sobolev is a warm start hypothesis: the algorithm must be initiated at a random point $x_{0}$ whose chi-square divergence to $\mu$ is finite. In the unconstrained case, namely when $\mu$ is supported on the whole $\mathbb{R}^{n}$, a natural choice for a warm start is an appropriate Gaussian measure. One can indeed get the following estimate. ###### Lemma 4. Suppose $\mu$ is log-concave, supported on the whole $\mathbb{R}^{n}$, with globally Lipschitz potential, with Lipschitz constant $L$. Let $\gamma$ be the Gaussian measure centered at a point $x_{0}$ and with covariance $\frac{n}{L^{2}}Id$. Then $\log\chi^{2}(\gamma\mid\mu)\leq n(1+\sigma_{0})+\frac{n}{2}\log\left(\frac{L^{2}C_{P}}{n}\right),$ where $C_{P}$ is the Poincaré constant of $\mu$. In particular when $\sigma_{0}=O(1)$ and all other parameters of the problem are at most polynomial in $n$, we get $\log\chi^{2}(\gamma\mid\mu)\leq O(n\log n)$.
With this choice of a warm start, and observing that in the unconstrained case the parameter $A$ is just $O(L/\sqrt{n})$, the previous theorem gives $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\varepsilon$ after $k=\Theta^{*}\left(\frac{C_{P}^{3}L^{2}n^{2}}{\varepsilon^{2}}\right)$ steps, with $\eta$ chosen appropriately. Also in the constrained case, one can get a non-trivial complexity estimate from Theorem 3 by choosing the uniform measure on a ball contained in the support as a warm start. We leave this annoying computation to the reader. Finally let us point out that it is also possible to obtain interesting bounds from our result when the potential is not globally Lipschitz, simply by restricting the measure to a large ball. For simplicity let us only spell out the argument when the measure $\mu$ is supported on the whole $\mathbb{R}^{n}$ and when we have a linear control on the gradient of the potential, but the method could give non-trivial bounds in more general situations. So let $\mu$ be a log-concave measure supported on the whole $\mathbb{R}^{n}$, let $\varphi$ be its potential, and consider the Langevin algorithm associated to the measure $\mu$ conditioned on the ball of radius $R$: $x_{k+1}=\mathcal{P}\left(x_{k}+\xi_{k+1}-\frac{\eta}{2}\nabla\varphi(x_{k})\right),$ (9) where $\mathcal{P}$ is the orthogonal projection on the ball of radius $R$: $\mathcal{P}(x)=\begin{cases}x&\text{if }|x|\leq R\\ \frac{Rx}{|x|}&\text{otherwise}\end{cases}$ In this special case, Theorem 2 yields the following complexity for the Langevin algorithm. ###### Theorem 5. Assume that $\mu$ is log-concave, supported on the whole $\mathbb{R}^{n}$ and that the gradient of the potential $\varphi$ grows at most linearly: $|\nabla\varphi(x)|\leq\beta(|x|+1),$ for all $x\in\mathbb{R}^{n}$ and for some $\beta>0$.
Assume that the Langevin algorithm is initiated at $0$, that $\sigma_{0}=O(1)$, that $\int|x|^{2}\,d\mu=O(n)$, and let $C_{LS}$ be the log-Sobolev constant of $\mu$, with the convention that it equals $+\infty$ if $\mu$ does not satisfy log-Sobolev. Then choosing $R=\Theta^{*}(\sqrt{n})$, $\eta=\Theta^{*}\left(\varepsilon^{2}\max(\beta,1)^{-2}\min(C_{LS},n)^{-2}\right)$ and running the algorithm (9) initiated at $0$ for $k=\Theta^{*}\left(\frac{\min(C_{LS},n)^{3}\max(\beta,1)^{2}}{\varepsilon^{2}}\right)$ steps produces a point $x_{k}$ satisfying $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\varepsilon$. Note in particular that in the case where $C_{LS}$ and $\beta$ are of constant order the complexity does not depend on the dimension. ### Related works. We end this introduction with a discussion on a short selection of related works. Let us first mention that as far as we know, the Langevin algorithm with support constraint was only investigated in our previous paper with Bubeck and Eldan [5]. In that paper the potential was assumed to be gradient Lipschitz. In all the works that we could find on the Langevin Monte Carlo algorithm the potential is always assumed to be smooth, most of the time gradient Lipschitz. This hypothesis is somewhat relaxed in the recent article [7], but the authors analyze the Langevin algorithm for a smoothed-out approximation of $\mu$, and in any case they still require the gradient of the potential to be Hölder continuous. The present work appears to be the first where $\nabla\varphi$ is allowed to be discontinuous. Let us give the state-of-the-art convergence bounds for the Langevin algorithm in the smooth, unconstrained case. The first quantitative result appears to be Dalalyan’s article [9]. The result is in total variation distance rather than Wasserstein, but as in the present work the strategy consists in writing $TV(x_{k},\mu)\leq TV(x_{k},X_{k\eta})+TV(X_{k\eta},\mu)$ (10) and estimating both terms separately.
His assumption is that the potential $\varphi$ satisfies $\alpha Id\leq\nabla^{2}\varphi\leq\beta Id$ pointwise on the whole $\mathbb{R}^{n}$, where $\alpha$ and $\beta$ are positive constants. Actually a closer look at his argument shows that he does not really use log-concavity. Indeed, his main contribution is a bound for the relative entropy of the Langevin algorithm at step $k$ with respect to the corresponding point in the Langevin diffusion. That part of the argument is a nice application of Girsanov’s formula and does not use log-concavity at all; only the fact that $\nabla\varphi$ is Lipschitz is needed. Dalalyan only uses strict log-concavity to estimate how fast the diffusion $(X_{t})$ converges to $\mu$. But that only requires Poincaré for an exponentially fast decay in chi-square divergence or log-Sobolev for a decay in relative entropy. Dalalyan’s theorem can thus be rewritten as follows: if $d\mu=\mathrm{e}^{-\varphi}\,dx$ is supported on the whole $\mathbb{R}^{n}$, if $\nabla\varphi$ is Lipschitz with constant $\beta$ and if $\mu$ satisfies the log-Sobolev inequality with constant $C_{LS}$ then after $k$ steps of the Langevin algorithm with time step parameter $\eta$ we have $TV(x_{k},\mu)\leq D(x_{0}\mid\mu)^{1/2}\mathrm{e}^{-k\eta/2C_{LS}}+\beta n^{1/2}(1+\mathbb{E}[\sigma_{0}])^{1/2}k^{1/2}\eta,$ where again $\sigma_{0}=\frac{1}{n}\left(\varphi(x_{0})-\min_{\mathbb{R}^{n}}\{\varphi\}\right)$. The result depends on a warm start hypothesis: the algorithm must be initiated from a random point $x_{0}$ whose relative entropy to the target measure is finite. On the other hand, it is not hard to see that one can find a Gaussian measure whose relative entropy to $\mu$ is $O^{*}(n)$. As a result, it follows from the previous bound that if $\eta$ is chosen appropriately then one has $TV(x_{k},\mu)\leq\varepsilon$ after $k=\Theta^{*}\left(\frac{C_{LS}^{2}\beta^{2}n}{\varepsilon^{2}}\right)$ steps of the algorithm.
Durmus and Moulines [12] have the same set of hypotheses as Dalalyan but they prove a result in Wasserstein distance rather than total variation. As opposed to Dalalyan, they really use the hypothesis $\nabla^{2}\varphi\geq\alpha Id$ for some positive $\alpha$. Also their approach is a bit different from that of Dalalyan: instead of bounding $W_{2}(x_{k},X_{k\eta})$ and $W_{2}(X_{k\eta},\mu)$ separately they directly obtain a recursive inequality for $W_{2}(x_{k},\mu)$. Their approach essentially yields the following result: Suppose that $\alpha Id\leq\nabla^{2}\varphi\leq\beta Id$ pointwise on the whole $\mathbb{R}^{n}$ for some positive constants $\alpha,\beta$. Assume also that the time step parameter $\eta$ satisfies $\eta\leq\frac{1}{2\beta}$. Then $W_{2}(x_{k},\mu)\leq\left(1-\frac{\alpha\eta}{2}\right)^{k}W_{2}(x_{0},\mu)+\frac{2\beta}{\alpha}n^{1/2}\eta^{1/2}.$ (11) Actually, the result of Durmus and Moulines is a bit more involved; for a (short) proof of that very statement see [10, Theorem 1.1]. This result implies that $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\varepsilon$ after a number of steps $k=O^{*}\left(\frac{\beta^{2}}{\alpha^{3}\varepsilon}\right)$, with time step parameter of order $\varepsilon\alpha^{2}/\beta^{2}$. This should be compared to the complexity given by Theorem 5 in this case. Indeed, observe that the hypothesis $\alpha Id\leq\nabla^{2}\varphi\leq\beta Id$ implies that the log-Sobolev constant is $1/\alpha$ at most and that $\nabla\varphi$ grows linearly. Therefore Theorem 5 applies, and it gives the following complexity: $k=\Theta^{*}\left(\frac{\beta^{2}}{\alpha^{3}\varepsilon^{2}}\right)$. The dependence on $\varepsilon$ is thus worse ($\varepsilon^{2}$ rather than $\varepsilon$) but the dependence on the other parameters is the same, which is quite remarkable given the fact that Theorem 5 holds under considerably weaker assumptions.
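The trade-off in (11) between the geometric contraction term and the discretization term is easy to see numerically. The helper below (name and sample values ours; purely illustrative) evaluates the right-hand side of (11).

```python
import math

def dm_w2_bound(alpha, beta, n, eta, k, w2_init):
    """Right-hand side of (11): (1 - alpha*eta/2)^k * W2(x0, mu)
    + (2*beta/alpha) * sqrt(n * eta), valid for 0 < eta <= 1/(2*beta)."""
    assert 0 < eta <= 1.0 / (2.0 * beta), "(11) requires eta <= 1/(2*beta)"
    contraction = (1.0 - alpha * eta / 2.0) ** k * w2_init
    discretization = (2.0 * beta / alpha) * math.sqrt(n * eta)
    return contraction + discretization
```

As $k$ grows the first term vanishes geometrically while the second stays fixed, so $\eta$ must be taken of order $\varepsilon\alpha^{2}/\beta^{2}$ to reach accuracy $\varepsilon$, exactly as stated above.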
Lastly, let us also mention Vempala and Wibisono’s work [25], whose approach is similar in spirit to that of Durmus and Moulines but gives a result closer to Dalalyan’s. They prove that if $\nabla\varphi$ is Lipschitz with constant $\beta$, if $\mu$ satisfies log-Sobolev with constant $C_{LS}$, and if the time step parameter satisfies $\eta\leq 1/(4C_{LS}\beta^{2})$ then after $k$ steps of the algorithm one has $D(x_{k}\mid\mu)\leq\mathrm{e}^{-k\eta/C_{LS}}D(x_{0}\mid\mu)+8n\beta^{2}C_{LS}\eta.$ (12) As in the result of Dalalyan the measure $\mu$ is not assumed to be log-concave; only log-Sobolev is required. Let us note that this result recovers Dalalyan’s by Pinsker’s inequality. Let us also remark that combining it with the transport inequality $W_{2}^{2}(x_{k},\mu)\leq 2C_{LS}\,D(x_{k}\mid\mu)$, which is a consequence of log-Sobolev, one gets $W_{2}^{2}(x_{k},\mu)\leq 2C_{LS}D(x_{0}\mid\mu)\mathrm{e}^{-k\eta/C_{LS}}+16\,n\,\beta^{2}C_{LS}^{2}\eta.$ This pretty much recovers (11) under a weaker hypothesis (log-Sobolev rather than uniform convexity of the potential), with two caveats: the hypothesis on the time step parameter is a bit more restrictive, and the bound requires a warm start in the relative entropy sense. ### Acknowledgments. The author is grateful to Sébastien Bubeck and Ronen Eldan for a number of useful discussions related to this work. We are also grateful to Andre Wibisono who brought to our attention the $W_{2}$/chi-square inequality used in the proof of Theorem 3. In the first version of the paper the formulation of that theorem was slightly weaker, with $W_{1}$ in place of $W_{2}$. ## 2 The Langevin diffusion with reflected boundary condition In this section we properly define the Langevin diffusion with reflection at the boundary of $K$: $dX_{t}=dB_{t}-\frac{1}{2}\nabla\varphi(X_{t})\,dt-d\Phi_{t}.$ Recall that $d\mu=\mathrm{e}^{-\varphi}dx$ is assumed to be log-concave, which means that $\varphi\colon\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}$ is convex.
Actually, it will be slightly more convenient for our purposes to assume that the domain of $\varphi$ is the whole $\mathbb{R}^{n}$ and that the measure $\mu$ is given by $\mu(dx)=\mathbf{1}_{K}(x)\mathrm{e}^{-\varphi(x)}\,dx$ where $K$ is a convex subset of $\mathbb{R}^{n}$ with non-empty interior. Since we do not assume the potential $\varphi$ to be everywhere differentiable, the expression $\nabla\varphi(X_{t})$ needs to be clarified. Let us agree on the convention that in the sequel $\nabla\varphi(x)$ stands for the element of the subdifferential of $\varphi$ at point $x$ whose Euclidean norm is minimal. Since $\varphi$ is assumed to be a convex function whose domain is the whole $\mathbb{R}^{n}$, for every $x\in\mathbb{R}^{n}$ the subdifferential of $\varphi$ at $x$ is a non-empty closed convex set, so that the Euclidean norm does uniquely attain its minimum on this set. According to Tanaka [24, Theorem 3.1], given a continuous semi-martingale $(W_{t})$ taking values in $\mathbb{R}^{n}$ and satisfying $W_{0}\in K$, there exists a unique couple $(X_{t},\Phi_{t})$ of continuous semi-martingales such that, almost surely 1. $X_{t}\in K$ for all $t\in\mathbb{R}_{+}$, 2. $X_{t}=W_{t}-\Phi_{t}$ for all $t\in\mathbb{R}_{+}$, 3. $(\Phi_{t})$ is of the form $\Phi_{t}=\int_{0}^{t}\nu_{s}\,d\ell_{s}$ where $\ell$ is a measure on $\mathbb{R}_{+}$ which is finite on bounded intervals and supported on the set $\\{t\in\mathbb{R}_{+}\colon X_{t}\in\partial K\\}$, and for any such $t$ the vector $\nu_{t}$ is an outer unit normal to the boundary of $K$ at $X_{t}$. In words, the process $(\Phi_{t})$ is a finite variation and continuous process which repels $(X_{t})$ inwards when it reaches the boundary of $K$. In the sequel we shall say that the process $(X_{t})$ is the _reflection_ of $(W_{t})$ at the boundary of $K$ and that the process $(\Phi_{t})$ is _associated_ to $(X_{t})$. The process $(\ell_{t})$ is called the _local time_ of $(X_{t})$ at the boundary of $K$. 
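For intuition only, the reflected dynamics can be mimicked numerically by discretizing time and replacing the exact reflection by a Euclidean projection onto $K$; this is a rough sketch, not Tanaka’s construction, and the accumulated projection displacement below is merely a crude proxy for the local time $(\ell_{t})$. The ball radius, dimension and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R, eta, steps = 5, 1.0, 1e-3, 2000  # dimension, ball radius, time step, step count

def project_ball(y, R):
    """Euclidean projection onto the centered ball of radius R."""
    norm = np.linalg.norm(y)
    return y if norm <= R else (R / norm) * y

x = np.zeros(n)            # start at the center of K
local_time_proxy = 0.0     # accumulates the projection displacements
for _ in range(steps):
    y = x + np.sqrt(eta) * rng.standard_normal(n)  # Brownian increment
    x_new = project_ball(y, R)
    local_time_proxy += np.linalg.norm(y - x_new)  # heuristic analogue of d(ell_t)
    x = x_new

# The discretized process never leaves K, just like the reflected diffusion.
assert np.linalg.norm(x) <= R + 1e-12
print(local_time_proxy)
```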
Now given a standard Brownian motion $(B_{t})$ and a starting point $x\in K$, we want to argue that there exists a unique process $(X_{t})$ such that $(X_{t})$ is the reflection at the boundary of $K$ of the semi-martingale $x+B_{t}-\frac{1}{2}\int_{0}^{t}\nabla\varphi(X_{s})\,ds.$ In other words we want $dX_{t}=dB_{t}-\frac{1}{2}\nabla\varphi(X_{t})\,dt-d\Phi_{t},$ where $(\Phi_{t})$ is associated to $(X_{t})$. This is a stochastic differential equation with reflected boundary condition. If $\nabla\varphi$ is Lipschitz continuous then again Tanaka [24, Theorem 4.1] shows that this equation admits a unique strong solution. Let us now explain why Tanaka’s result remains valid in the present context, even though $\nabla\varphi$ is not assumed to be continuous. First of all, notice that since $\nabla\varphi$ is the gradient of a convex function, it is a monotone map, in the sense that $\langle x-y,\nabla\varphi(x)-\nabla\varphi(y)\rangle\geq 0,\quad\forall x,y\in\mathbb{R}^{n}.$ This property immediately implies pathwise uniqueness for the equation. Indeed suppose that $X$ and $\widetilde{X}$ are two solutions of the equation and that $\Phi$ and $\widetilde{\Phi}$ are the associated processes. Then $d|X_{t}-\widetilde{X}_{t}|^{2}=-\langle X_{t}-\widetilde{X}_{t},\nabla\varphi(X_{t})-\nabla\varphi(\widetilde{X}_{t})\rangle\,dt-2\langle X_{t}-\widetilde{X}_{t},d\Phi_{t}\rangle-2\langle\widetilde{X}_{t}-X_{t},d\widetilde{\Phi}_{t}\rangle.$ The first term of the right-hand side is non positive by monotonicity of $\nabla\varphi$. The second term is also non positive. Indeed, the fact that $\Phi$ is associated to $X$ and that $\widetilde{X}$ takes values in $K$ implies that $\langle X_{t}-\widetilde{X}_{t},d\Phi_{t}\rangle\geq 0$ for all $t$. Similarly $\langle\widetilde{X}_{t}-X_{t},d\widetilde{\Phi}_{t}\rangle\geq 0$. 
Therefore the quantity $|X_{t}-\widetilde{X}_{t}|$ is almost surely non increasing, which obviously implies that the equation has the pathwise uniqueness property. To get existence, one option is to approximate $\varphi$ by a smooth convex function and pass to the limit, as in [6]. That paper is a little involved and is written in French, so let us give an alternative argument for completeness. This argument only works when $\nabla\varphi$ is bounded, but this is the only case we shall consider here. When $\nabla\varphi$ is bounded, it is well known that an application of Girsanov’s theorem yields the existence of a solution to the equation. Indeed, let $X$ be the reflection at the boundary of $K$ of the process $x+B$. We have $dX_{t}=dB_{t}-d\Phi_{t},$ where $\Phi$ is associated to $X$. Since $\nabla\varphi$ is bounded the process $(D_{t})$ given by $D_{t}=\exp\left(-\frac{1}{2}\int_{0}^{t}\langle\nabla\varphi(X_{s}),dB_{s}\rangle-\frac{1}{8}\int_{0}^{t}|\nabla\varphi(X_{s})|^{2}\,ds\right)$ is a positive martingale with expectation $1$. If we fix a time horizon $T>0$ and define a new probability measure by $d\mathbb{Q}=D_{T}\,d\mathbb{P}$, then by Girsanov, the process $(\widetilde{B}_{t})_{t\in[0,T]}$ given by $\widetilde{B}_{t}=B_{t}+\frac{1}{2}\int_{0}^{t}\nabla\varphi(X_{s})\,ds,$ is a standard Brownian motion under the new measure $\mathbb{Q}$. Since $dX_{t}=d\widetilde{B}_{t}-\frac{1}{2}\nabla\varphi(X_{t})\,dt-d\Phi_{t},$ where $\Phi$ is associated to $X$, this shows that under $\mathbb{Q}$ the process $X$ solves the equation driven by $\widetilde{B}$. This proves _weak_ existence of a solution, in the sense that we had to change the probability space and the Brownian motion. However it is well known that weak existence and pathwise uniqueness altogether imply strong existence, see for instance [15, Chapter IV, Theorem 1.1]. Strictly speaking this only shows strong existence on a finite time interval $[0,T]$. 
But we can eventually let $T$ tend to $+\infty$ and use pathwise uniqueness again to get strong existence of a solution defined for all time. Details are left to the reader. The solution $(X_{t})$ of the equation is a Markov process, whose semigroup is denoted $(P_{t})$ in the sequel: For every test function $f\colon\mathbb{R}^{n}\to\mathbb{R}$ and every $x\in K$ $P_{t}f(x)=\mathbb{E}_{x}[f(X_{t})]$ where the subscript $x$ next to the expectation denotes the starting point of $X_{t}$. By Itô’s formula, if $f$ is $\mathcal{C}^{2}$-smooth in a neighborhood of $K$ then $df(X_{t})=\langle\nabla f(X_{t}),dB_{t}\rangle-\frac{1}{2}\langle\nabla f(X_{t}),\nabla\varphi(X_{t})\rangle\,dt+\frac{1}{2}\Delta f(X_{t})\,dt-\langle\nabla f(X_{t}),d\Phi_{t}\rangle.$ Here we are using Itô’s formula for a continuous semi-martingale having a finite variation part, see for instance [23, Chapter IV, Corollary 32.10]. If $f$ satisfies the Neumann boundary condition: $\langle\nabla f(x),\nu\rangle=0$ for every $x\in\partial K$ and every $\nu$ that is normal to the boundary of $K$ at $x$, then the last term of the right-hand side vanishes. Taking expectation we then see that the generator of the semigroup $(P_{t})$ is $\frac{1}{2}Lf:=\frac{1}{2}\left(\Delta f-\langle\nabla f,\nabla\varphi\rangle\right)$ with Neumann boundary condition. Also, an integration by parts then shows that $\int_{K}(Lf)g\,d\mu=-\int_{K}\langle\nabla f,\nabla g\rangle\,d\mu,$ for every $f,g$ in the domain of $L$. In particular the operator $L$ is symmetric in $L^{2}(\mu)$, which implies that $\mu$ is a reversible measure for the semigroup $(P_{t})$. ## 3 Discretization of the Langevin diffusion In this section we prove Theorem 1. This is the main contribution of the article. We begin with a bound on the local time $(\ell_{t})$ of the diffusion $(X_{t})$ at the boundary of $K$. We need to show that $\ell_{t}=O(t)$, at least in expectation. 
The following lemma is essentially taken from our previous work with Bubeck and Eldan [5], except that we have simplified the proof and improved the result a bit. ###### Lemma 6. Assume that the Langevin diffusion $(X_{t})$ is initiated at point $x_{0}$ in the interior of $K$ and recall the definition (7) of $r_{0}$ and $\sigma_{0}$. Then for every $t>0$ we have $\mathbb{E}[\ell_{t}^{2}]^{1/2}\leq\frac{n(1+\sigma_{0})t}{r_{0}}.$ ###### Proof. By Itô’s formula we have $d|X_{t}-x_{0}|^{2}=2\langle X_{t}-x_{0},dB_{t}\rangle-\langle X_{t}-x_{0},\nabla\varphi(X_{t})\rangle dt-2\langle X_{t}-x_{0},d\Phi_{t}\rangle+n\,dt.$ Recall that $d\Phi_{t}=\nu_{t}\,d\ell_{t}$ where $\nu_{t}$ is an outer unit normal at $X_{t}$. By definition of $(\Phi_{t})$, $r_{0}$ and $\ell_{t}$ we have $\langle X_{t}-x_{0},d\Phi_{t}\rangle\geq\sup_{x\in K}\langle x-x_{0},d\Phi_{t}\rangle\geq r_{0}d\ell_{t}.$ Also by convexity of $\varphi$ $-\langle X_{t}-x_{0},\nabla\varphi(X_{t})\rangle\leq\varphi(x_{0})-\varphi(X_{t})\leq n\sigma_{0}.$ We thus obtain $|X_{t}-x_{0}|^{2}+2r_{0}\ell_{t}\leq n(1+\sigma_{0})t+2\int_{0}^{t}\langle X_{s}-x_{0},dB_{s}\rangle.$ (13) Taking expectation already gives a bound on the first moment of $\ell_{t}$. To get a bound on the second moment observe that, since the stochastic integral is a centered martingale, (13) implies that $4r_{0}^{2}\mathbb{E}[\ell_{t}^{2}]\leq n^{2}(1+\sigma_{0})^{2}t^{2}+4\,\mathbb{E}\left[\left(\int_{0}^{t}\langle X_{s}-x_{0},dB_{s}\rangle\right)^{2}\right].$ By Itô’s isometry and using (13) again, this time to bound $\mathbb{E}[|X_{t}-x_{0}|^{2}]$, we get $\mathbb{E}\left[\left(\int_{0}^{t}\langle X_{s}-x_{0},dB_{s}\rangle\right)^{2}\right]=\mathbb{E}\left[\int_{0}^{t}|X_{s}-x_{0}|^{2}\,ds\right]\leq n(1+\sigma_{0})\frac{t^{2}}{2}.$ Plugging this into the previous display and using $n(1+\sigma_{0})\geq 1$ yields the desired inequality. ∎ We also need the following elementary bound on the maximum of Gaussian vectors. We provide a proof for completeness. ###### Lemma 7. Let $G_{1},\dotsc,G_{k}$ be standard Gaussian vectors on $\mathbb{R}^{n}$. 
Then $\mathbb{E}\left[\max_{i\leq k}\left\\{|G_{i}|^{2}\right\\}\right]\leq\mathrm{e}(n+2\log k).$ ###### Proof. Set $\chi_{i}=|G_{i}|^{2}$ for every $i$. The $p$-th moment of $\chi_{i}$ satisfies $\mathbb{E}[\chi_{i}^{p}]=\frac{2^{p}\Gamma\left(\frac{n}{2}+p\right)}{\Gamma\left(\frac{n}{2}\right)}\leq(n+2(p-1))^{p},$ at least when $p$ is an integer. Therefore $\mathbb{E}\left[\max_{i\leq k}\\{\chi_{i}\\}\right]\leq\left[\sum_{i\leq k}\mathbb{E}[\chi_{i}^{p}]\right]^{1/p}\leq k^{1/p}(n+2(p-1)).$ Choosing $p$ to be the smallest integer larger than $\log k$ yields the result. ∎ We are now in a position to prove the main result. ###### Proof of Theorem 1. We first couple the diffusion $(X_{t})$ and its discretization $(x_{k})$ in the most natural way one could think of, by choosing the sequence $(\xi_{k})$ as follows: $\xi_{k}=B_{k\eta}-B_{(k-1)\eta},\quad k\geq 1.$ (14) Observe that for any $x\in K$ and $y\in\mathbb{R}^{n}$ we have $|x-\mathcal{P}_{K}(y)|\leq|x-y|$. Therefore $\begin{split}|X_{(i+1)\eta}-x_{i+1}|^{2}&=\left|X_{(i+1)\eta}-\mathcal{P}_{K}\left(x_{i}+\xi_{i+1}-\frac{\eta}{2}\nabla\varphi(x_{i})\right)\right|^{2}\\\ &\leq\left|X_{(i+1)\eta}-x_{i}-\xi_{i+1}+\frac{\eta}{2}\nabla\varphi(x_{i})\right|^{2}.\end{split}$ Let $(\widetilde{X}_{t})$ be the process defined by $\widetilde{X}_{t}=x_{i}+B_{t}-B_{i\eta}-\frac{t-i\eta}{2}\nabla\varphi(x_{i})$ for all $t$ between $i\eta$ and $(i+1)\eta$. Then $\widetilde{X}_{i\eta}=x_{i}$ and the previous display can be rewritten as $|X_{(i+1)\eta}-x_{i+1}|^{2}\leq|X_{(i+1)\eta}-\widetilde{X}_{(i+1)\eta}|^{2}.$ The process $(X_{t}-\widetilde{X}_{t})$ is continuous with finite variation on $[i\eta,(i+1)\eta]$ (the Brownian part cancels out). 
Therefore, on that interval we have $\begin{split}d|X_{t}-\widetilde{X}_{t}|^{2}&=2\langle X_{t}-\widetilde{X}_{t},dX_{t}-d\widetilde{X}_{t}\rangle\\\ &=-\langle X_{t}-\widetilde{X}_{t},\nabla\varphi(X_{t})-\nabla\varphi(x_{i})\rangle\,dt-2\langle X_{t}-\widetilde{X}_{t},d\Phi_{t}\rangle.\end{split}$ Again by monotonicity of $\nabla\varphi$ we have $\langle X_{t}-x_{i},\nabla\varphi(X_{t})-\nabla\varphi(x_{i})\rangle\geq 0$. Also since $x_{i}\in K$ and $(\Phi_{t})$ is associated to $(X_{t})$ we have $\langle X_{t}-x_{i},d\Phi_{t}\rangle\geq 0$. Plugging this back into the previous display yields $d|X_{t}-\widetilde{X}_{t}|^{2}\leq\langle\widetilde{X}_{t}-x_{i},(\nabla\varphi(X_{t})-\nabla\varphi(x_{i}))\,dt+2d\Phi_{t}\rangle.$ Now we replace $\widetilde{X}_{t}$ by its value, and we integrate between $i\eta$ and $(i+1)\eta$. We get $\begin{split}|X_{(i+1)\eta}-x_{i+1}|^{2}&\leq|X_{i\eta}-x_{i}|^{2}\\\ &+\int_{i\eta}^{(i+1)\eta}\langle B_{t}-B_{i\eta}-\frac{t-i\eta}{2}\nabla\varphi(x_{i}),(\nabla\varphi(X_{t})-\nabla\varphi(x_{i}))\,dt+2d\Phi_{t}\rangle.\end{split}$ We now take expectation. Note that the martingale property of the Brownian motion implies that $\mathbb{E}[\langle B_{t}-B_{i\eta},\nabla\varphi(x_{i})\rangle]=0$ and that $\mathbb{E}[\langle B_{t}-B_{i\eta},d\Phi_{t}\rangle]=\mathbb{E}[\langle B_{(i+1)\eta}-B_{i\eta},d\Phi_{t}\rangle].$ We also use the hypothesis $|\nabla\varphi|\leq L$ and the inequality $\mathbb{E}[|B_{t}-B_{i\eta}|]\leq n^{1/2}(t-i\eta)^{1/2}$. We obtain $\begin{split}\mathbb{E}\left[|X_{(i+1)\eta}-x_{i+1}|^{2}\right]&\leq\mathbb{E}\left[|X_{i\eta}-x_{i}|^{2}\right]+\frac{2}{3}L\eta^{3/2}n^{1/2}\\\ &+2\,\mathbb{E}\left[\langle\xi_{i+1},\Phi_{(i+1)\eta}-\Phi_{i\eta}\rangle\right]+\frac{1}{2}L^{2}\eta^{2}+L\eta\,\mathbb{E}[\ell_{(i+1)\eta}-\ell_{i\eta}],\end{split}$ (15) where $\xi_{i+1}=B_{(i+1)\eta}-B_{i\eta}$. 
By Cauchy-Schwarz and Lemma 7 $\begin{split}\mathbb{E}\left[\sum_{i=0}^{k-1}\langle\xi_{i+1},\Phi_{(i+1)\eta}-\Phi_{i\eta}\rangle\right]&\leq\mathbb{E}\left[\max_{i\leq k}\left\\{|\xi_{i}|\right\\}\,\ell_{k\eta}\right]\\\ &\leq\mathrm{e}^{1/2}(n+2\log k)^{1/2}\eta^{1/2}\mathbb{E}[\ell_{k\eta}^{2}]^{1/2}.\end{split}$ Summing (15) over $i$ thus yields $\begin{split}\mathbb{E}\left[|X_{k\eta}-x_{k}|^{2}\right]&\leq 2\mathrm{e}^{1/2}(n+2\log k)^{1/2}\eta^{1/2}\mathbb{E}[\ell_{k\eta}^{2}]^{1/2}\\\ &+L\eta\,\mathbb{E}[\ell_{k\eta}]+\frac{2}{3}Ln^{1/2}k\eta^{3/2}+\frac{1}{2}L^{2}k\eta^{2}.\end{split}$ Now recall from Lemma 6 that $\mathbb{E}[\ell_{k\eta}^{2}]^{1/2}\leq\frac{n(1+\sigma_{0})k\eta}{r_{0}}.$ Lastly, we use the assumption $\eta<n/L^{2}$ to simplify the inequality a bit. We finally obtain $\mathbb{E}\left[|X_{k\eta}-x_{k}|^{2}\right]\leq(2\mathrm{e}^{1/2}+1)(n+2\log k)^{1/2}\frac{n(1+\sigma_{0})}{r_{0}}k\eta^{3/2}+\frac{7}{6}Ln^{1/2}k\eta^{3/2},$ which is the result. ∎ ## 4 Convergence of the algorithm under log-Sobolev In this section we prove Theorem 2. Observe first that the Wasserstein distance is indeed a distance, so it satisfies the triangle inequality and we have $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\frac{2}{n}W_{2}^{2}(x_{k},X_{k\eta})+\frac{2}{n}W_{2}^{2}(X_{k\eta},\mu).$ The first term of the right-hand side is handled by Theorem 1, so we only need to bound the second term. This is the purpose of the following lemma. In this lemma $(P_{t})$ stands for the semigroup of the Langevin diffusion (3). In other words $\nu P_{t}$ denotes the law of $X_{t}$ when $X_{0}$ has law $\nu$. ###### Lemma 8. Assume that $\mu$ is log-concave with globally Lipschitz potential on its support, with Lipschitz constant $L$. Assume also that $\mu$ satisfies log-Sobolev with constant $C_{LS}$. 
Then for every $x_{0}$ in the interior of the support of $\mu$ and every $t>0$ we have $\frac{1}{n}W_{2}^{2}(\delta_{x_{0}}P_{t},\mu)\leq 4C_{LS}\left(1+\log\left(\frac{\max(C_{LS},1)\,n}{\min(r_{0},1)}\right)+\sigma_{0}+\frac{L}{n}\right)\mathrm{e}^{-t/2C_{LS}},$ where again the parameters $\sigma_{0}$ and $r_{0}$ are defined by (7). ###### Proof. If $\mu$ satisfies the logarithmic Sobolev inequality then it satisfies the transport inequality: $W_{2}^{2}(\nu,\mu)\leq 2C_{LS}D(\nu\mid\mu)$ for every probability measure $\nu$. This is due to Otto and Villani [22], see also [2]. The log-Sobolev inequality also implies that the relative entropy decays exponentially fast along the semigroup $(P_{t})$: $D(\nu P_{t}\mid\mu)\leq\mathrm{e}^{-t/C_{LS}}D(\nu\mid\mu).$ This is really folklore; one just needs to observe that the derivative of the entropy along the semigroup is the Fisher information, and combine log-Sobolev with a Gronwall type argument. Combining the two inequalities yields $W_{2}^{2}(\nu P_{t},\mu)\leq 2C_{LS}\mathrm{e}^{-t/C_{LS}}D(\nu\mid\mu).$ This cannot be applied directly to a Dirac point mass. However, observe that the convexity of $\varphi$ implies that the Wasserstein distance is non increasing along the semigroup: For any two probability measures $\nu_{0},\nu_{1}$ and every time $t$ we have $W_{2}^{2}(\nu_{0}P_{t},\nu_{1}P_{t})\leq W_{2}^{2}(\nu_{0},\nu_{1}).$ This is a well-known fact, which is easily seen using parallel coupling. See the proof of the pathwise uniqueness property in section 2. Combining this with the triangle inequality for $W_{2}$ we thus get $\begin{split}W_{2}^{2}(\delta_{x_{0}}P_{t},\mu)&\leq 2W_{2}^{2}(\delta_{x_{0}}P_{t},\nu P_{t})+2W_{2}^{2}(\nu P_{t},\mu)\\\ &\leq 2W_{2}^{2}(\delta_{x_{0}},\nu)+4C_{LS}\mathrm{e}^{-t/C_{LS}}D(\nu\mid\mu).\end{split}$ (16) This is valid for every $\nu$ and it is natural to take $\nu$ to be the measure $\mu$ conditioned on the ball $B(x_{0},\delta)$ for some small $\delta>0$. 
Then $W^{2}_{2}(\delta_{x_{0}},\nu)\leq\delta^{2}$ and $D(\nu\mid\mu)=\log\left(\frac{1}{\mu(B(x_{0},\delta))}\right).$ If $\delta\leq r_{0}$ then $B(x_{0},\delta)$ is included in the support of $\mu$ and we have $\log\left(\frac{1}{\mu(B(x_{0},\delta))}\right)\leq\max_{B(x_{0},\delta)}\\{\varphi\\}+n\log\left(\frac{1}{\delta}\right)+\log\left(\frac{1}{v_{n}}\right),$ where $v_{n}$ is the Lebesgue measure of the unit ball in dimension $n$. Recall that $\log\left(\frac{1}{v_{n}}\right)\leq n\log n$. Also $\max_{B(x_{0},\delta)}\\{\varphi\\}\leq\varphi(x_{0})+L\delta\leq\min_{K}\\{\varphi\\}+n\sigma_{0}+L\delta.$ Moreover $\min_{K}\\{\varphi\\}\leq\int_{\mathbb{R}^{n}}\varphi\,d\mu=S(\mu),$ where $S$ denotes the Shannon entropy. It is well-known that among measures of fixed covariance the Gaussian measure maximizes the Shannon entropy (this is just Jensen actually). Therefore $\begin{split}S(\mu)&=\frac{n}{2}\log(2\pi e)+\frac{1}{2}\log\mathrm{det}(\mathrm{cov}(\mu))\\\ &\leq\frac{n}{2}\log(2\pi eC_{LS}).\end{split}$ The last inequality is just a consequence of the fact that the log-Sobolev inequality implies Poincaré, which in turn implies a bound on the covariance matrix. Plugging everything back in (16) we get $\frac{1}{n}W_{2}^{2}(\delta_{x_{0}}P_{t},\mu)\leq\frac{2\delta^{2}}{n}+4C_{LS}\left(\frac{3}{2}+\log\left(\frac{n}{\delta}\right)+\sigma_{0}+\frac{L}{n}\right)\mathrm{e}^{-t/C_{LS}}$ for every $\delta\leq\min(r_{0},1)$. Choosing $\delta=\min\left((2nC_{LS})^{1/2}\mathrm{e}^{-t/2C_{LS}},r_{0},1\right)$ and using the inequality $x\mathrm{e}^{-x}\leq\mathrm{e}^{-x/2}$ yields the result. ∎ ## 5 A convergence result using Poincaré only In this section we prove Theorem 3. Again the idea is to write $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq\frac{2}{n}W_{2}^{2}(x_{k},X_{k\eta})+\frac{2}{n}W_{2}^{2}(X_{k\eta},\mu),$ and to bound the first term using Theorem 1. 
Actually, note that here we allow $x_{0}=X_{0}$ to be random, so we rather condition on $x_{0}$, apply Theorem 1 and then take expectation again. Therefore it is enough to bound the second term. This is where the Poincaré inequality enters the picture. Note that this part of the argument does not rely on the log-concavity of $\mu$. We shall use the following transport/chi-square divergence inequality: If $\mu$ satisfies Poincaré with constant $C_{P}$ then for every probability measure $\nu$ on $\mathbb{R}^{n}$ we have $W_{2}^{2}(\nu,\mu)\leq 2C_{P}\chi^{2}(\nu\mid\mu).$ It seems that this was first proved by Ding [11], with a worse constant. The result with constant $2$ is due to Liu [20]. His argument combines the Langevin diffusion and the Hamilton-Jacobi semigroup. On the other hand it is well-known that under Poincaré the chi-square divergence decays exponentially fast along the Langevin diffusion. Letting $(P_{t})$ be the semigroup of the Langevin diffusion associated to $\mu$ we have $\chi^{2}(\nu P_{t}\mid\mu)\leq\mathrm{e}^{-t/C_{P}}\chi^{2}(\nu\mid\mu).$ We thus get the following: ###### Lemma 9. Suppose that $\mu$ satisfies Poincaré with constant $C_{P}$. Then for every probability measure $\nu$ on $\mathbb{R}^{n}$ and every $t>0$ we have $W_{2}^{2}(\nu P_{t},\mu)\leq 2C_{P}\,\chi^{2}(\nu\mid\mu)\,\mathrm{e}^{-t/C_{P}}.$ This finishes the proof of Theorem 3. We end this section with a simple estimate of the chi-square divergence of an appropriate Gaussian measure to $\mu$ in the unconstrained case. ###### Proof of Lemma 4. Recall that $\mu$ is assumed to be supported on the whole $\mathbb{R}^{n}$ with convex and globally Lipschitz potential $\varphi$. Let $\gamma$ be the Gaussian measure centered at a point $x_{0}$ and with covariance $\alpha Id$ for some $\alpha>0$. 
Then $\begin{split}\chi^{2}(\gamma\mid\mu)&\leq(2\pi\alpha)^{-n}\int_{\mathbb{R}^{n}}\mathrm{e}^{-\frac{1}{\alpha}|x-x_{0}|^{2}+\varphi(x)}\,dx\\\ &\leq(2\pi\alpha)^{-n}\int_{\mathbb{R}^{n}}\mathrm{e}^{-\frac{1}{\alpha}|x-x_{0}|^{2}+\varphi(x_{0})+L|x-x_{0}|}\,dx\\\ &\leq(2\pi\alpha)^{-n}\int_{\mathbb{R}^{n}}\mathrm{e}^{-\frac{1}{2\alpha}|x-x_{0}|^{2}+\varphi(x_{0})+\frac{1}{2}L^{2}\alpha}\,dx=(2\pi\alpha)^{-n/2}\mathrm{e}^{\varphi(x_{0})+\frac{1}{2}L^{2}\alpha}.\end{split}$ Also, reasoning along the same lines as in the previous section we get $\varphi(x_{0})\leq\min_{\mathbb{R}^{n}}\\{\varphi\\}+n\sigma_{0}\leq\frac{n}{2}\log(2\pi\mathrm{e})+\frac{n}{2}\log C_{P}+n\sigma_{0}.$ Putting everything together and choosing $\alpha=n/L^{2}$ yields $\begin{split}\log\chi^{2}(\gamma\mid\mu)&\leq\frac{n}{2}\left(-\log\alpha+1+\log C_{P}+2\sigma_{0}\right)+\frac{1}{2}L^{2}\alpha\\\ &=n\left(1+\sigma_{0}+\frac{1}{2}\log\left(\frac{L^{2}C_{P}}{n}\right)\right),\end{split}$ which is the result. ∎ ## 6 An extension to the non-globally Lipschitz case We begin this section with a simple lemma about the Wasserstein distance of $\mu$ to $\mu$ restricted to a large ball. ###### Lemma 10. Let $\mu$ be a log-concave measure on $\mathbb{R}^{n}$, and let $\mu_{R}$ be the measure $\mu$ conditioned on the ball centered at $0$ of radius $R$. There exists a universal constant $C$ such that $W_{2}^{2}(\mu,\mu_{R})\leq CM\exp\left(-\frac{R}{C\sqrt{M}}\right),\quad\forall R\geq C\sqrt{M}$ where $M=\int_{\mathbb{R}^{n}}|x|^{2}\,d\mu$. ###### Proof. Let $X$ have law $\mu$. Note that by Borell’s lemma [4, Lemma 3.1] we have $\mathbb{P}(|X|\geq t)\leq\mathrm{e}^{-t/C_{0}\sqrt{M}}$ for all $t\geq C_{0}\sqrt{M}$ for some universal constant $C_{0}$. This also implies that $\mathbb{E}[|X|^{4}]\leq C_{1}M^{2}$ for some $C_{1}$. 
Now assume that $R\geq C_{0}\sqrt{M}$, let $\widetilde{X}$ have law $\mu_{R}$ and be independent of $X$ and let $Y=\begin{cases}X&\text{if }|X|\leq R\\\ \widetilde{X}&\text{otherwise}.\end{cases}$ Then $Y$ also has law $\mu_{R}$, so that $\begin{split}W_{2}^{2}(\mu,\mu_{R})&\leq\mathbb{E}[|X-Y|^{2}]=\mathbb{E}[|X-\widetilde{X}|^{2};|X|>R]\\\ &\leq 4\mathbb{E}[|X|^{4}]^{1/2}\,\mathbb{P}(|X|>R)^{1/2}\leq C_{1}M\mathrm{e}^{-R/(C_{0}M^{1/2})},\end{split}$ which is the result. ∎ ###### Proof of Theorem 5. Assuming that $\int_{\mathbb{R}^{n}}|x|^{2}\,d\mu=O(n)$, the previous lemma shows that $\frac{1}{n}W_{2}^{2}(\mu,\mu_{R})$ will be negligible as soon as $R$ is a sufficiently large multiple of $\sqrt{n}\log n$. Now we apply Theorem 2 to $\mu_{R}$, initiating the Langevin algorithm at $0$. Then the parameter $r_{0}$ is of order $\sqrt{n}\log n$. Moreover the hypothesis $|\nabla\varphi(x)|\leq\beta(|x|+1)$ shows that the potential of $\mu_{R}$ is Lipschitz with constant $O^{*}(\beta\sqrt{n})$. Therefore the constant $A$ defined by (6) satisfies $A=O^{*}(\max(\beta,1))$ in this case. On the other hand since $\mu_{R}$ is log-concave and supported on a ball of radius $O^{*}(\sqrt{n})$ its log-Sobolev constant is $O^{*}(n)$ at most. Also, if $\mu$ satisfies log-Sobolev, then the log-Sobolev constant of $\mu_{R}$ cannot be larger than a constant factor times that of $\mu$. This follows easily from the fact that within log-concave measures log-Sobolev and Gaussian concentration are equivalent, see [21, Theorem 1.2]. To sum up, the log-Sobolev constant of $\mu_{R}$ is $O^{*}(\min(n,C_{LS}))$ where $C_{LS}$ is the log-Sobolev constant of $\mu$ (which is possibly infinite). Applying Theorem 2 we see that after $k=\Theta^{*}\left(\frac{\min(C_{LS},n)^{3}\max(\beta,1)^{2}}{\varepsilon^{2}}\right)$ steps of the Langevin algorithm for $\mu_{R}$ initiated at $0$, and with appropriate time step parameter, we have $\frac{1}{n}W_{2}^{2}(x_{k},\mu_{R})\leq\varepsilon$. 
Since $\frac{1}{n}W_{2}^{2}(\mu_{R},\mu)$ is negligible this implies $\frac{1}{n}W_{2}^{2}(x_{k},\mu)\leq 2\varepsilon$. ∎ ## References * [1] Bakry, D.; Gentil, I.; Ledoux, M. Analysis and geometry of Markov diffusion operators. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 348. _Springer, Cham,_ 2014\. * [2] Bobkov, S. G.; Gentil, I.; Ledoux, M. Hypercontractivity of Hamilton-Jacobi equations. _J. Math. Pures Appl._ (9) 80 (2001), no. 7, 669-696. * [3] Bobkov, S.G.; Ledoux, M. From Brunn-Minkowski to Brascamp-Lieb and to logarithmic Sobolev inequalities. _Geom. Funct. Anal._ 10 (2000), no. 5, 1028-1052. * [4] Borell, C. Convex measures on locally convex spaces. _Ark. Mat._ 12 (1974), 239-252. * [5] Bubeck, S.; Eldan, R.; Lehec, J. Sampling from a log-concave distribution with projected Langevin Monte Carlo. _Discrete Comput. Geom._ 59 (2018), no. 4, 757-783. * [6] Cépa, E. Équations différentielles stochastiques multivoques. (French) [Multivariate stochastic differential equations] Séminaire de Probabilités, XXIX, 86-107, Lecture Notes in Math., 1613, Springer, Berlin, 1995. * [7] Chatterji, N.S., Diakonikolas, J., Jordan, M.I., Bartlett, P.L. Langevin Monte Carlo without smoothness. Preprint, arXiv: 1905.13285, 2019. * [8] Chen, Y. An Almost Constant Lower Bound of the Isoperimetric Coefficient in the KLS Conjecture. _Geom. Funct. Anal._ 31 (2021),no. 1, 34-61. * [9] Dalalyan, A.S. Theoretical guarantees for approximate sampling from smooth and log-concave densities. _J. R. Stat. Soc. Ser. B. Stat. Methodol._ 79 (2017), no. 3, 651-676. * [10] Dalalyan, A.S. Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent. Preprint, arXiv: 1704.04752, 2017. * [11] Ding, Y. A note on quadratic transportation and divergence inequality, _Statist. Probab. Lett._ 100 (2015), 115-123. * [12] Durmus, A.; Moulines, E. 
High-dimensional Bayesian inference via the unadjusted Langevin algorithm. _Bernoulli_ 25 (2019), no. 4A, 2854-2882. * [13] Dyer, M.; Frieze, A.; Kannan, R. A random polynomial-time algorithm for approximating the volume of convex bodies. _J. Assoc. Comput. Mach._ 38 (1991), no. 1, 1-17. * [14] Eldan, R. Thin shell implies spectral gap up to polylog via a stochastic localization scheme. _Geom. Funct. Anal._ 23 (2013), no. 2, 532-569. * [15] Ikeda, N.; Watanabe, S. Stochastic differential equations and diffusion processes. North-Holland Mathematical Library, Vol. 24. _North-Holland Publishing Company_ , 1981. * [16] Fradelizi, M. Sections of convex bodies through their centroid. _Arch. Math._ (Basel) 69 (1997), no. 6, 515-522. * [17] Kannan, R.; Lovász, L.; Simonovits, M. Isoperimetric problems for convex bodies and a localization lemma. _Discrete Comput. Geom._ 13 (1995), no. 3-4, 541-559. * [18] Ledoux, M. From concentration to isoperimetry: semigroup proofs. _Concentration, functional inequalities and isoperimetry_ , 155-166, Contemp. Math., 545, _Amer. Math. Soc._ , Providence, RI, 2011. * [19] Lee Y.T.; Vempala S.S. Eldan’s Stochastic Localization and the KLS Conjecture: Isoperimetry, Concentration and Mixing. Preprint, arXiv:1712.01791, 2017. * [20] Liu, Y. The Poincaré inequality and quadratic transportation-variance inequalities. _Electron. J. Probab._ 25 (2020), Paper no. 1, 16 p. * [21] Milman, E. Isoperimetric and concentration inequalities: equivalence under curvature lower bound. _Duke Math. J._ 154 (2010), no. 2, 207-239. * [22] Otto, F.; Villani, C. Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality. _J. Funct. Anal._ 173 (2000), no. 2, 361-400. * [23] Rogers, L. C. G.; Williams, D. Diffusions, Markov processes, and martingales. Vol. 2: Itô calculus. 2nd ed. _Cambridge University Press._ , 2000. * [24] Tanaka, H. Stochastic differential equations with reflecting boundary condition in convex regions. 
_Hiroshima Math. J._ 9 (1979), no. 1, 163-177. * [25] Vempala S.S., Wibisono A. Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices. Preprint, arXiv:1903.08568, 2019.
# AINet: Association Implantation for Superpixel Segmentation Yaxiong Wang1,2, Yunchao Wei3, Xueming Qian1, Li Zhu1, Yi Yang4 1Xi’an Jiaotong University 2 Baidu Research 3 Beijing Jiaotong University 4 Zhejiang University ###### Abstract Recently, some approaches have been proposed to harness deep convolutional networks to facilitate superpixel segmentation. The common practice is to first evenly divide the image into a pre-defined number of grids and then learn to associate each pixel with its surrounding grids. However, simply applying a series of convolution operations with limited receptive fields can only implicitly perceive the relations between the pixel and its surrounding grids. Consequently, existing methods often fail to provide an effective context when inferring the association map. To remedy this issue, we propose a novel Association Implantation (AI) module to enable the network to explicitly capture the relations between the pixel and its surrounding grids. The proposed AI module directly implants the features of grid cells to the surrounding of its corresponding central pixel, and conducts convolution on the padded window to adaptively transfer knowledge between them. With such an implantation operation, the network could explicitly harvest the pixel-grid level context, which is more in line with the target of superpixel segmentation compared to the pixel-wise relation. Furthermore, to pursue better boundary precision, we design a boundary-perceiving loss to help the network discriminate the pixels around boundaries in the hidden feature level, which could benefit the subsequent inferring modules to accurately identify more boundary pixels. Extensive experiments on BSDS500 and NYUv2 datasets show that our method could not only achieve state-of-the-art performance but also maintain satisfactory inference efficiency. Our code is available at https://github.com/wangyxxjtu/AINet-ICCV2021. 
## 1 Introduction Superpixels are image regions formed by grouping image pixels similar in color and other low-level properties, which could be viewed as an over-segmentation of the image. The process of extracting superpixels is known as superpixel segmentation. Compared to pixels, superpixels provide a more effective representation for image data. With such a compact representation, the computational efficiency of vision algorithms could be improved [18, 11, 36]. Consequently, superpixels could benefit many vision tasks like semantic segmentation [12, 40, 43], object detection [9, 30], optical flow estimation [14, 25, 34, 39], and even adversarial attack [7]. In light of the fundamental importance of superpixels in computer vision, superpixel segmentation has attracted much attention since it was first introduced by Ren and Malik [28] in 2003. Figure 1: Different from the SCN [41] that implicitly learns the association using the cascaded convolutions, our AINet proposes to implant the corresponding grid features to the surrounding of the pixel to explicitly perceive the relation between each pixel and its neighbor grids. The common practice for superpixel segmentation is to first split the image into grid cells and then estimate the membership of each pixel to its adjacent cells, by which the grouping could be performed. The membership estimation thus plays the key role in superpixel segmentation. Traditional approaches usually utilize hand-crafted features and estimate the relevance of a pixel to its neighbor cells based on clustering or graph-based methods [1, 24, 20, 22, 2]; however, these methods all suffer from the weakness of the hand-crafted features and are difficult to integrate into other trainable deep frameworks. Inspired by the success of deep neural networks in many computer vision problems, researchers have recently attempted to adopt the deep learning technique for superpixel segmentation [15, 41, 37]. 
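In code, the common practice just described amounts to predicting, for each pixel, a score over the $3\times 3$ block of grid cells around its own cell and assigning the pixel to the highest-scoring cell. The sketch below is only illustrative: a random array stands in for a real network output, and the image and cell sizes are arbitrary.

```python
import numpy as np

H, W, cell = 6, 6, 3                 # toy image size and grid-cell size
gh, gw = H // cell, W // cell        # number of grid cells per axis

rng = np.random.default_rng(0)
assoc = rng.random((H, W, 9))        # mock 9-way association map (network output)

labels = np.empty((H, W), dtype=int)
offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
for i in range(H):
    for j in range(W):
        gy, gx = i // cell, j // cell          # the pixel's own grid cell
        probs = assoc[i, j].copy()
        for k, (dy, dx) in enumerate(offsets):  # mask cells outside the grid
            if not (0 <= gy + dy < gh and 0 <= gx + dx < gw):
                probs[k] = -np.inf
        dy, dx = offsets[int(np.argmax(probs))]
        labels[i, j] = (gy + dy) * gw + (gx + dx)  # superpixel id of chosen cell

assert labels.min() >= 0 and labels.max() < gh * gw
print(labels)
```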
As mentioned in the abstract, previous deep methods attempt to assign pixels by learning the association of each pixel to its surrounding grids using fully convolutional networks [31]. Popular solutions such as SCN [41] and SSN [15] employ the U-net architecture [29] to predict the association, i.e., the 9-way probabilities, for each pixel. Although stacking convolution layers can enlarge the receptive field and help estimate the _pixel-grid wise_ probabilities, introducing low-level features through skip connections in the final layer pollutes the probabilities with added _pixel-pixel wise_ information, since the ultimate target is to predict the association between a target pixel and _its 9 neighbor grids_ rather than its 9 neighbor pixels. To tackle this weakness, we propose to directly implant the grid features into the surrounding of the corresponding pixel using an association implantation (AI) module. Fig. 1 illustrates the core idea of our AI module: before feeding the last features into the prediction layer, the AI module is applied. For each pixel, we place the corresponding grid features at its neighbor positions, then apply a convolution with a $3\times 3$ kernel; this convolution no longer captures the pixel-pixel relation but _the relation between the pixel and its 9 neighbor grids_, providing context consistent with the target of superpixel segmentation. Our proposed AI module offers a simple and intuitive way for the network to harvest pixel-neighbor cell context in an explicit fashion, which is exactly what superpixel segmentation requires. Compared to existing methods, such a design is more consistent with the target of superpixel segmentation and gives more beneficial support for the subsequent association map inference. Besides, a satisfactory superpixel algorithm should accurately identify boundary pixels; however, designs towards this target are still missing among existing works. 
To pursue better boundary precision, we augment the optimization with a boundary-perceiving loss. Specifically, we first sample a set of small local patches on the pixel embedding map along the boundaries. Then, the features with the same/different labels in each patch are treated as positive/negative samples, on which a classification procedure is performed to enhance the compactness of features with the same label while distinguishing features with different semantics. Our boundary-perceiving loss encourages the model to pay more attention to discriminating the features around boundaries; consequently, more boundary pixels can be identified. Quantitative and qualitative results on the BSDS500 [3] and NYUv2 [32] datasets demonstrate that the proposed method outperforms the state-of-the-art superpixel segmentation methods. In summary, we make the following contributions in this work: * • We propose a novel AI module to directly capture the relation between each pixel and its surrounding grid cells; this design yields an architecture more consistent with the target of superpixel segmentation. * • A boundary-perceiving loss is designed to discriminate features with different semantic labels around boundaries, which helps the network accurately identify boundary pixels and improves the boundary precision. ## 2 Related Work Superpixel Segmentation Superpixel segmentation is a well-defined problem with a long line of research [33, 26, 19, 38, 5, 16]. Traditional superpixel algorithms can be broadly classified into graph-based and clustering-based approaches. Graph-based methods consider the pixels as nodes and the edges as the strength of connectivity between adjacent pixels. Consequently, superpixel segmentation can be formulated as a graph-partitioning problem. Widely-used algorithms such as Felzenszwalb and Huttenlocher (FH) [8] and entropy rate superpixels (ERS) [22] belong to this category. 
On the other hand, clustering-based approaches utilize classic clustering techniques such as $k$-means to compute the connectivity between anchor pixels and their neighbors. Well-known methods in this category include SLIC [1], LSC [20], Manifold-SLIC [24] and SNIC [2]. Inspired by the success of deep learning techniques, researchers have recently attempted to utilize deep neural networks to learn the membership of each pixel to its surrounding grid cells. Jampani et al. [15] develop the first differentiable deep network motivated by the classic SLIC method, and Yang et al. [41] further simplify the framework and contribute a more efficient model. Figure 2: The framework of our AINet. The network takes an image as input and outputs the association map. The superpixel embedding and pixel embedding are first obtained by convolutions and then fed into the AI module to obtain the pixel-superpixel context. The local patch loss is applied on the pixel-wise embeddings to boost the boundary precision. In the AI module, the sampling interval is set to 16, and each block indicates a pixel or superpixel embedding. Application of Superpixels The pre-computed superpixel segmentation can be viewed as a type of weak label or prior knowledge that benefits many downstream tasks; it can be integrated into a deep learning pipeline to provide guidance so that important image properties (e.g., boundaries) are better preserved [10, 35, 42, 4, 21]. For example, Kwak et al. [18] utilize superpixel segmentation to perform region-wise pooling so that the pooled features have better semantic compactness. In [27], Ouyang et al. consider superpixels as pseudo labels and attempt to boost image segmentation by identifying more semantic boundaries. Besides benefiting image segmentation and feature pooling, superpixels also provide flexible ways to represent image data. He et al. 
[13] convert 2D image patterns into a 1D sequential representation; such a novel representation allows a deep network to explore the long-range context of the image. Liu et al. [23] learn the similarity between different superpixels; the developed framework can produce segmentation regions of different granularity by merging superpixels according to the learned similarity. ## 3 Preliminaries Before delving into the details of our method, we first introduce the framework of deep-learning based superpixel segmentation, which is also the foundation of this paper. As illustrated in Fig. 1, the image $I$ is partitioned into blocks using a regular grid, and each grid cell is regarded as an initial superpixel seed. For each pixel $p$ in image $I$, superpixel segmentation aims at finding a mapping that assigns each pixel to one of its surrounding seeds, i.e., its 9 neighbors, as shown in Fig. 1. Mathematically, a deep-learning based method feeds the image $I\in\mathcal{R}^{H\times W\times 3}$ to a convolutional neural network and outputs an association map $Q\in\mathcal{R}^{H\times W\times 9}$, which indicates the probability of each pixel belonging to its neighbor cells [15, 41]. Since there is no ground-truth for such an output, the supervision for network training is performed in an indirect fashion: the predicted association map $Q$ serves as an intermediate variable to reconstruct a pixel-wise property $l(p)$ such as the semantic label or the position vector. Consequently, there are two critical steps in the training stage. Step 1: Estimate the superpixel property from the surrounding pixels: $h(s)=\frac{\sum_{p:s\in N_{p}}{l(p)\cdot q(p,s)}}{\sum_{p:s\in N_{p}}q(p,s)}.$ (1) Step 2: Reconstruct the pixel property according to the superpixel neighbors: $l^{{}^{\prime}}(p)=\sum_{s\in N_{p}}h(s)\cdot q(p,s),$ (2) where $N_{p}$ is the set of adjacent superpixels of $p$, and $q(p,s)$ indicates the probability of pixel $p$ being assigned to superpixel $s$. 
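The two reconstruction steps above can be sketched in NumPy. The helper below is illustrative only (the function and argument names are our own, not the paper's): it operates on flattened pixels with an explicit pixel-to-neighbor-superpixel index map rather than the 2D grid layout, which keeps Eqs. 1-2 easy to read.

```python
import numpy as np

def soft_reconstruct(l, q, nbr):
    """Two-step soft reconstruction used to supervise the association map Q.

    l   : (P, C) per-pixel property (e.g. one-hot semantic label)
    q   : (P, K) association of each pixel to its K neighbouring superpixels
    nbr : (P, K) integer ids of those neighbouring superpixels
    Returns the reconstructed per-pixel property l' of shape (P, C).
    """
    P, K = q.shape
    S = int(nbr.max()) + 1
    C = l.shape[1]
    # Step 1 (Eq. 1): h(s) = sum_p l(p) q(p,s) / sum_p q(p,s)
    num = np.zeros((S, C))
    den = np.zeros(S)
    for p in range(P):
        for k in range(K):
            s = nbr[p, k]
            num[s] += l[p] * q[p, k]
            den[s] += q[p, k]
    h = num / den[:, None]
    # Step 2 (Eq. 2): l'(p) = sum_{s in N_p} h(s) q(p,s)
    lp = np.zeros((P, C))
    for p in range(P):
        for k in range(K):
            lp[p] += h[nbr[p, k]] * q[p, k]
    return lp
```

When every pixel is assigned with weight 1 to a single superpixel that only it contributes to, the reconstruction is exact, which is the intuition behind using the reconstruction error to supervise $Q$.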
Thus, the training loss optimizes the distance between the ground-truth property and the reconstructed one: $\mathcal{\bm{L}}(Q)=\sum_{p}dist(l(p),l^{{}^{\prime}}(p)).$ (3) Following Yang’s practice [41], the pixel properties in this paper include the semantic label and the position vector, i.e., the two-dimensional spatial coordinates, which are optimized by the cross-entropy loss and the $\mathcal{L}_{2}$ reconstruction loss, respectively. ## 4 Methodology An overview of our proposed AINet is shown in Fig. 2. The overall architecture follows an encoder-decoder paradigm: the encoder compresses the input image and outputs a feature map called the superpixel embedding, whose pixels exactly encode the features of the grid cells. Subsequently, the superpixel embedding is fed into the decoder to produce the association map. Meanwhile, the superpixel embedding and the pixel embedding from the decoding stage are combined to perform the association implantation, and the boundary-perceiving loss also acts on the pixel embedding. Hereinafter, we elaborate the details of our proposed AI module and boundary-perceiving loss. ### 4.1 Association Implantation Module To enable the network to explicitly perceive the relation between each pixel and its surrounding grid cells, we propose an association implantation module that performs a direct interaction between the pixel and its neighbor cells. As shown in the top right of Fig. 2, we first obtain the embeddings of superpixels and pixels via the convolutional network. Then, for each pixel embedding, the corresponding neighbor superpixel features are picked and implanted into its surrounding. Finally, a convolution with kernel size $3\times 3$ is conducted on the expanded pixel embedding to achieve the knowledge propagation. 
Formally, let $e_{p}\in\mathcal{R}^{D}$ be the embedding of pixel $p$ from the pixel embedding $E\in\mathcal{R}^{H\times W\times D}$, which is obtained by the deep neural network as shown in Fig. 2. To obtain the embeddings of the grid cells, i.e., the superpixel embedding, we downsample the input image $\log_{\text{2}}S$ times using multiple convolution and max-pooling operations, where $S$ is the sampling interval of the grid. For example, if the sampling interval is 16, we downsample the image 4 times. This results in a feature map $M\in\mathcal{R}^{h\times w\times D^{{}^{\prime}}}$ whose pixels exactly encode the features of the grid cells, where $h=H/S$ and $w=W/S$. To perform the implantation operation on the pixel embedding, we first adjust the channels of $M$ using two $3\times 3$ convolutions, producing a new map $\hat{M}\in\mathcal{R}^{h\times w\times D}$. Then, for pixel $p$, we pick its 9 adjacent superpixel embeddings from left to right and top to bottom: $\\{\hat{m}_{tl},\hat{m}_{t},\hat{m}_{tr},\hat{m}_{l},\hat{m}_{c},\hat{m}_{r},\hat{m}_{bl},\hat{m}_{b},\hat{m}_{br}\\}$ from $\hat{M}$. To allow the network to explicitly capture the relation between pixel $p$ and its neighbor cells, we directly implant the superpixel embeddings into the surrounding of pixel $p$ to provide pixel-superpixel context: $SP=\left[\begin{matrix}\hat{m}_{tl}&\hat{m}_{t}&\hat{m}_{tr}\\\ \hat{m}_{l}&\hat{m}_{c}+e_{p}&\hat{m}_{r}\\\ \hat{m}_{bl}&\hat{m}_{b}&\hat{m}_{br}\end{matrix}\right].$ (4) It is worth noting that pixels in the same initial grid cell share the same surrounding superpixels, since they degrade into one element in the superpixel view. We then adopt a $3\times 3$ convolution to adaptively distill information from the expanded window to benefit the subsequent association map inference: $e^{{}^{\prime}}_{p}=\sum_{ij}SP_{ij}\times w_{ij}+b,$ (5) where $w$ and $b$ are the convolution weight and bias, respectively. 
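A direct (unoptimized) NumPy sketch of the implantation and the $3\times 3$ convolution of Eqs. 4-5 is given below. All names are our own, and the border handling (clamping out-of-range grid cells to the image edge) is an assumption, since the paper does not specify how boundary cells are padded.

```python
import numpy as np

def implant_and_convolve(E, M_hat, w, b, S):
    """Association implantation sketch (Eqs. 4-5).

    E     : (H, W, D) pixel embeddings
    M_hat : (h, w, D) channel-adjusted superpixel embeddings, h = H // S
    w     : (3, 3, D, D) weights of the 3x3 convolution; b : (D,) bias
    S     : sampling interval of the grid
    Returns the new pixel embedding E' of shape (H, W, D).
    """
    H, W, D = E.shape
    gh, gw = M_hat.shape[:2]
    out = np.zeros_like(E)
    for y in range(H):
        for x in range(W):
            gy, gx = y // S, x // S          # grid cell containing pixel (y, x)
            SP = np.zeros((3, 3, D))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # clamp to the border (assumed padding scheme)
                    ny = min(max(gy + dy, 0), gh - 1)
                    nx = min(max(gx + dx, 0), gw - 1)
                    SP[dy + 1, dx + 1] = M_hat[ny, nx]
            SP[1, 1] += E[y, x]              # centre holds m_c + e_p (Eq. 4)
            # Eq. 5: 3x3 convolution over the implanted window
            out[y, x] = np.einsum('ijd,ijde->e', SP, w) + b
    return out
```

In a real network this per-pixel loop would be replaced by an unfold-and-convolve operation on the GPU; the loop form is only meant to make the correspondence with Eq. 4 explicit.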
We traverse all pixel embeddings in $E$ and apply the operations in Eqs. 4–5; thus, we obtain a new pixel embedding $E^{{}^{\prime}}$ whose elements capture pixel-superpixel level context. The feature map $E^{{}^{\prime}}$ is then fed through a convolution layer to predict the association map $Q$. As shown in Eqs. 4–5, our AI module directly places the neighbor cell embeddings in the surrounding of the pixel to provide the context required by superpixel segmentation, which is an intuitive and reasonable solution. Compared with existing methods that use stacked convolutions to accumulate pixel-wise relations, the pixel-superpixel context captured by our AI module is more in line with the target of superpixel segmentation. ### 4.2 Boundary-Perceiving Loss Our boundary-perceiving loss is proposed to help the network appropriately assign the pixels around boundaries. As shown in the bottom right of Fig. 2, we first sample a series of patches of a certain size (e.g., 5$\times$5) around boundaries in the pixel embedding map, and then a classification procedure is conducted to improve the discrimination of the different semantic features. Formally, let $E\in\mathcal{R}^{H\times W\times D}$ be the pixel-wise embedding map. Since the ground-truth label is available during the training stage, we can sample a local patch $B\in\mathcal{R}^{K\times K\times D}$ surrounding a pixel on a semantic boundary. For simplicity, the patch $B$ only covers pixels from two different semantic regions, that is, $B=\\{f_{1},\cdots,f_{m},g_{1},\cdots,g_{n}\\}$, where $f,g\in\mathcal{R}^{D},m+n=K^{2}$. Intuitively, we want features of the same category to be closer, while embeddings with different labels should be far from each other. 
To this end, we evenly partition the features of each category into two groups, $\bm{f}^{1},\bm{f}^{2},\bm{g}^{1},\bm{g}^{2}$, and employ a classification-based loss to enhance the discrimination of the features: $\displaystyle\mathcal{\bm{L}}_{B}$ $\displaystyle=-\frac{1}{2}(\log(sim(\bm{\mu}_{f1},\bm{\mu}_{f2}))+\log(sim(\bm{\mu}_{g1},\bm{\mu}_{g2})))$ (6) $\displaystyle-\frac{1}{2}(\log(1-sim(\bm{\mu}_{f1},\bm{\mu}_{g1}))+\log(1-sim(\bm{\mu}_{f2},\bm{\mu}_{g2}))),$ where $\bm{\mu}_{f1}$ is the average representation of $\bm{f}^{1}$, and the function $sim(\cdot,\cdot)$ is the similarity measure between two vectors: $\displaystyle\bm{\mu}_{f1}=\frac{1}{|\bm{f}^{1}|}\sum_{f\in\bm{f}^{1}}f,$ (7) $\displaystyle sim(f,g)=\frac{2}{1+\exp(||f-g||_{1})}.$ (8) Taking all sampled patches $\mathcal{\bm{B}}$ into consideration, our full boundary-perceiving loss is formulated as follows: $\mathcal{\bm{L}}_{\mathcal{B}}=\frac{1}{|\mathcal{\bm{B}}|}\sum_{B\in\mathcal{\bm{B}}}{\mathcal{L}_{B}}.$ (9) Overall, the full loss for network training comprises three components, i.e., the cross-entropy ($CE$) and $\mathcal{L}_{2}$ reconstruction losses for the semantic label and position vector according to Eq. 3, and our boundary-perceiving loss: $\mathcal{\bm{L}}=\sum_{p}CE(l^{{}^{\prime}}_{s}(p),l_{s}(p))+\lambda||p-p^{{}^{\prime}}||_{2}^{2}+\alpha\mathcal{\bm{L}}_{\mathcal{B}}$ (10) where $l^{{}^{\prime}}_{s}(p)$ is the semantic label reconstructed from the predicted association map $Q$ and the ground-truth label $l_{s}(p)$ according to Eqs. 1–2, and $\lambda$, $\alpha$ are two trade-off weights. ## 5 Experiments Datasets. We conduct experiments on two public benchmarks, BSDS500 [3] and NYUv2 [32], to evaluate the effectiveness of our method. BSDS500 comprises 200 training, 100 validation and 200 test images, and each image is annotated with multiple semantic labels by different experts. 
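Eqs. 6-8 for a single patch can be written compactly. The function below is a sketch with our own names, taking the features of the two semantic regions as pre-separated arrays; since $sim$ is 1 for identical vectors and decays towards 0 with the $\mathcal{L}_1$ distance, the loss rewards compact within-region means and well-separated cross-region means.

```python
import numpy as np

def sim(f, g):
    # Eq. 8: similarity in (0, 1], equal to 1 when f == g
    return 2.0 / (1.0 + np.exp(np.abs(f - g).sum()))

def boundary_patch_loss(f, g):
    """Boundary-perceiving loss for one patch (Eq. 6), a sketch.

    f : (m, D) features of the first semantic region in the patch
    g : (n, D) features of the second semantic region
    Each region is split evenly into two groups; their mean embeddings are
    pulled together within a region and pushed apart across regions.
    """
    mu = lambda x: x.mean(axis=0)                       # Eq. 7
    f1, f2 = f[: len(f) // 2], f[len(f) // 2 :]
    g1, g2 = g[: len(g) // 2], g[len(g) // 2 :]
    pos = np.log(sim(mu(f1), mu(f2))) + np.log(sim(mu(g1), mu(g2)))
    neg = np.log(1 - sim(mu(f1), mu(g1))) + np.log(1 - sim(mu(f2), mu(g2)))
    return -0.5 * (pos + neg)
```

The full loss of Eq. 9 is simply this quantity averaged over all sampled boundary patches.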
For a fair comparison, we follow previous works [41, 15, 37] and treat each annotation as an individual sample, yielding 1,087 training, 546 validation and 1,063 test samples. NYUv2 is an indoor scene understanding dataset containing 1,449 images with object instance labels. To evaluate superpixel methods, Stutz et al. [33] removed the unlabelled regions near the boundary and collected a subset of 400 test images of size 608$\times$448 for superpixel evaluation. Following Yang’s practice [41], we conduct a standard train and test pipeline on the BSDS500 dataset. On the NYUv2 dataset, we directly apply the model trained on BSDS500 and report the performance on the 400 test images to evaluate the generality of the model. (a) Patch shuffle (b) Random shift Figure 3: Illustrations of our patch jitter augmentation, i.e., patch shuffle and random shift. Color frames indicate the changed regions. (a) BR-BP on BSDS500 (b) ASA on BSDS500 (c) BR-BP on NYUv2 (d) ASA on NYUv2 Figure 4: Performance comparison on the BSDS500 and NYUv2 datasets. (a) Inputs (b) GT label (c) SEAL [37] (d) SCN [41] (e) SSN [15] (f) AINet Figure 5: Qualitative results of four SOTA superpixel methods, SEAL, SCN, SSN, and our AINet. The top row exhibits results from the BSDS500 dataset, while the bottom row shows superpixels on the NYUv2 dataset. Augmentation via Patch Jitter. To further improve the performance and enhance the generality of our model, we propose to augment the data by jittering image patches. Specifically, the proposed patch jitter augmentation comprises two components, i.e., patch shuffle and random shift. Fig. 3 shows examples of these two types of data augmentation. Patch shuffle first samples two image patches of shape $S\times S$ and then randomly exchanges them to extend the image patterns; the corresponding ground-truth patches are also exchanged accordingly to maintain consistency. 
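The patch-shuffle component can be sketched as follows. This is a simplified illustration with our own names: the random-replacement trick and special handling of overlapping patches are omitted, and the key invariant is that image and label are always transformed identically.

```python
import numpy as np

def patch_shuffle(img, label, S, rng):
    """Patch-shuffle augmentation sketch: sample two S x S patches and swap
    them in both the image and its ground-truth label map."""
    H, W = img.shape[:2]
    (y1, y2) = rng.integers(0, H - S, size=2)
    (x1, x2) = rng.integers(0, W - S, size=2)
    img, label = img.copy(), label.copy()
    # swap the image patches
    tmp = img[y1:y1 + S, x1:x1 + S].copy()
    img[y1:y1 + S, x1:x1 + S] = img[y2:y2 + S, x2:x2 + S]
    img[y2:y2 + S, x2:x2 + S] = tmp
    # swap the corresponding ground-truth patches to keep them consistent
    tmp = label[y1:y1 + S, x1:x1 + S].copy()
    label[y1:y1 + S, x1:x1 + S] = label[y2:y2 + S, x2:x2 + S]
    label[y2:y2 + S, x2:x2 + S] = tmp
    return img, label
```

Because both maps undergo the same swap, any per-pixel relation between image and label (here used as the test invariant) is preserved after the augmentation.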
To further augment the data, we randomly pick one of the two selected patches and replace it with a random patch, whose ground-truth is assigned a new label. The random shift can be conducted along the horizontal or vertical direction. For a horizontal random shift, we first randomly sample a patch of shape $S\times L$, where $L=\text{rand\\_int}(S,W)$, and a random offset $o=\text{rand\\_int}(0,S)$. Then, we perform a cyclic translation of the patch by offset $o$ towards the left or right. Meanwhile, the random patch trick from patch shuffle can also be adopted. Finally, the augmentation is done by replacing the original patch with the new one. The augmentation along the vertical direction is done analogously. Implementation Details. In the training stage, the image is randomly cropped to 208$\times$208 as input, and the network is trained using the Adam optimizer [17] for 4K iterations with batch size 16. The learning rate starts at 8e-5 and is halved every 2K iterations. The sampling interval is fixed at 16; consequently, the encoder employs 4 convolution&pooling operations to obtain the superpixel embedding of shape $13\times 13\times 256$. The decoder then produces the pixel embedding of shape $208\times 208\times 16$ using 4 convolution&deconvolution operations. The channels of the superpixel embedding are first compressed by two convolution layers, 256$\Rightarrow$64$\Rightarrow$16, and then our AI module is applied. The boundary-perceiving loss also acts on the pixel embedding, where the patch size is set to 5, i.e., $K=5$. Finally, two convolution layers are stacked to predict the association map $Q$ of shape $208\times 208\times 9$. In our practice, simultaneously applying the boundary-perceiving loss and the AI module from the start does not further improve performance; therefore, we first train the network using the first two terms in Eq. 
10 for 3K iterations, and then fine-tune with the boundary-perceiving loss for 1K iterations. Following Yang’s practice [41], the weight of the position reconstruction loss is set to 0.003/16, while the weight of our boundary-perceiving loss is fixed at 0.5, i.e., $\lambda=0.003/16,\alpha=0.5$. In testing, we employ the same strategy as [41] to produce varying numbers of superpixels. Several methods are considered for performance comparison, including the classic methods SLIC [1], LSC [20], ERS [22], SEEDS [6] and SNIC [2], and the deep learning-based methods SEAL [37], SSN [15] and SCN [41]. We use the OpenCV implementations of SLIC, LSC and SEEDS. For the other methods, we use the official implementations with the parameters recommended by the authors. Evaluation Metrics. We use three popular metrics, achievable segmentation accuracy (ASA), boundary recall (BR) and boundary precision (BP), to evaluate superpixel performance. The ASA score measures the upper bound on the achievable segmentation accuracy when using superpixels as a pre-processing step, while BR and BP assess how well the superpixel model identifies semantic boundaries. Higher values of these metrics indicate better superpixel segmentation performance. More detailed descriptions and analysis of these three metrics can be found in [33]. ### 5.1 Comparison with the state-of-the-arts Fig. 4 reports the quantitative comparison results on the BSDS500 and NYUv2 test sets. As indicated in Fig. 4, our AINet attains the best ASA score and BR-BP on both datasets. With the help of deep convolutional networks, SEAL, SCN, SSN and AINet achieve superior or comparable performance against the traditional superpixel algorithms, and our AINet is the best model among them. As shown in Fig. 4 (a)-(b), AINet surpasses the traditional methods by a large margin on the BSDS500 dataset. 
By harvesting the pixel-superpixel level context and highlighting the boundaries, AINet also outperforms the deep methods SEAL, SCN and SSN. Fig. 4 (c)-(d) shows the performance when adapting to the NYUv2 test set; we can observe that AINet also generalizes better. Although its BR-BP is comparable with SCN and SSN, our ASA score is higher than those of all the competing methods. Figure 6: Ablation study on BSDS500. The left figure shows the contributions of each component in our system, while the right one discusses two variations of $SP$ (Eq. 4). (a) Images (b) SEAL [37] (c) SCN [41] (d) SSN [15] (e) AINet (f) GT label I: The generated proposals from DEL [23] using different superpixels. (a) Images (b) Threshold=0.3 (c) Threshold=0.4 (d) Threshold=0.5 (e) Threshold=0.6 (f) GT label II: The generated proposals using different thresholds (1 as upper bound). Figure 7: Qualitative proposals from DEL [23] using different superpixels (I), and the results of DEL [23] with our superpixels using different thresholds (II), where threshold=0.3 means merging adjacent superpixels if their similarity is above 0.3. Fig. 5 shows the qualitative results of four state-of-the-art methods on the BSDS500 and NYUv2 datasets. Compared with the competing methods, the boundaries of our results are more accurate and clearer, which intuitively shows the superiority of our method. ### 5.2 Ablation Study To validate the respective contributions of our proposed modules, including the data augmentation trick, the AI module and the boundary-perceiving loss, we conduct an ablation study on the BSDS500 dataset. The left figure in Fig. 6 reports the performance of all methods, where BPL denotes the boundary-perceiving loss, and BPL+PJ stands for the baseline simultaneously equipped with the boundary-perceiving loss and the patch jitter augmentation. From Fig. 
6, we can observe that individually applying each of the three modules to the baseline method boosts performance, with the boundary-perceiving loss contributing the largest gains. Combining the patch jitter augmentation with either BPL or the AI module improves performance further, and the AI module equipped with the data augmentation achieves the better result. When all three modules are employed simultaneously, we obtain the best BR-BP. Besides, we also discuss two alternative choices of $SP$ (Eq. 4): a greedy version that further adds the neighbor pixels to the corresponding surrounding superpixels, analogous to the central position (for example, $\hat{m}_{t}$ is replaced by $\hat{m}_{t}+e_{t}$); and a simplified version that ignores the central superpixel, i.e., $\hat{m}_{c}+e_{p}$ becomes $e_{p}$. The models with these two versions of $SP$ are denoted AINet-PNbor and AINet-CPix, respectively. The right figure of Fig. 6 shows the results; we can observe that AINet-PNbor and AINet-CPix both surpass the baseline but perform a little worse than AINet. By summing the neighbor pixels, AINet-PNbor integrates pixel-wise relations, but the sum operation also weakens the superpixel embedding, which hinders capturing the pixel-superpixel context. For AINet-CPix, the excluded $\hat{m}_{c}$ is also one of the neighbor superpixels; directly abandoning $\hat{m}_{c}$ fails to explicitly perceive the relation between pixel $e_{p}$ and the central superpixel $\hat{m}_{c}$. Consequently, neither variation of $SP$ is as effective at capturing the pixel-superpixel context. Figure 8: The average time costs of four deep learning based methods w.r.t. the number of superpixels. The runtime is incremented by 1 and log-scaled to show positive values and a clear tendency. Figure 9: The ASA scores of four state-of-the-art methods on object proposal generation. 
### 5.3 Inference Efficiency Besides performance, inference speed is also an important concern. Therefore, we conduct experiments on the BSDS500 dataset to investigate the inference efficiency of four deep learning-based methods. For a fair comparison, we only count the time of network inference and post-processing steps (if any). All methods run on the same workstation with an NVIDIA 1080Ti GPU and an Intel E5 CPU. The time costs of the four deep learning-based methods, SEAL, SCN, SSN and our AINet, are reported in Fig. 8. SCN achieves the best inference efficiency due to its simple architecture, while our AINet introduces more layers and operations; consequently, its inference is slightly slower than SCN. The superpixel segmentation of SEAL and SSN is much more complex compared with SCN and our AINet: SEAL needs to first output the learned deep features and then feed them to a traditional algorithm to conduct superpixel segmentation, and SSN further performs $k$-means iterations after obtaining the pixel affinity. As a result, SEAL and SSN both cost much more time in the inference stage. Although SCN is faster, the performance of AINet is much better than that of SCN. Compared with these competing methods, our AINet achieves a good trade-off between performance and inference efficiency. ### 5.4 Application on Object Proposal Generation Image annotation is one of the important application scenarios for superpixels, since they can identify semantic boundaries and provide the outlines of many semantic regions. To generate object proposals, Liu et al. [23] propose a model named DEL, which first estimates the similarities between superpixels and merges them according to a certain threshold, by which the method can flexibly control the grain size of object proposals. 
In this subsection, we feed the superpixels from four state-of-the-art methods, SEAL, SCN, SSN and our AINet, into the framework of [23] to further investigate the superiority of our AINet. To evaluate the performance, we use the ASA score to measure how well the produced object proposals cover the ground-truth labels: $ASA(O,G)=\frac{1}{N}\sum_{O_{k}}\max_{G_{k}}{\\{|O_{k}\cap G_{k}|\\}},$ (11) where $N$ is the number of generated object proposals $O$, and $G$ is the ground-truth semantic label. The performance of all methods is reported in Fig. 9, from which we can observe that the average performance of our AINet is superior. Fig. 7 I shows three results of DEL [23] with the superpixels from the four methods; different thresholds are used to produce proposals of varied sizes: adjacent superpixels are merged if their similarity is above the threshold, so a higher value produces finer object proposals. As shown in Fig. 7 I, our AINet generates more satisfactory object proposals compared with the competing methods, which validates the effectiveness of our proposed method. Fig. 7 II exhibits the results using the superpixels of our AINet with different thresholds; object proposals of varying sizes can be generated by adjusting the threshold. ## 6 Conclusion We have presented an association implantation network for the superpixel segmentation task. A novel association implantation module is proposed to provide pixel-superpixel level context consistent with the target of superpixel segmentation. To pursue better boundary precision, a boundary-perceiving loss is designed to improve the discrimination of pixels around boundaries at the hidden feature level, and a data augmentation scheme named patch jitter is developed to further improve performance. Experiments on two popular benchmarks show that the proposed method achieves state-of-the-art performance with good generalizability. 
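Eq. 11 can be sketched as below. The function name is ours, and we normalize by the total pixel count so the score lies in [0, 1]; this normalization is an assumption about how the per-proposal overlaps are aggregated, made so that a perfect segmentation scores exactly 1.

```python
import numpy as np

def proposal_asa(O, G):
    """ASA in the spirit of Eq. 11: for each proposal region, take its best
    overlap with any ground-truth segment, then normalize by total pixels.

    O, G : integer label maps of the same shape (proposals / ground truth).
    """
    total = 0.0
    g_ids = np.unique(G)
    for o in np.unique(O):
        mask = (O == o)
        # best achievable overlap of this proposal with one GT segment
        best = max(np.logical_and(mask, G == g).sum() for g in g_ids)
        total += best
    return total / O.size
```

A proposal map that refines the ground truth (every proposal lies inside one segment) scores 1, which matches the interpretation of ASA as an upper bound on achievable segmentation accuracy.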
Moreover, the superpixels produced by our method also perform well when applied to object proposal generation. In the future, we will continue to study the effectiveness of the proposed AINet on the stereo matching task. ## References * [1] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurélien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), 34(11):2274–2282, 2012. * [2] Radhakrishna Achanta and Sabine Süsstrunk. Superpixels and polygons using simple non-iterative clustering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4895–4904, June 2017. * [3] Pablo Arbelaez, Michael Maire, Charless C. Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011. * [4] András Bódis-Szomorú, Hayko Riemenschneider, and Luc Van Gool. Superpixel meshes for fast edge-preserving surface reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2011–2020, June 2015. * [5] Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell., 24(5):603–619, 2002. * [6] Michael Van den Bergh, Xavier Boix, Gemma Roig, and Luc Van Gool. SEEDS: superpixels extracted via energy-driven sampling. Int. J. Comput. Vis. (IJCV), 111(3):298–314, 2015. * [7] Xiaoyi Dong, Jiangfan Han, Dongdong Chen, Jiayang Liu, Huanyu Bian, Zehua Ma, Hongsheng Li, Xiaogang Wang, Weiming Zhang, and Nenghai Yu. Robust superpixel-guided attentional adversarial attack. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12892–12901, June 2020. * [8] Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient graph-based image segmentation. Int. J. Comput. Vis. (IJCV), 59(2):167–181, 2004. 
* [9] Raghudeep Gadde, Varun Jampani, Martin Kiefel, Daniel Kappler, and Peter V. Gehler. Superpixel convolutional networks using bilateral inceptions. In European Conference on Computer Vision (ECCV), pages 597–613, Oct. 2016. * [10] Utkarsh Gaur and B. S. Manjunath. Superpixel embedding network. IEEE Trans. Image Process., 29:3199–3212, 2020. * [11] Shengfeng He, Rynson W. H. Lau, Wenxi Liu, Zhe Huang, and Qingxiong Yang. Supercnn: A superpixelwise convolutional neural network for salient object detection. Int. J. Comput. Vis. (IJCV), 115(3):330–344, 2015. * [12] Shengfeng He, Rynson W. H. Lau, Wenxi Liu, Zhe Huang, and Qingxiong Yang. Supercnn: A superpixelwise convolutional neural network for salient object detection. Int. J. Comput. Vis. (IJCV), 115(3):330–344, 2015. * [13] Shengfeng He, Rynson W. H. Lau, Wenxi Liu, Zhe Huang, and Qingxiong Yang. Supercnn: A superpixelwise convolutional neural network for salient object detection. Int. J. Comput. Vis. (IJCV), 115(3):330–344, 2015. * [14] Yinlin Hu, Rui Song, Yunsong Li, Peng Rao, and Yangli Wang. Highly accurate optical flow estimation on superpixel tree. Image Vis. Comput., 52:167–177, 2016. * [15] Varun Jampani, Deqing Sun, Ming-Yu Liu, Ming-Hsuan Yang, and Jan Kautz. Superpixel sampling networks. In European Conference on Computer Vision (ECCV), pages 363–380, Sep. 2018. * [16] Xuejing Kang, Lei Zhu, and Anlong Ming. Dynamic random walk for superpixel segmentation. IEEE Trans. Image Process., 29:3871–3884, 2020. * [17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), May 2015. * [18] Suha Kwak, Seunghoon Hong, and Bohyung Han. Weakly supervised semantic segmentation using superpixel pooling network. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 4111–4117, Feb. 2017. * [19] Alex Levinshtein, Adrian Stere, Kiriakos N. Kutulakos, David J. Fleet, Sven J. Dickinson, and Kaleem Siddiqi. 
Turbopixels: Fast superpixels using geometric flows. IEEE Trans. Pattern Anal. Mach. Intell., 31(12):2290–2297, 2009\. * [20] Zhengqin Li and Jiansheng Chen. Superpixel segmentation using linear spectral clustering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1356–1363, 2015. * [21] Zhenguo Li, Xiao-Ming Wu, and Shih-Fu Chang. Segmentation using superpixels: A bipartite graph partitioning approach. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 789–796, June 2012. * [22] Ming-Yu Liu, Oncel Tuzel, Srikumar Ramalingam, and Rama Chellappa. Entropy rate superpixel segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2097–2104, June 2011. * [23] Yun Liu, Peng-Tao Jiang, Vahan Petrosyan, Shi-Jie Li, Jiawang Bian, Le Zhang, and Ming-Ming Cheng. DEL: deep embedding learning for efficient image segmentation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), pages 864–870, July 2018. * [24] Yong-Jin Liu, Cheng-Chi Yu, Minjing Yu, and Ying He. Manifold SLIC: A fast method to compute content-sensitive superpixels. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 651–659, 2016. * [25] Jiangbo Lu, Hongsheng Yang, Dongbo Min, and Minh N. Do. Patch match filter: Efficient edge-aware filtering meets randomized search for fast correspondence field estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1854–1861, June 2013. * [26] Vaïa Machairas, Matthieu Faessel, David Cárdenas-Peña, Théodore Chabardès, Thomas Walter, and Etienne Decencière. Waterpixels. IEEE Trans. Image Process., 24(11):3707–3716, 2015. * [27] Cheng Ouyang, Carlo Biffi, Chen Chen, Turkay Kart, Huaqi Qiu, and Daniel Rueckert. Self-supervision with superpixels: Training few-shot medical image segmentation without annotation. In European Conference on Computer Version (ECCV), pages 762–780, Aug. 2020. 
* [28] Xiaofeng Ren and Jitendra Malik. Learning a classification model for segmentation. In IEEE International Conference on Computer Vision (ICCV), pages 10–17, Oct. 2003. * [29] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 234–241, Oct. 2015. * [30] Abhishek Sharma, Oncel Tuzel, and Ming-Yu Liu. Recursive context propagation network for semantic scene labeling. In Advances in Neural Information Processing Systems (NIPS), pages 2447–2455, Dec. 2014. * [31] Evan Shelhamer, Jonathan Long, and Trevor Darrell. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 39(4):640–651, 2017\. * [32] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from RGBD images. In European Conference on Computer Version (ECCV), pages 746–760, Oct. 2012. * [33] David Stutz, Alexander Hermans, and Bastian Leibe. Superpixels: An evaluation of the state-of-the-art. Comput. Vis. Image Underst., 166:1–27, 2018. * [34] Deqing Sun, Ce Liu, and Hanspeter Pfister. Local layering for joint motion estimation and occlusion detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1098–1105, June 2014. * [35] Wen Sun, Qingmin Liao, Jing-Hao Xue, and Fei Zhou. SPSIM: A superpixel-based similarity index for full-reference image quality assessment. IEEE Trans. Image Process., 27(9):4232–4244, 2018. * [36] Teppei Suzuki, Shuichi Akizuki, Naoki Kato, and Yoshimitsu Aoki. Superpixel convolution for segmentation. In IEEE International Conference on Image Processing (ICIP), pages 3249–3253, Oct 2018. * [37] Wei-Chih Tu, Ming-Yu Liu, Varun Jampani, Deqing Sun, Shao-Yi Chien, Ming-Hsuan Yang, and Jan Kautz. Learning superpixels with segmentation-aware affinity loss. 
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 568–576, June 2018. * [38] Olga Veksler, Yuri Boykov, and Paria Mehrani. Superpixels and supervoxels in an energy optimization framework. In European Conference on Computer Vision (ECCV), pages 211–224, Sep. 2010. * [39] Koichiro Yamaguchi, David A. McAllester, and Raquel Urtasun. Robust monocular epipolar flow estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1862–1869, June 2013. * [40] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Saliency detection via graph-based manifold ranking. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3166–3173, June 2013. * [41] Fengting Yang, Qian Sun, Hailin Jin, and Zihan Zhou. Superpixel segmentation with fully convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13961–13970, June 2020. * [42] Donghun Yeo, Jeany Son, Bohyung Han, and Joon Hee Han. Superpixel-based tracking-by-segmentation using markov chains. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 511–520, June 2017. * [43] Wangjiang Zhu, Shuang Liang, Yichen Wei, and Jian Sun. Saliency optimization from robust background detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2814–2821, June 2014.
# Semi-classical Lindblad master equation for spin dynamics ###### Abstract We derive the semi-classical Lindblad master equation in phase space for both canonical and non-canonical Poisson brackets using the Wigner-Moyal formalism and the Moyal star-product. The semi-classical limit for canonical dynamical variables, i.e., canonical Poisson brackets, is the Fokker-Planck equation, as derived before. We generalize this limit and show that it holds also for non-canonical Poisson brackets. Examples are gyro-Poisson brackets, which occur in spin ensembles, systems of recent interest in atomic physics and quantum optics. We show that the equations of motion for the collective spin variables are given by the Bloch equations of nuclear magnetization with relaxation. The Bloch and relaxation vectors are expressed in terms of the microscopic operators: the Hamiltonian and the Lindblad functions in the Wigner-Moyal formalism. J. Dubois, Ulf Saalmann and Jan M. Rost Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Straße 38, 01187 Dresden, Germany Keywords: Lindblad master equations, non-canonical variables, Hamiltonian systems, spin systems, classical and semi-classical limit, thermodynamical limit. ## 1 Lindblad quantum master equation Since perfect isolation of a system is an idealization [1, 2], open microscopic systems, which interact with an environment, are prevalent in nature as well as in experiments. Examples include superradiance [3, 4], Cooper-pair pumping [5], or non-equilibrium nuclear processes [6]. In such situations energy can be exchanged between the system and surrounding particles. The ensuing energy loss leads to dissipation and fluctuation. A description of the dynamics which fully accounts for the interactions between the system and its environment often implies equations which cannot be solved even numerically in a reasonable amount of time.
If the relaxation time of the environment is short compared to the typical timescale of the system, a Markovian approximation can be used. As a result, the quantum dynamics of the open microscopic system described by the density matrix $\hat{\rho}$ is governed by the Lindblad master equation [1] $\frac{\partial}{\partial t}\hat{\rho}=-\frac{\mathrm{i}}{\hbar}\left[\hat{H},\hat{\rho}\right]+\frac{1}{\hbar}\sum_{k=1}^{k_{\rm max}}\left(\left[\hat{L}_{k}\hat{\rho},\hat{L}^{\dagger}_{k}\right]+\left[\hat{L}_{k},\hat{\rho}\hat{L}^{\dagger}_{k}\right]\right),$ (1) where $[\cdot,\cdot]$ denotes the commutator. Equation (1) has a Hamiltonian part, with the Hamilton operator $\hat{H}$, and a dissipative part resulting from the interactions with the environment, with Lindblad operators $\hat{L}_{k}$, also referred to as jump operators; the number of Lindblad operators $k_{\rm max}$ can be arbitrarily large. The Lindblad master equation (1) is applied to problems in atomic physics [7, 4], quantum optics [8, 9], condensed matter [10, 11], quantum information [12, 13] and decoherence [14, 15]. In general, the simulation of equation (1) in the thermodynamic limit is extremely time-consuming and numerically demanding, even with quantum-jump methods [16]. However, some quantum systems whose dynamics is accurately described by equation (1) exhibit classical behavior [3, 4]. Classical behavior of quantum systems often helps in understanding their quantum dynamics and the nonlinear phenomena observed in experiments. For instance, classical trajectories allow one to identify mechanisms in microscopic systems [17, 18], and the comparison of classical and quantum solutions with experimental results reveals which effects observed in experiments are inherently quantum. A bridge to the classical phenomena can be built with semi-classical master equations derived from the quantum ones.
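As a concrete numerical check of the structure of equation (1), the following Python sketch (our own illustration, not part of the original text; the qubit Hamiltonian and decay operator are arbitrary choices) evaluates the right-hand side of equation (1) for a single qubit and verifies that the generator preserves the trace and Hermiticity of $\hat{\rho}$, as any Lindblad generator must.

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, hbar=1.0):
    # right-hand side of equation (1):
    # drho/dt = -(i/hbar)[H, rho] + (1/hbar) sum_k ([L_k rho, L_k^dag] + [L_k, rho L_k^dag])
    comm = lambda A, B: A @ B - B @ A
    drho = -1j / hbar * comm(H, rho)
    for L in Ls:
        Ld = L.conj().T
        drho += (comm(L @ rho, Ld) + comm(L, rho @ Ld)) / hbar
    return drho

# arbitrary example: a driven qubit with decay (hypothetical parameters)
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)                 # sigma_x drive
L = np.sqrt(0.1) * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)  # lowering operator
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)               # excited state

drho = lindblad_rhs(rho, H, [L])
print(abs(np.trace(drho)))               # 0: the generator is trace-preserving
print(np.allclose(drho, drho.conj().T))  # True: Hermiticity is preserved
```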
Such semi-classical equations can capture inherent quantum effects in the thermodynamic limit and allow us to identify connections between microscopic and macroscopic scales. For canonical variables, the semi-classical limit of equation (1) has been established [19]. Here, we formulate the Lindblad master equation (1) in the Wigner-Moyal formalism [20] and we derive its semi-classical limit for canonical and non-canonical Poisson brackets using the Moyal star-product. In sections 1.1 and 1.2, we recall some properties of the Lindblad quantum master equation (1). In section 2, we formulate the Lindblad master equation (1) in the Wigner-Moyal formalism for canonical and non-canonical Poisson brackets and we derive its semi-classical limit. We show that for gyro-Poisson brackets [21], the semi-classical Lindblad equation corresponds to the Fokker-Planck equation. This result is consistent with the result obtained for canonical Poisson brackets [19]. In section 3, as an example, we use the results obtained in section 2 to derive the Bloch equations from equation (1) and the semi-classical limit of a model for superradiance and spin squeezing. ### 1.1 Mean-field equations We consider a set of quantum operators $(\hat{A}_{1},...,\hat{A}_{N})=\hat{\mathbf{A}}$. The time evolution of the expectation values $\langle\hat{\mathbf{A}}\rangle=\mathrm{tr}(\hat{\rho}\hat{\mathbf{A}})$ is governed by $\frac{\mathrm{d}}{\mathrm{d}t}\left\langle\hat{\mathbf{A}}\right\rangle=\frac{\mathrm{i}}{\hbar}\left\langle\left[\hat{H},\hat{\mathbf{A}}\right]\right\rangle+\frac{1}{\hbar}\sum_{k}\left\langle\hat{L}_{k}^{\dagger}\left[\hat{\mathbf{A}},\hat{L}_{k}\right]-\left[\hat{\mathbf{A}},\hat{L}_{k}^{\dagger}\right]\hat{L}_{k}\right\rangle.$ (2) Equation (2) can be written in the form $\mathrm{d}\langle\hat{\mathbf{A}}\rangle/\mathrm{d}t=\langle\mathbf{u}(\hat{\mathbf{A}})\rangle$, where $\mathbf{u}$ is a function of the Hamilton and Lindblad operators.
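Equation (2) is the adjoint (Heisenberg-picture) form of equation (1). The agreement of the two pictures, $\mathrm{tr}(\dot{\hat{\rho}}\hat{A})=\mathrm{d}\langle\hat{A}\rangle/\mathrm{d}t$, can be checked numerically; the sketch below (our own, using random matrices rather than any specific physical model) compares both sides for a random density matrix and observable.

```python
import numpy as np

rng = np.random.default_rng(0)
comm = lambda X, Y: X @ Y - Y @ X
dagger = lambda X: X.conj().T

def rand_complex(n):
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

n, hbar = 3, 1.0
H = rand_complex(n); H = H + dagger(H)        # Hermitian Hamiltonian
Ls = [rand_complex(n) for _ in range(2)]      # arbitrary Lindblad operators
A = rand_complex(n); A = A + dagger(A)        # Hermitian observable
M = rand_complex(n)
rho = M @ dagger(M); rho /= np.trace(rho)     # a valid density matrix

# Schroedinger picture: d<A>/dt = tr(drho/dt A), with drho/dt from equation (1)
drho = -1j / hbar * comm(H, rho)
for L in Ls:
    drho += (comm(L @ rho, dagger(L)) + comm(L, rho @ dagger(L))) / hbar
lhs = np.trace(drho @ A)

# Heisenberg picture: right-hand side of equation (2)
rhs = 1j / hbar * np.trace(rho @ comm(H, A))
for L in Ls:
    rhs += np.trace(rho @ (dagger(L) @ comm(A, L) - comm(A, dagger(L)) @ L)) / hbar

print(abs(lhs - rhs))  # ~0: both pictures agree
```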
Equation (2) does not necessarily provide a set of coupled ordinary differential equations (ODEs) for the variables $\langle\hat{A}_{k}\rangle$ because of correlation terms of the form $\langle\hat{A}_{k}...\hat{A}_{j}\rangle$. In the _mean-field approximation_ , these terms are assumed to factorize as $\langle\hat{A}_{k}...\hat{A}_{j}\rangle=\langle\hat{A}_{k}\rangle...\langle\hat{A}_{j}\rangle$ and, as a consequence, equation (2) becomes $\frac{\mathrm{d}}{\mathrm{d}t}\big{\langle}\hat{\mathbf{A}}\big{\rangle}=\mathbf{u}\big{(}\big{\langle}\hat{\mathbf{A}}\big{\rangle}\big{)}.$ (3) Equations (3) are referred to as the mean-field equations. They provide a set of coupled ODEs for the expectation values $\langle\hat{A}_{k}\rangle$, which are no longer operators. Therefore, the mean-field approximation is often referred to as the classical limit of the quantum master equations [3, 4]. However, as we will show, the semi-classical limit of the quantum master equation (1) reveals classical correlations between phase-space functions of the same form as those between operators in equation (1), and therefore contains more information than the mean-field equations. Under the mean-field approximation as introduced above, quantum and semi-classical master equations reduce to the same result. ### 1.2 $\mathbb{C}$-number conjugate Lindblad operators $\mathbb{C}$-number conjugate Lindblad (CCL) operators fulfill $\hat{L}_{k}^{\dagger}=c_{k}\hat{L}_{k},$ (4) with $c_{k}\in\mathbb{C}$. Special cases are Hermitian ($c_{k}=1$) and skew-Hermitian ($c_{k}=-1$) Lindblad operators. In section 2 we will show that for CCL operators, there is no dissipation in the classical limit.
Moreover, the Lindblad quantum master equation (1) takes the form $\frac{\partial\hat{\rho}}{\partial t}=-\frac{\mathrm{i}}{\hbar}\left[\hat{H},\hat{\rho}\right]+\frac{1}{\hbar}\sum_{k}c_{k}\left[\left[\hat{L}_{k},\hat{\rho}\right],\hat{L}_{k}\right].$ (5) The diffusion term in equation (5) is of order $\hbar$ since $[\hat{A},\hat{B}]={\cal O}(\hbar)$. This term vanishes in the classical or thermodynamic limit ($\hbar=0$) and therefore we expect no dissipation. In addition, the semi-classical limit of the commutator $[\cdot,\cdot]$ is well known [see equation (15)], and therefore we can easily check from equation (5) that the general semi-classical Lindblad equation derived in section 2 has the correct form for this specific case. ## 2 Derivation of the semi-classical Lindblad equation ### 2.1 Reminder on the Wigner-Moyal formalism and the Moyal star-product The Wigner-Moyal formalism [20] is an alternative but equivalent formulation of quantum mechanics based on a non-commutative algebra in a deformed phase space, also referred to as the deformation quantization of quantum mechanics [22]. In this statistical theory the operators $\hat{F}$ become scalar functions $F(\mathbf{z})$ which depend on phase-space variables $\mathbf{z}=(z_{1},...,z_{n})$. The state of the system is described by the quasi-probability distribution $\rho(\mathbf{z},t)$, where $\rho(\mathbf{z},t)\mathrm{d}^{n}\\!z$ represents the probability that the system is in a small volume $\mathrm{d}^{n}\\!z$ of phase space around $\mathbf{z}$. For a given $\rho(\mathbf{z},t)$, the expectation value of an observable $F$ is given by $\langle F(\mathbf{z})\rangle=\int\rho(\mathbf{z},t)F(\mathbf{z})\;\mathrm{d}^{n}\\!z.$ (6) Also known as the _Wigner quasi-probability distribution_ , $\rho(\mathbf{z},t)$ is analogous to the density matrix.
From the evolution of $\rho(\mathbf{z},t)$, one can determine the time evolution of the physical observables equation (6) and also how the quantum state evolves in phase space. The Wigner-Moyal formalism operates in the Hilbert space of phase-space functions with a Lie algebra. The corresponding Moyal bracket $\llbracket\cdot,\cdot\rrbracket$ is related to the standard Lie bracket $[\cdot,\cdot]$ of quantum operators by $\mathrm{i}\hbar\llbracket F,G\rrbracket\equiv[\hat{F},\hat{G}].$ (7) Over the Hilbert space of phase-space functions one defines a Moyal star-product denoted by $\star$, which is associative. In terms of the Moyal star-product, the Moyal bracket between two observables in phase space $F(\mathbf{z})$ and $G(\mathbf{z})$ reads $\mathrm{i}\hbar\llbracket F,G\rrbracket=F\star G-G\star F.$ (8) The Moyal bracket is also a Lie bracket [22], $\displaystyle\llbracket F,G\rrbracket=-\llbracket G,F\rrbracket,$ (9a) $\displaystyle\llbracket F,G\star H\rrbracket=\llbracket F,G\rrbracket\star H+G\star\llbracket F,H\rrbracket,$ (9b) $\displaystyle\llbracket F,\llbracket G,H\rrbracket\rrbracket+\llbracket G,\llbracket H,F\rrbracket\rrbracket+\llbracket H,\llbracket F,G\rrbracket\rrbracket=0$ (9c) satisfying the properties of antisymmetry (9a), the Leibniz rule (9b) and the Jacobi identity (9c). We note that we can also introduce the Moyal anticommutator $\mathrm{i}\hbar\llparenthesis F,G\rrparenthesis\equiv F\star G+G\star F$. Together with equation (7) and the relation $2\hat{F}\hat{G}=[\hat{F},\hat{G}]+(\hat{F},\hat{G})$, where $(\cdot,\cdot)$ denotes the anticommutator, one can show that a triple product of quantum operators reads in terms of the Wigner-Moyal functions $\hat{F}\hat{G}\hat{H}\equiv F\star(G\star H)=(F\star G)\star H.$ (10) From equation (10) it is clear that one can express a product of arbitrarily many operators in terms of the Wigner-Moyal functions.
### 2.2 Moyal star-product for canonical dynamical variables The Moyal star-product was first introduced for canonical phase-space variables [20] $\mathbf{z}=(\mathbf{r},\mathbf{p})$ with $n=2m$, where $\mathbf{r}=(r_{1},...,r_{m})$ is, for instance, the position and $\mathbf{p}=(p_{1},...,p_{m})$ its canonically conjugate momentum. The Moyal star-product $\star$ for canonical dynamical variables [20, 22, 23, 24] reads $\star=\exp\left[\frac{\mathrm{i}\hbar}{2}\left(\overleftarrow{\frac{\partial}{\partial\mathbf{r}}}\cdot\overrightarrow{\frac{\partial}{\partial\mathbf{p}}}-\overleftarrow{\frac{\partial}{\partial\mathbf{p}}}\cdot\overrightarrow{\frac{\partial}{\partial\mathbf{r}}}\right)\right],$ (11) where $\overleftarrow{\cdot}$ and $\overrightarrow{\cdot}$ indicate that the partial derivative acts on the left-hand and the right-hand side of the star-product, respectively. In addition to being associative, the Moyal star-product for canonical variables given by equation (11) is commutative to 0th order in $\hbar$, i.e., $F\star G=FG+{\cal O}(\hbar)=GF+{\cal O}(\hbar)$, and its identity element is unity, i.e., $F\star 1=1\star F=F$. Up to 2nd order in $\hbar$, the Moyal bracket corresponds to the canonical Poisson bracket, i.e., $\llbracket F,G\rrbracket=\\{F,G\\}+{\cal O}(\hbar^{2})$, where $\\{F,G\\}=\frac{\partial F}{\partial\mathbf{r}}\cdot\frac{\partial G}{\partial\mathbf{p}}-\frac{\partial F}{\partial\mathbf{p}}\cdot\frac{\partial G}{\partial\mathbf{r}}.$ (12) ### 2.3 Moyal star-product for non-canonical dynamical variables As before, we consider a finite-dimensional Hamiltonian system whose (phase-space) variables are denoted as $\mathbf{z}=(z_{1},...,z_{n})$. It is given by a Hamiltonian $H(\mathbf{z})$ and a general Poisson bracket [25] $\\{F,G\\}=\frac{\partial F}{\partial\mathbf{z}}\cdot\mathbb{J}(\mathbf{z})\frac{\partial G}{\partial\mathbf{z}},$ (13) where $\mathbb{J}(\mathbf{z})$ is the Poisson matrix.
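The canonical star-product (11) can be implemented symbolically by truncating the exponential; the following sympy sketch (our own, for a single degree of freedom) checks the basic properties quoted above: unity as the identity, the canonical commutation relation $x\star p-p\star x=\mathrm{i}\hbar$, and the agreement of the Moyal bracket with the Poisson bracket (12) up to ${\cal O}(\hbar^{2})$.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def d(E, kx, kp):
    # mixed partial derivative d^kx/dx^kx d^kp/dp^kp
    if kx:
        E = sp.diff(E, x, kx)
    if kp:
        E = sp.diff(E, p, kp)
    return E

def star(F, G, order=2):
    # canonical Moyal star-product, equation (11), truncated at `order` in hbar;
    # the n-th term applies the bidifferential operator n times to F (x) G
    total = sp.Integer(0)
    for n in range(order + 1):
        coeff = (sp.I * hbar / 2)**n / sp.factorial(n)
        term = sum(sp.binomial(n, k) * (-1)**(n - k) * d(F, k, n - k) * d(G, n - k, k)
                   for k in range(n + 1))
        total += coeff * term
    return sp.expand(total)

F, G = x**2 * p, x * p**2

# unity is the identity element of the star-product
assert sp.simplify(star(F, sp.Integer(1)) - F) == 0

# the canonical commutation relation: x * p - p * x = i hbar
assert sp.simplify(star(x, p) - star(p, x) - sp.I * hbar) == 0

# Moyal bracket = Poisson bracket + O(hbar^2)
moyal = sp.expand((star(F, G) - star(G, F)) / (sp.I * hbar))
poisson = sp.diff(F, x) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, x)
print(sp.simplify(moyal - poisson))  # 0
```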
The Poisson bracket (13) is antisymmetric, so that $\mathbb{J}^{\rm T}{=}{-}\mathbb{J}$, and satisfies the Jacobi identity, cf. equation (9c). If the Poisson matrix is symplectic, it reads $\mathbb{J}(\mathbf{z})=[\mathbb{0},\mathbb{I};-\mathbb{I},\mathbb{0}]$ and the Poisson bracket is called canonical; otherwise it is non-canonical. For non-canonical dynamical variables, the Moyal star-product up to 2nd order in $\hbar$ is given by [26, 27, 28] $\star=\exp\left[\frac{\mathrm{i}\hbar}{2}\overleftarrow{\frac{\partial}{\partial\mathbf{z}}}\cdot\mathbb{J}(\mathbf{z})\overrightarrow{\frac{\partial}{\partial\mathbf{z}}}\right]+\hbar^{2}\tilde{\star}+{\cal O}(\hbar^{3}),$ (14) where $\tilde{\star}$ is a higher-order star-product, given explicitly in [26, 27, 28]. It arises because the derivatives with respect to the dynamical variables and the Poisson matrix $\mathbb{J}(\mathbf{z})$ in the exponential of equation (14) do not commute. It is symmetric, i.e., $F\tilde{\star}G=G\tilde{\star}F$, and vanishes for constant Poisson matrices, since it depends only on the derivatives of the Poisson matrix but not on the matrix itself. This is in particular true for the symplectic matrix (corresponding to the canonical Poisson bracket), for which $\tilde{\star}$ and higher-order corrections vanish, such that equation (14) reduces to equation (11). To the best of our knowledge, there is no explicit formula for the star-product for non-canonical variables at all orders in $\hbar$.
There is, however, a recurrence method [28] to determine the correcting terms of the non-canonical Moyal star-product at higher orders in $\hbar$, which ensures that the Moyal star-product possesses the right properties (in particular associativity) to a given order in $\hbar$, $(F\star G)\star H=F\star(G\star H)+{\cal O}(\hbar^{3})$ [26, 27, 28]: At 0th order in $\hbar$ this results from the associativity of ordinary multiplication, at 1st order from the Leibniz rule of the non-canonical Poisson bracket (13), and at 2nd order from the Jacobi identity satisfied by the non-canonical Poisson bracket (13). For our derivation of the semi-classical Lindblad equation, we use the associativity of the star-product up to 2nd order in $\hbar$. Since the Moyal star-product can easily be expanded in powers of $\hbar$, it is particularly well suited for the derivation of semi-classical equations from quantum master equations. Up to 2nd order in $\hbar$, the Moyal bracket reads $\llbracket F,G\rrbracket=\\{F,G\\}+{\cal O}(\hbar^{2})\,.$ (15) The 1st-order term in $\hbar$ vanishes in equation (15) due to the antisymmetry of the Poisson matrix. ### 2.4 The Lindblad master equations in the Wigner-Moyal formalism and their semi-classical limit With equation (7), the commutator (8) and equation (10), we can formulate the Lindblad master equation (1) for the phase space distribution $\rho(\mathbf{z},t)$ in terms of the Moyal bracket and the Moyal star-product as $\frac{\partial\rho}{\partial t}=\llbracket H,\rho\rrbracket+\mathrm{i}\sum_{k}\bigg{(}\llbracket L_{k}\star\rho,L^{\ast}_{k}\rrbracket+\llbracket L_{k},\rho\star L^{\ast}_{k}\rrbracket\bigg{)},$ (16) where $H(\mathbf{z},t)$ is the Hamiltonian and $L_{k}(\mathbf{z},t)$ are the Lindblad functions. The scalar functions $L^{\ast}_{k}(\mathbf{z},t)$ are the complex conjugates of $L_{k}(\mathbf{z},t)$.
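The associativity of the star-product up to ${\cal O}(\hbar^{3})$, on which the derivation relies, can also be checked symbolically for canonical variables. The following sketch (our own; it uses an order-2 truncation of the canonical star-product (11) and three arbitrary polynomial test functions) verifies that $(F\star G)\star H-F\star(G\star H)$ contains no terms below 3rd order in $\hbar$.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def d(E, kx, kp):
    if kx:
        E = sp.diff(E, x, kx)
    if kp:
        E = sp.diff(E, p, kp)
    return E

def star(F, G, order=2):
    # canonical Moyal star-product (11), truncated at `order` in hbar
    total = sp.Integer(0)
    for n in range(order + 1):
        coeff = (sp.I * hbar / 2)**n / sp.factorial(n)
        total += coeff * sum(sp.binomial(n, k) * (-1)**(n - k) * d(F, k, n - k) * d(G, n - k, k)
                             for k in range(n + 1))
    return sp.expand(total)

F, G, H = x**2 * p, x * p**2, x + p**3

# (F * G) * H - F * (G * H) must vanish up to (and including) order hbar^2
diff_assoc = sp.expand(star(star(F, G), H) - star(F, star(G, H)))
low_orders = [sp.simplify(diff_assoc.coeff(hbar, n)) for n in range(3)]
print(low_orders)  # [0, 0, 0]
```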
We note that the Hamiltonian and the Lindblad functions may depend explicitly on time. Equation (16) is the Lindblad equation in the Wigner-Moyal formalism for both canonical and non-canonical phase-space variables. Note that equation (16) differs from a semi-classical Lindblad equation put forward by Bondar et al. [29]. In our case the triple products, for instance $\hat{L}_{k}\hat{\rho}\hat{L}_{k}^{\dagger}$ in equation (1), become $(L_{k}\star\rho)\star L_{k}^{\ast}$ or equivalently $L_{k}\star(\rho\star L_{k}^{\ast})$ due to the associativity of the star-product, instead of $L_{k}\star\rho\star L_{k}^{\ast}$ [29]. Equation (16) is in general an infinite-order partial differential equation, and can be written as a series in $\hbar$. In order to obtain the semi-classical limit of the Lindblad quantum master equation (1), we expand the right-hand side of equation (16) up to 1st order in $\hbar$. We obtain $\displaystyle\frac{\partial\rho}{\partial t}$ $\displaystyle=$ $\displaystyle\left\\{H,\rho\right\\}+\mathrm{i}\sum_{k}\bigg{(}\left\\{L_{k}\rho,L_{k}^{\ast}\right\\}+\left\\{L_{k},\rho L_{k}^{\ast}\right\\}\bigg{)}$ (17) $\displaystyle-\dfrac{\hbar}{2}\sum_{k}\bigg{(}\left\\{\left\\{L_{k},\rho\right\\},L_{k}^{\ast}\right\\}+\left\\{L_{k},\left\\{\rho,L^{\ast}_{k}\right\\}\right\\}\bigg{)}+{\cal O}(\hbar^{2}).$ Equation (17) has the same form as the equation obtained for canonical Poisson brackets [19], but with the important difference that the canonical Poisson brackets have been replaced by the non-canonical Poisson brackets (13). Two main parts in equation (16) govern the dynamics: a Hamiltonian part [the 1st term on the right-hand side of equation (17)], corresponding to the Liouville equation if $L_{k}=0$ for all $k$, and a part coming from the dissipative term in equation (1). The CCL functions corresponding to the CCL operators (4) fulfill $L_{k}^{\ast}=c_{k}L_{k}$.
If $L_{k}^{\ast}=c_{k}L_{k}$ for all $k$, the 0th order in $\hbar$ on the right-hand side of equation (17) vanishes due to the antisymmetry of the Poisson bracket. In addition, if we substitute the Moyal bracket into equation (5) and use equation (15), it is easy to check that the resulting expression is the same as the one obtained in equation (17) for $L_{k}^{\ast}=c_{k}L_{k}$. ### 2.5 Canonical and gyro-Poisson brackets: the Fokker-Planck equation In this section, we consider Poisson matrices such that $\frac{\partial}{\partial\mathbf{z}}\cdot\mathbb{J}(\mathbf{z})=\mathbf{0},$ (18) i.e., $\sum_{i}\partial J_{ij}(\mathbf{z})/\partial z_{i}=0$ for all $j$. This condition is fulfilled, for instance, for canonical Poisson brackets, Poisson brackets which do not depend on the phase-space variables, gyro-Poisson brackets [21] as used in equation (23) below, and Poisson brackets in Nambu systems [30]. These brackets can be used for spin systems or particles driven by electromagnetic fields. Using condition (18) and the antisymmetry of the Poisson matrix, we rewrite equation (17) as $\frac{\partial}{\partial t}\rho(\mathbf{z},t)+\frac{\partial}{\partial z_{i}}\big{[}u_{i}(\mathbf{z},t)\rho(\mathbf{z},t)\big{]}-\frac{1}{2}\frac{\partial^{2}}{\partial z_{i}\partial z_{j}}\big{[}D_{ij}(\mathbf{z},t)\rho(\mathbf{z},t)\big{]}=0,$ (19a) where the Einstein summation convention has been used. Equation (19a) corresponds to the _Fokker-Planck equation_ [31, 1]. Therefore, just as for canonical variables [19], the semi-classical Lindblad equation for non-canonical variables is governed by a Fokker-Planck equation, provided equation (18) holds.
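For the gyro-Poisson bracket (23) used below, the Poisson matrix is $J_{ij}=\epsilon_{ijk}S_{k}$, and condition (18) can be verified directly. The following sympy sketch (our own) checks the antisymmetry of $\mathbb{J}$, the divergence condition (18), and that the resulting bracket reproduces $\\{S_{x},S_{y}\\}=S_{z}$ and annihilates the Casimir invariant $|\mathbf{S}|^{2}$.

```python
import sympy as sp

Sx, Sy, Sz = sp.symbols('S_x S_y S_z', real=True)
S = [Sx, Sy, Sz]

# gyro-Poisson matrix J_ij = epsilon_ijk S_k of the spin bracket (23)
J = sp.Matrix(3, 3, lambda i, j: sum(sp.LeviCivita(i, j, k) * S[k] for k in range(3)))

assert J.T == -J  # antisymmetry of the Poisson matrix

# divergence-free condition (18): sum_i dJ_ij/dz_i = 0 for every j
divergence = [sp.simplify(sum(sp.diff(J[i, j], S[i]) for i in range(3))) for j in range(3)]
print(divergence)  # [0, 0, 0]

def bracket(F, G):
    # {F, G} = (dF/dz) . J (dG/dz), equation (13)
    gF = sp.Matrix([sp.diff(F, s) for s in S])
    gG = sp.Matrix([sp.diff(G, s) for s in S])
    return sp.expand((gF.T * J * gG)[0])

assert bracket(Sx, Sy) == Sz  # {S_x, S_y} = S_z
assert sp.simplify(bracket(Sx**2 + Sy**2 + Sz**2, Sx * Sy * Sz)) == 0  # Casimir |S|^2
```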
The _drift vector_ $\mathbf{u}(\mathbf{z},t)=(u_{1},...,u_{n})$ and the _diffusion matrix_ $\mathbb{D}(\mathbf{z},t)=[D_{ij}]$ read, as functions of the Hamiltonian and the Lindblad functions, $\displaystyle u_{i}(\mathbf{z},t)$ $\displaystyle=$ $\displaystyle\left\\{z_{i},H\right\\}-\mathrm{i}\sum_{k}\bigg{(}L_{k}\left\\{z_{i},L_{k}^{\ast}\right\\}+L_{k}^{\ast}\left\\{L_{k},z_{i}\right\\}\bigg{)}$ (19b) $\displaystyle-\dfrac{\hbar}{2}\sum_{k}\bigg{(}\left\\{\left\\{L_{k},z_{i}\right\\},L_{k}^{\ast}\right\\}+\left\\{\left\\{L_{k}^{\ast},z_{i}\right\\},L_{k}\right\\}\bigg{)}+{\cal O}(\hbar^{2}),$ $\displaystyle D_{ij}(\mathbf{z},t)$ $\displaystyle=$ $\displaystyle\hbar\sum_{k}\bigg{(}\left\\{z_{i},L_{k}\right\\}\left\\{z_{j},L_{k}^{\ast}\right\\}+\left\\{z_{i},L_{k}^{\ast}\right\\}\left\\{z_{j},L_{k}\right\\}\bigg{)}+{\cal O}(\hbar^{2}).$ (19c) Note that the (non-Hermitian) Lindblad functions $L_{k}$ contribute to the drift vector also at 1st order in $\hbar$. We have verified that this is also true for canonical Poisson brackets, as could have been anticipated since equation (17) has the same form as the equation obtained for canonical Poisson brackets [19]. This 1st-order-$\hbar$ contribution to $u_{i}$, however, is missing in [19], most likely due to an omission in the quite tedious calculation. The diffusion matrix (19c) is real and positive semi-definite. The Fokker-Planck equation governs the dynamics of classical stochastic Markovian systems [1]. The results (19a–c) we obtain from the quantum master equation are therefore consistent with classical stochastic theory. The 2nd term in the Fokker-Planck equation (19a) describes a deterministic drift, and the 3rd term describes the diffusion of the stochastic variables, governed by the diffusion matrix $\mathbb{D}$.
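The diffusion formula (19c) is easy to evaluate for a concrete model. As an illustration (our own; the damped-oscillator Lindblad function $L=\sqrt{\gamma}\,(x+\mathrm{i}p)$ is a standard textbook-style choice, not taken from the text above), the following sympy sketch computes the diffusion matrix (19c) at leading order for canonical variables $(x,p)$ and confirms that it is real and positive semi-definite; here it comes out as $\mathbb{D}=2\hbar\gamma\,\mathbb{I}$.

```python
import sympy as sp

x, p, hbar, gamma = sp.symbols('x p hbar gamma', positive=True)
z = [x, p]

def pb(F, G):
    # canonical Poisson bracket, equation (12), for one degree of freedom
    return sp.diff(F, x) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, x)

L = sp.sqrt(gamma) * (x + sp.I * p)  # hypothetical damping Lindblad function
Lc = sp.conjugate(L)

# diffusion matrix, equation (19c), for a single Lindblad function
D = sp.Matrix(2, 2, lambda i, j: sp.simplify(
    hbar * (pb(z[i], L) * pb(z[j], Lc) + pb(z[i], Lc) * pb(z[j], L))))

print(D)  # diagonal matrix, 2*hbar*gamma times the identity
```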
The diffusion matrix $\mathbb{D}(\mathbf{z})$ is of order $\hbar$, and as a consequence the diffusion term in equation (19a) corresponds to _quantum fluctuations_ or _quantum noise_. The time evolution of the mean value of an observable $F(\mathbf{z})$, given by equation (6), reads $\frac{\mathrm{d}}{\mathrm{d}t}\left\langle F(\mathbf{z})\right\rangle=\left\langle\frac{\partial F}{\partial z_{i}}u_{i}(\mathbf{z},t)+\frac{1}{2}\frac{\partial^{2}F}{\partial z_{i}\partial z_{j}}D_{ij}\right\rangle.$ (20) In particular, the average of the dynamical variables is given by $\mathrm{d}\langle\mathbf{z}\rangle/\mathrm{d}t=\langle\mathbf{u}(\mathbf{z})\rangle$. In the classical limit ($\hbar=0$), the diffusion matrix vanishes. In this limit, one gets $\frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t}=\left\\{\mathbf{z},H\right\\}-\mathrm{i}\sum_{k}\bigg{(}L_{k}\left\\{\mathbf{z},L_{k}^{\ast}\right\\}-L_{k}^{\ast}\left\\{\mathbf{z},L_{k}\right\\}\bigg{)}.$ (21) Hence, equation (17) in the classical limit corresponds to a global description of the dynamics, while equation (21) corresponds to a local description of the dynamics. Both are equivalent in the sense that they carry the same amount of information. The classical dissipation is given by $\dfrac{\partial}{\partial\mathbf{z}}\cdot\mathbf{u}(\mathbf{z},t)=2\mathrm{i}\sum_{k}\\{L_{k}^{\ast},L_{k}\\}.$ (22) For CCL operators (4) and the corresponding CCL functions, the 2nd term on the right-hand side of equation (21) vanishes and equation (21) reduces to $\mathrm{d}\mathbf{z}/\mathrm{d}t=\\{\mathbf{z},H\\}$; equation (17) becomes Liouville’s equation. The system is _Hamiltonian_ in the classical limit, without dissipation. ## 3 Examples In the following, we consider two different examples of spin systems. In both cases, the dynamics of many spins is described by a collective spin variable $\mathbf{S}$ [3, 4].
The Poisson bracket is given by $\left\\{F,G\right\\}=\mathbf{S}\cdot\left(\frac{\partial F}{\partial\mathbf{S}}\times\frac{\partial G}{\partial\mathbf{S}}\right),$ (23) whose _Casimir invariants_ are functions of the collective spin norm $|\mathbf{S}|$. In the first example, we consider the Lindblad quantum master equation for a model of superradiance and spin squeezing. We derive the equations for the expectation values of the spin dynamics quantum mechanically. Then, we derive the equations for the expectation values of the spin dynamics from our semi-classical equations [see equation (20)], and we show that both sets of equations are of the same form. In the second example, we consider a general form of the Hamiltonian and the Lindblad functions. We show that the mean-field equations for the semi-classical spin dynamics are the Bloch equations with relaxation terms. ### 3.1 Superradiance and spin squeezing Superradiance and spin squeezing can be obtained from a system of $N$ spins governed by the master equation for the reduced density matrix $\hat{\rho}$ [4] $\frac{\partial\hat{\rho}}{\partial t}=-\frac{\mathrm{i}}{\hbar}\left[\hat{H},\hat{\rho}\right]+\dfrac{1}{\hbar}\left(\left[\hat{L},\hat{\rho}\hat{L}^{\dagger}\right]+\left[\hat{L}\hat{\rho},\hat{L}^{\dagger}\right]\right),$ (24a) where the Hamiltonian and the Lindblad operator are $\displaystyle\hat{H}=\Omega\hat{S}_{x},$ (24b) $\displaystyle\hat{L}=\sqrt{\dfrac{\Gamma}{2J}}\left[\hat{S}_{x}\left(\cos\theta+\sin\theta\right)-\mathrm{i}\hat{S}_{y}\left(\cos\theta-\sin\theta\right)\right],$ (24c) respectively. The parameters $\Omega$ and $\Gamma$ are the driving amplitude and the quantum-jump rate, respectively. The operator $\hat{\mathbf{S}}=(\hat{S}_{x},\hat{S}_{y},\hat{S}_{z})$ is the collective spin operator obeying the commutation relations $[\hat{S}_{i},\hat{S}_{j}]=\mathrm{i}\hbar\epsilon_{ijk}\hat{S}_{k}$. The total angular momentum is $J=N/2$.
#### 3.1.1 Mean-field equations computed quantum mechanically From the equations of motion of the expectation values of quantum observables, equation (2), we obtain for the quantum Lindblad master equation (24a) $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle\hat{S}_{x}\rangle=\Omega_{c}\langle\hat{S}_{x}\hat{S}_{z}\rangle-\dfrac{\Gamma\hbar}{2J}\langle\hat{S}_{x}\rangle\left[1-\sin(2\theta)\right],$ (25a) $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle\hat{S}_{y}\rangle=-\Omega\langle\hat{S}_{z}\rangle+\Omega_{c}\langle\hat{S}_{y}\hat{S}_{z}\rangle-\dfrac{\Gamma\hbar}{2J}\langle\hat{S}_{y}\rangle\left[1+\sin(2\theta)\right],$ (25b) $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle\hat{S}_{z}\rangle=\Omega\langle\hat{S}_{y}\rangle-\Omega_{c}\left(\langle\hat{S}_{x}^{2}\rangle+\langle\hat{S}_{y}^{2}\rangle\right)-\dfrac{\Gamma\hbar}{J}\langle\hat{S}_{z}\rangle,$ (25c) where $\Omega_{c}=\Gamma\cos(2\theta)/J$. In the mean-field approximation [4] one assumes $\langle\hat{S}_{i}\hat{S}_{j}\rangle\approx\langle\hat{S}_{i}\rangle\langle\hat{S}_{j}\rangle$.
As a result, we obtain a set of dynamical equations for the expectation values of the spin operators $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle\hat{S}_{x}\rangle=\Omega_{c}\langle\hat{S}_{x}\rangle\langle\hat{S}_{z}\rangle-\dfrac{\Gamma\hbar}{2J}\langle\hat{S}_{x}\rangle\left[1-\sin(2\theta)\right],$ (26a) $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle\hat{S}_{y}\rangle=-\Omega\langle\hat{S}_{z}\rangle+\Omega_{c}\langle\hat{S}_{y}\rangle\langle\hat{S}_{z}\rangle-\dfrac{\Gamma\hbar}{2J}\langle\hat{S}_{y}\rangle\left[1+\sin(2\theta)\right],$ (26b) $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle\hat{S}_{z}\rangle=\Omega\langle\hat{S}_{y}\rangle-\Omega_{c}\left(\langle\hat{S}_{x}\rangle^{2}+\langle\hat{S}_{y}\rangle^{2}\right)-\dfrac{\Gamma\hbar}{J}\langle\hat{S}_{z}\rangle.$ (26c) #### 3.1.2 Expectation values from the semi-classical Lindblad equation In the semi-classical approach, the operators become scalar functions of dynamical phase-space variables. The Hamiltonian operator $\hat{H}$ becomes $H(\mathbf{S})$, and the Lindblad operator $\hat{L}$ becomes $L(\mathbf{S})$. In this case, they read $\displaystyle H(\mathbf{S})=\Omega S_{x},$ (27a) $\displaystyle L(\mathbf{S})=\sqrt{\dfrac{\Gamma}{2J}}\left[S_{x}\left(\cos\theta+\sin\theta\right)-\mathrm{i}S_{y}\left(\cos\theta-\sin\theta\right)\right].$ (27b) Into equation (20), we substitute the gyro-Poisson bracket (23) and set $F=\mathbf{S}$.
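The mean-field system (26a–c) is a closed set of ODEs and can be integrated directly. A minimal Runge-Kutta sketch in Python with $\hbar=1$ and illustrative parameter values of our own choosing; for $\Gamma=0$ the motion reduces to precession about the $x$ axis and the spin norm is conserved, while for $\Gamma>0$ the norm decays:

```python
import math

def mf_rhs(S, Omega, Gamma, theta, J, hbar=1.0):
    # right-hand side of the mean-field equations (26a-c)
    Sx, Sy, Sz = S
    Oc = Gamma * math.cos(2 * theta) / J            # Omega_c
    g = Gamma * hbar / (2 * J)
    return (Oc * Sx * Sz - g * Sx * (1 - math.sin(2 * theta)),
            -Omega * Sz + Oc * Sy * Sz - g * Sy * (1 + math.sin(2 * theta)),
            Omega * Sy - Oc * (Sx ** 2 + Sy ** 2) - 2 * g * Sz)

def rk4_step(S, dt, *args):
    # one fourth-order Runge-Kutta step
    shift = lambda S0, k, c: tuple(s + c * v for s, v in zip(S0, k))
    k1 = mf_rhs(S, *args)
    k2 = mf_rhs(shift(S, k1, dt / 2), *args)
    k3 = mf_rhs(shift(S, k2, dt / 2), *args)
    k4 = mf_rhs(shift(S, k3, dt), *args)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(S, k1, k2, k3, k4))

# Gamma = 0: pure precession about x, |S| is conserved
S = (0.0, 3.0, 4.0)
for _ in range(2000):
    S = rk4_step(S, 0.01, 1.0, 0.0, 0.3, 5.0)       # Omega, Gamma, theta, J
assert abs(math.sqrt(sum(s * s for s in S)) - 5.0) < 1e-6
```

With $\Gamma>0$ one finds $\mathrm{d}|\mathbf{S}|^{2}/\mathrm{d}t\leq 0$, consistent with the dissipative terms of (26a–c).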
We obtain $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle S_{x}\rangle=\Omega_{c}\langle S_{x}S_{z}\rangle-\dfrac{\Gamma\hbar}{2J}\langle S_{x}\rangle\left[1-\sin(2\theta)\right],$ (28a) $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle S_{y}\rangle=-\Omega\langle S_{z}\rangle+\Omega_{c}\langle S_{y}S_{z}\rangle-\dfrac{\Gamma\hbar}{2J}\langle S_{y}\rangle\left[1+\sin(2\theta)\right],$ (28b) $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle S_{z}\rangle=\Omega\langle S_{y}\rangle-\Omega_{c}\left(\langle S_{x}^{2}\rangle+\langle S_{y}^{2}\rangle\right)-\dfrac{\Gamma\hbar}{J}\langle S_{z}\rangle,$ (28c) which have the same form as equations (25a–c) derived from the quantum-mechanical approach. Beyond the mean-field equations, our semi-classical approach takes into account the semi-classical spin correlations $\langle S_{i}S_{j}\rangle$. Yet the long-time evolution of the expectation values according to equations (25a–c) and equations (28a–c) differs due to these correlation terms, which involve operators and functions, respectively. However, in the mean-field approximation, these differences in the correlations are suppressed, and both the quantum and the semi-classical dynamics are governed by the same mean-field equations. ### 3.2 Collective spin systems and the Bloch equations of nuclear magnetization We consider a general form of the Hamiltonian and Lindblad functions, and a collective spin variable $\mathbf{S}$ with the Poisson bracket (23). The expectation value of the collective spin variable is given by equation (20).
In this case, with $\mathrm{d}\langle\mathbf{S}\rangle/\mathrm{d}t=\langle\mathbf{u}(\mathbf{S})\rangle$, it reads $\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\langle\mathbf{S}\rangle$ $\displaystyle=$ $\displaystyle\bigg{\langle}\mathbf{S}\times\left[-\frac{\partial H}{\partial\mathbf{S}}+\mathrm{i}\sum_{k}\left(L_{k}\frac{\partial L^{\ast}_{k}}{\partial\mathbf{S}}-L^{\ast}_{k}\frac{\partial L_{k}}{\partial\mathbf{S}}\right)\right]$ $\displaystyle+\dfrac{\hbar}{2}\sum_{k}\left(\left\\{L_{k},\mathbf{S}\times\frac{\partial L^{\ast}_{k}}{\partial\mathbf{S}}\right\\}+\left\\{L_{k}^{\ast},\mathbf{S}\times\frac{\partial L_{k}}{\partial\mathbf{S}}\right\\}\right)\bigg{\rangle}+{\cal O}(\hbar^{2}).$ In a mean-field scenario, where the correlations almost vanish, i.e., $\langle f(\mathbf{S})\rangle\approx f(\langle\mathbf{S}\rangle)$ for any function $f$, the expectation value of the collective spin variable is given by a Bloch equation with relaxation terms [32] $\frac{\mathrm{d}}{\mathrm{d}t}\langle\mathbf{S}\rangle=\langle\mathbf{S}\rangle\times\mathbf{B}(\langle\mathbf{S}\rangle)-\mathbf{R}(\langle\mathbf{S}\rangle),$ (29a) where $\mathbf{B}$ is the Bloch vector and $\mathbf{R}$ is the relaxation term.
They are related to the Hamiltonian and the Lindblad functions through $\displaystyle\mathbf{B}(\mathbf{S})=-\frac{\partial H}{\partial\mathbf{S}}+\mathrm{i}\sum_{k}\left(L_{k}\frac{\partial L^{\ast}_{k}}{\partial\mathbf{S}}-L^{\ast}_{k}\frac{\partial L_{k}}{\partial\mathbf{S}}\right)-\frac{\hbar}{2}\sum_{k}\left[\mathbf{v}(L_{k},L_{k}^{\ast})+\mathbf{v}(L_{k}^{\ast},L_{k})\right],$ (29b) $\displaystyle\mathbf{R}(\mathbf{S})=\dfrac{\hbar}{2}\sum_{k}\left[2\mathbf{S}\left(\dfrac{\partial L_{k}}{\partial\mathbf{S}}\cdot\dfrac{\partial L_{k}^{\ast}}{\partial\mathbf{S}}\right)-\dfrac{\partial L_{k}}{\partial\mathbf{S}}\left(\mathbf{S}\cdot\dfrac{\partial L_{k}^{\ast}}{\partial\mathbf{S}}\right)-\dfrac{\partial L_{k}^{\ast}}{\partial\mathbf{S}}\left(\mathbf{S}\cdot\dfrac{\partial L_{k}}{\partial\mathbf{S}}\right)\right],$ (29c) where $v_{i}(F,G)=\mathbf{S}\cdot\left(\dfrac{\partial^{2}F}{\partial\mathbf{S}\partial S_{i}}\times\dfrac{\partial G}{\partial\mathbf{S}}\right).$ (29d) Both $\mathbf{B}$ and $\mathbf{R}$ can depend on the collective spin variable and on time through the Hamiltonian and Lindblad functions. The relaxation term $\mathbf{R}$ is of order $\hbar$, and therefore relaxation is here a quantum feature. In the classical limit ($\hbar=0$), there is no relaxation, and therefore the norm of the spin variable is conserved, i.e., $\mathrm{d}|\mathbf{S}|/\mathrm{d}t=0$. However, dissipation is still possible: if the Lindblad functions are not CCL functions, there can be attractors in the dynamical system. The relaxation term also vanishes if $\partial L_{k}/\partial\mathbf{S}\propto\mathbf{S}$, implying that each $L_{k}$ is a function of the spin norm $|\mathbf{S}|$. In this case, the $L_{k}$ are Casimir invariants and, as a consequence, the system is Hamiltonian [no dissipation term in equation (17)]. The model of the first example (see section 3.1) can serve as an application of equation (29a).
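The claim that $\mathbf{R}$ vanishes whenever $\partial L_{k}/\partial\mathbf{S}\propto\mathbf{S}$ can be checked directly from equation (29c). A small numerical sketch in Python for a single Lindblad function with $\hbar=1$ (the function name `relaxation` is ours):

```python
import numpy as np

def relaxation(S, gradL, hbar=1.0):
    """Relaxation vector R of eq. (29c) for one Lindblad function,
    given S and the (possibly complex) gradient dL/dS evaluated at S."""
    gLc = np.conj(gradL)
    R = hbar / 2 * (2 * S * np.dot(gradL, gLc)
                    - gradL * np.dot(S, gLc)
                    - gLc * np.dot(S, gradL))
    return R.real   # R is real: the last two terms are complex conjugates

S = np.array([0.3, -1.2, 2.0])

# L = L(|S|): gradient parallel to S (complex prefactor allowed) -> R = 0
assert np.allclose(relaxation(S, (2.0 + 1.0j) * S), 0.0)

# a generic gradient gives a non-vanishing relaxation vector
assert not np.allclose(relaxation(S, np.array([1.0 + 1.0j, 0.0, 0.0])), 0.0)
```

This mirrors the Casimir-invariant argument above: a gradient parallel to $\mathbf{S}$ makes the three terms of (29c) cancel identically.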
We substitute the Hamiltonian (27a) and the Lindblad function (27b) into equations (29b) and (29c) to obtain $\displaystyle\mathbf{B}(\mathbf{S})=-\Omega\mathbf{e}_{x}+\Omega_{c}\mathbf{S}\times\mathbf{e}_{z},$ $\displaystyle\mathbf{R}(\mathbf{S})=\dfrac{\Gamma\hbar}{2J}\left[S_{x}(1-\sin 2\theta)\mathbf{e}_{x}+S_{y}(1+\sin 2\theta)\mathbf{e}_{y}+2S_{z}\mathbf{e}_{z}\right].$ Using these expressions in equation (29a), we obtain the same equations of motion for $\langle\mathbf{S}\rangle$ as equations (26a–c). ## 4 Conclusions To conclude, we have formulated the Lindblad quantum master equation (16) in the Wigner-Moyal formalism and we have derived its semi-classical limit for canonical and non-canonical Poisson brackets. We have shown that for Poisson matrices which fulfill condition (18), the semi-classical limit of the Lindblad quantum master equation is a Fokker-Planck equation [see equations (19a–c)]. Since condition (18) includes the canonical Poisson bracket as a special case, our results agree with those obtained by Strunz et al. [19] in the canonical case. Condition (18) is also satisfied by gyro-Poisson brackets, such as the Poisson bracket (23), which occurs in spin ensembles [7, 4] and for particles in an electromagnetic field [25]. More specifically, we have shown that the semi-classical limit of a spin ensemble whose dynamics is driven by a Lindblad master equation (1) is related to the Bloch equations with relaxation terms (see section 3.2), and we have expressed the Bloch vector and the relaxation vector of the Bloch equations as functions of the Hamiltonian and Lindblad functions in the Wigner-Moyal formalism [equations (29b) and (29c)].
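As a consistency check on this correspondence, one can verify numerically that the Bloch form (29a), with the $\mathbf{B}$ and $\mathbf{R}$ obtained above for the superradiance model, reproduces the mean-field equations (26a–c) term by term. A Python sketch with $\hbar=1$:

```python
import math

def mf_rhs(S, Omega, Gamma, theta, J, hbar=1.0):
    # mean-field equations (26a-c)
    Sx, Sy, Sz = S
    Oc = Gamma * math.cos(2 * theta) / J
    g = Gamma * hbar / (2 * J)
    return (Oc * Sx * Sz - g * Sx * (1 - math.sin(2 * theta)),
            -Omega * Sz + Oc * Sy * Sz - g * Sy * (1 + math.sin(2 * theta)),
            Omega * Sy - Oc * (Sx ** 2 + Sy ** 2) - 2 * g * Sz)

def bloch_rhs(S, Omega, Gamma, theta, J, hbar=1.0):
    # dS/dt = S x B(S) - R(S) with B, R of the superradiance model
    Sx, Sy, Sz = S
    Oc = Gamma * math.cos(2 * theta) / J
    g = Gamma * hbar / (2 * J)
    B = (-Omega + Oc * Sy, -Oc * Sx, 0.0)       # -Omega e_x + Oc (S x e_z)
    R = (g * Sx * (1 - math.sin(2 * theta)),
         g * Sy * (1 + math.sin(2 * theta)),
         2 * g * Sz)
    SxB = (Sy * B[2] - Sz * B[1], Sz * B[0] - Sx * B[2], Sx * B[1] - Sy * B[0])
    return tuple(c - r for c, r in zip(SxB, R))

args = (0.7, 0.4, 0.3, 5.0)                     # Omega, Gamma, theta, J
S = (1.1, -0.8, 2.3)
assert all(abs(a - b) < 1e-12 for a, b in zip(mf_rhs(S, *args), bloch_rhs(S, *args)))
```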
To illustrate the relation between the mean-field approximation and our semi-classical approach, we have applied the semi-classical limit of the Lindblad quantum master equation to spin ensembles with a gyro-Poisson bracket (23). While the mean-field equations obtained from the quantum-mechanical approach (26a–c) agree with those obtained from the semi-classical approach, our semi-classical approach without further approximation [see equations (19a–c)] also provides information on the spin correlations and dissipation, even in its classical limit ($\hbar=0$). This suggests that our semi-classical approach provides more information in the classical limit than the traditional classical limit of the quantum formulation (3) via the mean-field approximation [3, 33, 4]. ## Acknowledgments JD thanks Christian Johansen and Alexander Eisfeld for helpful discussions. ## References * [1] Breuer H P and Petruccione F 2002 The theory of open quantum systems (Oxford) * [2] Manzano D 2020 AIP Advances 10 025106 * [3] Bhaseen M J, Mayoh J, Simons B D and Keeling J 2012 Phys. Rev. A 85 013817 * [4] Muñoz C S, Buča B, Tindall J, González-Tudela A, Jaksch D and Porras D 2019 Phys. Rev. A 100 042113 * [5] Kamleitner I and Shnirman A 2011 Phys. Rev. B 84 235140 * [6] Antonenko N V, Ivanova S P, Jolos R V and Scheid W 1994 J. Phys. G: Nuc. Part. Phys. 20 1447 * [7] Buča B, Tindall J and Jaksch D 2019 Nature Comm. 10 1730 * [8] Manzano D and Kyoseva E 2016 Sci. Rep. 6 31161 * [9] Gardiner C W and Zoller P 2000 Quantum noise (Springer, Berlin) * [10] Prosen T 2011 Phys. Rev. Lett. 106 217206 * [11] Olmos B, Lesanovsky I and Garrahan J P 2012 Phys. Rev. Lett. 109 020403 * [12] Lidar D A, Chuang I L and Whaley K B 1998 Phys. Rev. Lett. 81 2594 * [13] Kraus B, Büchler H P, Diehl S, Kantian A, Micheli A and Zoller P 2008 Phys. Rev. A 78 042307 * [14] Habib S, Shizume K and Zurek W H 1998 Phys. Rev. Lett. 80 4361 * [15] Brun T A 2000 Phys. Rev.
A 61 042107 * [16] Plenio M B and Knight P L 1998 Rev. Mod. Phys. 70 101 * [17] Corkum P B 1993 Phys. Rev. Lett. 71 1994 * [18] Rost J M 1994 Phys. Rev. Lett. 72 1998 * [19] Strunz W T and Percival I C 1998 J. Phys. A: Math. Gen. 31 1801 * [20] Moyal J E 1949 Proc. Cambridge Phil. Soc. 45 99 * [21] Ruijgrok T W and der Vlist H V 1980 Phys. A 101 571 * [22] Błaszak M and Domański Z 2012 Ann. Phys. 327 167 * [23] Littlejohn R G 1986 Phys. Rep. 138 193 * [24] Soloviev M A 2014 Theor. Math. Phys. 181 1612 * [25] Cary J R and Littlejohn R G 1983 Ann. Phys. 151 1 * [26] Kontsevich M 2003 Lett. Math. Phys. 66 157 * [27] Behr W and Sykora A 2004 Nucl. Phys. B 698 473 * [28] Kupriyanov V G and Vassilevich D V 2008 Euro. Phys. J. C 58 627 * [29] Bondar D I, Cabrera R, Campos A, Mukamel S and Rabitz H A 2016 J. Phys. Chem. Lett. 7 1632 * [30] Nambu Y 1973 Phys. Rev. D 7 2405 * [31] Risken H 1984 The Fokker-Planck equation (Springer-Verlag Berlin Heidelberg New York Tokyo) * [32] Bloch F 1946 Phys. Rev. 70 460 * [33] Moodie R I, Ballantine K E and Keeling J 2018 Phys. Rev. A 97 033802
∎ 11institutetext: Department of Physics and Astronomy, University of South Carolina, Columbia, SC 29208, USA 22institutetext: Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA 33institutetext: INFN – Laboratori Nazionali di Legnaro, Legnaro (Padova) I-35020, Italy 44institutetext: INFN – Sezione di Bologna, Bologna I-40127, Italy 55institutetext: Dipartimento di Fisica, Sapienza Università di Roma, Roma I-00185, Italy 66institutetext: INFN – Sezione di Roma, Roma I-00185, Italy 77institutetext: INFN – Laboratori Nazionali del Gran Sasso, Assergi (L’Aquila) I-67100, Italy 88institutetext: INFN – Sezione di Milano Bicocca, Milano I-20126, Italy 99institutetext: Dipartimento di Fisica, Università di Milano-Bicocca, Milano I-20126, Italy 1010institutetext: Center for Neutrino Physics, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061, USA 1111institutetext: INFN – Sezione di Genova, Genova I-16146, Italy 1212institutetext: Dipartimento di Fisica, Università di Genova, Genova I-16146, Italy 1313institutetext: Massachusetts Institute of Technology, Cambridge, MA 02139, USA 1414institutetext: Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Institute of Modern Physics, Fudan University, Shanghai 200433, China 1515institutetext: Department of Physics, University of California, Berkeley, CA 94720, USA 1616institutetext: Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA 1717institutetext: INFN – Gran Sasso Science Institute, L’Aquila I-67100, Italy 1818institutetext: Wright Laboratory, Department of Physics, Yale University, New Haven, CT 06520, USA 1919institutetext: INFN – Laboratori Nazionali di Frascati, Frascati (Roma) I-00044, Italy 2020institutetext: Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France 2121institutetext: Physics Department, California Polytechnic State University, San Luis Obispo, CA 93407, USA 2222institutetext: 
INPAC and School of Physics and Astronomy, Shanghai Jiao Tong University; Shanghai Laboratory for Particle Physics and Cosmology, Shanghai 200240, China 2323institutetext: Dipartimento di Fisica e Astronomia, Alma Mater Studiorum – Università di Bologna, Bologna I-40127, Italy 2424institutetext: Service de Physique des Particules, CEA / Saclay, 91191 Gif-sur-Yvette, France 2525institutetext: Lawrence Livermore National Laboratory, Livermore, CA 94550, USA 2626institutetext: Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA 2727institutetext: Dipartimento di Ingegneria Civile e Meccanica, Università degli Studi di Cassino e del Lazio Meridionale, Cassino I-03043, Italy 2828institutetext: Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street Baltimore, MD, 21211 2929institutetext: INFN – Sezione di Padova, Padova I-35131, Italy 3030institutetext: Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA fn1Deceased # Search for Double-Beta Decay of $\mathrm{{}^{130}Te}$ to the $0^{+}$ States of $\mathrm{{}^{130}Xe}$ with CUORE D. Q. AdamsUSC C. AlduinoUSC K. AlfonsoUCLA F. T. Avignone IIIUSC O. AzzoliniINFNLegnaro G. BariINFNBologna F. BelliniRoma,INFNRoma G. BenatoLNGS M. BiassoniINFNMiB A. BrancaMilano,INFNMiB C. BrofferioMilano,INFNMiB C. BucciLNGS J. CamilleriVirginiaTech A. CaminataINFNGenova A. CampaniGenova,INFNGenova L. CanonicaMIT,LNGS X. G. CaoFudan S. CapelliMilano,INFNMiB L. CappelliLNGS,BerkeleyPhys,LBNLNucSci L. CardaniINFNRoma P. CarnitiMilano,INFNMiB N. CasaliINFNRoma E. CeliGSSI,LNGS D. ChiesaMilano,INFNMiB M. ClemenzaMilano,INFNMiB S. CopelloGenova,INFNGenova C. CosmelliRoma,INFNRoma O. CremonesiINFNMiB R. J. CreswickUSC A. D’AddabboGSSI,LNGS I. DafineiINFNRoma C. J. DavisYale S. Dell’OroMilano,INFNMiB S. Di DomizioGenova,INFNGenova V. DompèGSSI,LNGS D. Q. FangFudan G. FantiniRoma,INFNRoma M. FaverzaniMilano,INFNMiB E. FerriMilano,INFNMiB F. 
FerroniGSSI,INFNRoma E. FioriniINFNMiB,Milano M. A. FranceschiINFNFrascati S. J. FreedmanLBNLNucSci,BerkeleyPhys,fn1 S.H. FuFudan B. K. FujikawaLBNLNucSci A. GiacheroMilano,INFNMiB L. GironiMilano,INFNMiB A. GiulianiParis-Saclay P. GorlaLNGS C. GottiINFNMiB T. D. GutierrezCalPoly K. HanSJTU K. M. HeegerYale R. G. HuangBerkeleyPhys H. Z. HuangUCLA J. JohnstonMIT G. KeppelINFNLegnaro Yu. G. KolomenskyBerkeleyPhys,LBNLNucSci C. LigiINFNFrascati L. MaUCLA Y. G. MaFudan L. MariniBerkeleyPhys,LBNLNucSci R. H. MaruyamaYale D. MayerMIT Y. MeiLBNLNucSci N. MoggiBolognaAstro,INFNBologna S. MorgantiINFNRoma T. NapolitanoINFNFrascati M. NastasiMilano,INFNMiB J. NikkelYale C. NonesSaclay E. B. NormanLLNL,BerkeleyNucEng A. NucciottiMilano,INFNMiB I. NutiniMilano,INFNMiB T. O’DonnellVirginiaTech J. L. OuelletMIT S. PaganYale C. E. PagliaroneLNGS,Cassino L. PagnaniniGSSI,LNGS M. PallaviciniGenova,INFNGenova L. PattavinaLNGS M. PavanMilano,INFNMiB G. PessinaINFNMiB V. PettinacciINFNRoma C. PiraINFNLegnaro S. PirroLNGS S. PozziMilano,INFNMiB E. PrevitaliMilano,INFNMiB A. PuiuGSSI,LNGS C. RosenfeldUSC C. RusconiUSC,LNGS M. SakaiBerkeleyPhys S. SangiorgioLLNL B. SchmidtLBNLNucSci N. D. ScielzoLLNL V. SharmaVirginiaTech V. SinghBerkeleyPhys M. SistiINFNMiB D. SpellerJHU P.T. SurukuchiYale L. TaffarelloINFNPadova F. TerranovaMilano,INFNMiB C. TomeiINFNRoma K. J. VetterBerkeleyPhys,LBNLNucSci M. VignatiINFNRoma S. L. WagaarachchiBerkeleyPhys,LBNLNucSci B. S. WangLLNL,BerkeleyNucEng B. WelliverLBNLNucSci J. WilsonUSC K. WilsonUSC L. A. WinslowMIT S. ZimmermannLBNLEngineering S. ZucchelliBolognaAstro,INFNBologna (Received: date / Accepted: date) ###### Abstract The CUORE experiment is a large bolometric array searching for the lepton number violating neutrino-less double beta decay ($0\nu\beta\beta$) in the isotope $\mathrm{{}^{130}Te}$. 
In this work we present the latest results of two searches for the double beta decay (DBD) of $\mathrm{{}^{130}Te}$ to the first $0^{+}_{2}$ excited state of $\mathrm{{}^{130}Xe}$: the $0\nu\beta\beta$ decay and the Standard Model-allowed two-neutrino double beta decay ($2\nu\beta\beta$). Both searches are based on a 372.5 kg$\times$yr TeO2 exposure. The de-excitation gamma rays emitted by the excited Xe nucleus in the final state yield a unique signature, which can be searched for with low background by studying coincident events in two or more bolometers. The closely packed arrangement of the CUORE crystals constitutes a significant advantage in this regard. The median limit-setting sensitivities at 90% Credible Interval (C.I.) of the two searches were estimated as $\mathrm{S^{0\nu}_{1/2}=5.6\times 10^{24}\>\mathrm{yr}}$ for the ${0\nu\beta\beta}$ decay and $\mathrm{S^{2\nu}_{1/2}=2.1\times 10^{24}\>\mathrm{yr}}$ for the ${2\nu\beta\beta}$ decay. No significant evidence for either decay mode was observed, and Bayesian lower bounds at $90\%$ C.I. on the decay half-lives are obtained as $\mathrm{(T_{1/2})^{0\nu}_{0^{+}_{2}}>5.9\times 10^{24}\>\mathrm{yr}}$ for the $0\nu\beta\beta$ mode and $\mathrm{(T_{1/2})^{2\nu}_{0^{+}_{2}}>1.3\times 10^{24}\>\mathrm{yr}}$ for the $2\nu\beta\beta$ mode. These represent the most stringent limits on the DBD of 130Te to excited states and improve on the previous results for this process by a factor of $\sim 5$. ††journal: Eur. Phys. J. C ## 1 Introduction Double beta decay (DBD) is an extremely rare nuclear process in which a simultaneous transmutation of a pair of neutrons into protons converts a nucleus (A, Z) into an isobar (A, Z+2), with the emission of two electrons and two anti-neutrinos. This two-neutrino decay mode ($2\nu\beta\beta$) is predicted by the Standard Model and has been detected in several nuclei.
The neutrinoless mode of the decay ($0\nu\beta\beta$) is a posited Beyond-Standard-Model process that could shed light on many open questions of modern particle physics and cosmology, such as the existence of lepton number violation and of elementary Majorana fermions, the neutrino mass scale, and the baryon asymmetry of the Universe Racah:1937qq ; Furry:1939qr ; Pontecorvo:1968 ; Schechter:1981bd ; Fukugita:1986hr . Both DBD modes can proceed through transitions to the ground state as well as to various excited states of the daughter nucleus. While the former can be easier to detect owing to their shorter half-lives, the latter leave a unique signature which may be detected with significantly reduced backgrounds. The excited-state decays also provide powerful tests of the nuclear physics of DBD and can shed light on nuclear matrix element calculations as well as on the ongoing discussion about the quenching of the effective axial coupling constant $g_{A}$; eventually, they could even be used to disentangle the mechanism of $0\nu\beta\beta$ decay Avignone:2007fu . So far, $2\nu\beta\beta$ decay to the first $0^{+}$ excited state has been observed in only 2 isotopes: ${}^{100}\mathrm{Mo}$ Barabash:1995fn and ${}^{150}\mathrm{Nd}$ Barabash:2004dv , with half-lives of $(\mathrm{T}_{1/2})^{2\nu}_{0^{+}}=6.1^{+1.8}_{-1.1}\times 10^{20}\>\mathrm{yr}$ and $(\mathrm{T}_{1/2})^{2\nu}_{0^{+}}=1.4^{+0.4}_{-0.2}\mathrm{(stat.)}\pm 0.3\mathrm{(syst.)}\times 10^{20}\>\mathrm{yr}$, respectively. Searches for the same process in other isotopes have yielded lower limits from $3.1\times 10^{20}\>\mathrm{yr}$ to $8.3\times 10^{23}\>\mathrm{yr}$ at $90\>\%$ Confidence Level (C.L.) (see Ref. Barabash:2017bgb for a review). In this work, we focus on the search for $0\nu\beta\beta$ and $2\nu\beta\beta$ decays of $\mathrm{{}^{130}Te}$ to the first $0^{+}$ excited state of $\mathrm{{}^{130}Xe}$ with the CUORE experiment.
Presently, the strongest limits on the half-life of the decay of ${}^{130}\mathrm{Te}$ to excited states come from a combination of Cuoricino Andreotti:2011in and CUORE-0 Adams:2018yrj data: the latter (not included in Ref. Barabash:2017bgb ) was published recently and includes the combination with the predecessor's results. The obtained limits are: $\displaystyle(\mathrm{T}_{1/2})_{0^{+}_{2}}^{0\nu}$ $\displaystyle>1.4\times 10^{24}\>\mathrm{yr}\;(90\%\>\mathrm{C.L.})$ (1a) $\displaystyle(\mathrm{T}_{1/2})_{0^{+}_{2}}^{2\nu}$ $\displaystyle>2.5\times 10^{23}\>\mathrm{yr}\;(90\%\>\mathrm{C.L.})$ (1b) for the $0\nu$ and $2\nu$ processes, respectively. Theoretical predictions PhysRevC.91.054309 for the $\mathrm{(T_{1/2})}^{2\nu}_{0^{+}_{2}}$ observable in the $2\nu\beta\beta$ decay channel are based on the QRPA approach and favor the following range: ${}^{th}(\mathrm{T}_{1/2})^{2\nu}_{0^{+}_{2}}=(7.2-16)\times 10^{24}\>\mathrm{yr}$ (2) where the range depends on the precise treatment of $g_{A}$: the lower bound assumes $g_{A}$ constant as a function of the mass number A, and the upper bound assumes a value of $g_{A}=0.6$ PhysRevC.91.054309 ; SUHONEN2013153 ; SUHONEN20141 . A data-driven estimate of the $2\nu\beta\beta$ ground-state to excited-state decay rate in the IBM-II framework, based on Refs. Barea:2013bz ; Kotila:2012zza ; Barabash:2010ie , is reported in Ref. Lehnert2015 as ${}^{th}(\mathrm{T}_{1/2})^{2\nu}_{0^{+}_{2}}=2.2\times 10^{25}\>\mathrm{yr}.$ (3) In this regard, as stated before, either a measurement or a more stringent limit with respect to Ref. Adams:2018yrj would be informative for refining and validating the theoretical computations. The decay to excited states has a unique signature. The double-beta transition emits two electrons, which share up to 734 keV of kinetic energy. The subsequent decay of the excited daughter nucleus typically emits two or three high-energy gamma rays in cascade.
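The energy bookkeeping of this signature is straightforward to verify: the literature Q-value of the $\mathrm{{}^{130}Te}$ double-beta decay to the ground state is $\approx 2527.5$ keV and the $0^{+}_{2}$ excitation energy of $\mathrm{{}^{130}Xe}$ is $\approx 1793.5$ keV, so the betas share $\approx 734$ keV, and each de-excitation cascade (gamma energies as in Table 1 below, rounded) must carry the full excitation energy. A short Python check:

```python
Q_GS = 2527.5    # 130Te double-beta Q-value to the 130Xe ground state [keV]
E_EXC = 1793.5   # excitation energy of the 0+_2 state of 130Xe [keV]

# de-excitation patterns (rounded gamma energies in keV)
patterns = {"A": [1257, 536], "B": [671, 586, 536], "C": [1122, 671]}

beta_endpoint = Q_GS - E_EXC          # kinetic energy shared by the two betas
assert abs(beta_endpoint - 734.0) < 0.5

# every gamma cascade must sum to the full excitation energy
for name, gammas in patterns.items():
    assert abs(sum(gammas) - E_EXC) < 1.0, name
```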
Due to the emission of such coincident de-excitation $\gamma$ rays, both the $0\nu\beta\beta$ and $2\nu\beta\beta$ decay channels allow a significant background reduction with respect to the corresponding transitions to the ground state. This holds especially in an experimental setup that exploits a high detector granularity, such as the CUORE experiment. ## 2 Detector and Data Production The Cryogenic Underground Observatory for Rare Events (CUORE) Arnaboldi:2002du ; Brofferio:2019yoc is a ton-scale cryogenic detector located at the underground Laboratori Nazionali del Gran Sasso (LNGS) in Italy. CUORE is designed to search for the $0\nu\beta\beta$ decay of $\mathrm{{}^{130}Te}$ to the ground state of $\mathrm{{}^{130}Xe}$ Alduino:2017ehq ; Adams:2019jhp , and has a low background rate near the $0\nu\beta\beta$ decay region of interest (ROI), an excellent energy resolution, and a high detection efficiency. The CUORE detector consists of a close-packed array of 988 TeO2 crystals operating as cryogenic bolometers Fiorini:1983yj ; Enss:2008x ; Arnaboldi:2010fj at a temperature of $\sim$10 mK. The CUORE crystals are $5\times 5\times 5$ cm3 cubes weighing 750 g each, arranged in 19 towers, each consisting of a copper structure with 13 floors and 4 crystals per floor. A custom-made 3He/4He dilution refrigerator, which represents the state of the art for this cryogenic technique, is used to cool down the CUORE cryostat, where the entire array is contained and shielded Alduino:2019xia ; Alessandria:2013ufa ; Buccheri:2014bma ; Arnaboldi:2017aek ; DiDomizio:2018ldc ; Benato:2017kdf ; DAddabbo:2017efe . Each CUORE crystal records thermal pulses via a neutron-transmutation-doped (NTD) germanium thermistor Haller:1984ntd glued to its surface. Any energy deposition in the crystal causes a sudden rise in temperature and can indicate a particle emitted inside the crystal, a particle crossing through it, or some environmental thermal instability (e.g. earthquakes).
The data acquisition and production of the CUORE event data used in this work closely follow the procedure used in Alduino:2019xia and are described in detail in Alduino:2016zrl . We briefly review the basic process here and highlight the differences. The NTD converts the thermal signal to a voltage output, which is amplified, filtered through a 6-pole Bessel anti-aliasing filter, and sampled continuously at 1 kHz. The data are stored to disk and triggered offline with an algorithm based on the optimum filter (OF) Gatti:1986cw ; DiDomizio:2010ph ; campani2020lowering . For each triggered pulse, a 10-second window around the trigger (3 seconds before and 7 seconds after) is processed through a series of analysis steps, with the aim of extracting the physical quantities associated with the pulse. The waveform is filtered using an OF built from a pulse template and the measured noise power spectrum. The signal amplitudes are then evaluated from the OF-filtered waveforms, and those amplitudes are corrected for small changes in the detector gain due to temperature drifts. We calibrate each bolometer individually using dedicated calibration runs with $\mathrm{{}^{232}Th}$ and $\mathrm{{}^{60}Co}$ gamma sources deployed around the detector array. These calibration runs typically last a few days every two months. We impose a pulse shape analysis (PSA) selection based on 6 pulse shape parameters. This cut removes noisy events, pile-up events, and non-physical events. Unlike the decay to ground state search described in Ref. Alduino:2019xia , the physics search described in the present work focuses on coincident energy depositions in multiple crystals. In particular, we focus on events where energy is deposited in either two or three bolometers. As the reconstructed time difference between events on nearby bolometers is affected by differences in pulse rise times, a bolometer-by-bolometer correction is applied.
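The core of the amplitude evaluation described above is a frequency-domain matched ("optimum") filter: each Fourier bin is weighted by the conjugate of the pulse template divided by the noise power spectrum. A much-simplified sketch in Python/numpy, with zero time offset and a toy white-noise spectrum (unlike the real pipeline, which also scans the pulse arrival time):

```python
import numpy as np

def of_amplitude(waveform, template, noise_psd):
    """Optimum-filter amplitude estimate: weight bin i by conj(S_i)/N_i,
    normalized so that a waveform equal to the template returns 1."""
    S = np.fft.rfft(template)
    weights = np.conj(S) / noise_psd
    norm = np.sum(np.abs(S) ** 2 / noise_psd)
    return np.real(np.sum(weights * np.fft.rfft(waveform)) / norm)

t = np.arange(1000)
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)   # toy thermal pulse shape
noise_psd = np.ones(len(t) // 2 + 1)                # flat (white) noise
pulse = 3.7 * template
assert abs(of_amplitude(pulse, template, noise_psd) - 3.7) < 1e-9
```

With a non-flat noise spectrum the filter suppresses the noisiest bins, which is the point of the technique.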
Sets of coincident energy releases in $M$ bolometers within a $\pm 5\>$ ms time window are grouped together as multiplets of multiplicity $M$. CUORE started its data taking in May 2017 and, after two significant interruptions for important maintenance of the cryogenic system, is now seeing its exposure grow at an average rate of $\sim$ 50 kg$\times$yr/month. The CUORE data collection is organized in datasets: a dataset begins with a gamma calibration campaign that typically lasts 2-3 days, followed by 6-8 weeks of uninterrupted background data taking, and ends with another gamma calibration. Recently, the CUORE collaboration released the results of the search for $0\nu\beta\beta$ decay to the ground state based on the accumulated exposure of 372.5 kg$\times$yr, setting an improved limit on the half-life of $\mathrm{{}^{130}Te}$ of $(\rm{T_{1/2}})^{0\nu}_{0^{+}_{1}}>3.2\times 10^{25}$ yr Adams:2019jhp . ## 3 Analysis In this section we describe the analysis steps that are specific to the search for the ${}^{130}\mathrm{Te}$ decay to the excited states of ${}^{130}\mathrm{Xe}$. The de-excitation of the ${}^{130}\mathrm{Xe}$ nucleus follows one of three possible patterns, i.e. paths through states of decreasing energy from the $0^{+}_{2}$ state to the $0^{+}_{1}$ ground state (Figure 1). Details about the probability of each de-excitation pattern, referred to in the following as A, B and C (in decreasing order of probability), and the energies of the emitted $\gamma$ rays are reported in Table 1. The simultaneous emission of the DBD betas and of the de-excitation gammas produces coincidence multiplets, i.e. sets of simultaneous pulses in $M$ bolometers, grouped by the coincidence algorithm.
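The grouping into multiplets can be sketched as a simple time-window clustering. A Python toy version (a simplified stand-in for the actual CUORE coincidence algorithm, which also applies the rise-time corrections mentioned earlier):

```python
def build_multiplets(events, window_ms=5.0):
    """Group (time_ms, channel, energy_keV) hits into multiplets:
    consecutive hits closer than window_ms join the same multiplet."""
    multiplets = []
    for ev in sorted(events):
        if multiplets and ev[0] - multiplets[-1][-1][0] <= window_ms:
            multiplets[-1].append(ev)
        else:
            multiplets.append([ev])
    return multiplets

hits = [(0.0, 3, 1257.0), (2.5, 7, 536.0), (400.0, 1, 2615.0)]
ms = build_multiplets(hits)
assert [len(m) for m in ms] == [2, 1]   # one M=2 multiplet, one M=1
```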
We search for events with full containment of the final-state gammas in the crystals: more specifically, we exclude multiplets where one or more of the final-state $\gamma$s escape the source crystal and are absorbed by some non-active part of the experimental apparatus, as well as Compton scattering events, where the energy of a single de-excitation gamma is split among two or more detectors. We place energy selection cuts to find these events, which are listed in Table 3 and described in more detail in Sec. 3.4. Partitions are defined as unique groupings of energy depositions that pass a particular set of energy selection cuts. For a fixed multiplicity $M$ and a source pattern, they are identified by all possible ways of partitioning the final-state particles among $M$ different crystals. Finally, we define signatures as sets of partitions from different patterns that are indistinguishable. Single-site ($M=1$) signatures are not taken into account, as the $0\nu\beta\beta$ decay channel would be indistinguishable from the same decay to the ground state of ${}^{130}\mathrm{Xe}$, while the $2\nu\beta\beta$ decay channel would suffer from a high background from the decay to the ground state. Therefore, there remain $8$ partitions for patterns A and B, and $14$ for pattern C. Each of the partitions is labelled with a string of $3$ characters with the following convention [Multiplicity] [Pattern] [Index] where Multiplicity = 2,3,4 indicates the number of involved crystals, Pattern = A,B,C stores the originating de-excitation pattern, and Index is a unique integer counter to distinguish the various combinations of energy groupings for that pattern and multiplicity. Partitions sharing the same expected energy release are indistinguishable and are merged into signatures. For this reason, instead of handling a total number of $22$ partitions, we are left with $15$ signatures excitedTAUP19 . An example of indistinguishable partitions is given in Table 3 by the 2A0-2B1 signature.
In one of the crystals a gamma energy release of 1257 keV is expected. This can be due either (see Table 1) to $\gamma_{1}$ from pattern A or to the simultaneous absorption of $\gamma_{1}+\gamma_{2}$ from pattern B. Figure 1: The decay scheme of ${}^{130}\mathrm{Te}$ is shown with details about the involved excited states of ${}^{130}\mathrm{Xe}$ up to its first $0^{+}$ excited state. The nomenclature $0^{+}_{1},...,0^{+}_{n}$ indicates states with the same angular momentum in increasing order of excitation energy. An energy scale is shown (right) where the ${}^{130}\mathrm{Xe}$ ground state is taken as reference SINGH200133 . Pattern | BR [%] | Energy $\gamma_{1}$ | Energy $\gamma_{2}$ | Energy $\gamma_{3}$ ---|---|---|---|--- A | $86\%$ | $1257\>\mathrm{keV}$ | $536\>\mathrm{keV}$ | - B | $12\%$ | $671\>\mathrm{keV}$ | $586\>\mathrm{keV}$ | $536\>\mathrm{keV}$ C | $2\%$ | $1122\>\mathrm{keV}$ | $671\>\mathrm{keV}$ | - Table 1: The de-excitation $\gamma$ rays emitted by ${}^{130}\mathrm{Xe}^{*}$ in the transition from the $0^{+}_{2}$ state to the ground state. Each row corresponds to a different path through intermediate states. The energies of the emitted $\gamma$s are listed, in order of energy, along with the branching ratio (BR) of each pattern SINGH200133 . ### 3.1 Monte Carlo Simulations We use Monte Carlo (MC) simulations to compute the detection efficiency (Sec. 3.2) and the expected background (Sec. 3.3) associated with each signature, to rank the experimental signatures, and eventually to fine-tune the selection cuts on the most sensitive ones (Sec. 3.4). CUORE uses a Geant4-based MC to simulate energy depositions in the detector. The Geant4 software G4_AGOSTINELLI2003250 simulates particle interactions in the various volumes and materials of a modeled detector geometry. A separate post-processing step converts the resulting energy depositions into an output as close as possible to that of the data production. We refer to Ref.
Alduino_2017 for further details about the CUORE MC simulations. Signal simulations, i.e. simulations of the double beta decay to excited states and the subsequent de-excitation gammas, are produced separately for each process ($0\nu$, $2\nu$) and pattern (A, B, C). Gamma energies are generated as monochromatic. Compared with isotropic gamma emission, angular correlations induce a negligible effect on the containment efficiency of the experimental signatures listed in Table 3, relative to the dominant systematic uncertainty described in Sec. 5.1. Beta energies are randomly drawn from the beta spectrum of the corresponding decay Haxton:1985am ; Doi:1981mi ; Doi:1981mj ; Kotila:2012zza in the HSD hypothesis. (The $2\nu\beta\beta$ decay is called single-state dominated (SSD) if it is governed by the lowest $1^{+}$ energy level; in the higher-state dominated (HSD) case the calculation is simplified by summing over all the virtual intermediate states and assuming an average closure energy. In the SSD hypothesis the cumulative sum energy distribution of the emitted electrons Kotila:2012zza differs by $<0.3\%$ with respect to HSD. We note, though, that this analysis cannot infer the shape of the beta spectrum, because in $2\nu\beta\beta$ signatures the fit is performed just on the gamma peaks.) Background simulations take as input the CUORE background model CUORE2nuBB , and include contaminations in the crystals and several other parts of the CUORE setup, such as the copper tower structure, the closest copper vessel enclosing the detector, the Roman lead, the internal and external modern lead shields, and the internal lead suspension system. (The CUORE background model we refer to is still preliminary; however, the estimates of the background activities are good enough to understand the expected contribution to the present search. In the final fit the exact values are floated.)
The contaminants include bulk and surface 238U and 232Th chains with different hypotheses on secular equilibrium breaks, bulk 60Co, 40K, and a few other long lived isotopes. Additional sources of background included are the cosmic muon flux and the $2\nu\beta\beta$ decay of 130Te to the ground state. Both signal and background simulated energy spectra are convolved with a Gaussian resolution that has a width of $5~{}\mathrm{keV}$ full width at half maximum, a standard choice for our simulation studies Alduino_2017 . ### 3.2 Efficiency Evaluation The detection efficiency of a given signature consists of two components: the containment efficiency and the analysis efficiency. Given a signature $s$ and a set of energy selection cuts on the involved bolometers, the corresponding containment efficiency $\varepsilon_{s}$ represents the probability that the energy released by a nuclear decay of ${}^{130}\mathrm{Te}$ to the $0^{+}_{2}$ state of ${}^{130}\mathrm{Xe}$ matches the topology of the signature. We evaluate this efficiency component from the signal MC simulations described in Sec. 3.1, by summing over the contributions of all patterns $p$ populating the signature $s$ $\varepsilon_{s}=\Bigg{[}\sum_{p}\mathrm{BR}_{p}\cdot\frac{\big{[}N^{(sel)}_{MC}\big{]}^{(s)}_{p}}{\big{[}N^{(tot)}_{MC}\big{]}_{p}\>\>\>}\Bigg{]}$ (4) where $\mathrm{BR}_{p}$ is the branching ratio of pattern $p$, $\big{[}N^{(sel)}_{MC}\big{]}^{(s)}_{p}$ and $\big{[}N^{(tot)}_{MC}\big{]}_{p}$ are respectively the selected and total number of simulated decays in the de-excitation pattern of interest. For $0\nu\beta\beta$ decay signatures the signal is monochromatic in all the involved crystals, so the signal region is expected to lie around a specific point in the M-dimensional space of coincident energy releases. A selection is enforced, in simulations, with a box cut, i.e. 
a selection interval for energy releases in each crystal, defined as $|E_{i}-Q_{i}|<5\>\mathrm{keV}\quad\mathrm{where}\quad i=1...M$ (5) where $E_{i}$ is the reconstructed energy release in the ordered energy space (the energy releases of each $M$-bolometer multiplet are sorted in descending order, so that $E_{i}>E_{i+1}$) and $Q_{i}$ is the corresponding expected energy release. For $2\nu\beta\beta$ decay signatures the same selections apply, except in the one crystal where the energy release from the $\beta\beta$ is expected. Since the emitted neutrinos carry away an unknown (on an event basis) amount of undetected energy, the expected energy release is not monochromatic. It is instead expected to vary from $Q_{j}^{min}$ to $Q_{j}^{max}$, where $j$ indicates the bolometer in which the $\beta\beta$ electrons release their energy. For that bolometer, in each multiplet the following selection is applied: $Q_{j}^{min}-5\>\mathrm{keV}<E_{j}<Q_{j}^{max}+5\>\mathrm{keV}$ (6) Selection cuts need to be further tuned at a later stage to optimize the sensitivity to signal peaks. We do this by including the widest possible sidebands around each signal peak, in order to best constrain the underlying continuous background. We try to avoid including background peaks in the fit range, in order to minimize systematics due to their modeling. This process yields the selections listed in Table 3 (see Sec. 3.4). We then update our computation of signal efficiencies using Eq. 4, where $N^{sel}_{MC}$ is replaced by the result of a Gaussian fit to the distribution of selected MC signal events. The second efficiency contribution, namely the analysis efficiency, is the combination of the probability of correctly detecting and reconstructing the energy deposited in each bolometer (cut efficiency, $\varepsilon^{cut}$), and the probability of assigning the correct multiplicity and avoiding an accidental coincidence (accidentals efficiency, $\varepsilon^{acc}$).
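The selections of Eqs. 5 and 6 amount to a box cut in the ordered-energy space, with one wide window for the crystal hosting the $\beta\beta$ continuum in the $2\nu\beta\beta$ case. A minimal sketch under assumed inputs (the function and variable names are ours, and the example energies are illustrative):

```python
def passes_box_cut(energies, q_values, half_width=5.0,
                   bb_index=None, q_min=None, q_max=None):
    """Box cut of Eq. 5 on descending-ordered energies E_i; the crystal
    at `bb_index` (2nubb signatures only) instead uses the wide window
    of Eq. 6, from q_min - half_width to q_max + half_width."""
    for i, (e, q) in enumerate(zip(energies, q_values)):
        if bb_index is not None and i == bb_index:
            if not (q_min - half_width < e < q_max + half_width):
                return False
        elif abs(e - q) >= half_width:
            return False
    return True

# 0nubb-like doublet: both crystals must sit within 5 keV of expectation
ok_0nu = passes_box_cut([1268.2, 1255.1], [1270.0, 1257.0])
# 2nubb-like doublet: E1 is the 1257 keV gamma, E2 the bb + gamma(536)
# continuum spanning 536-1270 keV
ok_2nu = passes_box_cut([1257.3, 900.0], [1257.0, None],
                        bb_index=1, q_min=536.0, q_max=1270.0)
```

Both example multiplets pass; an event whose largest energy falls, say, 30 keV below its expected $Q_{i}$ would be rejected.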
The cut efficiency term is named after the data processing cuts needed to select triggered events that pass the base and PSA cuts (see Sec. 2). The method used to calculate this efficiency closely follows the one used in Adams:2019jhp . We measure the efficiency of correctly triggering, reconstructing the pulse energy, and evaluating the pile-up contribution (base cuts) from heater pulses. The base cut efficiency is computed on each bolometer-dataset pair, given the large number of available heater events, and then averaged to obtain a per-dataset value. The PSA cut efficiency is extracted from two independent samples of events: either coincident double-site events where the total energy released is compatible with known prominent $\gamma$ lines, or single-site events due to fully absorbed $\gamma$ lines. The first sample includes events whose energy spans a wide range, and allows the determination of the energy dependence of the PSA cut efficiency. The second sample has higher statistics, but provides a measurement only at the energies of the selected $\gamma$ peaks, rather than on a continuum. For each dataset we evaluate the PSA cut efficiency as the average of the two efficiencies obtained from these samples, and treat their difference as a systematic effect. The $\varepsilon^{cut}$ term must be raised to the $M^{th}$ power because it models bolometer-related efficiencies, and a multiplet is selected if and only if all of the involved bolometers pass the selection cuts. The accidentals efficiency term $\varepsilon^{acc}$ is obtained, separately for each dataset, as the survival probability of the 40K $\gamma$ line at 1460 keV. A fully absorbed 40K line in CUORE is uncorrelated with any other physical event, because it follows an electron capture whose accompanying $\sim 3$ keV atomic de-excitation is below threshold.
Summarizing, the total signal detection efficiency for signature $s$ is $\varepsilon^{tot}=\varepsilon_{s}\times(\varepsilon^{cut})^{M}\times\varepsilon^{acc}.$ (7) Since the cut efficiency and accidentals terms are evaluated separately for each dataset, the total efficiency in Eq. 7 must be thought of as the signal efficiency for signature $s$ in a specific dataset. A summary of the relevant efficiency values is provided in Table 2, where the per-dataset values are exposure-weighted over all datasets. The containment efficiency is the dominant term.

Table 2: We report the efficiency terms that appear in Eq. 7 separately for the $0\nu\beta\beta$ (top) and $2\nu\beta\beta$ (bottom) analyses. The containment term dominates the efficiency. We report the cut efficiency raised to the power $M$ according to the signature it refers to. We quote effective values computed as exposure-weighted means for the cut and accidentals efficiency terms. All values are percentages; the uncertainty on the last digit is given in round brackets.

$0\nu\beta\beta$ | 2A0-2B1 | 2A2-2B3 | 3A0
---|---|---|---
Containment | $4.6(2)$ | $2.9(1)$ | $2.5(1)$
Cut | $78.7(2)$ | $78.7(2)$ | $69.8(3)$
Accidentals | $98.7(1)$ | $98.7(1)$ | $98.7(1)$
Total | $3.5(2)$ | $2.3(1)$ | $1.7(1)$

$2\nu\beta\beta$ | 2A0-2B1 | 2A2-2B3 | 3A0
---|---|---|---
Containment | $4.2(2)$ | $2.4(1)$ | $0.19(1)$
Cut | $78.7(2)$ | $78.7(2)$ | $69.8(3)$
Accidentals | $98.7(1)$ | $98.7(1)$ | $98.7(1)$
Total | $3.2(1)$ | $1.9(1)$ | $0.13(1)$

### 3.3 Background Contributions

Radioactive decays and particle interactions other than the $\mathrm{{}^{130}Te}$ decay to the $\mathrm{{}^{130}Xe}$ excited state may mimic the process we search for. We estimate this background contribution by means of the background MC simulations described in Sec. 3.1.
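The efficiency bookkeeping of Eqs. 4 and 7 can be sketched as follows. The MC event counts below are made-up placeholders, not the actual simulation statistics; only the branching ratios and the Table 2 percentages come from the text:

```python
def containment_efficiency(pattern_mc):
    """Containment efficiency of a signature (Eq. 4): BR-weighted sum of
    the fractions of simulated decays passing the selection cuts."""
    return sum(p["br"] * p["n_sel"] / p["n_tot"] for p in pattern_mc)

def total_efficiency(eps_containment, eps_cut_single, eps_acc, m):
    """Total signal efficiency (Eq. 7): the single-crystal cut efficiency
    enters once per crystal of the M-plet."""
    return eps_containment * eps_cut_single**m * eps_acc

# Hypothetical MC counts for a two-pattern signature such as 2A0-2B1:
mc = [{"br": 0.86, "n_sel": 48_000, "n_tot": 1_000_000},  # pattern A
      {"br": 0.12, "n_sel": 40_000, "n_tot": 1_000_000}]  # pattern B
eps_cont = containment_efficiency(mc)  # 0.04608, cf. the 4.6% of Table 2

# Table 2 quotes (eps_cut)^M = 78.7% for M = 2 and eps_acc = 98.7%:
eps_tot = total_efficiency(eps_cont, 0.787 ** 0.5, 0.987, 2)
# eps_cont * 0.787 * 0.987, i.e. a few percent, consistent with Table 2
```

Note how multiplying the containment, cut, and accidentals columns of Table 2 reproduces its "Total" row to within rounding.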
We combine the background simulations of the different sources according to the CUORE background model, and from the simulated background spectra we compute the expected number of background counts $B_{s}$ for each signature by summing the contributions from all sources included in the background model. We apply the same tight selection cuts around the signal region defined in Eqs. 5 and 6. We use $B_{s}$ to evaluate an approximate sensitivity for each signature and ultimately select the ones that will enter the analysis (see Sec. 3.4). Once the signatures that enter the analysis are selected, we optimize the selection cuts around the signal region in order to reject background structures while leaving the widest possible sidebands around the expected signal position. In this way we can parameterize the background with an appropriate analytical function, whose shape is dictated by background simulations, and use that to perform the final analysis (see Sec. 4.1). With this method we infer the number of reconstructed background events in each signature from data, rather than relying just on simulations.

### 3.4 Experimental Signature Ranking

The 15 unique signatures under analysis have different signal efficiencies and backgrounds, and thus different detection sensitivities to the signal. In this section we evaluate an approximate sensitivity for each signature and reduce the 15 signatures down to the most sensitive subset. We analytically evaluate the discovery sensitivity of signature $s$ starting from a background-only model for the total number of counts observed in a single bin centered at the expected signal position. In background-free signatures ($B_{s}\ll 1$) we assume an exponentially decaying prior $\mathrm{P}(\mu)=e^{-\mu}$, where $\mu$ is the true value of the number of background counts. In background-limited ones ($B_{s}\gg 1$) we assume a Gaussian prior whose mean and variance are $B_{s}$.
We define the discovery sensitivity as the minimum number of observed counts $N_{s}$ such that the probability of observing $N>N_{s}$ counts in the background-only model is smaller than a given threshold $p_{th}$. Then, from $N_{s}$, we extract the corresponding half-life sensitivity $\tilde{S}_{1/2}(\varepsilon_{s},B_{s})=\bigg{[}\frac{\ln(2)\>M\Delta t\>N_{A}\>\eta(^{130}\mathrm{Te})}{m(\mathrm{TeO}_{2})}\bigg{]}S(\varepsilon_{s},B_{s})$ (8) where $M$ is the detector mass, $\Delta t$ its live time, $N_{A}$ the Avogadro constant, $\eta(^{130}\rm{Te})=(34.167\pm 0.002)\%$ Fehr:2004 the isotopic abundance of ${}^{130}\mathrm{Te}$ in natural tellurium, $m(\mathrm{TeO}_{2})=159.6$ g/mol the molar mass of tellurium dioxide Fehr:2004 , and $S(\varepsilon_{s},B_{s})$ is a score function $S(\varepsilon_{s},B_{s})=\begin{cases}\frac{\varepsilon_{s}}{-\ln(p_{th})}&B_{s}<B_{th}\\\ \frac{\varepsilon_{s}}{n_{\sigma}(p_{th})\sqrt{B_{s}}}&B_{s}\geq B_{th}\end{cases}$ (9) where $n_{\sigma}(p_{th})$ is the number of Gaussian sigma corresponding to $p_{th}$, and $B_{th}$ sets the transition from the background-free approximation to the background-limited approximation, making $S(\varepsilon_{s},B_{s})$ continuous. For $n_{\sigma}=5$, $p_{th}\sim 3\times 10^{-7}$ and $B_{th}\sim 9$. We note, though, that all signatures have a number of expected background counts either $<1$ or $>10$, so their ranking would not be affected by a different choice of the $B_{th}$ threshold. We compute the relative score of each signature $s$ as $R_{s}\doteq\frac{S(\varepsilon_{s},B_{s})}{\sum_{s^{\prime}}S(\varepsilon_{s^{\prime}},B_{s^{\prime}})}$ (10) where $s^{\prime}$ is an index running over all experimental signatures. The efficiency term $\varepsilon_{s}$ includes only the MC-based containment efficiency. This is acceptable for the computation of an approximate analytical score function, since the containment term by far dominates the overall efficiency (Tab. 2).
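The score of Eq. 9 and the relative score of Eq. 10 can be sketched as follows; the efficiency and background inputs in the example are placeholders, not the values actually used in the ranking:

```python
import math

def score(eps, b, p_th=3e-7, n_sigma=5.0, b_th=9.0):
    """Discovery score of Eq. 9: background-free regime below b_th,
    Gaussian background-limited regime at or above it."""
    if b < b_th:
        return eps / -math.log(p_th)
    return eps / (n_sigma * math.sqrt(b))

def relative_scores(signatures):
    """Relative scores of Eq. 10 for a dict {name: (eps_containment, B_s)}."""
    raw = {name: score(eps, b) for name, (eps, b) in signatures.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

# Placeholder inputs: two background-free and one background-limited signature
rel = relative_scores({"sigA": (0.046, 0.2),
                       "sigB": (0.029, 0.4),
                       "sigC": (0.010, 40.0)})
```

By construction the relative scores sum to one, and a signature with tens of expected background counts is heavily penalized by the $\sqrt{B_{s}}$ factor even at comparable efficiency.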
We set a threshold of $R_{s}>5\%$ and identify three signatures for both the $0\nu\beta\beta$ and $2\nu\beta\beta$ decay searches, namely 2A0-2B1, 2A2-2B3 and 3A0, listed in Table 3. The selected experimental signatures account for the majority of the sensitivity: the sum of their scores amounts to $84\%$ ($87\%$) of the total score of all signatures in the $0\nu\beta\beta$ ($2\nu\beta\beta$) search.

Table 3: Selected experimental signatures for the DBD search to the $0^{+}_{2}$ excited state of ${}^{130}\mathrm{Xe}$ in the $0\nu\beta\beta$ (top) and $2\nu\beta\beta$ (bottom) channels. For each signature the corresponding Regions Of Interest (ROI, i.e. the applied selection cuts) are listed in terms of the ordered energy releases $E_{1}\geq E_{2}\geq E_{3}$. The component used for the fit is highlighted with a ∗ superscript. For each signature the partitions of the secondaries expected to contribute are listed; for each secondary we report in round brackets the expected energy release in keV. The last row reports the corresponding relative score (Eq. 10).
$0\nu\beta\beta$ | 2A0-2B1 | 2A2-2B3 | 3A0
---|---|---|---
Partition | $E_{2}:\gamma(1257)$ | $E_{2}:\gamma(536)$ | $E_{3}:\gamma(536)$
 | $E_{1}:\beta\beta(734)+\gamma(536)$ | $E_{1}:\beta\beta(734)+\gamma(1257)$ | $E_{2}:\beta\beta(734)$
 | | | $E_{1}:\gamma(1257)$
ROI [keV] | $1247<E_{2}<1280$ | $523<E^{*}_{2}<573$ | $526<E_{3}<546$
 | $1247<E_{1}^{*}<1280$ | $1981<E_{1}<2001$ | $700<E^{*}_{2}<760$
 | | | $1247<E_{1}<1267$
Score | $39\%$ | $25\%$ | $20\%$

$2\nu\beta\beta$ | 2A0-2B1 | 2A2-2B3 | 3A0
---|---|---|---
Partition | $E_{2}:\beta\beta(0-734)+\gamma(536)$ | $E_{2}:\gamma(536)$ | $E_{3}:\beta\beta(0-734)$
 | $E_{1}:\gamma(1257)$ | $E_{1}:\beta\beta(0-734)+\gamma(1257)$ | $E_{2}:\gamma(536)$
 | | | $E_{1}:\gamma(1257)$
ROI [keV] | $620<E_{2}<1150$ | $523<E^{*}_{2}<573$ | $400<E_{3}<523$
 | $1220<E^{*}_{1}<1300$ | $1360<E_{1}<1990$ | $523<E^{*}_{2}<573$
 | | | $1779<E_{1}+E_{2}<1807$
Score | $40\%$ | $22\%$ | $25\%$

## 4 Physics Extraction

We use a phenomenological parameterization of the background in the fitting regions (as opposed to using the predicted spectra from the MC), hence real data are required to tune the fit. To avoid biasing our results, we build the fit (Sec. 4.1) on blinded data, using the blinding procedure described in Sec. 4.2.

### 4.1 Fitting Technique

We extract the $0\nu\beta\beta$ and $2\nu\beta\beta$ decay rates of ${}^{130}\mathrm{Te}$ to the $0^{+}_{2}$ excited state of ${}^{130}\mathrm{Xe}$ using two separate Bayesian fits. For each process ($0\nu$ or $2\nu$) the fit is run simultaneously on all the involved signatures. Every multiplet of multiplicity $M$ can be represented as a point $\vec{E}_{ev}$ in an $M$-dimensional space of reconstructed energies. The energy releases are ordered so that $E_{i}>E_{i+1}\>\>\forall i=1,..,M-1$. For each signature, one of the components of the $\vec{E}_{ev}$ vector is selected to perform the fit. This is referred to as the projected energy, and is indicated with a ∗ superscript in Table 3. In the following we will denote this energy as $E_{ev}$.
An unbinned Bayesian fit is implemented with the BAT software package Caldwell:2008fw . It allows simultaneous sampling and maximization of the posterior probability density function (pdf) via Markov Chain Monte Carlo. The likelihood can be decomposed, for each signature and dataset, as follows: $\begin{split}\log\mathcal{L}_{s,ds}&=-(\lambda_{S\>s,ds}+\lambda_{B\>s,ds})+\\\ &+\sum_{ev\in(s,ds)}\log\bigg{[}\lambda_{S\>s,ds}\xi_{bo,ds}f_{S}(E_{ev})+\\\ &+\lambda_{B\>s,ds}\xi_{bo,ds}f_{B}(E_{ev})\bigg{]}\end{split}$ (11) where the subscripts $s$, $bo$, $ds$ refer to a specific signature, bolometer, and dataset respectively. The form of Eq. 11 is the same for $0\nu\beta\beta$ and $2\nu\beta\beta$: the $\lambda_{S}$ and $\lambda_{B}$ terms are the expected numbers of signal and background events, respectively; $\xi_{bo,ds}$ is the ratio of the exposure of bolometer $bo$ to the exposure of dataset $ds$; and $f_{S}$ and $f_{B}$ are the normalized signal and background pdfs, which depend only on the projected energy variable $E_{ev}$. The response function of the CUORE bolometers to monochromatic energy releases has a functional form defined phenomenologically for each bolometer-dataset pair Alexey2018 ; Laura2018 as the superposition of 3 Gaussian components, to account for non-Gaussian tails. A correction for the bias in the energy scale reconstruction is implemented, together with the energy dependence of the resolution (see Ref. Alduino:2017ehq for more details). The signal term $f_{S}(E_{ev})$ models this shape for the bolometer-dataset pair in which the projected energy $E_{ev}$ was released.
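The per-signature, per-dataset term of Eq. 11 is an extended unbinned likelihood. A simplified sketch follows, in which the bolometer-dependent three-Gaussian response is collapsed into generic pdf callables; all names and the toy inputs are ours, not the BAT implementation:

```python
import math

def log_likelihood(events, lam_s, lam_b, xi, f_s, f_b):
    """Extended unbinned log-likelihood of Eq. 11 for one
    (signature, dataset) pair.  `events` holds (bolometer, projected
    energy) pairs, `xi[bo]` is the bolometer/dataset exposure ratio,
    and `f_s`, `f_b` are the normalized signal and background pdfs."""
    ll = -(lam_s + lam_b)  # extended (Poisson) normalization term
    for bo, e_ev in events:
        ll += math.log(xi[bo] * (lam_s * f_s(e_ev) + lam_b * f_b(e_ev)))
    return ll

# Toy check: flat background pdf over an 80 keV window, Gaussian signal
# pdf at a 1257 keV peak with 2.1 keV sigma (illustrative numbers)
f_b = lambda e: 1.0 / 80.0
f_s = lambda e: (math.exp(-0.5 * ((e - 1257.0) / 2.1) ** 2)
                 / (2.1 * math.sqrt(2.0 * math.pi)))
ll = log_likelihood([("bo1", 1256.5), ("bo2", 1230.0)],
                    lam_s=1.5, lam_b=3.0,
                    xi={"bo1": 0.5, "bo2": 0.5}, f_s=f_s, f_b=f_b)
```

In the real fit the sum over signatures and datasets of these terms gives the combined log-likelihood of Eq. 15.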
The expected number of signal counts can be written as $\begin{split}\lambda_{S\>s,ds}&=\Gamma^{(p)}_{\beta\beta}[\mathrm{yr}^{-1}]\bigg{[}\frac{N_{A}\>10^{3}\>\eta(^{130}\mathrm{Te})}{m(\mathrm{TeO}_{2})\>[\rm{g/mol}]}\bigg{]}\epsilon_{s}\cdot\\\ &\cdot(M\Delta t)_{ds}\>\mathrm{[kg\cdot yr]}\>(\epsilon^{cut})_{ds}^{M}\>(\epsilon^{acc})_{ds}\end{split}$ (12) where $\Gamma^{(p)}_{\beta\beta}$ is the decay rate of process $p$ and the other parameters were introduced following Eq. 8. The $\Gamma_{\beta\beta}^{(p)}$ parameter describes the rate of the process $p=0\nu,2\nu$ under investigation and is given in both cases a uniform physical prior, $\Gamma_{\beta\beta}>0$. The background term $f_{B}(E_{ev})$ is parameterized as $f_{B}(E_{ev})=\frac{1}{\Delta E}\bigg{[}1+m_{s}\big{(}E_{ev}-E^{(s)}_{0}\big{)}\bigg{]}$ (13) where $\Delta E=E^{max}_{s}-E^{min}_{s}$ is the width of the region of interest, $E^{(s)}_{0}$ is its center, and $m_{s}$ describes the slope of the background for signature $s$. The normalization of the background term represents the number of expected background counts $\lambda_{B\>s,ds}=\mathrm{BI}_{s}\>(M\Delta t)_{ds}\>(E^{max}_{s}-E^{min}_{s})$ (14) where $\mathrm{BI}_{s}$ is the background index for signature $s$. Background simulations suggest that a uniform event distribution is enough to describe the continuous background in all signatures except the $2\nu\beta\beta$ 2A0-2B1; for this reason the $m_{s}$ parameter is included only when necessary. The background is fully described by the $\mathrm{BI}_{s}$ and $m_{s}$ parameters, which together make 4 (3) nuisance parameters in the $2\nu\beta\beta$ ($0\nu\beta\beta$) case, and which will be marginalized over. The prior for the background indices $\mathrm{BI}_{s}$ and slopes $m_{s}$ is uniform. The combined log-likelihood reads $\log\mathcal{L}(\mathcal{D}|\mathrm{H}_{S+B})=\sum_{s,ds}\log\mathcal{L}_{s,ds}$ (15) where $\mathrm{H}_{S+B}$ indicates that the likelihood is written under the signal-plus-background hypothesis $\mathrm{H}_{S+B}$, i.e.
that the existence of the process of interest is assumed. The background-only hypothesis $\mathrm{H}_{B}$ is a particular case that can be obtained by setting $\Gamma_{\beta\beta}=0$.

### 4.2 Blinding and Sensitivity

We blind the data by injecting simulated signal events into the experimental spectrum. We inject a random and unknown number of fake signal events, corresponding to an event rate larger than the current 90% upper limit Adams:2018yrj ; Andreotti:2011in . We then compute the expected number of counts in each signature, according to the known efficiencies and exposures, for each dataset. Each generated signal event is randomly assigned a bolometer, according to its exposure within the considered dataset. Finally, the projected energy of the signal event is generated according to the detector response function $f_{S}(E_{ev}|Q_{s})$, centered at the expected position $Q_{s}$ of the monochromatic energy release in the projected energy space. The injection rate of the simulated signal events lies between $\Gamma^{min}_{p}=1\cdot 10^{-23}\>[\mathrm{1/yr}]\quad\mathrm{and}\quad\Gamma^{max}_{p}=5\cdot 10^{-23}\>[\mathrm{1/yr}]$ (16) We then fit the blinded datasets to get data-driven estimates of the background levels in each fitting window. These background estimates are used as inputs to the sensitivity studies in the next section. The results of the fits to the blinded data are reported in Table 4. We see a non-null background for both the $\mathrm{2A0-2B1}$ and $\mathrm{2A2-2B3}$ signatures. No background is expected for the $\mathrm{3A0}$ signature. To extract the median half-life sensitivity for each decay, we generate $10^{4}$ background-only Toy Monte Carlo simulations (ToyMC), using the numbers in Tab. 4. A background-only ToyMC simulation is an ensemble of simulated datasets, generated according to the following procedure, which is iterated $N_{toy}$ times to produce the same number of ToyMC ensembles.
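The number of fake-signal counts to inject for a given blinding rate follows the same counting formula as Eq. 12. A sketch with the constants quoted in the text; the 3.5% total efficiency and the drawn rate are illustrative placeholders:

```python
import random

N_A = 6.02214076e23   # Avogadro constant [1/mol]
ETA = 0.34167         # 130Te isotopic abundance in natural Te
M_TEO2 = 159.6        # TeO2 molar mass [g/mol]

def expected_signal(rate_per_yr, eps_tot, exposure_kg_yr):
    """Expected signal counts (Eq. 12): decay rate times the number of
    130Te nuclei per kg of TeO2, times exposure and total efficiency."""
    n_per_kg = N_A * 1e3 * ETA / M_TEO2   # ~1.3e24 nuclei of 130Te per kg
    return rate_per_yr * n_per_kg * eps_tot * exposure_kg_yr

rng = random.Random(0)
rate = rng.uniform(1e-23, 5e-23)  # blinding injection rate within Eq. 16
n_inject = expected_signal(rate, 0.035, 372.5)  # a few hundred fake events
```

Each of these events is then assigned a bolometer by exposure and an energy drawn from the detector response, exactly as described above.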
We define a set of signatures, together with the multiplicity and the cuts in the ordered energy variables that identify candidate events. For each signature, we set a functional form for the background pdf, either constant or linear, and sample a value from the posterior pdf of the corresponding blinded fit. We compute the number of expected background events for each signature and dataset according to Eq. 14, and sample the actual number of background events from a Poisson distribution with expectation value equal to the number of expected counts. We store each simulated ToyMC event as a vector of ordered energy releases $\vec{E}_{ev}$ and related bolometers $\vec{\mathrm{ch}}_{ev}$. The bolometers are randomly extracted from the active bolometers of each dataset according to their exposure in the data, while the energies are generated from the selected background pdf shape, with parameters (e.g. the background index) drawn from the posterior pdfs obtained in the blinded fit to the data. We then fit each ToyMC with the signal-plus-background model $\mathrm{H}_{S+B}$ and compute the lower limit on the decay half-life from the $90\%$ quantile of the marginalized posterior pdf of the decay rate parameter. We show the distribution of these limits in Figure 2 for both the $0\nu\beta\beta$ and $2\nu\beta\beta$ decay processes. We quote the half-life sensitivity as the median limit of the ToyMCs (Table 4): $\mathrm{S^{0\nu}_{1/2}=(5.6\pm 1.4)\times 10^{24}\>\mathrm{yr}}$ for the ${0\nu}$ decay and $\mathrm{S^{2\nu}_{1/2}=(2.1\pm 0.5)\times 10^{24}\>\mathrm{yr}}$ for the ${2\nu}$ decay, where the uncertainty is the MAD of the corresponding distribution.

Figure 2: Distribution of $90\%$ C.I. marginalized upper limits on $\mathrm{T_{1/2}}=\log 2/\Gamma$ for $0\nu\beta\beta$ decay (top) and $2\nu\beta\beta$ decay (bottom) obtained from Toy MC simulations.
We obtain a median sensitivity of $S^{0\nu}_{1/2}=5.6\times 10^{24}$ yr and $S^{2\nu}_{1/2}=2.1\times 10^{24}$ yr (black dashed line), compared to the $90\%$ C.I. limit from this analysis (red solid line).

Table 4: Results of the blinded fit to $0\nu\beta\beta$ (top) and $2\nu\beta\beta$ (bottom) candidate events in the signatures of Table 3. For each parameter the mean and standard deviation of the corresponding marginalized posterior distribution are reported. These values are used only as input to the sensitivity studies and fit validation. Final results are reported in Table 5.

$0\nu\beta\beta$: Observable | Blinded Fit Value | Units
---|---|---
Blinded $\mathrm{\Gamma_{\beta\beta}^{0\nu}}$ | $2.8\pm 0.1$ | $10^{-23}$ [yr-1]
$\mathrm{BI_{2A0-2B1}}$ | $6.1\pm 3.6$ | $10^{-4}$ [counts/(keV kg yr)]
$\mathrm{BI_{2A2-2B3}}$ | $2.7\pm 1.5$ | $10^{-4}$ [counts/(keV kg yr)]
$\mathrm{BI_{3A0}}$ | $8.7\pm 8.3$ | $10^{-5}$ [counts/(keV kg yr)]

$2\nu\beta\beta$: Observable | Blinded Fit Value | Units
---|---|---
Blinded $\mathrm{\Gamma_{\beta\beta}^{2\nu}}$ | $5.1\pm 0.1$ | $10^{-23}$ [yr-1]
$\mathrm{BI_{2A0-2B1}}$ | $3.3\pm 0.4$ | $10^{-3}$ [counts/(keV kg yr)]
$\mathrm{m_{2A0-2B1}}$ | $-5.5\pm 4.3$ | $10^{-3}$ [1/keV]
$\mathrm{BI_{2A2-2B3}}$ | $4.0\pm 0.6$ | $10^{-3}$ [counts/(keV kg yr)]
$\mathrm{BI_{3A0}}$ | $6.9\pm 6.8$ | $10^{-5}$ [counts/(keV kg yr)]

## 5 Results

Table 5: We report the mean and standard deviation of the marginalized posterior distributions for the decay rate and background parameters of each signature, derived from unblinded data. The $\mathrm{S_{1/2}}$ parameter indicates the median expected sensitivity for limit setting at $90\%$ C.I. on $\mathrm{T_{1/2}}$, together with the MAD of its distribution. The last row reports the marginalized $90\%$ C.I. Bayesian lower limit on the decay half-life. All results come from the combined fit with systematics.
$0\nu\beta\beta$: Parameter | Final Fit Value | Units
---|---|---
$\mathrm{\Gamma_{\beta\beta}^{0\nu}}$ | $5.8\pm 4.5$ | $10^{-26}$ [yr-1]
$\mathrm{BI_{2A0-2B1}}$ | $2.1\pm 1.4$ | $10^{-4}$ [counts/(keV kg yr)]
$\mathrm{BI_{2A2-2B3}}$ | $2.7\pm 1.2$ | $10^{-4}$ [counts/(keV kg yr)]
$\mathrm{BI_{3A0}}$ | $5.8\pm 5.5$ | $10^{-5}$ [counts/(keV kg yr)]
$\mathrm{S^{0\nu}_{1/2}}$ | $5.6\pm 1.4$ | $10^{24}$ [yr]
$\mathrm{T}_{1/2}^{90\%}$ | $>5.9$ | $10^{24}$ [yr]

$2\nu\beta\beta$: Parameter | Final Fit Value | Units
---|---|---
$\mathrm{\Gamma_{\beta\beta}^{2\nu}}$ | $2.8\pm 1.8$ | $10^{-25}$ [yr-1]
$\mathrm{BI_{2A0-2B1}}$ | $3.0\pm 0.3$ | $10^{-3}$ [counts/(keV kg yr)]
$m_{\mathrm{2A0-2B1}}$ | $-5.2\pm 4.2$ | $10^{-3}$ [keV-1]
$\mathrm{BI_{2A2-2B3}}$ | $4.3\pm 0.5$ | $10^{-3}$ [counts/(keV kg yr)]
$\mathrm{BI_{3A0}}$ | $5.4\pm 5.4$ | $10^{-5}$ [counts/(keV kg yr)]
$\mathrm{S^{2\nu}_{1/2}}$ | $2.1\pm 0.5$ | $10^{24}$ [yr]
$\mathrm{T}_{1/2}^{90\%}$ | $>1.3$ | $10^{24}$ [yr]

Figure 3: Result of the unbinned fit plotted on binned data. Error bars are only a visual aid and correspond to the square root of the bin contents. We show the best-fit curve (blue solid) and its signal component (blue dashed) at the global mode of the posterior for $0\nu\beta\beta$ (left) and $2\nu\beta\beta$ (right), together with the 90% C.I. marginalized limit on the decay rate (red solid).

We show in Figure 3 and Table 5 the results of the fit to unblinded data for both $0\nu\beta\beta$ and $2\nu\beta\beta$. Though the data are binned for graphical reasons, the analysis is unbinned. Including contributions from all sources of systematic uncertainty listed in Table 6, no significant signal is observed in either decay mode. The global modes of the joint posterior pdfs for the rate parameters are $\displaystyle\hat{\rm{\Gamma}}^{0\nu}_{\beta\beta}=4.0_{-4.0}^{+3.0}\times 10^{-26}\>\rm{yr}^{-1}$ (17a) $\displaystyle\hat{\rm{\Gamma}}^{2\nu}_{\beta\beta}=2.2_{-1.9}^{+1.7}\times 10^{-25}\>\rm{yr}^{-1}$ (17b) where we quote the uncertainty as the marginalized $68\%$ smallest interval.
We report the marginalized posterior pdf and the $90\%$ C.I. in Figure 4. We include the marginalized posterior pdfs for the individual background parameters in Figure 5 $(0\nu\beta\beta)$ and Figure 6 $(2\nu\beta\beta)$. Their means agree with the corresponding results from blinded data within one standard deviation. We observe a slight negative background fluctuation (i.e. a limit stronger than expected) in the $0\nu\beta\beta$ decay analysis and a positive one (i.e. a limit looser than expected) in the $2\nu\beta\beta$ decay analysis, with respect to the median $90\%$ C.I. limit. The probabilities of setting an even stronger (looser) limit in the $0\nu\beta\beta$ ($2\nu\beta\beta$) decay analysis are $45.1\%$ and $10.4\%$ respectively. A null signal rate is included in the $1\sigma$ ($2\sigma$) smallest C.I. of the marginalized pdf for the $\rm{\Gamma}^{0\nu}_{\beta\beta}$ ($\rm{\Gamma}^{2\nu}_{\beta\beta}$) rate parameter (we refer here to the Gaussian case to define the probability content of any $n\sigma$ interval). The following Bayesian lower bounds on the corresponding half-life parameters are set: $\displaystyle\big{(}\mathrm{T_{1/2}}\big{)}^{0\nu}_{0^{+}_{2}}$ $\displaystyle>5.9\times 10^{24}\>\mathrm{yr}\quad(\mathrm{90\%\>\>C.I.})$ (18a) $\displaystyle\big{(}\mathrm{T_{1/2}}\big{)}^{2\nu}_{0^{+}_{2}}$ $\displaystyle>1.3\times 10^{24}\>\mathrm{yr}\quad(\mathrm{90\%\>\>C.I.})$ (18b) The results reported in this Article represent the most stringent limits on the DBD of 130Te to excited states, and improve by a factor $\sim 5$ on the previous results for this process.

### 5.1 Systematic Uncertainties

The major sources of systematic uncertainty are: the signal efficiencies, the detector response function, the energy calibration, and the uncertainty on the isotopic abundance of 130Te (see Table 6). Each systematic uncertainty can be introduced as a set of nuisance parameters in the fit with a specified prior distribution.
Each set of nuisance parameters can be activated independently, with the priors listed in Table 6. We individually monitor the effect of activating each source of systematic uncertainty by repeating the fit and comparing the 90% C.I. Bayesian limit on the half-life with that of the minimal model, in which all sources of systematics are described with constants rather than fit parameters. Finally, we repeat the fit activating all additional nuisance parameters at once. We include, for each dataset, two separate parameters to model different sources of uncertainty in the cut efficiency evaluation, and replace the $\varepsilon^{cut}$ constant with the sum $\varepsilon^{cut\>(I)}+\varepsilon^{cut\>(II)}$. Cut efficiency I parameterizes the uncertainty due to the finite size of the samples of pulser events and $\gamma$ decays used to extract the cut efficiency. Its prior is Gaussian, with mean equal to $\varepsilon^{cut}$ and width equal to the corresponding uncertainty (see Table 2). The additional cut efficiency II is uniformly distributed, with zero mean. It models the systematic uncertainty due to the PSA efficiency, shifting the cut efficiency by at most $0.7\%$. The accidentals efficiency contributes to the systematic uncertainty only through the limited statistics of the 40K peak. We add one nuisance parameter per dataset, with a Gaussian-distributed prior, to model this effect. The containment efficiency is instead affected by the uncertainty in the simulation of Compton scattering events at low energy. The uncertainty due to the number of simulated signal events is negligible. We take the ratio of the simulated Compton scattering attenuation coefficient to reference data Allison:2016lfl as a measure of the relative uncertainty on this efficiency term. We account for this, for each signature, by introducing a nuisance containment efficiency parameter with a Gaussian prior.
The isotopic abundance of 130Te in natural tellurium is modeled with a single global nuisance parameter with a Gaussian prior, $\eta=(34.167\pm 0.002)\%$. Both the detector response function shape and the energy scale bias are evaluated from data, as anticipated in Sec. 4.1. Each effect is separately parameterized with a 2nd order polynomial as a function of energy, whose coefficients are evaluated with a fit to the 5-7 most visible peaks in each dataset. The uncertainties of and correlations among these parameters are themselves a source of systematic uncertainty, and are included as 2 independent sets of correlated parameters per dataset, with a multivariate normal prior distribution. In this way, the detector response function width and position are allowed to float within their uncertainty. The uncertainty in modeling the detector response function leads to the dominant systematic effect on the limit, which is below a 1% shift. Sub-dominant effects come from the energy scale bias and the containment efficiency (Table 6).

Source | Prior | Effect on $\mathrm{T}_{1/2}^{90\%}$ ($0\nu\beta\beta$) | Effect on $\mathrm{T}_{1/2}^{90\%}$ ($2\nu\beta\beta$)
---|---|---|---
Cut efficiency I | Gaussian | $0.2\%$ | $<0.1\%$
Cut efficiency II | Uniform | $0.1\%$ | $0.1\%$
Accidentals efficiency | Gaussian | $0.2\%$ | $<0.1\%$
Containment efficiency | Gaussian | $0.3\%$ | $0.1\%$
${}^{130}\mathrm{Te}$ isotopic abundance | Gaussian | $<0.1\%$ | $0.1\%$
Energy scale bias | Multiv. Gaussian | $0.3\%$ | $0.1\%$
Detector resolution | Multiv. Gaussian | $0.5\%$ | $0.4\%$
Combined | Multivariate | $0.4\%$ | $0.1\%$

Table 6: Systematic uncertainties. We report the effect of each source separately, and of their combination, on the marginalized $90\%$ C.I. $\mathrm{T}_{1/2}^{90\%}$ limit on the $0\nu\beta\beta$ and $2\nu\beta\beta$ decay half-lives.

Figure 4: Marginalized decay rate posterior pdf for $0\nu\beta\beta$ (top) and $2\nu\beta\beta$ (bottom) from the combined fit with all systematics. We show the $90\%$ C.I. in gray.
## 6 Conclusions

We have presented the latest search for DBD of 130Te to the first $0^{+}$ excited state of 130Xe with CUORE, based on a 372.5 kg$\cdot$yr TeO2 exposure. We found no evidence for either $0\nu\beta\beta$ or $2\nu\beta\beta$ decay, and we placed Bayesian lower bounds at $90\%$ C.I. on the decay half lives of $\mathrm{(T_{1/2})^{0\nu}_{0^{+}_{2}}>5.9\times 10^{24}\>\mathrm{yr}}$ for the $0\nu\beta\beta$ mode and $\mathrm{(T_{1/2})^{2\nu}_{0^{+}_{2}}>1.3\times 10^{24}\>\mathrm{yr}}$ for the $2\nu\beta\beta$ mode. The median limit-setting sensitivity for the $2\nu\beta\beta$ decay, $2.1\times 10^{24}$ yr, is starting to approach the $7.2\times 10^{24}$ yr lower bound of the half-life range predicted by QRPA calculations for this decay mode. The CUORE experiment is steadily taking data at an average rate of $\sim 50$ kg$\cdot$yr/month, and by the end of its data taking the collected exposure is expected to increase by an order of magnitude. Work is ongoing to improve the sensitivity by extending the analysis to not-fully-contained events, leveraging the topology of higher-multiplicity coincident signals to further reduce the background, and lowering the threshold of the pulse shape discrimination algorithm to improve the signal efficiency. ###### Acknowledgements. The CUORE Collaboration thanks the directors and staff of the Laboratori Nazionali del Gran Sasso and the technical staff of our laboratories. This work was supported by the Istituto Nazionale di Fisica Nucleare (INFN); the National Science Foundation under Grant Nos. NSF-PHY-0605119, NSF-PHY-0500337, NSF-PHY-0855314, NSF-PHY-0902171, NSF-PHY-0969852, NSF-PHY-1307204, NSF-PHY-1314881, NSF-PHY-1401832, and NSF-PHY-1913374; and Yale University. This material is also based upon work supported by the US Department of Energy (DOE) Office of Science under Contract Nos.
DE-AC02-05CH11231 and DE-AC52-07NA27344; by the DOE Office of Science, Office of Nuclear Physics under Contract Nos. DE-FG02-08ER41551, DE-FG03-00ER41138, DE-SC0012654, DE-SC0020423, DE-SC0019316; and by the EU Horizon2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 754496. This research used resources of the National Energy Research Scientific Computing Center (NERSC). This work makes use of both the DIANA data analysis and APOLLO data acquisition software packages, which were developed by the CUORICINO, CUORE, LUCIFER and CUPID-0 Collaborations. We acknowledge contributions from J. Kotila on dedicated computation of beta spectra. ## References * [1] G. Racah. Sulla simmetria tra particelle e antiparticelle. Nuovo Cim., 14:322–328, 1937. * [2] W.H. Furry. On transition probabilities in double beta-disintegration. Phys. Rev., 56:1184–1193, 1939. * [3] B. Pontecorvo. Neutrino experiments and the problem of conservation of leptonic charge. Sov. Phys. JETP, 26:984, 1968. * [4] J. Schechter and J.W.F. Valle. Neutrinoless Double Beta Decay in SU(2) x U(1) Theories. Phys. Rev. D, 25:2951, 1982. * [5] M. Fukugita and T. Yanagida. Baryogenesis Without Grand Unification. Phys. Lett. B, 174:45–47, 1986. * [6] F. T. Avignone, S. R. Elliott, and J. Engel. Double Beta Decay, Majorana Neutrinos, and Neutrino Mass. Rev. Mod. Phys., 80:481–516, 2008. * [7] A. S. Barabash et al. Two neutrino double beta decay of 100Mo to the first excited $0^{+}$ state in 100Ru. Phys. Lett., B345:408–413, 1995. * [8] A. S. Barabash, F. Hubert, P. Hubert, and V. I. Umatov. Double beta decay of 150Nd to the first $0^{+}$ excited state of 150Sm. JETP Lett., 79:10–12, 2004. [Pisma Zh. Eksp. Teor. Fiz. 79, 12 (2004)]. * [9] A. S. Barabash. Double beta decay to the excited states: Review. AIP Conf. Proc., 1894(1):020002, 2017. * [10] E. Andreotti et al. Search for double-$\beta$ decay of 130Te to the first 0+ excited state of 130Xe with CUORICINO. Phys.
Rev., C85:045503, 2012. * [11] C. Alduino et al. Double-beta decay of ${}^{130}\hbox{Te}$ to the first $0^{+}$ excited state of ${}^{130}\hbox{Xe}$ with CUORE-0. Eur. Phys. J., C79(9):795, 2019. * [12] Pekka Pirinen and Jouni Suhonen. Systematic approach to $\beta$ and $2\nu\beta\beta$ decays of mass $A=100-136$ nuclei. Phys. Rev. C, 91:054309, 2015. * [13] Jouni Suhonen and Osvaldo Civitarese. Probing the quenching of $g_{A}$ by single and double beta decays. Phys. Lett. B, 725(1):153 – 157, 2013. * [14] J. Suhonen and O. Civitarese. Single and double beta decays in the A=100, A=116 and A=128 triplets of isobars. Nucl. Phys. A, 924:1 – 23, 2014. * [15] J. Barea, J. Kotila, and F. Iachello. Nuclear matrix elements for double-$\beta$ decay. Phys. Rev. C, 87(1):014315, 2013. * [16] J. Kotila and F. Iachello. Phase space factors for double-$\beta$ decay. Phys. Rev. C, 85:034316, 2012. * [17] A.S. Barabash. Precise half-life values for two neutrino double beta decay. Phys. Rev. C, 81:035501, 2010. * [18] Bjoern Lehnert. Excited state transitions in double beta decay: A brief review. EPJ Web of Conferences, 93:01025, 01 2015. * [19] C. Arnaboldi et al. CUORE: A Cryogenic underground observatory for rare events. Nucl. Instrum. Meth. A, 518:775–798, 2004. * [20] C. Brofferio, O. Cremonesi, and S. Dell’Oro. Neutrinoless Double Beta Decay Experiments With TeO2 Low-Temperature Detectors. Front. in Phys., 7:86, 2019. * [21] C. Alduino et al. First Results from CUORE: A Search for Lepton Number Violation via $0\nu\beta\beta$ Decay of 130Te. Phys. Rev. Lett., 120(13):132501, 2018. * [22] D.Q. Adams et al. Improved Limit on Neutrinoless Double-Beta Decay in 130Te with CUORE. Phys. Rev. Lett., 124(12):122501, 2020. * [23] E. Fiorini and T.O. Niinikoski. Low Temperature Calorimetry for Rare Decays. Nucl. Instrum. Meth. A, 224:83, 1984. * [24] C. Enss and D. McCammon. Physical principles of low temperature detectors: Ultimate performance limits and current detector capabilities.
J. Low Temp. Phys., 151:5, 2008. * [25] C. Arnaboldi et al. Production of high purity TeO2 single crystals for the study of neutrinoless double beta decay. J. Cryst. Growth, 312(20):2999–3008, 2010. * [26] C. Alduino et al. The CUORE cryostat: An infrastructure for rare event searches at millikelvin temperatures. Cryogenics, 102:9–21, 2019. * [27] F. Alessandria et al. The 4K outer cryostat for the CUORE experiment: Construction and quality control. Nucl. Instrum. Meth. A, 727:65–72, 2013. * [28] E. Buccheri, M. Capodiferro, S. Morganti, F. Orio, A. Pelosi, and V. Pettinacci. An assembly line for the construction of ultra-radio-pure detectors. Nucl. Instrum. Meth. A, 768:130–140, 2014. * [29] C. Arnaboldi, P. Carniti, L. Cassina, C. Gotti, X. Liu, M. Maino, G. Pessina, C. Rosenfeld, and B.X. Zhu. A front-end electronic system for large arrays of bolometers. JINST, 13(02):P02026, 2018. * [30] S. Di Domizio, A. Branca, A. Caminata, L. Canonica, S. Copello, A. Giachero, E. Guardincerri, L. Marini, M. Pallavicini, and M. Vignati. A data acquisition and control system for large mass bolometer arrays. JINST, 13(12):P12003, 2018. * [31] G. Benato et al. Radon mitigation during the installation of the CUORE 0$\nu\beta\beta$ decay detector. JINST, 13(01):P01010, 2018. * [32] A. D’Addabbo, C. Bucci, L. Canonica, S. Di Domizio, P. Gorla, L. Marini, A. Nucciotti, I. Nutini, C. Rusconi, and B. Welliver. An active noise cancellation technique for the CUORE Pulse Tube Cryocoolers. Cryogenics, 93:56–65, 2018. * [33] E. E. Haller, N. P. Palaio, M. Rodder, W. L. Hansen, and E. Kreysa. NTD germanium, A novel material for low temperature bolometers. Neutron Transmutation Doping of Semiconductor Materials. edited by R. D. Larrabee (Springer, Boston), 1984. * [34] C. Alduino et al. Analysis techniques for the evaluation of the neutrinoless double-$\beta$ decay lifetime in 130Te with the CUORE-0 detector. Phys. Rev. C, 93(4):045503, 2016. * [35] E. Gatti and P.F. Manfredi. 
Processing the Signals From Solid State Detectors in Elementary Particle Physics. Riv. Nuovo Cim., 9N1:1–146, 1986. * [36] S. Di Domizio, F. Orio, and M. Vignati. Lowering the energy threshold of large-mass bolometric detectors. JINST, 6:P02007, 2011. * [37] A. Campani et al. Lowering the Energy Threshold of the CUORE Experiment: Benefits in the Surface Alpha Events Reconstruction. J. Low Temp. Phys., pages 1–10, 2020. * [38] G. Fantini et al. Sensitivity to double beta decay of $\mathrm{{}^{130}Te}$ to the first $0^{+}$ excited state of $\mathrm{{}^{130}Xe}$ in CUORE. J. of Phys.: Conference Series, 1468, 02 2020. * [39] Balraj Singh. Nuclear data sheets for a= 130. Nuclear Data Sheets, 93(1):33–242, 2001. * [40] S. Agostinelli et al. Geant4 a simulation toolkit. Nucl. Instrum. Meth. A, 506(3):250 – 303, 2003. * [41] C. Alduino et al. The projected background for the CUORE experiment. Eur. Phys. J. C, 77(8), Aug 2017. * [42] W.C. Haxton and G.J. Stephenson. Double beta Decay. Prog. Part. Nucl. Phys., 12:409–479, 1984. * [43] M. Doi, T. Kotani, H. Nishiura, K. Okuda, and E. Takasugi. Neutrino Mass, the Right-handed Interaction and the Double Beta Decay. 1. Formalism. Prog. Theor. Phys., 66:1739, 1981. [Erratum: Prog.Theor.Phys. 68, 347 (1982)]. * [44] M. Doi, T. Kotani, H. Nishiura, K. Okuda, and E. Takasugi. Neutrino Mass, the Right-handed Interaction and the Double Beta Decay. 2. General Properties and Data Analysis. Prog. Theor. Phys., 66:1765, 1981. [Erratum: Prog.Theor.Phys. 68, 348 (1982)]. * [45] D.Q. Adams et al. Measurement of the $2\nu\beta\beta$ Decay Half-life of 130Te with CUORE. in preparation, 2020. * [46] Manuela A Fehr, Mark Rehkämper, and Alex N Halliday. Application of MC-ICPMS to the precise determination of tellurium isotope compositions in chondrites, iron meteorites and sulfides. Int. J. of Mass Spectrometry, 232:83–94, 2004. * [47] Allen Caldwell, Daniel Kollar, and Kevin Kroninger. BAT: The Bayesian Analysis Toolkit. Comput. Phys. 
Commun., 180:2197–2209, 2009. * [48] A Drobizhev. Searching for $0\nu\beta\beta$ decay of 130Te with the ton-scale CUORE bolometer array. PhD thesis, University of California, Berkeley, 2018. * [49] L. Marini. The CUORE experiment: from the commissioning to the first $0\nu\beta\beta$ limit. PhD thesis, Università degli studi di Genova, 2018. * [50] J. Allison et al. Recent developments in Geant4. Nucl. Instrum. Meth. A, 835:186–225, 2016. Figure 5: Marginalized posterior pdf for the background parameters of the $0\nu\beta\beta$ model from the combined fit with all systematics. We show the $68.3\%,95.5\%,99.7\%$ smallest C.I. in green, yellow and red respectively. Figure 6: Marginalized posterior pdf for the background parameters of the $2\nu\beta\beta$ model from the combined fit with all systematics. We show the $68.3\%,95.5\%,99.7\%$ smallest C.I. in green, yellow and red respectively.
# The evolution of network controllability in growing networks Rui Zhang Xiaomeng Wang Ming Cheng Tao Jia<EMAIL_ADDRESS>College of Computer and Information Science, Southwest University, Chongqing, 400715, P. R. China School of Rail Transportation, Soochow University, Suzhou, Jiangsu, 215131, P. R. China ###### Abstract The study of network structural controllability focuses on the minimum number of driver nodes needed to control a whole network. Despite intensive study of this topic, most works consider only static networks. It is well known, however, that real networks are growing, with new nodes and links added to the system. Here, we analyze the controllability of evolving networks and propose a general rule for the change of driver nodes. We further apply the rule to solve the problem of network augmentation subject to a controllability constraint. The findings fill a gap in our understanding of network controllability and shed light on the controllability of real systems. ###### keywords: network controllability, growing networks, complex networks Journal: Physica A ## 1 Introduction How to control a complex system is one of the most challenging problems in science and engineering, with a long history. In recent years, a significant amount of work has addressed the controllability of complex networks, i.e., our ability to drive a network from any initial state to any desired final state within finite time [1, 2, 3]. A general framework based on the structural controllability of linear systems was first proposed to identify the minimum set of driver nodes (MDS) [4], whose control leads to the control of the whole network.
Following this work, related problems under the same framework have been investigated, ranging from the cost of control [5, 6, 7] to the robustness and optimization of controllability [8, 9, 10, 11], from the multiplicity feature of control [12, 13, 14, 15] to controllability in multi-layer or temporal networks [16, 17, 18, 19], and more [20, 21, 22, 23]. The framework has also been applied to different real networked systems, such as financial networks, power networks, social networks, protein-protein interaction networks, disease networks, and gene regulatory networks [24, 25, 26, 27, 28, 29, 30, 31]. Meanwhile, research in other directions of control has also been stimulated, such as edge controllability [32, 33], exact controllability [34, 35], strong structural controllability [36] and dominating sets [37, 38, 39], which has significantly advanced our understanding of this fundamental problem. However, except for a few works considering the temporal features of networks [40, 41, 42, 43, 44], most current advances on network controllability focus on static networks, in which a fixed number of nodes is connected by a fixed set of links that do not change over time. But real networks are growing, with new links and nodes constantly added to the system [45]. To the best of our knowledge, there has been no study of the general principle governing the change of controllability in growing networks. In this work, we analyze the controllability of evolving networks and propose a general rule for the change of driver nodes under network expansion. This rule allows us to further study a problem of network augmentation subject to controllability constraints [43]. The maximum number of new nodes that can be added to the network while keeping the controllability unchanged is difficult to obtain. However, upper and lower bounds for this problem can be identified efficiently.
These upper and lower bounds are also affected by different types of degree correlation in directed networks. In the following discussion, we briefly review the framework for identifying the minimum set of driver nodes and a node classification scheme based on the multiplicity feature in choosing driver nodes. With these basic concepts, we propose a general rule for the change of driver nodes when a new node is added and connected to existing nodes in the network. Finally, we use this rule to solve the problem of maintaining network controllability while adding new nodes to the system. ## 2 Results ### 2.1 Network structural controllability and node classification A dynamical system is controllable if it can be driven from any initial state to any desired final state within finite time. In many systems, altering the state of a few nodes is sufficient to drive the dynamics of the whole network. For a linear time-invariant system, the minimum driver node set (MDS) can be identified efficiently [4]. First, the directed network is converted into a bipartite graph by splitting each node of the directed network into two nodes of the bipartite graph, forming two disjoint sets of + and - nodes. Consequently, a directed link from node $i$ to $j$ in the directed network becomes a link from node $i^{+}$ to node $j^{-}$ in the bipartite graph. Then the maximum matching [46, 47, 48] of the bipartite graph is identified, in which each node can be matched to at most one other node via one link. The unmatched nodes in the - set are the driver nodes. By imposing properly chosen signals on the $N_{D}$ driver nodes of the MDS, we gain control over the whole system. Figure 1: (a) A directed network with five nodes. (b) The directed network in ($\bf a$) can be transformed into a bipartite network by splitting each node into two nodes of the bipartite network. The maximum matching is then performed on the bipartite network, leaving nodes $1^{-}$, $4^{-}$ and $5^{-}$ unmatched.
(c) One minimum driver node set (MDS) is obtained from the maximum matching in ($\bf b$). The whole network is controllable by controlling nodes 1, 4 and 5. (d) Node classification based on the likelihood of being a driver node. Node 1 is critical, node 2 is redundant and nodes 3, 4, 5 are intermittent. The number of driver nodes necessary and sufficient for control, $N_{D}$, is fixed for a given network. However, there are multiple choices of MDS with the same $N_{D}$ (Fig. 1a), giving rise to a multiplicity feature [12, 13]. Correspondingly, a node classification scheme has been proposed based on a node’s likelihood of being in the MDS. A node may appear in all MDSs: such a node is critical, because the network cannot be controlled without controlling it. A node may appear in no MDS: such a node is redundant, as it never requires an external input. The remaining nodes, which appear in some but not all MDSs, are intermittent. It has been found that a node is critical if and only if it has no incoming links [12]. Hence, the fraction of critical nodes in the network ($n_{c}$) is determined solely by the degree distribution: it equals the fraction of nodes with zero in-degree. The redundant nodes in a network can be identified by an algorithm with O($LN$) complexity. The intermittent nodes are therefore readily known once the critical and redundant nodes are identified. ### 2.2 Controllability change in growing networks One aim of this work is to answer the question of how the number of driver nodes $N_{D}$ changes when a new node is added to the network with new links pointing to or from the existing nodes. For simplicity, we separately analyze the two cases in which the new node has only incoming links and only outgoing links. Indeed, the correlation between a node’s in- and out-degree does not affect the overall controllability [49].
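The procedure of Sec. 2.1 (split each node $i$ into $i^{+}$/$i^{-}$, find a maximum matching, read off the unmatched $-$ nodes as drivers, and flag zero-in-degree nodes as critical) can be sketched with a simple augmenting-path (Kuhn) matching. The toy graphs in the usage below are our own examples, not the network of Fig. 1:

```python
def maximum_matching(succ, nodes):
    """Kuhn's augmenting-path matching on the bipartite graph in which
    each node i is split into i+ (left) and i- (right), and a directed
    link i -> j becomes an edge (i+, j-).  Returns {right: left}."""
    match = {}  # j- -> the i+ currently matched to it

    def augment(u, seen):
        for v in succ.get(u, ()):
            if v not in seen:
                seen.add(v)
                # v- is free, or its current partner can be rerouted
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    for u in nodes:
        augment(u, set())
    return match

def driver_nodes(succ, nodes):
    """The unmatched '-' nodes form one minimum driver node set (MDS)."""
    return {v for v in nodes if v not in maximum_matching(succ, nodes)}

def critical_nodes(succ, nodes):
    """A node is critical iff it has zero in-degree."""
    has_in = {v for targets in succ.values() for v in targets}
    return set(nodes) - has_in
```

For example, `driver_nodes({1: [2, 3], 2: [3]}, {1, 2, 3})` returns `{1}`: the matching saturates nodes $2^{-}$ and $3^{-}$, leaving $1^{-}$ unmatched.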
Therefore, the general case of adding a new node with both incoming and outgoing links can be treated as a process that adds one node with only outgoing links and one node with only incoming links, and then merges these two nodes into a single node. We first consider adding to the network a single node which has only outgoing links. Our conclusion is that if at least one new link connects to a non-redundant node (either a critical node or an intermittent node, denoted NR node for short) in the original network, the number of driver nodes stays the same. Otherwise, if all links connect to redundant nodes (denoted R nodes for short) in the original network, the number of driver nodes increases by 1. While we give the detailed proof in Appendix A, this conclusion can be intuitively understood as follows. A node without incoming links always requires an independent external signal to control it. Hence, when such a node is added to a network, the number of driver nodes will either increase by 1 or stay unchanged, depending on whether an existing driver node in the original network becomes a non-driver node after the addition. Since a redundant node can never become a driver node, linking to redundant nodes will not change the original number of external signals; in this case, $N_{D}$ increases by 1. In contrast, a critical node is always a driver node, and an intermittent node can become a driver node in some circumstances. Connecting to these two types of nodes can save one original signal, making $N_{D}$ stay the same. The situation in which a single node with only incoming links is added to a network can be analyzed in a similar way by introducing the transpose network, in which the direction of every link in the original network is reversed. The value of $N_{D}$ is the same in the original network and the transpose network.
Therefore, the question of how $N_{D}$ changes when adding a node with only incoming links is equivalent to the question of how $N_{D}$ changes when adding a node with only outgoing links in the transpose network. Correspondingly, we can first identify a node’s category in the transpose network, i.e. identify whether the node is redundant or non-redundant there. We can then apply the result above and conclude that if at least one new link comes from a non-redundant node of the transpose network (denoted NRT node for short), $N_{D}$ stays the same. Otherwise, if all links come from redundant nodes of the transpose network (denoted RT nodes for short), $N_{D}$ increases by 1. Figure 2: (a) A network with five nodes that can be controlled via three driver nodes ($N_{D}$=3). The category of each node, i.e. R or NR in the original network, and RT or NRT in the transpose network, is labeled for each of the five nodes. When a new node with one outgoing link and one incoming link is added to the network, (b) $N_{D}$ increases by 1 when the out-link connects to an R node and the in-link comes from an RT node. (c) $N_{D}$ stays the same when the out-link connects to an NR node and the in-link comes from an RT node. (d) $N_{D}$ decreases by 1 when the out-link connects to an NR node and the in-link comes from an NRT node. To summarize, when a node with only outgoing links is added, there are two possibilities for the change of $N_{D}$: (1) $N_{D}$ increases by 1 if all links connect to R nodes; (2) $N_{D}$ stays the same if at least one link connects to an NR node. Likewise, when a node with only incoming links is added, there are also two possibilities: (1) $N_{D}$ increases by 1 if all links come from RT nodes; (2) $N_{D}$ stays the same if at least one link comes from an NRT node.
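The case analysis above, together with the combined behavior illustrated in Fig. 2, can be collected into a single helper. This is our own sketch; it assumes the R/NR and RT/NRT labels have already been computed:

```python
def delta_nd(out_targets=(), in_sources=(), cat=None, cat_t=None):
    """Change in N_D when a new node is added.

    cat[i]   in {"R", "NR"}:   category of node i in the original network
    cat_t[i] in {"RT", "NRT"}: category of node i in the transpose network
    out_targets: existing nodes the new node points to
    in_sources:  existing nodes pointing to the new node
    """
    saves_out = any(cat[t] == "NR" for t in out_targets)
    saves_in = any(cat_t[s] == "NRT" for s in in_sources)
    if not out_targets and not in_sources:
        return 1            # an isolated node is always a new driver
    if not out_targets:     # incoming links only
        return 0 if saves_in else 1
    if not in_sources:      # outgoing links only
        return 0 if saves_out else 1
    # both link types: sum the two single-direction cases, then merging
    # the two auxiliary nodes removes one driver
    return (0 if saves_out else 1) + (0 if saves_in else 1) - 1
```

With both link types present, the three outcomes of Fig. 2 follow: R plus RT gives +1, one saving side gives 0, and NR plus NRT gives -1.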
As mentioned above, the general case of adding a new node with both incoming and outgoing links can be treated as a process that adds one node with only outgoing links and one node with only incoming links, and then merges these two nodes into a single node. It is noteworthy that $N_{D}$ decreases by 1 during this merging process (see Appendix B). Taken together, we obtain the following general conclusion on the change of controllability: * - Identify the category of every existing node, i.e. R or NR in the original network, and RT or NRT in the transpose network (Fig. 2a). * - $N_{D}$ increases by 1 if all out-links connect to R nodes and all in-links come from RT nodes (Fig. 2b). * - $N_{D}$ stays the same if all out-links connect to R nodes and at least one in-link comes from an NRT node, or if at least one out-link connects to an NR node and all in-links come from RT nodes (Fig. 2c). * - $N_{D}$ decreases by 1 if at least one out-link connects to an NR node and at least one in-link comes from an NRT node (Fig. 2d). ### 2.3 A case study of the network augmentation problem A recent study raises an interesting problem of network augmentation: what is the maximum number of nodes that can be added to a network while keeping $N_{D}$ unchanged [43]? The problem is subject to several constraints that exclude trivial solutions. First, each new node has only one outgoing link. The one-link constraint slightly simplifies the problem, but it also means that the new nodes are not able to form a cycle. Second, the new nodes are not allowed to connect to critical nodes, i.e. nodes of the network with zero in-degree. This excludes the trivial solution in which new nodes connect one after another to form a directed path. Finally, the MDS used to control the original network is recorded, and the new nodes are not allowed to connect to any node in this original MDS.
Indeed, one trivial solution of this problem is to connect each new node to a node in the MDS to keep $N_{D}$ unchanged. The third constraint excludes this trivial solution, and it significantly increases the difficulty of the problem, as explained below. The problem itself has several implications for real systems [43, 50], though these are not the focus of our work. We are interested in identifying the maximum number of nodes $N_{a}$ that can be added to the network in this problem. One intuitive answer is $N_{a}=N_{D}-N_{c}$, for the following reason. Because a new node has zero in-degree, it must always be directly controlled and becomes a driver node once added. To keep $N_{D}$ the same, we can therefore add at most $N_{D}$ such new nodes. Because nodes with zero in-degree are not allowed to be connected to, the number of critical nodes in the original network should be deducted, which yields the answer $N_{a}=N_{D}-N_{c}$. Figure 3: An example of how the original MDS can affect $N_{a}$. (a) Nodes 1, 3, 4, 5 form the MDS. After adding a new node s connecting to node 6, node 2 remains an NR node, allowing an extra node to be added. In this case, $N_{a}=2$. (b) Nodes 1, 2, 3, 5 form the MDS. After adding a new node s connecting to node 6, there is no NR node that is not included in the original MDS. In this case, $N_{a}=1$. However, this answer does not properly account for the third constraint. Because a new node cannot connect to nodes in the original MDS, the way the original network is controlled affects $N_{a}$. Indeed, our conclusion on controllability change shows that a new node has to connect to an NR node to keep $N_{D}$ the same. But if that NR node is also in the MDS, it is not allowed to be connected to. Fig. 3 shows a good example of how the original MDS affects $N_{a}$. Hence, $N_{D}-N_{c}$ is an upper bound on $N_{a}$, but in many cases $N_{a}$ is strictly less than $N_{D}-N_{c}$.
The exact value of $N_{a}$ turns out to be highly non-trivial to compute, requiring the solution of an integer program. But based on the principle identified above, we can use a greedy algorithm to find a local maximum, denoted $N^{o}_{a}$, which represents a lower bound on $N_{a}$. The idea is to identify an NR node which is not in the MDS and connect the new node to it. The algorithm (see Appendix C) takes O($NL$) time to identify a set of $N^{o}_{a}$ nodes for a given choice of the MDS used to control the original network. We find that $N^{o}_{a}$ identified using our algorithm can be considerably smaller than $N_{D}-N_{c}$. This difference varies non-monotonically with the average degree of the network $\langle k\rangle$, peaking at an intermediate value of $\langle k\rangle$ (Fig. 4a). Figure 4: (a) The upper bound ($N_{D}-N_{c}$) and the lower bound ($N^{o}_{a}$) of $N_{a}$ for an ER network with $N=$10000 and varying $\langle k\rangle$. $N^{o}_{a}$ is obtained from one realization of the MDS. Both the upper and lower bound vary non-monotonically with $\langle k\rangle$. The difference between the upper and lower bound, $\Delta$ (inset), follows a similar trend to those of $N_{D}-N_{c}$ and $N^{o}_{a}$. (b) The average, maximal and minimal value of $N^{o}_{a}$ based on 100 randomly generated MDSs in an ER network with $N=$10000 and varying $\langle k\rangle$. $N^{o}_{a}$ depends on the particular choice of MDS, and there are multiple MDSs for a given network. To take this multiplicity feature into account, we apply the random sampling method [13] to generate a collection of random MDSs, each of which gives rise to an $N^{o}_{a}$ value. We then calculate the mean, maximal and minimal value of $N^{o}_{a}$ over the collection of MDSs (Fig. 4b). In general, the mean and maximal values of $N^{o}_{a}$ are very close: statistically, $N^{o}_{a}$ is not significantly affected by the multiple choices of MDS.
But there exist rare cases in which the $N^{o}_{a}$ value is much less than its mean. Figure 5: The relationship between the degree correlation and $N_{a}$ in an ER network with $N=10000$ and $\langle k\rangle=4$. The four types of degree correlation are denoted by $r^{in-in},r^{in-out},r^{out-out},r^{out-in}$. The upper bound ($N_{D}-N_{c}$), the lower bound ($N^{o}_{a}$), and the difference between the two ($\Delta$, in the inset) change very similarly. Finally, we analyze the effect of degree correlation on $N_{a}$. In most real systems, connections between nodes are not neutral [51, 52, 53]: nodes have a certain tendency to connect to nodes with similar or different degrees. Such a tendency, or degree correlation, is usually quantified by the Pearson correlation coefficient between the degrees of two nodes connected by a single link [53, 51]. In directed networks, a node is characterized by both an in- and an out-degree. Hence, there are four different quantifications of degree correlation [54, 49]. More specifically, for a directed link that starts at node _s_ and ends at node _t_, the degree correlation r is given by: $r^{(\alpha,\beta)}=\frac{L^{-1}\sum_{i}\alpha_{i}^{s}\beta_{i}^{t}-[L^{-1}\sum_{i}\frac{1}{2}(\alpha_{i}^{s}+\beta_{i}^{t})]^{2}}{L^{-1}\sum_{i}\frac{1}{2}[(\alpha_{i}^{s})^{2}+(\beta_{i}^{t})^{2}]-[L^{-1}\sum_{i}\frac{1}{2}(\alpha_{i}^{s}+\beta_{i}^{t})]^{2}},$ (1) where $L$ is the total number of links in the network and $\alpha$, $\beta$ $\in\\{in,out\\}$ correspond to the two types of degree. The four types of degree correlation are hence denoted by $r^{in-in},r^{in-out},r^{out-out},r^{out-in}$. Since $N_{c}$ does not change with degree correlation but depends only on the fraction of nodes with zero in-degree ($P_{\text{in}}(0)$), the upper bound on $N_{a}$, which is $N_{D}-N_{c}$, changes with $N_{D}$ alone. It does not change with $r^{in-out}$, increases with the absolute value of $r^{in-in}$ and $r^{out-out}$, and monotonically decreases with $r^{out-in}$ [49] (Fig. 5).
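Eq. (1) can be computed directly from an edge list. A sketch (our own, on a toy graph):

```python
def degree_correlation(edges, alpha, beta):
    """Directed degree correlation r^(alpha, beta) of Eq. (1).

    edges: list of (s, t) directed links
    alpha: degree type ("in" or "out") evaluated at the source node s
    beta:  degree type ("in" or "out") evaluated at the target node t
    """
    n_in, n_out = {}, {}
    for s, t in edges:
        n_out[s] = n_out.get(s, 0) + 1
        n_in[t] = n_in.get(t, 0) + 1
    deg = {"in": n_in, "out": n_out}
    L = len(edges)
    a = [deg[alpha].get(s, 0) for s, t in edges]
    b = [deg[beta].get(t, 0) for s, t in edges]
    # pooled mean over both ends, as in the bracketed term of Eq. (1)
    mean = sum(x + y for x, y in zip(a, b)) / (2 * L)
    cov = sum(x * y for x, y in zip(a, b)) / L - mean ** 2
    var = sum(x * x + y * y for x, y in zip(a, b)) / (2 * L) - mean ** 2
    return cov / var
```

For the toy graph with links 1->2, 1->3, 2->3, 3->1, this gives $r^{out-out}=-0.6$ and $r^{out-in}=0$.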
The lower bound $N^{o}_{a}$, identified using our method, follows a trend similar to that of $N_{D}$ in all cases. Furthermore, the difference $\Delta$ between the upper and lower bound also shows a trend similar to those of $N_{a}$ and $N^{o}_{a}$. ## 3 Discussion In summary, we study the change of network controllability in growing networks. We introduce two sets of node categories: R and NR, and RT and NRT. We find that the number of driver nodes $N_{D}$ can increase by 1, stay the same, or decrease by 1 when a new node is added. The change depends on the categories of the nodes (R or NR) that the outgoing links connect to and the categories of the nodes (RT or NRT) that the incoming links come from. This principle on the change of controllability helps us solve a recently proposed problem of network augmentation: the maximum number of nodes $N_{a}$ that can be added to a network while keeping $N_{D}$ unchanged. We propose an algorithm that efficiently finds a lower bound on $N_{a}$, and we demonstrate how the upper and lower bounds change with the average degree and the degree correlation of the network. The results presented have many potential applications in future work [55]. Network expansion or augmentation, i.e. adding nodes or edges to an existing network, is a ubiquitous feature of our rapidly growing technological society. Generally, as a network grows, more nodes are required to achieve full control, and the cost of controlling the network also increases. Our approach can offer insights for future work exploring the augmentation of nodes in control, and it offers fundamental tools to explore control in temporal complex systems. ## Acknowledgement This work is supported by the Natural Science Foundation of China (No. 6160309). M. C. is also supported by the Natural Science Foundation of Jiangsu Province (No. BK20150344) and the China Postdoctoral Science Foundation (No. 2016M601885).
## Appendix A Our conclusion states that if at least one new link connects to a non-redundant node (either a critical node or an intermittent node, denoted NR node for short) in the original network, the number of driver nodes stays the same; otherwise, if all links connect to redundant nodes (denoted R nodes for short) in the original network, the number of driver nodes increases by 1. The proof of this conclusion is best described in terms of the bipartite graph. Therefore, we change the terminology from the “driver node in a directed network” to the “matched or unmatched node in the - set of the bipartite graph”. In particular, we use the following equivalences:

number of driver nodes $=$ number of nodes in a set (either $+$ or $-$) $-$ number of matched pairs (i.e. number of matched nodes in a set);

redundant node $=$ always-matched node in the - set of the bipartite graph, i.e. a node that is matched in every maximum matching configuration;

non-redundant node $=$ node in the - set that is not always matched. This includes nodes that are unmatched in the current maximum matching configuration, and nodes that are currently matched but can be unmatched in a different maximum matching configuration.

Now let us consider the case in which a new node $s$ with one outgoing link is added to a directed network (Fig. 6a). In the bipartite graph representation, this adds a node $s^{-}$ with zero links and a node $s^{+}$ with one link. Assuming the node that $s^{+}$ connects to is $t^{-}$, there are 3 different situations: 1. Node $t^{-}$ is unmatched. Then a new maximum matching is achieved by matching node $s^{+}$ with node $t^{-}$ (Fig. 6b). 2. Node $t^{-}$ is matched in the current maximum matching configuration, but it is not always matched. Because node $t^{-}$ is not always matched, there are configurations in which node $t^{-}$ is unmatched but which yield the same number of matched pairs.
Then we can always change the matching configuration so that node $t^{-}$ is unmatched, and then match node $s^{+}$ to node $t^{-}$. In the terminology of the Hopcroft-Karp algorithm, which is used to find a maximum matching in a bipartite graph [46], this means that there exists an augmenting path that passes through node $s^{+}$ and node $t^{-}$.

3. Node $t^{-}$ is matched and is always matched. In this case, there is no maximum matching configuration with $t^{-}$ unmatched. Hence, node $s^{+}$ and node $t^{-}$ cannot be matched. In other words, there is no augmenting path that passes through node $s^{+}$ and node $t^{-}$.

Figure 6: (a) A new node $s$ with one out-going link is added to a directed network with five nodes. (b) The corresponding bipartite network with a maximum matching obtained. Nodes $1^{-}$, $4^{-}$ and $5^{-}$ are unmatched in the original network. (c) Node $1^{-}$ is unmatched. If node $s^{+}$ connects to node $1^{-}$, a new maximum matching with three matched pairs is achieved. (d) Node $3^{-}$ is currently matched, but it is not always matched. Consequently, there is a different matching configuration that preserves the same number of matched pairs while leaving node $3^{-}$ unmatched. Therefore, when node $s^{+}$ connects to node $3^{-}$, a new maximum matching with three matched pairs can be achieved.

In cases 1 and 2 (connecting to a node that is not always matched), the size of the maximum matching increases by 1, which offsets the increase in the total number of nodes. Consequently, the number of driver nodes stays the same. In case 3 (connecting to an always-matched node), the size of the maximum matching stays the same but the number of nodes increases by 1. Hence, the number of driver nodes increases by 1.

## Appendix B

Assume that node $s_{1}$ has only in-coming links and node $s_{2}$ has only out-going links. In the corresponding bipartite network, node $s_{1}^{+}$ and node $s_{2}^{-}$ have zero degree.
Now consider the case where node $s_{1}$ and node $s_{2}$ are merged to form a node $s$ with both in-coming and out-going links. In the bipartite network, this corresponds to merging node $s_{1}^{+}$ with node $s_{2}^{+}$, and node $s_{1}^{-}$ with node $s_{2}^{-}$. Note that node $s_{1}^{+}$ and node $s_{2}^{-}$ cannot be matched in the original bipartite network. Therefore, the merging does not change the number of matched pairs, but the number of nodes in each set is reduced by 1. Hence, $N_{D}$ decreases by 1 in this merging process, as $N_{D}=$ number of nodes in a set $-$ number of matched pairs (see Appendix A).

## Appendix C

When adding a node with one out-going link, $N_{D}$ stays the same if the new node connects to an NR node. To satisfy the other constraints, the connected node should not belong to the original MDS (the nodes with zero in-degree in the original network are also in the MDS). The MDS corresponds to a set of $-$ nodes that are not matched in the bipartite network. Therefore, the connected node should be matched but not always matched. Correspondingly, the algorithm is as follows:

1. Given the original MDS, identify the set $M$ of matched nodes in the $-$ set of the bipartite network. Build the matching configuration corresponding to this set of matched nodes.
2. Pick an element of $M$ (denoted node $i^{-}$).
3. Check whether node $i^{-}$ is always matched by un-matching it and checking whether an alternative matching configuration exists that preserves the number of matched pairs. This is equivalent to checking whether an augmenting path exists starting from the node that matches node $i^{-}$.
4. If node $i^{-}$ is always matched, repeat step 2. Otherwise, node $i$ is an NR node and the new node should connect to it.
5. Update the matching configuration after adding the new node. Repeat step 2 to identify another NR node to connect.

Note that the maximum matching will change when the new node is added.
An NR node can become an R node. Therefore, we need to test the nodes one by one. The complexity of finding one maximum matching is O($N^{0.5}L$). Testing whether a node in $M$ is not always matched requires a breadth-first search (complexity O($L$)). The complexity of updating the maximum matching after adding a new node is O($L$). The number of nodes in $M$ is proportional to $N$. Therefore, the complexity of finding $N_{a}^{o}$ is O($NL$).

## References

* [1] R. E. Kalman, Mathematical description of linear dynamical systems, Journal of the Society for Industrial and Applied Mathematics, Series A: Control 1 (2) (1963) 152–192.
* [2] D. G. Luenberger, Introduction to dynamic systems: theory, models, and applications, Tech. rep. (1979).
* [3] C. K. Chui, G. Chen, Linear systems and optimal control, Vol. 18, Springer Science & Business Media, 2012.
* [4] Y.-Y. Liu, J.-J. Slotine, A.-L. Barabási, Controllability of complex networks, Nature 473 (7346) (2011) 167.
* [5] G. Yan, J. Ren, Y.-C. Lai, C.-H. Lai, B. Li, Controlling complex networks: How much energy is needed?, Phys. Rev. Lett. 108 (21) (2012) 218703.
* [6] G. Yan, G. Tsekenis, B. Barzel, J. J. Slotine, Y. Y. Liu, A.-L. Barabási, Spectrum of controlling and observing complex networks, Nat. Phys. 11 (9).
* [7] Y.-Z. Sun, S.-Y. Leng, Y.-C. Lai, C. Grebogi, W. Lin, Closed-loop control of complex networks: A trade-off between time and energy, Phys. Rev. Lett. 119 (19) (2017) 198301.
* [8] S.-M. Chen, Y.-F. Xu, S. Nie, Robustness of network controllability in cascading failure, Physica A 471 (2017) 536–539.
* [9] J. Ding, P. Tan, Y.-Z. Lu, Optimizing the controllability index of directed networks with the fixed number of control nodes, Neurocomputing 171 (2016) 1524–1532.
* [10] L.-Z. Wang, Y.-Z. Chen, W.-X. Wang, Y.-C. Lai, Physical controllability of complex networks, Sci. Rep. 7 (2017) 40198.
* [11] S. Nie, X. Wang, H. Zhang, Q. Li, B.
Wang, Robustness of controllability for networks based on edge-attack, PLoS One 9 (2) (2014) e89066. * [12] T. Jia, Y.-Y. Liu, E. Csóka, M. Pósfai, J.-J. Slotine, A.-L. Barabási, Emergence of bimodality in controlling complex networks, Nat. Commun. 4 (2013) 2002. * [13] T. Jia, A.-L. Barabási, Control capacity and a random sampling method in exploring controllability of complex networks, Sci. Rep. 3. * [14] T. Jia, M. Pósfai, Connecting core percolation and controllability of complex networks, Sci. Rep. 4 (2014) 5379. * [15] X. Zhang, J. Han, W. Zhang, An efficient algorithm for finding all possible input nodes for controlling complex networks, Sci. Rep. 7 (1) (2017) 10677. * [16] M. Pósfai, P. Hövel, Structural controllability of temporal networks, New J. Phys. 16 (12) (2014) 123055. * [17] M. Pósfai, J. Gao, S. P. Cornelius, A.-L. Barabási, R. M. D’Souza, Controllability of multiplex, multi-time-scale networks, Phys. Rev. E 94 (3) (2016) 032316. * [18] Y. Pan, X. Li, Towards a graphic tool of structural controllability of temporal networks, in: Circuits and Systems (ISCAS), 2014 IEEE International Symposium on, IEEE, 2014, pp. 1784–1787. * [19] G. Menichetti, L. Dall’Asta, G. Bianconi, Control of multilayer networks, Sci. Rep. 6 (2016) 20706. * [20] J. Gao, Y.-Y. Liu, R. M. D’souza, A.-L. Barabási, Target control of complex networks, Nat. Commun. 5 (2014) 5415. * [21] Y. Tang, F. Qian, H. Gao, J. Kurths, Synchronization in complex networks and its application–a survey of recent advances and challenges, Annu. Rev. Control 38 (2) (2014) 184–198. * [22] C. Zhao, W.-X. Wang, Y.-Y. Liu, J.-J. Slotine, Intrinsic dynamics induce global symmetry in network controllability, Sci. Rep. 5 (2015) 8422. * [23] P. Wang, S. Xu, Spectral coarse grained controllability of complex networks, Physica A 478 (2017) 168–176. * [24] A. Vinayagam, T. E. Gibson, H.-J. Lee, B. Yilmazel, C. Roesel, Y. Hu, Y. Kwon, A. Sharma, Y.-Y. Liu, N. 
Perrimon, et al., Controllability analysis of the directed human protein interaction network identifies disease genes and drug targets, Proc. Natl. Acad. Sci. 113 (18) (2016) 4976–4981. * [25] X.-F. Zhang, L. Ou-Yang, Y. Zhu, M.-Y. Wu, D.-Q. Dai, Determining minimum set of driver nodes in protein-protein interaction networks, BMC Bioinf. 16 (1) (2015) 146. * [26] X. Liu, L. Pan, Detection of driver metabolites in the human liver metabolic network using structural controllability analysis, BMC Syst. Biol. 8 (1) (2014) 51. * [27] B. Wang, L. Gao, Y. Gao, Y. Deng, Y. Wang, Controllability and observability analysis for vertex domination centrality in directed networks, Sci. Rep. 4 (2014) 5399. * [28] V. Ravindran, V. Sunitha, G. Bagler, Controllability of human cancer signaling network, in: Signal Processing and Communication (ICSC), 2016 International Conference on, IEEE, 2016, pp. 363–367. * [29] V. Ravindran, V. Sunitha, G. Bagler, Identification of critical regulatory genes in cancer signaling network using controllability analysis, Physica A 474 (2017) 134–143. * [30] J. Li, L. Dueñas-Osorio, C. Chen, B. Berryhill, A. Yazdani, Characterizing the topological and controllability features of us power transmission networks, Physica A 453 (2016) 84–98. * [31] P. Wang, D. Wang, J. Lu, Controllability analysis of a gene network for arabidopsis thaliana reveals characteristics of functional gene families, IEEE/ACM Trans. Comput. Biol. Bioinform. * [32] T. Nepusz, T. Vicsek, Controlling edge dynamics in complex networks, Nat. Phys. 8 (7) (2012) 568. * [33] S.-P. Pang, W.-X. Wang, F. Hao, Y.-C. Lai, Universal framework for edge controllability of complex networks, Sci. Rep. 7 (1) (2017) 4224. * [34] Z. Yuan, C. Zhao, Z. Di, W.-X. Wang, Y.-C. Lai, Exact controllability of complex networks, Nat. Commun. 4. * [35] X.-D. Gao, Z. Shen, W.-X. Wang, Emergence of complexity in controlling simple regular networks, EPL 114 (6) (2016) 68002. * [36] J. C. Jarczyk, F. Svaricek, B. 
Alt, Strong structural controllability of linear systems revisited, in: Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, IEEE, 2011, pp. 1213–1218.
* [37] J. C. Nacher, T. Akutsu, Minimum dominating set-based methods for analyzing biological networks, Methods 102 (2016) 57–63.
* [38] M. Ishitsuka, T. Akutsu, J. C. Nacher, Critical controllability in proteome-wide protein interaction network integrating transcriptome, Sci. Rep. 6 (2016) 23541.
* [39] F. Molnár Jr, N. Derzsy, É. Czabarka, L. Székely, B. K. Szymanski, G. Korniss, Dominating scale-free networks using generalized probabilistic methods, Sci. Rep. 4 (2014) 6308.
* [40] Y.-D. Xiao, S.-Y. Lao, L.-L. Hou, L. Bai, Edge orientation for optimizing controllability of complex networks, Phys. Rev. E 90 (4) (2014) 042804.
* [41] L. Hou, S. Lao, M. Small, Y. Xiao, Enhancing complex network controllability by minimum link direction reversal, Phys. Lett. A 379 (20) (2015) 1321–1325.
* [42] W.-X. Wang, X. Ni, Y.-C. Lai, C. Grebogi, Optimizing controllability of complex networks by minimum structural perturbations, Phys. Rev. E 85 (2) (2012) 026115.
* [43] J. Wang, X. Yu, L. Stone, Effective augmentation of complex networks, Sci. Rep. 6 (2016) 25627.
* [44] D. Thalmeier, V. Gómez, H. J. Kappen, Action selection in growing state spaces: Control of network structure growth, J. Phys. A: Math. Theor. 50 (3) (2016) 034006.
* [45] A.-L. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (5439) (1999) 509–512.
* [46] J. E. Hopcroft, R. M. Karp, An $n^{5/2}$ algorithm for maximum matching in bipartite graphs 2 (4) (1971) 122–125.
* [47] L. Zdeborová, M. Mézard, The number of matchings in random graphs, Journal of Statistical Mechanics: Theory and Experiment 2006 (5) (2006) 2006.
* [48] T. Jia, R. F. Spivey, B. Szymanski, G. Korniss, An analysis of the matching hypothesis in networks, PLoS One 10 (6) (2015) e0129804.
* [49] M. Pósfai, Y.-Y. Liu, J.-J.
Slotine, A.-L. Barabási, Effect of correlations on network controllability, Sci. Rep. 3.
* [50] M. Jalili, Effective augmentation of networked systems and enhancing pinning controllability, Physica A 500 (2018) 155–161.
* [51] M. E. Newman, Mixing patterns in networks, Phys. Rev. E 67 (2) (2003) 026126.
* [52] R. Pastor-Satorras, A. Vázquez, A. Vespignani, Dynamical and correlation properties of the internet, Phys. Rev. Lett. 87 (25) (2001) 258701.
* [53] M. E. Newman, Assortative mixing in networks, Phys. Rev. Lett. 89 (20) (2002) 208701.
* [54] J. G. Foster, D. V. Foster, P. Grassberger, M. Paczuski, Edge direction and the structure of networks, Proc. Natl. Acad. Sci. U.S.A. 107 (24) (2010) 10815.
* [55] L.-Z. Wang, R.-Q. Su, Z.-G. Huang, X. Wang, W.-X. Wang, C. Grebogi, Y.-C. Lai, A geometrical approach to control and controllability of nonlinear dynamical networks, Nat. Commun. 7 (2016) 11323.
# Few-Shot Semantic Parsing for New Predicates

Zhuang Li, Lizhen Qu, Shuo Huang, Gholamreza Haffari
Faculty of Information Technology, Monash University
<EMAIL_ADDRESS> <EMAIL_ADDRESS>
corresponding author

###### Abstract

In this work, we investigate the problem of semantic parsing in a few-shot learning setting. In this setting, we are provided with $k$ utterance-logical form pairs per new predicate. The state-of-the-art neural semantic parsers achieve less than 25% accuracy on benchmark datasets when $k=1$. To tackle this problem, we propose to i) apply a designated meta-learning method to train the model; ii) regularize attention scores with alignment statistics; iii) apply a smoothing technique in pre-training. As a result, our method consistently outperforms all the baselines in both one- and two-shot settings.

## 1 Introduction

Semantic parsing is the task of mapping natural language (NL) utterances to structured meaning representations, such as logical forms (LFs). One key obstacle preventing the wide application of semantic parsing is the lack of task-specific training data. New tasks often require new predicates in LFs. Suppose a personal assistant (e.g. Alexa) is capable of booking flights. Due to a new business requirement, it needs to book ground transport as well. A user could ask the assistant "How much does it cost to go from Atlanta downtown to the airport?". The corresponding LF is as follows:

_(lambda $0 e (exists $1 (and (ground_transport $1) (to_city $1 atlanta:ci) (from_airport $1 atlanta:ci) (= (ground_fare $1) $0))))_

where both ground_transport and ground_fare are new predicates, while the other predicates, such as to_city and from_airport, are already used in flight booking. As manual construction of large parallel training data is expensive and time-consuming, we consider the few-shot formulation of the problem, which requires only a handful of utterance-LF training pairs for each new predicate.
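As a toy illustration of the setting (our own sketch, not from the paper), the new predicates in the example LF can be identified by comparing its predicate symbols against a known flight-booking vocabulary; the vocabulary split below is an assumption:

```python
# Flag which predicates in the example LF are new relative to an assumed
# flight-booking vocabulary. Atoms (atlanta:ci) and variables ($0, $1)
# are not predicates and are skipped by the pattern.
import re

LF = ("(lambda $0 e (exists $1 (and (ground_transport $1) "
      "(to_city $1 atlanta:ci) (from_airport $1 atlanta:ci) "
      "(= (ground_fare $1) $0))))")

KNOWN = {"lambda", "exists", "and", "=", "to_city", "from_airport"}

def predicates(lf):
    # A predicate is the first token after an opening parenthesis.
    return set(re.findall(r"\(\s*([a-z_=]+)", lf))

new = predicates(LF) - KNOWN
print(sorted(new))  # ['ground_fare', 'ground_transport']
```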
The cost of preparing few-shot training examples is low, so the corresponding techniques permit significantly faster prototyping and development than supervised approaches for business expansion.

Semantic parsing in the few-shot setting is challenging. In our experiments, the accuracy of the state-of-the-art (SOTA) semantic parsers drops to less than 25% when there is only one example per new predicate in the training data. Moreover, the SOTA parsers achieve less than 32% accuracy on five widely used corpora when the LFs in the test sets do not share LF _templates_ with the training sets Finegan-Dollak et al. (2018). An LF template is derived by _normalizing_ the entities and attribute values of an LF into typed variable names Finegan-Dollak et al. (2018).

The few-shot setting imposes two major challenges for SOTA neural semantic parsers. First, there is insufficient data to learn effective representations for new predicates in a supervised manner. Second, new predicates bring in new LF templates, which are mixtures of known and new predicates. In contrast, the tasks (e.g. image classification) studied by prior work on few-shot learning Snell et al. (2017); Finn et al. (2017) consider an instance as belonging exclusively to either a known class or a new class. Thus, it is non-trivial to apply conventional few-shot learning algorithms to generate LFs with mixed types of predicates.

To address the above challenges, we present ProtoParser, a transition-based neural semantic parser, which applies a sequence of parse actions to transduce an utterance into an LF template and then fills the corresponding slots. The parser is pre-trained on a training set with known predicates, followed by fine-tuning on a support set that contains few-shot examples of new predicates. It extends the attention-based sequence-to-sequence architecture Sutskever et al. (2014) with the following novel techniques to alleviate the specific problems in the few-shot setting:

* • Predicate-dropout.
Predicate-dropout is a meta-learning technique to improve representation learning for both known and new predicates. We empirically found that known predicates are better represented with embeddings learned in a supervised manner, while new predicates are better initialized by a metric-based few-shot learning algorithm Snell et al. (2017). In order to let the two types of embeddings work together in a single model, we devised a training procedure called predicate-dropout to simulate the testing scenario during pre-training.

* • Attention regularization. In this work, new predicates appear approximately once or twice during training. Thus, there is insufficient signal to learn reliable attention scores in the Seq2Seq architecture for those predicates. In the spirit of supervised attention Liu et al. (2016), we propose to regularize them with alignment scores estimated using co-occurrence statistics and string similarity between words and predicates. The prior work on supervised attention is not applicable, because it requires either large parallel data Liu et al. (2016) or significant manual effort Bao et al. (2018); Rabinovich et al. (2017), or it is designed only for applications other than semantic parsing Liu et al. (2017); Kamigaito et al. (2017).

* • Pre-training smoothing. The vocabulary of predicates in fine-tuning is larger than that in pre-training, which leads to a distribution discrepancy between the two training stages. Inspired by Laplace smoothing Manning et al. (2008), we achieve a significant performance gain by applying a smoothing technique during pre-training to alleviate the discrepancy.

Our extensive experiments on three benchmark corpora show that ProtoParser outperforms the competitive baselines by a significant margin. The ablation study demonstrates the effectiveness of each individual proposed technique. The results are statistically significant with p$\leq$0.05 according to the Wilcoxon signed-rank test Wilcoxon (1992).
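The alignment statistic behind the attention-regularization technique above can be sketched numerically. This is our own toy version: it combines a co-occurrence probability with a normalized-Levenshtein character similarity, but fixes the mixing weight to a constant (the paper learns it with a sigmoid gate).

```python
# Sketch of an alignment score between a predicate-producing action and an
# utterance token: w * P(a|x) + (1-w) * char_sim(a, x). The fixed w = 0.5
# is a simplifying assumption; example strings are from the paper's LF.

def levenshtein(a, b):
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def char_sim(a, b):
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

def alignment(pred, token, cooc_prob, w=0.5):
    return w * cooc_prob + (1 - w) * char_sim(pred, token)

# "fare" is closer in characters to "ground_fare" than "from" is, so with
# equal co-occurrence statistics it gets the higher alignment score.
print(round(char_sim("ground_fare", "fare"), 2))  # 0.36
print(alignment("ground_fare", "fare", 0.2) > alignment("ground_fare", "from", 0.2))  # True
```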
## 2 Related Work

#### Semantic parsing

There is ample work on machine learning models for semantic parsing. The recent surveys Kamath and Das (2018); Zhu et al. (2019) cover a wide range of work in this area. The semantic formalisms of meaning representations range from lambda calculus Montague (1973) and SQL to abstract meaning representation Banarescu et al. (2013). At the core of most recent models Chen et al. (2018); Cheng et al. (2019); Lin et al. (2019); Zhang et al. (2019b); Yin and Neubig (2018) is Seq2Seq with attention Bahdanau et al. (2014), which formulates the task as a machine translation problem. Coarse2Fine Dong and Lapata (2018) reports the highest accuracy on GeoQuery Zelle and Mooney (1996) and Atis Price (1990) in a supervised setting. IRNet Guo et al. (2019) and RATSQL Wang et al. (2019) are the two best-performing models on the Text-to-SQL benchmark, Spider Yu et al. (2018). They are also designed to generalize to unseen database schemas. However, supervised models perform well only when there is sufficient training data.

#### Data Sparsity

Most semantic parsing datasets are small in size. To address this issue, one line of research augments existing datasets with automatically generated data Su and Yan (2017); Jia and Liang (2016); Cai and Yates (2013). Another line of research exploits available resources, such as knowledge bases Krishnamurthy et al. (2017); Herzig and Berant (2018); Chang et al. (2019); Lee (2019); Zhang et al. (2019a); Guo et al. (2019); Wang et al. (2019), semantic features in different domains Dadashkarimi et al. (2018); Li et al. (2020), or unlabeled data Yin et al. (2018); Kočiskỳ et al. (2016); Sun et al. (2019). Those works are orthogonal to our setting because our approach aims to efficiently exploit a handful of labeled examples of new predicates, which are not limited to the ones in knowledge bases. Our setting also does not require humans in the loop, such as active learning Duong et al.
(2018); Ni et al. (2019) and crowd-sourcing Wang et al. (2015); Herzig and Berant (2019). We assume the availability of resources different from those of prior work and focus on the problems caused by new predicates. We develop an approach that generalizes to unseen LF templates consisting of both known and new predicates.

#### Few-Shot Learning

Few-shot learning is a type of machine learning problem in which only a handful of labeled training examples is provided for a specific task. The survey Zhu et al. (2019) gives a comprehensive overview of the data, models, and algorithms proposed for this type of problem. It categorizes the models into multitask learning Hu et al. (2018), embedding learning Snell et al. (2017); Vinyals et al. (2016), learning with external memory Lee and Choi (2018); Sukhbaatar et al. (2015), and generative modeling Reed et al. (2017), in terms of what prior knowledge is used. Lee et al. (2019) tackle the problem of poor generalization across SQL templates for SQL query generation in the one-shot learning setting. In their setting, they assume all the SQL templates in the test set are shared with the templates in the support set. In contrast, we assume only the sharing of new predicates between a support set and a test set. In our one-shot setting, only around 10% of the LF templates in the test set are shared with those in the support set of the GeoQuery dataset.

## 3 Semantic Parser

ProtoParser follows the SOTA neural semantic parsers Dong and Lapata (2018); Guo et al. (2019) in mapping an utterance into an LF in two steps: template generation and slot filling (code and datasets can be found in this repository: https://github.com/zhuang-li/few-shot-semantic-parsing). It implements a designated transition system to generate templates, followed by filling the slot variables with values extracted from utterances. To address the challenges in the few-shot setting, we propose three training methods, detailed in Sec. 4.
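The template normalization step (replacing atoms with typed slot variables) can be sketched as follows. This is our own regex-based toy, assuming the $v_{a}$/$v_{e}$ conventions of the example in Sec. 1; the real system handles more atom types.

```python
# Toy LF-template normalization: lambda variables ($0, $1, ...) collapse to
# a shared name v_a, and typed entities such as "atlanta:ci" become v_e.
import re

def to_template(lf):
    lf = re.sub(r"\$\d+", "v_a", lf)          # share all variable names
    lf = re.sub(r"\b\w+:\w+\b", "v_e", lf)    # abstract entities like atlanta:ci
    return lf

print(to_template("(to_city $1 atlanta:ci)"))  # (to_city v_a v_e)
```

Two different LFs that mention different cities now map to the same template string, which is what makes templates shareable across examples.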
Many LFs differ only in the mentioned atoms, such as entities and attribute values. An LF template is created by replacing the atoms in LFs with typed slot variables. As an example, the LF template of our example in Sec. 1 is created by substituting i) a typed atom variable $v_{e}$ for the entity "atlanta:ci"; ii) a shared variable name $v_{a}$ for the variables "$\$0$" and "$\$1$":

_(lambda $v_{a}$ e (exists $v_{a}$ (and (ground_transport $v_{a}$) (to_city $v_{a}$ $v_{e}$) (from_airport $v_{a}$ $v_{e}$) (= (ground_fare $v_{a}$) $v_{a}$))))_

t | Actions
---|---
$t_{1}$ | GEN [(ground_transport $v_{a}$)]
$t_{2}$ | GEN [(to_city $v_{a}$ $v_{e}$)]
$t_{3}$ | GEN [(from_airport $v_{a}$ $v_{e}$)]
$t_{4}$ | GEN [(= (ground_fare $v_{a}$) $v_{a}$)]
$t_{5}$ | REDUCE [and :- NT NT NT NT]
$t_{6}$ | REDUCE [exists :- $v_{a}$ NT]
$t_{7}$ | REDUCE [lambda :- $v_{a}$ e NT]

Table 1: An example action sequence.

Formally, let ${\bm{x}}=\{x_{1},...,x_{n}\}$ denote an NL utterance, and let its LF be represented as a semantic tree ${\bm{y}}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{v_{1},...,v_{m}\}$ denotes the node set with $v_{i}\in\mathcal{V}$, and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is its edge set. The node set $\mathcal{V}=\mathcal{V}_{p}\cup\mathcal{V}_{v}$ is further divided into a template predicate set $\mathcal{V}_{p}$ and a slot value set $\mathcal{V}_{v}$. A template predicate node represents a predicate symbol or a term, while a slot value node represents an atom mentioned in utterances. Thus, a semantic tree ${\bm{y}}$ is composed of an abstract tree $\tau_{{\bm{y}}}$ representing a template and a set of slot value nodes $\mathcal{V}_{v,{\bm{y}}}$ attached to the abstract tree.

In the few-shot setting, we are provided with a train set $\mathcal{D}_{train}$, a support set $\mathcal{D}_{s}$, and a test set $\mathcal{D}_{test}$. Each example in any of those sets is an utterance-LF pair $({\bm{x}}_{i},{\bm{y}}_{i})$.
The new predicates appear only in $\mathcal{D}_{s}$ and $\mathcal{D}_{test}$, not in $\mathcal{D}_{train}$. For $K$-shot learning, there are $K$ pairs $({\bm{x}}_{i},{\bm{y}}_{i})$ for each new predicate $p$ in $\mathcal{D}_{s}$. Each new predicate also appears in the test set. The goal is to maximize the accuracy of estimating LFs given utterances in $\mathcal{D}_{test}$ by using a parser trained on $\mathcal{D}_{train}\cup\mathcal{D}_{s}$.

### 3.1 Transition System

We apply the transition system of Cheng et al. (2019) to perform a sequence of transition actions that generates the template of a semantic tree. The transition system maintains partially-constructed outputs using a stack. The parser starts with an empty stack. At each step, it performs one of the following transition actions to update the parsing state and generate a tree node. The process repeats until the stack contains a complete tree.

* • GEN [$y$] creates a new leaf node $y$ and pushes it on top of the stack.
* • REDUCE [$r$] identifies an implication rule $\text{head}:-\text{body}$. The rule body is first popped from the stack. A new subtree is formed by attaching the rule head as a new parent node to the rule body. Then the whole subtree is pushed back onto the stack.

Table 1 shows such an action sequence for generating the above LF template. Each action produces known or new predicates.

### 3.2 Base Parser

ProtoParser generates an LF in two steps: i) template generation, ii) slot filling. The base architecture largely resembles Cheng et al. (2019).

#### Template Generation

Given an utterance, the task is to generate a sequence of actions $\mathbf{a}=a_{1},...,a_{k}$ that builds an abstract tree $\tau_{{\bm{y}}}$. We found that LFs often contain idioms, which are frequent subtrees shared across LF templates. Thus we apply a template normalization procedure in a similar manner to Iyer et al. (2019) to preprocess all LF templates.
It collapses idioms into single units such that all LF templates are converted into a compact form. The neural transition system consists of an encoder and a decoder for estimating action probabilities:

$P(\mathbf{a}|{\bm{x}})=\prod_{t=1}^{|\mathbf{a}|}P(a_{t}|{\bm{a}}_{<t},{\bm{x}})$ (1)

Encoder. We apply a bidirectional Long Short-Term Memory (LSTM) network Gers et al. (1999) to map a sequence of $n$ words into a sequence of contextual word representations $\{\mathbf{e}_{i}\}_{i=1}^{n}$.

Template Decoder. The decoder applies a stack-LSTM Dyer et al. (2015) to generate action sequences. A stack-LSTM is a unidirectional LSTM augmented with a pointer. The pointer points to a particular hidden state of the LSTM, which represents a particular state of the stack; it moves to a different hidden state to indicate a different state of the stack. At time $t$, the stack-LSTM produces a hidden state $\mathbf{h}^{d}_{t}$ by $\mathbf{h}^{d}_{t}=\text{LSTM}(\mu_{t},\mathbf{h}^{d}_{t-1})$, where $\mu_{t}$ is the concatenation of the embedding $\mathbf{c}_{a_{t-1}}$ of the action estimated at time $t-1$ and the representation $\mathbf{h}_{y_{t-1}}$ of the partial tree generated by the history of actions up to time $t-1$. As is common practice, $\mathbf{h}^{d}_{t}$ is concatenated with an attended representation $\mathbf{h}_{t}^{a}$ over encoder hidden states to yield $\mathbf{h}_{t}$, with $\mathbf{h}_{t}=\mathbf{W}\begin{bmatrix}\mathbf{h}_{t}^{d}\\ \mathbf{h}_{t}^{a}\end{bmatrix}$, where $\mathbf{W}$ is a weight matrix and $\mathbf{h}^{a}_{t}$ is created by soft attention:

$\mathbf{h}^{a}_{t}=\sum_{i=1}^{n}P(\mathbf{e}_{i}|\mathbf{h}^{d}_{t})\mathbf{e}_{i}$ (2)

We apply the dot product to compute the normalized attention scores $P(\mathbf{e}_{i}|\mathbf{h}^{d}_{t})$ Luong et al. (2015). Supervised attention Rabinovich et al. (2017); Yin and Neubig (2018) is also applied to facilitate the learning of attention weights.
Given $\mathbf{h}_{t}$, the probability of an action is estimated by:

$P(a_{t}|\mathbf{h}_{t})=\frac{\exp(\mathbf{c}_{a_{t}}^{\intercal}\mathbf{h}_{t})}{\sum_{a^{\prime}\in\mathcal{A}_{t}}\exp(\mathbf{c}_{a^{\prime}}^{\intercal}\mathbf{h}_{t})}$ (3)

where $\mathbf{c}_{a}$ denotes the embedding of action $a$, and $\mathcal{A}_{t}$ denotes the set of applicable actions at time $t$. The initialization of these embeddings is explained in the following section.

#### Slot Filling

A tree node in a semantic tree may contain more than one slot variable due to template normalization. Since there are two types of slot variables, given a tree node with slot variables, we employ an LSTM-based decoder with the same architecture as the template decoder to fill each type of slot variable. The output of such a decoder is a value sequence of the same length as the number of slot variables of that type in the given tree node.

## 4 Few-Shot Model Training

The few-shot setting differs from the supervised setting in having a support set at test time in addition to the train/test sets. The support set contains $k$ utterance-LF pairs per new predicate, while the training set contains only known predicates. To evaluate model performance on new predicates, the test set contains LFs with both known and new predicates. Given the support set, we can tell whether a predicate is known or new by checking whether it appears in the train set.

We take two steps to train our model: i) pre-training on the training set, ii) fine-tuning on the support set. Predictive performance is measured on the test set. We take the two-step approach because i) our experiments show that it performs better than training on the union of the train set and the support set; and ii) for any new support set, it is computationally more efficient than training from scratch on the union of the train set and the support set.
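The action distribution of Eq. (3) is simply a softmax over dot products between the decoder state and the embeddings of currently applicable actions. A minimal numeric sketch (all vectors are made-up toy values, not learned parameters):

```python
# Eq. (3) in miniature: score each applicable action by c_a . h_t,
# then normalize with a softmax over the applicable-action set A_t.
import math

def action_probs(h, action_embs):
    scores = {a: sum(hi * ci for hi, ci in zip(h, c))
              for a, c in action_embs.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {a: math.exp(s) / z for a, s in scores.items()}

h = [1.0, 0.5]  # toy decoder state h_t
embs = {"GEN[(to_city v_a v_e)]": [0.9, 0.1],  # toy action embeddings c_a
        "REDUCE[and]": [0.2, 0.3]}
p = action_probs(h, embs)
print(max(p, key=p.get))  # GEN[(to_city v_a v_e)]
```

Restricting the softmax to the applicable set $\mathcal{A}_{t}$ is what lets the same scoring rule work for both known and new predicates once their embeddings exist.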
Because there is a distribution discrepancy between the train set and the support set due to new predicates, meta-learning algorithms Snell et al. (2017); Finn et al. (2017) suggest simulating the testing scenario in pre-training by splitting each batch into a meta-support set and a meta-test set. The models utilize the information (e.g. prototype vectors) acquired from the meta-support set to minimize errors on the meta-test set. In this way, the meta-support and meta-test sets simulate the support and test sets sharing new predicates. However, we cannot directly apply such a training procedure, for the following two reasons. First, each LF in the support and test sets is a mixture of both known predicates and new predicates. To simulate the support and test sets, the meta-support and meta-test sets should include both types of predicates as well; we cannot assume that there is only one type of predicate. Second, our preliminary experiments show that if there is sufficient training data, it is better to train the action embeddings $\mathbf{c}$ (Eq. (3)) of known predicates in a supervised way, while action embeddings initialized by a metric-based meta-learning algorithm Snell et al. (2017) perform better for rarely occurring new predicates. Therefore, we cope with the differences between known and new predicates by using a customized initialization method in fine-tuning and a designated pre-training procedure that mimics fine-tuning on the train set. In the following, we introduce fine-tuning first because it helps in understanding our pre-training procedure.

### 4.1 Fine-tuning

During fine-tuning, the model parameters and the action embeddings in Eq. (3) for known predicates are obtained from the pre-trained model. The embeddings of actions that produce new predicates, $\mathbf{c}_{a_{t}}$, are initialized using prototype vectors as in prototypical networks Snell et al. (2017).
The prototype representations act as a form of regularization, sharing a similar idea with deep learning techniques that build on pre-trained models. A prototype vector for an action $a_{t}$ is constructed from the hidden states of the template decoder collected at the times $a_{t}$ is predicted on a support set. Following Snell et al. (2017), a prototype vector is built by taking the mean of this set of hidden states $\mathbf{h}_{t}$: $\mathbf{c}_{a_{t}}=\frac{1}{|M|}\sum_{\mathbf{h}_{t}\in M}\mathbf{h}_{t}$ (4) where $M$ denotes the set of all hidden states collected at the times action $a_{t}$ is applied. After initialization, all model parameters and the action embeddings are further improved by fine-tuning the model on the support set with a supervised training objective $\mathcal{L}_{f}$: $\mathcal{L}_{f}=\mathcal{L}_{s}+\lambda\Omega$ (5) where $\mathcal{L}_{s}$ is the cross-entropy loss and $\Omega$ is an attention regularization term explained below. The degree of regularization is adjusted by $\lambda\in\mathbb{R}^{+}$.

#### Attention Regularization

We address the poorly learned attention scores $P(\mathbf{e}_{i}|\mathbf{h}^{d}_{t})$ of infrequent actions by introducing a novel attention regularization. We observe that the probability $P(a_{j}|x_{i})=\frac{\text{count}(a_{j},x_{i})}{\text{count}(x_{i})}$ and the character similarity between the predicate generated by action $a_{j}$ and the token $x_{i}$ are often strong indicators of their alignment. These indicators can be further strengthened by manually annotating the predicates with their corresponding natural language tokens. In our work, we adopt $1-dist(a_{j},x_{i})$ as the character similarity, where $dist(a_{j},x_{i})$ is the normalized Levenshtein distance Levenshtein (1966).
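Both alignment indicators can be computed offline. A minimal sketch, in which a constant `gate` stands in for the learned sigmoid $\sigma(\mathbf{w}^{\intercal}_{p}\mathbf{h}^{d}_{t})$ of the paper and all function names are ours:

```python
def levenshtein(a, b):
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def char_sim(pred, tok):
    """1 - normalized Levenshtein distance; lies in [0, 1]."""
    if not pred and not tok:
        return 1.0
    return 1.0 - levenshtein(pred, tok) / max(len(pred), len(tok))

def alignment_scores(cond_prob, tokens, pred, gate):
    """g(a, x_i) for each token, then the normalization of Eq. (6).
    cond_prob maps a token x_i to the count-based P(a | x_i)."""
    g = [gate * cond_prob.get(t, 0.0) + (1 - gate) * char_sim(pred, t)
         for t in tokens]
    z = sum(g)
    return [x / z for x in g]
```

The regularizer $\Omega$ is then the L1 distance between the model's attention over tokens and these reference scores.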
Both measures are in the range $[0,1]$, so we compute alignment scores as $g(a_{j},x_{i})=\sigma(\cdot)P(a_{j}|x_{i})+(1-\sigma(\cdot))\,char\\_sim(a_{j},x_{i})$, where the sigmoid function $\sigma(\mathbf{w}^{\intercal}_{p}\mathbf{h}^{d}_{t})$ combines the two precomputed measures into a single score. The corresponding normalized attention score is given by $P^{\prime}(x_{i}|a_{k})=\frac{g(a_{k},x_{i})}{\sum_{j=1}^{n}g(a_{k},x_{j})}$ (6) The attention scores $P(x_{i}|a_{k})$ should be similar to $P^{\prime}(x_{i}|a_{k})$. Thus, we define the regularization term as $\Omega=\sum_{ij}|P(x_{i}|a_{j})-P^{\prime}(x_{i}|a_{j})|$ during training.

### 4.2 Pre-training

The pre-training objective is two-fold: i) learn action embeddings for known predicates in a supervised way, and ii) ensure our model can quickly adapt to the actions of new predicates, whose embeddings are initialized by prototype vectors before fine-tuning.

#### Predicate-dropout

Starting with randomly initialized model parameters, we alternately use one batch to optimize the meta-loss $\mathcal{L}_{m}$ and one batch to optimize the supervised loss $\mathcal{L}_{s}$. In a batch for $\mathcal{L}_{m}$, we split the data into a meta-support set and a meta-test set. In order to simulate the existence of new predicates, we randomly select a subset of predicates as "new", and replace their action embeddings $\mathbf{c}$ by prototype vectors constructed by applying Eq. (4) over the meta-support set. The actions of the remaining predicates keep the embeddings learned from previous batches. The resulting action embedding matrix $\mathbf{C}$ is the combination of both: $\mathbf{C}=(1-\mathbf{m}^{\intercal})\mathbf{C}_{s}+\mathbf{m}^{\intercal}\mathbf{C}_{m}$ (7) where $\mathbf{C}_{s}$ is the embedding matrix learned in a supervised way, and $\mathbf{C}_{m}$ is constructed using prototype vectors on the meta-support set.
The mask vector $\mathbf{m}$ is generated by setting the indices of the actions of the "new" predicates to one and the others to zero. We refer to this operation as predicate-dropout. The training algorithm for the meta-loss is summarised in Algorithm 1. In a batch for $\mathcal{L}_{s}$, we update the model parameters and all action embeddings with the cross-entropy loss $\mathcal{L}_{s}$, together with the attention regularization. Thus, the overall training objective becomes $\mathcal{L}_{p}=\mathcal{L}_{m}+\mathcal{L}_{s}+\lambda\Omega$ (8)

#### Pre-training smoothing

Due to the new predicates, the number of candidate actions during prediction in fine-tuning and testing is larger than during pre-training. This leads to a distribution discrepancy between pre-training and testing. To minimize the difference, we assume prior knowledge of the number of actions for new predicates by adding a constant $k$ to the denominator of Eq. (3) when estimating the action probability $P(a_{t}|\mathbf{h}_{t})$ during pre-training: $P(a_{t}|\mathbf{h}_{t})=\frac{\exp(\mathbf{c}_{a_{t}}^{\intercal}\mathbf{h}_{t})}{\sum_{a^{\prime}\in\mathcal{A}_{t}}\exp(\mathbf{c}_{a^{\prime}}^{\intercal}\mathbf{h}_{t})+k}$ (9) We do not apply this smoothing during fine-tuning and testing. Despite its simplicity, the experimental results show a significant performance gain on the benchmark datasets.

Input: Training set $\mathcal{D}$, action embeddings $\mathbf{C}_{s}$ trained in a supervised way, number of meta-support examples $k$, number of meta-test examples $n$ per support example, predicate-dropout ratio $r$
Output: The loss $\mathcal{L}_{m}$.
Extract a template set $\mathcal{T}$ from the training set $\mathcal{D}$
Sample a subset $\mathcal{T}_{i}$ of size $k$ from $\mathcal{T}$
$S$ := $\emptyset$ # meta-support set
$Q$ := $\emptyset$ # meta-test set
for $t$ in $\mathcal{T}_{i}$ do
  Sample a meta-support example $s^{\prime}$ with template $t$ from $\mathcal{D}$ without replacement
  Sample a meta-test set $Q^{\prime}$ of size $n$ with template $t$ from $\mathcal{D}$
  $S=S\cup\{s^{\prime}\}$
  $Q=Q\cup Q^{\prime}$
end for
Build a prototype matrix $\mathbf{C}_{m}$ on $S$
Extract a predicate set $\mathcal{P}$ from $S$
Sample a subset $\mathcal{P}_{s}$ of size $r\times|\mathcal{P}|$ from $\mathcal{P}$ as the new predicates
Build a mask $\mathbf{m}$ using $\mathcal{P}_{s}$
With $\mathbf{C}_{s}$, $\mathbf{C}_{m}$ and $\mathbf{m}$, apply Eq. (7) to compute $\mathbf{C}$
Compute $\mathcal{L}_{m}$, the cross-entropy on $Q$ with $\mathbf{C}$
Algorithm 1 Predicate-Dropout

## 5 Experiments

#### Datasets.

We use three semantic parsing datasets: Jobs, GeoQuery, and Atis. Jobs contains 640 question-LF pairs in Prolog about job listings. GeoQuery Zelle and Mooney (1996) and Atis Price (1990) include 880 and 5,410 utterance-LF pairs in lambda calculus about US geography and flight booking, respectively. The number of predicates in Jobs, GeoQuery, and Atis is 15, 24, and 88, respectively. All atoms in the datasets are anonymized as in Dong and Lapata (2016). For each dataset, we randomly selected $m$ predicates as the new predicates: 3 for Jobs, and 5 for GeoQuery and Atis. We then split each dataset into a train set and an evaluation set, and removed the instances whose template is unique within the dataset; the number of such instances is around 100, 150 and 600 in Jobs, GeoQuery, and Atis, respectively. The ratios between the evaluation set and the train set are 1:4, 2:5, and 1:7 in Jobs, GeoQuery, and Atis, respectively.
Each LF in an evaluation set contains at least one new predicate, while the LFs in a train set contain only known predicates. To evaluate $k$-shot learning, we build a support set by randomly sampling $k$ pairs per new predicate without replacement from an evaluation set, and keep the remaining pairs as the test set. To avoid evaluation bias caused by randomness, we repeat the above process six times to build six different support/test splits from each evaluation set; one is used for hyperparameter tuning and the rest for evaluation. We consider at most 2-shot learning due to the limited number of instances per new predicate in each evaluation set.

#### Training Details.

We pre-train our parser on the training sets for {80, 100} epochs with the Adam optimizer Kingma and Ba (2014). The batch size is fixed to 64. The initial learning rate is 0.0025, and the weights are decayed after 20 epochs with decay rate 0.985. The predicate-dropout rate is 0.5. The smoothing term is set to {3, 6}. The number of meta-support examples is 30 and the number of meta-test examples per support example is 15. The coefficient of the attention regularization is set to 0.01 on Jobs and 1 on the other datasets. We employ 200-dimensional GloVe embeddings Pennington et al. (2014) to initialize the word embeddings for utterances. The hidden state size of all LSTMs Hochreiter and Schmidhuber (1997) is 256. During fine-tuning, the batch size is 2, and the learning rates and epochs are selected from {0.001, 0.0005} and {20, 30, 40, 60, 120}, respectively.
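Returning to the pre-training procedure, the meta-batch construction of Algorithm 1 and the embedding combination of Eq. (7) can be sketched under an assumed data layout; each example is a dict carrying its template and predicates, and all names here are illustrative rather than the paper's actual code:

```python
import random

def combine_embeddings(C_s, C_m, new_actions):
    """Eq. (7) per action: actions of predicates sampled as 'new' take
    their prototype embedding from C_m, the rest keep the supervised
    embedding from C_s."""
    return {a: (C_m[a] if a in new_actions else C_s[a]) for a in C_s}

def meta_batch(examples, template_of, k, n, r, rng=random):
    """One meta-batch for the meta-loss (Algorithm 1): sample k templates,
    one meta-support and n meta-test examples per template, then mark a
    fraction r of the support-set predicates as 'new' (predicate-dropout)."""
    by_tpl = {}
    for ex in examples:
        by_tpl.setdefault(template_of(ex), []).append(ex)
    support, test = [], []
    for t in rng.sample(sorted(by_tpl), min(k, len(by_tpl))):
        pool = by_tpl[t][:]
        rng.shuffle(pool)
        support.append(pool[0])      # meta-support example s'
        test.extend(pool[1:1 + n])   # meta-test set Q'
    preds = sorted({p for ex in support for p in ex["preds"]})
    dropped = set(rng.sample(preds, int(r * len(preds))))
    return support, test, dropped
```

The `dropped` set would drive the mask $\mathbf{m}$, after which the cross-entropy on the meta-test set is computed with the combined embeddings.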
Model | Jobs | GeoQuery | Atis | Jobs | GeoQuery | Atis | p-values
---|---|---|---|---|---|---|---
Seq2Seq (pt) | 11.27 | 20.00 | 17.23 | 14.58 | 33.01 | 18.76 | 3.32e-04
Seq2Seq (cb) | 11.70 | 7.64 | 2.25 | 21.49 | 14.36 | 7.91 | 6.65e-06
Seq2Seq (os) | 14.18 | 11.38 | 4.45 | 30.46 | 33.59 | 10.17 | 5.30e-05
Coarse2Fine (pt) | 10.91 | 24.07 | 17.44 | 13.83 | 35.63 | 21.08 | 1.48e-04
Coarse2Fine (cb) | 9.28 | 14.50 | 0.42 | 19.61 | 28.93 | 9.25 | 2.35e-06
Coarse2Fine (os) | 6.73 | 10.35 | 5.26 | 16.08 | 28.55 | 17.73 | 1.13e-05
IRNet (pt) | 16.00 | 20.00 | 17.12 | 19.06 | 35.05 | 20.11 | 2.86e-05
IRNet (cb) | 19.67 | 21.90 | 5.60 | 28.22 | 44.08 | 15.73 | 2.76e-03
IRNet (os) | 14.91 | 18.78 | 4.95 | 30.84 | 40.97 | 18.05 | 2.47e-04
DA | 18.91 | 9.67 | 4.29 | 21.31 | 20.88 | 17.18 | 1.13e-06
PT-MAML | 11.64 | 9.76 | 6.83 | 17.76 | 22.52 | 12.28 | 1.73e-06
Ours | 27.09 | 27.49 | 19.27 | 32.50 | 48.45 | 22.48 |

Table 2: Evaluation results on the three datasets. (Left) One-shot results. (Right) Two-shot results.

#### Baselines.

We compare our method with five competitive baselines: Seq2Seq with attention Luong et al. (2015), Coarse2Fine Dong and Lapata (2018), IRNet Guo et al. (2019), PT-MAML Huang et al. (2018), and DA Li et al. (2020). Coarse2Fine is the best-performing supervised model on the standard splits of the GeoQuery and Atis datasets. PT-MAML is a few-shot semantic parser that adopts Model-Agnostic Meta-Learning Finn et al. (2017); we adapt PT-MAML to our scenario by treating a group of instances sharing the same template as a pseudo-task. DA is the most recently proposed neural semantic parser applying domain adaptation techniques. IRNet is the strongest semantic parser that can generalize to unseen database schemas. In our case, we consider the list of predicates in a support set as the columns of a new database schema and incorporate the schema encoding module of IRNet into the encoder of our base parser.
We choose IRNet over RATSQL Wang et al. (2019) because IRNet achieves superior performance on our datasets. We consider three different supervised learning settings. First, we pre-train a model on a train set and then fine-tune it on the corresponding support set, coined pt. Second, a model is trained on the combination of a train set and a support set, coined cb. Third, the support set in cb is oversampled 10 times and 5 times for one-shot and two-shot respectively, coined os.

#### Evaluation Details.

Following prior work Dong and Lapata (2018); Li et al. (2020), we report the accuracy of exactly matched LFs as the main evaluation metric. To investigate whether the results are statistically significant, we conduct the Wilcoxon signed-rank test, which assesses whether our model consistently performs better than another baseline across all evaluation sets. It is preferable to the t-test in our case because it supports comparison across different support sets and does not assume normality of the data Demšar (2006). We include the corresponding $p$-values in our result tables.

### 5.1 Results and Discussion

Table 2 shows the average accuracies and significance test results of all parsers on all three datasets. Overall, ProtoParser outperforms all baselines by at least 2% average accuracy in both the one-shot and two-shot settings. The results are statistically significant w.r.t. the strongest baselines, IRNet (cb) and Coarse2Fine (pt), with p-values of 0.00276 and 0.000148, respectively. Given one-shot examples on Jobs, our parser achieves 7% higher accuracy than the best baseline, and the gap is 4% on GeoQuery with two-shot examples. In addition, none of the SOTA baseline parsers consistently outperforms the others when there is little parallel data for new predicates.
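The Wilcoxon signed-rank test used above can be sketched in pure Python with the normal approximation; this is a simplified illustration that assumes no zero differences and applies no tie or continuity correction, whereas in practice one would call a library routine such as `scipy.stats.wilcoxon`:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test on paired scores (sketch).

    Ranks the absolute differences (average ranks for ties), sums the
    ranks of positive differences (W+), and converts to a p-value via
    the normal approximation.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    ranked = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tie group
        for t in range(i, j + 1):
            ranks[ranked[t]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

A model that beats its baseline on every paired evaluation set yields the maximal $W^{+}$ and hence a small p-value.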
In the one-shot setting, the best supervised baseline IRNet (cb) achieves the best results among all baselines on GeoQuery and Jobs, while in the two-shot setting it performs best only on GeoQuery. It is also difficult to achieve good performance by adapting existing meta-learning or transfer learning algorithms to our problem, as evidenced by the moderate performance of PT-MAML and DA on all datasets. These difficulties in few-shot learning demonstrate the challenges imposed by infrequent predicates. There are significant proportions of infrequent predicates in the existing datasets. For example, on GeoQuery, 10 predicates contribute only 4% of the total frequency of all 24 predicates, while the top two most frequent predicates amount to 42%. As a result, the SOTA parsers achieve less than 25% and 44% accuracy with one-shot and two-shot examples, respectively. In contrast, those parsers achieve more than 84% accuracy on the standard splits of the same datasets in the supervised setting. Infrequent predicates in semantic parsing can also be viewed as a class imbalance problem when support sets and train sets are combined. In this work, the ratio between the support set and the train set in Jobs, GeoQuery, and Atis is 1:130, 1:100, and 1:1000, respectively. Different models prefer different ways of using the train and support sets. The best option for Coarse2Fine and Seq2Seq is to pre-train on a train set and then fine-tune on the corresponding support set, while IRNet favors oversampling in the two-shot setting.
#### Ablation Study

Model | Jobs | GeoQuery | Atis | Jobs | GeoQuery | Atis | p-values
---|---|---|---|---|---|---|---
Ours | 27.09 | 27.49 | 19.27 | 32.50 | 48.45 | 22.48 |
- sup | 23.63 | 18.86 | 12.91 | 26.91 | 39.51 | 14.89 | 1.44e-05
- proto | 22.91 | 18.77 | 13.24 | 29.16 | 38.93 | 16.81 | 1.77e-05
- reg | 29.27 | 18.10 | 13.66 | 31.03 | 39.61 | 18.58 | 9.60e-04
- strsim | 22.18 | 19.62 | 10.14 | 28.41 | 47.09 | 19.98 | 9.27e-04
- cond | 23.27 | 19.05 | 9.63 | 27.66 | 40.97 | 17.50 | 4.37e-05
- smooth | 24.36 | 23.60 | 15.23 | 30.84 | 44.95 | 18.71 | 3.27e-03

Table 3: Ablation study results. (Left) One-shot results. (Right) Two-shot results.

We examine the effect of each component of our parser by removing it and reporting the corresponding average accuracy. As shown in Table 3, removing any of the components almost always leads to a statistically significant drop in performance; the corresponding p-values are all below 0.00327. To investigate predicate-dropout, we exclude either the supervised loss during pre-training (-sup) or the initialization of new predicate embeddings with prototype vectors before fine-tuning (-proto). It is clear from Table 3 that ablating either the supervised-trained action embeddings or the prototype vectors hurts performance severely. We further study the efficacy of attention regularization by removing it completely (-reg), removing only the string similarity feature (-strsim), or removing only the conditional probability feature (-cond). Removing the regularization completely degrades performance sharply, except on Jobs in the one-shot setting. Further inspection shows that model learning is easier on Jobs than on the other two datasets: each predicate in Jobs almost always aligns to the same word across examples, while a predicate can align with different words/phrases in different examples in GeoQuery and Atis.
The performance drop with -strsim and -cond indicates that we cannot rely on a single statistical measure alone for regularization. For instance, predicates do not always take the same string form as the corresponding words in the input utterances; in fact, the proportion of predicates present in the input utterances is only 42%, 38% and 44% on Jobs, Atis, and GeoQuery, respectively. Furthermore, without pre-training smoothing (-smooth), the mean accuracy drops by at least 1.6% on all datasets. Smoothing enables better model parameter training through more accurate modelling during pre-training.

#### Support Set Analysis

Figure 1: (Round) The support set with the lowest accuracy. (Box) The support set with the highest accuracy.

We observe that all models consistently achieve high accuracy on certain support sets of the same dataset, while obtaining low accuracies on the others. We illustrate the reasons for this effect by plotting the evaluation set of GeoQuery. Each data point in Figure 1 depicts a representation generated by the encoder of our parser after pre-training; we applied t-SNE Maaten and Hinton (2008) for dimensionality reduction. We highlight two support sets used in the one-shot setting on GeoQuery. The examples in the highest-performing support set scatter evenly and cover different dense regions of the feature space, while the examples in the lowest-performing support set are far from a significant number of dense regions. Thus, the examples in good support sets are more representative of the underlying distribution than those in poor support sets. When we leave out each example in the highest-performing support set and re-evaluate our parser each time, we observe that the good ones (e.g. the green box in Figure 1) are located either in or close to some of the dense regions.

## 6 Conclusion and Future Work

We propose a novel few-shot learning based semantic parser, coined ProtoParser, to cope with new predicates in LFs.
To address the challenges in few-shot learning, we propose to train the parser with a pre-training procedure involving predicate-dropout, attention regularization, and pre-training smoothing. The resulting model achieves superior results over competitive baselines on three benchmark datasets.

## References

* Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_.
* Banarescu et al. (2013) Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In _Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse_, pages 178–186.
* Bao et al. (2018) Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 1903–1913.
* Cai and Yates (2013) Qingqing Cai and Alexander Yates. 2013. Semantic parsing freebase: Towards open-domain semantic parsing. In _Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity_, pages 328–338.
* Chang et al. (2019) Shuaichen Chang, Pengfei Liu, Yun Tang, Jing Huang, Xiaodong He, and Bowen Zhou. 2019. Zero-shot text-to-sql learning with auxiliary task. _arXiv preprint arXiv:1908.11052_.
* Chen et al. (2018) Bo Chen, Le Sun, and Xianpei Han. 2018. Sequence-to-action: End-to-end semantic graph generation for semantic parsing. _arXiv preprint arXiv:1809.00773_.
* Cheng et al. (2019) Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2019. Learning an executable neural semantic parser. _Computational Linguistics_, 45(1):59–94.
* Dadashkarimi et al.
(2018) Javid Dadashkarimi, Alexander Fabbri, Sekhar Tatikonda, and Dragomir R Radev. 2018. Zero-shot transfer learning for semantic parsing. _arXiv preprint arXiv:1808.09889_.
* Demšar (2006) Janez Demšar. 2006. Statistical comparisons of classifiers over multiple data sets. _Journal of Machine Learning Research_, 7(Jan):1–30.
* Dong and Lapata (2016) Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. _arXiv preprint arXiv:1601.01280_.
* Dong and Lapata (2018) Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. _arXiv preprint arXiv:1805.04793_.
* Duong et al. (2018) Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2018. Active learning for deep semantic parsing. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_, pages 43–48.
* Dyer et al. (2015) Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transition-based dependency parsing with stack long short-term memory. _arXiv preprint arXiv:1505.08075_.
* Finegan-Dollak et al. (2018) Catherine Finegan-Dollak, Jonathan K Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-sql evaluation methodology. _arXiv preprint arXiv:1806.09029_.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In _Proceedings of the 34th International Conference on Machine Learning - Volume 70_, pages 1126–1135. JMLR.org.
* Gers et al. (1999) Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. 1999. Learning to forget: Continual prediction with LSTM.
* Guo et al. (2019) Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-sql in cross-domain database with intermediate representation. _arXiv preprint arXiv:1905.08205_.
* Herzig and Berant (2018) Jonathan Herzig and Jonathan Berant. 2018. Decoupling structure and lexicon for zero-shot semantic parsing. _arXiv preprint arXiv:1804.07918_.
* Herzig and Berant (2019) Jonathan Herzig and Jonathan Berant. 2019. Don’t paraphrase, detect! Rapid and effective data collection for semantic parsing. _arXiv preprint arXiv:1908.09940_.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural Computation_, 9(8):1735–1780.
* Hu et al. (2018) Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In _Proceedings of the 27th International Conference on Computational Linguistics_, pages 487–498.
* Huang et al. (2018) Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, and Xiaodong He. 2018. Natural language to structured query generation via meta-learning. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_, pages 732–738.
* Iyer et al. (2019) Srinivasan Iyer, Alvin Cheung, and Luke Zettlemoyer. 2019. Learning programmatic idioms for scalable semantic parsing. _arXiv preprint arXiv:1904.09086_.
* Jia and Liang (2016) Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. _arXiv preprint arXiv:1606.03622_.
* Kamath and Das (2018) Aishwarya Kamath and Rajarshi Das. 2018. A survey on semantic parsing. _CoRR_, abs/1812.00978.
* Kamigaito et al. (2017) Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Masaaki Nagata, Hiroya Takamura, and Manabu Okumura. 2017. Supervised attention for sequence-to-sequence constituency parsing. _IJCNLP 2017_, page 7.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_.
* Kočiskỳ et al.
(2016) Tomáš Kočiskỳ, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. _arXiv preprint arXiv:1609.09315_.
* Krishnamurthy et al. (2017) Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 1516–1526.
* Lee (2019) Dongjun Lee. 2019. Clause-wise and recursive decoding for complex and cross-domain text-to-sql generation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 6047–6053.
* Lee et al. (2019) Dongjun Lee, Jaesik Yoon, Jongyun Song, Sanggil Lee, and Sungroh Yoon. 2019. One-shot learning for text-to-sql generation. _arXiv preprint arXiv:1905.11499_.
* Lee and Choi (2018) Yoonho Lee and Seungjin Choi. 2018. Gradient-based meta-learning with learned layerwise metric and subspace. _arXiv preprint arXiv:1801.05558_.
* Levenshtein (1966) Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In _Soviet Physics Doklady_, volume 10, pages 707–710.
* Li et al. (2020) Zechang Li, Yuxuan Lai, Yansong Feng, and Dongyan Zhao. 2020. Domain adaptation for semantic parsing. _arXiv preprint arXiv:2006.13071_.
* Lin et al. (2019) Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, and Matt Gardner. 2019. Grammar-based neural text-to-sql generation. _arXiv preprint arXiv:1905.13326_.
* Liu et al. (2016) Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers_, pages 3093–3102.
* Liu et al.
(2017) Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1789–1798.
* Luong et al. (2015) Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. _arXiv preprint arXiv:1508.04025_.
* Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. _Journal of Machine Learning Research_, 9(Nov):2579–2605.
* Manning et al. (2008) Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. _Introduction to Information Retrieval_. Cambridge University Press.
* Montague (1973) Richard Montague. 1973. The proper treatment of quantification in ordinary English. In _Approaches to Natural Language_, pages 221–242. Springer.
* Ni et al. (2019) Ansong Ni, Pengcheng Yin, and Graham Neubig. 2019. Merging weak and active supervision for semantic parsing. _arXiv preprint arXiv:1911.12986_.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 1532–1543.
* Price (1990) Patti J Price. 1990. Evaluation of spoken language systems: The ATIS domain. In _Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990_.
* Rabinovich et al. (2017) Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1139–1149.
* Reed et al.
(2017) Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, SM Eslami, Danilo Rezende, Oriol Vinyals, and Nando de Freitas. 2017. Few-shot autoregressive density estimation: Towards learning to learn distributions. _arXiv preprint arXiv:1710.10304_.
* Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In _Advances in Neural Information Processing Systems_, pages 4077–4087.
* Su and Yan (2017) Yu Su and Xifeng Yan. 2017. Cross-domain semantic parsing via paraphrasing. _arXiv preprint arXiv:1704.05974_.
* Sukhbaatar et al. (2015) Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In _Advances in Neural Information Processing Systems_, pages 2440–2448.
* Sun et al. (2019) Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, and Daxin Jiang. 2019. Neural semantic parsing in low-resource settings with back-translation and meta-learning. _arXiv preprint arXiv:1909.05438_.
* Sutskever et al. (2014) I Sutskever, O Vinyals, and QV Le. 2014. Sequence to sequence learning with neural networks. _Advances in NIPS_.
* Vinyals et al. (2016) Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In _Advances in Neural Information Processing Systems_, pages 3630–3638.
* Wang et al. (2019) Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2019. RAT-SQL: Relation-aware schema encoding and linking for text-to-sql parsers. _arXiv preprint arXiv:1911.04942_.
* Wang et al. (2015) Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, volume 1, pages 1332–1342.
* Wilcoxon (1992) Frank Wilcoxon. 1992.
Individual comparisons by ranking methods. In _Breakthroughs in Statistics_, pages 196–202. Springer.
* Yin and Neubig (2018) Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation. _arXiv preprint arXiv:1810.02720_.
* Yin et al. (2018) Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. 2018. StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing. In _The 56th Annual Meeting of the Association for Computational Linguistics (ACL)_.
* Yu et al. (2018) Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. _arXiv preprint arXiv:1809.08887_.
* Zelle and Mooney (1996) John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In _Proceedings of the National Conference on Artificial Intelligence_, pages 1050–1055.
* Zhang et al. (2019a) Rui Zhang, Tao Yu, He Yang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019a. Editing-based sql query generation for cross-domain context-dependent questions. _arXiv preprint arXiv:1909.00786_.
* Zhang et al. (2019b) Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. _arXiv preprint arXiv:1909.02607_.
* Zhu et al. (2019) Q. Zhu, X. Ma, and X. Li. 2019. Statistical learning for semantic parsing: A survey. _Big Data Mining and Analytics_, 2(4):217–239.

## Appendix A Template Normalization

Input: A set of abstract trees $\mathcal{T}$, a minimal support $\tau$
Output: A set of normalized trees
$O$ := mapping of subtrees to their occurrences in $\mathcal{T}$.
for tree $t$ in $\mathcal{T}$ do
  update the occurrences of all leaf nodes $v$ of $t$ in $O[v]$
end for
while $O$ is updated with new trees do
  for tree $t$ with occurrence list $l$ in $O$ do
    build the occurrence list $l^{\prime}$ for each supertree $t^{\prime}$ of $t$
    if size($l^{\prime}$) $\geq$ size($l$) then
      $O[t^{\prime}]=l^{\prime}$
    end if
  end for
end while
for tree $t$ with occurrence list $l$ in $O$ do
  if size($l$) $\geq\tau$ then
    collapse $t$ into a node for all $t^{\prime}$ in $l$
  end if
end for
Algorithm 2 Template Normalization

t | Stack | Action
---|---|---
$t_{1}$ | [] | GEN [(ground_transport $v_{a}$)]
$t_{2}$ | [(ground_transport $v_{a}$)] | GEN [(to_city $v_{a}$ $v_{e}$)]
$t_{3}$ | [(ground_transport $v_{a}$), (to_city $v_{a}$ $v_{e}$)] | GEN [(from_airport $v_{a}$ $v_{e}$)]
$t_{4}$ | [(ground_transport $v_{a}$), (to_city $v_{a}$ $v_{e}$), (from_airport $v_{a}$ $v_{e}$)] | GEN [(= (ground_fare $v_{a}$) $v_{a}$)]
$t_{5}$ | [(ground_transport $v_{a}$), (to_city $v_{a}$ $v_{e}$), (from_airport $v_{a}$ $v_{e}$), (= (ground_fare $v_{a}$) $v_{a}$)] | REDUCE [and :- NT NT NT NT]
$t_{6}$ | [(and (ground_transport $v_{a}$) (to_city $v_{a}$ $v_{e}$) (from_airport $v_{a}$ $v_{e}$) (= (ground_fare $v_{a}$) $v_{a}$))] | REDUCE [exists :- $v_{a}$ NT]
$t_{7}$ | [(exists $v_{a}$ (and (ground_transport $v_{a}$) (to_city $v_{a}$ $v_{e}$) (from_airport $v_{a}$ $v_{e}$) (= (ground_fare $v_{a}$) $v_{a}$)))] | REDUCE [lambda :- $v_{a}$ e NT]
$t_{8}$ | [(lambda $v_{a}$ e (exists $v_{a}$ (and (ground_transport $v_{a}$) (to_city $v_{a}$ $v_{e}$) (from_airport $v_{a}$ $v_{e}$) (= (ground_fare $v_{a}$) $v_{a}$))))] |

Table 4: The transition sequence for parsing the LF template of "how much is the ground transportation between atlanta and downtown?".

Many LF templates in the existing corpora have shared subtrees in the corresponding abstract semantic trees. The tree normalization algorithm aims to treat those subtrees as single units.
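The support counting that drives this normalization can be sketched by enumerating subtrees; this covers only the counting phase of Algorithm 2 (not the completeness or fixed-sibling checks), and the nested-tuple tree representation is an assumption of ours:

```python
from collections import Counter

def frequent_subtrees(trees, min_support):
    """Support of a subtree = number of trees in which it occurs.
    Returns the subtrees whose support meets the minimal support.
    Trees are nested tuples, e.g. ('=', ('ground_fare', '$0'), '$1')."""
    def subtrees(t):
        out = {t}
        if isinstance(t, tuple):
            for child in t[1:]:  # t[0] is the node label
                out |= subtrees(child)
        return out

    support = Counter()
    for t in trees:
        for s in subtrees(t):  # a set, so each subtree counts once per tree
            support[s] += 1
    return {s: c for s, c in support.items() if c >= min_support}
```

Frequent subtrees found this way are candidates for being collapsed into single tree nodes.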
The identification of such shared structures is conducted by finding frequent subtrees. Given an LF dataset, the support of a tree $t$ is the number of LFs in which it occurs as a subtree. We call a tree frequent if its support is greater than or equal to a pre-specified minimal support. We also observe that in an LF dataset, some frequent subtrees always have the same supertree. For example, ground_fare $1 is always the child of =( $\dots$, $0 ) in the whole dataset. We call a subtree complete w.r.t. a dataset if none of its supertrees in the dataset occurs as often as that subtree itself. Another observation is that some tree nodes have fixed siblings. To check whether two tree nodes sharing the same root are fixed siblings, we merge the two tree paths together. If the merged tree has the same support as each of the two trees, we say that the two trees pass the fixed-sibling test. In the same manner, we collapse tree nodes with fixed siblings, as well as their parent node, into a single tree node to save unnecessary parse actions. Thus, the normalization is conducted by collapsing a frequent complete abstract subtree into a tree node. We call a tree normalized if all its frequent complete abstract subtrees are collapsed into the corresponding tree nodes. The pseudocode of the tree normalization algorithm is provided in Algorithm 2.

## Appendix B One Example Transition Sequence

In Table 4, we provide an example transition sequence to display the stack states and the corresponding action sequence when parsing the utterance from the Introduction, "how much is the ground transportation between atlanta and downtown?".
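The stack dynamics of Table 4 can be replayed with a small sketch (the action encoding below is hypothetical; REDUCE templates list their arguments, with `NT` slots filled by popped subtrees in order):

```python
def replay(actions):
    """Replay a GEN/REDUCE transition sequence, returning the stack.

    GEN pushes a complete subtree (as an s-expression string);
    REDUCE pops one subtree per NT slot of its template and pushes
    the combined s-expression.
    """
    stack = []
    for kind, payload in actions:
        if kind == "GEN":
            stack.append(payload)
        else:  # REDUCE: payload is (head, argument list)
            head, args = payload
            n = args.count("NT")
            popped = stack[len(stack) - n:]
            del stack[len(stack) - n:]
            it = iter(popped)
            filled = [next(it) if a == "NT" else a for a in args]
            stack.append("(" + " ".join([head] + filled) + ")")
    return stack


# Abbreviated version of the sequence in Table 4.
actions = [
    ("GEN", "(ground_transport $va)"),
    ("GEN", "(to_city $va $ve)"),
    ("REDUCE", ("and", ["NT", "NT"])),
    ("REDUCE", ("exists", ["$va", "NT"])),
]
final = replay(actions)
```

The two REDUCE steps here mirror steps $t_{5}$ and $t_{6}$ of Table 4: the `and` template consumes the generated conjuncts, and the `exists` template wraps the result with its bound variable.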
Symmetries, Spinning Particles and the TCFH of D=4,5 Minimal Supergravities

G. Papadopoulos and E. Pérez-Bolaños

Department of Mathematics, King’s College London, Strand, London WC2R 2LS, UK

<EMAIL_ADDRESS>, <EMAIL_ADDRESS>

###### Abstract We find that spinning particles with suitable couplings propagating in certain supersymmetric backgrounds of $D=4$, $N=2$ and $D=5$, $N=1$ minimal supergravities are invariant under symmetries generated by the twisted covariant form hierarchies of these theories. We also compare our results with the symmetries of spinning particles generated by Killing-Yano forms which are responsible for the separability properties of some gravitational backgrounds.

It has been known for some time that the geodesic equation, as well as several classical field equations, like the Klein-Gordon, Hamilton-Jacobi, Dirac and Maxwell equations, in some gravitational backgrounds can be separated; for reviews see [1, 2] and references therein. A celebrated example of such a background is the Kerr black hole solution [3, 4, 5, 6]. The separability properties of gravitational backgrounds are due to the existence of hidden symmetries generated by rank 2 Killing-Stäckel tensors111This is a symmetric tensor $K$ which satisfies $\nabla_{(\mu}K_{\nu\rho)}=0$. and the observation that the Killing tensors are squares of Killing-Yano (KY) forms [7, 8, 9], see footnote 2 for the definitions of these forms. Furthermore, it was pointed out in [10] that the KY forms generate fermionic symmetries, like supersymmetries, in actions that model the propagation of spinning particles [11, 12, 13, 14] in such backgrounds. As a result, there is a relation between the separability properties of gravitational backgrounds and conservation laws in spinning particle systems.
Recently, it has been shown in [15] that the conditions satisfied by the Killing spinor form bi-linears of supersymmetric backgrounds in all supergravity theories including higher curvature corrections can be arranged as twisted covariant form hierarchies (TCFH) [16]. This means that there is a connection $\nabla^{\cal F}$ which depends on the fluxes ${\cal F}$ of a theory such that $\displaystyle\nabla_{X}^{\cal F}{\cal X}=i_{X}{\cal P}+X\wedge{\cal Q}\leavevmode\nobreak\ ,$ (1) where ${\cal X}$ is a multi-form spanned by all the (Killing spinor) form bilinears, ${\cal P}$ and ${\cal Q}$ are multi-forms which depend on ${\cal F}$ and the form bilinears and $X$ is a vector field on the spacetime. We have also denoted with $X$ the associated one-form to $X$, $X(\cdot)=g(\cdot,X)$. An alternative way to state the above condition is that the highest weight representation in the decomposition of the tensor $\nabla^{\cal F}{\cal X}$ in orthogonal irreducible representations vanishes. In general the connection $\nabla^{\cal F}$ is not form degree preserving. A consequence of the TCFH is that ${\cal X}$ satisfies a generalisation222For the standard CKY equation $\nabla^{\cal F}=\nabla$, where $\nabla$ is the Levi-Civita connection. A form $\chi$ that satisfies the CKY equation is KY provided it is also co-closed $\delta\chi=0$. While $\chi$ is a closed conformal KY (CCKY) form, iff $d\chi=0$. of the Conformal-Killing-Yano (CKY) equation with respect to the connection $\nabla^{\cal F}$, i.e. form bilinears satisfy the condition $\displaystyle(\nabla_{X}^{\cal F}{\cal X})|_{p}=i_{X}(d^{\cal F}{\cal X}|_{p+1})-{1\over D-p+1}X\wedge(\delta^{\cal F}{\cal X})|_{p-1}\leavevmode\nobreak\ ,$ (2) where we have restricted the equation to a form bilinear of degree $p$ and $D$ is the spacetime dimension. 
The operations $d^{\cal F}$ and $\delta^{\cal F}$ are defined by the above equation and denote exterior differentiation and its (formal) adjoint with respect to the connection $\nabla^{\cal F}$. In particular, $d^{\cal F}$ is defined by skew-symmetrising the indices in (2), while $\delta^{\cal F}$ is defined by taking a contraction of (2) with respect to the spacetime metric. The validity of (2) for all supersymmetric solutions raises the question of whether the form bilinears can be used to investigate the separability properties of these backgrounds, and of whether they generate symmetries in spinning particles propagating on such backgrounds. In this article, we shall demonstrate that the form bilinears of a large class of supersymmetric $D=4$, $N=2$ and $D=5$, $N=1$ minimal supergravity backgrounds generate symmetries in spinning particle actions with appropriate couplings. The key observation is that some of the conditions for invariance of the particle actions of [17] under certain fermionic transformations can also be expressed as TCFHs. In this case, the associated TCFH connection depends on the couplings of the particle action and acts on the forms that determine the infinitesimal fermionic symmetries of the system. Thus the task is to match the TCFHs of supersymmetric backgrounds with those of the spinning particle symmetries after an appropriate identification of the supergravity fields with the couplings of the particle system, and of the form bilinears with the forms that generate the fermionic symmetries, respectively. We shall demonstrate that this can be achieved in a variety of cases. We shall also comment on the use of the form bilinears to investigate the separability properties of supersymmetric backgrounds. The supercovariant connection of minimal $D=4$, $N=2$ supergravity is $\displaystyle{\cal D}_{\mu}\equiv\nabla_{\mu}+{i\over 4}F_{ab}\Gamma^{ab}\Gamma_{\mu}\leavevmode\nobreak\ ,$ (3) where $F$ is a 2-form field strength, $dF=0$.
The field equations also imply that $F$ is co-closed, $d{}^{*}F=0$. If $\epsilon$ is a Killing spinor, ${\cal D}_{\mu}\epsilon=0$, the form bilinears of the theory up to a Hodge duality are $\displaystyle f=\langle\epsilon,\epsilon\rangle_{D}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ h=\langle\epsilon,\Gamma_{5}\epsilon\rangle_{D}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ K=\langle\epsilon,\Gamma_{a}\epsilon\rangle_{D}\,e^{a}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (4) $\displaystyle Y^{1}=\langle\epsilon,\Gamma_{a}\Gamma_{5}\epsilon\rangle_{D}\,e^{a}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ Y^{3}+iY^{2}=\langle\tilde{\epsilon},\Gamma_{a}\Gamma_{5}\epsilon\rangle_{D}\,e^{a}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (5) $\displaystyle\omega^{1}={1\over 2}\langle\epsilon,\Gamma_{ab}\epsilon\rangle_{D}\,e^{a}\wedge e^{b}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \omega^{3}+i\omega^{2}={1\over 2}\langle\tilde{\epsilon},\Gamma_{ab}\epsilon\rangle_{D}\,e^{a}\wedge e^{b}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (6) where the spacetime metric $g=\eta_{ab}e^{a}e^{b}$ with $e^{a}=e^{a}_{\mu}dx^{\mu}$ a local co-frame, $\langle\cdot,\cdot\rangle_{D}$ is the Dirac inner product, $C$ is a charge conjugation matrix such that $C*\Gamma_{a}=-\Gamma_{a}C*$ and $C*C*=-1$, and $\tilde{\epsilon}=C*\epsilon$. $C=\Gamma_{3}$ in the conventions of [18]. Observe that if $\epsilon$ is a Killing spinor so is $\tilde{\epsilon}$. 
The TCFH of the theory [15] reads $\displaystyle\nabla_{\mu}f=iF_{\mu\nu}K^{\nu}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \nabla_{\mu}h={}^{*}F_{\mu\nu}K^{\nu}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \nabla_{\mu}K_{\nu}=ifF_{\mu\nu}-h\,{}^{*}F_{\mu\nu}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (7) $\displaystyle\nabla_{\mu}Y^{r}_{\nu}+2{}^{*}F_{\mu\rho}\omega^{r\rho}{}_{\nu}=2{}^{*}F_{[\mu|\rho|}\omega^{r\rho}{}_{\nu]}-{1\over 2}g_{\mu\nu}{}^{*}F_{\rho\lambda}\omega^{r\rho\lambda}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ r=1,2,3\leavevmode\nobreak\ ,$ (8) $\displaystyle\nabla_{\mu}\omega^{r}_{\nu\rho}-4\,{}^{*}F_{\mu[\nu}Y^{r}_{\rho]}=-3\,{}^{*}F_{[\mu\nu}Y^{r}_{\rho]}-2\,g_{\mu[\nu}{}^{*}F_{\rho]\lambda}Y^{r\lambda}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ r=1,2,3\leavevmode\nobreak\ .$ (9) In what follows we shall also consider the TCFH associated with the dual 2-forms $\chi^{r}$ of $\omega^{r}$ which can be defined as $\displaystyle\chi^{1}=-{i\over 2}\langle\epsilon,\Gamma_{ab}\Gamma_{5}\epsilon\rangle_{D}\,e^{a}\wedge e^{b}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \chi^{3}+i\chi^{2}=-{i\over 2}\langle\tilde{\epsilon},\Gamma_{ab}\Gamma_{5}\epsilon\rangle_{D}\,e^{a}\wedge e^{b}\leavevmode\nobreak\ .\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (10) One can show that the Killing spinor equations imply the TCFH $\displaystyle\nabla_{\mu}Y^{r}_{\nu}+2{}F_{\mu\rho}\chi^{r\rho}{}_{\nu}=2{}F_{[\mu|\rho|}\chi^{r\rho}{}_{\nu]}-{1\over 2}g_{\mu\nu}{}F_{\rho\lambda}\chi^{r\rho\lambda}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ r=1,2,3$ (11) 
$\displaystyle\nabla_{\mu}\chi^{r}_{\nu\rho}-4\,F_{\mu[\nu}Y^{r}_{\rho]}=-3\,F_{[\mu\nu}Y^{r}_{\rho]}-2\,g_{\mu[\nu}F_{\rho]\lambda}Y^{r\lambda}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ r=1,2,3\leavevmode\nobreak\ .$ (12) It is clear from (9) that $K$ is a Killing vector which also leaves $F$ invariant, ${\cal L}_{K}F=0$. To determine whether the above TCFH generates symmetries in a particle system propagating in the supersymmetric backgrounds of $D=4$, $N=2$ supergravity, consider the worldline action $\displaystyle S=\int dtd\theta\big{(}-{i\over 2}g_{\mu\nu}D\phi^{\mu}\partial_{t}\phi^{\nu}+iq_{\mu\nu}D\phi^{\mu}D\phi^{\nu}\psi+{1\over 2}\psi D\psi\big{)}\leavevmode\nobreak\ ,$ (13) where $\phi$ is a bosonic and $\psi$ a fermionic superfield, both depending on the worldline time $t$ and the odd coordinate $\theta$, and $D=\partial_{\theta}+i\theta\partial_{t}$ with $D^{2}=i\partial_{t}$. The fields have components $\phi=\phi|$, $\lambda=D\phi|$, $\psi=\psi|$ and $A=D\psi|$, where the restriction means evaluation at $\theta=0$. $\phi$ and $A$ are worldline bosons, while the rest of the components are worldline fermions. The couplings of the theory are the spacetime metric $g$ and the 2-form $q$, which depend on $\phi$. Later $q$ will be identified with either $F$ or its dual ${}^{*}F$. This action is manifestly invariant under one worldline supersymmetry.
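For concreteness, the manifest worldline supersymmetry can be spelled out in standard superspace conventions (a sketch; these transformation rules are assumed standard rather than taken from the text above): it is generated by $Q=\partial_{\theta}-i\theta\partial_{t}$, which satisfies $Q^{2}=-i\partial_{t}$ and $\{D,Q\}=0$, and acts as $\delta\phi^{\mu}=\epsilon Q\phi^{\mu}$, $\delta\psi=\epsilon Q\psi$ for a constant anti-commuting parameter $\epsilon$. Since the Lagrangian is built entirely from superfields, $D$ and $\int dtd\theta$, and $Q$ anticommutes with $D$, the variation of the action is a total derivative.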
To write the action above we adopted the reality conditions $\displaystyle(i\partial_{t})^{*}=i\partial_{t}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \theta^{*}=\theta\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \phi^{*}=\phi\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \lambda^{*}=\lambda\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \psi^{*}=-\psi\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ (\chi\lambda)^{*}=\chi^{*}\lambda^{*}\leavevmode\nobreak\ ,$ (14) for every two worldline fermions $\chi$ and $\lambda$. With these reality conditions the couplings of the theory, $g$ and $q$, are real. Such a choice of reality conditions is not unique. For example, one could have chosen $\psi^{*}=\psi$ at the cost of removing the imaginary unit $i$ from the coupling term $q(D\phi)^{2}\psi$ in the action. However, such a choice is not suitable for the application we are investigating. The action (13) is a special case of a general class of actions for spinning particles presented in [17]. Identifying $q$ with either $F$ or ${}^{*}F$, the Killing vector $K$ of the TCFH (9) generates the infinitesimal transformation $\displaystyle\delta\phi^{\mu}=aK^{\mu}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \delta\psi=0\leavevmode\nobreak\ ,$ (15) which is a symmetry of the action. Thus the isometries of the supersymmetric backgrounds of $D=4$, $N=2$ supergravity generate a symmetry in the particle system action (13). It remains to see whether the remaining conditions of the TCFH (9) are associated with symmetries.
For this consider the fermionic transformations $\displaystyle\delta\phi^{\mu}=i\alpha I^{\mu}{}_{\nu}D\phi^{\nu}+\alpha L^{\mu}\psi\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \delta\psi=i\alpha M_{\mu}\partial_{t}\phi^{\mu}\leavevmode\nobreak\ ,$ (16) where $I,L$ and $M$ depend on $\phi$ and $\alpha$ is an anti-commuting infinitesimal parameter. The reality condition for $\alpha$ is chosen as $\alpha^{*}=-\alpha$, which has as a consequence the presence of the imaginary unit in the $ID\phi$ term of the infinitesimal transformation of $\phi$. Again this is essential for the application we shall present below. With this choice of reality condition the tensors $I$, $L$ and $M$ are real. After some simplification, the conditions for the invariance of the action (13) under the infinitesimal transformations (16) can be expressed333There are many inequivalent ways to write the conditions for the invariance of the action (13) under the transformations (16). However, the form given below is suitable for the investigation of this example. as $\displaystyle\nabla_{\mu}I_{\nu\rho}-4q_{\mu[\nu}M_{\rho]}=-6q_{[\mu\nu}M_{\rho]}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ I_{[\nu\rho]}=I_{\nu\rho}\leavevmode\nobreak\ ,$ (17) $\displaystyle L_{\mu}=-M_{\mu}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \nabla_{\mu}M_{\nu}+2q_{\mu\rho}I^{\rho}{}_{\nu}=0\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ dq_{\lambda[\mu\nu}I^{\lambda}{}_{\rho]}=0\leavevmode\nobreak\ .$ (18) Note that if instead we had chosen as reality conditions $\psi^{*}=\psi$ and $\alpha^{*}=\alpha$, with the rest remaining the same, the sign of the term $qI$ in the conditions above would have been different.
The differential conditions as stated in (18) on the tensors associated to the infinitesimal transformations (16) are in a TCFH form, with a connection which depends on the coupling $q$ of the theory. To compare (9) with (18), one has to consider three copies of the transformation (16) generated by the tensors $I^{r}$ and $M^{r}$, $r=1,2,3$, and set $\displaystyle I^{r}=\omega^{r}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ M^{r}=Y^{r}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ q={}^{*}F\leavevmode\nobreak\ .$ (19) With these identifications the connection part of the TCFHs in (9) and (18) match. However, consistency requires that the right-hand-side of the last two equations in (9) must vanish. As a result, $\omega^{r},Y^{r}$ are parallel with respect to the TCFH connection. Note also that $d{}^{*}F=0$ as a consequence of the field equations, and so the last condition in (18) is satisfied. The commutators of the symmetries (16) can easily be computed [17]. After the identification (19), and for the backgrounds investigated below, it can easily be seen that they do not close to the standard supersymmetry algebra $\\{Q^{r},Q^{s}\\}=\delta^{rs}H$ in one dimension. This is in agreement with the commutators of the fermionic symmetries generated by KY forms in [10]. The supersymmetric solutions of minimal $D=4$, $N=2$ supergravity have been classified in [19]. A class of backgrounds for which the right-hand-side of the last two equations in (9) vanishes are those that admit a null Killing spinor, i.e. a spinor for which the bilinear $K$ is null.
For all such backgrounds, one can demonstrate as a consequence of the Killing spinor equations that the non-vanishing components of the fluxes and form bilinears are $\displaystyle K=K_{-}e^{-}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ Y^{r}=Y_{-}^{r}\,e^{-}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \omega^{r}=\omega_{-i}^{r}\,e^{-}\wedge e^{i}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (20) $\displaystyle F=F_{-i}\,e^{-}\wedge e^{i}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ {}^{*}F={}^{*}F_{-i}\,e^{-}\wedge e^{i}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (21) see [18] for more details, where $(e^{+},e^{-},e^{i})$ is a co-frame such that the metric $g=2e^{+}e^{-}+\delta_{ij}e^{i}e^{j}$, $i,j=1,2$, i.e. the form bilinears and the flux $F$ are null forms. Using this, one can easily verify that the right-hand-side of the last two equations in (9) vanishes. Therefore particle systems described by (13), propagating on backgrounds with a null Killing spinor and with couplings the spacetime metric $g$ and $q={}^{*}F$, admit symmetries (16) generated by the associated form bilinears. Such solutions include for example pp-wave type backgrounds. One can also consider the symmetries generated by the TCFH (12). The investigation for this is similar to the one we have presented above for the TCFH (9). The only difference is that in this case $I^{r}=\chi^{r}$ and $q=F$. Thus again the spinning particles described by the action (13) with couplings the spacetime metric $g$ and $q=F$ admit symmetries (16) generated by the form bilinears $Y^{r}$ and $\chi^{r}$. A similar analysis can be performed for supersymmetric backgrounds with a time-like Killing spinor, i.e. $K$ a time-like vector.
However in this case one can show that either the condition ${}^{*}F_{\mu\nu}Y^{r\nu}=0$ which arises from the comparison of (9) with (18) or $F_{\mu\nu}Y^{r\nu}=0$ which arises from the comparison of (12) with (18), for all $r=1,2,3$, require that $F=0$. This is because the 1-forms $Y^{r}$ are spacelike and span the three spatial directions of the spacetime, see [18]. The only solutions with $F=0$ are locally isometric to Minkowski spacetime. Next let us turn to investigate the TCFH of $D=5$, $N=1$ minimal supergravity. The supercovariant connection of the theory is $\displaystyle{\cal D}_{\mu}\equiv\nabla_{\mu}-{i\over 4\sqrt{3}}\big{(}\Gamma_{\mu}{}^{\nu\rho}F_{\nu\rho}-4F_{\mu\nu}\Gamma^{\nu}\big{)}\leavevmode\nobreak\ .$ (22) If $\epsilon$ is a Killing spinor, ${\cal D}_{\mu}\epsilon=0$ the independent (Killing spinor) form bi-linears up to a Hodge duality operation are $\displaystyle f=\langle\epsilon,\epsilon\rangle_{D}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ K=\langle\epsilon,\Gamma_{a}\epsilon\rangle_{D}\,e^{a}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \omega^{1}={1\over 2}\langle\epsilon,\Gamma_{ab}\epsilon\rangle_{D}\,e^{a}\wedge e^{b}\leavevmode\nobreak\ ,$ (23) $\displaystyle\omega^{2}+i\omega^{3}={1\over 2}\langle\epsilon,\Gamma_{ab}\tilde{\epsilon}\rangle_{D}\,e^{a}\wedge e^{b}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (24) where $e^{a}$, $a=0,1,2,3,4$ is a co-frame such that the metric is $g=\eta_{ab}e^{a}e^{b}$, $\tilde{\epsilon}=\Gamma_{12}*\epsilon$ in the conventions of [18]. If $\epsilon$ is a Killing spinor, then $i\epsilon$, $\tilde{\epsilon}$ and $i\tilde{\epsilon}$ are also Killing spinors. Supersymmetric backgrounds of this theory preserve either 4 or 8 supersymmetries and have been classified in [20]. The conditions imposed by the Killing spinor equation on the form bilinears have been derived in [20]. 
Writing them in a TCFH form, one finds [15] that $\displaystyle\nabla_{\mu}f=-{2i\over\sqrt{3}}F_{\mu\nu}K^{\nu}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \nabla_{\mu}K_{\nu}={1\over\sqrt{3}}{}^{*}F_{\mu\nu\rho}K^{\rho}-{2i\over\sqrt{3}}F_{\mu\nu}f\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (25) $\displaystyle\nabla_{\mu}\omega^{r}_{\nu\rho}-2\sqrt{3}\,{}^{*}F_{\lambda\mu[\nu}\omega^{r\lambda}{}_{\rho]}=-2\sqrt{3}\,{}^{*}F_{\lambda[\nu\rho}\omega^{r\lambda}{}_{\mu]}+{2\over\sqrt{3}}g_{\mu[\nu}\,{}^{*}F_{\rho]\alpha\beta}\omega^{r\alpha\beta}\leavevmode\nobreak\ ,$ (26) where $\mu,\nu,\rho=0,1,2,3,4$ are spacetime indices and $r=1,2,3$. In what follows, it is also useful to state the TCFH for the form bilinears $\displaystyle\lambda^{1}={1\over 3!}\langle\epsilon,\Gamma_{abc}\epsilon\rangle_{D}\,e^{a}\wedge\cdots\wedge e^{c}\leavevmode\nobreak\ ,\lambda^{2}+i\lambda^{3}={1\over 3!}\langle\epsilon,\Gamma_{abc}\tilde{\epsilon}\rangle_{D}\,e^{a}\wedge\cdots\wedge e^{c}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ $ (27) which are Hodge duals to $\omega^{r}$. This reads $\displaystyle\nabla_{\mu}\lambda^{r}_{\nu_{1}\nu_{2}\nu_{3}}-3\sqrt{3}\,{}^{*}F_{\alpha\mu[\nu_{1}}\lambda^{r\alpha}{}_{\nu_{2}\nu_{3}]}=-4\sqrt{3}\,{}^{*}F_{\alpha[\mu\nu_{1}}\lambda^{r\alpha}{}_{\nu_{2}\nu_{3}]}+2\sqrt{3}g_{\mu[\nu_{1}}\,{}^{*}F_{\nu_{2}|\alpha\beta|}\lambda^{r\alpha\beta}{}_{\nu_{3}]}\leavevmode\nobreak\ .$ (28) To find whether the above TCFHs are associated with symmetries of a particle system propagating on the spacetime consider the action $\displaystyle S=-{1\over 2}\int dtd\theta\big{(}i\,g_{\mu\nu}D\phi^{\mu}\partial_{t}\phi^{\nu}+{1\over 6}c_{\mu\nu\rho}D\phi^{\mu}D\phi^{\nu}D\phi^{\rho}\big{)}\leavevmode\nobreak\ ,$ (29) where the superfields $\phi$ are as in (13) and $c$ is a spacetime 3-form which depends on $\phi$. 
Actions with such couplings have been considered before in [17]. This action is manifestly invariant under one supersymmetry. Next consider the fermionic symmetry $\displaystyle\delta\phi^{\mu}=\alpha I^{\mu}{}_{\nu}D\phi^{\nu}\leavevmode\nobreak\ ,$ (30) where the infinitesimal parameter $\alpha$ satisfies the reality condition $\alpha^{*}=\alpha$. Invariance of the action under this fermionic symmetry implies [21] that $\displaystyle\hat{\nabla}_{\mu}I_{\nu\rho}=\hat{\nabla}_{[\mu}I_{\nu\rho]}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ I_{\mu\nu}=I_{[\mu\nu]}\leavevmode\nobreak\ ,$ (31) $\displaystyle di_{I}c-3i_{I}dc=0\leavevmode\nobreak\ ,$ (32) where $\displaystyle\hat{\nabla}_{\mu}X^{\nu}=\nabla_{\mu}X^{\nu}+{1\over 2}c^{\nu}{}_{\mu\rho}X^{\rho}\leavevmode\nobreak\ ,$ (33) is a connection with skew-symmetric torsion $c$ and $i_{I}$ denotes the inner derivation444The inner derivation of a $n$-form $\chi$ with respect to the vector $(k-1)$-form $L$ is $i_{L}\chi={1\over(k-1)!(n-1)!}L^{\nu}{}_{\mu_{1}\dots\mu_{k-1}}\chi_{\nu\mu_{k}\dots\mu_{n+k-2}}dx^{\mu_{1}}\wedge\dots\wedge dx^{\mu_{k+n-2}}$. with respect to $I$. One can also consider the invariance of the action (29) under the infinitesimal (bosonic) transformations $\displaystyle\delta\phi^{\mu}=aL^{\mu}{}_{\nu\rho}D\phi^{\nu}D\phi^{\rho}\leavevmode\nobreak\ .$ (34) These transformations leave the action invariant provided [22] that $\displaystyle L_{\mu\nu\rho}=L_{[\mu\nu\rho]}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \hat{\nabla}_{\mu}L_{\nu_{1}\nu_{2}\nu_{3}}=\hat{\nabla}_{[\mu}L_{\nu_{1}\nu_{2}\nu_{3}]}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ di_{L}c+4i_{L}dc=0\leavevmode\nobreak\ .$ (35) Such transformations will be useful to explore (28). 
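For the cases used here, the inner derivation formula of footnote 4 specializes as follows (a direct substitution, with $I$ a vector 1-form, $k=2$, and $L$ a vector 2-form, $k=3$, acting on the 3-form $c$, $n=3$): $\displaystyle i_{I}c={1\over 2}I^{\nu}{}_{\mu_{1}}c_{\nu\mu_{2}\mu_{3}}dx^{\mu_{1}}\wedge dx^{\mu_{2}}\wedge dx^{\mu_{3}}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ i_{L}c={1\over 4}L^{\nu}{}_{\mu_{1}\mu_{2}}c_{\nu\mu_{3}\mu_{4}}dx^{\mu_{1}}\wedge\cdots\wedge dx^{\mu_{4}}\leavevmode\nobreak\ ,$ so $i_{I}c$ is a 3-form and $i_{L}c$ a 4-form, as required for the conditions (32) and (35) to make sense degree by degree.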
To identify the symmetries of a particle system with action (29) propagating in a supersymmetric $D=5$, ${\cal N}=1$ supergravity background, one has to match the conditions of the TCFH (26) with those of the invariance (32) of the particle system (29). For this, let us consider three independent transformations (30) generated by the tensors $I^{r}$, $r=1,2,3$, and identify $I^{r}$ with the 2-form bilinears $\omega^{r}$ of the TCFH, i.e. $I^{r}=\omega^{r}$. Comparing the TCFH connection on $\omega^{r}$ with that on $I^{r}$ in (32), one concludes that the coupling $c$ of the particle system should be chosen as $\displaystyle c=2\sqrt{3}\,{}^{*}F\leavevmode\nobreak\ .$ (36) Then consistency of (26) with (32) after this identification requires that $\displaystyle{}^{*}F_{\rho\alpha\beta}\omega^{r\alpha\beta}=0\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ di_{\omega^{r}}{}^{*}F-3i_{\omega^{r}}d{}^{*}F=0\leavevmode\nobreak\ .$ (37) These two conditions impose strong restrictions on the possible backgrounds for which the particle system (29) admits (30) as a symmetry. Before we turn to investigate (37) for various backgrounds, observe that $K$ is a Killing vector that leaves $F$ invariant. As a result, $\delta\phi^{\mu}=aK^{\mu}$ is a symmetry of (29). To find the backgrounds that satisfy (37), let us begin with the supersymmetric backgrounds of $D=5$, ${\cal N}=1$ supergravity that admit a time-like Killing spinor, i.e. a Killing spinor such that the vector bilinear $K$ is time-like [20].
For such backgrounds the Killing spinor can be written as $\epsilon=1V$ in the conventions of [18], where $V$ is a spacetime function, and the metric and 2-form flux are given as $\displaystyle ds^{2}=-V^{4}(dt+\beta)^{2}+V^{-2}d\mathring{s}^{2}\leavevmode\nobreak\ ,$ (38) $\displaystyle F={\sqrt{3}\over 2}de^{0}-{1\over 3}(d\beta)_{\rm asd}\leavevmode\nobreak\ ,$ (39) with $K=\partial_{t}$ and $e^{0}=V^{2}(dt+\beta)$, where $(d\beta)_{\rm asd}$ is the anti-self-dual component of $d\beta$ and $d\mathring{s}^{2}$ is a 4-dimensional hyper-Kähler metric. In our conventions $\omega^{r}$ are self-dual and in addition $d\omega^{r}=0$, $i_{K}\omega^{r}=0$ and $\displaystyle\omega^{r}_{\rho\mu}\omega^{s\rho}{}_{\nu}=\delta^{rs}(V^{4}g_{\mu\nu}+K_{\mu}K_{\nu})+\epsilon^{rs}{}_{q}V^{2}\omega^{q}_{\mu\nu}\leavevmode\nobreak\ .$ (40) Also one finds that ${\cal L}_{K}\omega^{r}=0$. The first condition in (37) implies that $\displaystyle V={\rm const}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (d\beta)_{ij}\omega^{rij}=0\leavevmode\nobreak\ .$ (41) As $V$ is constant, set for convenience $V=1$. Furthermore, the equations of motion imply that $(d\beta)_{\rm asd}=0$. As $d\omega^{r}=0$, set $\displaystyle d\beta=u_{r}\omega^{r}\leavevmode\nobreak\ ,$ (42) where $u$ is a constant vector. Without loss of generality pick $(u_{r})=(1,0,0)$. Implementing all the restrictions mentioned above, the resulting solution is expressed as $\displaystyle ds^{2}=-(dt+\beta)^{2}+d\mathring{s}^{2}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ F={\sqrt{3}\over 2}\omega^{1}\leavevmode\nobreak\ .$ (43) The solution can be viewed locally as a circle fibration over a 4-dimensional hyper-Kähler manifold whose $U(1)$ fibre curvature is given by $\omega^{1}$. It turns out that the last condition in (37) is also satisfied for the transformations (30) generated by $\omega^{2}$ and $\omega^{3}$.
Thus the action (29) with couplings given in (43) is invariant under the transformations (30) generated by $\omega^{2}$ and $\omega^{3}$. Note that this is unlike what has been encountered before in the context of supersymmetric sigma models, where two supersymmetries of the type (30) generated by $I^{2}$ and $I^{3}$, respectively, always imply the existence of a third supersymmetry generated by $I^{2}I^{3}$. However, deriving this requires some assumptions: in particular, that $I^{2}$ and $I^{3}$ are invertible and that the sigma model manifold is (almost) hyper-complex. Here, by contrast, $\omega^{2}$ and $\omega^{3}$ are not invertible as spacetime tensors and $\omega^{1}$ is singled out as the curvature of the $U(1)$ bundle over the hyper-Kähler manifold. It remains to solve (37) for $D=5$, ${\cal N}=1$ supergravity backgrounds that admit a null Killing spinor. In such a case the Killing vector bilinear $K=\partial_{u}$ is null and one can show that there is a co-frame $\displaystyle e^{+}=du+Vdv+n_{I}dx^{I}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ e^{-}=h^{-1}dv\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ e^{i}=h\delta^{i}_{I}dx^{I}\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ i=1,2,3\leavevmode\nobreak\ ,$ (44) where $(u,v,x^{I})$, $I=1,2,3$ are the spacetime coordinates and $V,h,n_{I}$ depend only on $x^{I}$ and $v$. Moreover one has that $\displaystyle\omega^{r}=e^{-}\wedge e^{r}\leavevmode\nobreak\ ,$ (45) and $\displaystyle ds^{2}=2e^{-}e^{+}+\delta_{ij}e^{i}e^{j}\leavevmode\nobreak\ ,$ (46) $\displaystyle F=-{1\over 4\sqrt{3}}\mathring{\epsilon}_{I}{}^{JK}h^{-2}(dn)_{JK}dv\wedge dx^{I}-{\sqrt{3}\over 4}\mathring{\epsilon}_{IJ}{}^{K}\partial_{K}hdx^{I}\wedge dx^{J}\leavevmode\nobreak\ ,$ (47) where $\mathring{\epsilon}$ is the Levi-Civita tensor of the flat metric.
The first condition in (37) for all $\omega^{r}$ implies that $h$ must depend only on $v$, $h=h(v)$. It turns out that this condition is also sufficient for the second condition in (37) to be satisfied. There are many solutions with $h=h(v)$. For example, one can take $n=0$, $h=1$, in which case the field equations imply that $V$ is a harmonic function on $\hbox{\mybb R}^{3}$ with delta function sources and the solution is a multi pp-wave. Another solution is to take $h=1$, $n=n(x^{I})$. Then the field equations imply, see e.g. [18], that $\displaystyle\partial^{I}dn_{IJ}=0\leavevmode\nobreak\ ,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \partial^{I}\partial_{I}V={1\over 6}dn^{IJ}dn_{IJ}\leavevmode\nobreak\ ,$ (48) i.e. $dn$ satisfies the Maxwell equation on $\hbox{\mybb R}^{3}$ and $V$ the Laplace equation with a source term. For a solution, set $n_{I}=\lambda_{IJ}x^{J}$, with $\lambda$ a constant 2-form, and $V=(1/3)\delta^{IJ}\lambda_{IK}\lambda_{JL}x^{K}x^{L}+V_{0}$, where $V_{0}$ is a harmonic function on $\hbox{\mybb R}^{3}$ with delta-function sources. The commutators of two (30) transformations can easily be computed and involve the Nijenhuis tensor of the $I$’s that generate the transformations. In the examples explored above, these do not satisfy the standard supersymmetry algebra in one dimension. This can easily be seen, as the $\omega^{r}$ do not satisfy the algebra of imaginary unit quaternions, see (40). Nevertheless, the commutator is a symmetry of the action (29). Next let us consider whether the 3-form bilinears (27) generate symmetries for the action (29). For this consider three transformations as in (34) generated by the tensors $L^{r}$ and identify $L^{r}=\lambda^{r}$.
Then consistency of (28) with (35) requires that $\displaystyle c=2\sqrt{3}{}^{*}F\,,\quad di_{\lambda^{r}}{}^{*}F+4i_{\lambda^{r}}d{}^{*}F=0\,,\quad F_{\gamma\mu}\omega^{r\gamma}{}_{\nu}-F_{\gamma\nu}\omega^{r\gamma}{}_{\mu}=0\,.$ (49) The third condition above arises from the requirement that the last term in the TCFH (28) must vanish. There are solutions to the conditions (49) for supersymmetric backgrounds with both timelike and null Killing spinors. In the former case, the last condition in (49) together with the field equations imply that $V=1$ and $(d\beta)_{\rm ads}=0$. Without loss of generality one again can choose $d\beta=\omega^{1}$. The spinning particles described by the action (29) on such backgrounds are invariant under the transformation generated by $\lambda^{1}$ but not under the transformations generated by $\lambda^{2}$ and $\lambda^{3}$. For backgrounds with a null Killing spinor, one again finds, as a consequence of the last equation in (49), that $h=h(v)$. Then the analysis proceeds as for the symmetries generated by $\omega^{r}$, giving the same backgrounds as solutions. The 2-form bilinears of both $D=4$ and $D=5$ supergravities that generate symmetries in the spinning particle actions we have investigated are not principal. This means that they do not have 2 independent eigenvalues. The existence of a principal CCKY form on a background implies the separability of the geodesic equations and some of the classical field equations, see e.g. [1, 2]. Therefore one should not expect the backgrounds we have investigated to exhibit similar separability properties unless they admit additional symmetries, e.g. additional rotational or axial symmetries. 
To give an explicit example, consider the solution of $D=5$ supergravity which is locally a circle bundle over a 4-dimensional hyper-Kähler manifold. We have found that a particle system in such a background admits additional fermionic symmetries. However, if one chooses a hyper-Kähler manifold without additional isometries, e.g. $K_{3}$, one should not expect the geodesic equations of the 5-dimensional solution to be separable. The separability properties of $D=5$, ${\cal N}=1$ supergravity backgrounds have been investigated before in [23, 24, 25]. These authors explored the properties of the generalized CKY equation, which is the CKY equation with respect to a connection with skew-symmetric torsion, like $\hat{\nabla}$ in (33). In particular they considered generalized closed CKY 2-forms, i.e. 2-forms which are closed with respect to $\hat{d}$, the exterior derivative associated to $\hat{\nabla}$. The 2-form bilinears $\omega^{r}$ we have considered here do not satisfy the same conditions as the generalized closed CKY forms. In particular, $\omega^{r}$ satisfy the generalized CKY equation as a consequence of (26) with skew-symmetric torsion $c$ given in (36). However for general supersymmetric solutions $\omega^{r}$ do not satisfy the closure (or indeed the co-closure) condition with respect to $\hat{\nabla}$, i.e. $\hat{d}\omega^{r}\not=0$. Of course as a consequence of the TCFH in (26) $\omega^{r}$ are closed, $d\omega^{r}=0$, in the standard sense. Therefore the gravitational backgrounds investigated in [23, 24] and in this paper are different. ## References * [1] M. Cariglia, “Hidden Symmetries of Dynamics in Classical and Quantum Physics,” Rev. Mod. Phys. 86 (2014) 1283 doi:10.1103/RevModPhys.86.1283 [arXiv:1411.1262 [math-ph]]. * [2] V. Frolov, P. Krtous and D. Kubiznak, “Black holes, hidden symmetries, and complete integrability,” Living Rev. Rel. 20 (2017) no.1, 6 doi:10.1007/s41114-017-0009-9 [arXiv:1705.05482 [gr-qc]]. * [3] B. 
Carter, “Global structure of the Kerr family of gravitational fields,” Phys. Rev. 174 (1968) 1559. * [4] B. Carter, “Hamilton-Jacobi and Schrodinger separable solutions of Einstein’s equations,” Commun. Math. Phys. 10 (1968) no.4, 280-310 doi:10.1007/BF03399503 * [5] B. Carter, “Killing Tensor Quantum Numbers And Conserved Currents In Curved Space,” Phys. Rev. D 16 (1977) 3395. * [6] S. Chandrasekhar, “The Solution Of Dirac’s Equation In Kerr Geometry,” Proc. Roy. Soc. Lond. A 349 (1976) 571. * [7] M. Walker and R. Penrose, “On quadratic first integrals of the geodesic equations for type [22] spacetimes,” Commun. Math. Phys. 18 (1970), 265-274 doi:10.1007/BF01649445 * [8] R. Penrose, Ann. N.Y. Acad. Sci. 224 (1973) 125. * [9] R. Floyd, “The dynamics of Kerr fields,” Ph.D. Thesis, London (1973). * [10] G. W. Gibbons, R. H. Rietdijk and J. W. van Holten, “SUSY in the sky,” Nucl. Phys. B 404 (1993), 42-64 doi:10.1016/0550-3213(93)90472-2 [arXiv:hep-th/9303112 [hep-th]]. * [11] F. A. Berezin and M. S. Marinov, “Particle Spin Dynamics as the Grassmann Variant of Classical Mechanics,” Annals Phys. 104 (1977), 336 doi:10.1016/0003-4916(77)90335-9 * [12] L. Brink, S. Deser, B. Zumino, P. Di Vecchia and P. S. Howe, Phys. Lett. B 64 (1976), 435 [erratum: Phys. Lett. B 68 (1977), 488] doi:10.1016/0370-2693(76)90115-5 * [13] L. Brink, P. Di Vecchia and P. S. Howe, “A Lagrangian Formulation of the Classical and Quantum Dynamics of Spinning Particles,” Nucl. Phys. B 118 (1977), 76-94 doi:10.1016/0550-3213(77)90364-9 * [14] A. Barducci, R. Casalbuoni and L. Lusanna, “Supersymmetries and the Pseudoclassical Relativistic electron,” Nuovo Cim. A 35 (1976), 377 doi:10.1007/BF02730291 * [15] G. Papadopoulos, “Twisted form hierarchies, Killing-Yano equations and supersymmetric backgrounds,” JHEP 07 (2020), 025 doi:10.1007/JHEP07(2020)025 [arXiv:2001.07423 [hep-th]]. * [16] J. Gutowski and G. Papadopoulos, “Eigenvalue estimates for multi-form modified Dirac operators,” J. Geom. Phys. 
160 (2021), 103954 doi:10.1016/j.geomphys.2020.103954 [arXiv:1911.02281 [math.DG]]. * [17] R. A. Coles and G. Papadopoulos, “The Geometry of the one-dimensional supersymmetric nonlinear sigma models,” Class. Quant. Grav. 7 (1990), 427-438 doi:10.1088/0264-9381/7/3/016 * [18] U. Gran, J. Gutowski and G. Papadopoulos, “Classification, geometry and applications of supersymmetric backgrounds,” Phys. Rept. 794 (2019), 1-87 doi:10.1016/j.physrep.2018.11.005 [arXiv:1808.07879 [hep-th]]. * [19] K. P. Tod, “All Metrics Admitting Supercovariantly Constant Spinors,” Phys. Lett. B 121 (1983), 241-244 doi:10.1016/0370-2693(83)90797-9 * [20] J. P. Gauntlett, J. B. Gutowski, C. M. Hull, S. Pakis and H. S. Reall, “All supersymmetric solutions of minimal supergravity in five-dimensions,” Class. Quant. Grav. 20 (2003), 4587-4634 doi:10.1088/0264-9381/20/21/005 [arXiv:hep-th/0209114 [hep-th]]. * [21] G. W. Gibbons, G. Papadopoulos and K. S. Stelle, “HKT and OKT geometries on soliton black hole moduli spaces,” Nucl. Phys. B 508 (1997), 623-658 doi:10.1016/S0550-3213(97)00599-3 [arXiv:hep-th/9706207 [hep-th]]. * [22] G. Papadopoulos, “Killing-Yano Equations with Torsion, Worldline Actions and G-Structures,” Class. Quant. Grav. 29 (2012), 115008 doi:10.1088/0264-9381/29/11/115008 [arXiv:1111.6744 [hep-th]]. * [23] D. Kubiznak, H. Kunduri and Y. Yasui, “Generalized Killing-Yano equations in D=5 gauged supergravity,” Phys. Lett. B 678 (2009) 240 [arXiv:0905.0722 [hep-th]]. * [24] T. Houri, D. Kubiznak, C. M. Warnick and Y. Yasui, “Generalized hidden symmetries and the Kerr-Sen black hole,” JHEP 1007 (2010) 055 [arXiv:1004.1032 [hep-th]]. * [25] T. Houri, D. Kubiznak, C. Warnick and Y. Yasui, “Symmetries of the Dirac operator with skew-symmetric torsion,” Class. Quant. Grav. 27 (2010) 185019 [arXiv:1002.3616 [hep-th]].
# Coloured Scalars Mediated Rare Charm Meson Decays to Invisible Fermions Svjetlana Fajfer<EMAIL_ADDRESS>Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, 1000 Ljubljana, Slovenia Anja Novosel <EMAIL_ADDRESS>Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, 1000 Ljubljana, Slovenia (August 27, 2024) ###### Abstract We consider the effects of coloured scalar mediators in $c\to u\,+$ invisibles decays. As invisibles we consider massive right-handed fermions. The coloured scalar $\bar{S}_{1}\equiv(\bar{3},1,-2/3)$, due to its coupling to weak-singlet up quarks and invisible right-handed fermions ($\chi$), is particularly interesting. Then, we consider $\tilde{R}_{2}\equiv(\bar{3},2,1/6)$, which as a weak doublet is subject to severe low-energy constraints. The $\chi$ mass is considered in the range $(m_{K}-m_{\pi})/2\leq m_{\chi}\leq(m_{D}-m_{\pi})/2$. We determine branching ratios for $D\to\chi\bar{\chi}$, $D\to\chi\bar{\chi}\gamma$ and $D\to\pi\chi\bar{\chi}$ for several $\chi$ masses, using the most constraining bounds. For $\bar{S}_{1}$, the most constraining observable is $D^{0}-\bar{D}^{0}$ mixing, while in the case of $\tilde{R}_{2}$ the strongest constraint comes from $B\to K\not{E}$. We find that in decays mediated by $\bar{S}_{1}$ the branching ratios can be $\mathcal{B}(D\to\chi\bar{\chi})<10^{-8}$ for $m_{\chi}=0.8$ GeV, $\mathcal{B}(D\to\chi\bar{\chi}\gamma)\sim 10^{-8}$ for $m_{\chi}=0.18$ GeV, while $\mathcal{B}(D^{+}\to\pi^{+}\chi\bar{\chi})$ can reach $\sim 10^{-8}$ for $m_{\chi}=0.18$ GeV. In the case of $\tilde{R}_{2}$ these decay rates are very suppressed. We find that future tau-charm factories and the Belle II experiment offer good opportunities to search for such processes. Both $\bar{S}_{1}$ and $\tilde{R}_{2}$ might have masses within LHC reach. 
## I Introduction Low-energy constraints on physics beyond the Standard Model (BSM) are well established for down-type quarks by numerous searches in processes with hadrons containing one $b$ and/or $s$ quark. In the up-quark sector, however, searches are performed in top decays, suitable for LHC studies, and in charm hadron processes at b-factories and/or $\tau$-charm factories. Recently, an extensive study of $c\to u\nu\bar{\nu}$ appeared in Ref. Bause:2020xzj , pointing out that observables which are very small in the Standard Model (SM) offer unique (null) tests of BSM physics. Namely, for charm flavour changing neutral current (FCNC) processes, severe Glashow-Iliopoulos-Maiani (GIM) suppression occurs. The $D^{0}\to\nu\bar{\nu}$ amplitude is helicity suppressed in the SM. The authors of Badin:2010uh made a very detailed study of heavy meson decays to invisibles, assuming that the invisibles can be scalars or fermions of both helicities. They found that in the SM the branching ratio is ${\mathcal{B}}(D^{0}\to\nu\bar{\nu})=1.1\times 10^{-31}$. The authors of Bhattacharya:2018msv then found that the decay width of $D^{0}\to{\it invisibles}$ in the SM is actually dominated by the contribution of $D^{0}\to\nu\bar{\nu}\nu\bar{\nu}$. The main message of these studies is that the SM provides no irreducible background to analyses of invisibles in decays of charm (and beauty) mesons. It was also suggested in Badin:2010uh that, in searches for a Dark Matter candidate, it might be important to investigate processes with $\chi\bar{\chi}\gamma$ in the final state, since a massless photon eliminates the helicity suppression. We also determine branching ratios for such decay modes. The authors of Ref. Bause:2020xzj computed the expected event rate for charm hadron decays to a final hadronic state and a neutrino - anti-neutrino pair. They found that in experiments like Belle II, which can reach per-mille efficiencies or better, these processes can be seen. 
In addition, the future FCC-ee might be capable of measuring branching ratios from $\mathcal{O}(10^{-6})$ down to $\mathcal{O}(10^{-8})$, in particular in $D^{0}$, $D^{+}_{(s)}$ and $\Lambda_{c}^{+}$ decay modes. On the other hand, the Belle collaboration has already reached a bound on the branching ratio, ${\mathcal{B}}(D^{0}\to{\it invisibles})<9.4\times 10^{-5}$, and the Belle II experiment is expected to improve it Kou:2018nap . Other $e^{+}e^{-}$ machines such as BESIII Ablikim:2019hff and the future FCC-ee collider running at the $Z$ energies Abada:2019lih ; Abada:2019zxq , with a significant charm production of ${\mathcal{B}}(Z\to c\bar{c})\simeq 0.22$ Abada:2019zxq , provide us with excellent tools for precision studies of charm decays. In this work we focus on particular scenarios with coloured scalars or leptoquarks as mediators of the invisible fermions' interaction with quarks. The coloured scalar might have electric charge $2/3$ or $-1/3$ depending on whether it interacts with up or down quarks. Instead of using the general assumption on the lepton flavour structure from Bause:2020xzj and justifying the Belle bound from Lai:2016uvj , we rely on observables coming from the $D^{0}-\bar{D}^{0}$ oscillations and, in the case of weak doublets, we include constraints from other flavour processes. Motivated by the previous works of Refs. Bause:2020xzj ; Bause:2020obd ; Badin:2010uh ; Golowich:2009ii ; MartinCamalich:2020dfe ; Faisel:2020php , we investigate $c\to u\bar{\chi}\chi$ with $\chi$ being a massive $SU(2)_{L}$ singlet. Coloured scalars mediate the interactions between invisible fermions and quarks. Namely, the term leptoquark usually denotes a boson interacting with quarks and leptons; however, the state $\bar{S}_{1}$ does not interact with the SM leptons and, therefore, it is more appropriate to call it a coloured scalar. Our approach is rather minimalistic, involving only two Yukawa couplings and the mass of the coloured scalar. 
The effective Lagrangian and the coloured scalar mediators are introduced in Sec. II. In Sec. III we describe the effects of the $\bar{S}_{1}$ mediator in rare charm decays, while in Sec. IV we give details of $\tilde{R}_{2}$ mediation in the same processes. Sec. V contains conclusions and outlook. ## II Coloured Scalars in $c\to u\chi\bar{\chi}$ In experimental searches, the transition $c\to u\,{\it invisibles}$ might be approached in processes $c\to u\not{E}$, with $\not{E}$ being missing energy. The invisibles can therefore be either SM neutrinos or new right-handed neutral fermions (having the quantum numbers of right-handed neutrinos), or scalars/vectors as suggested in Ref. Badin:2010uh . The authors of Refs. Bause:2020xzj ; Bause:2020obd considered in detail a general framework of New Physics (NP) in $c\to u$ invisibles, relying on $SU(2)_{L}$ invariance and data on charged lepton processes Bause:2020obd . They found that these assumptions allow upper limits as large as a few $10^{-5}$, while in the limit of lepton universality branching ratios can be as large as $10^{-6}$. To consider invisible fermions which have the quantum numbers of right-handed neutrinos and are massive, we extend the effective Lagrangian by the additional operators as described in Refs. 
Bhattacharya:2018msv ; Dorsner:2016wpm $\displaystyle\begin{aligned} \mathcal{L}_{\text{eff}}=&\sqrt{2}G_{F}\bigg{[}c^{LL}(\overline{u}_{L}\gamma_{\mu}c_{L})(\overline{\nu}_{L}\gamma^{\mu}\nu^{\prime}_{L})\\\ &+c^{RR}(\overline{u}_{R}\gamma_{\mu}c_{R})(\overline{\nu}_{R}\gamma^{\mu}\nu^{\prime}_{R})+c^{LR}(\overline{u}_{L}\gamma_{\mu}c_{L})(\overline{\nu}_{R}\gamma^{\mu}\nu^{\prime}_{R})\\\ &+c^{RL}(\overline{u}_{R}\gamma_{\mu}c_{R})(\overline{\nu}_{L}\gamma^{\mu}\nu^{\prime}_{L})\ +g^{LL}(\overline{u}_{L}c_{R})(\overline{\nu}_{L}\nu^{\prime}_{R})\\\ &+g^{RR}(\overline{u}_{R}c_{L})(\overline{\nu}_{R}\nu^{\prime}_{L})+g^{LR}(\overline{u}_{L}c_{R})(\overline{\nu}_{R}\nu^{\prime}_{L})\\\ &+g^{RL}(\overline{u}_{R}c_{L})(\overline{\nu}_{L}\nu^{\prime}_{R})+h^{LL}(\overline{u}_{L}\sigma^{\mu\nu}c_{R})(\overline{\nu}_{L}\sigma_{\mu\nu}\nu^{\prime}_{R})\\\ &+h^{RR}(\overline{u}_{R}\sigma^{\mu\nu}c_{L})(\overline{\nu}_{R}\sigma_{\mu\nu}\nu^{\prime}_{L})\bigg{]}+\text{h.c.}\end{aligned}$ (1) In Ref. Bause:2020xzj right-handed massless neutrinos are considered. Also, in Ref. Faisel:2020php the authors considered charm meson decays to invisible fermions with negligible masses. In the following, we consider massive right-handed fermions and use the notation $\nu_{R}\equiv\chi_{R}$. Following Dorsner:2016wpm , we list in Table 1 the interactions of the coloured scalars $\bar{S}_{1}$ and $\tilde{R}_{2}$ with the up quarks, and of $\tilde{R}_{2}$ and $S_{1}$ with the down quarks. Coloured Scalar | Invisible fermion ---|--- $S_{1}=(\bar{3},1,1/3)$ | $\bar{d}_{R}^{C\,i}\chi^{j}S_{1}$ $\bar{S}_{1}=(\bar{3},1,-2/3)$ | $\bar{u}^{C\,i}_{R}\chi^{j}\bar{S}_{1}$ $\tilde{R}_{2}=(\bar{3},2,1/6)$ | $\bar{u}_{L}^{i}\chi^{j}\tilde{R}_{2}^{2/3}$ $\tilde{R}_{2}=(\bar{3},2,1/6)$ | $\bar{d}_{L}^{i}\chi^{j}\tilde{R}_{2}^{-1/3}$ Table 1: The interactions of the coloured scalars $\bar{S}_{1}$, $S_{1}$ and $\tilde{R}_{2}$ with invisible fermions and quarks. Here we use only the right-handed couplings of $S_{1}$. 
Indices $i,j$ refer to quark generations. We concentrate only on coloured scalars and scalar leptoquarks due to the difficulties with vector leptoquarks. Namely, the simplest way to consider vector leptoquarks in an ultra-violet complete theory is when they play the role of gauge bosons. For example, $U_{1}$ is one of the gauge bosons in some Pati-Salam unification schemes Bordone:2018nbg ; DiLuzio:2017vat . However, such theories contain other particles with masses close to that of $U_{1}$, along with many new parameters, making them rather difficult to use without additional assumptions. The coloured scalars contributing to the transition $c\to u\chi\bar{\chi}$ have the following Lagrangians, as already anticipated in Dorsner:2016wpm $\mathcal{L}(\bar{S}_{1})\supset\bar{y}^{RR}_{1\,ij}\bar{u}_{R}^{C\,i}\,\chi_{R}^{j}\,\bar{S}_{1}+\textrm{h.c.},$ (2) $\mathcal{L}(\tilde{R}_{2})\supset(V\tilde{y}^{LR}_{2})_{ij}\bar{u}_{L}^{i}\,\chi_{R}^{j}\tilde{R}_{2}^{2/3}+\tilde{y}^{LR}_{2\,ij}\bar{d}_{L}^{i}\,\chi_{R}^{j}\tilde{R}_{2}^{-1/3}+\textrm{h.c.}.$ (3) Here, we give only the terms containing interactions of quarks with the right-handed $\chi$. The $S_{1}$ scalar leptoquark, in principle, might mediate $c\to u\chi\bar{\chi}$ at the loop level, with one $W$ boson changing down-type quarks to $u$ and $c$. Obviously, such a loop process is also suppressed by the loop factor $1/(16\pi^{2})$ and by $G_{F}$, making it negligible. 
Also, due to the right-handed nature of $\chi$, one can immediately see that in the case of $\bar{S}_{1}$ the effective Lagrangian has only one contribution, ${\mathcal{L}}_{\text{eff}}={\sqrt{2}}G_{F}c^{RR}\left(\bar{u}_{R}\gamma_{\mu}c_{R}\right)\left(\bar{\chi}_{R}\gamma^{\mu}\chi_{R}\right),$ (4) with $c^{RR}=\frac{v^{2}}{2M_{\bar{S}_{1}}^{2}}\bar{y}^{RR}_{1\,c\chi}\,\bar{y}^{RR\ast}_{1\,u\chi}.$ (5) In the case of $\tilde{R}_{2}$, ${\mathcal{L}}_{\text{eff}}={\sqrt{2}}G_{F}c^{LR}\left(\bar{u}_{L}\gamma_{\mu}c_{L}\right)\left(\bar{\chi}_{R}\gamma^{\mu}\chi_{R}\right),$ (6) with $c^{LR}=-\frac{v^{2}}{2M_{\tilde{R}_{2}}^{2}}\left(V\tilde{y}_{2}^{LR}\right)_{u\chi}\left(V\tilde{y}_{2}^{LR}\right)_{c\chi}^{\ast}.$ (7) For $\chi$ masses kinematically allowed in the $c\to u\chi\bar{\chi}$ decay, one can relate this amplitude to $b\to s\chi\bar{\chi}$ or $s\to d\chi\bar{\chi}$. However, it was found Ruggiero that the experimental rates for $K\to\pi\nu\bar{\nu}$ are very close to the SM rate Buras:2015qea , leaving very little room for NP contributions. Therefore, we avoid this kinematic region and consider the mass of $\chi$ to be $m_{\chi}\geq(m_{K}-m_{\pi})/2$, while the charm decays allow $m_{\chi}\leq(m_{D}-m_{\pi})/2$. For our further study it is very important that $\chi$ is a weak singlet and therefore LHC searches in high-$p_{T}$ lepton tails Angelescu:2020uug ; Fuentes-Martin:2020lea are not applicable as constraints on the interactions in the cases we consider. However, further study of final states containing mono-jets and missing energy at the LHC and future high-luminosity colliders will shed more light on these processes. ## III $\bar{S}_{1}$ in $c\to u\chi\bar{\chi}$ Due to its quantum numbers, the coloured scalar $\bar{S}_{1}$ can couple $\chi$ only to up-type quarks. Most generally, the number of $\chi$'s can be three and the matrix $y_{1\,ij}^{RR}$ can have $9\times 2$ parameters. Here, we consider one $\chi$, which can couple to both $u$ and $c$ quarks. 
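As a quick numerical illustration of the tree-level matching in Eq. (5) (a sketch; the 1.5 TeV benchmark mass and the unit Yukawa product are assumptions for illustration, not values fixed by the text):

```python
# Eq. (5): c^RR = v^2 / (2 M^2) * ybar_{1,c chi} * ybar*_{1,u chi}
v = 246.0        # electroweak vev in GeV
M = 1500.0       # assumed benchmark mass of S1-bar, in GeV
yy = 1.0         # assumed Yukawa product |ybar_c ybar_u*|
cRR = v**2 / (2.0 * M**2) * yy
print(f"c^RR = {cRR:.2e}")   # dimensionless Wilson coefficient, ~1.3e-2
```

The $1/M^{2}$ scaling makes clear why TeV-scale mediators with order-one couplings still give small Wilson coefficients.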
These two couplings might enter amplitudes for processes with down-type quarks at the loop level, as discussed in Fajfer:2020tqf . Obviously, due to the right-handed nature of $\chi$, one can immediately see that in the case of $\bar{S}_{1}$ the effective Lagrangian has only the contribution $\mathcal{L}_{\text{eff}}=\sqrt{2}G_{F}\frac{v^{2}}{2M^{2}}\bar{y}_{1\,c\chi}^{RR}\bar{y}^{RR*}_{1\,u\chi}(\overline{u}_{R}\gamma_{\mu}c_{R})(\overline{\chi}_{R}\gamma^{\mu}\chi_{R}).$ (8) First, we discuss the constraints from $D^{0}-\bar{D}^{0}$ mixing and then consider the exclusive decays $D^{0}\to\chi\bar{\chi}$, $D^{0}\to\bar{\chi}\chi\gamma$, and $D\to\pi\chi\bar{\chi}$. The authors of Ref. Faisel:2020php considered scalar leptoquarks allowing each up-type quark to couple to a different flavour of lepton or right-handed neutrino. In such a way, they avoid constraints from the $D^{0}-\bar{D}^{0}$ mixing. #### III.0.1 Constraints from $D^{0}-\bar{D}^{0}$ The strongest constraints on the $\chi$ interactions with $u$ and $c$ come from the $D^{0}-\bar{D}^{0}$ oscillations. The interactions in Eqs. (2) and (3) can generate the transition $D^{0}-\bar{D}^{0}$. 
The coloured scalar $\bar{S}_{1}$ contributes to the operator entering the effective Lagrangian Fajfer:2015mia ; Dorsner:2016wpm ${\mathcal{L}}_{\text{eff}}^{Dmix}=-C_{6}\left(\bar{c}\gamma_{\mu}P_{R}u\right)\left(\bar{c}\gamma^{\mu}P_{R}u\right),$ (9) with the Wilson coefficient given by $C_{6}=\frac{1}{64\pi^{2}M_{\bar{S}_{1}}^{2}}\left(\bar{y}^{RR}_{1\,c\chi}\right)^{2}\left(\bar{y}^{RR\ast}_{1\,u\chi}\right)^{2}.$ (10) The standard way to write the hadronic matrix element is $\left<\bar{D}^{0}|(\bar{u}\gamma_{\mu}P_{R}c)(\bar{u}\gamma^{\mu}P_{R}c)|D^{0}\right>=\frac{2}{3}m_{D}^{2}f_{D}^{2}B_{D}$, with the bag parameter $B_{D}(3\,{\rm GeV})=0.757(27)(4)$, calculated in the MS scheme in lattice QCD Carrasco:2015pra , and the D meson decay constant defined as $\left<0|\bar{u}\gamma_{\mu}\gamma_{5}c|D(p)\right>=if_{D}p_{\mu}$, with $f_{D}=0.2042$ GeV Zyla:2020zbs . Due to large nonperturbative contributions, the SM contribution is not well known. Therefore, in the absence of CP violation, a robust bound on the product of the couplings can be obtained by requiring that the mixing frequency should be smaller than the world average $x=2|M_{12}|/\Gamma=(0.43^{+0.10}_{-0.11})\%$ by HFLAV Amhis:2019ckw . The bound on this Wilson coefficient can be derived following Fajfer:2020tqf ; Fajfer:2008tm $\left|rC_{6}(M_{\bar{S}_{1}})\right|\frac{2m_{D}f_{D}^{2}B_{D}}{3\Gamma_{D}}<x,$ (11) with a renormalisation factor $r=0.76$ due to the running of $C_{6}$ from the scale $M_{\bar{S}_{1}}\simeq 1.5$ TeV down to $3$ GeV. One can derive $|C_{6}|<2.3\times 10^{-13}$ ${\rm GeV}^{-2}$, or $\left|\bar{y}^{RR}_{1\,c\chi}\,\bar{y}^{RR\ast}_{1\,u\chi}\right|<1.2\times 10^{-5}\,M_{\bar{S}_{1}}/{\rm GeV},$ (12) or, equivalently, $c^{RR}<\frac{0.363\,{\rm GeV}}{M_{\bar{S}_{1}}({\rm GeV})}.$ (13) Figure 1: The product of Yukawa couplings $\left|\bar{y}^{RR}_{1\,c\chi}\,\bar{y}^{RR\ast}_{1\,u\chi}\right|$ as a function of the $\bar{S}_{1}$ mass. 
The pink line denotes the bound derived from the Belle result Lai:2016uvj , while the turquoise one is obtained with the bound from $D^{0}-\bar{D}^{0}$ oscillations. #### III.0.2 $D^{0}\to\chi\bar{\chi}$ The amplitude for this process can be written as ${\mathcal{M}}(D^{0}\to\chi\bar{\chi})=\frac{\sqrt{2}}{2}G_{F}f_{D}c^{RR}m_{\chi}\bar{u}_{\chi}(p_{1})\gamma_{5}v_{\chi}(p_{2}),$ (14) giving the branching ratio ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})=\frac{1}{\Gamma_{D}}\frac{G_{F}^{2}f_{D}^{2}m_{D}}{16\pi}\left|c^{RR}\right|^{2}m_{\chi}^{2}\sqrt{1-\frac{4m_{\chi}^{2}}{m_{D}^{2}}}.$ (15) Using the Belle bound ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})<9.4\times 10^{-5}$ Lai:2016uvj , one can easily find the bound on the Wilson coefficient, $\left|c^{RR}\right|_{Belle}<0.046$. This value is derived for the mass $m_{\chi}=0.8$ GeV. We analyse the dependence on the mass of $\bar{S}_{1}$, allowing the mass of $\chi$ to be in the range $(m_{K}-m_{\pi})/2<m_{\chi}<(m_{D}-m_{\pi})/2$, and assuming branching ratios ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})<10^{-10}$, $10^{-9}$ and $10^{-8}$, with $\left|\bar{y}^{RR}_{1\,c\chi}\,\bar{y}^{RR\ast}_{1\,u\chi}\right|=1$. We present our result in Fig. 2 and find that, under these reasonable assumptions, the mass of $\bar{S}_{1}$ can be within LHC reach. $m_{\chi}$ (GeV) | ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})_{D-\bar{D}}$ ---|--- 0.18 | $<1.1\times 10^{-9}$ 0.50 | $<7.4\times 10^{-9}$ 0.80 | $<1.1\times 10^{-8}$ Table 2: Branching ratios for ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})$ for three selected values of $m_{\chi}$. The constraint from the $D^{0}-\bar{D}^{0}$ mixing is used, with $c^{RR}\leq 3.63\times 10^{-4}$, assuming $M_{\bar{S}_{1}}=1000$ GeV. Figure 2: The allowed mass region for $\bar{S}_{1}$ in the range $(m_{K}-m_{\pi})/2<m_{\chi}<(m_{D}-m_{\pi})/2$. The regions are obtained assuming ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})<10^{-10}$, $10^{-9}$ and $10^{-8}$, for the product $\left|\bar{y}^{RR}_{1\,c\chi}\,\bar{y}^{RR\ast}_{1\,u\chi}\right|=1$. 
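Eq. (15) can be inverted to reproduce the quoted Wilson-coefficient bound from the Belle limit (a numerical sketch; the $D^{0}$ lifetime $\tau=4.10\times 10^{-13}$ s used to obtain $\Gamma_{D}$ is an assumed PDG input, not a value given in the text):

```python
import math

# Invert Eq. (15): B = (1/Gamma_D) * G_F^2 f_D^2 m_D/(16 pi) * |c^RR|^2 m_chi^2 * beta
GF, fD, mD = 1.1664e-5, 0.2042, 1.8648   # GeV units
GammaD = 6.582e-25 / 4.10e-13            # hbar / tau(D0), in GeV (assumed PDG lifetime)
mchi, Bmax = 0.8, 9.4e-5                 # Belle bound, evaluated at m_chi = 0.8 GeV
beta = math.sqrt(1.0 - 4.0 * mchi**2 / mD**2)
B_over_c2 = GF**2 * fD**2 * mD / (16.0 * math.pi) * mchi**2 * beta / GammaD
cRR_belle = math.sqrt(Bmax / B_over_c2)
print(f"|c^RR|_Belle < {cRR_belle:.3f}")  # ~0.046-0.047, consistent with the text
```

The same inversion at other $m_{\chi}$ values shows how the bound weakens as the phase-space factor $\beta$ and the $m_{\chi}^{2}$ helicity factor shrink.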
#### III.0.3 $D^{0}\to\chi\bar{\chi}\gamma$ The authors of Ref. Badin:2010uh suggested that the helicity suppression, present in the $D^{0}\to\chi\bar{\chi}$ amplitude for $m_{\chi}=0$, is lifted by an additional photon in the final state, and therefore $D^{0}\to\chi\bar{\chi}\gamma$ might bring additional information on the detection of invisibles in the final state. They found that the branching ratio is $\displaystyle{\mathcal{B}}(D^{0}\to\chi\bar{\chi}\gamma)=\frac{G_{F}^{2}F_{DQ}^{2}f_{D}^{2}|c^{RR}|^{2}m_{D}^{2}\alpha}{1152\pi^{2}\Gamma_{D}\sqrt{1-4x_{\chi}^{2}}}Y(x_{\chi}).$ (16) In the above equation $x_{\chi}=m_{\chi}/m_{D}$, $F_{DQ}=2/3(-1/(m_{D}-m_{c})+1/m_{c})$, $f_{D}=0.2042$ GeV Zyla:2020zbs and $Y(x_{\chi})$ is given in the Appendix. The coefficient $c^{RR}$ is constrained by Eq. (13). $m_{\chi}$ (GeV) | ${\mathcal{B}}(D^{0}\to\chi\bar{\chi}\gamma)_{D-\bar{D}}$ | ${\mathcal{B}}(D^{0}\to\chi\bar{\chi}\gamma)_{Belle}$ ---|---|--- 0.18 | $<2.1\times 10^{-11}$ | $<1.3\times 10^{-7}$ 0.50 | $<6.9\times 10^{-12}$ | $<6.3\times 10^{-9}$ 0.80 | $<8.4\times 10^{-14}$ | $<2.2\times 10^{-10}$ Table 3: Bounds on the branching ratio ${\mathcal{B}}(D^{0}\to\chi\bar{\chi}\gamma)$. In the second column the constraint from the $D^{0}-\bar{D}^{0}$ mixing is used, assuming $M_{\bar{S}_{1}}=1000$ GeV. In the third column the Belle bound ${\mathcal{B}}(D^{0}\to\not{E})<9.4\times 10^{-5}$ is used. Comparing these results with the SM result presented in Ref. Badin:2010uh , ${\mathcal{B}}(D^{0}\to\nu\bar{\nu}\gamma)_{SM}=3.96\times 10^{-14}$, we see that the existing Belle bound allows a significant branching ratio, while the bounds from the $D^{0}-\bar{D}^{0}$ mixing, for larger values of $m_{\chi}$, push the branching ratio close to the SM result. 
Due to the mass of $\chi$, the photon energy can be in the range $0\leq E_{\gamma}\leq(m_{D}^{2}-4m_{\chi}^{2})/(2m_{D})$, which in principle would allow one to distinguish the SM contribution from the contributions with massive invisible fermions. #### III.0.4 $D\to\pi\chi\bar{\chi}$ Due to the GIM-mechanism cancellation, rare charm decays are usually dominated by long-distance contributions. Long-distance contributions to the exclusive decay channel $D\to\pi\nu\bar{\nu}$ were considered in Ref. Burdman:2001tf . For example, the branching ratio $\mathcal{B}(D^{+}\to\pi^{+}\rho^{0}\to\pi^{+}\nu\bar{\nu})<5\times 10^{-16}$. The authors of Burdman:2001tf discussed another possibility, $D^{+}\to\tau^{+}\nu\to\pi^{+}\bar{\nu}\nu$, and found that the branching ratio should be smaller than $1.8\times 10^{-16}$. An interesting study of these effects was done in Ref. Kamenik:2009kc , implying that in order to avoid these effects one should cut on the invariant $\chi\bar{\chi}$ mass squared, keeping $q^{2}>q_{cut}^{2}=(m_{\tau}^{2}-m_{\pi}^{2})(m_{D}^{2}-m_{\tau}^{2})/m_{\tau}^{2}$. The amplitude for $D\to\pi\chi\bar{\chi}$ can be written as $\displaystyle{\cal M}(D\to\pi\chi\bar{\chi})$ $\displaystyle=$ $\displaystyle{\sqrt{2}}G_{F}c^{RR}\bar{u}_{\chi}(p_{1})\gamma_{\mu}P_{R}v_{\chi}(p_{2})$ (17) $\displaystyle\left<\pi(k)|\bar{u}\gamma^{\mu}P_{R}c|D(p)\right>,$ with the standard form-factor definition $\displaystyle\left<\pi(k)|\bar{u}\gamma^{\mu}(1\pm\gamma_{5})c|D(p)\right>=f_{+}(q^{2})\left[(p+k)^{\mu}-\frac{m_{D}^{2}-m^{2}_{\pi}}{q^{2}}q^{\mu}\right]$ $\displaystyle+f_{0}(q^{2})\frac{m^{2}_{D}-m^{2}_{\pi}}{q^{2}}q^{\mu},$ (18) with $q=p-k$. We follow the update of the form factors in Ref. Fleischer:2019wlx . This enables us to write the amplitudes in the form given in Ref. 
Fajfer:2015mia $\displaystyle\mathcal{M}(D(p)\to\pi(k)\chi(p_{1})\bar{\chi}(p_{2}))=\frac{\sqrt{2}}{2}G_{F}[V(q^{2})\bar{u}_{\chi}(p_{1})\not{p}v_{\chi}(p_{2})$ $\displaystyle+A(q^{2})\bar{u}_{\chi}(p_{1})\not{p}\gamma_{5}v_{\chi}(p_{2})+P(q^{2})\bar{u}_{\chi}(p_{1})\gamma_{5}v_{\chi}(p_{2})],$ (19) with the following definitions $\displaystyle V(q^{2})$ $\displaystyle=A(q^{2})\equiv c^{RR}f_{+}(q^{2})$ (20) $\displaystyle P(q^{2})$ $\displaystyle\equiv-c^{RR}m_{\chi}\left[f_{+}(q^{2})-\frac{m_{D}^{2}-m_{\pi}^{2}}{q^{2}}(f_{0}(q^{2})-f_{+}(q^{2}))\right].$ We can write the differential decay rate as $\frac{d\mathcal{B}(D\rightarrow\pi\bar{\chi}\chi)}{dq^{2}}=\frac{1}{\Gamma_{D}}N\lambda^{1/2}\beta\left[2a(q^{2})+\frac{2}{3}c(q^{2})\right],$ (21) with the notation $\lambda\equiv\lambda(m_{D}^{2},m_{\pi}^{2},q^{2})$, ($\lambda(x,y,z)=(x+y+z)^{2}-4(xy+yz+zx)$), $\beta=\sqrt{1-4m_{\chi}^{2}/q^{2}}$ and $N=\frac{G_{F}^{2}}{64(2\pi)^{3}m_{D}^{3}}$. Note that in the case of the charged charm meson the differential decay rate is multiplied by 2 compared to the neutral $D$. Figure 3: Branching fractions for $D^{+}\to\pi^{+}\chi\bar{\chi}$ and $D^{0}\to\pi^{0}\chi\bar{\chi}$ as a function of $m_{\chi}$. The integration bounds are $4m_{\chi}^{2}\leq q^{2}\leq(m_{D}-m_{\pi})^{2}$ in the case of $m_{\chi}=0.5,\,0.8$ GeV, while for $m_{\chi}=0.18$ GeV the lower bound $q^{2}_{cut}$ from Ref. Kamenik:2009kc is used instead, implying that the lowest mass of the invisibles should be searched for in the region $m_{\chi}\geq\sqrt{q^{2}_{cut}/4}\simeq 0.29$ GeV. This enables us to avoid the region in which the effects of the long-distance dynamics dominate. 
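The $q^{2}$ cut of Ref. Kamenik:2009kc quoted in the caption can be reproduced directly (a sketch; the masses are assumed PDG values, with $m_{D}$ taken as the charged $D^{+}$ mass since the cut targets the $D^{+}\to\tau^{+}\nu$ chain):

```python
import math

# q^2_cut = (m_tau^2 - m_pi^2)(m_D^2 - m_tau^2) / m_tau^2, and the implied
# lowest invisible mass m_chi >= sqrt(q^2_cut)/2.
m_tau, m_pi, m_D = 1.77686, 0.13957, 1.86966   # GeV (assumed PDG values)
q2cut = (m_tau**2 - m_pi**2) * (m_D**2 - m_tau**2) / m_tau**2
mchi_min = math.sqrt(q2cut) / 2.0
print(f"q^2_cut = {q2cut:.3f} GeV^2  ->  m_chi >= {mchi_min:.2f} GeV")  # ~0.29 GeV
```

Requiring $4m_{\chi}^{2}\geq q_{cut}^{2}$ guarantees that the whole kinematically allowed $q^{2}$ range lies above the cut, so the long-distance $\tau$ contamination region is avoided automatically.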
$m_{\chi}$ (GeV) | ${\mathcal{B}}(D^{0}\to\pi^{0}\chi\bar{\chi})_{D-\bar{D}}$ | ${\mathcal{B}}(D^{+}\to\pi^{+}\chi\bar{\chi})_{D-\bar{D}}$ ---|---|--- 0.18 | $<5.9\times 10^{-9}$ | $<3.0\times 10^{-8}$ 0.50 | $<3.2\times 10^{-9}$ | $<1.6\times 10^{-8}$ 0.80 | $<1.5\times 10^{-10}$ | $<7.6\times 10^{-10}$ Table 4: Branching ratios for ${\mathcal{B}}(D\to\pi\chi\bar{\chi})$. In the second and third columns the constraint from the $D^{0}-\bar{D}^{0}$ mixing is used, assuming $M_{\bar{S}_{1}}=1000$ GeV. In the case $m_{\chi}=0.18$ GeV, the integration variable is cut at $q_{cut}^{2}$, as described in the text. One can use the Belle bound Lai:2016uvj for $\mathcal{B}(D\to\not{E})$ and determine $c^{RR}$ from $D^{0}\to\chi\bar{\chi}$ for each $\chi$ mass. We obtain $\mathcal{B}(D^{0}\to\pi^{0}\chi\bar{\chi})_{Belle}\leq 4.9\times 10^{-4},\,4.0\times 10^{-5},\,1.2\times 10^{-6}$ and $\mathcal{B}(D^{+}\to\pi^{+}\chi\bar{\chi})_{Belle}\leq 2.5\times 10^{-3},\,2.1\times 10^{-4},\,6.1\times 10^{-6}$ for $m_{\chi}=0.18,\,0.5,\,0.8$ GeV, respectively. Obviously, using the current Belle bound in the Wilson coefficient leads to a significant increase of the branching ratios for both decay modes. Although the charm meson mixing is very constraining for the relevant couplings, the calculated branching ratios, reaching the order of $10^{-8}$, might be observable at future tau-charm factories and the Belle II experiment. ## IV $\tilde{R}_{2}$ in $c\to u\chi\bar{\chi}$ The $\tilde{R}_{2}$ leptoquark is a weak doublet and it interacts with quark doublets (3). Therefore, the relevant couplings, $\tilde{y}^{LR}_{2\,s\chi}\,\tilde{y}_{2\,b\chi}^{LR\ast}$, can be constrained from the $b\to s\chi\bar{\chi}$ and $s\to d\chi\bar{\chi}$ decays, as well as from observables coming from the $B_{s}-\bar{B}_{s}$, $B_{d}-\bar{B}_{d}$ and $K^{0}-\bar{K}^{0}$ oscillations, as in Fajfer:2020tqf . 
We consider the most constraining bounds, coming from the decay $B\to K\not{E}$ and from the $B_{s}-\bar{B}_{s}$ oscillations, relevant for the $\chi$ mass region $(m_{K}-m_{\pi})/2<m_{\chi}<(m_{D}-m_{\pi})/2$. The decay $B\to K\not{E}$ was recently studied by the authors of Ref. Li:2020dpc . They pointed out that the most constraining bound on the rate $B\to K\not{E}$ is obtained when the SM branching ratio for $B\to K\nu\bar{\nu}$ is subtracted from the experimental bound on $\mathcal{B}(B^{+}\to K^{+}\not{E})$. They derived ${\mathcal{B}}(B\to K\not{E})<9.7\times 10^{-6}$ as the strongest bound among $B\to H_{s}\not{E}$ ($H_{s}$ is a hadron containing the $s$ quark).

#### IV.0.1 Constraints from $B\to K\not{E}$ and $B_{s}-\bar{B}_{s}$ oscillations

The amplitude for $B\to K\chi\bar{\chi}$ can be written as $\displaystyle{\cal M}(B\to K\chi\bar{\chi})$ $\displaystyle=$ $\displaystyle{\sqrt{2}}G_{F}c^{LR}_{B}\bar{u}_{\chi}(p_{1})\gamma_{\mu}P_{R}v_{\chi}(p_{2})$ (22) $\displaystyle\left<K(k)|\bar{s}\gamma^{\mu}P_{L}b|B(p)\right>.$ For the Wilson coefficient $c^{LR}_{B}$ it is easy to find Dorsner:2016wpm $c^{LR}_{B}=-\frac{v^{2}}{2M^{2}_{\tilde{R}_{2}}}\tilde{y}^{LR}_{2\,s\chi}\tilde{y}^{LR\ast}_{2\,b\chi}.$ (23) The integration over the phase space depends on the chosen mass $m_{\chi}$; here we take the same $\chi$ masses as in the $D$ decays, from the range $(m_{K}-m_{\pi})/2<m_{\chi}<(m_{D}-m_{\pi})/2$. The resulting bounds on the Wilson coefficient in Eq. (23) are $|c^{LR}_{B}|<3.3\times 10^{-4}$, $<4.9\times 10^{-4}$ and $<9.1\times 10^{-4}$ for $m_{\chi}=0.18,\,0.50,\,0.80$ GeV, respectively. There are two box diagrams with $\chi$ inside the box contributing to the $B_{s}-\bar{B}_{s}$ oscillations.
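Eq. (23) can be inverted to translate a Wilson-coefficient bound into a bound on the leptoquark coupling product for a given mediator mass. The sketch below is a minimal illustration; it assumes $v=246$ GeV for the electroweak vev, and the 1 TeV mass is an arbitrary example choice, not a value used in the text.

```python
v = 246.0  # electroweak vev in GeV (assumption for this illustration)

def coupling_bound(c_max, M_lq):
    """Upper bound on |y~_{2 s chi} y~*_{2 b chi}| implied by |c^LR_B| < c_max,
    from inverting Eq. (23), for a leptoquark of mass M_lq (GeV)."""
    return 2.0 * c_max * M_lq ** 2 / v ** 2

# Wilson-coefficient bounds quoted in the text for m_chi = 0.18, 0.50, 0.80 GeV
for m_chi, c_max in [(0.18, 3.3e-4), (0.50, 4.9e-4), (0.80, 9.1e-4)]:
    print(m_chi, coupling_bound(c_max, 1000.0))
```

The bound on the coupling product grows quadratically with the mediator mass, which is why heavier leptoquarks are less constrained by the decay rate alone.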
The contribution of the $\tilde{R}_{2}$ box diagrams to the effective Lagrangian for the $B_{s}-\bar{B}_{s}$ oscillation is $\displaystyle{\cal L}^{NP}_{\Delta B=2}=-\frac{1}{128\,\pi^{2}}\frac{\left(\tilde{y}^{LR}_{2\,s\chi}\right)^{2}\left(\tilde{y}^{LR\,\ast}_{2\,b\chi}\right)^{2}}{M_{\tilde{R}_{2}}^{2}}$ $\displaystyle\times\left(\bar{s}\gamma_{\mu}P_{R}b\right)\,\left(\bar{s}\gamma^{\mu}P_{R}b\right).$ (24) We can understand this result in terms of the recent study of new physics in the $B_{s}-\bar{B}_{s}$ oscillation in DiLuzio:2019jyq . The authors of DiLuzio:2019jyq parametrized the New Physics (NP) contribution containing the right-handed operators as ${\cal L}_{\Delta B=2}^{NP}\supset-\frac{4G_{F}}{\sqrt{2}}(V_{tb}V_{ts}^{*})^{2}C_{bs}^{RR}\left(\bar{s}\gamma_{\mu}P_{R}b\right)\,\left(\bar{s}\gamma^{\mu}P_{R}b\right).$ (25) Following their notation, one can write the modification of the SM contribution by the NP as in Ref. DiLuzio:2019jyq $\frac{\Delta M_{s}^{SM+NP}}{\Delta M_{s}^{SM}}=\left|1+\frac{\eta^{6/23}}{R_{loop}^{SM}}\,C_{bs}^{RR}\right|.$ (26) They found that $R_{loop}^{SM}=(1.31\pm 0.010)\times 10^{-3}$ and $\eta=\alpha_{s}(\mu_{NP})/\alpha_{s}(\mu_{b})$.
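The mass-difference modification of Eq. (26) is a one-line formula; the snippet below evaluates it for an illustrative real-valued $C_{bs}^{RR}$. The value $\eta=0.4$ is a placeholder assumption for $\alpha_{s}(\mu_{NP})/\alpha_{s}(\mu_{b})$ at a TeV-scale matching point, not a number taken from the text.

```python
def dms_ratio(C_bs_RR, R_loop=1.31e-3, eta=0.4):
    """|Delta M_s^{SM+NP} / Delta M_s^{SM}| from Eq. (26).
    R_loop is the central value quoted in the text; eta is an illustrative
    assumption for alpha_s(mu_NP)/alpha_s(mu_b)."""
    return abs(1.0 + eta ** (6.0 / 23.0) / R_loop * C_bs_RR)

print(dms_ratio(0.0))    # SM limit: ratio is exactly 1
print(dms_ratio(1e-4))   # a small positive NP contribution raises Delta M_s
```

Because $R_{loop}^{SM}\sim 10^{-3}$, even a tiny $C_{bs}^{RR}$ shifts $\Delta M_{s}$ at the percent level, which is what makes the mixing bound so constraining.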
Relying on the Lattice QCD results of the two collaborations, FNAL/MILC Bazavov:2016nty and HPQCD Dowdall:2019bea , the FLAG averaging group Aoki:2019cca published the following result, which we use in our calculations: $\displaystyle\Delta M_{s}^{FLAG2019}$ $\displaystyle=$ $\displaystyle(20.1^{+1.2}_{-1.6})\,ps^{-1}=(1.13^{+0.07}_{-0.09})\,\Delta M_{s}^{exp}.$ (27) From this result, one can easily determine the bound $\Bigg{|}\frac{\left(\tilde{y}^{LR}_{2\,s\chi}\right)^{2}\left(\tilde{y}^{LR\,\ast}_{2\,b\chi}\right)^{2}}{M_{\tilde{R}_{2}}^{2}}\Bigg{|}\leq 1.39\times 10^{-8}\,{\rm GeV}^{-2}.$ (28) The same couplings $\tilde{y}^{LR}_{2\,s\chi}\,\tilde{y}^{LR\,\ast}_{2\,b\chi}$ enter in the $D^{0}-\bar{D}^{0}$ mixing (9) and condition (12), and one can derive $\displaystyle\left[\left(V_{us}\tilde{y}^{LR}_{2\,s\chi}\right)\,\left(V_{cb}\tilde{y}^{LR}_{2\,b\chi}\right)^{\ast}+\left(V_{cs}\tilde{y}^{LR}_{2\,s\chi}\right)\,\left(V_{ub}\tilde{y}^{LR}_{2\,b\chi}\right)^{\ast}\right]\Bigg{|}_{D-\bar{D}}$ $\displaystyle<1.2\times 10^{-5}M_{\tilde{R}_{2}}/{\rm GeV}.$ (29) The bound on the coefficients in (28) leads to a constraint one order of magnitude stronger than the one in (29), $\tilde{y}^{LR}_{2\,s\chi}\,\tilde{y}^{LR\,\ast}_{2\,b\chi}<1.58\times 10^{-6}M_{\tilde{R}_{2}}/{\rm GeV}$. In our numerical calculations we use this bound and do not specify the mass of $\tilde{R}_{2}$. However, one can combine these constraints and determine the $\tilde{R}_{2}$ mass which satisfies both conditions. In Fig. 4 we present the dependence of the couplings $\tilde{y}^{LR}_{2\,s\chi}\tilde{y}^{LR\,\ast}_{2\,b\chi}$ on the mass $M_{\tilde{R}_{2}}$, using the constraint from the $B_{s}^{0}-\bar{B}_{s}^{0}$ mixing and the bound $\mathcal{B}(B^{+}\to K^{+}\not{E})<9.7\times 10^{-6}$ for $m_{\chi}=0.18,\,0.5,\,0.8$ GeV. Figure 4: The allowed mass for $\tilde{R}_{2}$.
Constraints are derived from the $B_{s}^{0}-\bar{B}_{s}^{0}$ mixing and from the bound $\mathcal{B}(B^{+}\to K^{+}\not{E})<9.7\times 10^{-6}$ for $m_{\chi}=0.18,\,0.5,\,0.8$ GeV. From Fig. 4 we see that the largest mass of $\tilde{R}_{2}$ which satisfies both conditions is $M_{\tilde{R}_{2}}\simeq 4400,\,7100,\,10800$ GeV for the masses $m_{\chi}=0.8,\,0.5,\,0.18$ GeV, respectively. All $\tilde{R}_{2}$ masses below these limiting values are allowed and, interestingly, they are within LHC reach.

#### IV.0.2 $\tilde{R}_{2}$ in ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})$, ${\mathcal{B}}(D^{0}\to\chi\bar{\chi}\gamma)$ and ${\mathcal{B}}(D^{+}\to\pi^{+}\chi\bar{\chi})$

Using the same expressions as in the previous section, we calculate the branching ratios for $D^{0}\to\chi\bar{\chi}$ and $D^{0}\to\chi\bar{\chi}\gamma$ and present them in Table 5. The results for $D\to\pi\chi\bar{\chi}$ are presented in Table 6. The Wilson coefficient $c^{LR}_{D}$ is obtained using the constraint from $B\to K\,{\it missing\ energy}$. For $m_{\chi}=0.18,\,0.5,\,0.8$ GeV we find $c^{LR}_{D}=|(V_{us}V_{cb}^{\ast}+V_{cs}V_{ub}^{\ast})c^{LR}_{B}|=4.4\times 10^{-6},\,6.6\times 10^{-6},\,1.2\times 10^{-5}$, respectively.

$m_{\chi}$ (GeV) | ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})$ | ${\mathcal{B}}(D^{0}\to\chi\bar{\chi}\gamma)$
---|---|---
$0.18$ | $<1.6\times 10^{-13}$ | $<1.9\times 10^{-15}$
$0.50$ | $<2.4\times 10^{-12}$ | $<1.4\times 10^{-15}$
$0.80$ | $<1.3\times 10^{-11}$ | $<2.7\times 10^{-16}$

Table 5: Branching ratios for ${\mathcal{B}}(D^{0}\to\chi\bar{\chi})$ and ${\mathcal{B}}(D^{0}\to\chi\bar{\chi}\gamma)$. The bounds on the Wilson coefficient $c^{LR}_{D}$ are derived from ${\mathcal{B}}(B\to K\not{E})<9.7\times 10^{-6}$ for selected masses of $\chi$ from the range $(m_{K}-m_{\pi})/2<m_{\chi}<(m_{D}-m_{\pi})/2$.
$m_{\chi}$ (GeV) | ${\mathcal{B}}(D^{0}\to\pi^{0}\chi\bar{\chi})$ | ${\mathcal{B}}(D^{+}\to\pi^{+}\chi\bar{\chi})$
---|---|---
$0.18$ | $<8.7\times 10^{-13}$ | $<4.5\times 10^{-12}$
$0.50$ | $<1.1\times 10^{-12}$ | $<5.4\times 10^{-12}$
$0.80$ | $<1.7\times 10^{-13}$ | $<8.7\times 10^{-13}$

Table 6: Branching ratios for ${\mathcal{B}}(D^{0}\to\pi^{0}\chi\bar{\chi})$ and ${\mathcal{B}}(D^{+}\to\pi^{+}\chi\bar{\chi})$. The bounds on the Wilson coefficient $c^{LR}_{D}$ are derived from ${\mathcal{B}}(B\to K\not{E})<9.7\times 10^{-6}$. In the case $m_{\chi}=0.18$ GeV, the cut in the integration variable is done by taking $q_{cut}^{2}$, as described in the text.

Compared with the coloured scalar $\bar{S}_{1}$ mediation, the branching ratios for all three decay modes are suppressed by several orders of magnitude, indicating the important role of constraints from $B$ mesons. Such suppressed branching ratios of all the rare charm decays mediated by $\tilde{R}_{2}$ are almost impossible to observe. On the other hand, decays of hadrons containing $b$ quarks, mediated by $\tilde{R}_{2}$, are much more suitable for searches of invisible fermions.

## V Summary and outlook

We have presented a study of rare charm decays with invisible massive fermions $\chi$ in the final state. The mass of $\chi$ is taken to be in the range $(m_{K}-m_{\pi})/2<m_{\chi}<(m_{D}-m_{\pi})/2$, since the current experimental results on $\mathcal{B}(K\to\pi\nu\bar{\nu})$ are very close to the SM result, almost excluding the presence of New Physics. We considered two cases with coloured scalar mediators of the up-quark interaction with $\chi$. The simplest model is one with $\bar{S}_{1}=(\bar{3},1,-2/3)$, which couples only to weak up-quark singlets, and the second mediator is $\tilde{R}_{2}=(\bar{3},2,1/6)$, which couples to weak quark doublets. In the case of $\bar{S}_{1}$, the relevant constraint comes from the $D^{0}-\bar{D}^{0}$ oscillations.
We have calculated branching ratios for $D^{0}\to\chi\bar{\chi}$, $D^{0}\to\chi\bar{\chi}\gamma$ and $D\to\pi\chi\bar{\chi}$. The charm meson mixing constrains the branching ratio of $D^{0}\to\chi\bar{\chi}$ much more severely than the experimental result for the branching ratio of $D^{0}\to\not{E}$. For our choice of $m_{\chi}$ the branching ratio for $D^{0}\to\chi\bar{\chi}\gamma$ can also be calculated using the experimental bound on the rate for $D^{0}\to\not{E}$; in this case, depending on the mass of $\chi$, the result is enhanced by up to three orders of magnitude in comparison with the one obtained from the $D^{0}-\bar{D}^{0}$ oscillation constraints. The branching ratios for $D\to\pi\chi\bar{\chi}$, based on the charm mixing constraint, are of the order $10^{-9}-10^{-7}$, suitable for searches at future tau-charm factories and the BESIII and Belle II experiments. In the case of $\tilde{R}_{2}$, for the mass range of $\chi$ relevant for charm meson rare decays, we rely on constraints coming from $\mathcal{B}(B\to K\not{E})$ and from the $B_{s}^{0}-\bar{B}_{s}^{0}$ mixing. We find that all three decay modes $D^{0}\to\chi\bar{\chi}$, $D^{0}\to\chi\bar{\chi}\gamma$ and $D^{+}\to\pi^{+}\chi\bar{\chi}$ then have branching ratios $3-4$ orders of magnitude smaller than in the case of the coloured scalar $\bar{S}_{1}$ mediation, making them very difficult to observe. Interestingly, the masses of both mediators $\bar{S}_{1}$ and $\tilde{R}_{2}$ are within LHC reach, and hopefully searches for mono-jets and missing energy might put constraints on their masses.

## VI Acknowledgment

The work of SF was in part financially supported by the Slovenian Research Agency (research core funding No. P1-0035). The work of AN was partially supported by the European Research Council (ERC) Advanced Grant 884719 (FAIME).

## VII Appendix

### VII.1 Phase space factors

In Eq.
(16) the phase space function $Y(x_{\chi})$ is used: $\displaystyle Y(x_{\chi})$ $\displaystyle=$ $\displaystyle 1-2x_{\chi}^{2}+3x_{\chi}^{2}(3-6x_{\chi}^{2}+4x_{\chi}^{4})\sqrt{1-4x_{\chi}^{2}}$ (30) $\displaystyle\times$ $\displaystyle\log\left(\frac{2x_{\chi}}{1+\sqrt{1-4x_{\chi}^{2}}}\right)-11x_{\chi}^{4}+12x_{\chi}^{6}.$ In Eq. (21) $a(q^{2})$ and $c(q^{2})$ are introduced, denoting $\displaystyle a(q^{2})$ $\displaystyle=\frac{\lambda}{2}(|V(q^{2})|^{2}+|A(q^{2})|^{2})+8m_{\chi}^{2}m_{D}^{2}|A(q^{2})|^{2}$ (31) $\displaystyle+2q^{2}|P(q^{2})|^{2}+4m_{\chi}(m_{D}^{2}-m_{\pi}^{2}+q^{2})\text{Re}[A(q^{2})P(q^{2})^{*}],$ $\displaystyle c(q^{2})$ $\displaystyle=-\frac{\lambda\beta^{2}}{2}(|V(q^{2})|^{2}+|A(q^{2})|^{2}).$

### VII.2 $D\to\pi$ form factors

Following Lubicz:2017syv one can use the $z$-expansion with $z=\frac{\sqrt{t_{+}-q^{2}}-\sqrt{t_{+}-t_{0}}}{\sqrt{t_{+}-q^{2}}+\sqrt{t_{+}-t_{0}}},$ (32) with $t_{+}=(m_{D}+m_{\pi})^{2}$ and $t_{0}=(m_{D}+m_{\pi})(\sqrt{m_{D}}-\sqrt{m_{\pi}})^{2}$. The form factors can be written as $\displaystyle f^{D\to\pi}_{+}(q^{2})=\frac{f^{D\to\pi}(0)+c^{D\to\pi}_{+}(z-z_{0})(1+\frac{1}{2}(z+z_{0}))}{1-P_{V}q^{2}},$ (33) $\displaystyle f^{D\to\pi}_{0}(q^{2})=\frac{f^{D\to\pi}(0)+c^{D\to\pi}_{0}(z-z_{0})(1+\frac{1}{2}(z+z_{0}))}{1-P_{S}q^{2}},$ (34) where $z_{0}=z(0,t_{0}^{\pi})$. The fit parameters are given in Table 7. For the most recent discussion of form factors see also Becirevic:2020rzi .

$f(0)$ | $c_{+}$ | $P_{V}$ (GeV$^{-2}$) | $c_{0}$ | $P_{S}$ (GeV$^{-2}$)
---|---|---|---|---
0.6117 (354) | $-1.985$ (347) | 0.1314 (127) | $-1.188$ (256) | 0.0342 (122)

Table 7: Fit parameters for $f_{0}$, $f_{+}$ in the $z$-series expansion for $D\to\pi$ Lubicz:2017syv .
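The $z$-expansion of Eqs. (32)-(34) is easy to evaluate numerically. Below is a minimal Python sketch using the Table 7 central values (uncertainties dropped), PDG-like meson masses inserted for illustration, and the standard $z$-expansion choice $t_{0}=(m_{D}+m_{\pi})(\sqrt{m_{D}}-\sqrt{m_{\pi}})^{2}$.

```python
import math

# Table 7 central values from Lubicz:2017syv (uncertainties dropped)
f0_val, c_plus, P_V = 0.6117, -1.985, 0.1314
c_0, P_S = -1.188, 0.0342

m_D, m_pi = 1.86966, 0.13957          # GeV, illustrative PDG-like masses
t_plus = (m_D + m_pi) ** 2
t_0 = (m_D + m_pi) * (math.sqrt(m_D) - math.sqrt(m_pi)) ** 2

def z_of(q2):
    """Conformal variable z(q^2) of Eq. (32)."""
    a, b = math.sqrt(t_plus - q2), math.sqrt(t_plus - t_0)
    return (a - b) / (a + b)

z0 = z_of(0.0)

def f_plus(q2):
    """Vector form factor f_+(q^2), Eq. (33)."""
    z = z_of(q2)
    return (f0_val + c_plus * (z - z0) * (1.0 + 0.5 * (z + z0))) / (1.0 - P_V * q2)

def f_zero(q2):
    """Scalar form factor f_0(q^2), Eq. (34)."""
    z = z_of(q2)
    return (f0_val + c_0 * (z - z0) * (1.0 + 0.5 * (z + z0))) / (1.0 - P_S * q2)

print(f_plus(0.0), f_zero(0.0))   # both reduce to f(0) = 0.6117 at q^2 = 0
```

At $q^{2}=0$ both form factors reduce to the common normalization $f(0)=0.6117$, which gives a quick consistency check of the implementation.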
### VII.3 $B\to K$ form factors

The most recent results are presented in the FLAG report Aoki:2019cca : $f_{+}^{BK}(q^{2})=\frac{r_{1}}{1-\frac{q^{2}}{m_{R}^{2}}}+\frac{r_{2}}{\left(1-\frac{q^{2}}{m_{R}^{2}}\right)^{2}},$ (35) $f_{0}^{BK}(q^{2})=\frac{r_{1}}{1-\frac{q^{2}}{m_{R}^{\prime 2}}}.$ (36) The parameters are $r_{1}=0.162$, $r_{2}=0.173$, $m_{R}=5.41$ GeV and $m_{R^{\prime}}=6.12$ GeV, as in Aoki:2019cca .

## References

* (1) R. Bause, H. Gisbert, M. Golz, G. Hiller, Rare charm $\bm{c\to u\,\nu\bar{\nu}}$ dineutrino null tests for $\bm{e^{+}e^{-}}$-machines (10 2020). arXiv:2010.02225.
* (2) A. Badin, A. A. Petrov, Searching for light Dark Matter in heavy meson decays, Phys. Rev. D 82 (2010) 034005. arXiv:1005.1277, doi:10.1103/PhysRevD.82.034005.
* (3) B. Bhattacharya, C. M. Grant, A. A. Petrov, Invisible widths of heavy mesons, Phys. Rev. D 99 (9) (2019) 093010. arXiv:1809.04606, doi:10.1103/PhysRevD.99.093010.
* (4) W. Altmannshofer, et al., The Belle II Physics Book, PTEP 2019 (12) (2019) 123C01, [Erratum: PTEP 2020, 029201 (2020)]. arXiv:1808.10567, doi:10.1093/ptep/ptz106.
* (5) M. Ablikim, et al., Future Physics Programme of BESIII, Chin. Phys. C 44 (4) (2020) 040001. arXiv:1912.05983, doi:10.1088/1674-1137/44/4/040001.
* (6) A. Abada, et al., FCC Physics Opportunities: Future Circular Collider Conceptual Design Report Volume 1, Eur. Phys. J. C 79 (6) (2019) 474. doi:10.1140/epjc/s10052-019-6904-3.
* (7) A. Abada, et al., FCC-ee: The Lepton Collider: Future Circular Collider Conceptual Design Report Volume 2, Eur. Phys. J. ST 228 (2) (2019) 261–623. doi:10.1140/epjst/e2019-900045-4.
* (8) Y.-T. Lai, et al., Search for $D^{0}$ decays to invisible final states at Belle, Phys. Rev. D 95 (1) (2017) 011102. arXiv:1611.09455, doi:10.1103/PhysRevD.95.011102.
* (9) R. Bause, H. Gisbert, M. Golz, G. Hiller, Exploiting $CP$-asymmetries in rare charm decays, Phys. Rev. D 101 (11) (2020) 115006. arXiv:2004.01206, doi:10.1103/PhysRevD.101.115006.
* (10) E. Golowich, J. Hewett, S.
Pakvasa, A. A. Petrov, Relating $D^{0}$-$\bar{D}^{0}$ Mixing and $D^{0}\to l^{+}l^{-}$ with New Physics, Phys. Rev. D 79 (2009) 114030. arXiv:0903.2830, doi:10.1103/PhysRevD.79.114030.
* (11) J. Martin Camalich, M. Pospelov, P. N. H. Vuong, R. Ziegler, J. Zupan, Quark Flavor Phenomenology of the QCD Axion, Phys. Rev. D 102 (1) (2020) 015023. arXiv:2002.04623, doi:10.1103/PhysRevD.102.015023.
* (12) G. Faisel, J.-Y. Su, J. Tandean, Exploring charm decays with missing energy in leptoquark models (12 2020). arXiv:2012.15847.
* (13) I. Doršner, S. Fajfer, A. Greljo, J. Kamenik, N. Košnik, Physics of leptoquarks in precision experiments and at particle colliders, Phys. Rept. 641 (2016) 1–68. arXiv:1603.04993, doi:10.1016/j.physrep.2016.06.001.
* (14) M. Bordone, C. Cornella, J. Fuentes-Martín, G. Isidori, Low-energy signatures of the $\mathrm{PS}^{3}$ model: from $B$-physics anomalies to LFV, JHEP 10 (2018) 148. arXiv:1805.09328, doi:10.1007/JHEP10(2018)148.
* (15) L. Di Luzio, A. Greljo, M. Nardecchia, Gauge leptoquark as the origin of B-physics anomalies, Phys. Rev. D 96 (11) (2017) 115011. arXiv:1708.08450, doi:10.1103/PhysRevD.96.115011.
* (16) G. Ruggiero, New Result on $K^{+}\to\pi^{+}\nu\bar{\nu}$ from the NA62 Experiment, KAON2019, Perugia, Italy, 10-13 September 2019.
* (17) A. J. Buras, D. Buttazzo, J. Girrbach-Noe, R. Knegjens, ${K}^{+}\to{\pi}^{+}\nu\overline{\nu}$ and ${K}_{L}\to{\pi}^{0}\nu\overline{\nu}$ in the Standard Model: status and perspectives, JHEP 11 (2015) 033. arXiv:1503.02693, doi:10.1007/JHEP11(2015)033.
* (18) A. Angelescu, D. A. Faroughy, O. Sumensari, Lepton Flavor Violation and Dilepton Tails at the LHC, Eur. Phys. J. C 80 (7) (2020) 641. arXiv:2002.05684, doi:10.1140/epjc/s10052-020-8210-5.
* (19) J. Fuentes-Martin, A. Greljo, J. Martin Camalich, J. D. Ruiz-Alvarez, Charm physics confronts high-pT lepton tails, JHEP 11 (2020) 080. arXiv:2003.12421, doi:10.1007/JHEP11(2020)080.
* (20) S. Fajfer, D.
Susič, Coloured Scalar Mediated Nucleon Decays to Invisible Fermion (10 2020). arXiv:2010.08367.
* (21) S. Fajfer, N. Košnik, Prospects of discovering new physics in rare charm decays, Eur. Phys. J. C 75 (12) (2015) 567. arXiv:1510.00965, doi:10.1140/epjc/s10052-015-3801-2.
* (22) N. Carrasco, P. Dimopoulos, R. Frezzotti, V. Lubicz, G. C. Rossi, S. Simula, C. Tarantino, $\Delta$S=2 and $\Delta$C=2 bag parameters in the standard model and beyond from Nf=2+1+1 twisted-mass lattice QCD, Phys. Rev. D 92 (3) (2015) 034516. arXiv:1505.06639, doi:10.1103/PhysRevD.92.034516.
* (23) P. Zyla, et al., Review of Particle Physics, PTEP 2020 (8) (2020) 083C01. doi:10.1093/ptep/ptaa104.
* (24) Y. S. Amhis, et al., Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of 2018 (9 2019). arXiv:1909.12524.
* (25) S. Fajfer, N. Kosnik, Leptoquarks in FCNC charm decays, Phys. Rev. D 79 (2009) 017502. arXiv:0810.4858, doi:10.1103/PhysRevD.79.017502.
* (26) G. Burdman, E. Golowich, J. L. Hewett, S. Pakvasa, Rare charm decays in the standard model and beyond, Phys. Rev. D 66 (2002) 014009. arXiv:hep-ph/0112235, doi:10.1103/PhysRevD.66.014009.
* (27) J. F. Kamenik, C. Smith, Tree-level contributions to the rare decays $B^{+}\to\pi^{+}\nu\bar{\nu}$, $B^{+}\to K^{+}\nu\bar{\nu}$, and $B^{+}\to K^{*+}\nu\bar{\nu}$ in the Standard Model, Phys. Lett. B 680 (2009) 471–475. arXiv:0908.1174, doi:10.1016/j.physletb.2009.09.041.
* (28) R. Fleischer, R. Jaarsma, G. Koole, Testing Lepton Flavour Universality with (Semi)-Leptonic $D_{(s)}$ Decays, Eur. Phys. J. C 80 (2) (2020) 153. arXiv:1912.08641, doi:10.1140/epjc/s10052-020-7702-7.
* (29) G. Li, T. Wang, Y. Jiang, J.-B. Zhang, G.-L. Wang, Spin-$1/2$ invisible particles in heavy meson decays (4 2020). arXiv:2004.10942.
* (30) L. Di Luzio, M. Kirk, A. Lenz, T. Rauh, $\Delta M_{s}$ theory precision confronts flavour anomalies, JHEP 12 (2019) 009. arXiv:1909.11087, doi:10.1007/JHEP12(2019)009.
* (31) A.
Bazavov, et al., $B^{0}_{(s)}$-mixing matrix elements from lattice QCD for the Standard Model and beyond, Phys. Rev. D 93 (11) (2016) 113016. arXiv:1602.03560, doi:10.1103/PhysRevD.93.113016. * (32) R. Dowdall, C. Davies, R. Horgan, G. Lepage, C. Monahan, J. Shigemitsu, M. Wingate, Neutral B-meson mixing from full lattice QCD at the physical point, Phys. Rev. D 100 (9) (2019) 094508. arXiv:1907.01025, doi:10.1103/PhysRevD.100.094508. * (33) S. Aoki, et al., FLAG Review 2019: Flavour Lattice Averaging Group (FLAG), Eur. Phys. J. C 80 (2) (2020) 113. arXiv:1902.08191, doi:10.1140/epjc/s10052-019-7354-7. * (34) V. Lubicz, L. Riggio, G. Salerno, S. Simula, C. Tarantino, Scalar and vector form factors of $D\to\pi(K)\ell\nu$ decays with $N_{f}=2+1+1$ twisted fermions, Phys. Rev. D 96 (5) (2017) 054514, [Erratum: Phys.Rev.D 99, 099902 (2019), Erratum: Phys.Rev.D 100, 079901 (2019)]. arXiv:1706.03017, doi:10.1103/PhysRevD.96.054514. * (35) D. Bečirević, F. Jaffredo, A. Peñuelas, O. Sumensari, New Physics effects in leptonic and semileptonic decays (12 2020). arXiv:2012.09872.
# Relieving the $H_{0}$ tension with a new interacting dark energy model

Li-Yang Gao, Ze-Wei Zhao, She-Sheng Xue, Xin Zhang (corresponding author)

###### Abstract

We investigate an extended cosmological model motivated by the asymptotic safety of gravitational field theory, in which the matter and radiation densities and the cosmological constant receive corrections parametrized by the parameters $\delta_{G}$ and $\delta_{\Lambda}$, so that the evolutions of the matter and radiation densities and of the cosmological constant slightly deviate from their standard forms. Here we explain this model as a scenario of vacuum energy interacting with matter and radiation. We consider two cases of the model: (i) ${\tilde{\Lambda}}$CDM with one additional free parameter $\delta_{G}$, with $\delta_{\rm G}$ and $\delta_{\Lambda}$ related by a low-redshift limit relation, and (ii) e${\tilde{\Lambda}}$CDM with two additional free parameters $\delta_{G}$ and $\delta_{\Lambda}$ that are independent of each other. We use two data combinations, CMB+BAO+SN (CBS) and CMB+BAO+SN+$H_{0}$ (CBSH), to constrain the models. We find that, in the case of using the CBS data, neither ${\tilde{\Lambda}}$CDM nor e${\tilde{\Lambda}}$CDM can effectively alleviate the $H_{0}$ tension. However, it is found that using the CBSH data the $H_{0}$ tension can be greatly relieved by the models. In particular, in the case of e${\tilde{\Lambda}}$CDM, the $H_{0}$ tension can be resolved to the 0.71$\sigma$ level. We conclude that as an interacting dark energy model, ${\tilde{\Lambda}}$CDM is much better than $\Lambda(t)$CDM in the sense of both relieving the $H_{0}$ tension and fitting the current observational data.

## 1 Introduction

The Hubble constant $H_{0}$ was the first cosmological parameter; it was introduced by Edwin Hubble to describe the current expansion of the universe, and it has been measured for about a century.
Precisely measuring the value of the Hubble constant is extremely important for cosmology because it determines the absolute scale of the universe. But with the development of precision cosmology, cosmologists now face an increasingly puzzling problem, i.e., the discrepancy between the value of $H_{0}$ inferred from the early universe using the cosmic microwave background (CMB) data observed by the Planck satellite assuming a base $\Lambda$CDM cosmology [1] and the one directly measured by using the Cepheid-supernova distance ladder [2]. Based on the CMB measurements from Planck TT,TE,EE+lowE+lensing [1] and baryon acoustic oscillation (BAO) measurements from galaxy redshift surveys [3, 4, 5], it is found that in the base $\Lambda$CDM model we have $H_{0}=(67.36\pm 0.54)~{}\rm{km~{}s^{-1}~{}Mpc^{-1}}$ [1]. On the other hand, the direct measurement of the Hubble constant from the Hubble Space Telescope using the distance ladder method gives the result of $H_{0}=(74.03\pm 1.42)~{}\rm{km~{}s^{-1}~{}Mpc^{-1}}$, which is in $4.4\sigma$ tension with the early-universe result from the Planck CMB measurement (for some reviews on this tension, see Refs. [7, 6, 8, 9, 10, 11]). The reasons for this tension are usually ascribed to systematic errors or new physics. To solve this problem, a number of articles have attempted to address the systematic errors in these two methods [12, 13, 14, 15, 16, 17, 18], but no reliable evidence has been found and the tension still exists. Therefore, it is of great importance to measure the Hubble constant in other independent ways. In fact, besides the Cepheid-supernova distance ladder, there are two other distance ladders, namely those using Mira variables [19] or red giants [20] instead of Cepheids to calibrate type Ia supernovae (SNIa).
Other late-universe measurement methods also include the observations of strong lensing time delays [21], water masers [22], surface brightness fluctuations [23], gravitational waves from neutron star mergers [24], different ages of galaxies as cosmic clocks [25, 26], the baryonic Tully-Fisher relation [27], and so forth. All these observations show that the late-universe estimations of $H_{0}$ disagree with the prediction from the Planck CMB observation in conjunction with the base $\Lambda$CDM cosmology at the 4–6$\sigma$ level. On the other hand, there have been many theoretical ideas [28, 29, 30, 31, 35, 32, 33, 34, 36, 42, 37, 38, 39, 40, 41] to address the Hubble tension by extending the standard model of cosmology. For example, in the aspect of the late universe, one may consider dynamical dark energy instead of the cosmological constant, or the interaction between dark energy and dark matter, and in the aspect of the early universe, one may consider extra relativistic degrees of freedom, early dark energy, or self-interaction among neutrinos. A comprehensive analysis of many typical extended cosmological models [43] shows that actually none of these extended models can truly resolve the Hubble tension. In this paper, we wish to investigate a new extension of the standard $\Lambda$CDM model, which is motivated by the asymptotic safety of gravitational field theory [44], from the perspective of how to relieve the $H_{0}$ tension. As the universe expands and the energy (time) scale varies, the gravitational coupling parameter $G$ and the cosmological constant $\Lambda$ will vary following scaling laws and approach the present values $G_{0}$ and $\Lambda_{0}$. This implies that in the normal Friedmann equations of $\Lambda$CDM the matter (radiation) term $\Omega_{\rm m,r}$ and the cosmological constant term $\Omega_{\Lambda}$ could receive an additional scaling factor $(1+z)^{\delta}$ with $\delta\ll 1$.
To constrain the model parameter $\delta$ and address the $H_{0}$ tension issue, we adopt the combination of the latest cosmological datasets $\rm CMB+BAO+SN$, with or without the $H_{0}$ prior from the local measurement, and compare with the $\Lambda$CDM model and some other typical cosmological models. In our analysis, we fit all the models to the same datasets and examine the $H_{0}$ tension by taking $\Lambda$CDM as a benchmark model. The structure of this paper is arranged as follows. In Section 2, we present the description of the new extended cosmological model. Section 3 briefly describes the data and methods used in this work. The results and related analysis are presented in Section 4. We test the robustness of the results in Section 5. The conclusion is given in Section 6.

## 2 Motivation and cosmological models

The $\Lambda$CDM model is usually viewed as the standard model of cosmology at present. In the $\Lambda$CDM model, the expansion history of the universe, described by the Hubble expansion rate, is given by the Friedmann equation, $\displaystyle H^{2}=\frac{8\pi G}{3}(\rho_{\rm m}+\rho_{\rm r}+\rho_{\rm\Lambda}),$ (2.1) where $H$ is the Hubble parameter, $G$ is the gravitational constant, and the densities of matter and radiation evolve with redshift as $\rho_{\rm m,r}=\rho^{0}_{\rm m,r}(1+z)^{3(1+w_{\rm m,r})}$, with their equations of state $w_{\rm m}=0$ for non-relativistic particles and $w_{\rm r}=1/3$ for relativistic particles. The cosmological constant $\Lambda$ describes the vacuum energy density, which serves as dark energy in this model. The vacuum energy density is given by $\rho_{{}_{\Lambda}}=\rho^{0}_{{}_{\Lambda}}\equiv\Lambda/(8\pi G_{0})$, which has a negative pressure with the equation of state $p_{{}_{\Lambda}}=w_{{}_{\Lambda}}\rho_{{}_{\Lambda}}$, with $w_{{}_{\Lambda}}=-1$.
Note that here in fact we use $\Lambda$ to denote the “effective” cosmological constant $\Lambda\simeq 4.2\times 10^{-66}~{}{\rm eV}^{2}=2.8\times 10^{-122}~{}m_{\rm Pl}^{2}$ with $m_{\rm Pl}$ the Planck mass. Actually, the puzzling problem of why the original vacuum energy density could precisely cancel with the “bare” cosmological constant, leading to such a small value of $\Lambda$, is still an open question, also known as the cosmological constant problem, which is usually viewed as closely related to quantum gravity, and we will not deeply discuss this issue in this paper. Here we present a new extended cosmological model. The principle of the new model discussed in this work is the same as in Ref. [44]: we assume that the gravitational constant varies with redshift, and as a consequence the cosmological constant $\Lambda$ also changes with redshift. In this paper, the quantities with subscript or superscript “$0$” stand for their present values ($z=0$), i.e., $G_{0}$ and $\Lambda_{0}$ are the present values of the gravitational constant and cosmological constant, respectively, while $\rho^{0}_{\rm m}$, $\rho^{0}_{\rm r}$, and $\rho^{0}_{{}_{\Lambda}}$ are the present densities of matter, radiation, and dark energy, respectively. As one of the fundamental theories for interactions in nature, the classical Einstein theory of gravity, which plays an essential role in the standard model of modern cosmology ($\Lambda$CDM), should be realized in the scaling-invariant domain of a fixed point of its quantum field theory. It was suggested by Weinberg [45] that the quantum field theory of gravity regularized with an ultraviolet (UV) cutoff might have a non-trivial UV-stable fixed point and asymptotic safety, namely the renormalization group (RG) flows are attracted into the UV-stable fixed point with a finite number of physically renormalizable operators for the gravitational field. Ref.
[44] studied the asymptotic safety of the quantum field theory of gravity, namely the gravitational “constant” $G$ and the cosmological “constant” $\Lambda$ are time varying, approaching the point $(G_{0},\Lambda_{0})$ at which the two relevant operators of classical Einstein gravity, the Ricci scalar term $R$ and the cosmological term $\Lambda$, are realized. This implies the “scaling laws” (ansatz) $G/G_{0}=(1+z)^{-\delta_{\rm G}}$ and $\Lambda/\Lambda_{0}=(1+z)^{\delta_{\Lambda}}$, where the two “critical exponents” (parameters) $\delta_{\rm G}\ll 1$ and $\delta_{\Lambda}\ll 1$ are related. This motivates us to extend the $\Lambda$CDM model by assuming $\displaystyle(G/G_{0})\rho_{\rm m,r}$ $\displaystyle=$ $\displaystyle\rho^{0}_{\rm m,r}(1+z)^{3(1+w_{\rm m,r})-\delta_{G}},$ (2.2) $\displaystyle(G/G_{0})\rho_{{}_{\Lambda}}$ $\displaystyle=$ $\displaystyle\rho^{0}_{{}_{\Lambda}}(1+z)^{+\delta_{\Lambda}},$ (2.3) where $w_{\rm m}\approx 0$, $w_{\rm r}\approx 1/3$, and $\rho_{{}_{\Lambda}}\equiv\Lambda/(8\pi G)$ is time varying, but $w_{{}_{\Lambda}}=-1$ still holds. The parameter $\delta_{G}$ is the same for the matter $\rho_{\rm m}$ and radiation $\rho_{\rm r}$ terms, assuming the deviation is only due to the time-varying $G$. The two Friedmann equations are extended to $\displaystyle E^{2}(z)$ $\displaystyle=$ $\displaystyle\Omega_{\rm m}(1+z)^{(3-\delta_{\rm G})}+\Omega_{\rm r}(1+z)^{(4-\delta_{\rm G})}+\Omega_{{}_{\Lambda}}(1+z)^{\delta_{\Lambda}},$ (2.4) $\displaystyle(1+z)\frac{d}{dz}E^{2}(z)$ $\displaystyle=$ $\displaystyle 3\Omega_{\rm m}(1+z)^{(3-\delta_{\rm G})}+4\Omega_{\rm r}(1+z)^{(4-\delta_{\rm G})},$ (2.5) where $E(z)\equiv H(z)/H_{0}$. Here, $\Omega_{\rm m}$, $\Omega_{\rm r}$, and $\Omega_{{}_{\Lambda}}=1-\Omega_{\rm m}-\Omega_{\rm r}$ are the present-day fractional energy densities of matter, radiation, and dark energy, respectively. Eq.
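The extended expansion rate of Eq. (2.4) is simple to implement. The sketch below (with illustrative Planck-like density parameters and placeholder values for the small exponents, none of which are fit results from this work) shows that $E(0)=1$ holds for any $\delta_{\rm G}$, $\delta_{\Lambda}$ and that the model reduces to $\Lambda$CDM when both vanish.

```python
def E2(z, Om, Or, dG=0.0, dL=0.0):
    """Dimensionless expansion rate squared E^2(z), Eq. (2.4) of the extended model.
    dG = delta_G, dL = delta_Lambda; dG = dL = 0 recovers LambdaCDM."""
    OL = 1.0 - Om - Or  # flatness fixes Omega_Lambda
    return (Om * (1.0 + z) ** (3.0 - dG)
            + Or * (1.0 + z) ** (4.0 - dG)
            + OL * (1.0 + z) ** dL)

# Illustrative Planck-like densities; delta values are placeholders.
Om, Or = 0.315, 9.2e-5
print(E2(0.0, Om, Or, dG=0.01, dL=0.005))   # equals 1 at z = 0 for any deltas
print(E2(1.0, Om, Or))                      # LambdaCDM limit at z = 1
```

Because every $(1+z)$ factor is unity at $z=0$, the normalization $E(0)=1$ is automatic, so the deltas only reshape the expansion history at $z>0$.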
(2.5) comes from the generalized energy conservation law [44] for varying gravitational and cosmological “constants” interacting with matter and radiation. It reduces to the matter conservation in the usual Friedmann equations for constant $\Lambda$ and $G$. Substituting Eq. (2.4) into Eq. (2.5), we find the relation of the parameters $\delta_{\rm G}$ and $\delta_{\Lambda}$, $\displaystyle\delta_{\Lambda}$ $\displaystyle\approx$ $\displaystyle\delta_{\rm G}\left(\frac{\Omega_{\rm m}+\Omega_{\rm r}}{\Omega_{{}_{\Lambda}}}\right)\approx 0.47~{}\delta_{\rm G},$ (2.6) in the low redshift ($z\rightarrow 0$) limit. Nonzero $\delta_{\rm G,\Lambda}$ show that dark energy and matter interact and can be converted from one to another. They obey the total energy conservation (2.5). The relations of the small parameters $\delta_{\rm G,\Lambda}$ to other interacting models of dark energy and matter can be found in Eqs. (10)–(15) of Ref. [46]. Although we lack a detailed and explicit interpretation of such a modelling of $E(z)$, we can provide some insights into the possible physics. The parameters $\delta_{\rm G}$ and $\delta_{\Lambda}$ effectively represent possible physical effects, or combinations of such effects, in addition to those of the $\Lambda$CDM model, such as: a small time-varying gravitational constant $G$ and inhomogeneity of the matter distribution at different redshifts $z$; the transition from the radiation-dominated era to the matter-dominated era, and vice versa, depending on the species of normal particles or dark matter particles; and massive particle production and annihilation due to the interaction between dark energy (vacuum energy) and other cosmological components [47, 48]. $\delta_{\rm G}>0$ or $\delta_{\rm G}<0$ implies that the decrease of $\rho_{\rm m,r}$ is slower or faster than in $\Lambda$CDM.
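The numerical factor in Eq. (2.6) is just the present-day density ratio; the quick check below (Python, with illustrative Planck-like parameters, since the text does not quote the exact values used) reproduces the quoted $\approx 0.47$.

```python
# Low-redshift relation of Eq. (2.6): delta_Lambda ~ delta_G * (Om + Or) / OL.
# Planck-like density parameters, inserted here for illustration only.
Om, Or = 0.32, 9.2e-5
OL = 1.0 - Om - Or
ratio = (Om + Or) / OL
print(ratio)  # close to 0.47, the coefficient quoted in Eq. (2.6)
```

So for any small $\delta_{\rm G}$, the companion exponent is roughly half as large, $\delta_{\Lambda}\approx 0.47\,\delta_{\rm G}$, in the $z\rightarrow 0$ limit.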
Actually, we can treat the model as a kind of interacting dark energy (vacuum energy) model, and thus the effects of $\delta_{\rm G}\not=0$ and $\delta_{\Lambda}\not=0$ in the late universe are expected. Here, we wish to clarify our usage of the term “vacuum energy” in the rest of this work: it always refers to the case of $w=-1$. In general, the value of the parameter $\delta_{G}$ can be different for the matter ($\rho_{\rm m}$) and radiation ($\rho_{\rm r}$) terms in $E^{2}(z)$ in Eq. (2.4), since dark energy should interact differently with matter and radiation. Therefore, we consider two cases in this article: (i) $\delta_{\rm G,\Lambda}$ related by the relation Eq. (2.6) and (ii) $\delta_{\rm G,\Lambda}$ independent of each other. Henceforth, for short notation and the readers’ convenience, the one-parameter extended model of the first case, with the relation (2.6), is called the “varying $\Lambda$”CDM model, represented by the symbol ${{\tilde{\Lambda}}}$CDM. Because the second case has one more parameter than the ${{\tilde{\Lambda}}}$CDM model, the two-parameter extension is called the extended ${{\tilde{\Lambda}}}$CDM model, abbreviated as e${{\tilde{\Lambda}}}$CDM. In this article, we compare the ${\tilde{\Lambda}}$CDM model with other one-parameter extensions of the $\Lambda$CDM model, i.e., $w$CDM and $\Lambda(t)$CDM. Besides, we compare the e${\tilde{\Lambda}}$CDM model with the Chevallier-Polarski-Linder (CPL) model, both being two-parameter extensions of $\Lambda$CDM. The models used for comparison are summarized as follows:

1. the $w$CDM model [49, 50]: The equation-of-state parameter $w$ is treated as a constant free parameter instead of being fixed to $w=-1$. We adopt $E^{2}(z)=\Omega_{\rm m}(1+z)^{3}+\Omega_{\rm r}(1+z)^{4}+\Omega_{{}_{\Lambda}}(1+z)^{3(1+w)}$.

2.
the $\Lambda(t)$CDM model [51, 52, 53]: The vacuum energy with $w_{{}_{\Lambda}}=-1$ serves as dark energy, and the interaction between dark energy (vacuum energy) and cold dark matter is described by the equations $\dot{\rho}_{\rm de}=Q$ and $\dot{\rho}_{\rm c}=-3H\rho_{\rm c}-Q$. Here, the subscript “de” denotes dark energy and the subscript “c” denotes cold dark matter. The interaction term $Q=-\beta H\rho_{c}$ determines the characteristics of the energy transfer between dark energy and dark matter, and $\beta$ is a dimensionless coupling parameter.

3. the CPL model [49, 50]: We have $w(a)=w_{0}+w_{a}(1-a)$, where $w_{0}$ and $w_{a}$ are free parameters, and $E^{2}(z)=\Omega_{\rm m}(1+z)^{3}+\Omega_{\rm r}(1+z)^{4}+\Omega_{{}_{\Lambda}}(1+z)^{3(1+w_{0}+w_{a})}{\rm exp}(-\frac{3w_{a}z}{1+z})$.

Three interacting dark energy models are considered in this paper, i.e., the ${\tilde{\Lambda}}$CDM, e${\tilde{\Lambda}}$CDM, and $\Lambda(t)$CDM models. The former two are motivated by the time-varying gravitational “constant” $G$ and cosmological “constant” $\Lambda$, which effectively lead to an interaction between dark energy and matter. The last is a phenomenological fluid model with an assumed direct interaction between dark energy and dark matter, whose interaction term $Q$ is not derived from first principles; its form is purely phenomenological and chosen for calculational convenience. In the next section, we will use the observational datasets to constrain the ${{\tilde{\Lambda}}}$CDM, e${{\tilde{\Lambda}}}$CDM, $w$CDM, $\Lambda(t)$CDM, and CPL models from the point of view of alleviating the $H_{0}$ tension. The results are compared with the base 6-parameter $\Lambda$CDM model, which is taken as the benchmark model in this work.

## 3 Data and method

We summarize the observational data used in this work below.
Model | $\Lambda$CDM | $w$CDM | $\Lambda$(t)CDM | ${\tilde{\Lambda}}$CDM
---|---|---|---|---
$\Omega_{\rm b}$ | $0.0489\pm 0.0005$ | $0.0480^{+0.0013}_{-0.0012}$ | $0.0491^{+0.0010}_{-0.0009}$ | $0.0499^{+0.0019}_{-0.0018}$
$\Omega_{\rm c}$ | $0.2638\pm 0.0055$ | $0.2606^{+0.0071}_{-0.0067}$ | $0.2622^{+0.0094}_{-0.0086}$ | $0.2610^{+0.0072}_{-0.0071}$
$w$ | $-$ | $-1.0256^{+0.0364}_{-0.0360}$ | $-$ | $-$
$\beta$ | $-$ | $-$ | $0.0022^{+0.0063}_{-0.0060}$ | $-$
$\delta_{\rm G}$ | $-$ | $-$ | $-$ | $0.0019^{+0.0032}_{-0.0032}$
$H_{0}~{}[{\rm km~{}s^{-1}~{}Mpc^{-1}}]$ | $67.70^{+0.44}_{-0.43}$ | $68.25^{+0.87}_{-0.89}$ | $67.49^{+0.81}_{-0.85}$ | $66.95^{+1.39}_{-1.35}$
$\Omega_{\rm m}$ | $0.3127\pm 0.0059$ | $0.3087^{+0.0082}_{-0.0077}$ | $0.3113^{+0.0097}_{-0.0088}$ | $0.3109^{+0.0066}_{-0.0065}$
${H_{0}}~{}{\rm tension}$ | $4.25\sigma$ | $3.46\sigma$ | $3.98\sigma$ | $3.59\sigma$
$\chi_{\rm min}^{2}$ | $1043.539$ | $1043.068$ | $1042.297$ | $1043.201$
$\Delta{\rm AIC}$ | $0$ | $1.529$ | $0.758$ | $1.662$
$\Delta{\rm BIC}$ | $0$ | $6.492$ | $5.721$ | $6.625$

Table 1: The constraint results of parameters in the $\Lambda$CDM model and the one-parameter extension models with the CBS data.
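The “$H_{0}$ tension” rows in Tables 1 and 2 can be reproduced to within rounding by comparing each fitted $H_{0}$ with the SH0ES measurement $H_{0}=(74.03\pm 1.42)~{\rm km~s^{-1}~Mpc^{-1}}$, symmetrizing the (possibly asymmetric) fit error and combining errors in quadrature. This is our reconstruction of how the entries were computed, not a procedure stated explicitly in the text:

```python
# Sigma-level of the H0 tension between a fitted H0 and the local SH0ES
# measurement (74.03 +/- 1.42 km/s/Mpc). Asymmetric fit errors are
# symmetrized before combining in quadrature (our assumption).
def h0_tension_sigma(h0_fit, err_plus, err_minus,
                     h0_local=74.03, err_local=1.42):
    sigma_fit = 0.5 * (err_plus + err_minus)
    return abs(h0_local - h0_fit) / (sigma_fit ** 2 + err_local ** 2) ** 0.5
```

For instance, the $w$CDM fit to CBS, $68.25^{+0.87}_{-0.89}$, gives about $3.46\sigma$, matching the table.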
Model | $\Lambda$CDM | $w$CDM | $\Lambda$(t)CDM | ${\tilde{\Lambda}}$CDM
---|---|---|---|---
$\Omega_{\rm b}$ | $0.0483\pm 0.0005$ | $0.0458^{+0.0011}_{-0.0010}$ | $0.0481\pm 0.0009$ | $0.0452^{+0.0013}_{-0.0012}$
$\Omega_{\rm c}$ | $0.2569^{+0.0052}_{-0.0050}$ | $0.2491^{+0.0058}_{-0.0057}$ | $0.2600^{+0.0093}_{-0.0088}$ | $0.2688^{+0.0069}_{-0.0072}$
$w$ | $-$ | $-1.0832^{+0.0324}_{-0.0339}$ | $-$ | $-$
$\beta$ | $-$ | $-$ | $-0.0030\pm 0.0062$ | $-$
$\delta_{\rm G}$ | $-$ | $-$ | $-$ | $-0.0062^{+0.0025}_{-0.0023}$
$H_{0}~{}[\rm km~{}s^{-1}~{}Mpc^{-1}]$ | $68.26\pm 0.42$ | $69.88^{+0.77}_{-0.76}$ | $68.50^{+0.85}_{-0.82}$ | $70.69^{+1.06}_{-1.08}$
$\Omega_{\rm m}$ | $0.3053^{+0.0057}_{-0.0054}$ | $0.2949^{+0.0067}_{-0.0066}$ | $0.3080^{+0.0094}_{-0.0090}$ | $0.3140^{+0.0065}_{-0.0068}$
${H_{0}}~{}{\rm tension}$ | $3.90\sigma$ | $2.57\sigma$ | $3.36\sigma$ | $1.88\sigma$
$\chi_{\rm min}^{2}$ | $1061.659$ | $1055.035$ | $1060.435$ | $1055.394$
$\Delta{\rm AIC}$ | $0$ | $-4.624$ | $0.776$ | $-4.265$
$\Delta{\rm BIC}$ | $0$ | $0.339$ | $5.739$ | $0.698$

Table 2: The constraint results of parameters in the $\Lambda$CDM model and the one-parameter extension models with the CBSH data.

Figure 1: Observational constraints on $H_{0}$ and $\Omega_{\rm m}$ ($68.3\%$ and $95.4\%$ confidence level) in the $\Lambda$CDM, $w$CDM, $\Lambda$(t)CDM, and ${\tilde{\Lambda}}$CDM models using the $\rm CBS$ data. Here, $H_{0}$ is in units of ${\rm km~{}s^{-1}~{}Mpc^{-1}}$.

Figure 2: Observational constraints on $H_{0}$ and $\Omega_{\rm m}$ ($68.3\%$ and $95.4\%$ confidence level) in the $\Lambda$CDM, $w$CDM, $\Lambda$(t)CDM, and ${\tilde{\Lambda}}$CDM models using the CBSH data combination. Here, $H_{0}$ is in units of ${\rm km~{}s^{-1}~{}Mpc^{-1}}$.

Figure 3: Observational constraints on $H_{0}$, $\Omega_{\rm m}$, and $\delta_{\rm G}$ ($68.3\%$ and $95.4\%$ confidence level) in the ${\tilde{\Lambda}}$CDM model using the $\rm CBS$ and $\rm CBSH$ data combinations.
Here, $H_{0}$ is in units of ${\rm km~{}s^{-1}~{}Mpc^{-1}}$.

1. CMB: In this work, we use the distance prior data from Planck 2018 [54] for convenience.

2. BAO: The BAO data used in this work include five data points from three observations, i.e., $z_{\rm eff}=0.016$ from the 6dF Galaxy Survey [3]; $z_{\rm eff}=0.15$ from the Main Galaxy Sample of Data Release 7 of the Sloan Digital Sky Survey [4]; and $z_{\rm eff}=0.38$, $z_{\rm eff}=0.51$, and $z_{\rm eff}=0.61$ from Data Release 12 of the Baryon Oscillation Spectroscopic Survey [5].

3. SNIa: We employ the SNIa Pantheon compilation [55] containing 1048 data points.

4. $H_{0}$: The measurement result of ${H_{0}}=(74.03\pm 1.42)~{}\rm{km~{}s^{-1}Mpc^{-1}}$ from the distance ladder given by the SH0ES team [2] is used as a Gaussian prior.

We use the Markov-chain Monte Carlo (MCMC) package CosmoMC [56] to perform the cosmological fits. We consider two data combinations in this work, namely, CMB+BAO+SN (abbreviated as CBS) and CBS+$H_{0}$ (abbreviated as CBSH). It should be emphasized that Bayesian joint analyses cannot automatically reveal inconsistencies between datasets. However, for the purpose of investigating whether our models can relieve the tension, we still combine the local $H_{0}$ measurement with the CMB+BAO+SN dataset to perform joint analyses, as done in some other studies [57, 58, 59, 60]. Since the cosmological models have different numbers of free parameters, comparing them using only the $\chi^{2}_{\rm min}$ values would be unfair. Thus we use the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), which penalize models with more parameters and thereby embody the principle of Occam’s razor to some extent. We adopt the AIC and BIC [61, 62, 63, 64] given by $\displaystyle{\rm AIC}\equiv\chi^{2}+2d,\quad{\rm BIC}\equiv\chi^{2}+d\ln N,$ (3.1) where $d$ is the number of free parameters and $N$ is the number of observational data points.
The $\chi^{2}$ functions for the two data combinations are given by $\displaystyle\chi^{2}$ $\displaystyle=$ $\displaystyle\chi^{2}_{\rm CMB}+\chi^{2}_{\rm BAO}+\chi^{2}_{\rm SN},$ (3.2) $\displaystyle\chi^{2}$ $\displaystyle=$ $\displaystyle\chi^{2}_{\rm CMB}+\chi^{2}_{\rm BAO}+\chi^{2}_{\rm SN}+\chi^{2}_{H_{0}}.$ (3.3) The $\Lambda$CDM model is taken as the benchmark model in the comparison, and thus its AIC and BIC values are set to zero. For the other cosmological models, only the differences from $\Lambda$CDM, $\Delta$AIC$=\Delta\chi^{2}+2\Delta d$ and $\Delta$BIC$=\Delta\chi^{2}+\Delta d\ln N$, are important and should be considered.

Figure 4: Observational constraints on $H_{0}$ and $\Omega_{\rm m}$ ($68.3\%$ and $95.4\%$ confidence level) in the $\Lambda$CDM, CPL, and $e{\tilde{\Lambda}}$CDM models using the $\rm CBS$ data. Here, $H_{0}$ is in units of ${\rm km~{}s^{-1}~{}Mpc^{-1}}$.

Figure 5: Observational constraints on $H_{0}$ and $\Omega_{\rm m}$ ($68.3\%$ and $95.4\%$ confidence level) in the $\Lambda$CDM, CPL, and $e{\tilde{\Lambda}}$CDM models using the $\rm CBSH$ data. Here, $H_{0}$ is in units of ${\rm km~{}s^{-1}~{}Mpc^{-1}}$.

Figure 6: Observational constraints ($68.3\%$ and $95.4\%$ confidence level) on $H_{0}$, $\Omega_{\rm m}$, $\delta_{\rm G}$, and $\delta_{\Lambda}$ in the $e{\tilde{\Lambda}}$CDM model using the CBS and CBSH data. Here, $H_{0}$ is in units of ${\rm km~{}s^{-1}~{}Mpc^{-1}}$.
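The $\chi^{2}_{H_{0}}$ term in Eq. (3.3) and the tabulated $\Delta$AIC/$\Delta$BIC values follow directly from these definitions. A minimal sketch; note that the effective data count $N\approx 1057$ is inferred here from the tabulated $\Delta$BIC values and should be treated as an assumption, not a number quoted in the text:

```python
import math

# Gaussian chi^2 contribution of the local H0 prior entering Eq. (3.3),
# using the SH0ES measurement 74.03 +/- 1.42 km/s/Mpc.
def chi2_h0(h0, h0_obs=74.03, sigma=1.42):
    return ((h0 - h0_obs) / sigma) ** 2

# Delta AIC / Delta BIC relative to LCDM, from Eq. (3.1). n_data ~ 1057 is
# an assumption inferred from the tables.
def delta_ic(chi2_model, chi2_lcdm, extra_params, n_data=1057):
    d_chi2 = chi2_model - chi2_lcdm
    return d_chi2 + 2 * extra_params, d_chi2 + extra_params * math.log(n_data)
```

For example, $w$CDM versus $\Lambda$CDM with CBSH (Table 2, one extra parameter) gives $\Delta{\rm AIC}\approx-4.624$ and $\Delta{\rm BIC}\approx 0.339$, matching the tabulated values.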
Data | $\rm CBS$ | ${\rm CBSH}$
---|---|---
Model | $\rm CPL$ | ${e\tilde{\Lambda}}$CDM | $\rm CPL$ | ${e\tilde{\Lambda}}$CDM
$\Omega_{b}$ | $0.0481^{+0.0012}_{-0.0013}$ | $0.0488^{+0.0036}_{-0.0035}$ | $0.0457^{+0.0011}_{-0.0010}$ | $0.0425^{+0.0015}_{-0.0014}$
$\Omega_{c}$ | $0.2603^{+0.0073}_{-0.0069}$ | $0.2604^{+0.0072}_{-0.0073}$ | $0.2478^{+0.0066}_{-0.0054}$ | $0.2607^{+0.0072}_{-0.0073}$
$w_{0}$ | $-1.0439^{+0.0964}_{-0.0846}$ | $-$ | $-1.1216^{+0.0930}_{-0.0848}$ | $-$
$w_{a}$ | $0.0823^{+0.2852}_{-0.3685}$ | $-$ | $0.1517^{+0.3113}_{-0.3585}$ | $-$
$\delta_{\rm G}$ | $-$ | $0.0009^{+0.0042}_{-0.0043}$ | $-$ | $-0.0066^{+0.0023}_{-0.0022}$
$\delta_{\Lambda}$ | $-$ | $-0.0525^{+0.1365}_{-0.1466}$ | $-$ | $-0.2832^{+0.1025}_{-0.0966}$
$H_{0}~{}[{\rm km~{}s^{-1}~{}Mpc^{-1}}]$ | $68.23^{+0.90}_{-0.86}$ | $67.71^{+2.64}_{-2.40}$ | $69.98^{+0.71}_{-0.81}$ | $72.69^{+1.23}_{-1.28}$
$\Omega_{\rm m}$ | $0.3084^{+0.0083}_{-0.0080}$ | $0.3092^{+0.0078}_{-0.0081}$ | $0.2935^{+0.0075}_{-0.0062}$ | $0.3031^{+0.0073}_{-0.0073}$
${H_{0}}~{}{\rm tension}$ | $3.47\sigma$ | $2.18\sigma$ | $2.51\sigma$ | $0.71\sigma$
$\chi_{\rm min}^{2}$ | $1043.045$ | $1043.037$ | $1054.865$ | $1047.409$
$\Delta{\rm AIC}$ | $3.498$ | $3.501$ | $-2.794$ | $-10.250$
$\Delta{\rm BIC}$ | $13.431$ | $13.423$ | $7.133$ | $-0.323$

Table 3: The constraint results of parameters in the two-parameter extension models with the CBS and CBSH data.

## 4 Results and discussion

We show the posterior distributions of cosmological parameters in the $\Lambda$CDM model and the one-parameter extensions to $\Lambda$CDM in Figs. 1–3 and report the detailed results in Tables 1 and 2. Fig. 1 and Table 1 show the results of using the CBS data to constrain the $\Lambda$CDM model and its one-parameter extensions, i.e., $w$CDM, $\Lambda(t)$CDM, and ${\tilde{\Lambda}}$CDM.
We can see that, in this case, only $w$CDM can slightly alleviate the $H_{0}$ tension, with a best-fit value of $H_{0}$ equal to $68.25~{\rm km~s^{-1}~Mpc^{-1}}$; $\Lambda(t)$CDM and ${\tilde{\Lambda}}$CDM give even smaller best-fit values of $H_{0}$, equal to $67.49^{+0.81}_{-0.85}~{\rm km~s^{-1}~Mpc^{-1}}$ and $66.95^{+1.39}_{-1.35}~{\rm km~s^{-1}~Mpc^{-1}}$, respectively. This is because the CBS data lead to central values of $w<-1$ in $w$CDM, $\beta>0$ in $\Lambda(t)$CDM, and $\delta_{G}>0$ in ${\tilde{\Lambda}}$CDM. It is known that the phantom energy case of $w<-1$ can lead to a larger $H_{0}$. The cases of $\beta>0$ in $\Lambda(t)$CDM and $\delta_{G}>0$ in ${\tilde{\Lambda}}$CDM do not realize an effective phantom; on the contrary, they actually realize an effective quintessence, and thus in this situation $\Lambda(t)$CDM and ${\tilde{\Lambda}}$CDM cannot effectively alleviate the $H_{0}$ tension. We can see from Fig. 1 that both $\Lambda(t)$CDM and ${\tilde{\Lambda}}$CDM are basically in good agreement with the $\Lambda$CDM cosmology under the CBS constraint. In addition, from Table 1 we find that $\Lambda$CDM fits the CBS data best, and the other three extension models do not provide a good fit to the CBS data, as can be seen from their large values of $\Delta{\rm AIC}$ and $\Delta{\rm BIC}$. However, when the direct $H_{0}$ measurement from the SH0ES team is added to the data combination, the situation changes dramatically. We now consider the ${\rm CBS}+H_{0}$ data combination (also abbreviated as CBSH), with the constraint results shown in Fig. 2 and Table 2. We find that in this case $w$CDM and ${\tilde{\Lambda}}$CDM can yield larger values of $H_{0}$, but $\Lambda(t)$CDM still cannot make $H_{0}$ larger.
Actually, even with the $H_{0}$ prior included in the data combination, one cannot detect the coupling between vacuum energy and cold dark matter in $\Lambda(t)$CDM; the constraint on $\beta$ is $\beta=-0.0030\pm 0.0062$. Therefore, $\Lambda(t)$CDM cannot help alleviate the $H_{0}$ tension (the tension remains at 3.36$\sigma$). Although $w$CDM slightly prefers a phantom energy with $w=-1.0832^{+0.0324}_{-0.0339}$, and the anti-correlation between $w$ and $H_{0}$ can help relieve the $H_{0}$ tension, it still cannot lead to a large enough value of $H_{0}$; it gives $H_{0}=69.88^{+0.77}_{-0.76}~{\rm km~s^{-1}~Mpc^{-1}}$, and the tension remains at 2.57$\sigma$. Evidently, the focus is on ${\tilde{\Lambda}}$CDM. When the $H_{0}$ prior is added to the data combination, ${\tilde{\Lambda}}$CDM yields a much larger $H_{0}$, i.e., $H_{0}=70.69^{+1.06}_{-1.08}~{\rm km~s^{-1}~Mpc^{-1}}$, enormously relieving the $H_{0}$ tension (the tension is now at 1.88$\sigma$). This is because a non-zero $\delta_{G}$ is obtained in this case, i.e., $\delta_{\rm G}=-0.0062^{+0.0025}_{-0.0023}$. A negative $\delta_{G}$ implies that the “cosmological constant” in ${\tilde{\Lambda}}$CDM grows with the cosmological evolution, so it actually acts as an effective phantom providing a stronger repulsive force driving the cosmic acceleration. A faster late-time cosmic expansion means a larger $H_{0}$, and thus a more negative $\delta_{G}$ yields a larger $H_{0}$. In Fig. 3, we compare the constraints from CBS and CBSH on ${\tilde{\Lambda}}$CDM. We can clearly see that, when the $H_{0}$ prior is added, the situation changes dramatically, as the value of $\delta_{G}$ shifts from being consistent with 0 to a significantly negative value. The anti-correlation between $\delta_{G}$ and $H_{0}$ is also explicitly shown, and we can immediately see that a negative $\delta_{G}$ leads to a high value of $H_{0}$.
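This effective phantom behavior can be seen directly from Eq. (2.3): with $\delta_{\Lambda}\approx 0.47\,\delta_{\rm G}<0$, the dark-energy density $\rho_{{}_{\Lambda}}\propto(1+z)^{\delta_{\Lambda}}$ grows as $z$ decreases, i.e., toward the present. A small sketch (the value of $\delta_{\Lambda}$ is illustrative):

```python
# Relative dark-energy density rho_Lambda(z)/rho_Lambda(0) = (1+z)**delta_L,
# following Eq. (2.3) with G ~ G_0. A negative delta_L (illustrative value)
# means the density grows as the universe expands: an effective phantom.
delta_l = -0.28  # illustrative, of the order of the e-tilde-LCDM CBSH fit

def rho_lambda_ratio(z):
    return (1 + z) ** delta_l
```

The density at $z=1$ lies below its present value, so it increases with cosmic time, as the text describes.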
In the cases of CBS and CBSH, the $H_{0}$ tension is at 3.59$\sigma$ and 1.88$\sigma$, respectively. In addition, it is easily found that, in the CBS case, ${\tilde{\Lambda}}$CDM is not favored, but in the CBSH case, ${\tilde{\Lambda}}$CDM is strongly preferred (see the negative values of $\Delta$AIC and $\Delta$BIC in Table 2). Therefore, for ${\tilde{\Lambda}}$CDM, we find that $\delta_{G}$ is very sensitive to $H_{0}$, and the local measurement of $H_{0}$ in the datasets becomes a dominant factor in the cosmological fit. For $\Lambda(t)$CDM, however, the coupling parameter $\beta$ is not sensitive to $H_{0}$. We can thus conclude that ${\tilde{\Lambda}}$CDM, as a kind of interacting dark energy model, behaves much better than $\Lambda(t)$CDM in the sense of resolving the $H_{0}$ tension. Next, we turn to the two-parameter extension models, i.e., the CPL and e${\tilde{\Lambda}}$CDM models. The main results are shown in Figs. 4–6 and Table 3. The comparison of CPL and e${\tilde{\Lambda}}$CDM is given in Figs. 4 and 5; Fig. 4 shows the case of CBS and Fig. 5 shows the case of CBSH. From Fig. 4, we find that in the CBS case neither CPL nor e${\tilde{\Lambda}}$CDM can effectively alleviate the $H_{0}$ tension. In this case, in e${\tilde{\Lambda}}$CDM, both $\delta_{G}$ and $\delta_{\Lambda}$ are well consistent with 0, and thus the value of $H_{0}$ cannot be increased (see Table 3 for detailed results). From Fig. 5, we find that, once the $H_{0}$ prior is added to the data combination, the situation for e${\tilde{\Lambda}}$CDM changes dramatically. In this case, we have $\delta_{G}=-0.0066^{+0.0023}_{-0.0022}$ and $\delta_{\Lambda}=-0.2832^{+0.1025}_{-0.0966}$, i.e., $\delta_{G}<0$ and $\delta_{\Lambda}<0$ at more than the 2$\sigma$ level. Hence, e${\tilde{\Lambda}}$CDM in the CBSH case can also yield an effective phantom behavior, which leads to a high value of $H_{0}$, i.e., $H_{0}=72.69^{+1.23}_{-1.28}~{\rm km~s^{-1}~Mpc^{-1}}$.
Therefore, in the CBSH case, e${\tilde{\Lambda}}$CDM can well resolve the $H_{0}$ tension, with the tension relieved to the 0.71$\sigma$ level. The comparison of the values of $\Delta{\rm AIC}$ and $\Delta{\rm BIC}$ is explicitly shown in Table 3, and we can see that the e${\tilde{\Lambda}}$CDM model in the CBSH case is the best one (with $\Delta{\rm AIC}=-10.250$ and $\Delta{\rm BIC}=-0.323$) in the sense of both relieving the $H_{0}$ tension and fitting the observational data. In Fig. 6, for the constraints on e${\tilde{\Lambda}}$CDM, we make a comparison between the cases of CBS and CBSH. From the posterior distributions of $\delta_{G}$, $\delta_{\Lambda}$, and $H_{0}$, we can clearly see their shifts after the addition of the $H_{0}$ prior to the data combination.

## 5 Robustness of results

There may still be some concerns about the robustness of our results. The first concern could arise from the belief that baryons and radiation should receive smaller modifications than cold dark matter, i.e., that the interaction between dark energy and radiation (or baryons) is tightly constrained. Using the CBS and CBSH datasets, we make several attempts, assigning different values of $\delta_{G}$ to the matter and radiation terms in $E^{2}(z)$ in Eq. (2.4), to illustrate the effects of the different components on the results. Indeed, we find that the coupling of cold dark matter and dark energy plays an important role in the interaction between matter and dark energy.
Therefore, in this article, we present the results of the particular case in which only cold dark matter and dark energy interact, $E^{2}(z)=\Omega_{\rm c}(1+z)^{(3-\delta_{\rm G})}+\Omega_{\rm b}(1+z)^{3}+\Omega_{\rm r}(1+z)^{4}+\Omega_{{}_{\Lambda}}(1+z)^{\delta_{\Lambda}}.$ (5.1) Namely, we only consider corrections to the evolutions of cold dark matter and dark energy, and assume that $\delta_{\rm G}$ and $\delta_{\Lambda}$ are independent of each other, so the resulting model can be considered a limiting case of the e${\tilde{\Lambda}}$CDM model. Hereafter, this limiting e${\tilde{\Lambda}}$CDM model is abbreviated as l${\tilde{\Lambda}}$CDM.

Data | $\rm CBS$ | $\rm CBSH$
---|---|---
$\delta_{\rm G}$ | $0.0007^{+0.0050}_{-0.0047}$ | $-0.0071^{+0.0041}_{-0.0037}$
$\delta_{\Lambda}$ | $-0.0595^{+0.1431}_{-0.1403}$ | $-0.3442^{+0.1143}_{-0.1056}$
$H_{0}~{}[{\rm km~{}s^{-1}~{}Mpc^{-1}}]$ | $68.06\pm 1.36$ | $71.10^{+0.94}_{-1.07}$
$\Omega_{\rm m}$ | $0.3090\pm 0.0080$ | $0.2973^{+0.0073}_{-0.0063}$

Table 4: The constraint results of $\delta_{\rm G}$, $\delta_{\Lambda}$, $H_{0}$, and $\Omega_{\rm m}$ in the l${\tilde{\Lambda}}$CDM model with the CBS and CBSH datasets.

The constraint results using the CBS and CBSH datasets are listed in Table 4. The l${\tilde{\Lambda}}$CDM model gives $H_{0}=(68.06\pm 1.36)~{\rm km~s^{-1}~Mpc^{-1}}$ with the CBS dataset and a relatively larger value $H_{0}=71.10^{+0.94}_{-1.07}~{\rm km~s^{-1}~Mpc^{-1}}$ with the CBSH dataset. The $H_{0}$ tension is relieved to $1.68\sigma$ with the CBSH dataset. This result implies that the interaction of dark energy and cold dark matter plays a dominant role in the background evolution of the e${\tilde{\Lambda}}$CDM model. Therefore, our models can still be effective in resolving the $H_{0}$ tension even if the modifications to the evolutions of radiation and baryons are negligible.
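Eq. (5.1) differs from Eq. (2.4) only in that the $\delta_{\rm G}$ correction applies to the cold-dark-matter term alone, with baryons and radiation keeping their standard scalings. A minimal sketch (the parameter values below are illustrative, not the fitted ones):

```python
# Eq. (5.1): the limiting case in which only cold dark matter and dark
# energy interact; baryons and radiation keep their standard scalings.
# All parameter values passed in are illustrative.
def E2_limiting(z, oc, ob, orad, delta_g, delta_l):
    ol = 1.0 - oc - ob - orad  # flatness fixes Omega_Lambda
    return (oc * (1 + z) ** (3 - delta_g) + ob * (1 + z) ** 3
            + orad * (1 + z) ** 4 + ol * (1 + z) ** delta_l)
```

As before, $E(0)=1$ for any $\delta_{\rm G,\Lambda}$, and setting $\delta_{\rm G}=\delta_{\Lambda}=0$ recovers the flat $\Lambda$CDM expansion rate.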
The second concern is that we used the CMB distance prior to constrain the models rather than the full power spectrum of Planck 2018. In the following, we test the difference between these two data choices in constraining the e${\tilde{\Lambda}}$CDM model. We use the MontePython code [65] to perform the MCMC analysis. We use the same two data combinations as above, i.e., CBS (full spectrum) and CBSH (full spectrum), in which the Planck TT,TE,EE+lowE+lensing data [1] are used as the CMB data, and the other cosmological data are still the same as in Section 3. We list the results in Table 5 and compare them with the previous results using the CMB distance prior in Table 3. We find that the mean values of the parameters slightly shift and the errors greatly shrink. For the e${\tilde{\Lambda}}$CDM model, the CBS and CBS (full spectrum) datasets give $H_{0}=67.71^{+2.64}_{-2.40}~{\rm km~s^{-1}~Mpc^{-1}}$ and $H_{0}=(68.17\pm 0.87)~{\rm km~s^{-1}~Mpc^{-1}}$, respectively; the CBSH and CBSH (full spectrum) datasets give $H_{0}=72.69^{+1.23}_{-1.28}~{\rm km~s^{-1}~Mpc^{-1}}$ and $H_{0}=(73.05\pm 0.56)~{\rm km~s^{-1}~Mpc^{-1}}$, respectively. These results show that although the full power spectrum data of the CMB provide more information than the distance prior, the main conclusions still hold. There is another tension, between the Planck base-$\Lambda$CDM cosmology and galaxy-clustering measurements of the matter fluctuations. With the full CMB anisotropy data, the amplitude of the matter power spectrum $\sigma_{8}$ and the related parameter $S_{8}=\sigma_{8}(\Omega_{m}/0.3)^{0.5}$ can be constrained, so the $\sigma_{8}$/$S_{8}$ tension can also be evaluated. We discuss the $\sigma_{8}$/$S_{8}$ tension for the CBSH (full spectrum) dataset, because the e${\tilde{\Lambda}}$CDM model can effectively relieve the $H_{0}$ tension with this dataset.
The CBSH (full spectrum) dataset gives $\sigma_{8}=0.8099\pm 0.0060$ and $S_{8}=0.8194\pm 0.0101$ in the $\Lambda$CDM model and $\sigma_{8}=0.8720\pm 0.0090$ and $S_{8}=0.8310\pm 0.0110$ in the e${\tilde{\Lambda}}$CDM model. We find that the value of $\sigma_{8}$ in the e${\tilde{\Lambda}}$CDM model is larger than that in the $\Lambda$CDM model, but the value of $S_{8}$ only slightly changes because $\Omega_{\rm m}$ tends to decrease in the e${\tilde{\Lambda}}$CDM model. We compare with the results from the combination of the KiDS/Viking and SDSS data, $\sigma_{8}=0.760^{+0.025}_{-0.020}$ and $S_{8}=0.766^{+0.020}_{-0.014}$ [66]. In the $\Lambda$CDM model, the $\sigma_{8}$ and $S_{8}$ tensions are at $2.36\sigma$ and $2.44\sigma$, respectively, while in the e${\tilde{\Lambda}}$CDM model they are at $4.26\sigma$ and $3.21\sigma$, respectively. As a crosscheck, we also compare our constraint results with the results in the literature [1, 43, 67] and find that they are statistically consistent. For example, in the $w$CDM model, we obtain $H_{0}=68.25^{+0.87}_{-0.89}~{\rm km~s^{-1}~Mpc^{-1}}$ using the CBS data, as shown in Table 1, and Ref. [1] gives $H_{0}=(68.34\pm 0.81)~{\rm km~s^{-1}~Mpc^{-1}}$ using the Planck 2018 TT,TE,EE+lowE+lensing+BAO+Pantheon data. Moreover, in the $\Lambda$(t)CDM model, we obtain $H_{0}=68.50^{+0.85}_{-0.82}~{\rm km~s^{-1}~Mpc^{-1}}$ using the CBSH data, as shown in Table 2, and Ref. [43] gives $H_{0}=(69.36\pm 0.82)~{\rm km~s^{-1}~Mpc^{-1}}$ also using the CBSH data, but with the Planck 2015 data and an earlier local $H_{0}$ measurement. Through all these tests of the robustness of the results, we further confirm that our models help relieve the $H_{0}$ tension.
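The $S_{8}$ values above follow directly from the definition $S_{8}=\sigma_{8}(\Omega_{m}/0.3)^{0.5}$; a one-line sketch, using the CBSH (full spectrum) e${\tilde{\Lambda}}$CDM central values from Table 5:

```python
# S8 = sigma8 * (Omega_m / 0.3)^0.5, as defined in the text.
def s8(sigma8, omega_m):
    return sigma8 * (omega_m / 0.3) ** 0.5

# e-tilde-LCDM, CBSH (full spectrum): sigma8 = 0.8720, Omega_m = 0.2724
# reproduces the tabulated S8 = 0.8310 to within rounding.
```

This makes explicit why $S_{8}$ barely moves even though $\sigma_{8}$ increases: the lower $\Omega_{\rm m}$ in the e${\tilde{\Lambda}}$CDM fit compensates.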
Data | CBS (full spectrum) | CBSH (full spectrum)
---|---|---
Model | $\Lambda$CDM | $e{\tilde{\Lambda}}$CDM model | $\Lambda$CDM | $e{\tilde{\Lambda}}$CDM model
$\delta_{\rm G}$ | $-$ | $0.00030\pm 0.00101$ | $-$ | $-0.00387^{+0.00054}_{-0.00072}$
$\delta_{\Lambda}$ | $-$ | $-0.1102\pm 0.1201$ | $-$ | $-0.2511^{+0.0162}_{-0.0183}$
$H_{0}~{}[{\rm km~{}s^{-1}~{}Mpc^{-1}}]$ | $67.71\pm 0.41$ | $68.17\pm 0.87$ | $68.01\pm 0.40$ | $73.05\pm 0.56$
$\Omega_{\rm m}$ | $0.3108\pm 0.0055$ | $0.3075\pm 0.0080$ | $0.3068\pm 0.0053$ | $0.2724^{+0.0052}_{-0.0045}$
$\sigma_{8}$ | $0.8111\pm 0.0061$ | $0.8150\pm 0.0121$ | $0.8099\pm 0.0060$ | $0.8720\pm 0.0090$
$S_{8}$ | $0.8263\pm 0.0102$ | $0.8252\pm 0.0132$ | $0.8194\pm 0.0101$ | $0.8310\pm 0.0110$

Table 5: The constraint results of $\delta_{\rm G}$, $\delta_{\Lambda}$, $H_{0}$, $\Omega_{\rm m}$, $\sigma_{8}$, and $S_{8}$ in the $\Lambda$CDM model and the $e{\tilde{\Lambda}}$CDM model with the CBS (full spectrum) and CBSH (full spectrum) datasets.

## 6 Conclusion

In this work, we consider a phenomenological cosmological model motivated by the asymptotic safety of the gravitational field theory. In this model, the matter and radiation densities and the cosmological constant receive corrections parametrized by the parameters $\delta_{G}$ and $\delta_{\Lambda}$, so that the evolutions of the matter and radiation densities and of the cosmological constant slightly deviate from their standard forms. Actually, this model can be explained by the scenario of vacuum energy interacting with matter and radiation. Furthermore, we consider two cases of the model: (i) ${\tilde{\Lambda}}$CDM, with one additional free parameter $\delta_{G}$, in which $\delta_{\rm G}$ and $\delta_{\Lambda}$ are related by a low-redshift limit relation, and (ii) e${\tilde{\Lambda}}$CDM, with two additional free parameters $\delta_{G}$ and $\delta_{\Lambda}$ independent of each other.
We use the current observational data (CBS and CBSH) to constrain the models. We find that, when using the CBS data, neither ${\tilde{\Lambda}}$CDM nor e${\tilde{\Lambda}}$CDM can effectively alleviate the $H_{0}$ tension. In this case, we obtain that both $\delta_{G}$ and $\delta_{\Lambda}$ are around 0, and thus the models are well consistent with $\Lambda$CDM. Actually, in this case, the CBS data prefer $\Lambda$CDM over ${\tilde{\Lambda}}$CDM and e${\tilde{\Lambda}}$CDM. However, when the direct measurement of $H_{0}$ by the SH0ES team is added to the data combination (i.e., CBSH is considered), the situation changes dramatically. We find that in this case both $\delta_{G}<0$ and $\delta_{\Lambda}<0$ are obtained at more than 2$\sigma$ significance. We find that, when using the CBSH data to constrain ${\tilde{\Lambda}}$CDM and e${\tilde{\Lambda}}$CDM, the $H_{0}$ tension can be greatly relieved. In particular, in the case of e${\tilde{\Lambda}}$CDM, the $H_{0}$ tension can be reduced to 0.71$\sigma$. In addition, through an analysis of model selection using the information criteria, we find that the CBSH data prefer e${\tilde{\Lambda}}$CDM over $\Lambda$CDM. We also perform some tests of the robustness of our results, including a limiting case in which the modifications to the evolutions of radiation and baryons are negligible, a comparison between using the CMB distance prior and the full power spectrum of Planck 2018 to constrain the parameters, and a crosscheck against previous results in other works. These tests confirm our results, so we can conclude, from a comprehensive analysis, that e${\tilde{\Lambda}}$CDM as an interacting dark energy model is much better than $\Lambda(t)$CDM in the sense of both relieving the $H_{0}$ tension and fitting the current observational data.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos.
11975072, 11875102, 11835009, and 11690021), the Liaoning Revitalization Talents Program (Grant No. XLYC1905011), the Fundamental Research Funds for the Central Universities (Grant No. N2005030), and the Top- Notch Young Talents Program of China (Grant No. W02070050). ## References * [1] N. Aghanim et al. [Planck], Astron. Astrophys. 641 (2020), A6 doi:10.1051/0004-6361/201833910 [arXiv:1807.06209 [astro-ph.CO]]. * [2] A. G. Riess, S. Casertano, W. Yuan, L. M. Macri and D. Scolnic, Astrophys. J. 876, no. 1, 85 (2019) doi:10.3847/1538-4357/ab1422 [arXiv:1903.07603 [astro-ph.CO]]. * [3] F. Beutler et al., Mon. Not. Roy. Astron. Soc. 416, 3017 (2011) doi:10.1111/j.1365-2966.2011.19250.x [arXiv:1106.3366 [astro-ph.CO]]. * [4] A. J. Ross, L. Samushia, C. Howlett, W. J. Percival, A. Burden and M. Manera, Mon. Not. Roy. Astron. Soc. 449, no. 1, 835 (2015) doi:10.1093/mnras/stv154 [arXiv:1409.3242 [astro-ph.CO]]. * [5] S. Alam et al. [BOSS Collaboration], Mon. Not. Roy. Astron. Soc. 470, no. 3, 2617 (2017) doi:10.1093/mnras/stx721 [arXiv:1607.03155 [astro-ph.CO]]. * [6] E. Di Valentino, O. Mena, S. Pan, L. Visinelli, W. Yang, A. Melchiorri, D. F. Mota, A. G. Riess and J. Silk, [arXiv:2103.01183 [astro-ph.CO]]. * [7] L. Verde, T. Treu and A. G. Riess, Nature Astron. 3, 891 doi:10.1038/s41550-019-0902-0 [arXiv:1907.10625 [astro-ph.CO]]. * [8] E. Di Valentino, Nature Astron. 1 (2017) no.9, 569-570 doi:10.1038/s41550-017-0236-8 [arXiv:1709.04046 [physics.pop-ph]]. * [9] E. Di Valentino, L. A. Anchordoqui, O. Akarsu, Y. Ali-Haimoud, L. Amendola, N. Arendse, M. Asgari, M. Ballardini, S. Basilakos and E. Battistelli, et al. [arXiv:2008.11284 [astro-ph.CO]]. * [10] W. L. Freedman, Nature Astron. 1 (2017), 0121 doi:10.1038/s41550-017-0121 [arXiv:1706.02739 [astro-ph.CO]]. * [11] A. G. Riess, Nature Rev. Phys. 2 (2019) no.1, 10-12 doi:10.1038/s42254-019-0137-0 [arXiv:2001.03624 [astro-ph.CO]]. * [12] D. N. Spergel, R. Flauger and R. Hložek, Phys. Rev. 
# The VLT-MUSE and ALMA view of the MACS 1931.8-2635 brightest cluster galaxy

Ciocan B. I. (1), Ziegler B. L. (1), Verdugo M. (1), Papaderos P. (1, 2, 3), Fogarty K. (4, 5), Donahue M. (6), Postman M. (4)

(1) Institute for Astronomy (IfA), University of Vienna, Türkenschanzstrasse 17, A-1180 Vienna
(2) Instituto de Astrofísica e Ciências do Espaço, Universidade de Lisboa, OAL, Tapada da Ajuda, PT1349-018 Lisboa, Portugal
(3) Departamento de Física, Faculdade de Ciências da Universidade de Lisboa, Edifício C8, Campo Grande, PT1749-016 Lisboa, Portugal
(4) Division of Physics, Math, and Astronomy, California Institute of Technology, Pasadena, CA, USA
(5) Space Telescope Science Institute, Baltimore, MD, USA
(6) Physics and Astronomy Dept., Michigan State University, East Lansing, MI 48824, USA

(Received 11.2020; accepted 01.2021)

We reveal the importance of ongoing in-situ star formation in the Brightest Cluster Galaxy (BCG) of the massive cool-core CLASH cluster MACS 1931.8-2635 at a redshift of z=0.35 by analysing archival VLT-MUSE optical integral field spectroscopy. Using a multi-wavelength approach, we assess the stellar and warm ionized medium components, spatially resolved by the VLT-MUSE spectroscopy, and link them to the molecular gas by incorporating sub-mm ALMA observations. We measure the fluxes of strong emission lines such as $\rm{[O\textsc{ii}]}\>\lambda 3727$, $\rm{H\beta}$, $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{H\alpha}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$ and $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$, allowing us to determine the physical conditions of the warm ionized gas, such as electron temperature, electron density, extinction, ionization parameter, (O/H) gas metallicities, star formation rates and gas kinematics, as well as the star formation history of the system. Our analysis reveals the ionizing sources in different regions of the galaxy.
The ionized gas flux brightness peak corresponds to the location of the supermassive black hole in the BCG, and the system shows a diffuse warm ionized gas tail extending 30 kpc in the NE direction. The ionized and molecular gas are co-spatial and co-moving, with the gaseous component in the tail likely falling inward, providing fuel for star formation and accretion-powered nuclear activity. The gas is ionized by a mix of star formation and other energetic processes which give rise to LINER-like emission, with active galactic nuclei emission dominant only in the BCG core. We measure a star formation rate of $\sim 97\>\rm{M_{\odot}/yr}$, with its peak at the BCG core. However, star formation accounts for only 50-60% of the energetics needed to ionize the warm gas. In-situ star formation generated by thermally unstable intracluster medium cooling and/or dry mergers dominate the stellar mass growth of the BCG at z < 0.5, and these mechanisms account for the build-up of 20% of the stellar mass of the system. Our measurements reveal that the most central regions of the BCG contain the lowest gas-phase oxygen abundance, whereas the $\rm{H\alpha}$ arm exhibits slightly more elevated values, suggesting the transport of gas out to large distances from the centre due to active galactic nuclei outbursts. The galaxy is a dispersion-dominated system, typical for massive elliptical galaxies. The gas and stellar kinematics are decoupled, with the gaseous velocity fields being more closely related to the bulk motions of the intracluster medium.

###### Key Words.: galaxies: clusters: general – galaxies: clusters: individual (MACS J1931.8-2635) – galaxies: clusters: ionisation – galaxies: clusters: kinematics and dynamics

## 1 Introduction

The centres of massive clusters are often dominated by massive elliptical galaxies (brightest cluster galaxies, BCGs hereafter), suggesting a strong link between the formation and evolution of the BCG and that of the host cluster.
For example, Lauer et al. (2014) have observationally demonstrated that the structural properties of the BCG depend on its specific location within the cluster, such that BCGs which are closer to the X-ray centre or which have smaller peculiar velocities (relative to the cluster mean) have more extended envelopes. This suggests that the inner regions of BCGs are formed outside the cluster, but that interactions, both gravitational and hydrodynamical, in the heart of the cluster lead to the growth of the envelopes of these systems. The unique properties of BCGs have been proposed to arise through a few special mechanisms. One such mechanism was proposed by Fabian & Nulsen (1977), who explain the formation and evolution of these systems as the result of cooling flows. Another explanation for the formation of these galaxies involves galactic cannibalism due to dynamical friction (Ostriker & Hausman, 1977). More recent theoretical models favour a two-phase hierarchical formation scenario for BCGs: rapid cooling and in-situ star formation at high redshifts, followed by growth through repeated mergers (e.g. De Lucia et al. 2007, Bellstedt et al. 2016, Lavoie et al. 2016, Burke et al. 2015, Cerulo et al. 2019). Several observations confirm the existence of BCGs in a state of an ongoing/recent merger phase, exhibiting, e.g., close companion galaxies (Rines et al., 2007). However, many BCGs located in cool-core clusters still exhibit signatures of significant star formation (SF) (e.g. Dopita et al. 2010, Tremblay et al. 2012) and also harbour significant amounts of molecular gas even at lower redshifts (e.g. McNamara et al. 2014, Olivares et al. 2019, Fogarty et al. 2019), indicating that their stellar mass can also be built up via in-situ SF at later epochs. Thermally unstable residual intracluster medium (ICM) cooling was shown to explain the ongoing SF in cluster cores (e.g. Voit & Donahue 2015).
However, in the absence of a source of external heating, cluster cores would experience rapid cooling with $\tau_{cool}<1$ Gyr and extremely high rates of mass deposition (up to $\sim 1000\>\rm{M_{\odot}/yr}$) onto the BCG, promoting starbursts (e.g. McNamara & Nulsen 2007). The lack of such observational signatures indicates that a source of heating must be present, and the best candidate is the central active galactic nucleus (AGN). Thus, in recent years it has become clear that galaxy cluster cores are particularly well suited to study the feedback processes that are thought to inhibit the cooling of gas. Cosmological hydrodynamical simulations have shown that AGN feedback can regulate the cooling in the ICM, such that some ICM condensation occurs, but the overall energy injected by the AGN into the ICM offsets radiative cooling and prevents the formation of cooling flows of thousands of $\rm{M_{\odot}/yr}$ (Li et al. 2015, Li et al. 2017, Gaspari et al. 2018). The modelling of mechanical-mode AGN feedback has demonstrated that, as feedback acts on the system, the outflows it drives can promote condensation of the hot, low-entropy ambient medium by raising some of it to greater altitudes. This adiabatic uplift promotes condensation by lowering the cooling-time to free-fall-time ratio $\rm{(t_{cool}/t_{ff})}$ of the uplifted gas. The diffuse X-ray emitting gas in the cluster centre is expected to become highly susceptible to local thermal instabilities once $\rm{(t_{cool}/t_{ff})}$ drops below a threshold of $\sim$10. Once this happens, the condensates can rain down toward the bottom of the potential well, giving rise to kpc-long cold filaments threading the BCG and its outskirts.
This rain of cold gas into the galaxy at first provides additional fuel for star formation and AGN feedback and temporarily boosts the strength of the outflows, but eventually those strengthening outflows add enough heat to the ambient medium to raise $\rm{(t_{cool}/t_{ff})}$ high enough to stop the condensation. Therefore, this precipitation-regulated feedback is a “cold feedback” mechanism that fuels a central black hole through “chaotic cold accretion” (Gaspari et al. 2013, Voit & Donahue 2015, Prasad & Sharma 2015, Voit et al. 2017). For example, Donahue et al. (2015) used CLASH-HST UV photometric data to study the UV morphologies and SFRs of 25 CLASH BCGs with redshifts $z\sim 0.2-0.9$. They demonstrated that the only cluster cores hosting BCGs with detectable SF are those with low-entropy X-ray emitting gas in their centres, in accordance with theoretical models. These galaxies exhibit a wide diversity of morphologies, with strong UV-excess systems showing distinctive knots, multiple elongated clumps, and extended filaments of emission that differ from the smooth profiles of the UV-quiet BCGs. These filamentary structures, which are similar to those seen in SF BCGs at lower z, suggest bi-polar streams of clumpy star formation. The unobscured star formation rates (SFRs) estimated from the UV images are of the order of $80\>M_{\odot}yr^{-1}$ in the most extended and highly structured systems. The morphology of the star-forming UV structures is very similar to the cold-gas structures which are produced in simulations of precipitation-driven AGN feedback (Li et al., 2014b). Likewise, Tremblay et al. (2015) have analysed the star-forming clouds and filaments in the central regions of 16 lower-redshift (z < 0.3) cool-core BCGs based on a multi-wavelength approach. The systems exhibit SFRs from effectively 0 up to $\sim 150\>M_{\odot}/yr$.
Most of the filaments they study are FUV-bright and star-forming, and the authors suggest that they have either been uplifted by the radio lobe or buoyant X-ray cavity, or have formed in situ by jet-triggered star formation or rapid cooling in the cavities’ compressed shell. For the great majority of the BCGs, the maximal projected radius at which FUV emission is observed corresponds to $\rm{(t_{cool}/t_{ff})}\sim 10$, in accordance with theoretical predictions. To reveal the importance of ongoing in-situ star formation to the total mass build-up in the most massive galaxies of our universe, we present our investigation of the warm ionised gas in the BCG of MACS 1931.8-2635 using VLT-MUSE integral field spectroscopic (IFS) observations. The measurement of strong spectral line fluxes enables the investigation of the properties of the ionised gas and stellar component. ALMA sub-mm observations of the M1931 BCG from Fogarty et al. (2019) allow us to link the ionised gas properties to those of the cold molecular gas. The paper is structured as follows: in section 2, we describe the main physical properties of the MACS 1931.8-2635 galaxy cluster and its BCG; in section 3, we present the archival MUSE observations of MACS 1931.8-2635 and describe the additional sources of data used in this work. This section also describes the main steps of the data reduction and processing. Section 4 describes the analysis of the MUSE IFS observations based on different tools and pipelines. Section 5 presents the spatially resolved properties of the ionised gas (gas maps, kinematic maps, ionisation sources, electron temperature, electron density, extinction, ionisation parameter, SFRs, gas-phase metallicities). In this section, we also present the comparisons between ionised and molecular gas properties. The star formation history of the system, as well as the kinematics of the stellar component, are also discussed in this section.
In Section 6, we discuss implications for the formation of multiphase gas in the MACS 1931.8-2635 BCG, and the possible relationships between features observed in the optical, X-ray, and radio. We summarise our conclusions in section 7. Throughout this study, we have used the concordance $\rm{\Lambda CDM}$ cosmology with $\rm{H_{0}}=70\>\rm{km\>s^{-1}\>Mpc^{-1}}$, $\Omega_{0}=0.32$, $\Omega_{\Lambda}=0.68$. With these cosmological parameters, 1” subtends $\sim 5$ kpc at the redshift z=0.352 of the BCG.

## 2 Data

### 2.1 The MACS 1931.8-2635 galaxy cluster and its BCG

MACS 1931.8-2635 (hereafter M1931) is a massive, X-ray luminous, cool-core galaxy cluster at a redshift $z\sim 0.35$. The system was observed as part of the Cluster Lensing And Supernova survey with Hubble (CLASH; Postman et al., 2012) and the CLASH-VLT survey (Rosati et al., 2014). According to Postman et al. (2012), the galaxy cluster has an X-ray temperature $\rm{k\cdot T_{x}}=6.7$ keV and a bolometric luminosity $\rm{L_{b}=20.9\cdot 10^{44}\>ergs\>s^{-1}}$. Merten et al. (2015) reported a virial radius for the M1931 cluster of $\rm{r_{vir}}=1.61$ Mpc/h and a virial mass $\rm{M_{vir}=(0.83\pm 0.06)\cdot 10^{15}\>M_{\odot}/h}$ from their lensing analysis. On the other hand, Umetsu et al. (2016) derived a virial mass for the M1931 cluster of $\rm{M_{vir}=(1.802\pm 0.9)\cdot 10^{15}\>M_{\odot}/h}$, based on a comprehensive analysis of strong-lensing, weak-lensing shear and magnification data. Figure 1 displays a composite Hubble Space Telescope (HST) image of the M1931 BCG at a redshift of z=0.352, showing the F160W image in red, the F814W in green and the F390W in blue. The white contours display the $\rm{H\alpha}$ flux intensity, as measured from the MUSE data cube. This figure reveals a filamentary system with intense nebular emission. The extended emission in this galaxy seen in the HST data was reported and characterised by Fogarty et al. (2019).
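The angular scale adopted above (1” subtending $\sim 5$ kpc at z=0.352 for $\rm{H_{0}}=70$, $\Omega_{0}=0.32$, $\Omega_{\Lambda}=0.68$) can be verified with a short numerical integration. The sketch below uses only numpy rather than, e.g., astropy.cosmology, and assumes a flat universe:

```python
import numpy as np

def kpc_per_arcsec(z, H0=70.0, Om=0.32, OL=0.68):
    """Angular scale in kpc per arcsec for a flat LCDM cosmology.

    Comoving distance: D_C = (c/H0) * integral_0^z dz'/E(z'),
    angular-diameter distance: D_A = D_C / (1 + z).
    """
    c = 299792.458  # speed of light, km/s
    zz = np.linspace(0.0, z, 10001)
    inv_E = 1.0 / np.sqrt(Om * (1.0 + zz) ** 3 + OL)
    # trapezoidal rule for the comoving-distance integral
    D_C = (c / H0) * np.sum((inv_E[1:] + inv_E[:-1]) * np.diff(zz)) / 2.0  # Mpc
    D_A = D_C / (1.0 + z)                                                  # Mpc
    arcsec_in_rad = np.pi / (180.0 * 3600.0)
    return D_A * 1000.0 * arcsec_in_rad  # kpc per arcsec

print(round(kpc_per_arcsec(0.352), 2))  # ~4.93, i.e. 1" subtends ~5 kpc
```

The result, $\sim 4.9$ kpc per arcsec, is consistent with the $\sim 5$ kpc quoted in the text.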
The M1931 BCG has been shown to have an elevated SFR, ranging from $\sim 90\>\rm{M_{\odot}/yr}$ from UV data (Donahue et al., 2015) to $\sim 150\>\rm{M_{\odot}/yr}$ from IR data (Santos et al., 2016), up to $\sim 250\>\rm{M_{\odot}/yr}$ from UV through far-IR (Fogarty et al., 2017). Such high levels of star formation are not only atypical for “red and dead” elliptical galaxies, but they also imply a phase of significant ongoing stellar mass growth. The M1931 BCG has a stellar mass of $\rm{M_{*}\sim(5.9\pm 1.1)\cdot 10^{11}\>M_{\odot}}$ (Bellstedt et al., 2016). Observations have also demonstrated that this galaxy harbours one of the most X-ray luminous cool cores yet discovered, with an equivalent mass cooling rate of $\sim 165\>\rm{M_{\odot}/yr}$ according to Ehlert et al. (2011), hinting that it might be undergoing a phase of ICM condensation similar to, e.g., that in the Phoenix galaxy cluster (McDonald et al., 2012b). The central ICM entropy is estimated to be $\rm{K_{0}=k_{B}\cdot T_{X}\cdot n_{e}^{-2/3}=14\pm 4\>keV\>cm^{2}}$ according to Donahue et al. (2015); hence, the M1931 BCG has a low core entropy $(\rm{K_{0}<30\>keV\>cm^{2}})$, which is a necessary condition for a multi-phase, star-forming system (Voit & Donahue, 2015). Ehlert et al. (2011) demonstrated that on scales of $r\sim 200$ kpc, a spiral of cooler, denser X-ray gas is observed to wrap around the core. Such spiral structures arise naturally from mergers and subsequent sloshing. Both the X-ray and optical data reveal oscillatory motion of the cool core along a roughly north–south direction, as well as extended Intra-Cluster Light (ICL), suggesting that the BCG likely experienced a merger along the north–south direction.
Sub-mm ALMA observations (Fogarty et al., 2019) of M1931 have revealed that this BCG harbours one of the largest known reservoirs of cold gas in a cluster core ($\rm{M_{H_{2}}=(1.9\pm 0.3)\cdot 10^{10}M_{\odot}}$), as well as large amounts of dust, with several dust clumps having temperatures of less than 10 K. The M1931 BCG represents an example of a cluster with a rapidly cooling core and powerful AGN feedback. The AGN outburst combined with merger-induced motion has most likely led to the destruction of the cool core. This system is probably transitioning between two dominant modes of fuelling for star formation and feedback. It might be evolving from a quasar-mode cooling and feedback cycle typical of higher-redshift cool cores to the weaker and less efficient feedback and cooling mode typical of lower-redshift cluster cores.

## 3 Data processing

### 3.1 Observations and data reduction

M1931 was observed in June 2015 with the ESO-VLT MUSE integral field spectrograph (Bacon et al., 2014) under the GTO program 095.A-0525 (PI: Kneib, J.-P.). The MUSE pointing consists of three exposures of t=2924 s each, all centred on the cluster core. Both the raw and reduced data can be found on the ESO Science Archive Facility. We have used the raw data and reduced them with the standard calibrations provided by the ESO-MUSE pipeline, version 1.2.1 (Weilbacher et al., 2020). To reduce the sky residuals, we have applied an additional tool, the Zurich Atmosphere Purge version 1.0 (ZAP; Soto et al., 2016), to the calibrated cubes. ZAP is a tool specially developed for the reduction of IFU data, and its core functionality is sky subtraction based on principal component analysis (PCA).
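The core idea of PCA-based sky-residual subtraction can be sketched with plain numpy on synthetic data. This is a toy illustration only, not the actual ZAP algorithm, which additionally applies filtering and data segmentation:

```python
import numpy as np

def pca_sky_subtract(cube, sky_mask, n_comp=5):
    """Minimal PCA sky-residual removal in the spirit of ZAP.

    cube     : array of shape (n_wave, ny, nx)
    sky_mask : boolean (ny, nx) map of sky-dominated spaxels
    n_comp   : number of sky eigenspectra to keep
    """
    n_wave = cube.shape[0]
    spectra = cube.reshape(n_wave, -1)           # (n_wave, n_spaxels)
    sky = spectra[:, sky_mask.ravel()]
    mean_sky = sky.mean(axis=1, keepdims=True)
    # Eigenspectra of the sky residuals via SVD
    U, s, _ = np.linalg.svd(sky - mean_sky, full_matrices=False)
    basis = U[:, :n_comp]                        # (n_wave, n_comp)
    # Project every spaxel onto the sky eigenspectra and subtract the model
    resid = spectra - mean_sky
    sky_model = mean_sky + basis @ (basis.T @ resid)
    return (spectra - sky_model).reshape(cube.shape)
```

On a cube whose spaxels are linear combinations of a few sky spectra, this removes the sky component almost exactly; real data additionally require the care ZAP takes to avoid absorbing object flux into the eigenspectra.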
The tool employs filtering and data segmentation to enhance the inherent capabilities of PCA for sky subtraction. It constructs a sky-residual spectrum for each spaxel, which can then be subtracted from the original data cube; it therefore reduces sky-emission residuals while preserving the flux and line shapes of the astronomical objects. After accounting for the sky residuals for the three exposures, we combined them into a single data cube using the MUSE Python Data Analysis Framework (MPDAF; Bacon et al., 2016). The final calibrated data cube has a FoV of $1.1\>\rm{arcmin^{2}}$, a spatial sampling of $0.2"$ in the wavelength range $4750-9350\>\AA$ and a spectral resolution of $\sim 2.5\>\AA$. The optical IFU data are supplemented by sub-mm ALMA observations from Fogarty et al. (2019), allowing us to link the warm ionised gas to the cold molecular gas component. This sub-mm data set contains Band 3 (beam size: 0.82 arcsec x 0.53 arcsec), Band 6 (beam size: 0.87 arcsec x 0.72 arcsec) and Band 7 (beam size: 0.94 arcsec x 0.78 arcsec) ALMA observations. For the astrometric mismatch correction, we also made use of the CLASH SUBARU and HST photometric data for the M1931 cluster (Postman et al., 2012).

### 3.2 Astrometric alignment between MUSE and ALMA

Our analysis of the MUSE and ALMA data requires that the images are aligned to a common astrometric reference frame. For this, we have used the white-light image of the MUSE cube and corrected it for the astrometric mismatch to the SUBARU Suprime-Cam r-band image available from the CLASH project website (https://archive.stsci.edu/prepds/clash/), which has excellent alignment with the HST observations. According to Fogarty et al. (2019), the ALMA data are aligned with the HST data, and any systematic astrometric alignment errors between the two data sets are not of significant concern.
Thus, accounting for the astrometric mismatch between the MUSE image and the Subaru image is equivalent to accounting for the astrometric mismatch between MUSE and ALMA. For the correction of the astrometric mismatch, we have developed a PYTHON code using different MPDAF routines. To test the validity of the alignment between the MUSE and SUBARU images, we have used an additional PYTHON package, image_registration, and we measure an offset between the two images, using its DFT upsampling method, of [0.072, -0.17] pixels (i.e. [0.014, -0.034] arcsec). These values are smaller than one third of the point spread function (PSF) of the MUSE IFS, which is 3.9 pixels, and hence the two images are almost perfectly aligned. To further test the quality of our astrometric correction, we used the catalogue with the coordinates of Two Micron All-Sky Survey stars which fall in the field of view of MUSE and investigated their positions in our images. We observe a very good agreement, with a minimal offset. Moreover, the position we derive for the AGN from the BPT analysis (see Sect. 5.5) of the MUSE data coincides with the location of the sub-mm continuum point source, as observed in the ALMA data.

### 3.3 MUSE-to-ALMA ratio maps

We have created $\rm{H\alpha}$-CO(1-0) and $\rm{H\alpha}$-CO(3-2) flux, velocity and velocity dispersion ratio (or difference) maps by dividing the corresponding MUSE maps by the ALMA moment maps (or subtracting the latter from the former). To be able to compare the two data sets, we have smoothed the MUSE maps to match the resolution of the ALMA data, using the PYTHON astropy.convolution function. Then, we have applied the reproject task from the PYTHON astropy ecosystem to resample the ALMA data onto the MUSE pixel grid. This function resamples the data to a new projection using interpolation, and it essentially tells the user which pixels in the new image had a corresponding pixel in the old image.
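The map-matching procedure (resolution smoothing, then resampling onto a common pixel grid) can be illustrated with a plain-numpy sketch. In practice astropy.convolution and the reproject package perform these steps with proper PSF kernels, NaN handling and WCS-aware interpolation; the functions below are simplified stand-ins:

```python
import numpy as np

def gaussian_smooth(img, sigma_pix):
    """Separable Gaussian smoothing, analogous to convolving with an
    astropy Gaussian2DKernel (without the NaN interpolation)."""
    r = int(4 * sigma_pix) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma_pix) ** 2)
    k /= k.sum()
    pad = np.pad(img, r, mode="edge")
    # convolve along rows, then along columns
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def regrid_nearest(img, ny_new, nx_new):
    """Nearest-neighbour resampling onto a new pixel grid, a crude
    stand-in for reproject's WCS-aware interpolation."""
    yi = (np.arange(ny_new) * img.shape[0] / ny_new).astype(int)
    xi = (np.arange(nx_new) * img.shape[1] / nx_new).astype(int)
    return img[np.ix_(yi, xi)]
```

Once both maps share a grid, the ratio (or difference) map is a simple element-wise division (or subtraction).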
Then, the MUSE map was divided by (or differenced with) the re-projected ALMA image. It is worth mentioning that, due to the lower spectral resolution of the ALMA data, molecular gas kinematics were measured only in the most central regions of the system.

## 4 Analysis

In the following section, we describe the analysis of the MUSE data, which allows us to study both the gaseous and stellar components of the BCG. Using the population spectral synthesis code Fitting Analysis using Differential Evolution Optimisation (FADO; Gomes & Papaderos, 2017) and PORTO3D (Papaderos et al. 2013, Gomes et al. 2016), as well as the MUSE Python Data Analysis Framework (MPDAF; Bacon et al., 2016), we reliably measure the fluxes of strong emission lines in the optical spectrum, such as $\rm{[O\textsc{ii}]}\>\lambda 3727$, $\rm{H\beta}$, $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{H\alpha}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$ and $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$, allowing us to investigate the ionising sources, as well as to derive the electron temperature, electron density, colour excess, ionisation parameter, star formation rates, (O/H) gas metallicities and gas kinematics. The star formation history was recovered by employing FADO and PORTO3D. The stellar kinematics are probed using the Galaxy IFU Spectroscopy Tool (GIST; Bittner et al., 2019), which implements the Voronoi binning routine (Cappellari & Copin, 2003) and the Python implementation of the Penalized PiXel-Fitting routine (pPXF; Cappellari & Emsellem, 2004).

### 4.1 Emission line flux measurements and star formation history with FADO

From the MUSE data cube, whose field of view encloses the central regions of the whole M1931 cluster, we extracted a sub-cube centred on the BCG using different MPDAF routines. This sub-cube consists of 90x90 spatial pixels (spaxels), i.e. 8100 spaxels in total.
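Extracting such a sub-cube amounts to slicing the (wavelength, y, x) data array; MPDAF's Cube.subcube method performs the same cut while also propagating the WCS and variance information. A minimal numpy sketch with toy dimensions:

```python
import numpy as np

def extract_subcube(cube, yc, xc, size=90):
    """Cut a size x size spaxel box centred on spaxel (yc, xc).

    Plain array slicing; MPDAF's Cube.subcube additionally handles
    the WCS and variance extensions.
    """
    half = size // 2
    return cube[:, yc - half:yc - half + size, xc - half:xc - half + size]

# Toy cube: 100 wavelength planes over a 300x300 spaxel field of view
cube = np.zeros((100, 300, 300))
sub = extract_subcube(cube, 150, 150)
print(sub.shape)  # (100, 90, 90) -> 8100 spaxels per wavelength plane
```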
Each spaxel of this MUSE sub-cube is fitted with the FADO pipeline (Gomes & Papaderos, 2017), a tool specially designed to perform population spectral synthesis (PSS), with the additional capability of automatically deriving emission-line fluxes and equivalent widths (EWs). This tool can identify the star formation history (SFH) that reproduces self-consistently both the observed nebular characteristics of a star-forming galaxy and the stellar SED. FADO is the first PSS (i.e., 'reverse') code to employ genetic optimisation under self-consistency boundary conditions. It uses an advanced variant of the genetic Differential Evolution Optimisation (DEO) algorithm of Storn & Price (1997), which has the advantage of permitting reliable convergence at an affordable expense of computational time. Further improvements of FADO in comparison to other PSS codes include i) the use of artificial intelligence (AI) concepts for an initial spectroscopic classification and optimisation of the library of simple stellar population (SSP) spectra, ii) the consideration of nebular emission in spectral fits, and iii) the consistency between the best-fitting SFH and the observed nebular emission characteristics of a star-forming galaxy. The main modules of FADO are the following: 1) pre-processing of spectral data, 2) spectral synthesis through genetic DEO, and 3) computation and storage of the model output. After the initial guess for the fitting strategy, which depends on the spectral classification and signal-to-noise of the input spectra, the SSP library is optimised through AI concepts. There are three fitting strategies: the Full-Consistency (FC) mode, in which the spectral modelling aims at consistency between the observed and predicted SED continuum and the Balmer emission-line luminosities and EWs (this being the default fitting mode of FADO and the one we have used); the nebular-continuum mode; and the stellar mode.
In the fitting module, FADO incorporates various quality checks, such as the auto-determination of the emission-line ratios prior to fitting, automatic clipping of spurious spectral features and examination of the supplied error spectrum. At the first stage, emission-line fluxes and EWs are measured and their quality is investigated. The quality control first involves a sequential check of various quantities and their errors inferred from DEO-based Gaussian line fitting and de-blending, such as the full width at half maximum (FWHM) and the difference between the central wavelengths of emission lines. This first step is devised to identify spurious spectral features (i.e. residuals from cosmic rays or imperfect sky correction, and noise peaks) as outliers and to reject them. FADO also checks whether various emission-line ratios fall within the range of theoretically expected values. This second module also deals with the decision-tree-based choices of fitting strategy and convergence schemes, with the computation of predicted Balmer-line luminosities and nebular continuum, and with the estimation of the uncertainties. The third FADO module deals with the final measurement of emission-line fluxes and EWs (the widths are not fixed to the same value for all lines, but determined individually) and with the computation of secondary evolutionary quantities, such as light- and mass-weighted stellar age and metallicity. Before the fitting, all the spaxels were corrected for Galactic foreground extinction with the IRAF noao.onedspec.deredden routine, using the empirical selective extinction function of Cardelli et al. (1989). For the fitting routine, we have used the library of SSP spectra from Bruzual & Charlot (2003).
The SSPs have ages between $1\times 10^{5}$ and $1\times 10^{10}$ yrs, a resolution of 3 $\AA$ across the wavelength range 3200 to 9500 $\AA$ and a wide range of metallicities (1/200, 1/50, 1/5, 1/2.5, 1 and 2.5 $Z_{\odot}$) for the Padova 1994 (Girardi et al., 1996) evolutionary tracks. FADO also allows the user to specify which extinction law is to be used, and we have chosen the Calzetti law extended to the FUV (Calzetti et al., 2001) for the fitting. It is worth mentioning that the fluxes offered by FADO are corrected for underlying stellar absorption. Figure 2 shows in the upper panel the integrated spectrum of the $90\times 90$ spaxel MUSE cube, centred on the M1931 BCG, in orange, revealing intense nebular emission. The best-fitting synthetic SED is shown in light-blue and is composed of stellar and nebular continuum emission - the dark green and red spectra, respectively. The 5 lower panels show the Gaussian fits to the strongest emission lines in the spectrum. These panels allow the user to inspect the quality of the kinematical fitting. The code is designed to fit $\rm{[O\textsc{ii}]}$ as a doublet, even if the lines are completely blended, as in our case. To overcome this problem, we simply added the fluxes offered by FADO for the two $\rm{[O\textsc{ii}]}$ emission lines. It is clear from Fig. 2 that the tool manages to properly fit the $\rm{H\alpha}$ and $\rm{[N\textsc{ii}]}$ emission lines, which are blended in the spectrum of the M1931 BCG. FADO rejects the $\rm{H\alpha}$ + $\rm{[N\textsc{ii}]}$ de-blending solution if the $\rm{[N\textsc{ii}]}\>6548/6584$ lines differ in their FWHM by more than an error-dependent tolerance bound defined by the error of the individual line fluxes, or if the redshift-corrected difference between the central wavelengths of the $\rm{H\alpha}$ and $\rm{[N\textsc{ii}]}$6548/6584 lines does not match the nominal value.
We recover a median value for the flux errors for $\rm{log(H\alpha/}\rm{[N\textsc{ii}]}\>\lambda 6584)$ of $\pm 0.029$. It is worth mentioning that the uncertainties in the correction for underlying absorption are included in the errors quoted by FADO for the flux measurements of the emission lines. Therefore, the flux measurements for the blended $\rm{H\alpha}$ + $\rm{[N\textsc{ii}]}$ lines can be considered to be robust. Additionally, we used the IFU data analysis pipeline Porto3D (Papaderos et al. 2013, Gomes et al. 2016) to double-check emission-line fluxes, EWs and the SFH, finding an overall good agreement with FADO. Whereas Porto3D uses the PSS code Starlight (Cid Fernandes et al., 2005) for fitting the stellar continuum, it shares with FADO essential aspects in the analysis of the residual nebular emission, the computation of spectral synthesis byproducts (e.g., mass- and light-weighted stellar age, luminosity fraction of stellar populations younger than various age cutoffs) and the storage and graphical output of the results. Additionally, it integrates a rectification technique that suppresses residuals between the observed and synthetic stellar SED, in this way improving the extraction and analysis of the nebular component in weak-line sources, such as early-type galaxies (Gomes et al., 2016) and galaxy bulges (Breda & Papaderos, 2018). As a final consistency check, we have used MPDAF (Bacon et al., 2016) for the measurements of the emission line fluxes. We developed a PYTHON code that performs, for each spaxel, simultaneous Gaussian fitting of the emission lines of interest, after subtracting the stellar continuum. The fit to each emission line is automatically weighted by the variance of the spectrum. The free parameters of the code are the peak positions of the Gaussians, their standard deviations and their amplitudes. We find very good agreement between the flux measurements offered by the three different tools, within the errors.
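A Gaussian fit of this kind needs starting values for the peak position, width and amplitude of each line. One common trick (a hedged sketch, not necessarily the initialisation used in our MPDAF-based code) is to seed the optimiser with flux-weighted moments of the continuum-subtracted spectrum:

```python
def gaussian_moments(wave, flux):
    """Estimate (amplitude, centre, sigma) of an isolated emission line from
    the flux-weighted moments of a continuum-subtracted spectral window.
    `wave` is in Å and `flux` is assumed continuum-free and non-negative."""
    total = sum(flux)
    # First moment: flux-weighted mean wavelength -> line centre.
    centre = sum(w * f for w, f in zip(wave, flux)) / total
    # Second central moment -> Gaussian sigma.
    var = sum(f * (w - centre) ** 2 for w, f in zip(wave, flux)) / total
    sigma = var ** 0.5
    # Peak flux as the amplitude estimate.
    amplitude = max(flux)
    return amplitude, centre, sigma
```

These moment estimates are then refined by the actual (variance-weighted) least-squares fit.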
Nevertheless, we chose to use the FADO measurements to study the properties of the ionised gas, because this tool aims at self-consistency between the observed and predicted SED continuum and Balmer emission-line luminosities and EWs. For our analysis, we consider only the spaxels which have a $\rm{SNR}>10$ in the emission line of interest, as well as a $\rm{SNR_{emission\>line}>SNR_{20\>\AA\>blue\>continuum\>window}}$ and $\rm{SNR_{emission\>line}>SNR_{20\>\AA\>red\>continuum\>window}}$. All the emission lines which have a $\rm{SNR}>10$ also have EWs$>10\>\AA$, giving us confidence that there is a true line detection in the spectrum.

### 4.2 Determination of the ionising sources

The ionisation sources in the BCG of M1931 were investigated by means of different diagnostic diagrams. Using a set of four strong emission lines, one can reliably distinguish between SF galaxies, Seyfert II galaxies, LINERs and composite galaxies, where gas excitation is powered both by SF regions and an AGN. We have used three different diagnostic diagrams on our data set: the classical BPT diagram (Baldwin et al., 1981) as well as the $\rm{[O\textsc{iii}]/H\beta}$ vs $\rm{[S\textsc{ii}]/H\alpha}$ and the $\rm{[O\textsc{iii}]/H\beta}$ vs $\rm{[O\textsc{i}]/H\alpha}$ diagnostic diagrams of Veilleux & Osterbrock (1987). We have also tested predictions from different fully radiative shock models, calculated with the shock and photoionisation code MAPPINGS V from the Mexican Million Models database of Alarie et al. (2019), on the data set, to see whether large scale shocks are responsible for ionising the gas. The database contains models based on previous projects, such as a replica of the Allen et al. (2008) grids including incomplete shock models, an extension of the Allen et al. (2008) grids computed for low metallicities using the abundances of Gutkin et al. (2016), an extension of the Allen et al.
(2008) grids computed for different shock ages and grids for low shock velocities by Delgado-Inglada et al. (2014). All the different grids were tested on M1931 using different metallicities, pre-shock densities, and shock velocities. To more thoroughly investigate the ionising sources in the BCG, we have also used the spectral decomposition method of Davies et al. (2017). The authors have introduced a method to isolate the contributions of star formation, AGN activity and LINER emission (in their case LINER emission is associated with shock excitation, but LINER-like emission can arise from a manifold of mechanisms) to the emission line luminosities of individual spatially resolved regions in galaxies. The method works as follows: from the distribution of the spaxels in the diagnostic diagrams, one should select three ‘basis spectra’, one representative of pure star formation, one for AGN emission, and one for LINER emission. Then, one assumes that the observed luminosity L of any emission line i in the spectrum of any spaxel j from the IFU data cube can be expressed as a linear superposition of the line luminosities of the SF region basis spectrum, the LINER basis spectrum and the AGN basis spectrum, through the following equation: $\rm{L_{i}(j)=m(j)\cdot L_{i}(SF)+n(j)\cdot L_{i}(AGN)+k(j)\cdot L_{i}(LINER)}$ (1) For each spaxel of the M1931 data cube centred on the BCG, we calculate the superposition coefficients m(j), n(j) and k(j) by performing least-squares minimization on equation 1 applied to the extinction-corrected fluxes of the four strongest emission lines observed in our spectra, namely $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{H\alpha}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$ and $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$. Then, we use the computed superposition coefficients to calculate the luminosities of the emission lines of interest associated with star formation, LINER-like excitation and AGN activity for each spaxel of the data cube.
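Per spaxel, equation 1 over four emission lines is an overdetermined $4\times 3$ linear system. A minimal sketch of the least-squares solution via the normal equations (an illustration of the calculation, not the code used for the paper; the basis luminosities in the test are made up):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[r][3] / M[r][r] for r in range(3)]

def decompose(L_obs, L_sf, L_agn, L_liner):
    """Least-squares coefficients (m, n, k) of Eq. 1, i.e.
    L_obs[i] ≈ m*L_sf[i] + n*L_agn[i] + k*L_liner[i] over the line list."""
    basis = [L_sf, L_agn, L_liner]
    n_lines = len(L_obs)
    # Normal equations: (B^T B) x = B^T L_obs.
    A = [[sum(basis[p][i] * basis[q][i] for i in range(n_lines))
          for q in range(3)] for p in range(3)]
    b = [sum(basis[p][i] * L_obs[i] for i in range(n_lines)) for p in range(3)]
    return solve3(A, b)
```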
The observed luminosities seem to be well reproduced by linear superpositions of the line luminosities extracted from the three basis spectra. However, the selection of the 3 basis spectra characteristic of SF, AGN and LINER emission is quite subjective, and the computed model luminosities are highly dependent on the choice of these spectra. Additionally, these basis spectra, although clearly classifiable as SF, AGN or LINER, may contain contributions by other gas excitation mechanisms. Therefore, spectral decomposition into these three basis elements may be regarded as a first approximation. For this reason, we have re-calculated the superposition parameters m(j), n(j) and k(j) and the model luminosities by performing a second iteration. After the first iteration, we calculated the luminosity of the $\rm{[O\textsc{iii}]}$, $\rm{H\alpha}$, $\rm{[N\textsc{ii}]}$ and $\rm{[S\textsc{ii}]}$ emission lines associated purely with SF for each spaxel of the data-cube. This was done by subtracting from the observed luminosity of each of the aforementioned four emission lines the contribution from AGN and LINER emission. The median values for the luminosities of the $\rm{[O\textsc{iii}]}$, $\rm{H\alpha}$, $\rm{[N\textsc{ii}]}$ and $\rm{[S\textsc{ii}]}$ emission lines of all spaxels were then used as the new luminosities for the star formation basis spectrum. The emission line luminosities for the AGN and LINER basis spectra were the same ones as in the first iteration. Then, we computed the new values for m(j), n(j) and k(j) by conducting least-squares minimization on equation 1 with the new values for $\rm{L_{i}(SF)}$. The recovered model luminosities are in very good agreement with the observed ones.

### 4.3 Determination of the electron density and temperature with PyNeb

The electron temperature and density were computed by means of the PyNeb tool (Luridiana et al., 2015), a PYTHON package for analysing emission lines.
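PyNeb determines $T_{e}$ and $n_{e}$ by cross-converging a temperature-sensitive and a density-sensitive line ratio. The alternating iteration can be sketched as a fixed-point scheme; the two diagnostic functions below are toy, user-supplied stand-ins, since PyNeb's getCrossTemDen interpolates in real atomic-data emissivity grids:

```python
def cross_tem_den(tem_from_ratio, den_from_ratio, r_tem, r_den,
                  tem0=10000.0, den0=100.0, tol=1e-3, max_iter=100):
    """Alternate between a temperature diagnostic (density-dependent) and a
    density diagnostic (temperature-dependent) until both converge, in the
    spirit of PyNeb's getCrossTemDen. The diagnostic functions here are
    illustrative; the real ones come from n-level-atom emissivities."""
    tem, den = tem0, den0
    for _ in range(max_iter):
        tem_new = tem_from_ratio(r_tem, den)   # T_e given the current n_e
        den_new = den_from_ratio(r_den, tem_new)  # n_e given the updated T_e
        if abs(tem_new - tem) / tem < tol and abs(den_new - den) / den < tol:
            return tem_new, den_new
        tem, den = tem_new, den_new
    return tem, den
```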
PyNeb’s main functionality is to solve for line emissivities and to determine the electron temperature and density given observed diagnostic line ratios. The tool works by solving the equilibrium equations for an n-level atom for collisionally excited lines; in the case of recombination lines, it works by interpolation in emissivity tables. Our main interest was the getCrossTemDen function of the tool, a routine which cross-converges the temperature and density derived from two sensitive line ratios, by inputting the quantity derived with one line ratio into the other and then iterating. When the iteration process ends, the two diagnostics are expected to give self-consistent results. The first line ratio provided by the user must be a temperature-sensitive one and the second a density-sensitive one. A large number of published diagnostic ratios are stored in PyNeb by default, and we have tested four of them: ($\rm{[O\textsc{iii}]}4363/5007$, $\rm{[S\textsc{ii}]}6731/6716$), ($\rm{[N\textsc{ii}]}5755/6548$, $\rm{[S\textsc{ii}]}6731/6716$), ($\rm{[N\textsc{ii}]}5755/6584$, $\rm{[S\textsc{ii}]}6731/6716$) and ($\rm{[N\textsc{ii}]}5755/6584$, $\rm{[Ar\textsc{iv}]}4740/4711$). However, it is worth mentioning that we measure very weak $\rm{[O\textsc{iii}]}\>4363$, $\rm{[Ar\textsc{iv}]}\>4740$ and $\rm{[Ar\textsc{iv}]}\>4711$ emission for the M1931 BCG, making these diagnostics less robust.

### 4.4 Determination of the star formation rate and colour excess E(B-V)

The SFR was computed based on the extinction-corrected luminosity of the $\rm{H\alpha}$ line, both for the integrated spectrum and for each spaxel of the MUSE cube subtending the BCG. The $\rm{H\alpha}$ emission line is one of the most reliable SFR indicators, as this nebular emission arises directly from the recombination of HII gas ionised by the most massive O- and early B-type stars and, therefore, traces the star formation over the lifetimes of these stars.
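The chain from observed fluxes to SFR can be sketched as follows. The Calzetti-curve coefficients and the case-B intrinsic Balmer ratio of 2.86 below are assumptions for the example (the paper itself follows the relations of Brocklehurst 1971); the Kennicutt (1998b) conversion factor is the one used in the text.

```python
import math

# Assumed Calzetti-type attenuation-curve values at Hβ and Hα (illustrative).
K_HBETA, K_HALPHA = 3.61, 2.53
BALMER_INTRINSIC = 2.86  # assumed case-B Hα/Hβ for typical nebular conditions

def ebv_from_balmer(f_halpha, f_hbeta):
    """Colour excess E(B-V) from the observed Balmer decrement."""
    return 2.5 / (K_HBETA - K_HALPHA) * math.log10(
        (f_halpha / f_hbeta) / BALMER_INTRINSIC)

def correct_flux(f_obs, ebv, k=K_HALPHA):
    """De-redden an observed flux with the adopted attenuation curve."""
    return f_obs * 10 ** (0.4 * k * ebv)

def sfr_halpha(L_halpha_corr):
    """Kennicutt (1998b) conversion (Eq. 2): SFR in Msun/yr from the
    extinction-corrected Hα luminosity in erg/s."""
    return 7.9e-42 * L_halpha_corr
```

The SFR surface density then follows by dividing by the physical area of each spaxel (0.98 kpc on a side at z = 0.352).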
We make the simplifying assumption that the SFR is nearly constant over the past $\sim 100$ Myr and that case B recombination applies; therefore, the $\rm{H\alpha}$ luminosity can be used for estimating the SFR following the conversion proposed by Kennicutt (1998b) for solar metallicity and a Salpeter IMF: $\rm{SFR\>(M_{\odot}\,yr^{-1})=7.9\times 10^{-42}\>L(H\alpha)\>(erg\,s^{-1})}$ (2) However, the intensities of emission lines arising from gas nebulae are affected by selectively absorbing material along the line of sight to the observer. Therefore, the luminosity of the $\rm{H\alpha}$ emission line was corrected for extinction based on the Balmer decrement, following the equations introduced by Brocklehurst et al. (1971). The colour excess was also calculated based on the Balmer decrement, following the same set of equations. We also computed the SFR surface density as $\rm{\Sigma SFR=SFR/area}$. For the adopted cosmology, at z=0.352 the observed scale is 4.9 kpc/arcsec. As each spatial pixel is 0.2 arcsec, this translates to each pixel having a side of 0.98 kpc.

### 4.5 Determination of the oxygen abundance and ionisation parameter with HII-CH-mistry

The chemical abundances of galaxies are an important tool to study galaxy evolution, as they reflect the complex interplay between star formation, gas outflows through winds and supernovae, and galactic gas inflows. The gas phase oxygen abundances for the M1931 BCG were computed by applying the direct $\rm{T_{e}}$ based methods described by Pérez-Montero (2017) (equations 38, 40 and 41), which use, in addition to the emission line flux ratios, the temperature and density of the ionised gas. As a consistency check, we have also used the HII-CH-mistry pipeline (Pérez-Montero, 2014) for the computation of the gas phase metallicity. Based on the diagnostic diagrams (see sect.
5.5), we have seen that the M1931 BCG does not have a strong optical AGN, and therefore, for the computation of (O/H)s, we have used HII region grids calculated with Cloudy v.17 (Ferland, 2013) using the POPSTAR synthesis evolutionary models for an instantaneous burst with an age of 1 Myr and a Chabrier initial mass function (Chabrier, 2003). The grids of photoionization models cover a wide range of input conditions of (O/H) and (N/O) abundances and ionisation parameter. The values offered by the code for the (O/H) of each spaxel are in very good agreement (within the errors) with the ones obtained by us employing the direct $\rm{T_{e}}$ methods, giving us confidence that the measurements are robust. The ionisation parameter was also computed using the HII-CH-mistry tool of Pérez-Montero (2014).

### 4.6 Determination of the stellar kinematics with GIST

The stellar kinematics were recovered using the Galaxy IFU Spectroscopy Tool (GIST; Bittner et al., 2019), a pipeline written in PYTHON, which extracts stellar kinematics, performs emission-line analysis and derives stellar population properties from full spectral fitting by exploiting the well-known penalized PiXel-Fitting (pPXF; Cappellari & Emsellem, 2004) and Gas and Absorption Line Fitting (GandALF; Sarzi et al., 2006) routines. This pipeline also implements the Voronoi binning (Cappellari & Copin, 2003) routine. We have used this tool only to recover the stellar kinematics via pPXF. However, it is worth mentioning that the fluxes recovered from GandALF are in good agreement with the ones recovered from FADO, Porto3D and MPDAF. To spatially resolve the BCG stellar kinematics, we probe each spaxel of the MUSE data-cube centred on the system where the SNR of the stellar continuum is higher than 10. The SNR is computed in a 1000 $\AA$ window between 5250 $\AA$ and 6250 $\AA$, a region of the spectrum free of strong emission lines.
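The continuum SNR in such a line-free window can be estimated even without an error spectrum; one widely used recipe is the DER_SNR algorithm, sketched below. This is our choice of estimator for illustration, not necessarily the one implemented in GIST.

```python
def der_snr(flux):
    """DER_SNR estimate of the continuum signal-to-noise ratio: the signal is
    the median flux, the noise the scaled median absolute second-order
    difference, which is robust to lines and outliers."""
    n = len(flux)
    signal = sorted(flux)[n // 2]  # median flux in the window
    # |2 f_i - f_{i-2} - f_{i+2}| has standard deviation sqrt(6)*sigma for
    # uncorrelated Gaussian noise; 0.6052697 converts its median to sigma.
    diffs = [abs(2.0 * flux[i] - flux[i - 2] - flux[i + 2])
             for i in range(2, n - 2)]
    noise = 0.6052697 * sorted(diffs)[len(diffs) // 2]
    return signal / noise
```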
The SNR of the stellar continuum in individual spaxels is too low, and therefore we have applied the Voronoi tessellation technique for spatial binning. The MUSE cube was tessellated to achieve a SNR of 50 (per bin) in the emission line-free stellar continuum. Therefore, we can measure the velocity fields only in the BCG core, corresponding to a $2\times 2$ arcsec region surrounding the supermassive black hole (SMBH). We proceed with a pPXF fit implementing the high resolution, UV-extended ELODIE models of Maraston & Strömbäck (2011). These SSP models are based on the template stellar library ELODIE (Prugniel et al., 2007) merged with the theoretical spectral library UVBLUE (Rodríguez-Merino et al., 2005). The SSPs have a Salpeter initial mass function (Salpeter, 1955), a metallicity of $Z=0.02\>Z_{\odot}$ and ages ranging from 3 Myr to 15 Gyr. The resolution is 0.55 $\AA$ (FWHM) and the spectral sampling is 0.2 $\AA$, covering the wavelength range 1000 - 6800 $\AA$. In order to optimise the pPXF absorption-line fits, all the regions containing strong emission lines in all the spectra were masked, along with the telluric sky-lines at 5577 $\AA$, 5889 $\AA$, 6157 $\AA$, 6300 $\AA$ and 6363 $\AA$. During the pPXF fitting routine, the Hermite coefficients were kept fixed.

## 5 Results

### 5.1 Ionised gas flux maps

Figure 3 shows on the left hand side the spatially resolved map of the $\rm{H\alpha}$ emission in the M1931 BCG in units of $10^{-17}\rm{ergs\cdot s^{-1}\cdot cm^{-2}}$ for spaxels with a $\rm{SNR_{H\alpha}>10}$. The plot on the right hand side displays the $\rm{H\alpha}$ EW map of the BCG in units of $\AA$, for which we have applied the same SNR criterion. The spatial scale in (all) the plots corresponds to 90 by 90 kpc. The galaxy's intensity peak is coincident with the location of the AGN (as derived according to the different diagnostic diagrams, see sect. 5.5), and it shows an elongated $\rm{H\alpha}$ tail extending $\sim$ 30 kpc in the N-E direction.
The EW peak does not spatially coincide with the emission line brightness peak, but shows more enhanced values in the $\rm{H\alpha}$ arm. We observe a similar distribution of fluxes and EWs for $\rm{[O\textsc{ii}]}\>\lambda 3727$, $\rm{H\beta}$, $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{[N\textsc{ii}]}\>\lambda 6584$ and $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$, see Fig. 18. The similarity between the flux maps of the different emission lines suggests that both recombination and forbidden lines probably originate from the same clouds.

### 5.2 Comparison between ionised and molecular gas fluxes

We have created ratio maps between the line intensities of the ionised and molecular gas (see Fig. 2 from Fogarty et al. 2019), after normalisation or rescaling by some factor. Figure 4 displays in the left-hand panel the ratio between the $\rm{H\alpha}$ and CO(1-0) flux and in the right-hand panel the ratio between the $\rm{H\alpha}$ and CO(3-2) flux. The $\rm{H\alpha}$ to CO flux ratios are close to unity and more or less constant all along the nebula, and the peak of the CO flux intensity is located at the same position as the peak of the ionised gas flux intensity. There are some regions that show more enhanced flux for $\rm{H\alpha}$ than CO, but there is an overall close similarity between the line intensities of the ionised and molecular gas. The molecular gas is not as extended as the warm ionised gas, but this is likely due to the sensitivity limit or to the maximum recoverable scale of ALMA. The molecular filaments are, thus, co-spatial in projection with the warm ionised gas, similar to what has been found in other cool core BCGs (Olivares et al. 2019, Tremblay et al. 2018, Vantyghem et al. 2016).

### 5.3 Ionised gas kinematics

The gas kinematics were recovered from the Gaussian fits to the emission lines.
We probe each spaxel where the $\rm{SNR}>10$ for the emission lines, and compute both the radial velocity and velocity dispersion of the gas from the fits offered by FADO and MPDAF. We observe very good agreement between the results offered by the two different tools. It is worth mentioning that the kinematics of the $\rm{[O\textsc{ii}]}$ gas were recovered just from the fits offered by MPDAF, making the derived kinematic parameters in this case less robust than for the other emission lines. Figure 5 displays in the left panel the $\rm{H\alpha}$ radial velocity in km/s, as recovered from FADO. The velocity map is normalised to the BCG's rest frame, i.e. to the velocity obtained from the redshift of the central spaxel ($\rm{z_{center}}=0.3526$), whose location is coincident with that of the SMBH. We observe a clear gradient from negative ($\sim-300\>\rm{km\,s^{-1}}$) to positive velocities ($\sim 300\>\rm{km\,s^{-1}}$) relative to the BCG's systemic velocity. We recover a median value for the error of the velocity of $\pm 30\>\rm{km\,s^{-1}}$. The core of the BCG shows mainly negative gas motions, while the gas in the $\rm{H\alpha}$ tail shows positive velocities. Such velocity profiles can be indicative of rotation, but can also arise from coherently in- or outflowing material with an inclination to the plane of the sky. However, the warm ionised nebula in the innermost 10 kpc of the galaxy is not in dynamical equilibrium, as there are no obvious signs of rotation in the core of the BCG. It is quite complicated to firmly establish whether the gas in the tail is inflowing or outflowing from the BCG core. The $\rm{H\alpha}$ tail does not seem to have the bi-modal symmetry characteristic of jets, and it also does not lie along the axis connecting the cavities observed in the Chandra data by Ehlert et al. (2011).
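Normalising to the rest frame of the central spaxel amounts to the standard conversion from a per-spaxel line redshift to a line-of-sight velocity; a minimal sketch:

```python
C_KMS = 299792.458  # speed of light in km/s
Z_CENTER = 0.3526   # redshift of the central spaxel (systemic, from the text)

def radial_velocity(z_spaxel, z_sys=Z_CENTER):
    """Line-of-sight velocity of a spaxel relative to the BCG rest frame,
    using the first-order relation v = c (z - z_sys) / (1 + z_sys)."""
    return C_KMS * (z_spaxel - z_sys) / (1.0 + z_sys)
```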
Therefore, the most plausible explanation would be that the redshifted stream of gas in the $\rm{H\alpha}$ tail is radially in-falling towards the centre. Such motions of the gas have been observed in many BCGs (e.g. Hamer et al. 2016, Olivares et al. 2019). The plot on the right hand side shows the spatially resolved $\rm{H\alpha}$ velocity dispersion map. The measured velocity dispersions in all spaxels were corrected for instrumental broadening. The extended gas has a consistently low velocity dispersion of the order of $\sim 150-250\>\rm{km\,s^{-1}}$, with a median value for the random error of the velocity dispersion of $\sim\pm 4\>\rm{km\,s^{-1}}$. The ionised gas shows additional peaks in the line-width, suggesting the gas is more kinematically disturbed in these regions. Dispersions are lowest near the core and increase towards the northern and southern peripheries in the innermost regions, with the largest dispersions being coincident with the $\rm{H\alpha}$ EW peak. The rim with an enhanced dispersion along the NE-SW axis lies roughly along the line where the gas velocity changes from negative to positive values. The western side of the $\rm{H\alpha}$ tail shows the lowest velocity dispersion. We measure similar radial velocities and velocity dispersions for all other strong emission lines, see Fig. 19.

### 5.4 Comparison between ionised and molecular gas kinematics

We then proceed to compare the recovered kinematics of the warm ionised gas to those of the cold, molecular gas (Fig. 3 from Fogarty et al. 2019). Figure 6 shows the difference between the ionised and molecular gas kinematics. The panels on the left-hand side display the difference between the systemic velocity of the $\rm{H\alpha}$ gas and the velocity of the CO(1-0) gas (top panel) and CO(3-2) gas (bottom panel).
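The instrumental-broadening correction applied to the measured dispersions is a subtraction in quadrature; a minimal sketch (the instrumental $\sigma$ itself depends on the MUSE line-spread function at the observed wavelength and is not hard-coded here):

```python
import math

def intrinsic_dispersion(sigma_obs, sigma_inst):
    """Remove instrumental broadening in quadrature:
    sigma_intr = sqrt(sigma_obs^2 - sigma_inst^2).
    Returns 0 when the observed width is below the instrumental resolution."""
    if sigma_obs <= sigma_inst:
        return 0.0
    return math.sqrt(sigma_obs ** 2 - sigma_inst ** 2)
```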
The velocity difference maps are predominantly filled with velocity offsets below $\pm 100\>\rm{km\,s^{-1}}$, especially in the core of the BCG, where the velocity differences are of the order of $<\pm 50\>\rm{km\,s^{-1}}$, indicating that the two gas phases are likely co-moving. The $\sim\pm 100\>\rm{km\,s^{-1}}$ velocity differences might be explained by the different spatial resolutions of the MUSE and ALMA data. The two panels on the right-hand side of Fig. 6 show the ratio maps between the velocity dispersion of the $\rm{H\alpha}$ and the CO(1-0) gas on the top side and between the velocity dispersion of the $\rm{H\alpha}$ and the CO(3-2) gas on the bottom side. The maps show almost no structure and are close to unity along the nebula. We observe, thus, very good agreement between the dispersions of the warm and cold gas phases, with a median value for the ratio of $\sim 1.2$. It is worth noting that some structures with more enhanced velocity dispersions for $\rm{H\alpha}$ are visible in these maps (ratio $\sim 3$), hinting that the ionised gas is more kinematically disturbed in these regions than the molecular gas. The velocity dispersion ratio map shows that, on average, the $\rm{H\alpha}$ velocity dispersion is a factor of 1-2 broader than that of CO(3-2) and CO(1-0). Gaspari et al. (2018) have shown that the warm ionised gas is likely to be more turbulent than the cold molecular gas. On the other hand, Tremblay et al. (2018) suggested that lines-of-sight are more likely to intersect larger volumes of warm gas than of cold gas, which leads to a broader velocity distribution in the ionised gas than in the molecular component. These two scenarios are, however, hard to distinguish.
To conclude, the comparison between the MUSE and ALMA data reveals evidence that the warm ionised and cold molecular nebulae are to some extent co-spatial and co-moving, consistent with the hypothesis that the optical nebula traces the warm envelopes of molecular clouds. The warm and cold gas are "mixed" in the sense that pockets of cold (and warm) neutral gas are immersed within the warm ionised gas.

### 5.5 Ionisation sources

#### 5.5.1 Diagnostic diagrams

Figure 7 shows on the top left hand side the classical BPT diagram (Baldwin et al., 1981), which uses the ratios of $\rm{[O\textsc{iii}]/H\beta}$ vs $\rm{[N\textsc{ii}]/H\alpha}$. The blue solid curve represents the theoretical curve of Kewley et al. (2001a) and the green dashed one the empirical curve of Kauffmann et al. (2003), which separate SF galaxies from AGNs. The orange solid curve of Schawinski et al. (2007) depicts the separation line between Seyfert II galaxies and LINERs. Our results depict a system with mainly composite emission in the BPT, with contributions from both active star formation and an AGN, which is typical for cool-core BCGs (e.g. Loubser et al. 2013, Tremblay et al. 2018). The plots from the top middle and top right depict the BPT distribution on the sky, showing the Seyfert II emission in blue and the LINER emission in green, respectively. We can identify the location of the SMBH (shown as a cross in all plots) according to the spaxels which fall in the Seyfert II region. It is also clear from these plots that the LINER emission cannot be associated with the AGN, as this emission is mainly observed in the outskirts of the system. Figure 7 shows in the middle left panel the $\rm{[O\textsc{iii}]/H\beta}$ vs $\rm{[S\textsc{ii}]/H\alpha}$ and in the lower left panel the $\rm{[O\textsc{iii}]/H\beta}$ vs $\rm{[O\textsc{i}]/H\alpha}$ diagnostic diagram of Veilleux & Osterbrock (1987). The blue curves in both panels represent the separation curve of Kewley et al.
(2001a), which divides SF galaxies from AGNs & LINERs. The orange solid curve in both plots depicts the separation curve of Schawinski et al. (2007), which differentiates between Seyfert II galaxies and LINERs. The middle panels of both the middle and lower rows show the $\rm{[S\textsc{ii}]}$ and $\rm{[O\textsc{i}]}$-BPT distribution on the sky, displaying the Seyfert II emission in blue, while the right-hand panels of both rows display the LINER emission in green. The $\rm{[O\textsc{iii}]/H\beta}$ vs $\rm{[S\textsc{ii}]/H\alpha}$ diagnostic depicts a system with both HII and LINER emission, while in the $\rm{[O\textsc{iii}]/H\beta}$ vs $\rm{[O\textsc{i}]/H\alpha}$ plot, which is a sensitive diagnostic for shocks, the majority of spaxels fall in the LINER region. We also observe considerable variation in the emission line ratio maps used for the computation of the diagnostic diagrams, suggesting that the source of the excitation is not localised in a specific region. To conclude, the three diagnostic diagrams reveal mainly composite-LINER emission for the M1931 system, hinting that there are several mechanisms which ionise the gas. Based on this analysis, we cannot draw any definitive conclusions, as the situation for cool-core BCGs is highly complex and likely represents a superposition of several different ionisation sources (e.g. Tremblay et al. 2018, McDonald et al. 2012a).

#### 5.5.2 Fully radiative shock models

Fig. 8 shows the fully radiative shock model grids, over-plotted on the 3 diagnostic diagrams described above. Each black data point represents a spaxel of the MUSE cube with a $\rm{SNR}>10$ in each emission line used for the diagnostic. As can be seen from this plot, the different shock models partially reproduce the measured emission line ratios, mainly for spaxels which fall in the Seyfert II region of the diagnostic diagrams.
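The demarcation curves used in the diagnostic diagrams of Sect. 5.5.1 have simple analytic forms; for the [NII]-based BPT diagram, the per-spaxel classification can be sketched with the standard published curve parameters (a sketch of the classification logic, not the plotting code used for Fig. 7):

```python
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a spaxel on the [NII]-based BPT diagram using the standard
    demarcation curves of Kewley et al. (2001a), Kauffmann et al. (2003)
    and Schawinski et al. (2007)."""
    x, y = log_nii_ha, log_oiii_hb
    # Below the empirical Kauffmann curve: pure star formation.
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.30:
        return "star-forming"
    # Between the Kauffmann and the theoretical Kewley maximum-starburst curve.
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "composite"
    # Above the Kewley curve: split Seyfert II from LINER (Schawinski line).
    return "seyfert" if y > 1.05 * x + 0.45 else "liner"
```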
The only grids which partially seem to fit the data in all three diagnostic diagrams are the ones with a Solar (red lines) and twice Solar (blue lines) metallicity. We observe similar behaviour for all the other shock model grids described in section 4.2 as well. We have also accounted for the contribution of star formation, AGN and LINER-like emission to the luminosity of each emission line in each spaxel of the data cube, allowing us to derive the luminosity of the $\rm{[O\textsc{iii}]}$, $\rm{[N\textsc{ii}]}$, $\rm{[S\textsc{ii}]}$, $\rm{[O\textsc{i}]}$, $\rm{H\alpha}$, and $\rm{H\beta}$ gas associated purely with LINER-like emission. LINER-like emission can arise due to large scale shocks, and if indeed shocks are responsible for ionising the gas in the M1931 BCG, then the fully-radiative shock models should reproduce the pure LINER fluxes of the emission lines in the three diagnostic diagrams. In Fig. 8, the grey data points represent the spaxels of the MUSE cube whose luminosities are associated with pure LINER-like emission. Even after removing the contribution from star formation and AGN emission from the luminosity of the emission lines, the different shock grids still do not fully reproduce this modelled emission. Therefore, we conclude that lower-velocity shocks seem to play only a minor role as an ionising mechanism in the system. The weakness of the $\rm{[O\textsc{iii}]}\>\lambda 4363$ emission also suggests that shocks are not a major ionising mechanism in the M1931 BCG (Voit & Donahue, 1997). This is in accordance with the findings of Donahue et al. (2000), who demonstrated, based on HST imaging of cool-core BCGs, that the radio sources in cool-core galaxy clusters are not injecting significant amounts of energy into the emission-line gas via strong shocks, and therefore shocks cannot be a significant ionising source, nor a source of heating to counterbalance the cooling.
Regarding the AGN emission, we see from the diagnostic diagrams that M1931 does not have a strong optical AGN, as just a few spaxels fall in the Seyfert II region. The weakness of $\rm{He\textsc{ii}}\>\lambda\>4686\>\AA$ also rules out such a hard ionising source. Therefore, AGN ionisation seems to influence only the most central regions of the BCG. This is in accordance with the findings of Fogarty et al. (2017), who analysed the impact of AGN emission on the SED fit from the UV through the far-IR for M1931, and concluded that the effect of AGN emission in M1931 is marginal. Hence, the dominant source of ionisation in the M1931 BCG is a mix between star formation and other energetic processes which can “mimic” LINER emission. Several hypotheses have been proposed for the source of this extended LINER-like emission. Stellar populations may be responsible for this emission, which could arise due to photoionisation of the gas by young starbursts, or by old pAGB stars (e.g. Shields 1992, Olsson et al. 2010, Loubser et al. 2013, Binette et al. 1994, Stasinska et al. 2008). For example, Byler et al. (2019) have studied the predictions of LINER-like emission from pAGB stars, based on fully self-consistent stellar models and photoionization modelling, and have demonstrated that post-AGB models indeed produce line ratios in the LINER region of the $\rm{[S\textsc{ii}]/H\alpha}$ and $\rm{[O\textsc{i}]/H\alpha}$ diagrams, and in the “composite” region of the standard BPT diagram. This is exactly what we observe in the diagnostic diagrams for the M1931 BCG. However, these post-AGB star models produce $\rm{H\alpha}$ EWs between 0.1 and 2.5 $\AA$, while we observe far higher $\rm{H\alpha}$ EWs for M1931 ($\rm{EW_{H\alpha}}>50\>\AA$ throughout the whole nebula, see the right-hand panel of Fig. 3). Therefore, we can rule out post-AGB stars as the dominant source of LINER emission in the M1931 BCG.
LINER-like emission may also be characteristic of starburst-driven outflows (Sharp & Bland-Hawthorn, 2010), Lyman continuum photon escape through tenuous warm gas (Papaderos et al., 2013), Diffuse Ionized Gas (DIG) in galactic disks (Zhang et al., 2016), or gas heated by the surrounding medium. The latter mechanism includes photoionisation by cosmic rays, collisional heating by cosmic rays, conduction from the hot gas, suprathermal electron heating from the hot gas, X-ray photoionisation, and turbulent mixing layers (e.g. Donahue et al. 1991, Begelman et al. 1991, Donahue et al. 2011, McDonald et al. 2012a, Sparks et al. 2012). Fogarty et al. (2019) also studied the CO excitation mechanisms in the M1931 BCG, and their analysis of the CO spectral line energy distribution reveals evidence for multiple gas excitation mechanisms in the system, besides star formation. The CO(3-2) transition is highly excited, similar to quasi-stellar objects and ultra-luminous infrared galaxies, but the CO(4-3) transition is similar to that observed in normal, star-forming galaxies. Therefore, there must be a mechanism unrelated to the starburst which acts as an additional excitation mechanism. The authors conclude that the molecular gas excitation in the BCG is driven by a mix of processes such as SF, radiation from the SMBH and interaction between the cold gas and the ICM, in accordance with our findings.

#### 5.5.3 Spectral decomposition method

To test the validity of the spectral decomposition method of Davies et al. (2017), one should check whether the fraction of emission attributed to each ionisation mechanism varies smoothly as a function of the diagnostic line ratios, peaking at the line ratios of the relevant basis spectrum and decreasing as the line ratios approach those of the other basis spectra. Figure 9 demonstrates that this is exactly what we observe.
The upper three panels depict the BPT diagram while the lower three show the diagnostic diagram of Veilleux & Osterbrock (1987), with each data point colour-coded according to the fractional contribution of SF (left), AGN (middle) and LINER (right) to the total $\rm{H\alpha}$ emission. Figure 10 shows the emission line maps for $\rm{H\alpha}$ on the upper row, $\rm{[O\textsc{iii}]}$ on the second row, $\rm{[N\textsc{ii}]}$ on the third row and $\rm{[S\textsc{ii}]}$ on the fourth row, colour-coded according to the fractional contribution of SF (left), AGN (middle) and LINER-like emission (right) to the total line luminosity. In the maps showing the fractional contribution of SF to the emission line luminosities (first column in Fig. 10), some “structures” become visible, such as the clumps in the vicinity of the AGN (above and to the N-W) as well as some clumps in the tail, which are most probably young SF regions. In all the emission line maps, these regions show an enhanced contribution from SF to the total line luminosity, making young stars probably the main ionising mechanism there. These “structures” are coincident with the compact knots seen in the naturally weighted CO(1-0) intensity image, as well as with the UV knots present in the HST photometry. In the BCG core, at the location coincident with that of the SMBH, the fractional contribution of SF to the total line luminosity is the smallest. In all the maps showing the AGN contribution to the total line luminosity (second column in Fig. 10) we can observe an enhanced contribution of AGN emission to the total luminosity of the four emission lines in the most central regions of the system, corresponding to the location of the SMBH. In the maps showing the fractional contribution of LINER-like emission to the total line luminosity (third column in Fig. 10), we again observe a low fraction in the central region of the system corresponding to the location of the AGN.
Low LINER-like contribution can be found in the star-forming clumps as well. A possible caveat regarding the spectral decomposition method is that the relative luminosities of the emission lines are primarily determined by the relative contributions of the different ionisation mechanisms to the line emission, but they are also sensitive to the metallicity and ionisation parameter of the gas. From the spectral decomposition method, we conclude that the main source of ionisation in the M1931 BCG is a mix of SF and other energetic processes which give rise to LINER-like emission, while AGN ionisation is dominant only in the BCG core. Star formation accounts for $\sim 50-60\%$ of the ionised gas emission, whereas AGN emission accounts for only about $\sim 10\%$. The remaining $\sim 30-40\%$ of the energetics needed to excite the gas comes from mechanisms which give rise to LINER-like emission. Star formation is, therefore, the main ionising mechanism in the M1931 BCG and the main contributor to the luminosity of the emission lines.

### 5.6 Electron Density and Electron Temperature

The upper left panel of Figure 11 shows the electron density vs electron temperature diagnostic diagram for the most central regions of the M1931 BCG. From the M1931 MUSE data cube, we extracted a sub-cube corresponding to the core of the BCG (a $2\farcs 2\times 2\farcs 2$ circular aperture surrounding the AGN) and one corresponding to the $\rm{H\alpha}$ arm, and provided the fluxes obtained from the integrated spectra of these two regions to PyNeb. For the core, we obtain degenerate values for the temperature and density when using different line diagnostics. The difference in the $\rm{n_{e}}$ and $\rm{T_{e}}$ obtained from different emission lines for the core of the M1931 BCG could be due to the superposition of gas components with different physical conditions and gas excitation mechanisms along the line of sight. As we have seen in sect.
5.5, the central region of the BCG is ionised by a mix of processes, and therefore, in this region, we likely see the luminosity-weighted average of emission lines arising from different volumes with different physical conditions, and possibly also different kinematics. Also, different line ratios are sensitive to different ranges in $\rm{n_{e}}$ and $\rm{T_{e}}$; they therefore likely probe different components of the ionised gas. The complexity of this issue is already demonstrated by the example of a relatively ‘simple’ system, like a blue compact dwarf (BCD) galaxy: James et al. (2009) studied the BCD Mrk 996 based on VLT-VIMOS IFU observations and showed that the ionised gas in the SF nucleus of that galaxy consists of a component with a normal density of $\sim 170\>\rm{cm^{-3}}$ and a broad-line component with an electron density reaching $10^{7}\>\rm{cm^{-3}}$, similar to what we observe in the most central regions of the M1931 BCG. The panel on the right-hand side of Fig. 11 also displays the electron density vs electron temperature diagnostic diagram, but for the $\rm{H\alpha}$ tail. Here, the different diagnostics offer consistent values for the electron temperature and density, as can be observed from the intersection of the different curves. The lower left panel of Fig. 11 shows the spatially resolved electron temperature map measured in [K], as computed from the $\rm{[N\textsc{ii}]}$ 5755/6584 emission lines for individual spaxels of the M1931 MUSE cube. We measure a median electron temperature of $\rm{T_{e}}\sim 11230$ K and a median random error for the temperature of $\sim\pm 220$ K. The errors for the electron temperature were calculated with the bootstrapping method: assuming Gaussian errors on the line fluxes, we perturbed the measured flux values by random realisations of these errors.
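As an illustration, this bootstrap can be sketched as follows. The temperature diagnostic here is a rough low-density analytic approximation (Osterbrock & Ferland 2006) standing in for the actual PyNeb computation, and the fluxes and errors for the single mock spaxel are hypothetical, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def te_from_nii(f5755, f6584):
    # Low-density [N II] diagnostic, (6548+6584)/5755 ~ 8.23*exp(2.5e4/T_e)
    # (Osterbrock & Ferland 2006); the actual computation in the paper is
    # done with PyNeb, so this analytic form is only a stand-in.
    R = 1.34 * f6584 / f5755           # (6548+6584)/5755, with 6548 = 6584/2.96
    return 2.5e4 / np.log(R / 8.23)    # electron temperature in K

# Hypothetical fluxes (arbitrary units) and 1-sigma errors for one spaxel.
f5755, e5755 = 0.0175, 0.0009
f6584, e6584 = 1.00, 0.02

n_boot = 1000
te = te_from_nii(rng.normal(f5755, e5755, n_boot),
                 rng.normal(f6584, e6584, n_boot))
te = te[np.isfinite(te) & (te > 0)]    # drop unphysical realisations
print(f"T_e = {np.median(te):.0f} +/- {np.std(te):.0f} K")
```

The spread of the perturbed temperatures then serves as the random error estimate, exactly as done (via PyNeb) for each spaxel of the map.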
These new flux measurements for each spaxel were provided to PyNeb for a new computation of the electron temperature and density. This procedure was repeated a few tens of times to recover the errors for the electron temperature accurately. It is worth mentioning that, due to the weakness of the temperature-sensitive emission line $\rm{[N\textsc{ii}]}\>\lambda 5755$, we can measure the electron temperature and density only in a small number of spaxels. The spatially resolved electron temperature map shows considerable variation, with the lowest temperatures of about $\rm{T_{e}}\sim 8000$ K in the most central regions of the system. The temperature thus seems to decrease towards the core of the BCG. For example, Vikhlinin et al. (2005) studied the ICM temperature profiles in cluster cores and found that the temperature decreases in the cores of cool-core galaxy clusters, similar to what we observe for the interstellar medium (ISM) of the BCG. The lower right panel of Fig. 11 displays the spatially resolved electron density map in units of $\rm{cm^{-3}}$, as computed from the $\rm{[S\textsc{ii}]}\>\lambda 6718,\lambda 6732$ emission lines using the PyNeb tool. We measure a median electron density of $\rm{n_{e}=361\>cm^{-3}}$, a value typical for BCGs, with a median error of $\sim\pm 60\>\rm{cm^{-3}}$ (also computed through the bootstrapping method). The recovered values for $\rm{n_{e}}$ are below the critical density threshold of the $\rm{[S\textsc{ii}]}$ doublet, $\rm{n_{e}=3\cdot 10^{3}\>cm^{-3}}$ (Osterbrock and Ferland, 2006), above which the lines become collisionally de-excited. Because of this, we can consider our derivation of the density reliable.

### 5.7 Color excess E(B-V)

The panel on the left-hand side of Fig. 12 shows the colour excess map for the M1931 BCG. We recover a median value of E(B-V)=0.135 and a median random error of $\pm 0.002$.
The error was computed through error propagation, taking only the flux measurement errors and not the calibration uncertainties into account, meaning that the true errors are larger. The highest extinction, of the order of E(B-V)$\sim 0.4-0.5$, is observed in the BCG core. This is in accordance with the findings of Fogarty et al. (2019), who also observed high dust continuum emission in the core of the system, as well as in the $\rm{H\alpha}$ tail. The regions 1, 2 and 3 marked by the black ellipses are the cold dust regions, for which Fogarty et al. (2019) measure a temperature of $\rm{T_{dust}}<10$ K. In the core of the BCG, the dust temperature is $\rm{T_{dust,\>core}}>11$ K. The dust temperature was estimated based on the continuum emission in ALMA Bands 6 (336.0 GHz) and 7 (468.5 GHz), using the equations introduced by Casey (2012) and a value of $\beta=1.6\pm 0.38$. The ionised gas and dust in the M1931 BCG are more or less co-spatial, except perhaps for region 3, where we measure low emission line fluxes. Intriguingly, the ionised gas EW peak is coincident with the location of the cold dust region 2. The enhanced excitation of the ionised gas in this region could arise from collisional excitation of the cold dusty gas by relativistic particles (Sparks et al., 2012). It is worth mentioning that Fogarty et al. (2017) also derived the extinction in the BCG of the M1931 cluster, obtaining a value of A(V) = $0.87\pm 0.21$ assuming the Calzetti Law (Calzetti et al., 2001), which corresponds to E(B-V) = $0.21\pm 0.05$. Therefore, there is a small tension between the value inferred by us and that inferred by Fogarty et al. (2017) for the extinction. A possible explanation for the small discrepancy is that the spaxels with $>10\sigma$ detections of $\rm{H\alpha}$ are systematically less reddened by dust than the broadband filters that Fogarty et al. (2017) used to construct their SED fit from the UV through the far-IR.
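The conversion between the two extinction measures quoted above follows the standard Calzetti relation E(B-V) = A(V)/R_V with R_V = 4.05 (Calzetti et al. 2000); a quick check with the numbers from the text:

```python
R_V = 4.05                   # R_V of the Calzetti attenuation law
A_V, A_V_err = 0.87, 0.21    # A(V) of Fogarty et al. (2017), as quoted above

EBV = A_V / R_V              # colour excess
EBV_err = A_V_err / R_V      # error scales linearly
print(f"E(B-V) = {EBV:.2f} +/- {EBV_err:.2f}")  # 0.21 +/- 0.05, as quoted
```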
However, given that our errors for the colour excess are underestimated, as they are just the statistical random errors, the two values for E(B-V) can be considered consistent.

### 5.8 Ionisation parameter

The panel on the right-hand side of Fig. 12 shows the spatially resolved map of the ionisation parameter $\log(U)$. This parameter gives the ratio of ionising photon density to hydrogen density and represents a dimensionless measure of the intensity of the ionising radiation. We recover a median value for the ionisation parameter of $\log(U)=-2.9$ and a median error of $\pm 0.3$. This value is consistent with those observed for star-forming galaxies (e.g. Cresci et al. 2017, Kewley et al. 2019). The highest $\log(U)$ values seem to be coincident with the location of the star-forming structures visible in column 1 of Fig. 10.

### 5.9 Star formation rates

Figure 13 shows the spatially resolved SFR map for the M1931 BCG in units of $\rm{M_{\odot}/yr}$. We observe regions with higher SFRs coincident with the BCG core, while in the tail the SFR levels are lower. From the spaxel-by-spaxel analysis, we measure a total $\rm{SFR\sim 144\>M_{\odot}/yr}$ and a random error of $\rm{\pm 0.33\>M_{\odot}/yr}$ (computed through error propagation, taking only the flux measurement errors into account). The recovered SFR surface density is $\rm{\Sigma SFR\sim 147\>M_{\odot}\>yr^{-1}\>kpc^{-2}}$. However, these values for the SFR and $\rm{\Sigma SFR}$ are just upper limits because, as we have seen in section 5.5.3, SF accounts for $\sim 50-60\%$ of the ionised gas emission, AGN ionisation accounts for about $\sim 10\%$, whereas $\sim 30-40\%$ of the energetics needed to excite the gas comes from mechanisms which give rise to LINER-like emission.
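A numerical sketch of how such a decomposition propagates to the SFR, using the well-known Kennicutt (1998) $\rm{H\alpha}$ calibration, SFR $[\rm{M_{\odot}/yr}]$ = $7.9\times 10^{-42}\,L(\rm{H\alpha})$ [erg/s]; the $\rm{H\alpha}$ luminosity below is back-solved from the quoted total SFR and is therefore purely illustrative:

```python
K98 = 7.9e-42            # Kennicutt (1998): SFR [Msun/yr] per L(Halpha) [erg/s]

# Halpha luminosity chosen to reproduce the quoted total SFR (illustrative).
L_Ha = 144.0 / K98       # ~1.8e43 erg/s
sfr_total = K98 * L_Ha   # 144 Msun/yr: the uncorrected upper limit

# Keeping only the fraction of L(Halpha) attributed to star formation
# scales the SFR down linearly; the fraction here (~0.67) is chosen to
# match the corrected value derived in the text.
f_sf = 97.0 / 144.0
sfr_sf = f_sf * sfr_total  # ~97 Msun/yr
```

Because the calibration is linear in $L(\rm{H\alpha})$, removing the non-SF contributions from the line luminosity and rescaling the SFR are equivalent operations.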
The Kennicutt (1998b) conversion for the estimation of the SFR was designed for pure star-forming galaxies; therefore, to estimate the SFR accurately, we should exclude the contribution of AGN and LINER-like emission from the total luminosity of the $\rm{H\alpha}$ emission line. By doing so, we recover a $\rm{SFR\sim 97\>M_{\odot}/yr}$ with a random error of $\pm 0.7\>\rm{M_{\odot}/yr}$. The integrated star formation rate surface density that we recover, after excluding the contribution of LINER-like and AGN emission from the total line luminosity of $\rm{H\alpha}$, is $\rm{\Sigma SFR\sim 100\>M_{\odot}\>yr^{-1}\>kpc^{-2}}$. However, LINER-like line ratios could also arise from an over-abundance of O-stars (but not indicating more star formation; Fogarty et al. 2015), meaning that the LINER-like excitation can also partially be due to star formation. Therefore, the recovered SFR value of $\sim 97\>\rm{M_{\odot}/yr}$ might be slightly underestimated. The “true” $\rm{H\alpha}$-based SFR is thus likely bounded above by a maximum of 144 $\rm{M_{\odot}/yr}$ and below by a minimum of 97 $\rm{M_{\odot}/yr}$. Comparing to other works, Donahue et al. (2015) reported an unobscured SFR$\sim 90\>\rm{M_{\odot}/yr}$ for the M1931 BCG from the CLASH HST rest-frame UV data, without accounting for reddening. Ehlert et al. (2011) reported a SFR$\sim 170\>\rm{M_{\odot}/yr}$ from broadband Subaru $\rm{H\alpha}$ photometry for the same system. Santos et al. (2016) computed a SFR$\sim 150\pm 15\>\rm{M_{\odot}/yr}$ from a FIR SED fit using Herschel observations, after removing the AGN contamination. Fogarty et al. (2017) estimated a value for the SFR of $\sim 250\pm 75\>\rm{M_{\odot}/yr}$ from NUV-FIR SED fitting using photometry from the CLASH HST data set in combination with mid- and far-IR data from Spitzer and Herschel. Fogarty et al.
(2015), on the other hand, derived a reddening-corrected value for the UV-continuum estimate of the SFR for the M1931 BCG of $280\pm 20\>\rm{M_{\odot}/yr}$ using CLASH HST ACS and WFC3 observations. Their derived SFR agrees with the value they obtained in Fogarty et al. (2017) from the SED fit from the UV to the FIR. They also estimated the $\rm{H\alpha}$ luminosity of the system, from which they derived a SFR of $130\pm 40\>\rm{M_{\odot}/yr}$. All the $\rm{H\alpha}$-based SFR estimates are in accordance with both values derived by us, within the errors. The larger SFR values of $\sim 250\>\rm{M_{\odot}/yr}$ reported by Fogarty et al. (2015) and Fogarty et al. (2017) are heavily influenced by the UV continuum, and possibly reflect the SFR over a longer time period. The lower SFR values are most probably representative of the ongoing star-forming activity. According to Fogarty et al. (2019), the M1931 BCG has a molecular gas reservoir with a mass of $\rm{M_{H_{2}}=1.9\pm 0.3\cdot 10^{10}M_{\odot}}$; together with our inferred SFR, we can calculate the gas depletion time as $\rm{t_{depl}=\frac{M_{gas}}{SFR}}$, and by doing so we recover $\rm{t_{depl}\sim 190}$ Myr. O’Dea et al. (2008) studied BCGs based on imaging with the Spitzer Space Telescope and inferred a typical gas depletion timescale for such systems of about 1 Gyr. Kennicutt et al. (2012) demonstrated that shorter gas depletion times correspond to higher star formation efficiencies. This might imply that the M1931 BCG has a high star formation efficiency and that the system is currently consuming its gas at a higher rate than typical. On the other hand, Voit & Donahue (2011) demonstrated that BCGs with large SFRs ($>10\>\rm{M_{\odot}/yr}$) have depletion times of less than 1 Gyr. These depletion times are shorter than those of BCGs with lower SFR levels.
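The depletion-time arithmetic above is quickly checked; the quoted $\rm{t_{depl}\sim 190}$ Myr corresponds to the SF-only SFR (the exact quotient rounds to $\sim$196 Myr), and the uncorrected SFR gives a slightly shorter time:

```python
M_H2 = 1.9e10   # Msun, molecular gas mass (Fogarty et al. 2019)

# Depletion time t_depl = M_gas / SFR for the SF-only (97) and
# uncorrected (144) Halpha-based SFRs, in Myr.
t_depl_myr = {sfr: M_H2 / sfr / 1e6 for sfr in (97.0, 144.0)}
# ~196 Myr for 97 Msun/yr (quoted as ~190 Myr); ~132 Myr for 144 Msun/yr
```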
The depletion time we infer for the M1931 BCG is thus not so different from that of other BCGs with large SFRs, and is also similar to that of other star-forming galaxies, especially those forming stars at starburst rates (Díaz-García & Knapen, 2020). Hence, the global Kennicutt star formation relation (Kennicutt, 1998a), relating the molecular gas quantity to the SFR, is not so different in cool-core, multiphase BCGs and disk galaxies. To conclude, the M1931 BCG shows elevated levels of SF of the order of $\sim 97\>\rm{M_{\odot}/yr}$, with the largest values in the core of the system, in accordance with previous studies.

### 5.10 Oxygen abundances

Figure 14 displays the spatially resolved oxygen abundance map for the BCG of the M1931 cluster. We measure a median gas-phase metallicity of $\rm{12+log(O/H)}\sim 8$ and an error of $\pm 0.35$, with the lowest (O/H) values in the BCG core, and slightly more enhanced values in the outskirts. For example, Kirkpatrick & McNamara (2015) studied the hot gas-phase metallicity distribution in 29 galaxy cluster cores based on Chandra X-ray data and found that over the life cycle of AGN activity, hot outflows can be responsible for the broadening of abundance peaks in cool-core clusters, and effectively transport metal-enriched gas from the BCG to great distances in the cluster atmospheres. Similarly, Ehlert et al. (2011) studied the ICM metallicity in the M1931 cluster based on Chandra X-ray data and concluded that this cluster core is missing the central metallicity peak which is normally measured in cool-core clusters, thus suggesting bulk transport of hot X-ray gas out to large distances from the centre due to AGN outbursts. This is similar to what we observe for the warm gas in the ISM of this system, with the lowest metallicity values inferred in the core and slightly more enhanced values in the tail. Ehlert et al.
(2011) measure a more or less constant ICM metallicity of $\rm{Z=0.36\>Z_{\odot}}$ out to distances as large as 400 kpc from the M1931 cluster core, which translates to 12 + log(O/H) = 8.25. This value is consistent with the gas-phase metallicity we measure in the ISM of the BCG. The similarity may hint that the gas we observe in the ISM of the galaxy has condensed from the ICM.

### 5.11 Star formation history

Besides emission line fluxes, equivalent widths and stellar parameters, FADO can also recover the SFH of a galaxy, ensuring consistency between the best-fitting SFH and the observed nebular emission. As a consistency check, we have compared the SFH recovered from FADO to the one recovered from Porto3D, and we observe a very good agreement between the two. The SFH module of FADO allows us to recover the stellar mass ever formed within the system: $\rm{M^{*}_{ever\>\>formed}=5.9\cdot 10^{11}M_{\odot}}$. This value is in perfect agreement with the one inferred by Bellstedt et al. (2016) for the mass of the M1931 BCG using only K-band photometry: $\rm{M^{*}=5.9\pm 1.1\cdot 10^{11}M_{\odot}}$. Figure 15 shows the discretised approximation to the best-fitting SFH for the integrated spectrum of the M1931 BCG, as recovered from FADO. The upper panel shows the contribution of the individual SSPs in the best-fitting population vector to the monochromatic luminosity at 6150 $\AA$ and the lower one their contribution to the stellar mass as a function of age. It is clear from these two plots that the M1931 BCG had a complex SFH. Approximately $50\%$ of the system's stellar mass, i.e. $\rm{M^{*}\sim 3\cdot 10^{11}\>M_{\odot}}$, was in place at an early epoch, $t_{\frac{1}{2}}\sim 6.98$ Gyr ago (z$\gtrsim 1.5$), followed shortly by subsequent, weaker mass build-up episodes. This main episode of SF also contributes the most to the luminosity of the galaxy ($\sim 30\%$).
This mass build-up episode is shortly followed by another, which occurred 6.3 Gyr ago and led to the formation of $30\%$ of the mass of the system, i.e. to the formation of $\rm{M^{*}\sim 1.7\cdot 10^{11}\>M_{\odot}}$. This second strongest and oldest mass build-up event might be associated with a wet merger (given that there are SSPs of different ages and metallicities contributing to the mass build-up). Hence, most of the mass of the BCG ($\sim 80\%$) was built up at an early epoch, more than 6 Gyr ago, in accordance with theoretical models, which favour the scenario in which BCG growth follows a two-phase hierarchical formation, with rapid cooling and in-situ star formation at high redshifts followed by subsequent growth through repeated mergers (e.g. De Lucia et al. 2007). Our results are also in accordance with the findings of Collins et al. (2009), who demonstrate that $\sim 90\%$ of the stellar mass of BCGs is in place by z = 1. Many other mass build-up events occur for the M1931 BCG over its SFH, but they are less strong than the two initial episodes and, combined, only lead to the build-up of $\sim 20\%$ of the mass of the system. The next two strongest mass build-up events occur at a redshift $\rm{z\lesssim 0.5}$: one $\sim 1$ Gyr ago, contributing $\sim 10\%$ to the stellar mass ever formed (a mass build-up of $\rm{M^{*}\sim 5.3\cdot 10^{10}\>M_{\odot}}$) and $\sim 13\%$ to the luminosity, and the other $\sim 5.6\cdot 10^{7}$ yr ago, contributing $\sim 2\%$ to today's stellar mass (a mass build-up of $\rm{M^{*}\sim 1.1\cdot 10^{10}\>M_{\odot}}$) and $\sim 20\%$ to the luminosity. The ages of these two “post-assembly” mass build-up episodes are a good match to the starburst age range Fogarty et al. (2017) derive for the M1931 BCG using a Bayesian photometry-fitting technique that accounts for both stellar and dust emission from the UV through the far-IR. Using a simpler single-burst SFH parameterization, Fogarty et al.
(2017) estimate an age range for a “post-assembly” star formation episode of $\sim 4.5\times 10^{7}-2.2\times 10^{8}$ yr, in good agreement with the ages we recover. Other weak mass build-up events probably occurred for the M1931 BCG, leading to the build-up of less than $1\%$ of the stellar mass and less than $10\%$ of the total luminosity of the system. However, robust conclusions about such minor SF episodes are difficult due to the probabilistic nature of PSS. The most recent mass build-up event occurred 1 Myr ago and led to the formation of $0.2\%$ of the total mass, i.e. to the formation of $\rm{M^{*}=1.1\cdot 10^{9}M_{\odot}}$. All the weaker episodes of mass build-up, taken at face value, might be due to minor dry mergers or due to “cooling-flow”-induced star formation episodes (or a combination of the two). The mass build-up events of the last Gyr all occurred at redshifts z$\lesssim 0.5$, an epoch at which wet mergers have mostly ended in galaxy clusters, which are by then more or less fully assembled. Therefore, the mass build-up episodes which occurred less than 1 Gyr ago can most probably not be attributed to wet mergers and associated minor in-situ SF episodes, although a contribution from dry mergers cannot be ruled out. For example, Burke et al. (2015) examined the stellar mass assembly in galaxy cluster cores using CLASH data and found that BCGs grow in stellar mass by a factor of 1.4 on average from the accretion of their companions, with major merging being very rare for their sample. They conclude that minor mergers constitute the dominant process for stellar mass assembly of BCGs at low redshifts, but with the majority of the stellar mass from interactions ending up contributing to the ICL rather than building up the system. Similarly, Webb et al.
(2015) studied the SFH of BCGs out to z = 1.8 from the SpARCS/SWIRE survey and concluded that star formation episodes below $z\sim 1$ do not contribute more than 10% to the final total stellar mass of the BCGs, and that BCG growth is likely dominated by dry mergers at low z. This is consistent with the recent mass build-up episodes that we infer for the M1931 BCG, which contribute to the build-up of less than 20% of its stellar mass. Another interpretation could be that these weaker mass build-up events are related to “cooling-flow”-induced SF episodes, as we know that the M1931 BCG resides in a cool-core cluster. Thermally unstable cooling of the ICM into cold clouds which start sinking towards the SMBH has been shown to explain the ongoing star formation as well as the AGN activity of BCGs (e.g. Voit & Donahue 2015). This scenario is supported by the high SFR levels we measure for the M1931 BCG, which can probably not be attributed to wet mergers with satellite galaxies, as satellite cluster galaxies approaching the cluster core should be devoid of gas due to ram pressure stripping and starvation. The co-spatiality and kinematics of the ionised and cold gas components, as well as the metallicity of the ISM, also point to a common origin for the two gas phases, such as cooling from the ICM. Under the simplifying assumption that the SFR was more or less constant over the last 100 Myr, a SFR = 97 $\rm{M_{\odot}/yr}$ yields a stellar mass of $\sim 10^{10}\rm{M_{\odot}}$ in this time span (and similarly a SFR = 144 $\rm{M_{\odot}/yr}$ yields a mass of $\sim 1.4\cdot 10^{10}\rm{M_{\odot}}$ over the last 100 Myr). According to Fig. 15, in the last 100 Myr, $\sim 3\%$ of the total mass of the system was formed, yielding a value of about $\sim 1.7\cdot 10^{10}\rm{M_{\odot}}$.
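These numbers follow directly from the quantities quoted in the text and can be checked in a few lines:

```python
M_star_total = 5.9e11         # Msun, stellar mass ever formed (FADO, this work)

# Mass formed in the last 100 Myr assuming a constant SFR:
m_97  = 97.0  * 1e8           # ~1.0e10 Msun (SF-only Halpha SFR)
m_144 = 144.0 * 1e8           # ~1.4e10 Msun (uncorrected Halpha SFR)

# Mass formed in the last 100 Myr according to the FADO SFH (~3% of total):
m_fado = 0.03 * M_star_total  # ~1.77e10 Msun, quoted as ~1.7e10 in the text
```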
This value is in good agreement with the mass that would have formed in situ over the last 100 Myr, given the star formation levels inferred from the $\rm{H\alpha}$ emission line, thus making “cooling-flow”-induced SF episodes a good candidate to explain the recent SFH of the M1931 BCG. The SFH module of FADO also allows us to put some rough constraints on the stellar parameters, such as the SSP ages (and metallicities). We have investigated the stellar age gradients in a region corresponding to the core of the galaxy (i.e. a $2\farcs 2\times 2\farcs 2$ circular aperture around the SMBH) and in a region encompassing the whole system, and we recover mainly positive (to flat) age gradients. Stellar ages are lowest in the core of the system and seem to increase towards the outskirts of the galaxy. These findings are also supported by our spatially resolved SFR map, where we demonstrate that the highest SFR levels are confined to the core of the galaxy. Hence, it is expected that the core of the BCG contains a young population of stars. Loubser et al. (2009) have demonstrated that the youngest BCG cores mostly belong to line-emitting galaxies in cool-core clusters, similar to what we observe in the case of the M1931 BCG. Likewise, Edwards et al. (2020) investigated age and metallicity gradients in a sample of 23 low-redshift BCGs (z < 0.7) and observed positive age gradients for 6 of their systems, and negative gradients for the rest (as is often seen for massive elliptical galaxies). All of the BCGs showing positive age gradients are star-forming multi-phase systems, similar to the BCG of the M1931 cluster.
To conclude, dry mergers and/or “cooling-flow”-induced in-situ star formation episodes dominate the mass build-up of the M1931 BCG at late epochs, but they account for only a small fraction, less than $20\%$, of the total mass of the system, while in-situ star formation (either due to rapid cooling or wet mergers) at early epochs, more than 6 Gyr ago, probably led to the formation of up to $\sim 80\%$ of the stellar mass of the system.

### 5.12 Stellar kinematics

Figure 16 shows the stellar radial velocity (left-hand panel) and the stellar velocity dispersion (right-hand panel) in the BCG core, as recovered from the GIST pipeline. The stellar velocity map is normalised to the BCG's rest frame, i.e. to the velocity of the central bin, whose position is coincident with that of the SMBH. The recovered stellar velocity dispersions are corrected for instrumental broadening. While the stellar radial velocity shows a gradient from -200 to 200 $\rm{km\leavevmode\nobreak\ s}^{-1}$ (with a median error for the radial velocity of $\pm 12\>\rm{km\leavevmode\nobreak\ s}^{-1}$) with respect to the systemic velocity of the BCG, indicative of rotation, the values for the stellar velocity dispersion are much higher, ranging from 300 $\rm{km\leavevmode\nobreak\ s}^{-1}$ up to $\sim 650\>\rm{km\leavevmode\nobreak\ s}^{-1}$ (with a median error for the velocity dispersion of $\pm 12\>\rm{km\leavevmode\nobreak\ s}^{-1}$). This means that the BCG core is dispersion dominated, i.e. a slow rotator, which is expected for massive elliptical galaxies. Since this high stellar velocity dispersion will be preserved after the gas has been consumed by SF and/or ejected by the AGN, our data capture an intermediate-to-late stage in the assembly history of the present-day massive early-type galaxies (or central dominant galaxies) found at the centres of galaxy clusters.
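The instrumental-broadening correction mentioned above is conventionally applied in quadrature; a minimal sketch, where the instrumental dispersion value is an assumed illustrative number, not one quoted in this work:

```python
import math

def correct_dispersion(sigma_obs, sigma_inst):
    """Remove instrumental broadening in quadrature; returns 0.0 for
    lines narrower than the instrumental resolution."""
    if sigma_obs <= sigma_inst:
        return 0.0
    return math.sqrt(sigma_obs**2 - sigma_inst**2)

# Assumed instrumental dispersion of 60 km/s (illustrative only):
sigma_corr = correct_dispersion(300.0, 60.0)   # ~294 km/s
```

For dispersions well above the instrumental value, as measured here, the correction is only a few per cent.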
### 5.13 Comparison of stellar and gaseous kinematics

Figure 17 displays, on the left-hand side, the difference between the systemic velocities of the stars and of the $\rm{H\alpha}$ gas, and on the right-hand side, the ratio of the velocity dispersions of the stellar and ionised gaseous components. For this comparison, we have used the stellar kinematics recovered by the GIST pipeline for the un-binned data. We measure velocity differences between the stellar and gaseous components as high as $\sim\pm 200\>\rm{km\leavevmode\nobreak\ s}^{-1}$, and a median value for the ratio between the velocity dispersions of stars and gas of 2.4. Hence, the stars have much higher velocity dispersions than the gas. The gaseous and stellar kinematics in the M1931 BCG core are thus decoupled, with the gaseous component not following the potential well of the stars. Such decoupled kinematics have been observed in other BCGs (e.g. Hamer et al. 2016, Loubser et al. 2013). The kinematics of the gas are more closely related to the turbulence and bulk motions in the hot ICM than to the motions of the stars. The high velocity dispersion of the stars argues strongly that the gas has originated from the more quiescent circum-galactic medium, and not from the stars. These findings are in accordance with the predictions of the precipitation model (Voit et al. 2017) as well as of the chaotic cold accretion model (Gaspari et al. 2018), which state that the gas motions will track the much larger X-ray gas reservoir rather than the stellar component. The precipitation models also predict that $\sigma_{gas}<0.5\cdot\sigma_{stars}$, in accordance with what we observe for the M1931 BCG.

## 6 Discussion

The archival MUSE and ALMA observations used in our analysis of the M1931 BCG reveal an extended, filamentary system, displaying dynamically unrelaxed structures with disturbed motions along the filaments.
Both the molecular and ionised gas distributions show a nuclear emission component closely related to the BCG's core, as well as a set of clumpy filaments, which form an elongated $\rm{H\alpha}$ arm extending $\sim 30$ kpc in the NE direction. Both gas phases are co-spatial and co-moving, with the molecular gas distributed along the brightest emission from the warm ionised nebula. These findings are in accordance with the hypothesis that the optical nebula traces the envelopes of the cold molecular clouds. As we have seen in sect. 5.5, the cloud surfaces are excited by a variety of mechanisms and are bright in Balmer and forbidden line emission. The pockets of cold gas are immersed within ionised gas, and in-situ SF within the molecular clouds as well as interactions with the ICM are the most plausible mechanisms giving rise to extended $\rm{H\alpha}$ emission all over the core and NE filament. Such correlations between the two gas phases can be interpreted as the manifestation of a common origin, such as the condensation of low-entropy ICM gas through thermal instabilities. The ICM surrounding the $\rm{H\alpha}$ tail has the lowest entropy according to Ehlert et al. (2011) and is therefore a prime candidate for the reservoir of gas that cools to become star-forming molecular gas. Moreover, the recovered values of the gas-phase metallicity in the BCG are consistent, within the errors, with those recovered by Ehlert et al. (2011) for the ICM. These findings support the scenario that the warm gas has condensed from the ICM, without being further polluted by the stars in the system. For example, Fogarty et al.
(2017) studied the nature of feedback mechanisms in the 11 CLASH BCGs. Their results strongly suggest that thermally unstable ICM plasma with a low cooling time is the source of material forming the reservoir of cool gas that fuels star formation in the BCGs, and that BCG star formation and feedback either exhausts the supply of this material on gigayear timescales or settles into a state of relatively modest, continuous star formation. The role of AGN feedback in the turbulent and chaotic behaviour of the gas was emphasised by Voit et al. (2017), who demonstrated that AGN jets and bubbles can promote condensation of the hot ambient medium by raising some of it to greater altitudes and thus lowering its $\rm{(t_{cool}/t_{ff})}$ ratio. After the condensates form, they start raining down towards the SMBH, providing additional fuel for both star formation and AGN activity. In simulations, complex arcs of gas fall back into the centres of BCGs after initially being propelled outwards by AGN jets and cavities (Gaspari et al. 2013, Li et al. 2015), similar to what we observe in the M1931 BCG. The significance of uplift and radial infall is now an integral aspect of the precipitation model (Voit & Donahue 2015, Voit et al. 2017) as well as of the chaotic cold accretion model (Gaspari et al. 2018, Tremblay et al. 2018). The kinematics of the gas in the M1931 BCG point to such a scenario. According to Fogarty et al. (2019) and to our observations, the molecular gas in the $\rm{H\alpha}$ tail is probably falling inward at $\sim 300\>\rm{km\leavevmode\nobreak\ s}^{-1}$, with the ionised gas component closely following these motions, as can be seen from the left-hand panel of Fig. 5. The redshifted stream of gas we observe would thus correspond to material radially in-falling towards the centre of the system.
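As a rough consistency check on this infall picture, the crossing time of the $\sim 30$ kpc tail at $\sim 300\>\rm{km\leavevmode\nobreak\ s}^{-1}$ is of order $10^{8}$ yr, short compared with the Gyr-scale star formation history. A quick sketch (the constants are standard; the inputs are the round numbers quoted in the text, not a dynamical model):

```python
# Order-of-magnitude travel time of the infalling tail.
KPC_CM = 3.086e21   # cm per kpc
YR_S = 3.156e7      # seconds per year

def infall_time_myr(length_kpc, v_kms):
    """Crossing time of the tail at constant infall speed, in Myr."""
    return length_kpc * KPC_CM / (v_kms * 1e5) / YR_S / 1e6

t = infall_time_myr(30, 300)   # roughly 100 Myr
```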
These infalling clouds can provide a substantial component of the mass flux toward the SMBH accretion reservoir, while the physical conditions of the gas within them could satisfy the criteria for the ignition of star formation. Moreover, the gaseous and stellar kinematics are decoupled, as they do not share a common velocity field, with the stars showing larger velocities than the gas. This suggests that the motion of the gas is more closely related to the turbulence and bulk motions within the ICM than to the motion of the stars in the system. The kinematics of the warm and cold gas phases lead us to conclude that the AGN in the system might have recently experienced an energetic outburst. This AGN outburst led to the condensation of the uplifted gas, which is now probably cycling back down towards the AGN, promoting an elevated star formation rate of $\sim 100\>\rm{M_{\odot}/yr}$. The observed distribution of dust in the system (Fogarty et al., 2019) points to an uplift mechanism, with dust emission most prominent in the core and at the furthest extremity of the tail. The observed distribution of the gas-phase metallicity also hints at such an uplift mechanism: we observe the lowest (O/H) values of $\sim$ 7.9 in the most central regions of the system, while in the $\rm{H\alpha}$ tail the metallicity is slightly higher. This might mean that the metal-enriched gas has been expelled outwards from the centre by AGN outflows. However, the infalling multiphase tail is not aligned with the jet axis implied by features consistent with E-W oriented X-ray cavities observed in the Chandra data (Ehlert et al., 2011).
This indicates that the gaseous component either formed originally at the interface between the jet-inflated cavities and the ICM and then migrated away from these positions, or formed at an earlier epoch, sufficiently long ago for the X-ray cavities to have dissipated (Fogarty et al., 2019). The cloud envelopes in this system are ionised by a superposition of many physical processes, including photoionisation from young stars and other mechanisms which give rise to LINER-like emission, the best candidates being photoionisation by cosmic rays, conduction from the hot ICM, X-ray photoionisation, turbulent mixing layers or collisional heating. AGN emission dominates only in the BCG core, see Figs. 7 and 10. This is in accordance with the findings of Fogarty et al. (2019), whose analysis of the CO spectral line energy distribution in the M1931 BCG also reveals evidence for multiple gas excitation mechanisms, including SF, interaction between the molecular gas and the ICM, and AGN emission. According to Donahue et al. (2011), such multiphase, star-forming BCGs also tend to have unusually luminous vibrational and rotational $\rm{H_{2}}$ lines, far more luminous than in normal star-forming objects. The emission in such galaxies cannot be fit by $\rm{H_{2}}$ gas at a single excitation temperature, indicating that a mix of excitation mechanisms must be at play, in accordance with our findings. The spectral decomposition method of Davies et al. (2017) allows us to infer the fractional contribution of each mechanism to the total luminosity of the emission lines. Ionisation from star formation contributes about 50-60% of the luminosity of the emission lines, whereas AGN emission accounts for only $\sim$ 10%. LINER-like emission accounts for the remaining 30-40% of the energetics needed to ionise the gas. The occurrence of such "composite" emission is typical for cool-core BCGs. For example, Tremblay et al.
(2018), Iani et al. (2019), and Hamer et al. (2016) also studied (cool-core) BCGs based on IFU data, inferred composite emission for their systems in the BPT diagram, and concluded that these systems are ionised by a superposition of many physical processes. The spectral decomposition method also allows us to recover a more accurate estimate of the SFR, after excluding the fractional contribution of AGN and LINER emission to the luminosity of the $\rm{H\alpha}$ line, yielding a lower limit of $\rm{SFR\sim 97\>M_{\odot}/yr}$. This value for the SFR, as well as the one inferred without removing the contribution of both AGN and LINER emission to the luminosity of the $\rm{H\alpha}$ line, i.e. $\sim 144\>\rm{M_{\odot}/yr}$, is in accordance, within the errors, with literature $\rm{H\alpha}$-based SFRs for the M1931 BCG (Ehlert et al. 2011, Donahue et al. 2015, Fogarty et al. 2015, Fogarty et al. 2017). The properties of the ionised gas that we infer by analysing the spectrum of each spectral pixel of the MUSE cube, such as the electron temperature, density, colour excess and ionisation parameter, are typical for star-forming systems. We have also computed the physical properties of integrated regions in the system, see appendix C for more details. The temperature that we infer from the $\rm{[N\textsc{ii}]}$ 5755/6584 emission lines is of the order of $\rm{T_{e}}=11000$ K, with the lowest values observed in the most central regions of the system. These values are in accordance with those inferred for the BCG of the cool-core cluster Abell 2597 by Voit and Donahue (1997). Hamer et al. (2016) and Iani et al. (2019), for example, determined the electron density for their samples of BCGs based on the $\rm{[S\textsc{ii}]}$ 6731/6716 emission line ratio, and their inferred values of the order of a few hundred $\rm{cm^{-3}}$ are in very good agreement with the value of $\rm{n_{e}=361\>cm^{-3}}$ that we determine for the M1931 BCG.
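The extinction and SFR values quoted here follow standard recipes: the colour excess from the Balmer decrement (Case B intrinsic $\rm{H\alpha/H\beta}=2.86$) and an $\rm{H\alpha}$-luminosity SFR calibration. A hedged sketch, assuming Cardelli-like extinction-curve values $k(\rm{H\alpha})=2.53$, $k(\rm{H\beta})=3.61$ and the Kennicutt (1998) conversion; the paper's exact prescriptions may differ in detail:

```python
import math

# Assumed extinction-curve values at Halpha/Hbeta (Cardelli-like)
K_HA, K_HB = 2.53, 3.61
INTRINSIC_RATIO = 2.86   # Case B Halpha/Hbeta

def ebv_from_balmer(f_ha, f_hb):
    """Colour excess E(B-V) from the observed Balmer decrement."""
    return 2.5 / (K_HB - K_HA) * math.log10((f_ha / f_hb) / INTRINSIC_RATIO)

def sfr_kennicutt(l_ha_erg_s):
    """Kennicutt (1998) Halpha calibration: SFR in Msun/yr."""
    return 7.9e-42 * l_ha_erg_s

# An assumed observed decrement of 4.3 gives E(B-V) ~ 0.41,
# comparable to the central values quoted in the text.
ebv = ebv_from_balmer(4.3, 1.0)
corr = 10 ** (0.4 * ebv * K_HA)   # multiplicative de-reddening of Halpha
```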
We observe the highest density and extinction ($E(B-V)\sim 0.5$) in the most central regions of the system, in accordance with the findings of Fogarty et al. (2019), who also measure the highest dust continuum emission in the core of the system. Thus, the densest, and perhaps dustiest, gas is found coincident with (and to the south of) the nucleus, where the radio source is also detected. The measured ionisation parameter is of the order of log(U) = -2.9, a value typical for star-forming galaxies according to Kewley et al. (2019), and it shows some variation within the nebula, indicating that the source of excitation is not confined to a specific region. To conclude, it is possible that the molecular gas and ionised nebula at the centre of the M1931 cluster form a galaxy-scale "fountain" similar to what was observed by Tremblay et al. (2018) and Olivares et al. (2019) for their samples of BCGs: the AGN feedback has uplifted the low-entropy gas, which ultimately condensed and began raining back toward the galaxy centre from which it came. This rain of gas back onto the SMBH accretion reservoir is promoting SF and further AGN feedback, in accordance with the theoretical predictions of the chaotic cold accretion model (Gaspari et al. 2018). However, these "cooling-flow"-induced SF episodes, in combination with dry mergers, led to the build-up of less than $20\%$ of the present-day stellar mass of the BCG.

## 7 Conclusion

Based on VLT-MUSE optical integral field spectroscopy, we investigate the BCG of the massive cool-core CLASH cluster MACS 1931.8-2635 at a redshift of z=0.35, concerning its spatially resolved star formation activity, ionisation sources, chemical abundances, and gas and stellar kinematics. The optical MUSE IFS data are supplemented by sub-mm ALMA observations, allowing us to link the properties of the warm ionised gas to those of the cold molecular gas component.
Employing different tools on the optical data, we reliably measure i) the fluxes of strong emission lines, which allow us to determine quantitatively the physical conditions of the warm ionised gas, ii) the SFH and iii) the kinematics of the stellar component. The principal findings of our analysis can be summarised as follows: 1. The ionised and molecular gas components are co-spatial. The normalised $\rm{H\alpha}$ to CO flux ratios are close to unity along the nebula, and the peak of the CO flux intensity is located at the same position as the peak of the ionised gas flux intensity. 2. The ionised and molecular gas components are co-moving. We measure a gradient from $\sim-300$ to $\sim 300\>\rm{km\leavevmode\nobreak\ s}^{-1}$ with respect to the BCG rest-frame and consistently low velocity dispersions of the order of $\sim 150-250\>\rm{km\leavevmode\nobreak\ s}^{-1}$ for the ionised gas. The diffuse gas confined to the tail is likely falling inward, providing additional fuel for SF and AGN feedback, in accordance with models of chaotic cold accretion. 3. The main source of ionisation in the M1931 BCG is a mix of star formation and other energetic processes which give rise to LINER-like emission, the main candidate being heating of the gas by the surrounding medium rather than ionisation by pAGB stars. AGN ionisation dominates only in the BCG core. 4. After applying the spectral decomposition method, we recover an SFR of $\sim 97\>M_{\odot}/yr$, with the most elevated levels in the BCG core. However, star formation accounts for only $\sim$50-60$\%$ of the energetics required to ionise the warm gas. AGN emission accounts for $\sim 10\%$ of the ionised gas emission, while the remaining $\sim 30-40\%$ of the energetics needed to excite the gas comes from mechanisms which give rise to LINER-like emission. Star formation is, thus, the main contributor to the $\rm{H\alpha}$ flux. 5.
The median values recovered from the spatially resolved maps for the electron density, electron temperature, extinction and ionisation parameter are typical for star-forming systems and in good agreement with other studies of cool-core BCGs. 6. We measure a median value for the gas-phase metallicity of $12+log(O/H)\sim 8\pm 0.35$, with the lowest values observed in the BCG core. Ehlert et al. (2011) measure an ICM metallicity of $12+log(O/H)=8.25$, a value consistent with the gas-phase metallicity we measure in the ISM of the BCG. This hints that the warm gas we observe in the ISM of the galaxy has condensed from the ICM. 7. About $80\%$ of the system's stellar mass formed more than 6 Gyr ago (i.e. at z$\gtrsim 1.5$), followed by subsequent but weaker mass build-up episodes. The SFH of the M1931 BCG is in accordance with theoretical models which suggest a two-phase hierarchical formation for BCGs: in-situ SF at high z followed by subsequent mass growth through dry mergers and/or "cooling-flow"-induced SF episodes. Both dry mergers and/or in-situ SF generated by ICM cooling account for less than 20% of the stellar mass of the M1931 system within the last 1 Gyr (i.e. at z$\lesssim 0.5$). 8. The stellar kinematics reveal a dispersion-dominated system, which is typical for massive elliptical galaxies. 9. The gas motions are decoupled from the stellar kinematics and are more closely related to the bulk motions of the ICM. The velocity dispersion of the gaseous component is approximately a factor of two lower than that of the stellar component, in accordance with the predictions of the precipitation model. Most of the Python codes / Jupyter Notebooks that we have developed for this analysis are publicly available in an online GitHub repository (Ciocan2021-MACS1931-BCG-codes), archived at DOI: $\rm{10.5281/zenodo.4458242}$ and available at https://github.com/ciocanbianca/Ciocan2021_MACS1931_BCG_codes.
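For context, the oxygen abundances in item 6 can be converted to fractions of the solar value; a small sketch, assuming a solar reference of 12+log(O/H) = 8.69 (an Asplund-type scale adopted here for illustration, not necessarily the value used in the paper):

```python
# Convert 12+log(O/H) to an abundance relative to solar.
SOLAR_OH = 8.69   # assumed solar 12+log(O/H)

def z_rel_solar(oh_12log):
    """Oxygen abundance relative to the assumed solar value."""
    return 10 ** (oh_12log - SOLAR_OH)

z_bcg = z_rel_solar(8.0)    # median ISM value in the BCG: ~0.2 solar
z_icm = z_rel_solar(8.25)   # ICM value from Ehlert et al. (2011): ~0.36 solar
```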
Figure 1: HST composite RGB image of the M1931 BCG: the F160W image is shown in red, the F814W in green and the F390W in blue. The white contours correspond to the $\rm{H\alpha}$ flux intensity, as measured from MUSE. The cross shows the location of the AGN, as identified according to different diagnostic diagrams, see sect. 5.5. Figure 2: Graphical output of FADO. The upper panel shows the integrated spectrum of the M1931 BCG (orange), as recovered from MUSE. The best-fitting synthetic SED is shown in light-blue, composed of the stellar and nebular continuum emission (dark green and red, respectively). The lower panel displays the fits to the strongest emission lines, namely $\rm{[O\textsc{ii}]}\>\lambda 3727,\lambda 3729$, $\rm{H\beta}$, $\rm{[O\textsc{iii}]}\>\lambda 4959,5007$, $\rm{H\alpha}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$ and $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$. Figure 3: left: Spatially resolved $\rm{H\alpha}$ emission line map for the BCG of the M1931 galaxy cluster. The colour-bar shows the flux of the $\rm{H\alpha}$ line in units of $10^{-17}\rm{ergs\cdot s^{-1}\cdot cm^{-2}}$. right: $\rm{H\alpha}$ equivalent width map measured in $\AA$. The contours correspond to different levels of flux and EW. The white background in both plots corresponds to spaxels with $\rm{SNR_{H\alpha}<10}$. The cross shows the location of the AGN. Figure 4: Comparison between $\rm{H\alpha}$ and CO linear normalised flux. The left-hand panel displays the ratio between the $\rm{H\alpha}$ and CO(1-0) fluxes, while the right-hand panel shows the ratio between the fluxes of $\rm{H\alpha}$ and CO(3-2). The cross shows the location of the AGN. The ellipses in the lower right of both panels depict the beam sizes of the ALMA observations. Figure 5: left: Spatially resolved $\rm{H\alpha}$ radial velocity map for the BCG of the M1931 galaxy cluster.
The colour-bar displays the radial velocity of the $\rm{H\alpha}$ gas with respect to the BCG rest frame, measured in [km/s]. right: Spatially resolved $\rm{H\alpha}$ velocity dispersion map measured in [km/s]. The white background in both plots corresponds to spaxels with $\rm{SNR_{H\alpha}<10}$. The cross shows the location of the AGN. The contours show the $\rm{H\alpha}$ flux intensity. Figure 6: Difference between the kinematics of the ionised gas and those of the cold molecular gas. For better visualisation, we present zoomed-in plots corresponding to a spatial scale of 45 by 45 kpc. left-hand side: Difference between the radial velocity of the $\rm{H\alpha}$ gas and the velocity of the CO(1-0) gas (top) and difference between the velocity of the $\rm{H\alpha}$ gas and the velocity of the CO(3-2) gas (bottom). right-hand side: Ratio between the $\rm{H\alpha}$ velocity dispersion and the CO(1-0) velocity dispersion (top) and ratio between the velocity dispersion of the $\rm{H\alpha}$ gas and the velocity dispersion of the CO(3-2) gas (bottom). The cross in all diagrams shows the location of the AGN, as identified according to the different diagnostic diagrams. The ellipses in the lower right of all panels depict the beam sizes of the ALMA observations. Figure 7: Diagnostic diagrams to distinguish the ionisation mechanism of the nebular gas. first row-left: BPT diagram (Baldwin et al., 1981) for all spaxels of the MUSE data cube with SNR > 10 in the emission lines used for the diagnostic. first row-middle: BPT distribution on the sky showing the spaxels with Seyfert II emission in blue. first row-right: BPT distribution on the sky showing the spaxels which fall in the LINER region of the diagnostic diagram in green. second row-left: Diagnostic diagram of Veilleux & Osterbrock (1987) using the $\rm{[O\textsc{iii}]/{H\beta}}$ vs $\rm{[S\textsc{ii}]/{H\alpha}}$ emission line ratios.
second row-middle: Distribution on the sky showing the Seyfert II emission in blue. second row-right: Distribution on the sky showing the LINER emission in green. third row-left: $\rm{[O\textsc{iii}]/{H\beta}}$ vs $\rm{[O\textsc{i}]/{H\alpha}}$ diagnostic diagram of Veilleux & Osterbrock (1987). third row-middle: Distribution on the sky showing the Seyfert II emission in blue. third row-right: Distribution on the sky showing the LINER emission in green. In the latter two diagrams, each data point represents a spaxel of the MUSE cube with SNR > 10 in each emission line used for the diagnostic. The cross in the upper left corner of all three diagnostic diagrams shows the mean error of the flux measurements. The cross in the neighbouring maps shows the location of the AGN. Figure 8: Same as Figure 7, but showing in addition the predictions from the fully radiative shock models of Allen et al. (2008) computed with MAPPINGS V. We consider a pre-shock density of 1 $\rm{cm^{-3}}$, shock velocities ranging from 10-350 $\>\rm{km\leavevmode\nobreak\ s}^{-1}$, and different metallicities. The choice of these shock velocities is motivated by the observed velocity dispersion of the nebular gas. The red lines depict a grid with solar metallicity, the blue ones a grid with twice solar metallicity, the cyan lines one with metallicities from Dopita et al. (2005), the orange ones a grid with Large Magellanic Cloud metallicities and the purple ones a grid with Small Magellanic Cloud metallicities. The black data-points represent the spaxels of the MUSE cube with SNR > 10 in each emission line used for the diagnostic. The grey data-points represent the spaxels of the MUSE cube from which we have subtracted the contribution of star formation and AGN emission to the total luminosity of each emission line, after applying the decomposition method of Davies et al. (2017). These data-points are representative of pure LINER-like emission.
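The BPT classification underlying Figs. 7 and 8 amounts to comparing each spaxel's line ratios with the Kauffmann et al. (2003) and Kewley et al. (2001) demarcation curves. A minimal sketch of the [NII]-based diagram (the demarcation constants are the published ones, but this simplified classifier ignores the [SII] and [OI] diagrams also used in the paper):

```python
def bpt_class(x, y):
    """Classify a spaxel on the [NII]-BPT diagram.

    x = log10([NII]6584/Halpha), y = log10([OIII]5007/Hbeta).
    """
    # Below the Kauffmann et al. (2003) curve: pure star formation.
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.30:
        return "SF"
    # Between Kauffmann and the Kewley et al. (2001) maximum-starburst line.
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "composite"
    # Above the Kewley line: AGN or LINER-like excitation.
    return "AGN/LINER"
```

In practice this function would be applied per spaxel to the measured line-ratio maps, keeping only spaxels above the SNR cut.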
Figure 9: The upper three panels show the BPT diagram, while the lower three panels show the diagnostic diagram of Veilleux & Osterbrock (1987), with the line ratios extracted from the total emission line fluxes of the individual spaxels. The SF, AGN and LINER base spectra are depicted as the black square, triangle and circle, respectively. The data points are colour-coded according to the fraction of $\rm{H\alpha}$ emission attributable to SF (left-hand panels), AGN (middle panels) and "LINER" (right-hand panels), as computed with the spectral decomposition method described in Davies et al. (2017). Figure 10: Maps depicting the fractional contribution of SF (left), AGN (middle) and "LINER" (right) to the $\rm{H\alpha}$ (first row), $\rm{[O\textsc{iii}]}\>\lambda 5007$ (second row), $\rm{[N\textsc{ii}]}\>\lambda 6584$ (third row) and $\rm{[S\textsc{ii}]}$ (last row) emission lines, as calculated with the spectral decomposition method. The colour-bar limit displaying the fractional contribution of each ionising mechanism to the total luminosity of the emission lines was set to 0.8, rather than the nominal value of 1, for better visualisation. The cross in all diagrams shows the location of the AGN. Figure 11: top-left: Electron density vs electron temperature diagnostic diagram for the core of the M1931 BCG. For this plot, we have used the diagnostics $\rm{[N\textsc{ii}]}\>\lambda\>5755/6548$, $\rm{[N\textsc{ii}]}\>\lambda\>5755/6584$, $\rm{[S\textsc{ii}]}\>\lambda\>6731/6716$, $\rm{[Ar\textsc{iv}]}\>\lambda\>4740/4711$ and $\rm{[O\textsc{iii}]}\>\lambda\>4363/5007$. The intersection of the curves gives the best-fit electron temperature and density. top-right: Same as top-left, but for the $\rm{H\alpha}$ tail of the BCG. bottom-left: Spatially resolved electron temperature map for the BCG of the M1931 cluster.
The colour bar shows the temperature measured in [K], as computed using the PyNeb tool from the $\rm{[N\textsc{ii}]}\>\lambda 5755/6584$ emission lines. bottom-right: Electron density map as computed using the PyNeb tool from the $\rm{[S\textsc{ii}]}\>\lambda 6731/6716$ doublet. The colour-bar depicts the density in units of $\rm{cm^{-3}}$. The contours in both lower panels show the $\rm{H\alpha}$ flux intensity, while the cross shows the location of the AGN. Figure 12: left: Colour excess map for the M1931 BCG as computed from the Balmer decrement. right: Ionisation parameter map, as computed with the HII-CH-mistry tool (Pérez-Montero, 2014). The cross in all diagrams shows the location of the AGN. The contours show the $\rm{H\alpha}$ flux intensity. The white background corresponds to the spaxels with SNR < 10 in the emission lines of interest. Figure 13: Spatially resolved SFR map calculated from the extinction-corrected $\rm{H\alpha}$ emission line for each spaxel of the MUSE sub-cube. The colour-bar shows the SFR in units of $\rm{M_{\odot}/yr}$. The cross shows the location of the AGN, while the contours display the $\rm{H\alpha}$ flux intensity. The white background corresponds to the spaxels with $\rm{SNR_{H\alpha}<10}$. Figure 14: Gas-phase metallicity (O/H), as computed by the HII-CH-mistry tool. The colour-bar shows the oxygen abundance in units of 12+log(O/H). The cross shows the location of the AGN, while the contours display the $\rm{H\alpha}$ flux intensity. The white background corresponds to the spaxels with SNR < 10 in the emission lines of interest. Figure 15: Star formation history of the M1931 BCG. The upper panel shows the contribution of the individual SSPs in the best-fitting population vector to the monochromatic luminosity at 6150 $\AA$ as a function of age. The lower panel displays the mass contribution of the SSPs to the total mass of the system as a function of age.
The vertical arrow marks the age at which $50\%$ of the present-day stellar mass was in place. The colour coding in both panels displays the metallicities of the SSPs, whose values can be found above the upper plot. The vertical bars represent the $1\sigma$ uncertainties. The grey vertical lines connecting the two panels mark the ages of the SSPs. The light-blue shaded area in both panels shows an Akima-smoothed (Akima, 1970) version of the SSP contributions. Figure 16: left: Radial velocity of the stars with respect to the systemic velocity of the most central region of the system. right: Velocity dispersion of the stars in the BCG. The colour-bar in both plots depicts the radial velocity and velocity dispersion in units of [km/s]. Due to the low signal-to-noise in the stellar continuum, the data need binning, and we can therefore measure the stellar kinematics only in the core of the system. To better visualise the results, we present maps showing just the most central regions of the BCG, corresponding to a spatial scale of 45x45 kpc. The white background corresponds to the spaxels where $\rm{SNR_{stellar\>continuum}<10}$. The cross shows the location of the AGN. Figure 17: left: Difference between the radial velocities of the stars and the $\rm{H\alpha}$ gas in the BCG core. right: Ratio between the velocity dispersions of the stars and the $\rm{H\alpha}$ gas in the core of the system. To better visualise the results, we present maps showing just the most central regions of the BCG, corresponding to a spatial scale of 45x45 kpc. The white background corresponds to the spaxels where $\rm{SNR_{stellar\>continuum}<10}$. The cross shows the location of the AGN. ###### Acknowledgements. We would like to express our deep gratitude to the members of the observational extragalactic astrophysics group of the Department of Astrophysics, University of Vienna.
Special thanks to Christian Maier and Asmus Böhm for all the valuable discussions and pieces of advice! We would also like to express our gratitude to Maria Luísa Gomes Buzzo for all her help related to analysis of the MUSE data. We would especially like to thank the anonymous referee for providing constructive comments and help in improving the manuscript. This work was supported through FCT grants UID/FIS/04434/2019, UIDB/04434/2020, UIDP/04434/2020 and the project ”Identifying the Earliest Supermassive Black Holes with ALMA (IdEaS with ALMA)” (PTDC/FIS- AST/29245/2017). This research made use of the following PYTHON packages: Astropy (Robitaille & Tollerud, 2013), numpy (van der Walt et al., 2011), matplotlib (Hunter, 2007), MPDAF (Bacon et al., 2016), CMasher (van der Velden, 2020). ## References * Alarie et al. (2019) Alarie, A.; Morisset, C., 2019, RMxAA, 55, 377A * Allen et al. (2004) Allen S. W., Schmidt R. W., et al.,2004, MNRAS, 353, 457 * Allen et al. (2008) Allen, M., G., Groves, B., A., Dopita, M., 2008, ApJS, 178, 20A * Akima (1970) Akima, H. 1970, J. ACM (JACM), 17, 589 * Bacon et al. (2014) Bacon, R., Vernet, J., et al., 2014, MSNGR, 157,13 * Bacon et al. (2016) Bacon, R., Piqueras, L., et al., 2016,ascl:1611.003 * Baldwin et al. (1981) Baldwin, J. A., Phillips, M. M., & Terlevich, R., PASP, 93, 5 * Begelman et al. (1991) Begelman, M.C., Fabian, A. C., 1990, MNRAS, 244P, 26B * Bellstedt et al. (2016) Bellstedt, S., Lidman, C., Muzzin, A., et al., 2016, MNRAS, 460, 2862B * Binette et al. (1994) Binette et al. 1994, A&A 292, 13, * Bittner et al. (2019) Bittner, A.; Falcón-Barroso, J. et al., 2019, A&A,628A,117B * Breda & Papaderos (2018) Breda, I., Papaderos, P., 2018; A&A 614, A48 * Brocklehurst et al. (1971) Brocklehurst, M.,1971, MNRAS.153, 471 * Bruzual & Charlot (2003) Bruzual, G.; Charlot, S., 2003, MNRAS, 344, 1000B * Burke et al. (2015) Burke, C., Collins, C., 2013, MNRAS, 434, 2856 * Burke et al. 
(2015) Burke, C., Hilton, M., Collins, C., 2015, MNRAS, 449, 2353B * Byler et al. (2019) Byler, N. Dalcanton, J. et al., 2019, AJ, 158, 2B * Calzetti et al. (2001) Calzetti, D., 2001, PASP, 113, 1449C * Cappellari & Copin (2003) Cappellari, M., Copin, Y. 2003, MNRAS, 342, 345C * Cappellari & Emsellem (2004) Cappellari, M., Emsellem, E., 2004,PASP, 116, 138C * Cardelli et al. (1989) Cardelli, J., A., Clayton, G., C., Mathis, J., S., 1989, ApJ, 345, 245C * Casey (2012) Casey, C. M. 2012, MNRAS, 425, 3094 * Cerulo et al. (2019) Cerulo, P., Orellana, G. A., Covone, G., 2019, MNRAS, 487, 3759 * Chabrier (2003) Chabrier G., 2003, PASP, 115, 763 * Cid Fernandes et al. (2005) Cid Fernandes, R.; Mateus, A. Sodré, L., et al., 2005, MNRAS, 358, 363C * Collins et al. (2009) Collins C. A., Stott J. P., Hilton M., Kay S. T., et al., 2009, Nature, 458, 603 * Cresci et al. (2017) Cresci, G., Vanzi, L., Telles, E., et al. 2017, A&A, 604A, 101C * Davies et al. (2017) Davies, R. L.; Groves, B.; Kewley, L. et al., 2017, MNRAS, 470, 4974D * Delgado-Inglada et al. (2014) Delgado-Inglada, G., Morisset, C., Stasińska, G., 2014, MNRAS,440, 536D * De Lucia et al. (2007) De Lucia G., Blaizot J., 2007, MNRAS, 375, 2 * Díaz-García & Knapen (2020) Díaz-García, S.; Knapen, J. H., 2020, A&A, 635A, 197D * Donahue et al. (1991) Donahue, M., Voit, G. M., 1991, ApJ, 381, 361D * Voit and Donahue (1997) Voit G. M., Donahue M., 1997, ApJ, 486, 242 * Donahue et al. (2000) Donahue, M., Mack, J., Voit, G. M., et al., 2000, ApJ, 545, 670D * Donahue et al. (2011) Donahue, M., de Messières, G., E., O’Connell, R., W., Voit, G. M., et al., 2011, ApJ, 732, 40D * Donahue et al. (2015) Donahue, M., Connor, T., et al., 2015, ApJ, 805, 177 * Dopita et al. (2005) Dopita, M. A., Groves, B. A., Fischera, J., et al. 2005, ApJ, 619, 755 * Dopita et al. (2010) Donahue, M., Bruch, S., et al. 2010, ApJ,715,881D * Edwards et al. (2020) Edwards, L. O. V., Salinas, M., Stanley, S., Holguin W., et al. 
2020, MNRAS, 491, 2617E * Ehlert et al. (2011) Ehlert, S.; Allen, S. W., et al., 2011, MNRAS, 411, 1641E * Fabian & Nulsen (1977) Fabian A. C., Nulsen P. E. J., 1977, MNRAS, 180, 479 * Ferland (2013) Ferland G. J. et al., 2013, Rev. Mex. Astron. Astrofis., 49, 137 * Fogarty et al. (2015) Fogarty, K., Postman, M., et al., 2015, ApJ, 813, 117F * Fogarty et al. (2017) Fogarty, K., Postman, M., et al., 2017, ApJ, 846, 103 * Fogarty et al. (2019) Fogarty, K., Postman, M., et al. 2019, ApJ, 879, 103F * Gaspari et al. (2013) Gaspari M., Ruszkowski M., Oh S. P., 2013, MNRAS, 432, 3401 * Gaspari et al. (2018) Gaspari, M., McDonald, M., Hamer, S. L., et al. 2018, ApJ, 854, 167 * Gomes et al. (2016) Gomes, J. M.;, Papaderos, P., Kehrig, C., Vílchez, J. M., Lehnert, M. D., 2016, A&A, 588, A68 * Gomes & Papaderos (2017) Gomes, J. M.; Papaderos, P., 2017, A&A,603A, 63G * Girardi et al. (1996) Girardi, L., Bressan, A., Chiosi, C., Bertelli, G., Nasi, E. 1996, A&AS, 117, 113 * Gutkin et al. (2016) Gutkin, J., Charlot, S., Bruzual, G.,2016, MNRAS,462,1757G * Hamer et al. (2016) Hamer S. L., Edge A. C. et al. 2016, MNRAS 460, 1758, 1789 * Hunter (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90 * Iani et al. (2019) Iani, E., Rodighiero, G., Fritz, J., Cresci, G., et al. 2019, MNRAS, 487,5593I * Kauffmann et al. (2003) Kauffmann, G., Heckman, T. M., Tremonti. C. et al., 2003, MNRAS, 346, 1055 * Kennicutt (1998b) Kennicutt, R., C., Jr. 1998, ARA&A, 36, 189 * Kennicutt (1998a) Kennicutt, R., C., Jr.,1998,ApJ,498,541K * Kennicutt et al. (2012) Kennicutt, R. C., Evans, N. J. 2012, ARA&A, 50, 531 * Kewley et al. (2001a) Kewley, L.J. et al., 2001, ApJS, 132, 37 * Kewley et al. (2001b) Kewley, L. J.; Dopita, M. A et al., 2001, ApJ, 556, 121K * Kewley et al. (2013) Kewley, L. J., Dopita, M. A., Leitherer, C., et al. 2013, ApJ, 774, 100 * Kewley et al. (2019) Kewley, L., J., Nicholls, D.,C., Sutherland, R., S,. et al. 
2019, ARA&A, 57, 511K * Kirkpatrick & McNamara (2015) Kirkpatrick, C. C., McNamara, B. R., 2015, MNRAS, 452, 4361K * Lavoie et al. (2016) Lavoie, S., Willis, J. P., Democles, J., et al., 2016, MNRAS, 462, 4141 * Li et al. (2014b) Li, Y., Bryan, G. L., 2014b, ApJ, 789, 153 * Li et al. (2015) Li, Y., Bryan, G. L., Ruszkowski, M., et al., 2015, ApJ, 811, 73 * Li et al. (2017) Li, Y., Ruszkowski, M., Bryan, G. L., 2017, ApJ, 847, 106 * James et al. (2009) James, B. L., Tsamis, Y. G., Barlow, M. J., et al., 2009, MNRAS, 398, 2J * Lauer et al. (2014) Lauer, T. R., Postman, M., Strauss, M. A., et al., 2014, ApJ, 797, 82 * Loubser et al. (2009) Loubser, S. I., Sánchez-Blázquez, P., Sansom, A. E., Soechting, I. K., 2009, MNRAS, 398, 133 * Loubser et al. (2013) Loubser, S. I., Soechting, I. K., 2013, MNRAS, 431, 2933L * Luridiana et al. (2015) Luridiana, V., Morisset, C., Shaw, R. A., 2015, A&A, 573A, 42L * Maiolino & Mannucci (2019) Maiolino, R., Mannucci, F., 2019, A&ARv, 27, 3M * Maraston & Strömbäck (2011) Maraston, C., Strömbäck, G., 2011, MNRAS, 418, 2785 * McDonald et al. (2012a) McDonald, M., Veilleux, S., Rupke, D. S. N., 2012, ApJ, 746, 153 * McDonald et al. (2012b) McDonald, M., Bayliss, M., Benson, B. A., et al., 2012, Natur, 488, 349M * McNamara & Nulsen (2007) McNamara, B. R., Nulsen, P. E. J., 2007, ARA&A, 45, 117 * McNamara et al. (2014) McNamara, B. R., Russell, H. R., Nulsen, P. E. J., et al., 2014, ApJ, 785, 44M * Merten et al. (2015) Merten, J., Meneghetti, M., Postman, M., et al., 2015, ApJ, 806, 4M * Mollá et al. (2009) Mollá, M., García-Vargas, M. L., Bressan, A., 2009, MNRAS, 398, 451 * O’Dea et al. (2008) O’Dea, C. P., Baum, S. A., Privon, G., et al., 2008, ApJ, 681, 1035 * Olivares et al. (2019) Olivares, V., Salome, P., Combes, F., Hamer, S., et al., 2019, A&A, 631A, 22O * Olsson et al. (2010) Olsson, E., Aalto, S., Thomasson, M., Beswick, R., 2010, A&A, 513, A11 * Ostriker & Hausman (1977) Ostriker, J. P., Hausman, M. 
A., 1977, ApJ, 217, L125 * Osterbrock and Ferland (2006) Osterbrock, D. E., Ferland, G. J., 2006, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei * Papaderos et al. (2013) Papaderos, P., Gomes, J. M., Vílchez, J. M., Kehrig, C., et al., 2013, A&A, 555, L1 * Prasad et al. (2015) Prasad, D., Sharma, P., Babul, A., 2015, ApJ, 811, 108 * Pérez-Montero (2014) Pérez-Montero, E., et al., 2014, MNRAS, 441, 2663 * Pérez-Montero (2017) Pérez-Montero, E., 2017, PASP, 129d3001P * Pérez-Montero et al. (2019) Pérez-Montero, E., et al., 2019, MNRAS, 483, 3322 * Pettini & Pagel (2004) Pettini, M., Pagel, B. E. J., 2004, MNRAS, 348L, 59P * Postman et al. (2012) Postman, M., Coe, M., Benitez, N., et al., 2012, ApJS, 199, 25 * Proxauf et al. (2014) Proxauf, B., Öttl, S., Kimeswenger, S., 2014, A&A, 561A, 10P * Prugniel et al. (2007) Prugniel, P., Soubiran, C., Koleva, M., Le Borgne, D., 2007, ArXiv Astrophysics e-prints * Rines et al. (2007) Rines, K., Finn, R., et al., 2007, ApJ, 665L, 9R * Rodríguez-Merino et al. (2005) Rodríguez-Merino, L. H., Chavez, M., Bertone, E., Buzzoni, A., 2005, ApJ, 626, 411 * Rosati et al. (2014) Rosati, P., Balestra, I., Grillo, C., et al., 2014, The Messenger, 158, 48 * Salpeter (1955) Salpeter, E. E., 1955, ApJ, 121, 161 * Santos et al. (2016) Santos, J. S., Balestra, I., et al., 2016, MNRAS, 456L, 99 * Sarzi et al. (2006) Sarzi, M., Falcón-Barroso, J., Davies, R. L., 2006, MNRAS, 366, 1151S * Schawinski et al. (2007) Schawinski, K., Thomas, D., 2007, MNRAS, 382, 1415 * Sharp & Bland-Hawthorn (2010) Sharp, R., Bland-Hawthorn, J., 2010, ApJ, 711, 818 * Shields (1992) Shields, J. C., 1992, ApJL, 399, L27 * Soto et al. (2016) Soto, K. T., Lilly, S. J., et al., 2016, MNRAS, 458, 3210 * Sparks et al. (2012) Sparks, W. B., Pringle, J. E., Carswell, R. F., Donahue, M., et al., 2012, ApJ, 750L, 5S * Storn & Price (1997) Storn, R., Price, K., 1997, Journal of Global Optimization, 11, 341–359 * Stasinska et al. (2008) Stasinska, G., et al., 2008, MNRAS, 391, L29 * Tremblay et al. 
(2012) Tremblay, G. R., O’Dea, C. P., et al., 2012, MNRAS, 424, 1042T * Tremblay et al. (2015) Tremblay, G. R., O’Dea, C. P., Baum, S. A., et al., 2015, MNRAS, 451, 3768T * Tremblay et al. (2018) Tremblay, G. R., Combes, F., Oonk, J. B. R., et al., 2018, ApJ, 865, 13T * Robitaille & Tollerud (2013) Robitaille, T. P., Tollerud, E. J., et al., 2013, A&A, 558, A33 * Thomas & Dopita (2018) Thomas, A. D., Dopita, M., et al., 2018, ApJ, 856, 89T * Umetsu et al. (2016) Umetsu, K., Zitrin, A., Gruen, D., et al., 2016, ApJ, 821, 116 * Vantyghem et al. (2016) Vantyghem, A. N., McNamara, B. R., Russell, H. R., et al., 2016, ApJ, 832, 148 * Vale Asari et al. (2016) Vale Asari, N., Stasinska, G., Morisset, C., 2016, MNRAS, 460, 1739 * Veilleux & Osterbrock (1987) Veilleux, S., Osterbrock, D. E., 1987, ApJS, 63, 295 * Vikhlinin et al. (2005) Vikhlinin, A., Markevitch, M., Murray, S. S., Jones, C., Forman, W., Van Speybroeck, L., 2005, ApJ, 628, 655 * van der Velden (2020) van der Velden, E., 2020, Journal of Open Source Software, 5(46), 2004 * van der Walt et al. (2011) van der Walt, S., Colbert, S. C., Varoquaux, G., 2011, Computing in Science Engineering, 13, 22 * Voit & Donahue (2011) Voit, G. M., Donahue, M., 2011, ApJ, 738L, 24V * Voit & Donahue (2015) Voit, G. M., Donahue, M., et al., 2015, ApJ, 799L, 1V * Voit et al. (2017) Voit, G. M., Meece, G., Li, Y., et al., 2017, ApJ, 845, 80 * Webb et al. (2015) Webb, T. M. A., Muzzin, A., Noble, A., et al., 2015, ApJ, 814, 96W * Weilbacher et al. (2020) Weilbacher, P. M., Palsa, R., Streicher, O., et al., 2020, arXiv:2006.08638 * Zhang et al. (2016) Zhang, et al., 2016, MNRAS, 466, 3217 ## Appendix A Emission line flux maps Figure 18 displays the flux maps in the upper two rows for the $\rm{[O\textsc{ii}]}\>\lambda 3727$, $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{H\beta}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$, $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$ emission lines in units of $10^{-17}\rm{ergs\cdot s^{-1}\cdot cm^{-2}}$. 
The lower two rows show the EW distribution for the same emission lines, in units of $[\AA]$. We observe similar distributions of fluxes and EWs for all the strong emission lines in the optical spectrum of the M1931 BCG. Figure 18: The upper two rows display the flux maps for the $\rm{[O\textsc{ii}]}\>\lambda 3727$, $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{H\beta}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$, $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$ gas in the M1931 BCG, while the lower two rows show the equivalent widths in units of $[\AA]$ for the same set of emission lines. The white background in all plots corresponds to the spaxels with a SNR < 10. The cross shows the location of the AGN. The contours show different levels of flux intensity and EW. ## Appendix B Emission line velocity maps Figure 19 shows the radial velocity and velocity dispersion of the $\rm{[O\textsc{ii}]}\>\lambda 3727$, $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{H\beta}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$, $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$ gas in the BCG of M1931. We observe very similar kinematics for all the strong emission lines in the galaxy spectrum. Figure 19: Kinematics of the warm ionised gas. The panels from the upper two rows display the spatially resolved radial velocity maps for the $\rm{[O\textsc{ii}]}\>\lambda 3727$, $\rm{[O\textsc{iii}]}\>\lambda 5007$, $\rm{H\beta}$, $\rm{[N\textsc{ii}]}\>\lambda 6584$, $\rm{[S\textsc{ii}]}\>\lambda 6718,6732$ gas in the BCG of the M1931 galaxy cluster. The colour-bar displays the radial velocity of the gas with respect to the BCG rest frame, measured in [km/s]. The panels from the lower two rows show the spatially resolved velocity dispersion maps for the same emission lines, measured in [km/s]. The white background in all plots corresponds to the spaxels with a SNR < 10. The cross shows the location of the AGN. The contours show the $\rm{H\alpha}$ flux intensity. 
## Appendix C Physical properties of integrated regions Figure 20 shows the white light image of the MUSE sub-cube centred on the BCG of the M1931 galaxy cluster. The contours show the $\rm{H\alpha}$ flux intensity. The squares encompass the different CO(1-0) source regions, while the ellipses show the cold dust regions, as defined by Fogarty et al. (2019) (see their Figs. 4 and 6). We have extracted these 8 regions as single integrated spectra and computed the physical properties of the ionised gas, which are listed in Table 1. The last row of the table contains the physical properties of the ionised gas, as derived from the integrated spectrum of the 90x90 spaxel MUSE sub-cube centred on the BCG. This table lists the flux of the $\rm{H\alpha}$ line in units of [$10^{-15}\rm{ergs\cdot s^{-1}\cdot cm^{-2}}$], the systemic velocity and velocity dispersion of the $\rm{H\alpha}$ gas in [km/s], the electron density $\rm{n_{e}}$ in $\rm{[cm^{-3}]}$, the electron temperature $\rm{T_{e}}$ in [K], the colour excess E(B-V), the ionisation parameter log(U), the SFR in units of $\rm{[M_{\odot}/yr]}$, and the oxygen abundance 12+log(O/H). The SFR is not corrected for the contribution of LINER-like and AGN emission to the luminosity of the $\rm{H\alpha}$ line, as the Davies et al. (2017) decomposition method works only for a spaxel-by-spaxel analysis and not for integrated regions. The methods and calibrations used to determine the properties of the ionised gas are the same ones as used for the spaxel-by-spaxel analysis. It is worth mentioning that we could not recover the electron density and temperature using the PyNeb tool for regions B, 2 and 3 and for the whole sub-cube, due to the weakness of the temperature-sensitive emission lines, and also because in these regions we probably observe the luminosity-weighted average of emission lines arising from different volumes with different physical conditions. 
Figure 20: White light image of the MUSE sub-cube centred on the BCG of the M1931 galaxy cluster. The contours show the $\rm{H\alpha}$ flux intensity. The squares encompass the different CO(1-0) source regions, while the ellipses show the cold dust regions, as defined by Fogarty et al. (2019). We have extracted these 8 regions as single spectra and computed the most important physical properties of the ionised gas, which are listed in Table 1. Table 1: This table contains the most important physical properties of the ionised gas, as computed for specific regions of the system. Regions A-E correspond to the CO(1-0) source regions, while regions 1-3 correspond to the cold dust regions, as defined by Fogarty et al. (2019). We have extracted from the MUSE cube integrated spectra corresponding to these 8 regions, and we have computed the systemic velocity and velocity dispersion of the $\rm{H\alpha}$ gas, the electron density $\rm{n_{e}}$, the electron temperature $\rm{T_{e}}$, the colour excess E(B-V), the ionisation parameter log(U), the SFR $\rm{[M_{\odot}/yr]}$ and the oxygen abundance 12+log(O/H). The last row of the table refers to the integrated spectrum of the whole (90x90 spaxels) sub-cube, displayed in Fig. 
20.

Region | $\rm{H\alpha}$ flux $[10^{-15}\rm{ergs\cdot s^{-1}\cdot cm^{-2}}]$ | $\rm{vel_{H\alpha}}$ [km/s] | $\rm{\sigma_{H\alpha}}$ [km/s] | $\rm{n_{e}\>[cm^{-3}]}$ | $\rm{T_{e}}$ [K] | E(B-V) | log(U) | SFR $\rm{[M_{\odot}/yr]}$ | $\rm{12+log(O/H)}$
---|---|---|---|---|---|---|---|---|---
region A | $1.07\pm 0.0006$ | $-109.3\pm 21.9$ | $173.1\pm 3.2$ | $29.16$ | $9154.4$ | $0.261\pm 0.00009$ | $-2.946\pm 0.25$ | $6.4\pm 0.0015$ | $7.87\pm 0.26$
region B | $0.26\pm 0.0006$ | $-160.6\pm 36.9$ | $183.4\pm 3.0$ | $-$ | $-$ | $0.147\pm 0.00032$ | $-2.91\pm 0.27$ | $1.23\pm 0.0011$ | $7.84\pm 0.26$
region C | $0.74\pm 0.00004$ | $-142.3\pm 59.7$ | $158.1\pm 3.5$ | $354.8$ | $11704.02$ | $0.248\pm 0.00006$ | $-2.87\pm 0.2$ | $4.28\pm 0.0008$ | $7.79\pm 0.23$
region D | $2.91\pm 0.00007$ | $-89.26\pm 24.66$ | $187.6\pm 2.9$ | $785.96$ | $8350.6$ | $0.50\pm 0.000035$ | $-2.94\pm 0.19$ | $29.72\pm 0.0028$ | $7.835\pm 0.139$
region E | $0.94\pm 0.00008$ | $-145.47\pm 88.0$ | $207.7\pm 2.6$ | $561.18$ | $9114.7$ | $0.46\pm 0.000141$ | $-2.96\pm 0.23$ | $8.72\pm 0.0032$ | $7.88\pm 0.203$
region 1 | $3.02\pm 0.00010$ | $370.43\pm 65.7$ | $187.4\pm 2.9$ | $30.5$ | $10125.2$ | $0.152\pm 0.00018$ | $-3.08\pm 0.3$ | $14.08\pm 0.006$ | $7.89\pm 0.17$
region 2 | $1.32\pm 0.00009$ | $333\pm 65.7$ | $250.9\pm 2.2$ | $-$ | $-$ | $0.087\pm 0.00035$ | $-3.07\pm 0.31$ | $5.34\pm 0.004$ | $8.37\pm 0.0$
region 3 | $0.24\pm 0.00010$ | $270.2\pm 34.42$ | $251.4\pm 2.2$ | $-$ | $-$ | $0.085\pm 0.0009$ | $-2.90\pm 0.2$ | $0.67\pm 0.0015$ | $7.71\pm 0.13$
whole sub-cube | $26.32\pm 0.00046$ | $-78.7\pm 4.6$ | $261.5\pm 2.1$ | $-$ | $-$ | $0.29\pm 0.0009$ | $-3.24\pm 0.12$ | $167.3\pm 0.0032$ | $8.4\pm 0.13$
# Short-term prediction of Time Series based on bounding techniques Pedro Cadahía<EMAIL_ADDRESS>José M. Bravo Escuela Técnica Superior de Ingeniería, Universidad de Huelva, Carretera Huelva - Palos de la Frontera s/n. 21819. La Rábida - Palos de la Frontera. Huelva. Spain ###### Abstract In this paper the prediction problem in the time series framework is revisited using a new non-parametric approach. Under this approach, the prediction is obtained as a weighted sum of past observed data. The weights are obtained by solving a constrained linear optimization problem that minimizes an outer bound of the prediction error. The novelty is to consider both deterministic and stochastic assumptions in order to obtain the upper bound of the prediction error; a tuning parameter is used to balance these deterministic and stochastic assumptions so as to improve the predictor performance. A benchmark is included to illustrate that the proposed predictor obtains suitable results in a prediction scheme and can be an interesting alternative to classical non-parametric methods. Moreover, it is shown how this model can outperform pre-existing ones in short-term forecasting. ###### keywords: Nonparametric methods , Nonlinear models , Optimization , Time series , Univariate predicting method ## 1 Introduction The purpose of this paper is to provide a new model for time series built on the observed past values of the series, by means of a non-parametric approach. It is a well-known fact that in parametric time series analysis the relationship between observed past values of the time series and the prediction is defined by specifying a functional form and a fixed finite number of parameters. Widely studied parametric options are auto-regressive (AR) models, moving average (MA) models, and different combinations such as ARMA or ARIMA models [1, 2]. 
In nonlinear time series, some common parametric structures have been studied; the threshold auto-regressive (TAR) models [3], the exponential auto-regressive (EXPAR) model and smooth-transition auto-regressive (STAR) models are some examples [4, 5]. The performance of a parametric predictor is a consequence of the a priori functional form chosen. By contrast, in non-parametric approaches a more flexible class of functions is considered. Non-parametric methods avoid the choice of a specific functional form. Collected data provides the information to obtain a new prediction. The price to pay is the ’curse of dimensionality’, that is, a possible poor performance in high-dimensional prediction problems. The local conditional mean or median method provides a prediction using the mean or the median of a neighborhood of the point of interest [6]. The Nadaraya-Watson estimator averages past observations with a kernel function to obtain a prediction [7, 8]. Local linear or polynomial functions of past observations can be used to approximate nonlinear relationships [9, 10]. Semi-parametric models such as nonlinear additive auto-regressive (NAAR) models or functional coefficient auto-regressive (FAR) models have been proposed too [11, 12]. Extensive reviews of non-parametric methods applied to time series prediction are available [13, 14, 15, 16]. In this paper a new non-parametric prediction method is proposed. The prediction is obtained as a weighted sum of past observations. An upper bound of the prediction error is computed under some deterministic and stochastic assumptions. A constrained optimization problem is formulated to minimize the upper bound of the prediction error and to obtain the set of optimal weights used to compute the prediction. The optimization problem includes a parameter to balance the deterministic and stochastic assumptions. This is the main novelty of the proposed method. 
This parameter can be tuned with training data and a cross-validation scheme to improve the predictor performance [17]. The proposed predictor provides a general framework that encompasses some relevant non-parametric predictors such as the Nadaraya-Watson predictor [7, 8] or predictors based on local linear regression [10]; these models have been widely used in the literature [18, 19]. The paper is organized as follows. In Section 2, the problem formulation is addressed. The deterministic and stochastic assumptions are presented in Section 3. The new predictor is proposed in Section 4. Benchmark results are illustrated in Section 5. Finally, Section 6 reports some conclusions. ## 2 Formulation Consider a discrete111It is assumed a discrete version of data. time series process $\\{Z_{t}\\}$ with $t\in\\{0,\pm 1,\pm 2,\ldots\\}$. At time instant $k$ it is assumed that past data $\\{Z_{t}\\}$ with $t\in\\{k,k-1,k-2,...\\}$ has been observed, and the goal is to provide a forecast of $Z_{k+1}$. Once detrending is applied to the time series,222It should be noted that, in coherence with the prediction system and in order to estimate $\mu_{k+1}$, only the past observations can be used, independently of the detrending method used. the time series is now the series $\\{y_{t}\\}$ with $t\in\\{k,k-1,...\\}$, where $Z_{t}=y_{t}+\mu_{t}$, $\mu_{t}$ being the trend component and $y_{k+1}$ the detrended future time series value. It is also denoted by $\\{z_{j}\\}$ with ${j=0,1,...,k}$ the set of vectors consisting of the observed past values of the time series, that is, $z_{j}=[y_{j},y_{j-1},\ldots,y_{j-p+1}]^{T}$. Henceforth these $p$-dimensional vectors will be called embedding vectors. This set of data is used to forecast future values of the time series. It is necessary to clarify this point in order to make precise the sense of the parametric and non-parametric models used in this article. 
A parametric approach is characterized by the use of the training set for estimating the parameters of the model; once this inference is done, the data set is not used again. The non-parametric approach considered in this work is a local approach in which each forecast is obtained by using all the available data set but selecting a neighborhood of the point of interest. In this sense, it is assumed that the time series can be generated by an unknown local linear model. ###### Assumption 1 Consider the forecast of $y$ modeled as: $y_{k+1}=r(z_{k})^{T}\Phi_{k}+e_{k}$ (1) where the existence of an unknown vector of parameters $\Phi_{k}\in\mathcal{R}^{n}$ is assumed, together with a known function $r(\cdot)$ evaluated at the embedding set and an unknown error term $e_{k}$.333This modeling is flexible enough to admit alternative assumptions about the error term. As discussed later, the model is presented using both deterministic and stochastic bounds for the error term $e_{k}$. In order to complete the presentation of the model, the so-called regressor generator function $r(\cdot)$ should be discussed in more detail. This function transforms the original values into vectors of dimension $n_{r}$ by means of the vectors belonging to the embedding set. A formal definition of this regressor generator function is as follows. ###### Definition 1 (Regressor generator function) The function $r(\cdot):\mathcal{R}^{p}\rightarrow\mathcal{R}^{n_{r}}$ specifies the regressor vector components. This function admits any kind of auto-regressive representation, nonlinear expressions of past components and different functional forms for decomposing the different components of the time series.444For instance, suppose a set $z_{k}=[y_{k},y_{k-1},y_{k-2}]$. Then $r(z_{k})$ could be the function $r(z_{k})=z_{k}$, that is, an auto-regressive model. 
There also exist alternative configurations, such as a nonlinear auto-regressive model $r(z_{k})=[y_{k}^{2},y_{k-1},y_{k-2},y_{k}\cdot y_{k-2}]$ or any possible combination. ###### Definition 2 (Linear Prediction) For an instant $k$, a forecast of $y_{k+1}\in\mathbb{R}$ can be derived through a linear combination of past data, that is: $\begin{array}[]{cll}\hat{y}_{k+1}(\Psi)&=&b_{Y}^{T}\Psi\\\ &=&\displaystyle\sum\limits_{j=1}^{v}\Psi_{j}y_{j}\end{array}$ (2) where $1\leq v\leq k$, $\Psi\in\mathbb{R}^{v}$ is a weight vector and $b_{Y}=[y_{1},\ldots,y_{v}]^{T}$. When $v=k$, all data is used to forecast $y_{k+1}$. The forecast error is then the difference between $y_{k+1}$ and the linear prediction $\hat{y}_{k+1}(\Psi)$. ###### Definition 3 (Prediction error) The prediction error $\hat{e}_{k}(\Psi)$ at time instant $k$ is defined as: $\hat{e}_{k}(\Psi)=y_{k+1}-\hat{y}_{k+1}(\Psi).$ (3) Thus, the key question is how to obtain not only the weight vector $\Psi$ but also an outer bound of the prediction error. This outer bound is estimated by using the assumed relationship between $z_{j-1}$ and $y_{j}$, with $j=1,2,\ldots,k$, in expression (1). Then, a set of past components $z_{j}$ with $j=0,1,...,k$ should be available. Section 3 formulates these key ideas. ## 3 Assumptions In this section the assumptions are based on some local affine approximations. In order to construct the proposed predictor, the definition of the approximation error is used; this is the error resulting from using the vectors $r(z_{j-1})$ and $\Phi_{k}$ to infer $y_{j}$. 555The reader should note that the point is to relate the k-th prediction error $e_{k}$ and the prediction errors generated by using the k-th vector of unknown parameters $\Phi_{k}$ with the i-th regressors $r(z_{i})$, with $i=0,\ldots,k-1$. 
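The embedding vectors of Section 2 and the weighted-sum prediction of Definition 2 can be illustrated with a short Python sketch. This is a minimal sketch under our own conventions: the function names (`embedding_vectors`, `r_autoregressive`, `linear_prediction`) and the exact window indexing are ours, not the paper's.

```python
import numpy as np

def embedding_vectors(y, p):
    """Build the embedding vectors z_j = [y_j, y_{j-1}, ..., y_{j-p+1}]
    for every index j where a full window of length p exists."""
    return [y[j - p + 1:j + 1][::-1] for j in range(p - 1, len(y))]

def r_autoregressive(z):
    """A simple choice of regressor generator: r(z) = z (an AR model)."""
    return np.asarray(z, dtype=float)

def linear_prediction(psi, b_Y):
    """Prediction as a weighted sum of past observations (Definition 2)."""
    return float(np.dot(psi, b_Y))

# Example series and its embedding set for p = 3.
y = np.array([0.1, 0.4, 0.2, 0.5, 0.3, 0.6])
Z = embedding_vectors(y, p=3)   # Z[0] = [y_2, y_1, y_0]
```

With this convention, the rows of the matrix $A$ in (6) are simply `r_autoregressive(z)` evaluated at each embedding vector.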
###### Definition 4 (Approximation error) For a vector $\Phi_{k}$, the approximation error $e_{j-1}$ for the pair $(z_{j-1},y_{j})$, with $j=1,2,...,k$, is defined as: $e_{j-1}=e_{j-1}(\Phi_{k})=y_{j}-r(z_{j-1})^{T}\Phi_{k}.$ (4) From now on the dependency of $e_{j-1}(\Phi_{k})$ on $\Phi_{k}$ is omitted. It should be noted that the value of $\Phi_{k}$ is unknown, and that the prediction error $\hat{e}_{k}(\Psi)$ depends on the selected vector $\Psi$. Theorem 1 proposes an expression to characterize the prediction error $\hat{e}_{k}(\Psi)$ as a function of the vector $\Psi$ and the approximation errors $e_{j}$ defined above. ###### Theorem 1 For any vector $\Psi\in\mathbb{R}^{v}$ such that $\displaystyle\sum\limits_{j=1}^{v}\Psi_{j}r(z_{j-1})=r(z_{k}),$ (5) the prediction error $\hat{e}_{k}(\Psi)=y_{k+1}-\hat{y}_{k+1}(\Psi)$ can be written as a linear combination of the approximation errors $e_{j}$, that is, $\hat{e}_{k}(\Psi)=-\displaystyle\sum\limits_{j=1}^{v}\Psi_{j}e_{j-1}+e_{k}.$ Note that $\Psi_{j}$ refers to the j-$th$ component of vector $\Psi$. A proof of the theorem can be found in the Appendix, Section 7.1. In matrix form, expression (5) is equivalent to $\Psi\in\\{\Psi\;:\;A^{T}\Psi=r(z_{k})\\}$ where matrix $A$ is: $A^{T}=\left[\begin{array}[]{ccccccc}r(z_{0})&r(z_{1})&...&r(z_{v-1})\\\ \end{array}\right].$ (6) Knowing the vector $\Phi_{k}$ would be necessary to obtain an error value $e_{j-1}$. Alternatively, other properties of $e_{j-1}$ can be assumed instead. Both deterministic and stochastic options are available in the literature. In the deterministic view, an upper bound of $|e_{j-1}|$ is considered. This idea is discussed in Section 3.1. ### 3.1 Deterministic error In bounded-error methods [20], a parametric model and an unknown-but-bounded error are considered. 
An upper limit of this error is needed to estimate a set of consistent parameters. Similar assumptions are made in this work in order to develop a predictor under deterministic assumptions. ###### Assumption 2 Constants $\sigma,L\geq 0$ are given such that the approximation errors $e_{j-1}$ and $e_{k}$ are bounded by the expressions $|e_{j-1}|\leq\sigma+L||z_{j-1}-z_{k}||$ (7) with $j=1,...,k$ and $|e_{k}|\leq\sigma$ (8) where $||\cdot||$ is a norm. Assumption 2 has been broadly used in the context of bounded-error system identification [20]. Note that $\sigma$ is the tuning parameter that sets the minimum level of noise considered and $L$ is the tuning parameter for the uncertainty due to the local affine approximation. ###### Remark 1 Historical data can be used to estimate approximate values of $\sigma$ and $L$ when no prior knowledge of these constants is available. In [21] a method based on bounded-error and non-falsified data is provided. ###### Lemma 1 Under Assumptions 1 and 2, for any $\Psi$ such that $A^{T}\Psi=r(z_{k})$, the prediction error $\hat{e}_{k}(\Psi)=y_{k+1}-\hat{y}_{k+1}(\Psi)$ is bounded by: $|\hat{e}_{k}(\Psi)|\leq\displaystyle\sum\limits_{j=1}^{v}|\Psi_{j}|(\sigma+L||z_{j-1}-z_{k}||)+\sigma.$ (9) Proof. By a straightforward application of Theorem 1 and the bound $|e_{j-1}|\leq\sigma+L||z_{j-1}-z_{k}||$, expression (9) is obtained. _QED_ At this point the question of how to obtain the vector $\Psi$ arises. A sensible option is to use the vector that minimizes the upper bound of $|\hat{e}_{k}(\Psi)|$ given by expression (9). 
###### Definition 5 (Deterministic predictor) The deterministic prediction $\hat{y}_{k+1}(\Psi^{D})$ is defined by $\hat{y}_{k+1}(\Psi^{D})=\displaystyle\sum\limits_{j=1}^{v}\Psi^{D}_{j}y_{j},$ where vector $\Psi^{D}$ solves the following constrained linear optimization problem: $\begin{array}[]{ccc}\Psi^{D}=&arg\min\limits_{\Psi}&||W_{k}\Psi||_{1}\\\ &s.t.&A^{T}\Psi=r(z_{k})\\\ \end{array}$ (10) where $W_{k}$ is a diagonal matrix with diagonal entries $w^{k}_{j,j}=\sigma+L||z_{j-1}-z_{k}||$ with $j=1,...,v$. Thus, the vector $\Psi^{D}$ minimizes an upper bound of the absolute value of the prediction error. The notation $\Psi^{D}$ refers to the deterministic nature of the estimate. Expression (10) uses the $L_{1}$-norm to obtain the solution vector $\Psi^{D}$. In this case, $\Psi^{D}$ is sparse, that is, most of the components $\Psi^{D}_{j}$ of vector $\Psi^{D}$ are zero. As $\Psi^{D}$ is a sparse vector, it follows from Definition 5 that $\hat{y}_{k+1}(\Psi^{D})$ uses a relatively small number of measurements $y_{j}$. ### 3.2 Stochastic error The stochastic view considers the approximation error $e_{j}$ as a random variable, so some assumptions are made about the mean and the variance of $e_{j}$; specifically, assumptions on the size of the variance of $e_{j}$. ###### Assumption 3 The independent random variables $e_{j-1}$ (approximation error) and $e_{k}$ (error term) have zero mean and variances bounded by $var(e_{j-1})\leq\sigma+L||z_{j-1}-z_{k}||$ and $var(e_{k})\leq\sigma$ respectively. Positive values of the constants $\sigma$ and $L$ are taken as prior knowledge. As indicated in Remark 1, if no previous knowledge of the constants $\sigma$ and $L$ is available, historical data may be used to obtain an estimate. The variance of the error $e_{j-1}$ consists of a minimum value defined by $\sigma$ and a term depending on the local approximation, i.e. $||z_{j-1}-z_{k}||$. 
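Problem (10) above is an $L_{1}$-minimization with linear equality constraints, which can be recast as a linear program in the standard way, introducing auxiliary variables $t\geq|W_{k}\Psi|$ component-wise. A minimal sketch using SciPy follows; the helper name `deterministic_weights` is ours, not the paper's, and the reformulation is the textbook one rather than the authors' own solver.

```python
import numpy as np
from scipy.optimize import linprog

def deterministic_weights(A, w, r_zk):
    """Solve problem (10): min ||W Psi||_1 s.t. A^T Psi = r(z_k).
    A: (v, n_r) matrix whose rows are r(z_{j-1}); w: diagonal of W_k."""
    v = A.shape[0]
    # Decision variables x = [Psi (v), t (v)]; minimize sum(t).
    c = np.concatenate([np.zeros(v), np.ones(v)])
    W = np.diag(w)
    # |W Psi| <= t  <=>  W Psi - t <= 0  and  -W Psi - t <= 0.
    A_ub = np.block([[W, -np.eye(v)], [-W, -np.eye(v)]])
    b_ub = np.zeros(2 * v)
    # Equality constraint A^T Psi = r(z_k), padded with zeros for t.
    A_eq = np.hstack([A.T, np.zeros((A.shape[1], v))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=r_zk,
                  bounds=[(None, None)] * v + [(0, None)] * v,
                  method="highs")
    return res.x[:v]
```

As a sanity check, with $r(z_{k})=1$ (so the equality constraint is $\sum_{j}\Psi_{j}=1$) the optimum places all the weight on the sample with the smallest $w^{k}_{j,j}$, which matches the sparsity discussed above.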
Since $e_{j-1}$ and $e_{k}$ are random variables, $\hat{e}_{k}(\Psi)$ is also random and therefore other properties can be derived. ###### Assumption 4 Under the previous Assumptions 1 and 3, for any $\Psi$ such that $A^{T}\Psi=r(z_{k})$, the prediction error $\hat{e}_{k}(\Psi)=y_{k+1}-\hat{y}_{k+1}(\Psi)$ is a random variable with zero mean, and its variance satisfies: $\begin{array}[]{cll}var(\hat{e}_{k}(\Psi))&=&\displaystyle\sum\limits_{j=1}^{v}\Psi_{j}^{2}var(e_{j-1})+var(e_{k})\\\ &\leq&\displaystyle\sum\limits_{j=1}^{v}\Psi_{j}^{2}(\sigma+L||z_{j-1}-z_{k}||)+\sigma.\\\ \end{array}$ (11) At this point, it is possible to formulate a predictor that minimizes the outer bound of the prediction error variance. ###### Definition 6 (Stochastic prediction) The stochastic prediction $\hat{y}_{k+1}(\Psi^{S})$ is defined by: $\hat{y}_{k+1}(\Psi^{S})=\displaystyle\sum\limits_{j=1}^{v}\Psi^{S}_{j}y_{j},$ where $\Psi^{S}$ is the vector that solves the following constrained optimization problem: $\begin{array}[]{ccc}\Psi^{S}=&arg\min\limits_{\Psi}&\Psi^{T}W_{k}\Psi\\\ &s.t.&A^{T}\Psi=r(z_{k}).\\\ \end{array}$ (12) The explicit solution of this optimization problem is: $\Psi^{S}=W_{k}^{-1}A(A^{T}W_{k}^{-1}A)^{-1}r(z_{k}).$ (13) In the same way, the notation $\Psi^{S}$ highlights the stochastic assumptions considered to obtain the estimate. 
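The closed form (13) is a few lines of linear algebra. A minimal sketch, exploiting the diagonal structure of $W_{k}$ (the helper name `stochastic_weights` is ours):

```python
import numpy as np

def stochastic_weights(A, w, r_zk):
    """Closed-form solution (13): Psi_S = W^{-1} A (A^T W^{-1} A)^{-1} r(z_k).
    A: (v, n_r) matrix with rows r(z_{j-1}); w: diagonal entries of W_k."""
    Winv_A = A / w[:, None]          # W_k^{-1} A, using the diagonal structure
    gram = A.T @ Winv_A              # A^T W_k^{-1} A
    return Winv_A @ np.linalg.solve(gram, r_zk)
```

For the special case $r(z_{k})=1$ (a column of ones for $A$), (13) reduces to $\Psi^{S}_{j}=w_{j}^{-1}/\sum_{i}w_{i}^{-1}$, i.e. normalized inverse-weight averaging of the past observations.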
The following equality is satisfied: $\hat{y}_{k+1}(\Psi^{S})=b_{Y}^{T}\Psi^{S}=r(z_{k})^{T}\Phi^{*},$ where $\Phi^{*}=(A^{T}W_{k}^{-1}A)^{-1}A^{T}W_{k}^{-1}b_{Y}$ is the argument that minimizes a quadratic prediction-error cost function: $\begin{array}[]{rl}J(\Phi)=&(b_{Y}-A\Phi)^{T}W_{k}^{-1}(b_{Y}-A\Phi)\\\ =&\displaystyle\sum\limits_{j=1}^{v}\frac{(y_{j}-r(z_{j-1})^{T}\Phi)^{2}}{(\sigma+L||z_{j-1}-z_{k}||)}.\end{array}$ (14) In this way, the stochastic prediction is equivalent to solving a weighted least-squares problem where the weights are the inverses of the diagonal entries of $W_{k}$. Commonly, $\Psi^{S}$ is not a sparse vector, that is, most of its components are non-zero. So, in order to obtain the prediction of $y_{k+1}$, a large number of past values $y_{j}$ is used. The goal of this paper is to provide a predictor that combines the two predictions obtained under the different assumptions, i.e. from $\Psi^{D}$ and $\Psi^{S}$ respectively. Section 4 introduces the key points of this paper. ## 4 Proposed predictor This work proposes to obtain an estimate of the output $y_{k+1}$ by a linear combination of past data $y_{j}$, with $j=1,2,...,v$ where $v\leq k$ ([22]). Next, a formal definition of the proposed predictor is provided. This definition uses a constant $\gamma\geq 0$ to balance the deterministic or stochastic nature of the prediction. ###### Definition 7 Given a constant $\gamma\geq 0$, the predictor $\hat{y}_{k+1}(\Psi^{*})$ is defined by $\hat{y}_{k+1}(\Psi^{*})=\displaystyle\sum\limits_{j=1}^{v}\Psi^{*}_{j}y_{j}$ where $\Psi^{*}$ is the optimal solution of: $\begin{array}[]{ccc}\Psi^{*}(\gamma)=&arg\min\limits_{\Psi}&||W_{k}\Psi||_{1}\\\ &s.t.&A^{T}\Psi=r(z_{k})\\\ &&||\Psi-\Psi^{S}||_{1}\leq\gamma\end{array}$ (15) and vector $\Psi^{S}$ is defined in (13). Some qualitative properties of the proposed predictor can now be clarified. 
Note that expression (15) is a constrained linear convex optimization problem and can be solved in an efficient way [23]. Assuming that (15) has a bounded solution, there is a constant $\bar{\gamma}$ such that if $\gamma\geq\bar{\gamma}$ then the equality $\Psi^{*}=\Psi^{D}$ is obtained. The term $||\Psi-\Psi^{S}||_{1}$ of expression (15) takes into account the stochastic assumptions explained in Section 3 when obtaining the optimal solution $\Psi^{*}$. If $\gamma=0$ then $\Psi^{*}=\Psi^{S}$. So, the constant $\gamma$ can be seen as a tuning parameter to balance the deterministic or stochastic nature of the considered approximation error. ###### Remark 2 It is important to remark that the proposed predictor encompasses some relevant non-parametric predictors. If $\gamma=0$ and $r(z_{k})=1$ the proposed predictor is equivalent to the Nadaraya-Watson predictor [7, 8]. On the other hand, if $\gamma=0$ and $r(z_{k})=[z_{k}^{T}\;1]$ a predictor based on local linear regression is obtained. Besides, if $\gamma=0$ and $L=0$ a parametric auto-regressive linear regression is performed. ###### Remark 3 It is important to remark that, following a similar reasoning, it is possible to obtain different forecasting horizons. This is represented by the expression $y_{k+h}(\Psi^{*})$, with $h\geq 1$ the number of steps ahead. ## 5 Results In this section results are shown. The predictor was evaluated on four time series: the monthly airline passenger numbers, the Canadian lynx data, the monthly critical radio frequencies in Washington, D.C. and the monthly pneumonia and influenza deaths time series, used in order to demonstrate the appropriateness and effectiveness of the proposed predictor. These time series come from different areas and have different statistical properties, so they form a suitable benchmark to test time series predictors. Subsection 5.1 explains the characteristics of the study performed; hyper-parameterization, kernels, error measures and further details are described below. 
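Before moving to the results, problem (15) itself can be sketched as a linear program: on top of the auxiliary variables $t\geq|W_{k}\Psi|$ used for (10), a second set $s\geq|\Psi-\Psi^{S}|$ encodes the constraint $\sum_{j}s_{j}\leq\gamma$. This is a minimal sketch under a textbook LP reformulation, assuming SciPy is available; the helper name `combined_weights` is ours, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def combined_weights(A, w, r_zk, psi_s, gamma):
    """LP sketch of problem (15): min ||W Psi||_1
    s.t. A^T Psi = r(z_k) and ||Psi - Psi_S||_1 <= gamma."""
    v = A.shape[0]
    # Variables x = [Psi, t, s] with |W Psi| <= t and |Psi - Psi_S| <= s.
    c = np.concatenate([np.zeros(v), np.ones(v), np.zeros(v)])
    W, I, Z = np.diag(w), np.eye(v), np.zeros((v, v))
    A_ub = np.block([
        [W, -I, Z], [-W, -I, Z],      # |W Psi| <= t
        [I, Z, -I], [-I, Z, -I],      # |Psi - Psi_S| <= s
    ])
    b_ub = np.concatenate([np.zeros(2 * v), psi_s, -psi_s])
    # Budget constraint sum(s) <= gamma.
    A_ub = np.vstack([A_ub, np.concatenate([np.zeros(2 * v), np.ones(v)])])
    b_ub = np.append(b_ub, gamma)
    A_eq = np.hstack([A.T, np.zeros((A.shape[1], 2 * v))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=r_zk,
                  bounds=[(None, None)] * v + [(0, None)] * 2 * v,
                  method="highs")
    return res.x[:v]
```

With $\gamma=0$ the budget forces $\Psi=\Psi^{S}$, and for $\gamma$ large enough the constraint is inactive and the deterministic solution of (10) is recovered, matching the two limit cases stated in the text.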
### 5.1 Considerations * 1. To simplify the study, the proposed predictor (denoted $CP$) is considered with values $\sigma=0$ and $L=1$ in all cases. Note that $\sigma$ and $L$ could be treated as hyper-parameters in order to improve the results obtained by the proposed predictor in this study. * 2. The proposed predictor ($CP$) is compared to three Nadaraya-Watson predictors (denoted as $NW1$, $NW2$ and $NW3$), using the Epanechnikov, Gaussian and Tricube kernel functions respectively, and to three local linear regression models (denoted as $LL_{1}$, $LL_{2}$ and $LL_{3}$), using the same three kernel functions respectively to define the local weights. Table 1 shows the expression of the weights $w_{i,i}$, with $i=1,...,N$, for the aforementioned kernel functions. A bandwidth $\gamma$ is considered in the non-parametric predictors. An auto-regressive linear regression $(AR)$ is also indirectly included in the benchmark, since the proposed predictor reduces to this model for suitable hyper-parameter combinations, as explained in Remark 2 of Section 4. Table 1: Kernel functions Epanechnikov | Gaussian | Tricube ---|---|--- $w_{i,i}=\left\\{\begin{array}[]{cc}1-v_{i}^{2}&if\;|v_{i}|\leq 1\\\ 0&if\;|v_{i}|>1\end{array}\right.$ | $w_{i,i}=e^{-\frac{1}{2}v_{i}^{2}}$ | $w_{i,i}=\left\\{\begin{array}[]{cc}(1-|v_{i}|^{3})^{3}&if\;|v_{i}|\leq 1\\\ 0&if\;|v_{i}|>1\end{array}\right.$ $v_{i}=\frac{||z_{i}-z_{k}||}{\gamma}$ * 3. Two different forecast accuracy measures, which have been studied by several authors [24], are used to compare the predictor performances: the Mean Absolute Percentage Error (MAPE) and the Symmetric Mean Absolute Percentage Error (SMAPE). The Mean Absolute Percentage Error is defined by $\begin{array}[]{ccc}MAPE=\frac{100}{n}\displaystyle\sum_{t=1}^{n}\left|\frac{y_{t}-\hat{y}_{t}}{y_{t}}\right|,\end{array}$ (16) where $\hat{y}_{t}$ and ${y_{t}}$ are the predicted and observed data, respectively, and $n$ is the number of data points.
The second criterion is the Symmetric Mean Absolute Percentage Error (SMAPE), defined by $\begin{array}[]{ccc}SMAPE=\frac{100}{n}\displaystyle\sum_{t=1}^{n}\frac{\left|\hat{y}_{t}-y_{t}\right|}{(|y_{t}|+|\hat{y}_{t}|)/2},\end{array}$ (17) where $\hat{y}_{t}$ and ${y_{t}}$ are the predicted and observed data, respectively, and $n$ is the number of data points. * 4. In order to train the predictors, each time series has been split into a training set (between 70% and 90% of the data) and a test set (between 10% and 30% of the data). The error measures described in formulas (16) and (17) are selected to benchmark the predictors not only because of their interpretability but also because they are scale-independent [25]. A leave-one-out cross-validation approach on the training set, together with a grid search in the hyper-parameter $\gamma$ space, is used to find a suitable value of the hyper-parameter $\gamma$. This value is then used on the test set to evaluate the prediction methods. * 5. As explained in Section 4, the estimation is obtained as a linear combination of past data; in this way, the different models are evaluated by using only the training set to infer the prediction, that is, only past data $y_{j}$ with $v<k$ are used. Moreover, three different forecast horizons are computed by each predictor (1-step-ahead, 2-step-ahead and 3-step-ahead). In order to test the proposed predictor, two different types of data are used in the benchmark: in Subsection 5.2, some well-known academic time series are used to compare the models, and in Subsection 5.3, a real-life time series is used to test the predictor. ### 5.2 Academic Time series This subsection compares six non-parametric models against the proposed model on four different time series, performing forecasts over three different prediction horizon lengths.
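The error measures (16) and (17) used throughout the benchmark can be computed directly; a minimal sketch (function names are illustrative):

```python
import numpy as np

def mape(y, y_hat):
    """Mean Absolute Percentage Error, as in (16)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * np.mean(np.abs((y - y_hat) / y))

def smape(y, y_hat):
    """Symmetric Mean Absolute Percentage Error, as in (17)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * np.mean(np.abs(y_hat - y) / ((np.abs(y) + np.abs(y_hat)) / 2.0))
```

Note that MAPE is undefined when some $y_{t}=0$, one reason SMAPE is often reported alongside it.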
For each time series, the test-set $MAPE$ and $SMAPE$ errors are reported in bar plots for the proposed forecasting horizons. A final sub-subsection averages the results over all time series in order to extract general conclusions about the different benchmarked models. #### 5.2.1 Airline passengers dataset The classic Box and Jenkins airline data contains monthly totals of international airline passengers from $1949$ to $1960$ [1]. This time series, plotted in Figure 1(a), has $144$ observations; the first $101$ observations were used as the training set and the last $43$ as the test set. It has been widely analyzed in the time series literature. As explained in Section 2, a detrended time series is considered. In this sense, a base-10 logarithm and a linear detrending function are applied to transform the data; the transformed data set is shown in Figure 1(b). (a) Original Time series (b) Transformed Time series Figure 1: Monthly totals of international airline passengers (1949 $-$ 1960). The auto-correlation plot of the transformed data set, shown in Figure 2, reveals a high correlation between observations of this time series that are separated by $k=12$ time units; accordingly, the predictor can be represented as $r(z_{k})=z_{k}=[y_{k-1}\;y_{k-2}\;...\;y_{k-12}]^{T}$. Figure 2: Auto-correlation function of Airline passengers transformed time series. Figure 3 shows the forecasts of the proposed predictor by forecasting horizon, with the hyper-parameters selected under both error measures. Figure 3: International airline passengers predictions by forecasting horizon in the test set. The hyper-parameter $\gamma$ is selected on the training set where the error is minimum, and this value of $\gamma$ is then used to perform forecasts on the test set. Depending on the error measure selected in the training set, the results may vary; in this case, the same optimal hyper-parameters are found under both error measures. These results are shown in Table 2.
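The log/detrend transformation applied to the airline series can be sketched as follows (function name and return layout are illustrative, not from the paper):

```python
import numpy as np

def log_detrend(y):
    """Base-10 log transform followed by removal of a
    least-squares linear trend, as applied to the airline series."""
    z = np.log10(np.asarray(y, dtype=float))
    t = np.arange(len(z))
    slope, intercept = np.polyfit(t, z, 1)  # fit the linear trend
    trend = slope * t + intercept
    return z - trend, (slope, intercept)
```

For a series with a pure exponential trend, the residual after this transformation is identically zero, which makes the effect of the transform easy to verify.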
Table 2: Airline passengers time series optimal gamma. Ahead | $\gamma$_mape | $\gamma$_smape ---|---|--- 1.00 | 0.12 | 0.12 2.00 | 0.14 | 0.14 3.00 | 0.00 | 0.00 The results for this time series are shown in Table 7 in the appendix. To summarize the aforementioned table graphically, Figure 4 shows the error measures by predictor and prediction horizon. (a) SMAPE (b) MAPE Figure 4: Mean of errors by forecasting horizon in airline passengers time series in test set. The results show that the proposed predictor outperforms the others in both error measures over the proposed forecasting horizons, with the exception of the $SMAPE$ criterion in the one- and two-step-ahead predictions, where its performance is close to the $LL$ results. Moreover, the results do not vary significantly with the selection of the different kernels. #### 5.2.2 Canadian Lynx The annual numbers of lynx trappings in Canada contains the number of lynx trapped per year in the Mackenzie River district of Northern Canada from $1821$ to $1934$ [26]. It has been extensively analyzed in the time series literature, with a focus on nonlinear modeling. The lynx series, plotted in Figure 5(a), shows a periodicity of approximately 10 years. The lynx series has been studied by many researchers, who found that the best-fitting model is an AR(12) model [27]. In this way, the predictor is based on an auto-regressive model of order $p=12$, that is, $r(z_{k})=z_{k}=[y_{k-1}\;y_{k-2}\;...\;y_{k-12}]^{T}$. The lynx series has $114$ observations; the first 80 observations of this data set were used as the training set and the last 34 as the test set. A base-10 logarithm was applied to the series in order to obtain a more symmetrical data set, which is plotted in Figure 5(b): (a) Original Time series (b) Transformed Time series Figure 5: Annual number of lynx trappings in Canada from 1821 to 1934.
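The AR(12) regressor $r(z_{k})=[y_{k-1}\;\ldots\;y_{k-p}]^{T}$ used here and in the following subsections is a lag embedding of the series; a minimal sketch (indexing is zero-based, unlike the paper's notation):

```python
import numpy as np

def lag_embedding(y, p):
    """Build the AR(p) design: rows r(z_k) = [y_{k-1}, ..., y_{k-p}]
    and the corresponding targets y_k, for all valid k."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    A = np.empty((n - p, p))
    b = np.empty(n - p)
    for i, k in enumerate(range(p, n)):
        A[i] = y[k - p:k][::-1]  # most recent lag first
        b[i] = y[k]
    return A, b
```

The matrix `A` plays the role of the matrix whose rows are the regressor vectors, and `b` collects the outputs to be predicted.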
Figure 6 shows the forecasts of the proposed predictor by forecasting horizon, with the hyper-parameters selected under both error measures; in this case, different optimal hyper-parameters are found depending on the error measure selected on the training set. (a) MAPE selection criterion (b) SMAPE selection criterion Figure 6: Canadian lynx time series predictions by forecasting horizon in the test set. Table 3 shows the optimal values of gamma resulting from the different error criteria and forecasting horizons. Table 3: Canadian Lynx time series optimal gamma. Ahead | $\gamma$_mape | $\gamma$_smape ---|---|--- 1.00 | 0.01 | 0.06 2.00 | 0.00 | 0.02 3.00 | 0.02 | 0.04 The error measures corresponding to these predictions are shown in Table 8 in the appendix. To summarize this table graphically, Figure 7 plots the error measures by predictor and prediction horizon. (a) SMAPE (b) MAPE Figure 7: Mean of test set errors by forecasting horizon in Canadian lynx time series. Results in Figure 7 show that the proposed predictor obtains results similar to local linear regression in both error measures for the first two forecast horizons; in this case, a well-selected kernel for local linear regression could be a good option in order to approach the short-term results of the proposed predictor $CP$. For the three-step-ahead horizon, $LL$ and $CP$ tie under the $SMAPE$ criterion, whereas under the $MAPE$ criterion $LL$ outperforms. #### 5.2.3 Monthly critical radio frequencies The monthly critical radio frequencies in Washington, D.C., series contains the highest radio frequency that can be used for broadcasting, from May $1934$ to April $1954$ [28]. This time series, plotted in Figure 8, has $240$ observations; the first $216$ observations were used as the training set and the last $24$ as the test set. Figure 8: Monthly critical radio frequencies (1934$-$1954).
According to the auto-correlation plot attached in Figure 9 in the appendix, the established model is an auto-regressive model of order twelve, which has also been used by many researchers [29, 27]. The auto-regressive model has the form $r(z_{k})=z_{k}=[y_{k-1}\;y_{k-2}\;...\;y_{k-12}]^{T}$. Figure 9: Auto-correlation function of Monthly critical radio frequencies time series. Predictions are shown in Figure 10 for the proposed predictor by forecasting horizon, with the hyper-parameters selected under both error measures; in this case, different optimal hyper-parameters are found depending on the error measure criterion selected on the training set. (a) MAPE selection criterion (b) SMAPE selection criterion Figure 10: Monthly critical radio frequencies time series prediction by forecasting horizon in the test set. The prediction error results for this time series are recorded in Table 9 in the appendix; Table 4 shows the selected optimal gamma. Table 4: Monthly critical radio frequencies time series optimal gamma. Ahead | $\gamma$_mape | $\gamma$_smape ---|---|--- 1.00 | 0.00 | 0.00 2.00 | 0.03 | 0.00 3.00 | 0.00 | 0.00 To summarize Table 9 graphically, Figure 11 shows the error measures by predictor and forecasting horizon. (a) SMAPE (b) MAPE Figure 11: Mean of test set errors by forecasting horizon in monthly critical radio frequencies time series. The proposed predictor obtains results similar to local linear regression under the $SMAPE$ measure over the proposed forecast horizons, while under the $MAPE$ measure the proposed predictor $CP$ outperforms. Moreover, the results do not vary significantly with the selection of the different kernels in local linear regression. #### 5.2.4 Monthly pneumonia and influenza deaths Monthly pneumonia and influenza deaths per $10{,}000$ people in the United States for 11 years, $1968$ to $1978$.
This time series, plotted in Figure 12, has $132$ observations; the first $84$ observations were used as the training set and the last $24$ as the test set. Figure 12: Monthly pneumonia and influenza deaths (1968$-$1978). The auto-correlation plot in Figure 13 in the appendix shows a seasonality of approximately 12 months. Accordingly, the predictor is based on an auto-regressive model of order $p=12$, that is, $r(z_{k})=z_{k}=[y_{k-1}\;y_{k-2}\;...\;y_{k-12}]^{T}$. Figure 13: Auto-correlation function of Monthly pneumonia and influenza deaths time series. Figure 14 shows the forecasts of the proposed predictor by forecasting horizon, with the hyper-parameters selected under both error measures. Figure 14: Monthly pneumonia and influenza deaths time series predictions by forecasting horizon in the test set. The results for this time series are shown in Table 10 in the appendix. The hyper-parameter $\gamma$ is selected on the training set where the error is minimum, and this value of $\gamma$ is then used to perform forecasts on the test set, as shown in Table 5. Table 5: Monthly pneumonia and influenza deaths time series optimal gamma. Ahead | $\gamma$_mape | $\gamma$_smape ---|---|--- 1.00 | 0.50 | 0.50 2.00 | 0.22 | 0.22 3.00 | 0.08 | 0.08 To summarize Table 10 graphically, Figure 15 plots the error measures by predictor and prediction horizon. (a) SMAPE (b) MAPE Figure 15: Mean of test set errors by forecasting horizon in monthly pneumonia and influenza deaths time series. The results show that the proposed predictor outperforms the others in both error measures over the proposed forecast horizons. Moreover, the results show that the Tricube kernel is a suitable option that outperforms the other selected non-parametric methods for short-term forecasts. #### 5.2.5 Averaged results This subsection averages the results shown in Tables 7, 8, 9 and 10 attached in the appendix.
These results average all time series test errors under the two error measure selection criteria; depending on the criterion selected in the training set, the results may vary. The results show that, on average, the proposed predictor outperforms the selected methods. (a) SMAPE (b) MAPE Figure 16: Mean of test set errors by forecasting horizon Figures 17 to 19, at a different aggregation level, average the results grouped by forecasting horizon, showing that the proposed predictor outperforms the benchmarked methods and offers a competitive alternative model. Figure 17: Mean of SMAPE and MAPE results for 1 step-ahead forecasts Figure 18: Mean of SMAPE and MAPE results for 2 step-ahead forecasts Figure 19: Mean of SMAPE and MAPE results for 3 step-ahead forecasts ### 5.3 Monthly electricity supplied This subsection applies the proposed predictor to a real-world dataset. The IEA provides monthly statistics with timely and consistent oil, oil price, natural gas and electricity data for all Organisation for Economic Co-operation and Development member countries. The monthly data submitted by countries are adjusted proportionately to maintain consistency with the most recent annual data for each generation source. This time series is the electricity supplied in Spain from January $2000$ to May $2017$; the data consist of indigenous production plus imports minus exports, and include transmission and distribution losses. Figure 20 plots the aforementioned data set, which has $221$ observations; the first $181$ observations were used as the training set and the last $40$ as the test set. (a) Original Time series (b) Transformed Time series Figure 20: Monthly electricity supplied in Spain (2000$-$2017). The auto-correlation plot in Figure 21 in the appendix shows a seasonality of approximately 12 months.
Accordingly, the predictor is based on an auto-regressive model of order $p=12$, that is, $r(z_{k})=z_{k}=[y_{k-1}\;y_{k-2}\;...\;y_{k-12}]^{T}$. Figure 21: Auto-correlation function of Monthly electricity supplied in Spain. The forecasts of the proposed predictor are shown in Figure 22, plotted by forecasting horizon with the hyper-parameters selected under both error measures. (a) MAPE selection criterion (b) SMAPE selection criterion Figure 22: Monthly electricity supplied time series predictions by forecasting horizon in the test set. The results for this time series are shown in Table 11 in the appendix. The hyper-parameter $\gamma$ is selected on the training set where the error is minimum, and this value of $\gamma$ is then used to perform forecasts on the test set, as shown in Table 6. Table 6: Monthly electricity supplied time series optimal gamma. Ahead | $\gamma$_mape | $\gamma$_smape ---|---|--- 1.00 | 0.06 | 0.07 2.00 | 0.26 | 0.25 3.00 | 0.09 | 0.09 To summarize Table 11 graphically, Figure 23 plots the error measures by predictor and prediction horizon. (a) SMAPE (b) MAPE Figure 23: Mean of test set errors by forecasting horizon in Monthly electricity supplied time series. The results in Figure 23 show that the proposed predictor outperforms the others in both error measures over the proposed forecast horizons. The proposed predictor is therefore a suitable option to consider in a real-life problem. ## 6 Conclusions A novel non-parametric time series forecasting method has been proposed. The prediction is obtained as a weighted sum of past observations. A combination of deterministic and stochastic assumptions is used to obtain an expression for an outer bound of the prediction error. The weights are obtained by solving a convex optimization problem that minimizes this upper bound of the prediction error. The method includes a tuning hyper-parameter, which balances the considered deterministic and stochastic assumptions.
By a cross-validation scheme, a suitable hyper-parameter can be obtained. The performance of the proposed predictor has been demonstrated on several datasets. ## 7 Appendix The following section contains the mathematical proofs, as well as the research results collected in tables and some summary plots of these tables, to make it easier for the reader to navigate through the document. ### 7.1 Mathematical derivations Taking into account Assumption 1 and Definitions 2 and 3, the following equalities can be inferred: $\displaystyle\hat{e}_{k}(\Psi)$ $\displaystyle=$ $\displaystyle y_{k+1}-\hat{y}_{k+1}(\Psi)$ (18) $\displaystyle=$ $\displaystyle y_{k+1}-\Psi^{T}b_{Y}$ (19) $\displaystyle=$ $\displaystyle r(z_{k})^{T}\Phi_{k}-\Psi^{T}b_{Y}+e_{k}$ (20) $\displaystyle=$ $\displaystyle(A^{T}\Psi)^{T}\Phi_{k}-\Psi^{T}b_{Y}+e_{k}$ (21) $\displaystyle=$ $\displaystyle\Psi^{T}(A\Phi_{k}-b_{Y})+e_{k}$ (22) $\displaystyle=$ $\displaystyle\displaystyle\sum\limits_{j=1}^{k}\Psi_{j}(r(z_{j-1})^{T}\Phi_{k}-y_{j})+e_{k}$ (23) $\displaystyle=$ $\displaystyle-\displaystyle\sum\limits_{j=1}^{k}\Psi_{j}e_{j-1}+e_{k}.$ (24) _QED_ ### 7.2 Tables Model | Ahead | $\gamma$ | tr_MAPE | te_MAPE | $\gamma$ | tr_SMAPE | te_SMAPE ---|---|---|---|---|---|---|--- CP | 1.00 | 0.12 | 15.75 | 13.04 | 0.12 | 15.46 | 14.97 LL1 | 1.00 | 1.57 | 15.19 | 14.65 | 1.84 | 14.34 | 15.69 LL2 | 1.00 | 1.55 | 14.82 | 17.11 | 1.62 | 13.77 | 14.13 LL3 | 1.00 | 1.89 | 16.42 | 15.47 | 2.33 | 14.04 | 16.98 NW1 | 1.00 | 1.08 | 17.69 | 27.26 | 1.08 | 13.82 | 21.05 NW2 | 1.00 | 1.02 | 17.36 | 28.38 | 1.02 | 13.48 | 21.90 NW3 | 1.00 | 1.24 | 17.91 | 27.16 | 1.26 | 13.98 | 21.09 CP | 2.00 | 0.14 | 17.36 | 14.05 | 0.14 | 16.88 | 16.58 LL1 | 2.00 | 1.58 | 13.69 | 18.89 | 1.58 | 13.56 | 16.45 LL2 | 2.00 | 1.54 | 14.22 | 22.02 | 1.69 | 13.24 | 17.71 LL3 | 2.00 | 1.67 | 14.79 | 18.37 | 2.09 | 14.38 | 18.44 NW1 | 2.00 | 1.12 | 17.06 | 29.65 | 1.14 | 13.29 | 22.91 NW2 | 2.00 | 0.98 | 16.71 | 30.40 | 0.98 | 13.05 | 23.22 NW3 | 2.00 | 1.26 | 17.33 |
29.41 | 1.28 | 13.40 | 22.61 CP | 3.00 | 0.00 | 17.43 | 15.45 | 0.00 | 16.50 | 18.66 LL1 | 3.00 | 1.64 | 16.06 | 24.99 | 2.15 | 15.59 | 21.13 LL2 | 3.00 | 1.61 | 17.89 | 25.89 | 2.01 | 15.37 | 22.14 LL3 | 3.00 | 1.78 | 16.49 | 25.33 | 2.34 | 15.72 | 20.72 NW1 | 3.00 | 1.12 | 17.16 | 32.13 | 1.14 | 13.23 | 24.05 NW2 | 3.00 | 0.98 | 16.88 | 32.80 | 0.98 | 13.13 | 24.09 NW3 | 3.00 | 1.24 | 17.55 | 31.12 | 1.30 | 13.37 | 23.75 Table 7: Airline passengers time series results Model | Ahead | $\gamma$ | tr_MAPE | te_MAPE | $\gamma$ | tr_SMAPE | te_SMAPE ---|---|---|---|---|---|---|--- CP | 1.00 | 0.01 | 6.75 | 5.09 | 0.06 | 6.73 | 5.13 LL1 | 1.00 | 13.01 | 6.66 | 4.94 | 13.01 | 6.62 | 5.03 LL2 | 1.00 | 12.01 | 6.54 | 5.00 | 12.01 | 6.50 | 5.09 LL3 | 1.00 | 16.01 | 6.72 | 4.86 | 16.01 | 6.68 | 4.94 NW1 | 1.00 | 4.33 | 8.42 | 10.53 | 4.47 | 7.99 | 10.98 NW2 | 1.00 | 4.03 | 8.28 | 10.40 | 4.03 | 7.87 | 10.88 NW3 | 1.00 | 5.11 | 8.42 | 10.69 | 5.27 | 7.99 | 11.18 CP | 2.00 | 0.00 | 10.08 | 9.31 | 0.02 | 10.07 | 8.88 LL1 | 2.00 | 11.01 | 9.94 | 9.41 | 11.01 | 9.85 | 9.79 LL2 | 2.00 | 11.01 | 9.82 | 9.29 | 12.01 | 9.66 | 8.83 LL3 | 2.00 | 12.01 | 9.99 | 9.67 | 12.01 | 9.89 | 10.11 NW1 | 2.00 | 4.76 | 9.45 | 12.29 | 4.77 | 8.91 | 12.87 NW2 | 2.00 | 4.31 | 9.33 | 11.83 | 4.31 | 8.79 | 12.39 NW3 | 2.00 | 5.49 | 9.46 | 12.41 | 5.66 | 8.92 | 13.04 CP | 3.00 | 0.02 | 11.18 | 12.49 | 0.04 | 11.27 | 11.73 LL1 | 3.00 | 12.01 | 11.24 | 11.29 | 12.01 | 10.92 | 11.84 LL2 | 3.00 | 10.01 | 10.98 | 11.32 | 12.01 | 10.72 | 11.40 LL3 | 3.00 | 14.01 | 11.36 | 11.20 | 16.01 | 11.00 | 11.58 NW1 | 3.00 | 4.96 | 9.88 | 13.04 | 4.96 | 9.30 | 13.71 NW2 | 3.00 | 4.35 | 9.58 | 12.76 | 4.35 | 9.03 | 13.43 NW3 | 3.00 | 5.70 | 9.92 | 13.07 | 5.70 | 9.34 | 13.75 Table 8: Canadian Lynx time series results Model | Ahead | $\gamma$ | tr_MAPE | te_MAPE | $\gamma$ | tr_SMAPE | te_SMAPE ---|---|---|---|---|---|---|--- CP | 1.00 | 0.00 | 7.33 | 6.70 | 0.00 | 7.26 | 6.89 LL1 | 1.00 | 50.00 | 7.25 | 7.29 | 50.00 | 
7.23 | 7.09 LL2 | 1.00 | 30.00 | 7.17 | 6.98 | 30.00 | 7.18 | 6.74 LL3 | 1.00 | 60.00 | 7.26 | 7.31 | 60.00 | 7.24 | 7.10 NW1 | 1.00 | 13.00 | 8.92 | 10.06 | 13.00 | 8.92 | 9.36 NW2 | 1.00 | 13.00 | 9.09 | 11.68 | 13.00 | 9.04 | 10.79 NW3 | 1.00 | 15.00 | 8.86 | 9.91 | 15.00 | 8.85 | 9.24 CP | 2.00 | 0.03 | 11.16 | 8.71 | 0.00 | 10.92 | 9.58 LL1 | 2.00 | 40.00 | 10.75 | 10.13 | 40.00 | 10.74 | 9.50 LL2 | 2.00 | 40.00 | 10.71 | 10.27 | 40.00 | 10.69 | 9.66 LL3 | 2.00 | 50.00 | 10.74 | 10.11 | 50.00 | 10.74 | 9.49 NW1 | 2.00 | 14.00 | 10.51 | 14.07 | 14.00 | 10.45 | 12.84 NW2 | 2.00 | 12.00 | 10.60 | 14.68 | 13.00 | 10.52 | 13.78 NW3 | 2.00 | 16.00 | 10.49 | 13.69 | 16.00 | 10.44 | 12.53 CP | 3.00 | 0.00 | 12.42 | 9.63 | 0.00 | 12.25 | 10.34 LL1 | 3.00 | 50.00 | 12.26 | 11.05 | 50.00 | 12.14 | 10.22 LL2 | 3.00 | 40.00 | 12.18 | 11.08 | 40.00 | 12.07 | 10.25 LL3 | 3.00 | 50.00 | 12.24 | 10.98 | 50.00 | 12.14 | 10.15 NW1 | 3.00 | 14.00 | 11.65 | 16.46 | 14.00 | 11.53 | 14.87 NW2 | 3.00 | 13.00 | 11.74 | 17.64 | 13.00 | 11.57 | 15.81 NW3 | 3.00 | 15.00 | 11.61 | 15.00 | 16.00 | 11.52 | 14.56 Table 9: Monthly critical radio frequencies time series results Model | Ahead | $\gamma$ | tr_MAPE | te_MAPE | $\gamma$ | tr_SMAPE | te_SMAPE ---|---|---|---|---|---|---|--- CP | 1.00 | 0.50 | 10.30 | 11.89 | 0.50 | 9.70 | 12.06 LL1 | 1.00 | 1.47 | 10.47 | 18.44 | 1.48 | 10.31 | 17.19 LL2 | 1.00 | 1.41 | 10.69 | 19.08 | 1.41 | 10.56 | 17.60 LL3 | 1.00 | 1.61 | 10.80 | 17.63 | 1.61 | 10.68 | 16.38 NW1 | 1.00 | 0.92 | 10.87 | 20.04 | 0.92 | 10.96 | 18.46 NW2 | 1.00 | 0.92 | 11.38 | 21.41 | 0.92 | 11.45 | 19.72 NW3 | 1.00 | 0.92 | 10.58 | 18.84 | 0.92 | 10.69 | 17.31 CP | 2.00 | 0.22 | 12.81 | 15.14 | 0.22 | 12.04 | 15.85 LL1 | 2.00 | 1.95 | 13.99 | 23.72 | 1.48 | 13.95 | 21.40 LL2 | 2.00 | 1.90 | 13.54 | 23.27 | 1.90 | 13.77 | 21.02 LL3 | 2.00 | 1.61 | 14.70 | 22.66 | 1.55 | 14.08 | 19.38 NW1 | 2.00 | 1.01 | 11.22 | 22.56 | 0.96 | 11.24 | 20.07 NW2 | 2.00 | 0.95 | 11.65 | 23.46 | 0.95 
| 11.67 | 21.50 NW3 | 2.00 | 0.92 | 10.84 | 20.67 | 0.92 | 10.78 | 18.73 CP | 3.00 | 0.08 | 13.62 | 15.11 | 0.08 | 12.97 | 15.71 LL1 | 3.00 | 1.91 | 13.52 | 23.40 | 1.49 | 13.38 | 19.94 LL2 | 3.00 | 1.88 | 13.09 | 23.12 | 1.88 | 13.11 | 21.08 LL3 | 3.00 | 2.00 | 13.88 | 23.09 | 1.63 | 13.43 | 19.07 NW1 | 3.00 | 0.92 | 11.08 | 21.20 | 0.92 | 11.17 | 19.53 NW2 | 3.00 | 0.92 | 11.65 | 21.58 | 0.92 | 11.84 | 19.99 NW3 | 3.00 | 0.94 | 10.92 | 20.81 | 0.94 | 10.96 | 19.09 Table 10: Monthly pneumonia and influenza deaths time series results Model | Ahead | $\gamma$ | tr_MAPE | te_MAPE | $\gamma$ | tr_SMAPE | te_SMAPE ---|---|---|---|---|---|---|--- CP | 1.00 | 0.06 | 13.88 | 21.81 | 0.07 | 14.16 | 23.88 LL1 | 1.00 | 1.90 | 17.08 | 30.84 | 2.00 | 14.52 | 24.42 LL2 | 1.00 | 1.87 | 16.93 | 30.80 | 1.50 | 14.40 | 24.14 LL3 | 1.00 | 2.00 | 17.14 | 30.86 | 2.00 | 14.62 | 24.38 NW1 | 1.00 | 0.68 | 17.76 | 31.16 | 0.73 | 15.00 | 25.00 NW2 | 1.00 | 0.65 | 17.58 | 31.60 | 0.65 | 15.04 | 24.65 NW3 | 1.00 | 0.78 | 17.81 | 31.21 | 0.81 | 14.98 | 24.76 CP | 2.00 | 0.26 | 15.22 | 24.32 | 0.25 | 15.23 | 26.22 LL1 | 2.00 | 2.00 | 18.01 | 35.40 | 2.00 | 15.94 | 27.67 LL2 | 2.00 | 1.98 | 17.77 | 35.36 | 1.85 | 15.85 | 27.53 LL3 | 2.00 | 2.00 | 18.18 | 35.46 | 2.00 | 16.09 | 27.71 NW1 | 2.00 | 0.70 | 19.01 | 35.19 | 0.72 | 16.14 | 27.07 NW2 | 2.00 | 0.68 | 18.96 | 37.16 | 0.70 | 16.24 | 27.75 NW3 | 2.00 | 0.78 | 19.04 | 34.48 | 0.80 | 16.19 | 26.74 CP | 3.00 | 0.09 | 15.95 | 21.80 | 0.09 | 16.13 | 24.51 LL1 | 3.00 | 1.96 | 18.62 | 35.73 | 1.92 | 16.30 | 26.50 LL2 | 3.00 | 1.79 | 18.33 | 35.24 | 1.59 | 16.02 | 26.80 LL3 | 3.00 | 2.00 | 18.76 | 35.87 | 2.00 | 16.48 | 26.58 NW1 | 3.00 | 0.73 | 19.92 | 37.01 | 0.73 | 16.87 | 27.81 NW2 | 3.00 | 0.65 | 19.24 | 38.11 | 0.65 | 16.70 | 28.31 NW3 | 3.00 | 0.83 | 20.01 | 36.74 | 0.85 | 16.92 | 27.79 Table 11: Monthly electricity supplied in spain time series results ## References * [1] G. Box and G. 
Jenkins, Time Series Analysis: Forecasting and Control. San Francisco, CA: Holden-Day, 1976. * [2] J. Hamilton, Time series analysis. Princeton, NJ: Princeton Univ. Press, 1994. * [3] H. Tong, Threshold Models in Nonlinear Time Series Analysis, vol. 21 of Lecture Notes in Statistics. Heidelberg: Springer, 1983. * [4] V. Haggan and T. Ozaki, “Modeling nonlinear vibrations using an amplitude-dependent autoregressive time series model,” Biometrika, vol. 68, pp. 186–196, 1981. * [5] K. Chang and H. Tong, “On estimating thresholds in autoregressive models,” Journal of Time Series Analysis, vol. 7, pp. 179–190, 1986. * [6] Y. Truong, A nonparametric framework for time series analysis. New Directions in Time Series Analysis, New York: Springer, 1993. * [7] E. A. Nadaraya, “On estimating regression,” Theory of Probability & Its Applications, vol. 9, no. 1, pp. 141–142, 1964. * [8] G. S. Watson, “Smooth regression analysis,” Sankhyā Ser., vol. 26, pp. 359–372, 1964. * [9] W. Härdle, Applied nonparametric regression. No. 19 in Econometric Society monographs, Cambridge u. a.: Cambridge University Pr., 1990. * [10] J. Fan and I. Gijbels, Local polynomial modelling and its applications. No. 66 in Monographs on statistics and applied probability series, London [u.a.]: Chapman and Hall, 1996. * [11] T. Hastie and R. Tibshirani, Generalized Additive Models. Monographs on Statistics and Applied Probability, Chapman and Hall, 1990. * [12] W. Härdle, H. Lütkepohl, and R. Chen, “A review of nonparametric time series analysis,” International Statistical Review, vol. 65, no. 1, pp. 49–73, 1997. * [13] J. Fan and Q. Yao, Nonlinear Time Series: Nonparametric Methods and Parametric Methods. Springer Series in Statistics, New York: Springer, 2003. * [14] J. Gao, Nonlinear Time Series: Semiparametric and Nonparametric Methods. Chapman and Hall/CRC, 2007. * [15] J. G. Gooijer and A. 
Gannoun, “Nonparametric conditional predictive regions for time series,” Computational Statistics & Data Analysis, vol. 33, no. 3, pp. 259–275, 2000. * [16] Y. Yin and P. Shang, “Forecasting traffic time series with multivariate predicting method,” Applied Mathematics and Computation, vol. 291, pp. 266–278, 2016. * [17] C. Bergmeir, R. J. Hyndman, and B. Koo, “A note on the validity of cross-validation for evaluating autoregressive time series prediction,” Computational Statistics & Data Analysis, vol. 120, pp. 70–83, 2018. * [18] E. Mangalova and O. Shesterneva, “Sequence of nonparametric models for GEFCom 2014 probabilistic electric load forecasting,” International Journal of Forecasting, vol. 32, no. 3, pp. 1023–1028, 2016. * [19] R. J. Hyndman, M. L. King, I. Pitrun, and B. Billah, “Local linear forecasts using cubic smoothing splines,” Australian & New Zealand Journal of Statistics, vol. 47, no. 1, pp. 87–99, 2005. * [20] M. Milanese, J. Norton, H. Piet-Lahanier, and E. Walter, Bounding Approaches to System Identification. New York: Plenum Press, 1996. * [21] J. M. Bravo, T. Alamo, M. Vasallo, and M. E. Gegúndez, “A general framework for predictors based on bounding techniques and local approximation,” IEEE Transactions on Automatic Control, vol. 62, pp. 3430–3435, July 2017. * [22] J. Roll, A. Nazin, and L. Ljung, “Nonlinear system identification via direct weight optimization,” Automatica, vol. 41, no. 3, pp. 475–490, 2005. * [23] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004. * [24] J. S. Armstrong, Long-range Forecasting: From Crystal Ball to Computer, vol. 2. Wiley, 1985. * [25] R. J. Hyndman and A. B. Koehler, “Another look at measures of forecast accuracy,” International Journal of Forecasting, vol. 22, no. 4, pp. 679–688, 2006. * [26] M. J. Campbell and A. M. Walker, “A survey of statistical work on the Mackenzie River series of annual Canadian lynx trappings for the years 1821–1934 and a new analysis,” Journal of the Royal Statistical Society, Series A, vol. 140, pp. 411–431, 1977. * [27] G. Zhang, “Time series forecasting using a hybrid ARIMA and neural network model,” Neurocomputing, vol. 50, pp. 159–175, Jan. 2003. * [28] Newton, “Monthly critical radio frequencies in Washington,” 1988. Data retrieved from DataMarket. * [29] T. S. Rao and M. Gabr, An Introduction to Bispectral Analysis and Bilinear Time Series Models, vol. 24. New York: Springer-Verlag, 1984.
# Modular Frobenius pseudo-varieties Aureliano M. Robles-Pérez and José Carlos Rosales∗ Both authors are supported by the project MTM2017-84890-P (funded by Ministerio de Economía, Industria y Competitividad and Fondo Europeo de Desarrollo Regional FEDER) and by the Junta de Andalucía Grant Number FQM-343.Departamento de Matemática Aplicada & Instituto de Matemáticas (IMAG), Universidad de Granada, 18071-Granada, Spain. E-mail<EMAIL_ADDRESS>ORCID: 0000-0003-2596-1249.Departamento de Álgebra & Instituto de Matemáticas (IMAG), Universidad de Granada, 18071-Granada, Spain. E-mail<EMAIL_ADDRESS>ORCID: 0000-0003-3353-4335. ###### Abstract If $m\in\mathbb{N}\setminus\\{0,1\\}$ and $A$ is a finite subset of $\bigcup_{k\in\mathbb{N}\setminus\\{0,1\\}}\\{1,\ldots,m-1\\}^{k}$, then we denote by $\displaystyle\mathscr{C}(m,A)=\Big{\\{}S\in\mathscr{S}_{m}\mid s_{1}+\cdots+s_{k}-m\in S\mbox{ if }(s_{1},\ldots,s_{k})\in S^{k}\mbox{ and }$ $\displaystyle(s_{1}\bmod m,\ldots,s_{k}\bmod m)\in A\Big{\\}}.$ In this work we prove that $\mathscr{C}(m,A)$ is a Frobenius pseudo-variety. We also show algorithms that allow us to establish whether a numerical semigroup belongs to $\mathscr{C}(m,A)$ and to compute all the elements of $\mathscr{C}(m,A)$ with a fixed genus. Moreover, we introduce and study three families of numerical semigroups, called second-level, thin, and strong, corresponding to $\mathscr{C}(m,A)$ when $A=\\{1,\ldots,m-1\\}^{3}$, $A=\\{(1,1),\ldots,(m-1,m-1)\\}$, and $A=\\{1,\ldots,m-1\\}^{2}\setminus\\{(1,1),\ldots,(m-1,m-1)\\}$, respectively. Keywords: Modular pseudo-varieties, second-level numerical semigroups, thin numerical semigroups, strong numerical semigroups, tree associated (with a modular pseudo-variety). 2010 AMS Classification: 20M14. ## 1 Introduction Let $\mathbb{N}$ be the set of non-negative integers. A _numerical semigroup_ is a subset $S$ of $\mathbb{N}$ that is closed under addition, contains $0$, and has finite complement $\mathbb{N}\setminus S$.
The _Frobenius number_ of $S$, denoted by $\mathrm{F}(S)$, is the greatest integer that does not belong to $S$. The cardinality of $\mathbb{N}\setminus S$, denoted by $\mathrm{g}(S)$, is the _genus_ of $S$. A Frobenius pseudo-variety is a non-empty family $\mathcal{P}$ of numerical semigroups that fulfils the following conditions. 1. 1. $\mathcal{P}$ has a maximum element (with respect to the inclusion order). 2. 2. If $S,T\in\mathcal{P}$, then $S\cap T\in\mathcal{P}$. 3. 3. If $S\in\mathcal{P}$ and $S\neq\max(\mathcal{P})$, then $S\cup\\{{\rm F}(S)\\}\in\mathcal{P}$. The _multiplicity_ of a numerical semigroup $S$, denoted by $\mathrm{m}(S)$, is the least positive integer that belongs to $S$. If $m$ is a positive integer, then we denote by $\mathscr{S}_{m}=\left\\{S\mid S\mbox{ is a numerical semigroup with }\mathrm{m}(S)=m\right\\}$. Let $m\in\mathbb{N}\setminus\\{0,1\\}$ and let $A$ be a finite subset of $\bigcup_{k\in\mathbb{N}\setminus\\{0,1\\}}\\{1,\ldots,m-1\\}^{k}$ (where $X^{k}=X\times\stackrel{{{}^{(k)}}}{{\cdots}}\times X=\left\\{(x_{1},\ldots,x_{k})\mid x_{1},\ldots,x_{k}\in X\right\\}$). We denote by $\displaystyle\mathscr{C}(m,A)=\Big{\\{}S\in\mathscr{S}_{m}\mid s_{1}+\cdots+s_{k}-m\in S\mbox{ if }(s_{1},\ldots,s_{k})\in S^{k}\mbox{ and }$ $\displaystyle(s_{1}\bmod m,\ldots,s_{k}\bmod m)\in A\Big{\\}}.$ Our main purpose in this work is to study the set $\mathscr{C}(m,A)$. In Section 2 we show that $\mathscr{C}(m,A)$ is a Frobenius pseudo-variety with maximum element given by $\Delta(m)=\left\\{0,m,\to\right\\}$ (where the symbol $\to$ means that every integer greater than $m$ belongs to $\Delta(m)$). Thus, we call the pseudo-varieties that arise in this way modular Frobenius pseudo-varieties. Also, we give an algorithm that allows us to establish whether a numerical semigroup belongs to $\mathscr{C}(m,A)$.
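As an illustrative sketch of such a membership test (the helper names are hypothetical, and the bound used to locate $\mathrm{F}(S)$ is a crude heuristic assuming the generators have gcd $1$), one can enumerate the small elements of $S$ and check the defining closure condition; summands larger than $\mathrm{F}(S)$ need not be checked, since they force the sum minus $m$ past $\mathrm{F}(S)$:

```python
from itertools import product

def semigroup_elements(generators, bound):
    """Elements of the numerical semigroup <generators> up to `bound`."""
    elems = {0}
    for n in range(1, bound + 1):
        if any(n - g in elems for g in generators if n - g >= 0):
            elems.add(n)
    return elems

def in_C(generators, m, A):
    """Check whether <generators> belongs to C(m, A), i.e. whether
    s_1 + ... + s_k - m in S whenever the residues of the tuple lie in A."""
    F_bound = max(generators) ** 2  # crude bound containing F(S) (assumes gcd = 1)
    S = semigroup_elements(generators, F_bound)
    F = max(n for n in range(F_bound + 1) if n not in S)
    small = [s for s in sorted(S) if 0 < s <= F + m]  # enough summands to test
    for pattern in A:
        for tup in product(small, repeat=len(pattern)):
            if all(s % m == a for s, a in zip(tup, pattern)):
                total = sum(tup) - m
                if total <= F and total not in S:
                    return False
    return True
```

For instance, since $\mathcal{M}_{m}=\mathscr{C}(m,\\{1,\ldots,m-1\\}^{2})$, the semigroup $\langle 3,4,5\rangle$ (maximal embedding dimension) passes the test for $m=3$, while $\langle 3,4\rangle$ fails it because $4+4-3=5\notin\langle 3,4\rangle$.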
In addition, with the help of the results in [9], we can arrange $\mathscr{C}(m,A)$ in a rooted tree and we find an algorithm to compute all the elements of $\mathscr{C}(m,A)$ with a fixed genus. If $X$ is a non-empty subset of $\mathbb{N}$, then we denote by $\langle X\rangle$ the submonoid of $(\mathbb{N},+)$ generated by $X$, that is, $\langle X\rangle=\big{\\{}\lambda_{1}x_{1}+\cdots+\lambda_{n}x_{n}\mid n\in\mathbb{N}\setminus\\{0\\},\ x_{1},\ldots,x_{n}\in X,\ \lambda_{1},\ldots,\lambda_{n}\in\mathbb{N}\big{\\}}.$ It is well known (see Lemma 2.1 of [11]) that $\langle X\rangle$ is a numerical semigroup if and only if $\gcd(X)=1$. If $M$ is a submonoid of $(\mathbb{N},+)$ and $M=\langle X\rangle$, then we say that $X$ is a _system of generators_ of $M$. Moreover, if $M\not=\langle Y\rangle$ for any subset $Y\subsetneq X$, then we say that $X$ is a _minimal system of generators_ of $M$. In Corollary 2.8 of [11] it is shown that each submonoid of $(\mathbb{N},+)$ has a unique minimal system of generators and that such a system is finite. We denote by $\mathrm{msg}(M)$ the minimal system of generators of $M$. The cardinality of $\mathrm{msg}(M)$, denoted by $\mathrm{e}(M)$, is the _embedding dimension_ of $M$. By applying Proposition 2.10 of [11], if $S$ is a numerical semigroup, then we know that $\mathrm{e}(S)\leq\mathrm{m}(S)$. A numerical semigroup $S$ has _maximal embedding dimension_ if $\mathrm{e}(S)=\mathrm{m}(S)$. This family of numerical semigroups has been extensively studied (for instance, see [2] and [11]). Let us denote by $\mathcal{M}_{m}$ the set formed by the numerical semigroups that have maximal embedding dimension and with multiplicity $m$. It is easy to see that $\mathcal{M}_{m}=\mathscr{C}(m,\\{1,\ldots,m-1\\}^{2})$. In Sections 3, 4, and 5 we study the family $\mathscr{C}(m,A)$ for * • $A=\\{1,\ldots,m-1\\}^{3}$, * • $A=\\{(1,1),\ldots,(m-1,m-1)\\}$, * • $A=\\{1,\ldots,m-1\\}^{2}\setminus\\{(1,1),\ldots,(m-1,m-1)\\}$, respectively. 
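The notions just recalled (system of generators, minimal system of generators, embedding dimension) can be made concrete with a short computational sketch. The following Python code is only an illustration (the function names are ours; the paper itself relies on the GAP package of [6]); it uses the standard fact that a non-zero element of a numerical semigroup $S$ is a minimal generator if and only if it is not a sum of two non-zero elements of $S$.

```python
from math import gcd
from functools import reduce

def elements_up_to(gens, bound):
    """Elements of the monoid <gens> not exceeding `bound` (dynamic programming)."""
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for n in range(1, bound + 1):
        in_S[n] = any(n >= g and in_S[n - g] for g in gens)
    return {n for n in range(bound + 1) if in_S[n]}

def msg(gens):
    """Minimal system of generators of <gens>: the non-zero elements that
    are not a sum of two non-zero elements (all of them lie in `gens`)."""
    S = elements_up_to(gens, max(gens)) - {0}
    return sorted(g for g in set(gens)
                  if not any(s < g and g - s in S for s in S))

def has_maximal_embedding_dimension(gens):
    """Check e(S) = m(S) for the numerical semigroup S = <gens>."""
    assert reduce(gcd, list(gens)) == 1, "<gens> is numerical iff gcd = 1"
    return len(msg(gens)) == min(gens)
```

For instance, `msg([4, 5, 6, 7, 8, 9])` returns `[4, 5, 6, 7]`, so $\Delta(4)$ has maximal embedding dimension.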
Observe that, in a certain sense, these families are generalisations of $\mathcal{M}_{m}$. To finish this introduction, we are going to comment on several ideas (see [3, 4, 14]) that motivate the study of modular Frobenius pseudo-varieties. First of all, observe that modular Frobenius pseudo-varieties are related to the non-homogeneous patterns with positive coefficients whose constant term involves the multiplicity of the numerical semigroup (see [4]). The notion of non-homogeneous pattern was introduced in [4] as a generalisation of the notion of homogeneous pattern [3]. Thus, a _linear pattern_ $p(x_{1},\ldots,x_{n})$ is an expression of the form $a_{1}x_{1}+\cdots+a_{n}x_{n}+a_{0}$, with $a_{0}\in\mathbb{Z}$ (as usual, $\mathbb{Z}$ is the set of integers) and $a_{1},\ldots,a_{n}\in\mathbb{Z}\setminus\\{0\\}$. In particular, the (linear) pattern $p$ is _homogeneous_ if $a_{0}=0$, and _non-homogeneous_ if $a_{0}\not=0$. On the other hand, it is said that a numerical semigroup $S$ admits the homogeneous pattern $p$ if $p(s_{1},\ldots,s_{n})\in S$ for every non-increasing sequence $(s_{1},\ldots,s_{n})\in S^{n}$. The corresponding definition for non-homogeneous patterns is a bit different: a numerical semigroup $S$ admits the non-homogeneous pattern $p$ if $p(s_{1},\ldots,s_{n})\in S$ for every non-increasing sequence $(s_{1},\ldots,s_{n})\in(S\setminus\\{0\\})^{n}$. Having in mind that $M(S)=S\setminus\\{0\\}$ is an ideal of the numerical semigroup $S$ (in fact, $M(S)$ is the maximal ideal of $S$), in [14] the concepts of the above paragraph were extended in the following way: an ideal $I$ of a numerical semigroup $S$ admits the pattern $p$ if $p(s_{1},\ldots,s_{n})\in S$ for every non-increasing sequence $(s_{1},\ldots,s_{n})\in I^{n}$. At this point we have the main difference between the above-mentioned papers and our proposal in this work: we do not impose the non-increasing condition on the sequences in which we evaluate the pattern. 
In addition, we take sequences in sets $A$ without structure (that is, $A$ does not have to be an ideal of a numerical semigroup). Now, let us denote by $\mathscr{S}_{m}(p)$ the family of numerical semigroups with multiplicity $m$ that admit the pattern $p$. If we take the patterns $p_{1}=2x_{1}+x_{2}-m$ and $p_{2}=x_{1}+2x_{2}-m$, then $\mathscr{S}_{m}(p_{1},p_{2})=\mathscr{S}_{m}(p_{1})\cap\mathscr{S}_{m}(p_{2})$ (that is, the family of numerical semigroups which admit $p_{1}$ and $p_{2}$ simultaneously) is equal to $\mathscr{C}(m,A)$ for $A=\left\\{(a_{1},a_{2},a_{3})\in\\{1,\ldots,m-1\\}^{3}\mid a_{1}\equiv a_{2}\pmod{m}\right\\},$ which is an easy example of the connection between modular Frobenius pseudo-varieties and families of numerical semigroups defined by non-homogeneous patterns (as considered in [4]). A first question studied in [4, 14] is the following: if $p=a_{1}x_{1}+\cdots+a_{n}x_{n}+a_{0}$ is a non-homogeneous pattern, for which values of $a_{0}$ is the family $\mathscr{S}(p)$, of numerical semigroups that admit the pattern $p$, non-empty? If $\mathscr{S}(p)\not=\emptyset$, then it is said that $p$ is an _admissible pattern_. In our case, we have imposed that $a_{0}=-m$, where $m$ is the multiplicity of all the numerical semigroups involved, in order to get a relevant advantage: we can build in a (more or less) easy and explicit way the Apéry set of the numerical semigroups belonging to $\mathscr{C}(m,A)$. As a consequence of this fact, we will be able to obtain extra information about such families of numerical semigroups. Of course, in addition to $-m$, it is possible to consider other values that maintain the pseudo-variety structure. Thus, for $p=x_{1}+\cdots+x_{n}+a_{0}$, we have that, independently of the chosen set $A$, every numerical semigroup $S$ such that $a_{0}\in S$ admits the pattern $p$. Now, let us observe that, if $A_{1}\subseteq A_{2}$, then $\mathscr{C}(m,A_{2})\subseteq\mathscr{C}(m,A_{1})$. 
This fact allows us to obtain sufficient conditions on the patterns $p=a_{1}x_{1}+\cdots+a_{n}x_{n}-km$, with $k\in\mathbb{N}\setminus\\{0,1\\}$, from the results in [4, 14]. For example, from [4, Theorem 4.1] or [14, Proposition 20], we can assert that $\mathscr{C}(m,A)\not=\emptyset$ if $n-k\geq 1$. A second question that appears in [3, 14] is about the _equivalence of patterns_. Briefly, the pattern $p_{1}$ induces the pattern $p_{2}$ if $\mathscr{S}(p_{2})\subseteq\mathscr{S}(p_{1})$ and, moreover, $p_{1}$ and $p_{2}$ are equivalent patterns if they induce each other. Perhaps the first result of this type is the equivalence between the homogeneous patterns $2x_{1}-x_{2}$ and $x_{1}+x_{2}-x_{3}$, which correspond to the family of Arf numerical semigroups (see [5]). Since the set $A$ has an important role (not to say the main role) in the families $\mathscr{C}(m,A)$, the large number of possibilities in the choice of $A$ leads us to believe that this is an issue that deserves a separate work. In this direction, in Section 4 we study the family of thin numerical semigroups, denoted by $\mathscr{T}_{m}$, and in Section 5 we consider the family of strong numerical semigroups, denoted by $\mathscr{R}_{m}$. These families are associated with the patterns $2x-m$ and $x+y-m$, respectively. However, the set $A$ is different in each case and, as a consequence, there is no inclusion relation between them (see Examples 4.6 and 5.6). This fact may give an idea of the difficulty of obtaining results similar to those in [3, 14]. In any case, we can show simple results about the equivalence (or, at least, the inclusions) of the families $\mathscr{C}(m,A)$. ###### Remark 1.1. All the patterns $p=x_{1}+\cdots+x_{n}-m$ are equivalent (independently of the chosen set $A$) if $n\geq m$, in which case we have that $\mathscr{C}(m,A)=\mathscr{S}_{m}$. 
Indeed, applying the pigeonhole principle, if $s_{1},\ldots,s_{n}$ are elements of $S\in\mathscr{S}_{m}$ such that $s_{i}\not\equiv 0\pmod{m}$, $1\leq i\leq n$, then there exist $i,j\in\\{1,\ldots,n\\}$, with $i<j$, such that $s_{i}+\cdots+s_{j}=km$ for some $k\in\mathbb{N}\setminus\\{0\\}$. Consequently, $s_{1}+\cdots+s_{n}-m=(k-1)m+\sum_{l\notin\\{i,\ldots,j\\}}s_{l}\in S$ and, therefore, $S\in\mathscr{C}(m,A)$. ###### Remark 1.2. Let us set $A=\\{(1,1),(3,4)\\}$, $B=\\{(1,1,2),(1,3,1),(1,3,4)\\}$, and $m\geq 5$. Then $\mathscr{C}(m,A)\subseteq\mathscr{C}(m,B)$. In order to verify this inclusion, we take $a_{1},a_{2},a_{3}\in S\in\mathscr{C}(m,A)$. * • If $a_{1}\equiv 1\pmod{m}$, $a_{2}\equiv 1\pmod{m}$, $a_{3}\equiv 2\pmod{m}$, then $a_{1}+a_{2}+a_{3}-m=(a_{1}+a_{2}-m)+a_{3}\in S$. * • If $a_{1}\equiv 1\pmod{m}$, $a_{2}\equiv 3\pmod{m}$, $a_{3}\equiv 1\pmod{m}$, then $a_{1}+a_{2}+a_{3}-m=(a_{1}+a_{3}-m)+a_{2}\in S$. * • If $a_{1}\equiv 1\pmod{m}$, $a_{2}\equiv 3\pmod{m}$, $a_{3}\equiv 4\pmod{m}$, then $a_{1}+a_{2}+a_{3}-m=(a_{2}+a_{3}-m)+a_{1}\in S$. More generally, let us suppose that, for each $b\in B$, there exists $a\in A$ such that $b$ is obtained by adding coordinates to $a$. Then $\mathscr{C}(m,A)\subseteq\mathscr{C}(m,B)$. Finally, it is worth mentioning that in [4, Section 2] and in [14, Introduction] there are several motivating examples, and references, as to why it is interesting to study (non-homogeneous) patterns of numerical semigroups. ## 2 The pseudo-variety $\mathscr{C}\boldsymbol{(m,A)}$ In this section, $m$ is an integer greater than or equal to $2$ and $A$ is a finite subset of $\bigcup_{k\in\mathbb{N}\setminus\\{0,1\\}}\\{1,\ldots,m-1\\}^{k}$. Moreover, recall that $\displaystyle\mathscr{C}(m,A)=\Big{\\{}S\in\mathscr{S}_{m}\mid s_{1}+\cdots+s_{k}-m\in S\mbox{ if }(s_{1},\ldots,s_{k})\in S^{k}\mbox{ and }$ $\displaystyle(s_{1}\bmod m,\ldots,s_{k}\bmod m)\in A\Big{\\}},$ where $x\bmod m$ denotes the remainder after division of $x$ by $m$. 
If $S$ is a numerical semigroup and $x\in S\setminus\\{0\\}$, then the _Apéry set of $x$ in $S$_ (see [1]) is $\mathrm{Ap}(S,x)=\\{w(0)=0,w(1),\ldots,w(x-1)\\}$, where $w(i)$ is the least element of $S$ that is congruent to $i$ modulo $x$. Observe that an integer $s$ belongs to $S$ if and only if there exists $t\in\mathbb{N}$ such that $s=w(s\bmod x)+tx$. ###### Proposition 2.1. Let $S\in\mathscr{S}_{m}$ and $\mathrm{Ap}(S,m)=\\{w(0)=0,w(1),\ldots,w(m-1)\\}$. Then the following conditions are equivalent. 1. 1. $S\in\mathscr{C}(m,A)$. 2. 2. $w(i_{1})+\cdots+w(i_{k})-m\in S$ for all $(i_{1},\ldots,i_{k})\in A$. ###### Proof. (1. $\Rightarrow$ 2.) Since $w(i_{1}),\ldots,w(i_{k})\in S$ and $(w(i_{1})\bmod m,\ldots,w(i_{k})\bmod m)=(i_{1},\ldots,i_{k})\in A$, then $w(i_{1})+\cdots+w(i_{k})-m\in S$. (2. $\Rightarrow$ 1.) Let $s_{1},\ldots,s_{k}\in S$ such that $(s_{1}\bmod m,\ldots,s_{k}\bmod m)=(i_{1},\ldots,i_{k})\in A.$ Then there exist $t_{1},\ldots,t_{k}\in\mathbb{N}$ such that $s_{j}=w(i_{j})+t_{j}m$, $1\leq j\leq k$, and thus, $s_{1}+\cdots+s_{k}-m=(w(i_{1})+\cdots+w(i_{k})-m)+(t_{1}+\cdots+t_{k})m\in S$. ∎ By using the function AperyListOfNumericalSemigroupWRTElement(S,m) of [6], we can compute $\mathrm{Ap}(S,m)$ from a system of generators of $S$. Thus, we have the following algorithm to decide whether a numerical semigroup belongs to $\mathscr{C}(m,A)$. ###### Algorithm 2.2. INPUT: A finite subset $G$ of positive integers. OUTPUT: $\langle G\rangle\in\mathscr{C}(m,A)$ or $\langle G\rangle\notin\mathscr{C}(m,A)$. * (1) If $\min(G)\not=m$, return $\langle G\rangle\notin\mathscr{C}(m,A)$. * (2) If $\gcd(G)\not=1$, return $\langle G\rangle\notin\mathscr{C}(m,A)$. * (3) Compute $\mathrm{Ap}(\langle G\rangle,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$. * (4) If $w(i_{1})+\cdots+w(i_{k})-m\in\langle G\rangle$ for all $(i_{1},\ldots,i_{k})\in A$, return $\langle G\rangle\in\mathscr{C}(m,A)$. * (5) Return $\langle G\rangle\notin\mathscr{C}(m,A)$. 
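Algorithm 2.2 can also be sketched directly in Python (an illustration only; the function names and the dynamic-programming computation of the Apéry set are ours, whereas the paper relies on the GAP function quoted above):

```python
from math import gcd
from functools import reduce

def apery_set(gens, m):
    """Ap(<gens>, m) as a list w with w[i] the least element of S congruent
    to i modulo m (assumes m in gens and gcd(gens) = 1)."""
    w, in_S, found, n = [0] + [None] * (m - 1), {0}, 1, 0
    while found < m:
        n += 1
        if any(n >= g and n - g in in_S for g in gens):
            in_S.add(n)
            if w[n % m] is None:
                w[n % m] = n
                found += 1
    return w

def member(w, m, x):
    """x belongs to S if and only if x >= w(x mod m)."""
    return x >= 0 and x >= w[x % m]

def in_C(G, m, A):
    """Algorithm 2.2: decide whether <G> belongs to C(m, A)."""
    if min(G) != m:                    # step (1): multiplicity must be m
        return False
    if reduce(gcd, list(G)) != 1:      # step (2): <G> must be numerical
        return False
    w = apery_set(G, m)                # step (3)
    return all(member(w, m, sum(w[i] for i in t) - m) for t in A)  # (4)-(5)
```

For the data of Example 2.3, `in_C([5, 7, 9], 5, [(1, 3), (2, 2)])` returns `True`.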
Let us illustrate the working of the previous algorithm through an example. ###### Example 2.3. Let us make use of Algorithm 2.2 with $G=\\{5,7,9\\}$, $m=5$, and $A=\\{(1,3),(2,2)\\}$. * • $\min(G)=5$. * • $\gcd(G)=1$. * • $\mathrm{Ap}(\langle G\rangle,5)=\\{w(0)=0,w(1)=16,w(2)=7,w(3)=18,w(4)=9\\}$. * • $w(1)+w(3)-5=29\in\langle G\rangle$ and $w(2)+w(2)-5=9\in\langle G\rangle$. * • Therefore, $\langle G\rangle=\langle 5,7,9\rangle\in\mathscr{C}(5,\\{(1,3),(2,2)\\})$. Recall that, if $m$ is an integer greater than or equal to $2$, then we denote by $\Delta(m)=\\{0,m,\to\\}$. It is clear that $\Delta(m)\in\mathscr{S}_{m}$ and that, if $s_{1},\ldots,s_{k}\in\Delta(m)\setminus\\{0\\}$ and $k\geq 2$, then $s_{1}+\cdots+s_{k}-m\in\Delta(m)$. Therefore, we have the next result. ###### Lemma 2.4. Let $m\in\mathbb{N}\setminus\\{0,1\\}$ and $A\subseteq\bigcup_{k\in\\{2,\to\\}}\\{1,\ldots,m-1\\}^{k}$ ($A$ finite). Then $\Delta(m)\in\mathscr{C}(m,A)$. Let $m\in\mathbb{N}\setminus\\{0,1\\}$. Then it is easy to show that $S\cap T\in\mathscr{S}_{m}$ for all $S,T\in\mathscr{S}_{m}$. Moreover, $\Delta(m)$ is the maximum of $\mathscr{S}_{m}$ and, if $S\in\mathscr{S}_{m}$ and $S\not=\Delta(m)$, then $S\cup\\{\mathrm{F}(S)\\}\in\mathscr{S}_{m}$. From all this, we conclude the following result. ###### Lemma 2.5. Let $m\in\mathbb{N}\setminus\\{0,1\\}$. Then $\mathscr{S}_{m}$ is a Frobenius pseudo-variety with $\Delta(m)$ as maximum element. ###### Proposition 2.6. $\mathscr{C}(m,A)$ is a Frobenius pseudo-variety. ###### Proof. From Lemmas 2.4 and 2.5, we have that $\Delta(m)$ is the maximum of $\mathscr{C}(m,A)$. It is also easy to see that, if $S,T\in\mathscr{C}(m,A)$, then $S\cap T\in\mathscr{C}(m,A)$. In order to finish the proof, let us see that, if $S\in\mathscr{C}(m,A)$ and $S\not=\Delta(m)$, then $S\cup\\{\mathrm{F}(S)\\}\in\mathscr{C}(m,A)$. 
For that, we have to show that, if $s_{1},\ldots,s_{k}\in S\cup\\{\mathrm{F}(S)\\}$ and $(s_{1}\bmod m,\ldots,s_{k}\bmod m)\in A$, then $s_{1}+\cdots+s_{k}-m\in S\cup\\{\mathrm{F}(S)\\}$. Indeed, if $\mathrm{F}(S)\notin\\{s_{1},\ldots,s_{k}\\}$, then the result is true because $S\in\mathscr{C}(m,A)$. On the other hand, if $\mathrm{F}(S)\in\\{s_{1},\ldots,s_{k}\\}$, then $s_{1}+\cdots+s_{k}-m\geq\mathrm{F}(S)$ (note that $s_{l}\geq m$ for every $l$) and, consequently, $s_{1}+\cdots+s_{k}-m\in S\cup\\{\mathrm{F}(S)\\}$. ∎ Our next purpose in this section is to show an algorithm that allows us to build all the elements of $\mathscr{C}(m,A)$ that have a fixed genus. To do that, we use the concept of rooted tree. A _graph_ $G$ is a pair $(V,E)$ where $V$ is a non-empty set (whose elements are called _vertices_ of $G$) and $E$ is a subset of $\\{(v,w)\in V\times V\mid v\neq w\\}$ (whose elements are called _edges_ of $G$). A _path (of length $n$) connecting the vertices $x$ and $y$ of $G$_ is a sequence of different edges $(v_{0},v_{1}),(v_{1},v_{2}),\ldots,(v_{n-1},v_{n})$ such that $v_{0}=x$ and $v_{n}=y$. We say that a graph $G$ is a _(rooted) tree_ if there exists a vertex $r$ (known as the _root_ of $G$) such that, for any other vertex $x$ of $G$, there exists a unique path connecting $x$ and $r$. If there exists a path connecting $x$ and $y$, then we say that $x$ is a _descendant_ of $y$. In particular, if $(x,y)$ is an edge of the tree, then we say that $x$ is a _child_ of $y$. (See [12].) We define the graph $\mathrm{G}\big{(}\mathscr{C}(m,A)\big{)}$ in the following way: $\mathscr{C}(m,A)$ is the set of vertices and $(S,S^{\prime})\in\mathscr{C}(m,A)\times\mathscr{C}(m,A)$ is an edge if $S\cup\\{\mathrm{F}(S)\\}=S^{\prime}$. The following result is a consequence of Lemma 4.2 and Theorem 4.3 of [9]. ###### Theorem 2.7. $\mathrm{G}\big{(}\mathscr{C}(m,A)\big{)}$ is a tree with root $\Delta(m)$. 
Moreover, the children set of $S\in\mathscr{C}(m,A)$ is $\big{\\{}S\setminus\\{x\\}\mid x\in\mathrm{msg}(S),\ S\setminus\\{x\\}\in\mathscr{C}(m,A),\ x>\mathrm{F}(S)\big{\\}}.$ Let $S$ be a numerical semigroup and let $x\in S$. Then it is clear that $S\setminus\\{x\\}$ is a numerical semigroup if and only if $x\in\mathrm{msg}(S)$. Moreover, let us observe that, if $x\in\mathrm{msg}(S)$ and $m\in S\setminus\\{0,x\\}$, then $\mathrm{Ap}(S\setminus\\{x\\},m)=\big{(}\mathrm{Ap}(S,m)\setminus\\{x\\}\big{)}\cup\\{x+m\\}.$ In the following result we characterise the children of $S\in\mathscr{C}(m,A)$. ###### Proposition 2.8. Let $S\in\mathscr{C}(m,A)$, $\mathrm{Ap}(S,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$ and $x\in\mathrm{msg}(S)\setminus\\{m\\}$. Then $S\setminus\\{x\\}\in\mathscr{C}(m,A)$ if and only if $w(i_{1})+\cdots+w(i_{k})\neq m+x$ for all $(i_{1},\ldots,i_{k})\in A$. ###### Proof. Let us suppose that $\mathrm{Ap}(S\setminus\\{x\\},m)=\big{(}\mathrm{Ap}(S,m)\setminus\\{x\\}\big{)}\cup\\{x+m\\}=\\{w^{\prime}(0),w^{\prime}(1),\ldots,w^{\prime}(m-1)\\}$. (Necessity.) If $w(i_{1})+\cdots+w(i_{k})=m+x$, then $m+x\notin\\{w(i_{1}),\ldots,w(i_{k})\\}$ and, therefore, $w(i_{1})=w^{\prime}(i_{1}),\ldots,w(i_{k})=w^{\prime}(i_{k})$. Moreover, $w(i_{1})+\cdots+w(i_{k})-m=x\notin S\setminus\\{x\\}$ and, consequently, $S\setminus\\{x\\}\notin\mathscr{C}(m,A)$. (Sufficiency.) In order to prove that $S\setminus\\{x\\}\in\mathscr{C}(m,A)$, by Proposition 2.1, it is enough to see that, if $(i_{1},\ldots,i_{k})\in A$, then $w^{\prime}(i_{1})+\cdots+w^{\prime}(i_{k})-m\in S\setminus\\{x\\}$. Since $S\in\mathscr{C}(m,A)$, we easily deduce that $w^{\prime}(i_{1})+\cdots+w^{\prime}(i_{k})-m\in S$. Now, if $w^{\prime}(i_{1})+\cdots+w^{\prime}(i_{k})-m\notin S\setminus\\{x\\}$, then $w^{\prime}(i_{1})+\cdots+w^{\prime}(i_{k})=m+x$. 
Thus, $w(i_{1})=w^{\prime}(i_{1}),\ldots,w(i_{k})=w^{\prime}(i_{k})$ and $w(i_{1})+\cdots+w(i_{k})=m+x$, where the last equality contradicts the hypothesis. ∎ Let us observe that a tree can be built in a recurrent way starting from its root and connecting each vertex with its children. Let us also observe that the elements of $\mathscr{C}(m,A)$ with genus equal to $g+1$ are precisely the children of the elements of $\mathscr{C}(m,A)$ with genus equal to $g$. We are now ready to show the algorithm announced above. ###### Algorithm 2.9. INPUT: A positive integer $g$. OUTPUT: $\\{S\in\mathscr{C}(m,A)\mid\mathrm{g}(S)=g\\}$. * (1) If $g<m-1$, return $\emptyset$. * (2) $X=\\{\Delta(m)\\}$ and $i=m-1$. * (3) If $i=g$, return $X$. * (4) For each $S\in X$, compute the set $\mathscr{B}_{S}=\big{\\{}x\in\mathrm{msg}(S)\mid x>\mathrm{F}(S),\ x\not=m,\ S\setminus\\{x\\}\in\mathscr{C}(m,A)\big{\\}}.$ * (5) If $\bigcup_{S\in X}\mathscr{B}_{S}=\emptyset$, return $\emptyset$. * (6) $X=\bigcup_{S\in X}\big{\\{}S\setminus\\{x\\}\mid x\in\mathscr{B}_{S}\big{\\}}$, $i=i+1$, and go to (3). For computing item (4) in the above algorithm, we use Proposition 2.8 and the following Lemma 2.10, which is a reformulation of Corollary 18 of [8] and is useful to obtain the minimal system of generators of $S\setminus\\{x\\}$ starting from the minimal system of generators of $S$. ###### Lemma 2.10. Let $S$ be a numerical semigroup with $\mathrm{msg}(S)=\\{n_{1}<n_{2}<\cdots<n_{e}\\}$. If $i\in\\{2,\ldots,e\\}$ and $n_{i}>\mathrm{F}(S)$, then $\mathrm{msg}(S\setminus\\{n_{i}\\})=\left\\{\begin{array}[]{ll}\\{n_{1},\ldots,n_{e}\\}\setminus\\{n_{i}\\},&\mbox{if there exists }j\in\\{2,\ldots,i-1\\}\mbox{ such that }n_{i}+n_{1}-n_{j}\in S;\\\ \big{(}\\{n_{1},\ldots,n_{e}\\}\setminus\\{n_{i}\\}\big{)}\cup\\{n_{i}+n_{1}\\},&\mbox{otherwise.}\end{array}\right.$ We illustrate the functioning of Algorithm 2.9 with an example. ###### Example 2.11. 
Let us compute all the elements of $\mathscr{C}(5,\\{(1,1),(1,2)\\})$ with genus equal to $6$. * • $X=\\{\langle 5,6,7,8,9\rangle\\}$ and $i=4$. * • $\mathscr{B}_{\langle 5,6,7,8,9\rangle}=\\{6,9\\}$. * • $X=\\{\langle 5,7,8,9,11\rangle,\langle 5,6,7,8\rangle\\}$ and $i=5$. * • $\mathscr{B}_{\langle 5,7,8,9,11\rangle}=\\{7,8,9,11\\}$ and $\mathscr{B}_{\langle 5,6,7,8\rangle}=\emptyset$. * • $X=\\{\langle 5,8,9,11,12\rangle,\langle 5,7,9,11,13\rangle,\langle 5,7,8,11\rangle,\langle 5,7,8,9\rangle\\}$ and $i=6$. * • Return $\\{\langle 5,8,9,11,12\rangle,\langle 5,7,9,11,13\rangle,\langle 5,7,8,11\rangle,\langle 5,7,8,9\rangle\\}$. Taking advantage of the above example, we finish this section by building the first three levels of the tree $\mathrm{G}\big{(}\mathscr{C}(5,\\{(1,1),(1,2)\\})\big{)}$: * • First level: $\langle 5,6,7,8,9\rangle$. * • Second level: $\langle 5,7,8,9,11\rangle$ and $\langle 5,6,7,8\rangle$, children of $\langle 5,6,7,8,9\rangle$ via the edges labelled $6$ and $9$, respectively. * • Third level: $\langle 5,8,9,11,12\rangle$, $\langle 5,7,9,11,13\rangle$, $\langle 5,7,8,11\rangle$, and $\langle 5,7,8,9\rangle$, children of $\langle 5,7,8,9,11\rangle$ via the edges labelled $7$, $8$, $9$, and $11$, respectively (the semigroup $\langle 5,6,7,8\rangle$ has no children). Observe that the number which appears next to each edge $(S^{\prime},S)$ is the minimal generator $x$ of $S$ such that $S^{\prime}=S\setminus\\{x\\}$. ## 3 Second-level numerical semigroups We say that a numerical semigroup $S$ is of second-level if $x+y+z-\mathrm{m}(S)\in S$ for all $(x,y,z)\in(S\setminus\\{0\\})^{3}$. We denote by $\mathscr{L}_{2,m}$ the set of all the second-level numerical semigroups with multiplicity equal to $m$. ###### Proposition 3.1. Let $S$ be a numerical semigroup with minimal system of generators given by $\\{m=n_{1}<n_{2}<\cdots<n_{e}\\}$. Then the following two conditions are equivalent. 1. 1. $S\in\mathscr{L}_{2,m}$. 2. 2. If $(i,j,k)\in\\{2,\ldots,e\\}^{3}$, then $n_{i}+n_{j}+n_{k}-m\in S$. ###### Proof. (1. $\Rightarrow$ 2.) It is evident from the definition of second-level numerical semigroup. (2. $\Rightarrow$ 1.) Let $(x,y,z)\in(S\setminus\\{0\\})^{3}$. If $0\in\\{x\bmod m,y\bmod m,z\bmod m\\}$, then it is clear that $x+y+z-m\in S$. 
Now, if $0\notin\\{x\bmod m,y\bmod m,z\bmod m\\}$, then we easily deduce that there exist $(i,j,k)\in\\{2,\ldots,e\\}^{3}$ and $(s_{1},s_{2},s_{3})\in S^{3}$ such that $(x,y,z)=(n_{i},n_{j},n_{k})+(s_{1},s_{2},s_{3})$. Therefore, $x+y+z-m=(n_{i}+n_{j}+n_{k}-m)+s_{1}+s_{2}+s_{3}\in S$. Consequently, $S\in\mathscr{L}_{2,m}$. ∎ The above proposition allows us to easily decide whether a numerical semigroup is of second-level or not. ###### Example 3.2. Let us see that $S=\langle 5,7,16\rangle\in\mathscr{L}_{2,5}$. Indeed, it is clear that $\\{7+7+7-5,7+7+16-5,7+16+16-5,16+16+16-5\\}=\\{16,25,34,43\\}\subseteq S$. Therefore, by Proposition 3.1, we have that $S\in\mathscr{L}_{2,5}$. Now our intention is to show that $\mathscr{L}_{2,m}$ is a modular Frobenius pseudo-variety. ###### Proposition 3.3. Let $m\in\mathbb{N}\setminus\\{0,1\\}$. Then $\mathscr{L}_{2,m}=\mathscr{C}(m,\\{1,\ldots,m-1\\}^{3})$. ###### Proof. Let $S\in\mathscr{L}_{2,m}$ and $\mathrm{Ap}(S,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$. If $(i,j,k)\in\\{1,\ldots,m-1\\}^{3}$, then $(w(i),w(j),w(k))\in(S\setminus\\{0\\})^{3}$ and, therefore, $w(i)+w(j)+w(k)-m\in S$. By applying Proposition 2.1, we have that $S\in\mathscr{C}(m,\\{1,\ldots,m-1\\}^{3})$. In order to see the other inclusion, let $S\in\mathscr{C}(m,\\{1,\ldots,m-1\\}^{3})$ and $(x,y,z)\in(S\setminus\\{0\\})^{3}$. Firstly, if $0\in\\{x\bmod m,y\bmod m,z\bmod m\\}$, then it is clear that $x+y+z-m\in S$. Secondly, if $0\notin\\{x\bmod m,y\bmod m,z\bmod m\\}$, then there exist $(i,j,k)\in\\{1,\ldots,m-1\\}^{3}$ and $(p,q,r)\in\mathbb{N}^{3}$ such that $x=w(i)+pm$, $y=w(j)+qm$, and $z=w(k)+rm$. Thus, $x+y+z-m=(w(i)+w(j)+w(k)-m)+(p+q+r)m\in S$ and, therefore, $S\in\mathscr{L}_{2,m}$. ∎ As an immediate consequence of Propositions 2.6 and 3.3, we have the following result. ###### Corollary 3.4. Let $m\in\mathbb{N}\setminus\\{0,1\\}$. 
Then $\mathscr{L}_{2,m}$ is a modular Frobenius pseudo-variety and, in addition, $\Delta(m)$ is the maximum of $\mathscr{L}_{2,m}$. Our next step is to build the tree associated with the pseudo-variety $\mathscr{L}_{2,m}$. To do this, we characterise the possible children of each element in $\mathscr{L}_{2,m}$. Let $S$ be a numerical semigroup with $\mathrm{msg}(S)=\\{n_{1},\ldots,n_{e}\\}$. If $s\in S$, then we denote by $L_{S}(s)=\max\\{a_{1}+\cdots+a_{e}\mid(a_{1},\ldots,a_{e})\in\mathbb{N}^{e}\mbox{ and }a_{1}n_{1}+\cdots+a_{e}n_{e}=s\\}.$ ###### Proposition 3.5. Let $m\in\mathbb{N}\setminus\\{0,1\\}$, $S\in\mathscr{L}_{2,m}$, and $x\in\mathrm{msg}(S)\setminus\\{m\\}$. Then $S\setminus\\{x\\}\in\mathscr{L}_{2,m}$ if and only if $L_{S\setminus\\{x\\}}(x+m)\leq 2$. ###### Proof. (Necessity.) Let us suppose that $L_{S\setminus\\{x\\}}(x+m)\geq 3$. Then there exists $(a,b,c)\in(S\setminus\\{0,x\\})^{3}$ such that $x+m=a+b+c$. Thus, $a+b+c-m=x\notin S\setminus\\{x\\}$ and, therefore, $S\setminus\\{x\\}\notin\mathscr{L}_{2,m}$. (Sufficiency.) If $(a,b,c)\in(S\setminus\\{0,x\\})^{3}$, then $a+b+c-m\in S$, since $S\in\mathscr{L}_{2,m}$. Now, if $a+b+c-m=x$, then $L_{S\setminus\\{x\\}}(x+m)\geq 3$. Therefore, $a+b+c-m\not=x$ and, consequently, $a+b+c-m\in S\setminus\\{x\\}$. Thus, we conclude that $S\setminus\\{x\\}\in\mathscr{L}_{2,m}$. ∎ By applying Theorem 2.7, Propositions 3.3 and 3.5, and Lemma 2.10, we can easily build the tree $\mathrm{G}(\mathscr{L}_{2,m})$. ###### Example 3.6. The first four levels of $\mathrm{G}(\mathscr{L}_{2,4})$ are the following. 
* • First level: $\langle 4,5,6,7\rangle$. * • Second level: $\langle 4,6,7,9\rangle$, $\langle 4,5,7\rangle$, and $\langle 4,5,6\rangle$, children of $\langle 4,5,6,7\rangle$ via the edges labelled $5$, $6$, and $7$, respectively. * • Third level: $\langle 4,7,9,10\rangle$, $\langle 4,6,9,11\rangle$, and $\langle 4,6,7\rangle$, children of $\langle 4,6,7,9\rangle$ via the edges labelled $6$, $7$, and $9$; and $\langle 4,5,11\rangle$, child of $\langle 4,5,7\rangle$ via the edge labelled $7$. * • Fourth level: $\langle 4,9,10,11\rangle$, $\langle 4,7,10,13\rangle$, and $\langle 4,7,9\rangle$, children of $\langle 4,7,9,10\rangle$ via the edges labelled $7$, $9$, and $10$; and $\langle 4,6,11,13\rangle$ and $\langle 4,6,9\rangle$, children of $\langle 4,6,9,11\rangle$ via the edges labelled $9$ and $11$. The Frobenius problem (see [7]) consists in finding formulas that allow us to compute the Frobenius number and the genus of a numerical semigroup in terms of the minimal system of generators of such a numerical semigroup. This problem was solved in [15] for numerical semigroups with embedding dimension two. At present, the Frobenius problem is open for embedding dimension greater than or equal to $3$. However, if we know the Apéry set $\mathrm{Ap}(S,x)$ for some $x\in S\setminus\\{0\\}$, then we have solved the Frobenius problem for $S$ because we have the following result from [13]. ###### Lemma 3.7. Let $S$ be a numerical semigroup and let $x\in S\setminus\\{0\\}$. Then 1. 1. $\mathrm{F}(S)=(\max(\mathrm{Ap}(S,x)))-x$, 2. 2. $\mathrm{g}(S)=\frac{1}{x}(\sum_{w\in\mathrm{Ap}(S,x)}w)-\frac{x-1}{2}$. The knowledge of $\mathrm{Ap}(S,x)=\\{w(0),w(1),\ldots,w(x-1)\\}$ also allows us to determine whether an integer belongs to the numerical semigroup $S$. In fact, if $n\in\mathbb{N}$, then $n\in S$ if and only if $n\geq w(n\bmod x)$. Now our purpose is to show that, if $S\in\mathscr{L}_{2,m}$, then it is rather easy to compute $\mathrm{Ap}(S,m)$. We need the following easy result. ###### Lemma 3.8. Let $m\in\mathbb{N}\setminus\\{0,1\\}$, $S\in\mathscr{L}_{2,m}$, $\mathrm{msg}(S)=\\{n_{1}\\!=\\!m,n_{2},\ldots,n_{e}\\}$. Then $\\{0,n_{2},\ldots,n_{e}\\}\\!\subseteq\\!\mathrm{Ap}(S,m)\\!\subseteq\\!\\{0,n_{2},\ldots,n_{e}\\}\cup\\{n_{i}+n_{j}\mid(i,j)\in\\{2,\ldots,e\\}^{2}\\}.$ As an immediate consequence of the above lemma we can formulate the following result. ###### Proposition 3.9. 
Let $m\in\mathbb{N}\setminus\\{0,1\\}$, $S\in\mathscr{L}_{2,m}$, $\mathrm{msg}(S)=\\{n_{1}\\!=\\!m,n_{2},\ldots,n_{e}\\}$. Then $\mathrm{Ap}(S,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$ where $w(i)$ is the least element of $\\{0,n_{2},\ldots,n_{e}\\}\cup\\{n_{j}+n_{l}\mid(j,l)\in\\{2,\ldots,e\\}^{2}\\}$ that is congruent to $i$ modulo $m$. ###### Corollary 3.10. Let $m\in\mathbb{N}\setminus\\{0,1\\}$ and $S\in\mathscr{L}_{2,m}$. Then $m=\mathrm{m}(S)\leq\frac{\mathrm{e}(S)\left(\mathrm{e}(S)+1\right)}{2}$. Following the notation introduced in [10], we say that $x\in\mathbb{Z}\setminus S$ is a pseudo-Frobenius number of $S$ if $x+s\in S$ for all $s\in S\setminus\\{0\\}$. We denote by $\mathrm{PF}(S)$ the set of all the pseudo-Frobenius numbers of $S$. The cardinality of $\mathrm{PF}(S)$ is an important invariant of $S$ (see [2]), the so-called _type_ of $S$, which is denoted by $\mathrm{t}(S)$. Let $S$ be a numerical semigroup. Then we define the following binary relation over $\mathbb{Z}$: $a\leq_{S}b$ if $b-a\in S$. In [11] it is shown that $\leq_{S}$ is a partial order (that is, reflexive, transitive, and antisymmetric). Moreover, Proposition 2.20 of [11] is the following result. ###### Proposition 3.11. Let $S$ be a numerical semigroup and $x\in S\setminus\\{0\\}$. Then $\mathrm{PF}(S)=\\{w-x\mid w\in\mathrm{Maximals}_{\leq_{S}}(\mathrm{Ap}(S,x))\\}$. Let us observe that, if $w,w^{\prime}\in\mathrm{Ap}(S,x)$, then $w^{\prime}-w\in S$ if and only if $w^{\prime}-w\in\mathrm{Ap}(S,x)$. Therefore, $\mathrm{Maximals}_{\leq_{S}}(\mathrm{Ap}(S,x))$ is the set $\\{w\in\mathrm{Ap}(S,x)\mid w^{\prime}-w\notin\mathrm{Ap}(S,x)\setminus\\{0\\}\mbox{ for all }w^{\prime}\in\mathrm{Ap}(S,x)\\}.$ We finish this section with an example that illustrates the above results. ###### Example 3.12. Having in mind that $S=\langle 5,6,13\rangle$ is a second-level numerical semigroup, it is easy to compute $\mathrm{Ap}(S,5)$. 
In fact, by applying Lemma 3.8, we have that $\mathrm{Ap}(S,5)\subseteq\\{0,6,13\\}\cup\\{12,19,26\\}$ and, by Proposition 3.9, we conclude that $\mathrm{Ap}(S,5)=\\{0,6,12,13,19\\}$. On the other hand, by Lemma 3.7, we know that $\mathrm{F}(S)=19-5=14$ and $\mathrm{g}(S)=\frac{1}{5}(6+12+13+19)-\frac{5-1}{2}=8$. Finally, since $\mathrm{Maximals}_{\leq_{S}}(\mathrm{Ap}(S,5))=\\{12,19\\}$, Proposition 3.11 allows us to claim that $\mathrm{PF}(S)=\\{7,14\\}$ and $\mathrm{t}(S)=2$. ###### Remark 3.13. The definition of second-level numerical semigroup can be easily generalised to greater levels. Thus, we say that a numerical semigroup is of $n$th-level if $x_{1}+\cdots+x_{n+1}-\mathrm{m}(S)\in S$ for all $(x_{1},\ldots,x_{n+1})\in(S\setminus\\{0\\})^{n+1}$ and denote by $\mathscr{L}_{n,m}$ the set of all the $n$th-level numerical semigroups with multiplicity equal to $m$. It is clear that for the $n$th-level case we obtain results similar to those of the second-level case. In particular, $\mathscr{L}_{n,m}=\mathscr{C}(m,\\{1,\ldots,m-1\\}^{n+1})$ and Proposition 3.5 remains true taking $L_{S\setminus\\{x\\}}(x+m)\leq n$. On the other hand, having in mind that $\mathscr{L}_{1,m}$ is the family of numerical semigroups with maximal embedding dimension and that (by Remark 1.1) $\mathscr{L}_{m-1,m}=\mathscr{S}_{m}$, we can observe that $\mathscr{L}_{1,m}\subseteq\mathscr{L}_{2,m}\subseteq\cdots\subseteq\mathscr{L}_{m-1,m}=\mathscr{S}_{m},$ where all the inclusions are strict, as we can deduce from the following example. ###### Example 3.14. 
Let us set $m=5$ and $S_{1}=\langle 5,6\rangle=\\{0,5,6,10,11,12,15,16,17,18,20,\to\\}.$ Then, having in mind that, if some $s_{i}$ is a multiple of $5$, the corresponding condition is trivially fulfilled (so that we may assume $s_{i}\geq 6$ for all $i$), * • $S_{1}\in\mathscr{L}_{4,5}\setminus\mathscr{L}_{3,5}=\mathscr{S}_{5}\setminus\mathscr{L}_{3,5}$, because $4\times 6-5\not\in S_{1}$ and $s_{1}+\cdots+s_{5}-5\geq 5\times 6-5\geq\mathrm{F}(S_{1})+1$; * • $S_{2}=S_{1}\cup\\{19\\}=\langle 5,6,19\rangle\in\mathscr{L}_{3,5}\setminus\mathscr{L}_{2,5}$, because $3\times 6-5\not\in S_{2}$ and $s_{1}+\cdots+s_{4}-5\geq 4\times 6-5\geq\mathrm{F}(S_{2})+1$; * • $S_{3}=S_{2}\cup\\{13,14\\}=\langle 5,6,13,14\rangle\in\mathscr{L}_{2,5}\setminus\mathscr{L}_{1,5}$, because $2\times 6-5\not\in S_{3}$ and $s_{1}+s_{2}+s_{3}-5\geq 3\times 6-5\geq\mathrm{F}(S_{3})+1$. Generalising this example to other values of $m$ is trivial. ###### Remark 3.15. Let us observe that the chain obtained in Remark 3.13 is reminiscent, in some sense, of the chain associated with subtraction patterns (see [3, Section 6]). ## 4 Thin numerical semigroups We say that a numerical semigroup $S$ is thin if $2x-\mathrm{m}(S)\in S$ for all $x\in S\setminus\\{0\\}$. We denote by $\mathscr{T}_{m}$ the set of all the thin numerical semigroups with multiplicity equal to $m$. ###### Proposition 4.1. Let $S$ be a numerical semigroup with minimal system of generators given by $\\{m=n_{1}<n_{2}<\cdots<n_{e}\\}$. Then the following two conditions are equivalent. 1. 1. $S\in\mathscr{T}_{m}$. 2. 2. $2n_{i}-m\in S$ for all $i\in\\{2,\ldots,e\\}$. ###### Proof. (1. $\Rightarrow$ 2.) It follows from the definition of thin numerical semigroup. (2. $\Rightarrow$ 1.) Let $x\in S\setminus\\{0\\}$. If $x\equiv 0\pmod{m}$, then it is clear that $2x-m\in S$. Now, if $x\not\equiv 0\pmod{m}$, then we have that there exist $i\in\\{2,\ldots,e\\}$ and $s\in S$ such that $x=n_{i}+s$. Therefore, $2x-m=(2n_{i}-m)+2s\in S$ and, consequently, $S\in\mathscr{T}_{m}$. ∎ The above proposition allows us to easily decide whether a numerical semigroup is thin or not. ###### Example 4.2. 
Let us see that $S=\langle 4,6,7\rangle\in\mathscr{T}_{4}$. Indeed, it is clear that $\\{2\cdot 6-4,2\cdot 7-4\\}=\\{8,10\\}\subseteq S$. Therefore, by Proposition 4.1, we have that $S\in\mathscr{T}_{4}$. Now we want to show that $\mathscr{T}_{m}$ is a modular Frobenius pseudo-variety. ###### Proposition 4.3. $\mathscr{T}_{m}=\mathscr{C}(m,\\{(1,1),(2,2),\ldots,(m-1,m-1)\\})$ for all $m\in\mathbb{N}\setminus\\{0,1\\}$. ###### Proof. If $S\in\mathscr{T}_{m}$ and $\mathrm{Ap}(S,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$, then it is clear that $\\{w(1)+w(1)-m,\ldots,w(m-1)+w(m-1)-m\\}\subseteq S$. Therefore, by applying Proposition 2.1, we have that $S\in\mathscr{C}(m,\\{(1,1),(2,2),\ldots,(m-1,m-1)\\})$. To see the other inclusion, let $S\in\mathscr{C}(m,\\{(1,1),(2,2),\ldots,(m-1,m-1)\\})$ and $x\in S\setminus\\{0\\}$. On the one hand, if $x\equiv 0\pmod{m}$, then it is clear that $2x-m\in S$. On the other hand, if $x\not\equiv 0\pmod{m}$, then there exist $i\in\\{1,\ldots,m-1\\}$ and $t\in\mathbb{N}$ such that $x=w(i)+tm$. Therefore, $2x-m=(w(i)+w(i)-m)+2tm\in S$ and, consequently, $S\in\mathscr{T}_{m}$. ∎ From Propositions 2.6 and 4.3, we get the following result. ###### Corollary 4.4. Let $m\in\mathbb{N}\setminus\\{0,1\\}$. Then $\mathscr{T}_{m}$ is a modular Frobenius pseudo-variety and, in addition, $\Delta(m)$ is the maximum of $\mathscr{T}_{m}$. In order to build the tree associated with the pseudo-variety $\mathscr{T}_{m}$, we are going to characterise the possible children of each $S\in\mathscr{T}_{m}$. ###### Proposition 4.5. Let $m\in\mathbb{N}\setminus\\{0,1\\}$, $S\in\mathscr{T}_{m}$, and $x\in\mathrm{msg}(S)\setminus\\{m\\}$. Then $S\setminus\\{x\\}\in\mathscr{T}_{m}$ if and only if $\frac{x+m}{2}\notin S$. ###### Proof. (Necessity.) If $\frac{x+m}{2}\in S$, then $\frac{x+m}{2}\in S\setminus\\{0,x\\}$ and $2\frac{x+m}{2}-m=x\notin S\setminus\\{x\\}$. Therefore, $S\setminus\\{x\\}\notin\mathscr{T}_{m}$. (Sufficiency.)
If $a\in S\setminus\\{0,x\\}$, then $2a-m\in S$ because $S\in\mathscr{T}_{m}$. Now, if $2a-m=x$, then $\frac{x+m}{2}=a\in S$. Therefore, $2a-m\not=x$ and, consequently, $2a-m\in S\setminus\\{x\\}$. Thereby, $S\setminus\\{x\\}\in\mathscr{T}_{m}$. ∎ By applying Theorem 2.7, Propositions 4.3 and 4.5, and Lemma 2.10, we can build the tree $\mathrm{G}(\mathscr{T}_{m})$. Let us see an example. ###### Example 4.6. The first four levels of $\mathrm{G}(\mathscr{T}_{4})$ are the following (each edge is labelled by the removed minimal generator): * • Level 1: $\langle 4,5,6,7\rangle$. * • Level 2: $\langle 4,6,7,9\rangle$ (removing 5) and $\langle 4,5,6\rangle$ (removing 7). * • Level 3: from $\langle 4,6,7,9\rangle$, we get $\langle 4,7,9,10\rangle$ (removing 6), $\langle 4,6,9,11\rangle$ (removing 7), and $\langle 4,6,7\rangle$ (removing 9). * • Level 4: from $\langle 4,7,9,10\rangle$, we get $\langle 4,9,10,11\rangle$ (removing 7) and $\langle 4,7,10,13\rangle$ (removing 9); from $\langle 4,6,9,11\rangle$, we get $\langle 4,6,11,13\rangle$ (removing 9) and $\langle 4,6,9\rangle$ (removing 11). We finish this section by describing the Apéry set for $S\in\mathscr{T}_{m}$. ###### Proposition 4.7. Let $m\in\mathbb{N}\setminus\\{0,1\\}$, $S\in\mathscr{T}_{m}$, and $\mathrm{msg}(S)=\\{m=n_{1},n_{2},\ldots,n_{e}\\}$. Then $\mathrm{Ap}(S,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$, where $w(i)$ is the least element of the set $\\{a_{2}n_{2}+\ldots+a_{e}n_{e}\mid(a_{2},\ldots,a_{e})\in\\{0,1\\}^{e-1}\\}$ that is congruent to $i$ modulo $m$. ###### Corollary 4.8. Let $m\in\mathbb{N}\setminus\\{0,1\\}$ and $S\in\mathscr{T}_{m}$. Then $m=\mathrm{m}(S)\leq 2^{\mathrm{e}(S)-1}$. ###### Remark 4.9. Let us observe that we can generalise the concept of thin numerical semigroups in the following way: setting $n\in\mathbb{N}\setminus\\{0,1\\}$, we say that a numerical semigroup $S$ is $n$-thin if $nx-\mathrm{m}(S)\in S$ for all $x\in S\setminus\\{0\\}$, and denote by $\mathscr{T}_{n,m}$ the set of all $n$-thin numerical semigroups with multiplicity equal to $m$. It is clear that $\mathscr{T}_{m}=\mathscr{T}_{2,m}$.
Again, $\mathscr{T}_{n,m}$ is a modular Frobenius pseudo-variety, Proposition 4.5 is valid for the condition $\frac{x+m}{n}\not\in S$, and we can build the chain $\mathscr{T}_{2,m}\subsetneq\mathscr{T}_{3,m}\subsetneq\cdots\subsetneq\mathscr{T}_{m,m}=\mathscr{S}_{m}.$ Note that Example 3.14 also gives us the construction that ensures the strict inclusions in this case. ## 5 Strong numerical semigroups We say that a numerical semigroup $S$ is strong if $x+y-\mathrm{m}(S)\in S$ for all $(x,y)\in(S\setminus\\{0\\})^{2}$ such that $x\not\equiv y\pmod{\mathrm{m}(S)}$. We denote by $\mathscr{R}_{m}$ the set of all the strong numerical semigroups with multiplicity equal to $m$. ###### Proposition 5.1. Let $S$ be a numerical semigroup with minimal system of generators given by $\\{m=n_{1}<n_{2}<\cdots<n_{e}\\}$. Then the following two conditions are equivalent. 1. 1. $S\in\mathscr{R}_{m}$. 2. 2. $3n_{i}-m\in S$ for all $i\in\\{2,\ldots,e\\}$, and $n_{i}+n_{j}-m\in S$ for all $(i,j)\in\\{2,\ldots,e\\}^{2}$ such that $i\not=j$. ###### Proof. (1. $\Rightarrow$ 2.) It is enough to observe that $n_{i}\not\equiv n_{j}\pmod{m}$ and that $2n_{i}\not\equiv n_{i}\pmod{m}$. (2. $\Rightarrow$ 1.) Let $x,y\in S\setminus\\{0\\}$ such that $x\not\equiv y\pmod{m}$. If $x\equiv 0\pmod{m}$ or $y\equiv 0\pmod{m}$, then it is clear that $x+y-m\in S$. Now, if $x\not\equiv 0\pmod{m}$ and $y\not\equiv 0\pmod{m}$, then there exist $(i,j)\in\\{1,\ldots,m-1\\}^{2}$, with $i\not=j$, and $(p,q)\in\mathbb{N}^{2}$ such that $x=w(i)+pm$ and $y=w(j)+qm$. Moreover, if there exists $(a,b)\in\\{2,\ldots,e\\}^{2}$ such that $a\not=b$ and $w(i)-n_{a},w(j)-n_{b}\in S$, then it is easy to see that $x+y-m\in S$. Otherwise, there exists $(r,t)\in(\mathbb{N}\setminus\\{0\\})^{2}$ such that $w(i)=r\cdot n_{a}$ and $w(j)=t\cdot n_{a}$ for some $a\in\\{2,\ldots,e\\}$. Then, since $i\not=j$, we deduce that $r+t\geq 3$ and, consequently, $x+y-m\in S$. In conclusion, $S\in\mathscr{R}_{m}$.
∎ The above proposition allows us to easily decide whether a numerical semigroup is strong or not. ###### Example 5.2. If $S=\langle 4,5,7\rangle$, then $\\{5+7-4,3\cdot 5-4,3\cdot 7-4\\}=\\{8,11,17\\}\subseteq S$. Therefore, by Proposition 5.1, we have that $S\in\mathscr{R}_{4}$. Now we want to show that $\mathscr{R}_{m}$ is a modular Frobenius pseudo-variety. Let us denote $A=\\{1,\ldots,m-1\\}^{2}\setminus\\{(1,1),(2,2),\ldots,(m-1,m-1)\\}$. ###### Proposition 5.3. $\mathscr{R}_{m}=\mathscr{C}(m,A)$ for all $m\in\mathbb{N}\setminus\\{0,1\\}$. ###### Proof. Let $S\in\mathscr{R}_{m}$, $\mathrm{Ap}(S,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$, and $(i,j)\in A$. Then $(w(i),w(j))\in(S\setminus\\{0\\})^{2}$ and $w(i)\not\equiv w(j)\pmod{m}$. Therefore, $w(i)+w(j)-m\in S$ and, by applying Proposition 2.1, we have that $S\in\mathscr{C}(m,A)$. To see the other inclusion, let $S\in\mathscr{C}(m,A)$ and $(x,y)\in(S\setminus\\{0\\})^{2}$ such that $x\not\equiv y\pmod{m}$. On the one hand, if $0\in\\{x\bmod m,y\bmod m\\}$, then it is clear that $x+y-m\in S$. On the other hand, if $0\notin\\{x\bmod m,y\bmod m\\}$, then there exist $(i,j)\in A$ and $(p,q)\in\mathbb{N}^{2}$ such that $x=w(i)+pm$ and $y=w(j)+qm$. Therefore, $x+y-m=(w(i)+w(j)-m)+(p+q)m\in S$ and, consequently, $S\in\mathscr{R}_{m}$. ∎ From Propositions 2.6 and 5.3, we get the following result. ###### Corollary 5.4. Let $m\in\mathbb{N}\setminus\\{0,1\\}$. Then $\mathscr{R}_{m}$ is a modular Frobenius pseudo-variety and, in addition, $\Delta(m)$ is the maximum of $\mathscr{R}_{m}$. We are now interested in the description of the tree associated with the pseudo-variety $\mathscr{R}_{m}$. In order to do that, we are going to characterise the children of an arbitrary $S\in\mathscr{R}_{m}$. ###### Proposition 5.5. Let $m\in\mathbb{N}\setminus\\{0,1\\}$, $S\in\mathscr{R}_{m}$, and $x\in\mathrm{msg}(S)\setminus\\{m\\}$.
Then $S\setminus\\{x\\}\in\mathscr{R}_{m}$ if and only if $x+m\notin\\{a+b\mid a,b\in\mathrm{msg}(S)\setminus\\{m,x\\},\;a\not=b\\}\cup\\{3a\mid a\in\mathrm{msg}(S)\setminus\\{m,x\\}\\}$. ###### Proof. (Necessity.) If $a,b\in\mathrm{msg}(S)\setminus\\{m,x\\}$ and $a\not=b$, then $(a,b)\in(S\setminus\\{0,x\\})^{2}$ and $a\not\equiv b\pmod{m}$. Since $S\setminus\\{x\\}\in\mathscr{R}_{m}$, we have that $a+b-m\in S\setminus\\{x\\}$ and, therefore, $a+b-m\not=x$. Thus, $x+m\notin\\{a+b\mid a,b\in\mathrm{msg}(S)\setminus\\{m,x\\},\;a\not=b\\}$. On the other hand, if $a\in\mathrm{msg}(S)\setminus\\{m,x\\}$, then $(a,2a)\in(S\setminus\\{0,x\\})^{2}$ and $a\not\equiv 2a\pmod{m}$. Once again, since $S\setminus\\{x\\}\in\mathscr{R}_{m}$, we have that $3a-m\in S\setminus\\{x\\}$ and, therefore, $3a-m\not=x$. Thus, $x+m\notin\\{3a\mid a\in\mathrm{msg}(S)\setminus\\{m,x\\}\\}$. (Sufficiency.) Let $a,b\in S\setminus\\{0,x\\}$ such that $a\not=b$. Since $S\in\mathscr{R}_{m}$, we have that $a+b-m\in S$ and $3a-m\in S$. Now, by Lemma 2.10, we know that $\mathrm{msg}(S)\setminus\\{x\\}\subseteq\mathrm{msg}(S\setminus\\{x\\})\subseteq(\mathrm{msg}(S)\setminus\\{x\\})\cup\\{x+m\\}$. Thus, from this fact and the hypothesis, it easily follows that $a+b-m\not=x$ and $3a-m\not=x$, that is, $a+b-m,3a-m\in S\setminus\\{x\\}$. By applying Proposition 5.1, we conclude that $S\setminus\\{x\\}\in\mathscr{R}_{m}$. ∎ From Theorem 2.7, Propositions 5.3 and 5.5, and Lemma 2.10, we can build the tree $\mathrm{G}(\mathscr{R}_{m})$. Let us see an example. ###### Example 5.6. The first four levels of $\mathrm{G}(\mathscr{R}_{4})$ are the following (each edge is labelled by the removed minimal generator): * • Level 1: $\langle 4,5,6,7\rangle$. * • Level 2: $\langle 4,6,7,9\rangle$ (removing 5) and $\langle 4,5,7\rangle$ (removing 6). * • Level 3: from $\langle 4,6,7,9\rangle$, we get $\langle 4,7,9,10\rangle$ (removing 6) and $\langle 4,6,9,11\rangle$ (removing 7); from $\langle 4,5,7\rangle$, we get $\langle 4,5,11\rangle$ (removing 7). * • Level 4: from $\langle 4,7,9,10\rangle$, we get $\langle 4,9,10,11\rangle$ (removing 7), $\langle 4,7,10,13\rangle$ (removing 9), and $\langle 4,7,9\rangle$ (removing 10); from $\langle 4,6,9,11\rangle$, we get $\langle 4,6,11,13\rangle$ (removing 9). We finish this section by describing the Apéry set for $S\in\mathscr{R}_{m}$.
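The tree construction in Example 5.6 can also be checked computationally. The following is a naive sketch, not part of the original development: it enumerates semigroup elements up to a safe bound, and it assumes the usual tree construction in which the children of $S$ are the sets $S\setminus\\{x\\}$ with $x$ a minimal generator greater than $\mathrm{F}(S)$ satisfying the condition of Proposition 5.5.

```python
def semigroup(gens, bound=None):
    """Elements of <gens> up to a bound safely beyond the Frobenius number."""
    gens = sorted(set(gens))
    if bound is None:
        bound = gens[0] * gens[-1]  # already exceeds F(S) for the two extreme generators
    S, frontier = {0}, [0]
    while frontier:
        s = frontier.pop()
        for g in gens:
            t = s + g
            if t <= bound and t not in S:
                S.add(t)
                frontier.append(t)
    return S, bound

def frobenius(S, bound):
    """Largest integer up to the bound that is not in S."""
    return max((x for x in range(bound + 1) if x not in S), default=-1)

def minimal_generators(S):
    """Nonzero elements of S that are not a sum of two smaller nonzero elements."""
    pos = sorted(x for x in S if x > 0)
    return [x for x in pos if all(x - y not in S for y in pos if y < x)]

def strong_children(gens):
    """Children of S in G(R_m), using the criterion of Proposition 5.5."""
    m = min(gens)
    S, bound = semigroup(gens)
    F = frobenius(S, bound)
    msg = minimal_generators(S)
    kids = []
    for x in msg:
        if x == m or x <= F:
            continue  # assumed tree condition: only remove minimal generators > F(S)
        rest = [a for a in msg if a not in (m, x)]
        forbidden = {a + b for a in rest for b in rest if a != b} | {3 * a for a in rest}
        if x + m not in forbidden:
            child, _ = semigroup([g for g in msg if g != x] + [x + m])
            kids.append(tuple(minimal_generators(child)))
    return kids
```

Starting from $\Delta(4)=\langle 4,5,6,7\rangle$, this reproduces the two children $\langle 4,6,7,9\rangle$ and $\langle 4,5,7\rangle$ of Example 5.6.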
###### Proposition 5.7. Let $m\in\mathbb{N}\setminus\\{0,1\\}$, $S\in\mathscr{R}_{m}$, and $\mathrm{msg}(S)=\\{m=n_{1},n_{2},\ldots,n_{e}\\}$. Then $\mathrm{Ap}(S,m)=\\{w(0),w(1),\ldots,w(m-1)\\}$, where $w(i)$ is the least element of the set $\\{0,n_{2},\ldots,n_{e}\\}\cup\\{2n_{2},\ldots,2n_{e}\\}$ that is congruent to $i$ modulo $m$. ###### Corollary 5.8. Let $m\in\mathbb{N}\setminus\\{0,1\\}$ and $S\in\mathscr{R}_{m}$. Then $m=\mathrm{m}(S)\leq 2\mathrm{e}(S)-1$. It is interesting to observe that, under the hypotheses of Proposition 5.7, $n_{i}\in\mathrm{Maximals}_{\leq_{S}}(\mathrm{Ap}(S,m))$ if and only if $2n_{i}\notin\mathrm{Ap}(S,m)$. In addition, $2n_{i}\in\mathrm{Ap}(S,m)$ if and only if $2n_{i}\in\mathrm{Maximals}_{\leq_{S}}(\mathrm{Ap}(S,m))$. Therefore, from Propositions 3.11 and 5.7, we get the next result. ###### Corollary 5.9. Let $m\in\mathbb{N}\setminus\\{0,1\\}$ and $S\in\mathscr{R}_{m}$. Then $\mathrm{t}(S)=\mathrm{e}(S)-1$. It is well known that, if $S$ is a numerical semigroup, then $\mathrm{e}(S)\leq\mathrm{m}(S)$ and $\mathrm{t}(S)\leq\mathrm{m}(S)-1$ (see Proposition 2.10 and Corollary 2.23 in [11]). Combining these facts with Corollaries 5.8 and 5.9, we have the next result. ###### Corollary 5.10. Let $m\in\mathbb{N}\setminus\\{0,1\\}$ and $S\in\mathscr{R}_{m}$. Then $\frac{\mathrm{m}(S)-1}{2}\leq\mathrm{t}(S)\leq\mathrm{m}(S)-1$ (or, equivalently, $\frac{\mathrm{m}(S)+1}{2}\leq\mathrm{e}(S)\leq\mathrm{m}(S)$). ###### Remark 5.11. Contrary to what happens in Sections 3 and 4, the concept of strong numerical semigroup does not have a natural generalisation. In fact, we have, at least, two possibilities. 1. 1. $x_{1}+\cdots+x_{n}-\mathrm{m}(S)\in S$ for all $(x_{1},\ldots,x_{n})\in(S\setminus\\{0\\})^{n}$ such that $x_{i}\not\equiv x_{j}\pmod{\mathrm{m}(S)}$ for all $i\not=j$.
In this case, $(x_{1},\ldots,x_{n})\in A_{n}$, where $A_{n}=\left\\{(\alpha_{1},\ldots,\alpha_{n})\in\\{1,\ldots,m-1\\}^{n}\mid\alpha_{i}\not=\alpha_{j}\mbox{ for all }i\not=j\right\\}.$ 2. 2. $x_{1}+\cdots+x_{n}-\mathrm{m}(S)\in S$ for all $(x_{1},\ldots,x_{n})\in B_{n}$, where $B_{n}=\\{1,\ldots,m-1\\}^{n}\setminus\\{(1,\ldots,1),(2,\ldots,2),\ldots,(m-1,\ldots,m-1)\\}.$ It is clear that, if $n\geq 3$, then $A_{n}\subsetneq B_{n}$ and, in consequence, $\mathscr{C}(m,B_{n})\subseteq\mathscr{C}(m,A_{n})$. In addition, we conjecture that, if $n<m$, then the second inclusion is strict. ## Acknowledgement This version of the article has been accepted for publication, after peer review, but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/s13348-021-00339-0. ## References * [1] R. Apéry, Sur les branches superlinéaires des courbes algébriques, _C. R. Acad. Sci. Paris_ 222 (1946), 1198–1200. * [2] V. Barucci, D. E. Dobbs, and M. Fontana, _Maximality Properties in Numerical Semigroups and Applications to One-Dimensional Analytically Irreducible Local Domains_ , Mem. Amer. Math. Soc. 598 (1997). * [3] M. Bras-Amorós, P. A. García-Sánchez, Patterns on numerical semigroups, _Linear Algebra Appl._ 414 (2006), 652–669. * [4] M. Bras-Amorós, P. A. García-Sánchez, A. Vico-Oton, Nonhomogeneous patterns on numerical semigroups, _Internat. J. Algebra Comput._ 23 (2013), 1469–1483. * [5] A. Campillo, J. I. Farrán, and C. Munuera, On the parameters of algebraic-geometry codes related to Arf semigroups, _IEEE Trans. Inform. Theory_ 46(7) (2000), 2634–2638. * [6] M. Delgado, P. A. García-Sánchez, and J. Morais. _NumericalSgps, a GAP package for numerical semigroups_ , version 1.2.2 (03/03/2020). `https://gap-packages.github.io/numericalsgps/` * [7] J. L.
Ramírez Alfonsín, _The Diophantine Frobenius problem_ , Oxford Lectures Series in Mathematics and its Applications, vol. 30 (Oxford Univ. Press, Oxford, 2005). * [8] A. M. Robles-Pérez, J. C. Rosales, The numerical semigroup of phrases’ lengths in a simple alphabet, _The Scientific World Journal_ 2013 (2013), Article ID 459024, 9 pages. * [9] A. M. Robles-Pérez, J. C. Rosales, Frobenius pseudo-varieties in numerical semigroups, _Ann. Mat. Pura Appl._ 194 (2015), 275–287. * [10] J. C. Rosales and M. B. Branco, Numerical semigroups that can be expressed as an intersection of symmetric numerical semigroups, _J. Pure Appl. Algebra_ 171 (2002), 303–314. * [11] J. C. Rosales and P. A. García-Sánchez, _Numerical semigroups_ , Developments in Mathematics, Vol. 20 (Springer, New York, 2009). * [12] K. H. Rosen, _Handbook of Discrete and Combinatorial Mathematics_ (CRC Press, Boca Raton, 2000). * [13] E. S. Selmer, On the linear diophantine problem of Frobenius, _J. Reine Angew. Math._ 293/294 (1977), 1–17. * [14] K. Stokes, Patterns of ideals of numerical semigroups, _Semigroup Forum_ 93 (2016), 180–200. * [15] J. J. Sylvester, Problem 7382, _The Educational Times, and Journal of the College of Preceptors, New Ser._ , 36(266) (1883), 177. Solution by W. J. Curran Sharp, ibid., 36(271) (1883), 315.
# Liberalized market designs for district heating networks under the EMB3Rs platform 1st António Faria INESC TEC Porto, Portugal <EMAIL_ADDRESS>2nd Tiago Soares INESC TEC Porto, Portugal <EMAIL_ADDRESS>3rd José Maria Cunha INEGI Porto, Portugal <EMAIL_ADDRESS>4th Zenaida Mourão INEGI Porto, Portugal <EMAIL_ADDRESS> ###### Abstract Current developments in heat pumps, supported by innovative business models, are driving several industry sectors to take a proactive role in future district heating and cooling networks in cities. For instance, supermarkets and data centers have been assessing the reuse of waste heat as an extra source for the district heating network, which would offset the additional investment in heat pumps. This innovative business model requires complete deregulation of the district heating market to allow industrial heat producers to provide waste heat as an additional source in the district heating network. This work proposes the application of innovative market designs for district heating networks, inspired by new practices seen in the electricity sector. More precisely, pool and Peer-to-Peer (P2P) market designs are addressed, comparing centralized and decentralized market proposals. An illustrative case of a Nordic district heating network is used to assess the performance of each market design, as well as the potential revenue that different heat producers can obtain by participating in the market. An important conclusion of this work is that the proposed market designs are in line with the new trends, encouraging the inclusion of new excess heat recovery players in district heating networks. ###### Index Terms: District heating networks, Excess heat, Market liberalization, Energy Market, Peer-to-peer ## I Introduction Over the years, District Heating and Cooling (DHC) systems have been proliferating in many countries [1]. 
In Denmark, according to EUROHEAT & POWER, 65% of citizens were served by District Heating Networks (DHNs) in 2017, accounting for more than 30 000 km of pipelines in DHNs. Most European DHC systems follow a monopolistic approach due to heat demand sparsity, the market power of a single generating unit that often owns the DHN, the lack of DHNs linking all possible customers, and long-term return on investment. These factors deter new investors and hinder market liberalization, which could otherwise foster the reuse of waste heat as an extra source in DHNs [2, 3]. In fact, the DHN is a natural monopoly due to the large infrastructure and operation costs involved in the production and distribution of heating and cooling. Therefore, the heat production plants and the network are commonly owned, operated and managed by the same company, which is the main obstacle to the complete liberalization of the system [4]. Overall, DHC systems are heavily regulated and price competitiveness for consumers is disregarded. Nevertheless, governments (through energy regulators and policymakers) are pushing for the liberalization of heat markets (similar to what happened in the power system), as it becomes easier to monitor the whole energy supply chain and competition among energy providers tends to drive prices down, leading to economic benefits for consumers [5, 6, 7, 8, 9]. Therefore, DHC market liberalization is gaining momentum in some European countries, aiming to replicate and adapt the positive experience with electricity markets and their capacity to improve system efficiency [10, 7, 11]. This disruptive paradigm shift will increase competitiveness through the inclusion of new players in the system. That is, several agents from different industry sectors can play an active role in the DHC market by buying and selling energy from different sources, increasing competitiveness and bringing financial benefits to everyone involved [6, 12].
The authors in [10, 13] present case studies suggesting that a large amount of heat demand can be supplied by industries, e.g., by supplying waste/excess heat from industrial processes to neighbouring consumers. Similarly, the authors in [14, 15] also demonstrate the benefits that external producers (taking advantage of heat pumps, waste heat and renewable heat technology) bring to the DHC system if they supply their excess heat to the DHN. The results would be advantageous for all parties, bringing economic and environmental gains. On the other hand, the works in [16, 17, 18] show the benefits of the synergies between the power and DHC systems, modeling centralized dispatches to improve the efficiency of the entire energy system. In addition, consumers can also play an active role in the DHC system, providing demand flexibility in response to dynamic tariffs, thereby improving market competition [19, 20, 21, 22]. DHC markets inspired by the electricity sector, applying conventional market designs and approaches, are growing [23, 11]. An example of a running DHC market is the Open District Heating project [24], operating in Stockholm’s DHN, which encourages industrial businesses to sell their excess heat to the DHN at a uniform price cleared in the proposed day-ahead heating market. In addition, innovative market ideas to increase competitiveness in the DHN are emerging in the literature [25, 26, 27]. One of them is the adaptation of the sharing economy principle to industries and small-scale production units to supply surplus heat to the DHN [13, 28]. In this regard, different consumer-centric market designs, adapted from the power system, are expected to be replicated in the DHC system, allowing these new market participants to inject heat into the DHN and earn extra revenue. In order to assess several options and assumptions for the best market design to apply in existing and new DHNs, a brand new platform (EMB3Rs) is being developed [29].
This platform will empower different stakeholders (e.g., utility companies, municipalities, DHN operators, excess heating producers, among other entities) to simulate distinct market designs that can be applied to current and future DHNs. In this context, this work contributes to the literature and to the EMB3Rs platform, modelling distinct market designs for the negotiation of heat in DHNs in a competitive environment. More precisely, three distinct market designs are modelled and compared, namely, the pool-based, the peer-to-peer (P2P), and the community-based market designs. The markets are adapted from the current and future trends in electricity markets. Additionally, consumer preferences (e.g., distance, losses and $CO_{2}$ emissions) are applied to the P2P market design through product differentiation, enabling consumers to choose the sources they prefer to be supplied from. An illustrative DHN based on Nordic countries is used to test the applicability of the proposed solution. The main contributions of the present work are fourfold: * • To implement, analyze and compare different market models in the EMB3Rs platform; * • To model new market designs for heat exchange in the DHN, namely, the pool-based, P2P, and community-based market designs; * • To explore competitiveness in DHC markets, enabling industrial businesses with excess heat recovery systems to inject excess heat into the DHN; * • To improve market options for consumers by introducing product differentiation in the P2P market design. In addition to this introductory section, this paper is organized as follows. Section two describes the EMB3Rs platform for the simulation of different DHC market designs. Section three presents the detailed mathematical models of the proposed market designs. Section four assesses the proposed market models considering an illustrative case of Nordic DHNs, while section five gathers the conclusions of the study.
## II EMB3Rs Platform for DHC Market Simulation This section provides an overview of the EMB3Rs platform that will incorporate current and new market designs, adapted to the context of DHC systems. In addition, it provides a brief review of the current situation of the DHC markets in the Nordic countries. ### II-A Current DHC Market Situation in Nordic Countries The current situation of DHC markets varies on a country basis, as the deregulation of DHC systems has been carried out in different ways [30]. In Denmark, the DHN is still a natural monopoly, as the network and heating plants are mostly owned by energy companies, municipalities or consumer cooperatives. The regulation dictates that the heat supply works under non-profit rules, which means that the supplier must provide heat to consumers at marginal cost. This non-profit rule benefits everyone, as any profits are distributed to consumers to reduce costs [31]. In this case, industries with excess heat are encouraged to self-consume and only then to sell excess heat to the market, since the sale of excess heat comes with a tax to prioritize energy efficiency [30]. Similarly to Denmark, DHNs are also heavily regulated in Norway. DHNs are mostly privately and municipally owned, with mandatory connections to consumers decided by the municipalities, while the operator is forced to expand the network [30]. The energy prices from different producers are set on a competitive market, but prices for consumers with mandatory connections are regulated and cannot exceed the price of electric heating within the supply area [32]. Alternatively, consumers without mandatory connections are free to choose their heating source (e.g., electric heating or heat pump), so the supply price will follow the electricity price [32, 33]. In contrast, Sweden was one of the first European countries to deregulate the heating market; however, that deregulation was not as robust as expected.
According to [34], the prices of the different Swedish utility companies are not similar, meaning that these companies behave as price-makers. Prices are related to heating production and DHN operation costs, whereas marginal-cost-based pricing was expected. On the other hand, Finnish utility companies have a monopoly on certain DHNs. Customers have no open market to select their DHC utility [35]. Some Finnish companies have been trying to change this paradigm, e.g., by offering seasonal tariffs, but these measures have not led to fair prices for customers [19]. For further details on the situation of DHC systems in European countries, interested readers are referred to [30, 36]. Nonetheless, the transition to sustainable, efficient and competitive markets is unavoidable and future DHC markets will require new market approaches suitable for the integration of renewable energy sources in DHNs [37]. ### II-B EMB3Rs Platform Overview The EMB3Rs platform has been designed to assess the reuse and trade of excess thermal energy from a holistic perspective within an industrial process, an energy system environment, or a DHN under regulated and liberalized market environments [38]. The platform empowers industrial users and stakeholders to investigate the revenue potential of using industrial excess heat and cold as an energy resource, based on the simulation of supply-demand scenarios. Therefore, the platform simulates multiple business and market models, proposing innovative solutions in the sector.
From the large variety of options, users can: (i) map new and existing supply and demand users with geographic relevancy and enable their interlinking; (ii) assess costs and benefits related to the excess heat and cold utilization routes, considering existing and new network infrastructure (e.g., DHN); (iii) explore and assess the feasibility of new technology and business scenarios; and (iv) compare and analyze distinct market models applied to the DHN to dynamically create new business models and identify potential benefits and barriers under specific regulatory framework conditions. The integration of a dedicated market module in the EMB3Rs platform allows users to perform market analysis considering multiple existing market designs. Therefore, users can create, test and validate different market structures for selling and buying energy in the DHN, identifying barriers and risks, as well as the regulatory framework conditions required to ensure that the implementation of such market solutions is economically feasible. That is, the market analysis enables users (e.g., industries, supermarkets and data centers) to estimate potential revenues from selling excess heat and cold. This is especially important for users who have invested (or are considering investing) in waste heat recovery technology to assess the potential economic and environmental savings of their investment. ### II-C Market Approach for Heat Exchange On the EMB3Rs platform, users must be able to explore different market designs, from centralized to decentralized designs, allowing them to analyze the best market framework for their interests, which can be economic, environmental or social. In this regard, three distinct market designs are adapted in the present work to be included in the EMB3Rs platform.
The conventional pool market, the innovative P2P and community-based market designs are addressed to ensure that the platform’s users (e.g., industries, supermarkets and data centers) can assess their business models under different levels of market decentralization for the exchange of thermal energy in DHNs. All three market designs are inspired by the electricity sector and, therefore, need to be adapted to the underlying characteristics of DHC systems. The pool market follows a systemic perspective of the whole market by applying the merit order mechanism and performing the intersection of production and demand curves. This mechanism, known as uniform pricing, results in a market clearing price that is used for the settlement of producers and consumers. That is, each producer and consumer scheduled in the market will receive and pay for the energy at the market clearing price, respectively. In contrast, consumer-centric market designs (such as P2P and community-based market designs) follow a more decentralized and consumer-focused perspective. The P2P market enables producers and consumers to exchange energy directly with each other, subject to certain specific conditions defined by consumers. In this market design, no central facilitator is needed to verify energy exchanges. On the other hand, the community-based market requires the use of a central entity that coordinates energy exchanges within the energy community, as well as the imports and exports to other energy communities and DHN players. It is worth mentioning that these kinds of decentralized markets can empower consumers and prosumers to play a more active role in the DHN. For instance, local supermarkets are emerging thermal prosumers that can provide and consume heat at different hours, making them flexible players that can reuse excess heat and even sell surplus heat to other consumers in the DHN.
## III District Heating Market Designs The DHC market designs discussed in this work represent insights into the future of heat exchange in DHNs. There is still a long way to go regarding infrastructure and legislation for the implementation of liberalized markets. In this context, this work takes the first steps towards what we believe could be the DHC systems of tomorrow. In this way, pool, P2P and community-based market approaches are addressed. Note that, for the rest of the work, it is assumed that heat sources are considered producers and heat sinks are consumers. ### III-A Pool Market Design The pool market design consists of applying the merit order mechanism and obtaining the market clearing price through the intersection of supply and demand curves. Thus, the market has the goal of maximizing social welfare, meaning that lower-priced offers from producers and higher-priced offers from consumers are accepted. Mathematically, this market can be presented as: $\displaystyle\min_{D}\quad$ $\displaystyle\sum_{n\in\Omega_{n}}C_{n}P_{n}$ (1a) s.t. $\displaystyle\underline{P}_{n}\leq P_{n}\leq\overline{P}_{n}$ $\displaystyle n\in\Omega_{n}$ (1b) $\displaystyle\sum_{n\in\Omega_{n}}P_{n}=0$ (1c) $\displaystyle P_{n}\leq 0$ $\displaystyle n\in\Omega_{c}$ (1d) $\displaystyle P_{n}\geq 0$ $\displaystyle n\in\Omega_{p}$ (1e) where $D={\\{P_{n}\in\mathbb{R}\\}}_{n\in\Omega_{n}}$ corresponds to the energy traded by each agent $n$. $C_{n}$ represents the agents’ bid prices; $\underline{P}_{n}$ and $\overline{P}_{n}$ represent the lower and upper bounds of the agents’ energy offers, respectively; $\Omega_{c}$ and $\Omega_{p}$ represent the sets of consumers and producers, respectively. Eq. (1b) sets the bounds on the agents’ offers. Eq. (1c) sets the market balance, where supply must equal demand. Eq. (1d) states that consumption is non-positive in the system, while Eq. (1e) states that production is non-negative.
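As a concrete illustration, the merit-order mechanism behind (1a)-(1e) can be sketched as a greedy matching of sorted offers and bids. This is a simplified sketch, not the EMB3Rs implementation; taking the uniform price as the price of the marginal accepted supply offer is one common convention and an assumption here.

```python
def clear_pool(offers, bids):
    """Uniform-price pool clearing by merit order.
    offers: (price, quantity) supply offers; bids: (price, quantity) demand bids.
    Returns (traded quantity, clearing price)."""
    so = sorted(offers)                  # ascending supply curve
    sb = sorted(bids, reverse=True)      # descending demand curve
    traded, price = 0.0, None
    i = j = 0
    while i < len(so) and j < len(sb):
        op, oq = so[i]
        bp, bq = sb[j]
        if op > bp:                      # next offer costs more than the next bid pays
            break
        q = min(oq, bq)
        traded += q
        price = op                       # marginal accepted offer sets the uniform price
        so[i] = (op, oq - q)
        sb[j] = (bp, bq - q)
        if so[i][1] == 0:
            i += 1
        if sb[j][1] == 0:
            j += 1
    return traded, price
```

With offers `[(20, 50), (35, 30), (55, 40)]` and bids `[(60, 40), (45, 35)]` (prices and quantities are hypothetical), 75 units clear at a price of 35: all demand bids exceed the price of the second supply offer, which is only partially dispatched.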
### III-B P2P Market Design Regarding the P2P approach, it is proposed that two different peers can trade heat on a bilateral basis, without third-party supervision [39]. That is, each peer $n$ can exchange with another peer $m$ on an individual basis, defining the amount of energy to be bought or sold at a given price. This problem can be mathematically formulated as follows: $\displaystyle\min_{D}\quad$ $\displaystyle\sum_{n\in\Omega_{n}}C_{n}P_{n}$ (2a) s.t. $\displaystyle P_{n}=\sum_{m\in\Omega_{n}}P_{n,m}$ $\displaystyle n\in\Omega_{n}$ (2b) $\displaystyle\underline{P}_{n}\leq P_{n}\leq\overline{P}_{n}$ $\displaystyle n\in\Omega_{n}$ (2c) $\displaystyle P_{n,m}+P_{m,n}=0$ $\displaystyle n,m\in\Omega_{n}$ (2d) $\displaystyle P_{n}\leq 0$ $\displaystyle n\in\Omega_{c}$ (2e) $\displaystyle P_{n}\geq 0$ $\displaystyle n\in\Omega_{p}$ (2f) where $D={\\{P_{n}\in\mathbb{R}\\}}_{n\in\Omega_{n}}$ represents the heat traded by each agent $n$. As in the pool market, the goal is to minimize the cost associated with the agents’ transactions (2a). The total heat traded by an agent $n$ must equal the sum of the heat exchanges from that agent $n$ to the other agents $m$ (2b). Also, reciprocity is expected in the bilateral trades (2d), where $P_{n,m}$ and $P_{m,n}$ must sum to zero. Looking at the peer-to-peer formulation, one can see that it yields the individual trades between agents. Thus, a preference can be added to each of these trades, which can be translated into a penalty or benefit. This is called product differentiation, meaning that a certain trade can be advantageous or harmful to the system management. In this way, the objective function benefits or penalizes the trades that deserve such consideration. The distance between agents, the thermal losses and the $CO_{2}$ emissions are preferences that can be placed within this scope. There is also the option for agents to choose the penalty that best suits their priorities.
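To make the bilateral formulation concrete, the sketch below (agent names, prices and quantities are hypothetical, not taken from the case study) verifies the reciprocity constraint (2d) and evaluates the bid-cost term (2a) plus a per-trade penalty of the product-differentiation kind; an actual market clearing would optimize over the trades rather than merely evaluate a candidate set.

```python
def p2p_objective(trades, bid_price, penalty):
    """Cost of a candidate set of bilateral trades.
    trades[n][m]: heat agent n sells to m (positive) or buys from m (negative);
    reciprocity (2d) requires trades[n][m] == -trades[m][n]."""
    for n in trades:
        for m, q in trades[n].items():
            assert abs(q + trades[m][n]) < 1e-9, "reciprocity (2d) violated"
    total = 0.0
    for n in trades:
        p_n = sum(trades[n].values())    # net position of agent n, as in (2b)
        total += bid_price[n] * p_n      # bid-cost term of (2a)
        for m, q in trades[n].items():
            if q > 0:                    # penalise each delivery once: c_{n,m} per unit sold
                total += penalty[n][m] * q
    return total

# Hypothetical example: a data center and a CHP plant supply one consumer.
trades = {"dc": {"home": 6.0},
          "chp": {"home": 4.0},
          "home": {"dc": -6.0, "chp": -4.0}}
bid_price = {"dc": 15.0, "chp": 30.0, "home": 40.0}
penalty = {"dc": {"home": 2.0}, "chp": {"home": 1.0}}
```

Here the consumer's purchases enter with a negative net position, so a lower (more negative) objective value corresponds to higher welfare net of penalties.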
For instance, on the EMB3Rs platform, three different penalty options are provided to the consumers. One option is the physical network distance between agents: an agent can select the distance penalty if it wishes to trade with its nearest neighbor. Another option is thermal losses: an agent can select the thermal losses penalty if it is concerned about the system's energy efficiency. Alternatively, the $CO_{2}$ penalty is proposed for agents with environmental concerns. Conventionally, the product differentiation is represented as: $C_{n,m}=P_{n,m}c_{n,m}$ (3) where $C_{n,m}$ represents the final penalty applied to the trade between agents $n$ and $m$; $P_{n,m}$ represents the energy trade between agents $n$ and $m$; and $c_{n,m}$ represents the initial penalty between these agents. In order to apply product differentiation, the objective function must account for the penalty from Eq. (3). Thus, the objective function takes the following form: $\min_{D}\sum_{n\in\Omega_{n}}C_{n}P_{n}+\sum_{n\in\Omega_{n}}\sum_{m\in\Omega_{n}}C_{n,m}$ (4) where $D={\\{P_{n},C_{n,m}\in\mathbb{R}\\}}_{n,m\in\Omega_{n}}$. Hence, the formulation is complete, since equations (2b)-(2f) remain unchanged. Nevertheless, the product differentiation penalties may be determined in different ways.

#### III-B1 Physical Network Distance Preference

In the distance preference, the network distance between the selected agents is determined. The penalty is the sum of the lengths of all pipes along the path between the agents. Note that Dijkstra's algorithm [40] is used to find the shortest path between agents. Thus, the penalty associated with the network distance is given by: $c_{n,m}=\sum_{i\in\Omega_{I_{n,m}}}d_{i,n,m}/TotDist$ (5) where $d_{i,n,m}$ represents the pipe distance along the path between agents $n$ and $m$, while $TotDist$ is the total network distance.
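The distance penalty of Eq. (5) combines a shortest-path search with a normalization by the total network distance. A minimal sketch, using a standard heap-based Dijkstra implementation (the graph representation and function name are our own assumptions, not the platform's API):

```python
import heapq

def distance_penalty(graph, n, m, total_dist):
    """Normalized network-distance penalty c_{n,m} of Eq. (5):
    the shortest pipe path between agents n and m (Dijkstra's algorithm)
    divided by the total network distance.

    graph: dict mapping node -> {neighbor: pipe length in metres}.
    """
    dist = {n: 0.0}
    queue = [(0.0, n)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == m:
            return d / total_dist          # normalize by TotDist
        if d > dist.get(node, float("inf")):
            continue                       # stale heap entry
        for nb, length in graph[node].items():
            nd = d + length
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                heapq.heappush(queue, (nd, nb))
    raise ValueError("no path between agents")
```

In a triangle network where the direct pipe a-c is 4 m but the path via b is 3 m, the shorter route via b determines the penalty.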
#### III-B2 Network Thermal Losses Preference

The thermal losses penalty between two agents is given by the share that each agent has in the system losses, considering the thermal flow in each pipe. In this case, it is required to determine the thermal flow in the DHN and, therefore, the losses in each pipe. To determine the thermal flow and losses in the DHN based on the initial market results, the thermal control algorithm in [41] is used. Then, the impact that each agent has on the thermal flow and losses of each pipeline is determined using Bialek's downstream-looking algorithm [42]. Finally, the thermal losses penalty for the transaction between two peers is given by: $c_{n,m}=\sum_{i\in\Omega_{I_{n,m}}}l_{i,n,m}D_{i,n,m}d_{i,n,m}/TotLoss$ (6) where $l_{i,n,m}$ represents the thermal losses in each pipe along the path between agents $n$ and $m$, and $D_{i,n,m}$ represents the impact of the peer pair $n,m$ on each pipe of the system, determined by the downstream-looking algorithm presented in [42]. In this way, a fair penalty allocation for the transaction between two agents is achieved, accounting for the cumulative impact that such a transaction has on the thermal losses of the system.

#### III-B3 $CO_{2}$ Emissions Preference

The last option proposed for product differentiation is to penalize transactions through $CO_{2}$ emissions. This penalty targets peer transactions that result in higher emissions into the atmosphere. The EMB3Rs platform can provide standard levels of $CO_{2}$ per technology, and therefore the penalties between agents $n$ and $m$ consider such levels. Here, the penalty is only associated with the heat source. Hence, the $CO_{2}$ penalty between agents $n$ and $m$ is given by the quotient between agent $n$'s emissions and the total system emissions: $c_{n,m}=E_{n}/\sum_{n\in\Omega_{n}}E_{n}$ (7) where $E_{n}$ represents the $CO_{2}$ emissions of agent $n$.
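Since the $CO_{2}$ penalty of Eq. (7) depends only on the producing source, it reduces to a simple share computation. A sketch using the $CO_{2}$ signals from Table III of the case study (function name is ours):

```python
def co2_penalty(emissions, n):
    """CO2 penalty of Eq. (7): producer n's emissions divided by the
    total system emissions. The penalty is the same for every
    counterparty m, as it depends only on the heat source n."""
    return emissions[n] / sum(emissions.values())

# CO2 signals from Table III, in g/kW
signals = {"CHP": 225.0, "Supermarket": 225.0,
           "Data Center": 166.1, "Heat Pump": 34.6}
```

With these signals, the heat pump receives the smallest penalty, so consumers with environmental concerns are steered towards it.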
### III-C Community-based Market

The community-based market design intends to represent a more hierarchical structure of bilateral peer trades. In general, a community is composed of members who share common interests or are geographically close. In this model, there is a community manager responsible for the community's energy management. This manager supervises all the trading activities within the community and acts as an intermediary in the heat trade with other communities or with the main grid [43]. The mathematical formulation is presented as: $\displaystyle\min_{D}\quad$ $\displaystyle\sum_{n\in\Omega_{n}}\sum_{k\in\Omega_{k}}C_{n,k}P_{n,k}-c_{exp,k}q_{exp,k}+c_{imp,k}q_{imp,k}$ (8a) s.t. $\displaystyle P_{k,k^{\prime}}+P_{k^{\prime},k}=0$ $\displaystyle\forall(k,k^{\prime})\in\Omega_{k}$ (8b) $\displaystyle q_{exp,k^{\prime}}=\sum_{k\in\Omega_{k}}P_{k^{\prime},k}$ $\displaystyle\forall k^{\prime}\in\Omega_{k}$ (8c) $\displaystyle q_{imp,k^{\prime}}=\sum_{k\in\Omega_{k}}P_{k^{\prime},k}$ $\displaystyle\forall k^{\prime}\in\Omega_{k}$ (8d) $\displaystyle\sum_{k\in\Omega_{k}}P_{k^{\prime},k}=q_{exp,k^{\prime}}-q_{imp,k^{\prime}}$ $\displaystyle\forall k^{\prime}\in\Omega_{k}$ (8e) $\displaystyle P_{n,k}+q_{n,k}+\alpha_{n,k}-\beta_{n,k}=0$ $\displaystyle\forall(n,k)\in(\Omega_{n},\Omega_{k})$ (8f) $\displaystyle\sum_{n\in\Omega_{n}}q_{n,k}=0$ $\displaystyle\forall k\in\Omega_{k}$ (8g) $\displaystyle\sum_{n\in\Omega_{n}}\beta_{n,k}=q_{exp,k}$ $\displaystyle\forall k\in\Omega_{k}$ (8h) $\displaystyle\sum_{n\in\Omega_{n}}\alpha_{n,k}=q_{imp,k}$ $\displaystyle\forall k\in\Omega_{k}$ (8i) $\displaystyle\underline{P}_{n}\leq P_{n}\leq\overline{P}_{n}$ $\displaystyle\forall(n,k)\in(\Omega_{n},\Omega_{k})$ (8j) where
$D={\\{P_{n,k},q_{exp,k},q_{imp,k}\in\mathbb{R}\\}}_{(n,k)\in(\Omega_{n},\Omega_{k})}$. $P_{n,k}$ represents the internal trade of agent $n$ within its own community $k$. Eq. (8b) represents the symmetry when communities exchange heat. Equation (8c) balances the heat exported by a community with the other communities. The same is valid for (8d), regarding the imported heat. Also, the sum of a community's bilateral trades must equal the exported heat minus the imported heat (8e). Equation (8f) sets the agents' balance, i.e., the purchase/consumption, the heat traded within the community and the heat exchanged with other communities must reach an equilibrium in each time period. Within a community, the purchase/consumption of all involved agents must sum to zero (8g). Furthermore, the heat exported by each community agent must equal the total heat exported by the community (8h). The same holds true for the imported heat (8i). As in the previous market designs, the heat bounds must be respected (8j).

## IV Case Study

In this section, a case study is presented considering an illustrative Nordic DHN with several producers and consumers. This illustrative example has been developed to assess the different market designs on the EMB3Rs platform. All the input data and results of this study, including demand and supplier offers for an entire year (from April 2018 to March 2019), are available at Mendeley Data [44].

### IV-A Case Characterization

A DHN has been built considering several producers and consumers with different characteristics and patterns.

Figure 1: Illustrative district heating network.

Note that the DHN must ensure that the temperature is within the levels required by the heating demand, and that the flow rates in the DHN must be kept at a reasonably low level in order to avoid excessive water velocities. To this end, it is assumed that this DHN operates similarly to most Danish DHNs, which work at annual average temperatures of 77.6°C supply and 43.1°C return [45].
Figure 1 shows the schematic diagram of the DHN, where 31 row houses and 4 potential producers are considered. The consumption of the 31 row houses for an entire year (from April 2018 to March 2019) has been generated considering a typical demand pattern taken from [46]. The price that the row houses are willing to pay for the demand in the market follows a normal distribution, in which the base price is the heat tariff in Copenhagen, Denmark [47]. In order to cover basic consumption needs, at least 70% of the heat demand of each house must be supplied in all periods. A 15 kW industrial ammonia heat pump is located in the DHN and can provide heat at certain times of the day at a certain cost. The heat pump generation profile considers a constant Coefficient of Performance (COP) of 4.8, providing hot water via a heat exchanger at 80°C, based on [48, 49]. The cost curve of the heat pump is based on the electricity spot price in 2018 and 2019 in the DK2 area in Denmark, taken from [50]. In addition, a 0.4 MW data center is included in the DHN. Commonly, data centers follow a relatively constant pattern of excess heat recovery to inject into the DHN, although the temperature of their excess heat from the condenser cooling towers is usually between 35°C and 45°C. Thus, an industrial ammonia heat pump, similar to the one referred to above, would be required to upgrade this heat for injection into the DHN. This data center has been modeled as producing 71.6 kWh on average, with the calculation of the heat recovery profile based on [51, 52]. The energy used in the ammonia compressor would be added to this value. The cost curve for the data center to sell recovered heat in the DHN has been modeled following a normal distribution and the monthly excess heat procurement costs presented in [52]. A Combined Heat and Power (CHP) unit is included in the DHN, being the main producer in the system.
This CHP is designed to be able to provide the entire consumption of the system, being therefore the most expensive generation resource. The cost curve for an entire year follows the behavior of the natural gas spot price for the years 2018 and 2019, available in [53]. Note that the prices were normalized for the Nordic context. Besides this, a supermarket with heat pump technology is included in the system, behaving like a prosumer. That is, the supermarket may consume heat from the DHN or inject recovered heat into the DHN, depending on the hour of the day and the outdoor temperature. The generation and consumption profile depends on the outdoor temperature; the outdoor temperature in Copenhagen for the entire year (April 2018 to March 2019), available at [54], has been considered. Then, the prosumer profile of the supermarket is determined following a typical COP (around 3.0) for heat recovery in supermarkets, and a typical supermarket consumption pattern, detailed in [55]. The cost curve for the supermarket to inject recovered heat into the DHN depends on the outdoor temperature and is based on [56]. It is noteworthy that different market designs may require the use of different data or configurations. For example, the community-based market design requires the configuration of the energy community, that is, who the community members are. For the community-based market, two communities were created based on the aforementioned energy resources, namely:

* • Community 1: Data Center and all consumers from 19 to 31;

* • Community 2: Supermarket, Heat Pump, and consumers from 1 to 18.

Regarding the P2P market model via product differentiation, the required data were retrieved based on the THERMOS project tool [57]. This tool is able to provide the distance (Table I) and nominal losses (Table II) between agents, based on the supply and return temperatures and on the maximum heat flow in the pipelines.
The $CO_{2}$ signals for the CHP were obtained from [58], while those for technologies that rely on the electricity mix were retrieved from [59], considering the Nordic zone. Table III presents the $CO_{2}$ signals for all heat producers.

TABLE I: DHN distance between agents.

| Distance (m)
---|---
Agent | CHP | Supermarket | Data Center | Heat Pump
C1-C10 | 266.24 | 181.25 | 206.15 | 174.96
C11-C15 | 190.76 | 20.47 | 168.58 | 199.06
C16-C18 | 228.66 | 143.67 | 230.27 | 137.38
C19-C25 | 175.25 | 90.26 | 158.21 | 127.01
C26-C30 | 196.37 | 111.38 | 224.52 | 193.31
C31 | 259.32 | 174.33 | 122.23 | 94.28
SM | 201.37 | - | 240.87 | 209.67

TABLE II: DHN nominal losses between agents.

| Losses (W/m)
---|---
Agent | CHP | Supermarket | Data Center | Heat Pump
C1-C10 | 17.31 | 16.40 | 17.31 | 14.02
C11-C15 | 18.35 | 17.23 | 16.83 | 17.43
C16-C18 | 17.90 | 17.12 | 17.73 | 17.58
C19-C25 | 18.10 | 18.01 | 17.78 | 17.49
C26-C30 | 17.24 | 16.51 | 17.41 | 16.43
C31 | 17.39 | 16.99 | 17.05 | 16.66
SM | 18.64 | - | 16.87 | 17.86

TABLE III: $CO_{2}$ emissions by heat producer.

$CO_{2}$ Signals (g/kW)
---
CHP | Supermarket | Data Center | Heat Pump
225 | 225 | 166.1 | 34.6

### IV-B Results

This section presents the main results and indicators for comparing the different market designs. All simulations were performed for an entire year of market operation.

#### IV-B1 General Results

Table IV presents the social welfare and the revenue achieved by each agent over the simulated year. For the pool market, the achieved results are the same as for the Full P2P, so these are not discussed in detail. As expected, the Full P2P market design is the one presenting the best solution, since there are no limitations on heat exchanges between agents, contrary to what happens in P2P with product differentiation, where penalties (consumer preferences) are considered.
Note that the social welfare represents the objective function without penalties, i.e., once the objective is determined, the penalties are removed and all heat transactions are kept. Within the P2P markets, the P2P with distance as product differentiation (P2P Distance) is the one achieving the lowest social welfare (65.9% of the Full P2P value), since it is the one that most penalizes the transactions between agents. P2P $CO_{2}$ is the one reaching the social welfare closest to the Full P2P (more than 99.8%). The Full P2P and the community-based designs are the market models supplying the most load, reaching 90% of the total load demand. The other models have a smaller delivery capacity, and the minimum is reached for the P2P Distance, where only 70% of the entire load demand is met. Although the community-based design yields the poorest social welfare (63% of the Full P2P value), it is worth stressing that it is the market that allocates the most load. In terms of heat production, the CHP and the data center are the ones producing the most heat throughout the year. The CHP has the largest thermal energy production capacity and is the most expensive resource. Thus, it is often used to cover the remaining energy demand, which other producers cannot cover. On the other hand, the high dispatch of the data center is related to its high nominal capacity and the low bid price it offers in the market. The CHP shows a drop of about 45% in production in the P2P Distance design when compared to the Full P2P, which is linked to the fact that it is the producer most distant from the consumers. It is worth mentioning that the heat pump reaches high dispatched heat levels and, consequently, high revenue in the P2P Distance and Community-based markets. The heat pump is located very close to the consumption points, which helps to explain its performance in the market design that considers the distance between agents. With respect to the community-based design, the heat pump results are related to the community structure.
The heat pump is a member of Community 2, where only the supermarket competes to meet the demand. As the supermarket behaves as a prosumer, the heat pump or imported heat are often the only available heat sources for that community, leading to a higher market share for the heat pump. As the heat pump and the data center are the two sources with the lowest $CO_{2}$ emissions, they are also the only agents presenting an increase in the heat supplied (1.8% and 6.7%, respectively) when comparing the P2P $CO_{2}$ with the Full P2P.

TABLE IV: Agents' revenue by market design.

Revenue (€)
---
| Full P2P | P2P Distance | P2P Losses | P2P $CO_{2}$ | Community
Social Welfare | 175250 | 115560 | 166422 | 175040 | 110407
CHP | 89328 | 69179 | 78094 | 85115 | 185057
Supermarket | 5615 | 6162 | 5813 | 5352 | 6093
Data Center | 85090 | 77614 | 84670 | 86931 | 77452
Heat Pump | 6610 | 13413 | 5338 | 7007 | 14113
Load | 361893 | 281928 | 340338 | 359446 | 366479

TABLE V: Agents' dispatched heat by market design.

Dispatched Heat (kW)
---
| Full P2P | P2P Distance | P2P Losses | P2P $CO_{2}$ | Community
Load | 682941 | 532850 | 642078 | 678188 | 687215
CHP | 217191 | 120623 | 180546 | 205486 | 275674
Supermarket | 39937 | 43255 | 43255 | 38173 | 42758
Data Center | 411472 | 338954 | 408897 | 419155 | 336219
Heat Pump | 14341 | 30018 | 11522 | 15372 | 32564

#### IV-B2 Average Dispatched Heat and Successful Participation in the Market

In addition to the general results, two key performance indicators, namely the Average Dispatched Heat (ADH) and the Successful Participation in the Market (SPM), were introduced. ADH represents the amount of heat that is dispatched from a source on average, i.e., the mean percentage of dispatched heat relative to the total capacity of the source.
The values are presented in percentage (%) and determined through: $ADH(n)=\frac{\sum_{t=1}^{T}\frac{P_{n,t}}{\overline{P}_{n,t}}}{T},\forall n\in\Omega_{p}$ (9) where $P_{n,t}$ represents the heat dispatched by source $n$ in time period $t$ and $\overline{P}_{n,t}$ represents the maximum capacity of source $n$ in time period $t$. Regarding the SPM, it indicates the level of participation of an agent $n$ in the market, which is given by: $SPM(n)=\frac{\sum_{t=1}^{T}{Participation_{n,t}}}{T}\times 100,\forall n\in\Omega_{p}$ (10) where $Participation_{n,t}$ is a binary variable indicating whether or not a source $n$ is dispatched in the market in time period $t$. In addition to the annual results, seasonal results are also presented, since the sources and loads have seasonal behaviors. As one can see in Table VI, the dispatched heat is generally higher in the winter, which is linked to lower outdoor temperatures and hence larger levels of heat demand. However, the CHP presents a lower ADH in the winter when compared to the summer period. This is connected to the higher bidding prices offered by this resource in that period of the year, which enhances other resources' participation in the market. Also note that the supermarket is the resource with the highest ADH, being fully dispatched most of the time. It is also noteworthy that the heat pump is dispatched less in the summer than in the winter, not only due to the increase of its bid offer, but also due to its lower production capacity during this season.

TABLE VI: Annual and seasonal index of average dispatched heat for each heat producer and market design.
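The two indicators of Eqs. (9) and (10) can be computed directly from the per-period dispatch series. A minimal sketch (function names are ours; both values are returned as percentages, as reported in Tables VI and VII):

```python
def adh(dispatch, capacity):
    """Average Dispatched Heat, Eq. (9): mean ratio of dispatched heat
    P_{n,t} to capacity \\bar{P}_{n,t} over the T periods, in percent."""
    T = len(dispatch)
    return 100.0 * sum(p / pmax for p, pmax in zip(dispatch, capacity)) / T

def spm(dispatch):
    """Successful Participation in the Market, Eq. (10): share of
    periods in which the source is dispatched at all, in percent."""
    T = len(dispatch)
    return 100.0 * sum(1 for p in dispatch if p > 0) / T
```

A source dispatched at half capacity on average but idle in one of four periods illustrates the contrast between the two indicators: ADH measures how much is dispatched, SPM only how often.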
| Year
---|---
| CHP | Supermarket | Data Center | Heat Pump
Full P2P | 72% | 97% | 62% | 25%
P2P Distance | 71% | 100% | 51% | 64%
P2P Losses | 71% | 98% | 61% | 14%
P2P $CO_{2}$ | 72% | 96% | 63% | 28%
Community | 30% | 100% | 51% | 91%

| Summer
| CHP | Supermarket | Data Center | Heat Pump
Full P2P | 84% | 97% | 48% | 1%
P2P Distance | 83% | 100% | 29% | 36%
P2P Losses | 87% | 98% | 48% | 1%
P2P $CO_{2}$ | 84% | 96% | 49% | 4%
Community | 34% | 100% | 31% | 92%

| Winter
| CHP | Supermarket | Data Center | Heat Pump
Full P2P | 60% | 98% | 76% | 50%
P2P Distance | 58% | 100% | 73% | 92%
P2P Losses | 54% | 98% | 76% | 28%
P2P $CO_{2}$ | 60% | 97% | 77% | 53%
Community | 26% | 100% | 71% | 90%

Regarding the SPM indicator, the results clearly point to a high level of successful participation of the supermarket and the data center in all market designs. When it comes to the data center, these results are justified by its steady heat production and low offer price, making it one of the first sources that all consumers want to exchange with. It is important to highlight the contrast exhibited between SPM and ADH for the CHP, since in the summer there is less heat demand, which can be met by other agents with better offers, thus reducing this agent's overall participation.

TABLE VII: Annual and seasonal index of successful participation in the market for each heat producer and market design.
| Year
---|---
| CHP | Supermarket | Data Center | Heat Pump
Full P2P | 36% | 91% | 89% | 26%
P2P Distance | 61% | 100% | 100% | 64%
P2P Losses | 37% | 99% | 100% | 16%
P2P $CO_{2}$ | 35% | 88% | 90% | 28%
Community | 81% | 100% | 100% | 92%

| Summer
| CHP | Supermarket | Data Center | Heat Pump
Full P2P | 13% | 83% | 93% | 1%
P2P Distance | 56% | 100% | 99% | 37%
P2P Losses | 15% | 99% | 100% | 1%
P2P $CO_{2}$ | 12% | 75% | 95% | 4%
Community | 71% | 100% | 100% | 93%

| Winter
| CHP | Supermarket | Data Center | Heat Pump
Full P2P | 60% | 95% | 85% | 51%
P2P Distance | 66% | 100% | 100% | 93%
P2P Losses | 59% | 98% | 100% | 31%
P2P $CO_{2}$ | 59% | 93% | 86% | 54%
Community | 91% | 100% | 100% | 91%

#### IV-B3 Fairness Indicators

Fairness indexes are also assessed in this work. The methodology of [60, 61] was followed to evaluate the resource allocation in each market design. These indicators are not meant to measure quantities, but rather to assess the relationships between the different agents and the impact that each of them has on the whole system. To do so, the Quality of Service (QoS), Quality of Experience (QoE) and Min-Max Indicator (MiM) were determined. QoS represents how all the agents impact the heat distribution in the system: if all involved agents traded the same amount of heat, the QoS would equal 100%. This index assesses the equilibrium in the system. QoE indicates the consumer satisfaction with the heating price when trading with other agents. The MiM indicator measures fairness among prosumers and consumers, where the ratio between the minimum and maximum traded values for each time period is calculated. If all the consumers trade the same amount of heat, this index equals 100%. Table VIII gathers the fairness indicator results. As one can see, in general, the market designs present a QoS of around 20%, meaning that there are agents with larger capacities than others.
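The MiM description above admits a direct computation: per period, take the ratio between the smallest and largest traded quantity, then average over the year. The sketch below is our own reading of that description, not the exact formulation of [60, 61]:

```python
def mim(trades_per_period):
    """Min-Max fairness indicator (MiM), as described in the text:
    for each period, the ratio between the minimum and maximum heat
    quantity traded by any agent, averaged over the periods and
    expressed in percent. 100% means all agents trade equal amounts.
    Note: a plausible sketch of the indicator; the exact formulation
    follows [60, 61].
    """
    ratios = [min(t) / max(t) for t in trades_per_period if max(t) > 0]
    return 100.0 * sum(ratios) / len(ratios)
```

For example, two periods with trades [1, 4] and [2, 2] give ratios of 25% and 100%, i.e., a MiM of 62.5%; the low single-digit MiM values in Table VIII thus reflect very unequal trade volumes.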
This discrepancy leads to lower levels of QoS. Looking at Community 2, this index is even lower, which is related to the heat pump's impact in this community. For most of the year, this player is in charge of supplying the whole community, creating a large impact and attracting a large part of the exchanges within the community. The QoE, related to the user viewpoint, presents similar values for all P2P designs. When analyzing the communities, these values are substantially lower, due to the lower competitiveness within each community. Therefore, agents are compelled to exchange with players who, at certain times, do not offer prices as favorable as their competitors in the P2P market models. The low values presented by MiM point to the significant difference between the heat quantities that are exchanged among the different agents.

TABLE VIII: Fairness indicators for each market model.

| QoS | QoE | MiM
---|---|---|---
Full P2P | 21% | 78% | 4%
P2P Distance | 17% | 83% | 4%
P2P Losses | 21% | 79% | 5%
P2P $CO_{2}$ | 20% | 79% | 4%
Community | Com 1 | Com 2 | Com 1 | Com 2 | Com 1 | Com 2
 | 26% | 14% | 48% | 23% | 2% | 16%

#### IV-B4 Supermarket Individual Analysis

The supermarket is the only prosumer in the system, which means that it is the only player capable of behaving as either a producer or a consumer in different periods of time, so it is important to analyze its individual trades with other peers. When the supermarket behaves as a producer, it is able to sell heat to the loads. Figure 2 depicts the cumulative heat trade over the year between the supermarket and the loads for each of the considered P2P market designs. More precisely, Figure 2 points to a steady supply to all consumers by the supermarket in the Full P2P design, which was expected, since there are no preference constraints for any heat consumer.
On the other hand, the product differentiation effect is clear in the P2P Distance and P2P Losses designs, since consumer preferences (namely, distance and losses) encourage trading with the closest peers. Thus, the consumers C11-C15 are strongly encouraged to trade with the supermarket, as it is one of their closest producers. In fact, most of the supermarket's heat production goes directly to these consumers (about 59.2% and 73.9% for P2P Distance and P2P Losses, respectively), supplying other consumers with residual heat, or not at all. In the P2P design considering the $CO_{2}$ signals, there are no major fluctuations, since the $CO_{2}$ emissions value of the supermarket (225 g/kW) is similar to that of the CHP and the Data Center, and much higher than that of the Heat Pump. In this way, the differentiation criterion is minimal relative to the CHP and Data Center, with consumers giving priority to trading with the Heat Pump. More precisely, as both the supermarket and the Heat Pump have a low capacity to influence the system, the changes in the exchanges between the supermarket and the consumers are relatively small compared to the Full P2P market design.

Figure 2: Cumulative annual heat exchange of the supermarket as a heat producer in the P2P designs.

Notwithstanding, there are periods in which the supermarket does not have sufficient self-generation of heat and needs to consume from the DHN, behaving as a consumer in the market. In this case, Figure 3 depicts the annual percentage of heat supplied by the heat producers to the supermarket. In general, the supermarket is mainly supplied by the CHP and the data center, since these agents have a large thermal capacity. As the supermarket is closest to the CHP, when considering the distance criterion (P2P Distance), the heat supplied by this resource reaches its peak. Likewise, as the data center is the farthest resource from the supermarket, the heat exchange with it reaches its lowest level. The same line of thought applies to the P2P Losses design.
Conversely, as the heat pump is the resource with the lowest $CO_{2}$ emissions, this resource reaches its maximum when considering the P2P $CO_{2}$ market design.

Figure 3: Cumulative annual heat exchange of the supermarket as a heat consumer in the P2P designs.

Looking at the community-based market design (Figure 4), one can see that, as a consumer, the supermarket is compelled to import about 80% of its heat, the remaining 20% being supplied by the community itself (the heat pump). As a heat producer, all its production is shared within the community itself, and no heat is exported.

Figure 4: Supermarket heat exchange in the Community design.

## V Conclusion

District heating still has a long way to go, especially regarding the way heat is exchanged and the infrastructure needed for this transformation. Within this scope, new market models for district heating have been proposed in this work, encouraging direct heat exchange between peers. The network characteristics and their impact on heat exchange were also assessed through product differentiation, giving peers and network operators the possibility to define and test the criteria that best fit their interests. All market designs were simulated, compared and incorporated in the market module of the EMB3Rs platform. The results point to the feasible implementation of this type of market structure in DHNs. The Full P2P model presents the best results, since it disregards any limitations of the DHN on the heat exchanges between the different players. This work also shows that it is possible to influence the way heat is distributed according to preferences that may be associated with distance, minimizing losses or mitigating $CO_{2}$ emissions. As an example, analyzing the P2P Distance market design, one can see that the supermarket can increase the heat supply to its closest consumers by 500% when compared to the Full P2P market design.
In addition, the community-based market design also reveals the possibility of dividing agents into communities, allowing them to manage their own community and exchange heat with other communities through heat imports or exports. Overall, looking at the equilibrium between the agents' participation in the market, the quality indicators do not show a balanced system. This is linked to the different heat technologies and prices, which change over the year according to several factors, such as the weather. The MiM also highlights this point, as a low value for this indicator means a big difference between the maximum and minimum heat traded amongst the agents. Future work will focus on a full network thermal characterization and a comparison with the main findings presented here. Also, larger networks will be explored in order to test the solutions in a real-world-like environment.

## Acknowledgements

This work is supported by the European Union's Horizon 2020 Framework Programme for Research and Innovation, within the EMB3Rs project under agreement No. 847121. In addition, we would like to thank Tiago Sousa for the insightful comments that allowed us to improve the paper.

## Data Availability

Datasets related to this article can be found at http://dx.doi.org/10.17632/ydbcpb73t2.1, an open-source online data repository hosted at Mendeley Data (António, Tiago, Zenaida, José, 2020).

## References

* [1] B. V. Mathiesen, N. Bertelsen, N. C. A. Schneider, L. S. García, S. Paardekooper, J. Z. Thellufsen, and S. R. Djørup, “Towards a decarbonised heating and cooling sector in Europe: Unlocking the potential of energy efficiency and district energy,” _Aalborg Universitet_, p. 98, 2019. [Online]. Available: https://vbn.aau.dk/ws/portalfiles/portal/316535596/Towards_a_decarbonised_H_C_sector_in_EU_Final_Report.pdf
* [2] U. Persson and S. Werner, “Heat distribution and the future competitiveness of district heating,” _Applied Energy_, vol. 88, no. 3, pp. 568–576, 2011.
# Thermal rectification through a nonlinear quantum resonator Bibek Bhandari NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy Paolo Andrea Erdman NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy Rosario Fazio The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, I-34151 Trieste, Italy Dipartimento di Fisica, Università di Napoli “Federico II”, Monte S. Angelo, I-80126 Napoli, Italy Elisabetta Paladino Dipartimento di Fisica e Astronomia Ettore Majorana, Università di Catania, Via S. Sofia 64, I-95123, Catania, Italy INFN, Sez. Catania, I-95123, Catania, Italy CNR-IMM, Via S. Sofia 64, I-95123, Catania, Italy Fabio Taddei NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56126 Pisa, Italy ###### Abstract We present a comprehensive and systematic study of thermal rectification in a prototypical low-dimensional quantum system – a non-linear resonator: we identify necessary conditions to observe thermal rectification and we discuss strategies to maximize it. We focus, in particular, on the case where anharmonicity is very strong and the system reduces to a qubit. In the latter case, we derive general upper bounds on rectification which hold in the weak system-bath coupling regime, and we show how the Lamb shift can be exploited to enhance rectification. We then go beyond the weak-coupling regime by employing different methods: i) including co-tunneling processes, ii) using the non-equilibrium Green’s function formalism and iii) using the Feynman-Vernon path integral approach. We find that the strong coupling regime allows us to violate the bounds derived in the weak-coupling regime, providing us with clear signatures of high-order coherent processes visible in the thermal rectification.
In the general case, where many levels participate in the system dynamics, we compare the heat rectification calculated with the equation of motion method and with a mean-field approximation. We find that the former method predicts, for a small or intermediate anharmonicity, a larger rectification coefficient. ## I Introduction Thermal transport in quantum devices has garnered vast attention in the last decade, fuelled by the incessant efforts in the miniaturization of electronic and thermal devices. Furthermore, the research in this field has been constantly growing thanks to advances in the experimental realization of nanoscale thermal devices that have sharpened our understanding of how energy/heat flows through small (quantum) systems giazotto2006 ; giazotto2012 ; pekola2015 ; ronzani2018 ; maillet2019 ; maillet2020 . The phenomenon at the heart of our investigation is thermal rectification, an intriguing effect which may arise also at the nanoscale, where it may play a key role for heat management in small devices. It refers to the asymmetric conduction of heat, whereby the heat flow in one direction differs from the heat flow in the opposite direction, see Fig. 1. Thermal rectification, first observed experimentally by Starr in 1935 starr1935 , has been studied in a variety of setups since then, both theoretically terraneo2002 ; li2004 ; segal2005 ; eckmann2006 ; zeng2008 ; ojanen2009 ; ruokola2009 ; archak ; wu2009 ; wu2009b ; kuo2010 ; otey2010 ; Zhang2010 ; Yang2018 ; roberts2011 ; ruokola2011 ; gunawardana2012 ; martinez2013 ; giazotto2013 ; liu2014 ; landi2014 ; jiang2015 ; sanchez2015 ; joulain2016 ; agarwalla ; vicioso2018 ; giazotto2020 and experimentally chang2006 ; scheibner2008 ; senior2019 ; schmotz2011 ; martinez2015 . Figure 1: Schematic representation of a quantum system S coupled to the two heat baths. The left and right baths are characterized, respectively, by the temperatures $T_{\text{L}}$ and $T_{\text{R}}$.
Panel (a) represents the positive bias case, i.e. $T_{\text{L}}=T+\Delta T/2$ and $T_{\text{R}}=T-\Delta T/2$ with $\Delta T>0$, while panel (b) represents the negative bias configuration where the sign of $\Delta T$ is reversed. In the presence of some asymmetry in the coupling to the baths (represented by the different thickness of the grey barriers), the magnitude of the heat currents flowing through the device may depend on the sign of $\Delta T$, leading to thermal rectification. From the practical viewpoint, the importance of thermal rectification stems from the fact that it can be used in a nanoscale device to divert heat from sensitive areas, while preventing it from flowing back in. On the other hand, it is interesting to understand what the fundamental physical requirements are for a system to exhibit thermal rectification, and what strategies can be used to optimize it. In this paper we study thermal rectification through a multi-level quantum system (S) coupled to two thermal baths kept at different temperatures as schematically sketched in Fig. 1. This is a paradigmatic situation that applies to several different experimentally available setups, such as the experiment of Ref. senior2019, where the effect was first observed in an artificial atom. Few ingredients are necessary for rectification to occur. As we shall see in the remainder of the paper, the baths must be asymmetrically coupled to the system and inelastic scattering/interactions must necessarily be present. Indeed, in the absence of the latter, the current can be described by the Landauer-Büttiker scattering approach landauer1957 ; buttiker1985 , expressed as an energy integral of a transmission function (which does not depend on temperature) multiplied by the difference of the energy distributions of the baths. In this situation no rectification is possible, since the temperatures of the baths enter only through their distributions.
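This no-rectification statement is easy to verify numerically. The sketch below is not from the paper: it uses units with $\hbar=k_{\text{B}}=1$ and an arbitrary asymmetric Lorentzian as the temperature-independent transmission, and evaluates the Landauer-type integral for bosonic baths to confirm that reversing the bias exactly reverses the current.

```python
import math

def n_bose(eps, T):
    # Bose-Einstein distribution (k_B = 1)
    return 1.0 / math.expm1(eps / T)

def heat_current(T_L, T_R, transmission, eps_max=50.0, n_pts=20000):
    # I = (1/2π) ∫ dε ε T(ε) [n_L(ε) − n_R(ε)], a simple Riemann sum (ħ = 1)
    de = eps_max / n_pts
    total = 0.0
    for i in range(1, n_pts):
        eps = i * de
        total += eps * transmission(eps) * (n_bose(eps, T_L) - n_bose(eps, T_R))
    return total * de / (2 * math.pi)

# Temperature-independent transmission: a Lorentzian line (illustrative values)
def trans_fixed(eps, e0=2.0, gamma=0.1, height=0.3):
    return height * gamma**2 / ((eps - e0)**2 + gamma**2)

T, dT = 1.0, 0.5
I_fwd = heat_current(T + dT/2, T - dT/2, trans_fixed)  # positive bias
I_bwd = heat_current(T - dT/2, T + dT/2, trans_fixed)  # negative bias
print(I_fwd, I_bwd)  # equal magnitudes, opposite signs: no rectification
```

Reversing the bias only flips the sign of $n_{\text{L}}-n_{\text{R}}$ in the integrand, so $I(-\Delta T)=-I(\Delta T)$ no matter how asymmetric the transmission is.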
Inelastic processes occur naturally in the presence of non-linearities, for example induced by interactions, or by time-dependent driving in the Hamiltonian describing the system campeny2019 . In the presence of interactions, at least when the spectral densities of the baths have identical energy dependence, one can formally express the heat current in a form analogous to scattering theory, with an effective transmission coefficient which now depends also on the temperatures of the baths ojanen ; agarwalla . If, in addition, the quantum system S is coupled asymmetrically to the two baths, thermal rectification can take place. Figure 2: Schematic representation of the transmission function, as a function of the energy $\epsilon$, for the positive and negative bias case. Each panel corresponds to a different variation of the transmission function which can give rise to heat rectification. Within this framework, we can identify three possible ways the transmission probability can change upon inverting the temperature bias (from positive to negative). As schematically shown in Fig. 2, the transmission functions can change in height (a), position (b) and width (c). The height shift is the main mechanism that allows rectification even in the weak coupling regime, and it is present whenever one accounts for inelastic processes. The position shift is caused by the real part of the self energy, known as Lamb shift, which accounts for the renormalization of the system energy scales due to the system-bath coupling. Finally, the width of the transmission probability may change when the system is strongly coupled to the baths. In most cases we consider, the width and height shift occur together. Rectification has been studied in different nanoscale devices, such as quantum dots ruokola2011 ; kuo2010 ; vicioso2018 , spin-boson models agarwalla ; segal2005 , non-linear harmonic resonators ruokola2009 ; archak , and hybrid quantum devices wu2009 ; wu2009b ; martinez2013 , to name a few.
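As a toy illustration of mechanism (a), and not the paper's model, one can let the height of the transmission line depend on the bath temperatures — here linearly on $T_{\text{L}}$, an arbitrary illustrative choice. The forward and backward currents then differ in magnitude, and the rectification coefficient of Eq. (14) becomes nonzero (units with $\hbar=k_{\text{B}}=1$):

```python
import math

def n_bose(eps, T):
    # Bose-Einstein distribution (k_B = 1)
    return 1.0 / math.expm1(eps / T)

def heat_current(T_L, T_R, height, e0=2.0, gamma=0.1,
                 eps_max=50.0, n_pts=20000):
    # Same Landauer-type integral as before, but the Lorentzian *height*
    # now depends on the bias direction (mechanism (a) of Fig. 2)
    de = eps_max / n_pts
    s = 0.0
    for i in range(1, n_pts):
        eps = i * de
        trans = height * gamma**2 / ((eps - e0)**2 + gamma**2)
        s += eps * trans * (n_bose(eps, T_L) - n_bose(eps, T_R))
    return s * de / (2 * math.pi)

T, dT = 1.0, 0.8
height = lambda T_L: 0.2 + 0.1 * T_L   # toy temperature dependence of the height
I_pos = heat_current(T + dT/2, T - dT/2, height(T + dT/2))
I_neg = heat_current(T - dT/2, T + dT/2, height(T - dT/2))
R = (I_pos + I_neg) / (I_pos - I_neg)   # rectification coefficient, Eq. (14)
print(R)  # > 0: the larger forward height rectifies the heat flow
```

Because the height simply scales the integrand, here $R=(h_{+}-h_{-})/(h_{+}+h_{-})$, where $h_{\pm}$ are the heights in the positive and negative bias cases.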
In most cases, the weak coupling, wide band approximation has been employed. Asymmetric system-bath coupling and the presence of non-linearities are sufficient conditions to rectify wu2009 . When studying the spin-boson model segal2005 ; agarwalla and the non-linear harmonic resonator ruokola2009 , it has been observed that rectification increases both as a function of the temperature difference and as a function of the asymmetry between the system-bath coupling strengths. The spin-boson model has been studied also beyond the weak coupling regime in Ref. agarwalla, using non-equilibrium Green’s functions or the Non-Interacting-Blip-Approximation (NIBA) segal2005 ; boudjada2014 . Although thermal rectification has been studied in various specific systems, strategies to maximize rectification remain, to a large extent, unclear. Moreover, it is not known if there are any fundamental bounds to the maximum rectification that can be obtained, and what is the impact of quantum coherence on rectification. In this paper, we address these issues by considering, as a prototype model of a multi-level system, an anharmonic quantum oscillator. In the limit of very strong anharmonicity the system reduces to a qubit (two-level system) coupled to two different thermal baths. In our analysis we employed different formalisms to explore different regimes. In the qubit case we used: (1) the Master equation (ME) taking co-tunneling into account, (2) non-equilibrium Green’s functions (NEGF) and (3) exact calculations based on the Feynman-Vernon path integral approach. In the case of arbitrary anharmonicity we used the equation of motion (EOM) method. In the limit of very strong anharmonicity (qubit case), without assuming any specific model for the bath and system-bath Hamiltonian, we studied how to maximize the rectification and we derived general upper bounds valid within the weak coupling regime.
Furthermore, we found that the rectification can be enhanced by exploiting the temperature dependence of the Lamb shift, together with a gapped density of states in the baths. Going beyond the weak-coupling regime we generalized the calculation of Ref. agarwalla, by addressing general spin couplings between the system and the baths, and by including the effect of the Lamb shift. Furthermore, employing the Feynman-Vernon path integral approach, we were also able to study the strong coupling regime in an exactly solvable case. Thanks to the combination of all these approaches it was possible to see that many bounds and limitations emerging in the weak coupling regime can be overcome, and that rectification can be enhanced by higher-order quantum coherent processes. The violation of such bounds provides a clear and simple, experimentally observable, strong-coupling signature of thermal rectification. For smaller anharmonicity, the multi-level dynamics of the system comes into play and the qubit approximation breaks down. Ruokola et al. ruokola2009 studied thermal rectification in a non-linear harmonic resonator using the mean-field Hartree approximation. Such an approximation gives accurate results when the strength of anharmonicity is small compared to other energy scales of the system. In this paper, we go beyond the mean-field approximation, employing the EOM method to study thermal rectification in the strong coupling and large-interaction regime. The paper is organized as follows. In Section II we introduce the model we are going to analyze in the rest of the paper. The two baths, kept at different temperatures, can be of fermionic or bosonic nature. The Hamiltonian of the system is that of an anharmonic oscillator, more specifically with a Kerr-like $U$ non-linearity. In the case of very large $U$, the model reduces to a two-level system.
Different types of coupling between the system and the baths are considered as well to see if different choices may lead to an enhancement of the rectification. The heat current and the rectification coefficient will be defined in Section III. Here we will also introduce an expression for the current that will be used in the remainder of the paper when developing our approximation schemes. Sections IV–VI contain the results of our analysis. We first start by analyzing the $U\to\infty$ case. In Section IV we study the qubit case in the weak coupling regime, while in Section V we go beyond the weak coupling regime. The various approximation schemes here are also compared to an exact solution that we are able to derive for a specific choice of the couplings. In Section VI, we relax the approximation of a two-level system and study thermal rectification in a non-linear resonator as a function of $U$. Finally, in Section VII we draw the conclusions. The Appendices contain several details of the calculations, omitted from the main text to favour the readability of the paper. ## II The model We consider a quantum system S arbitrarily coupled to two thermal baths denoted by L (left) and R (right) [see Fig. 1 for a sketch]. The Hamiltonian governing the dynamics of this setup is given by $\mathcal{H}=\mathcal{H}_{\text{L}}+\mathcal{H}_{\text{R}}+\mathcal{H}_{U}+\mathcal{H}_{\text{L,S}}+\mathcal{H}_{\text{R,S}},$ (1) where $\mathcal{H}_{\alpha}$, for $\alpha=\text{L},\text{R}$, is the Hamiltonian of bath $\alpha$, $\mathcal{H}_{U}$ is the Hamiltonian of the system S and $\mathcal{H}_{\alpha,\text{S}}$ describes the coupling between bath $\alpha$ and S. Each of these components – the baths, the system, and the couplings – contributes in different ways to the thermal properties of the device. Below we describe in detail these different parts.
### II.1 Thermal baths The baths are assumed, as customary, to be “large” quantum systems in equilibrium with a well defined temperature $T_{\alpha}$ (and equal chemical potential $\mu$ in the fermionic case). The Hamiltonian of the bosonic (B) and fermionic (F) baths is given by $\displaystyle\mathcal{H}_{\alpha}^{\text{(B)}}$ $\displaystyle=\sum_{k}\epsilon_{\alpha k}\,b^{\dagger}_{\alpha k}b_{\alpha k},$ (2) $\displaystyle\mathcal{H}_{\alpha}^{\text{(F)}}$ $\displaystyle=\sum_{k}\epsilon_{\alpha k}\,c^{\dagger}_{\alpha k}c_{\alpha k},$ where $b_{\alpha k}$ and $b_{\alpha k}^{\dagger}$ ($c_{\alpha k}$ and $c_{\alpha k}^{\dagger}$) are, respectively, the destruction and creation bosonic (fermionic) operators of an excitation with energy $\epsilon_{\alpha k}$ in bath $\alpha$ and quantum number $k$. The operators satisfy the usual commutation and anticommutation relations: $[b_{\alpha k},b^{\dagger}_{\alpha^{\prime}k^{\prime}}]=\delta_{\alpha,\alpha^{\prime}}\delta_{k,k^{\prime}}$, $[b_{\alpha k},b_{\alpha^{\prime}k^{\prime}}]=0$, $\\{c_{\alpha k},c^{\dagger}_{\alpha^{\prime}k^{\prime}}\\}=\delta_{\alpha,\alpha^{\prime}}\delta_{k,k^{\prime}}$ and $\\{c_{\alpha k},c_{\alpha^{\prime}k^{\prime}}\\}=0$, where $[\dots,\dots]$ and $\\{\dots,\dots\\}$ denote, respectively, the commutator and anticommutator. In the following, for simplicity, we will generically use the symbol $d_{\alpha k}$ to denote both cases and will later specify the nature of the particles forming the bath. 
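As a small consistency check on this setup — a sketch, not part of the paper, with $k_{\text{B}}=1$ — the equilibrium (Gibbs) state of a single bosonic bath mode of energy $\epsilon$ at temperature $T$ reproduces the familiar Bose-Einstein occupation, which can be verified with a truncated Fock-space sum:

```python
import math

def thermal_occupation(eps, T, n_max=400):
    # <b†b> in the Gibbs state ρ ∝ exp(−ε b†b / T), Fock sum truncated at n_max
    weights = [math.exp(-n * eps / T) for n in range(n_max)]
    Z = sum(weights)                          # partition function
    return sum(n * w for n, w in enumerate(weights)) / Z

eps, T = 1.0, 0.5
n_gibbs = thermal_occupation(eps, T)
n_be = 1.0 / math.expm1(eps / T)              # Bose-Einstein distribution
print(n_gibbs, n_be)  # agree to numerical precision
```

The truncation error is of order $e^{-n_{\max}\epsilon/T}$ and is negligible here.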
Since the baths are at thermal equilibrium, the bosonic baths are prepared in a thermal Gibbs state $\rho^{\text{(B)}}_{\alpha}=e^{-\mathcal{H}^{\text{(B)}}_{\alpha}/(k_{\text{B}}T_{\alpha})}/Z^{\text{(B)}}_{\alpha}$, where $Z^{\text{(B)}}_{\alpha}=\Tr\,[e^{-\mathcal{H}^{\text{(B)}}_{\alpha}/(k_{\text{B}}T_{\alpha})}]$ is the partition function of bath $\alpha$, while we assume the fermionic baths to be prepared in the state $\rho^{\text{(F)}}_{\alpha}=e^{-(\mathcal{H}^{\text{(F)}}_{\alpha}-\mu\mathcal{N}_{\alpha})/(k_{\text{B}}T_{\alpha})}/Z^{\text{(F)}}_{\alpha}$, where $Z^{\text{(F)}}_{\alpha}=\Tr[e^{-(\mathcal{H}^{\text{(F)}}_{\alpha}-\mu\mathcal{N}_{\alpha})/(k_{\text{B}}T_{\alpha})}]$ is the grand partition function of bath $\alpha$ and $\mathcal{N}_{\alpha}=\sum_{k}c^{\dagger}_{\alpha k}c_{\alpha k}$ is the particle number operator of bath $\alpha$. ### II.2 The System The system S connecting the two reservoirs is a multi-level quantum system. As discussed in the introduction, as a paradigmatic case also relevant for experiments, we will consider a non-linear resonator whose Hamiltonian is given by $\mathcal{H}_{U}=\Delta b^{\dagger}b+\frac{U}{2}b^{\dagger}b^{\dagger}bb,$ (3) where $\Delta$ determines the frequency of the harmonic resonator, $b$ ($b^{\dagger}$) is a bosonic destruction (creation) operator, and $U$ describes the strength of the non-linear term. The essential ingredients, we believe, are the multi-level structure of the spectrum and its non-harmonic nature. In this perspective, the Kerr-like form represents a generic situation capturing all these features. In fact it bridges from weakly anharmonic systems like the transmon koch2007 for small values of $U$ to multilevel qubits like the fluxonium manucharyan2009 ; koch2013 or the phase qubits martinis2002 for increasing values of the anharmonicity parameter.
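Since $\mathcal{H}_{U}$ is diagonal in the Fock basis, its spectrum is $E_{n}=\Delta n+\tfrac{U}{2}n(n-1)$, so consecutive transition energies grow as $\Delta+Un$. A short sketch (with illustrative parameter values) makes the role of the anharmonicity explicit:

```python
def kerr_levels(Delta, U, n_max=6):
    # Eigenvalues of H_U = Δ b†b + (U/2) b†b†bb in the Fock basis:
    # b†b†bb |n> = n(n−1) |n>, hence E_n = Δ n + (U/2) n (n − 1)
    return [Delta * n + 0.5 * U * n * (n - 1) for n in range(n_max)]

Delta, U = 1.0, 0.3
E = kerr_levels(Delta, U)
gaps = [E[n + 1] - E[n] for n in range(len(E) - 1)]
print(gaps)  # Δ, Δ+U, Δ+2U, ...: each extra excitation costs U more
# For U much larger than Δ and k_B T, only the |0> <-> |1> transition,
# of energy Δ, remains accessible: the qubit limit is recovered.
```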
In the limit of large $U$, only the two number states $|n=0\rangle$ and $|n=1\rangle$ are relevant for the dynamics and the corresponding Hamiltonian reduces to that of a qubit ($U=\infty$) $\mathcal{H}_{\infty}=\frac{\Delta}{2}\sigma_{z},$ (4) (we dropped an irrelevant constant) where $\Delta$ is the energy spacing between the ground and excited state, and $\sigma_{z}$ denotes a Pauli matrix. Physically, in a bosonic system the qubit may represent a non-linear harmonic oscillator where the interaction is so strong that only the first two states are energetically accessible. ### II.3 System-bath coupling The coupling allows energy exchange between the baths and S. In the $U=\infty$ (qubit) case, we can write the most general system-bath interaction as ${\cal H}_{\alpha,\text{S}}=\sigma^{+}\otimes B_{\alpha}+\sigma^{-}\otimes B_{\alpha}^{\dagger}+\sigma_{z}\otimes B_{\alpha z},$ (5) where $B_{\alpha}$ is an arbitrary operator (not necessarily Hermitian) acting on the Hilbert space of bath $\alpha$, while $B_{\alpha z}$ is a Hermitian operator acting on the space of bath $\alpha$ (see App. A for details). This expression can be derived by expanding the operators acting on the tensor space of S and of the bath onto the product basis, and choosing the Pauli matrices and the identity as a basis of Hermitian operators acting on the qubit space. Aside from deriving some general properties, throughout this paper we will mainly consider the “linear coupling” and the “non-linear coupling” cases, i.e. $\displaystyle B_{\alpha}^{\text{(lin)}}$ $\displaystyle=\sum_{k}V_{\alpha k}\,d_{\alpha k},$ (6) $\displaystyle B_{\alpha}^{\text{(non-lin)}}$ $\displaystyle=\sum_{k}V_{\alpha k}\,d^{2}_{\alpha k},$ (7) respectively. Obviously the non-linear coupling [Eq. (7)] only applies to the case of boson baths. The coupling strength is determined by $V_{\alpha k}$. When assessing strong coupling effects, we will focus on the spin-boson model, i.e.
we will consider a bosonic bath coupled to the qubit via the following interaction $\mathcal{H}_{\alpha,\text{S}}=\sum_{i=x,y,z}{u}_{\alpha,i}\sigma_{i}\otimes\sum_{k}V_{\alpha k}\,(b_{\alpha k}+b_{\alpha k}^{\dagger}),$ (8) where $\vec{u}_{\alpha}=(\sin\theta_{\alpha}\cos\phi_{\alpha},\sin\theta_{\alpha}\sin\phi_{\alpha},\cos\theta_{\alpha})$ is a unit vector parametrized by the angles $\theta_{\alpha}$ and $\phi_{\alpha}$. In the generic non-linear resonator case (finite $U$), we will consider bosonic baths. Interactions are already present in S, so we will focus on the following linear coupling $\mathcal{H}_{\alpha,\text{S}}=\sum_{k}V_{\alpha k}\,b_{\alpha k}^{\dagger}b+h.c.$ (9) As we will see in the following, the system-bath interaction can be conveniently characterized by the spectral density $\Gamma_{\alpha}(\epsilon)=2\pi\sum_{k}\delta(\epsilon-\epsilon_{\alpha k})V_{\alpha k}V^{*}_{\alpha k}.$ (10) Taking the continuum limit for the energy spacing of the baths, and assuming that the coupling constants $V_{\alpha k}$ only depend on the energy $\epsilon_{\alpha k}$, we can rewrite Eq. (10) as $\Gamma_{\alpha}(\epsilon)=2\pi D_{\alpha}(\epsilon)|V_{\alpha}(\epsilon)|^{2},$ (11) where $D_{\alpha}(\epsilon)$ is the density of states of bath $\alpha$, and $V_{\alpha}(\epsilon_{\alpha k})=V_{\alpha k}$. In the following, we will consider generic spectral densities for the two baths. In some cases, explicitly mentioned, we will focus on bosonic baths with Ohmic spectral densities and an exponential cut-off energy $\epsilon_{\text{C}}$, i.e. $\Gamma_{\alpha}(\epsilon)=\pi K_{\alpha}\,\epsilon\,e^{-\epsilon/\epsilon_{\rm C}}\equiv K_{\alpha}J(\epsilon),$ (12) where $K_{\alpha}$ is the dimensionless Ohmic coupling strength weiss . ## III Heat current and rectification coefficient We are interested in studying the steady-state heat current flowing across the device when a temperature bias is imposed between the baths. Specifically, as depicted in Fig. 
1, we fix $T_{\text{L}}=T+\Delta T/2$ and $T_{\text{R}}=T-\Delta T/2$, where $T$ is the average temperature. Since no work is performed on the system (in the fermionic case we consider no chemical potential bias), the first principle of thermodynamics tells us that heat will flow from left to right if $\Delta T>0$ (positive bias case, see Fig. 1a), otherwise it will flow from right to left (negative bias case, see Fig. 1b). Furthermore, since we consider steady state currents, the heat flowing out of one bath is equal to that flowing into the other bath. Therefore, for simplicity we define the heat flowing out of the left lead as $I(\Delta T)\equiv-\lim_{t\to+\infty}\,\frac{d}{dt}\expectationvalue*{H_{\text{L}}}(t),$ (13) where $\expectationvalue{\dots}(t)=\Tr[\rho(t)\dots]$, $\rho(t)$ being the density matrix representing the state of the total system at time $t$. Notice that the time variation of the energy associated with the coupling Hamiltonian vanishes in steady state ludovico2016 . According to the definition (13), the heat current $I(\Delta T)$ is positive when $\Delta T>0$ and negative when $\Delta T<0$. As discussed in the introduction, it is possible to construct devices where the magnitude of the heat current depends on the sign of the temperature bias. Specifically, if the left-right symmetry is broken, the magnitude of the heat current $I(\Delta T)$ induced by a positive bias may differ from the magnitude of $I(-\Delta T)$, which is the heat current induced by a negative bias. We therefore define the rectification coefficient $R$ as $R=\frac{I(\Delta T)+I(-\Delta T)}{I(\Delta T)-I(-\Delta T)},$ (14) for $\Delta T>0$ [so that $I(-\Delta T)<0$ and the numerator represents the difference of the magnitudes of the currents]. The definition is such that $|R|\leq 1$. Furthermore, $R=0$ means that no rectification takes place, while $|R|=1$ means that we have perfect rectification (i.e.
the heat current is finite in one direction, and null in the other). Positive (negative) values of $R$ indicate that the heat flow is greater for positive (negative) temperature biases. For later convenience, it is useful to write the current in a Meir-Wingreen meirwingreen form which will make apparent the necessary ingredients for rectification. Starting from the formal definition of the heat current given in Eq. (13), we can simplify the calculation of the heat current using a standard procedure known as “bath embedding” stefanucci , which is valid whenever the operators of the bath appear linearly in $\mathcal{H}_{\alpha,\text{S}}$. This approach applies to all models except for the qubit with non-linear coupling, Eq. (7), which will be treated in the weak coupling regime only. Under this hypothesis, the formally exact Meir-Wingreen-type formula meirwingreen for the heat current can be written as ojanen ; velizhanin ; wang2006 ; saito2008 ; segal2014 $I(\Delta T)=\int\frac{d\epsilon}{2\pi\hbar}\,\epsilon\Tr\left[G^{<}(\epsilon)\Sigma_{\text{L}}^{>}(\epsilon)-G^{>}(\epsilon)\Sigma^{<}_{\text{L}}(\epsilon)\right],$ (15) where the integration is performed over $[0,+\infty]$ ($[-\infty,+\infty]$) for bosonic (fermionic) baths, and the $\Tr[...]$ runs over the internal degrees of freedom of the system S. In the previous expression $G^{\lessgtr}(\epsilon)$ is the Fourier transform of the lesser/greater Green’s function of S, in the presence of the baths, defined as $G^{<}(t-t^{\prime})=\mp i\expectationvalue*{d^{\dagger}(t^{\prime})d(t)}$ and $G^{>}(t-t^{\prime})=-i\expectationvalue*{d(t)d^{\dagger}(t^{\prime})}$ (upper sign for bosons and lower sign for fermions).
Moreover, $\Sigma_{\text{L}}^{<}(\epsilon)=\mp i\Gamma_{\text{L}}(\epsilon)n_{\text{L}}(\epsilon)$ and $\Sigma_{\text{L}}^{>}(\epsilon)=-i\Gamma_{\text{L}}(\epsilon)(1\pm n_{\text{L}}(\epsilon))$ are, respectively, the Fourier transform of the lesser and greater embedded self energies induced by the left bath, and $n_{\rm L}(\epsilon)$ denotes the energy distribution of the left bath. Therefore, $n_{\text{L}}(\epsilon)=(e^{\epsilon/(k_{\text{B}}T_{\text{L}})}-1)^{-1}$ for bosonic baths, while $n_{\text{L}}(\epsilon)=(e^{(\epsilon-\mu)/(k_{\text{B}}T_{\text{L}})}+1)^{-1}$ for fermionic baths. The only quantities which must be determined in Eq. (15) are $G^{\lessgtr}(\epsilon)$. There is a typical situation in which Eq. (15) can be written as a simpler and more transparent expression. Namely, if the spectral densities $\Gamma_{\alpha}(\epsilon)$ of the baths are proportional, i.e. $\Gamma_{\text{L}}(\epsilon)\propto\Gamma_{\text{R}}(\epsilon)$, we can write Eq. (15) as jauho $I(\Delta T)=\int\frac{d\epsilon}{2\pi\hbar}\,\epsilon\,\mathcal{T}(\epsilon,T,\Delta T)\left[n_{\text{L}}(\epsilon)-n_{\text{R}}(\epsilon)\right],$ (16) where $\mathcal{T}(\epsilon,T,\Delta T)=i\Tr\left\\{\frac{\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)}{\Gamma_{\text{L}}(\epsilon)+\Gamma_{\text{R}}(\epsilon)}[G^{>}(\epsilon)-G^{<}(\epsilon)]\right\\}$ (17) and $n_{\alpha}(\epsilon)$ denotes the energy distribution of bath $\alpha$. This formula was used in Ref. agarwalla to study the spin-boson problem. The dependence of $\mathcal{T}(\epsilon,T,\Delta T)$ on the temperatures may arise from $G^{\lessgtr}(\epsilon)$, which are indeed correlation functions of $S$ computed in the presence of the baths. We notice that, in the absence of this temperature dependence, the magnitude of the heat current would remain the same in the positive and negative bias cases, and there would be no thermal rectification. Indeed, this is the case for non-interacting systems, where Eq. 
(17) reduces to the well-known scattering formula with a transmission function that does not depend on the temperature of the baths. It is therefore crucial to introduce a non-linearity in the local system or in the coupling Hamiltonian to observe thermal rectification. By computing the Green's function in certain approximation schemes, rectification can be explored at different orders in the system-bath coupling. We will first consider the weak coupling approximation where the results can equivalently be derived by a master equation. ## IV $U=\infty$ \- Weak coupling regime We start our analysis by considering the case of a two-level system (the $U=\infty$ case). Furthermore, in this Section, we derive general properties and upper bounds to the rectification coefficient $R$ in the limit in which the baths are weakly coupled to the qubit. This regime is obtained by performing a leading order expansion in $\mathcal{H}_{\alpha,\text{S}}$. At this level the baths are effectively treated as Markovian breuer2002 and transport is described by a Lindblad master equation for the reduced density matrix of the system. Heat transport takes place via sequential tunneling processes where the transition from the ground (excited) to the excited (ground) state involves single photon absorption (emission) in the bosonic case boese2001 ; segal2005 . The approximations considered in this Section lead to height and position shifts in the transmission function (see Fig. 2), but width shifts are neglected. Indeed, in this regime the width of the transmission function is the smallest energy scale, so it is infinitesimal, see Eq. (63). Width shifts will appear beyond the weak coupling regime, as discussed in the following Sections. Under weak coupling, the rectification ratio is found to be (see App.
B for details) $R=\frac{\Upsilon^{-1}_{\text{L}}(T_{\text{L}})+\Upsilon^{-1}_{\text{R}}(T_{\text{R}})-\Upsilon^{-1}_{\text{L}}(T_{\text{R}})-\Upsilon^{-1}_{\text{R}}(T_{\text{L}})}{\Upsilon^{-1}_{\text{L}}(T_{\text{L}})+\Upsilon^{-1}_{\text{R}}(T_{\text{R}})+\Upsilon^{-1}_{\text{L}}(T_{\text{R}})+\Upsilon^{-1}_{\text{R}}(T_{\text{L}})},$ (18) where $\Upsilon_{\alpha}(T)=[1+e^{-\Delta/(k_{B}T)}]\Upsilon^{-}_{\alpha}(T)$ is the total dissipation rate induced by bath $\alpha$ footnote1 . The rate $\Upsilon_{\alpha}^{-}(T)$ is associated with the transition of the qubit from the excited to the ground state by exchanging energy with bath $\alpha$. Since bath $\alpha$ is prepared in a thermal state, the other rate $\Upsilon^{+}_{\alpha}(T)$ (from the ground to the excited state) is related to it by detailed balance. The dissipation rate can then be calculated by evaluating $\Upsilon^{-}_{\alpha}(T)=\frac{1}{\hbar^{2}}\int\nolimits_{-\infty}^{+\infty}dt\,e^{i\Delta t/\hbar}\expectationvalue{B_{\alpha}(t)B_{\alpha}^{\dagger}(0)},$ (19) where the expectation value is taken with respect to the equilibrium thermal state $\rho_{\alpha}$ of the bath, $\Delta$ is the energy spacing of the qubit, and $B_{\alpha}(t)$ and $B^{\dagger}_{\alpha}(t)$ are interaction picture operators [see App. C for a derivation of Eq. (19)]. Notice that Eq. (18) holds for arbitrary spectral densities of the baths. The role of the Lamb shift, which is neglected in Eq. (18), will be discussed in Section IV.4. We can now study $R$ for any weakly coupled system using Eqs. (18) and (19), which are generally easy to compute (we will consider various models explicitly in the following subsections). As a consequence of the weak coupling assumption, the coupling term $\sigma_{z}\otimes B_{\alpha z}$ [see Eq. (5)] does not contribute to $I(\Delta T)$, and therefore not to $R$. This is due to the fact that, in the weak coupling regime, the heat current is mediated by transitions in S.
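Equation (18) is straightforward to evaluate numerically. The sketch below (Python, with $k_{\rm B}=\hbar=1$) implements it for dissipation rates of the factorized form $\Upsilon_{\alpha}(T)=\Gamma_{\alpha}\,g(T)$, using the bosonic-like temperature dependence $g(T)=\coth[\Delta/(2k_{\rm B}T)]$ as an illustrative choice (cf. Eq. (25) below); the values of $\Gamma_{\alpha}$ and of the temperatures are arbitrary:

```python
import numpy as np

def R_weak(rate_L, rate_R, T_L, T_R):
    """Rectification coefficient of Eq. (18), built from the total
    dissipation rates Upsilon_alpha(T) evaluated at both bath temperatures."""
    iLL = 1.0 / rate_L(T_L)   # Upsilon_L^{-1}(T_L)
    iRR = 1.0 / rate_R(T_R)   # Upsilon_R^{-1}(T_R)
    iLR = 1.0 / rate_L(T_R)   # Upsilon_L^{-1}(T_R)
    iRL = 1.0 / rate_R(T_L)   # Upsilon_R^{-1}(T_L)
    return (iLL + iRR - iLR - iRL) / (iLL + iRR + iLR + iRL)

# Illustrative rates of the factorized form Upsilon_alpha = Gamma_alpha * g(T),
# with the bosonic linear-coupling g(T) = coth(Delta/2T) (k_B = hbar = 1)
Delta, G_L, G_R = 1.0, 1.0, 0.2
g = lambda T: 1.0 / np.tanh(Delta / (2.0 * T))
rate_L = lambda T: G_L * g(T)
rate_R = lambda T: G_R * g(T)

lam = (G_L - G_R) / (G_L + G_R)            # asymmetry coefficient, Eq. (21)
R = R_weak(rate_L, rate_R, T_L=2.0, T_R=0.5)
R_sym = R_weak(rate_L, rate_L, 2.0, 0.5)   # identical baths: no rectification
```

The checks confirm that identical baths give $R=0$ and that $|R|$ never exceeds the asymmetry $|\lambda|$ of Eq. (21), anticipating the bound of Eq. (23).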
Therefore, the heat current only depends on the population of the qubit, which in turn is solely determined by the coupling terms proportional to $\sigma^{+}$ and $\sigma^{-}$ (see App. B for more details). There is nevertheless ample room for optimizing the rectification by considering different coupling Hamiltonians. We decompose the dissipation rate as $\Upsilon_{\alpha}(\Delta,T)=\Gamma_{\alpha}(\Delta)g_{\alpha}(\Delta,T),$ (20) where $\Gamma_{\alpha}(\Delta)$ is the spectral density of bath $\alpha$, given by Eq. (11), and $g_{\alpha}(\Delta,T)\geq 0$ are arbitrary non-negative functions. In App. D we explicitly evaluate $g_{\alpha}(\Delta,T)$ for various models. The rest of the section is organized as follows: in Section IV.1 we consider the case in which the two functions $g_{\rm L}(\Delta,T)$ and $g_{\rm R}(\Delta,T)$ are equal. Under this mild condition on the baths, we derive general bounds on $R$. We then study the impact of linear and non-linear coupling to the baths. In Section IV.2 we show that rectification can be enhanced in the case in which $g_{\rm L}\neq g_{\rm R}$. In Section IV.3, we study a generic spin coupling to the bath. Finally, in Section IV.4 we show how the Lamb shift can be exploited to further enhance rectification. ### IV.1 $g_{\rm L}=g_{\rm R}$ case In this section we assume that, in Eq. (20), $g_{\rm L}(\Delta,T)=g_{\rm R}(\Delta,T)\equiv g(\Delta,T)$. This implies that the dissipation rates of the two baths, as functions of temperature, are equal up to a prefactor. However, the rates may have any dependence on the gap of the qubit through the spectral densities. Having defined the asymmetry coefficient $\lambda$ as $\lambda=\frac{\Gamma_{\text{L}}(\Delta)-{\Gamma}_{\text{R}}(\Delta)}{\Gamma_{\text{L}}(\Delta)+{\Gamma}_{\text{R}}(\Delta)},$ (21) such that $|\lambda|\leq 1$, and using Eq. (20), one can cast Eq.
(18) into the simple form $R=\lambda\,\frac{g(\Delta,T_{\text{C}})-g(\Delta,T_{\text{H}})}{g(\Delta,T_{\text{C}})+g(\Delta,T_{\text{H}})}.$ (22) Without specifying the precise model, we can derive the following general properties of $R$: * • If $\lambda>0$, $R$ is a decreasing function of $g(\Delta,T_{\text{H}})$, and an increasing function of $g(\Delta,T_{\text{C}})$ (the monotonicity is inverted if $\lambda<0$). Therefore, if $g(\Delta,T)$ is monotonic with respect to $T$, then $R$ is monotonic with respect to $\Delta T$. * • $R$ is linear, therefore monotonic, with respect to $\lambda$. * • Given the first property, we can maximize the possible rectification by taking the limits where $g(\Delta,T_{\text{H}})$ and $g(\Delta,T_{\text{C}})$ respectively tend to zero and infinity. This yields the following bound $|R|\leq|\lambda|.$ (23) As a consequence, the maximum rectification is severely limited by the asymmetry ratio $\lambda$. As expected, for $\lambda=0$ we find that there is no rectification, and the only way to obtain perfect rectification is to have a vanishingly small coupling to one bath. * • Given the second property, $|R|$ is bounded by $|(g(\Delta,T_{\text{C}})-g(\Delta,T_{\text{H}}))/(g(\Delta,T_{\text{C}})+g(\Delta,T_{\text{H}}))|$. We therefore have stronger rectification when $g(\Delta,T)$ has a strong temperature dependence. #### IV.1.1 Linear system-bath tunnel couplings In this subsection we study heat rectification through a qubit where the coupling to the baths is linear, i.e. defined by Eq. (6). For fermionic baths weakly coupled to the qubit, we have (see App. D.1 for details) $g(\Delta,T)=1.$ (24) Plugging this value into Eq. (22) shows that no rectification is possible senior2019 . This is indeed expected, since a qubit coupled to fermionic reservoirs can be described by a non-interacting fermionic Hamiltonian, where the Landauer-Büttiker formula can be used to compute the heat current. Next, we consider bosonic baths.
In this case, as shown in App. D.2, we have that $g(\Delta,T)=\coth{[\Delta/(2k_{\rm B}T)]},$ (25) so rectification is possible. In particular, we find the following properties: * • Since $g(\Delta,T)$ is a monotonically increasing function of $T$, $\lambda$ and $R$ have opposite signs. This means that more heat flows out of the lead which is more weakly coupled. * • Since $g(\Delta,T)$ is a monotonic function of $T$, the rectification increases with $\Delta T$. * • Since $g(\Delta,T)$ is never zero, but it diverges for $T\to\infty$, the bound in Eq. (23) is saturated only in the limit of an infinitely hot reservoir ($T_{\text{H}}\to\infty$). * • It can be explicitly seen that $R$ is a decreasing function of the gap $\Delta$, so it is maximum in the limit $\Delta\to 0$. In this limit, we can expand the hyperbolic cotangent, finding the following bound $|R|\leq\lambda\,\frac{T_{\text{H}}-T_{\text{C}}}{T_{\text{H}}+T_{\text{C}}}.$ (26) #### IV.1.2 Bosonic baths with non-linear coupling For the non-linear coupling, $g(\Delta,T)$ is defined through the relation $\Upsilon_{\alpha}(\Delta,T)=\Gamma_{\alpha}(\Delta/2)g(\Delta,T)/2$. By replacing $\Gamma_{\alpha}(\Delta)$ with $\Gamma_{\alpha}(\Delta/2)$ in the definition of $\lambda$, Eq. (21), the function $g(\Delta,T)$ reads [see App. D.3] $g(\Delta,T)=1+\coth^{2}[\Delta/(4k_{\rm B}T)].$ (27) The following properties then hold: * • $g(\Delta,T)$, as a function of $T$, has the same monotonicity as in the bosonic case with linear coupling, it only diverges for $T\to\infty$, and it is never zero. Therefore, it shares the first three properties of Subsection IV.1.1. * • Also in this case, $g(\Delta,T)$ is monotonic with respect to $\Delta$, such that $R$ is maximized in the limit $\Delta\to 0$. Performing an expansion for small $\Delta$, we find the following bound $|R|\leq\lambda\,\frac{T_{\text{H}}^{2}-T_{\text{C}}^{2}}{T_{\text{H}}^{2}+T_{\text{C}}^{2}}.$ (28) Comparing with Eq.
(26), we see that the non-linear coupling may be more effective in rectifying the heat current. This can be explicitly verified by comparing the exact expressions of $R$ using Eq. (22). ### IV.2 $g_{\rm L}\neq g_{\rm R}$ case It is natural to ask whether allowing arbitrary functions $g_{\rm L}(\Delta,T)$ and $g_{\rm R}(\Delta,T)$ in Eq. (20) results in stronger rectification. Notice that $g_{\rm L}$ and $g_{\rm R}$ are different whenever the correlation function in Eq. (19) is different for the two baths. This may happen when considering different bath Hamiltonians, and/or different coupling Hamiltonians. As an example, we take two bosonic baths but consider two different coupling Hamiltonians. We assume the left bath to be linearly coupled to the qubit (as in Section IV.1.1) and the right bath to be non-linearly coupled to the qubit (as in Section IV.1.2). Plugging the respective rates for the left and the right lead into Eq. (18) yields an exact expression for $R$ which in the limit $\Delta\to 0$ is simply given by $R=\frac{T_{\text{H}}-T_{\text{C}}}{T_{\text{H}}+T_{\text{C}}}$ (29) regardless of $\lambda$ [under the obvious assumption that $\Gamma_{\alpha}(\Delta\to 0)$ does not diverge]. Notably, a general property of the regime analyzed in Section IV.1 is that $|R|\leq|\lambda|$, while here we have rectification even for $\lambda=0$, and $R$ can be brought arbitrarily close to unity simply by choosing larger and larger temperature differences. This shows that, in general, an asymmetry in the form of the system-bath couplings can produce large rectification coefficients. ### IV.3 Generic spin coupling In this section, we investigate what happens when the qubit is coupled to the baths through the same arbitrary bath operators, but through different Pauli spin matrices. As an example, we consider the coupling Hamiltonian given in Eq. (8) with $V_{\text{L}k}=V_{\text{R}k}$, although non-linear couplings can also be treated on the same footing. As shown in App.
D.4, this system can be mapped into the case discussed in Section IV.1 with an effective ${\Gamma}_{\alpha}(\Delta)\propto\sin^{2}{\theta_{\alpha}}$. Therefore, all the properties derived in Section IV.1 hold in this case, where the asymmetry coefficient is given by $\lambda=\frac{\sin^{2}\theta_{\text{L}}-\sin^{2}\theta_{\text{R}}}{\sin^{2}\theta_{\text{L}}+\sin^{2}\theta_{\text{R}}},$ (30) while the function $g(\Delta,T)$ depends on the bath and system-bath Hamiltonian [in the specific case of Eq. (8), we have $g(\Delta,T)=\coth{[\Delta/(2k_{\rm B}T)]}$, see Eq. (25)]. The rectification does not depend on the angle $\phi_{\alpha}$; as we will show, this property does not hold beyond the weak coupling regime due to coherent transport effects. The only relevant parameter is $\theta_{\alpha}$, which is the angle between the coupling term and the qubit Hamiltonian (proportional to $\sigma_{z}$). Since the rectification is linear in $\lambda$, it is maximal when $\theta_{\text{L}}$ is $0$ and $\theta_{R}$ is $\pi/2$, or vice versa. ### IV.4 Role of the Lamb-Shift Until now we have ignored the Lamb shift, i.e. the renormalization of the energy gap of the qubit induced by the presence of the baths. The renormalization of the qubit splitting depends on both (L/R) temperatures, and thus may influence the rectification properties of the device. This allows for rectification that is not subject to the bounds derived in the previous sections. As shown in App. F, if the system-bath Hamiltonian does not contain terms proportional to $\sigma_{z}$ (i.e.
$B_{\alpha z}=0$), the Lamb shift Hamiltonian (which has to be added to the bare Hamiltonian $\mathcal{H}_{\infty}$) takes the following form breuer2002 : $\tilde{\mathcal{H}}=\left[\delta\Delta_{\text{L}}(\Delta,T_{\text{L}})+\delta\Delta_{\text{R}}(\Delta,T_{\text{R}})\right]\sigma_{z},$ (31) where $\delta\Delta_{\alpha}(\epsilon,T)=\frac{1}{2\pi}\mathcal{P}\int\nolimits_{-\infty}^{+\infty}\frac{\Upsilon_{\alpha}(\epsilon^{\prime},T)}{\epsilon-\epsilon^{\prime}}d\epsilon^{\prime}.$ (32) In Eq. (32), $\mathcal{P}$ indicates a Cauchy principal value integration. We recall that the $\Delta$ appearing in Eq. (31) is the bare gap. The renormalized gap is therefore given by $\tilde{\Delta}(\Delta T)=\Delta+\delta\Delta_{\text{L}}(\Delta,T+\Delta T/2)+\delta\Delta_{\text{R}}(\Delta,T-\Delta T/2),$ (33) and it may change upon inverting the temperature bias ($\Delta T\to-\Delta T)$. In the presence of a Lamb shift, $R$ is still given by Eq. (18) provided that we replace $\Delta\to\tilde{\Delta}(\Delta T)$. In general, we notice that the renormalization terms $\delta\Delta_{\alpha}(\Delta,T_{\alpha})$ are of the same order in the coupling strength as the rates $\Upsilon_{\alpha}(\epsilon,T_{\alpha})$ (which are evaluated at leading order in the coupling). Therefore, if the rates $\Upsilon_{\alpha}(\epsilon,T_{\alpha})$ are smooth functions of $\epsilon$, their variation due to the Lamb shift will be beyond leading order in the coupling strength. The effect of the Lamb shift on rectification is thus negligible in the weak coupling regime when the spectral density of the baths is a smooth function of the energy (on the $\hbar\Upsilon_{\alpha}$ scale). On the contrary, the Lamb shift may become relevant for rectification whenever there is a strong energy dependence in $\Upsilon_{\alpha}(\epsilon,T_{\alpha})$, for example, if the density of states of the baths has a gap.
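The sketch below (Python, $k_{\rm B}=\hbar=1$) evaluates the principal-value integral of Eq. (32) by pole subtraction for a gapped Ohmic rate of the form used in Eq. (34) below; the coupling values and the bare gap are illustrative choices, not taken from the text. It confirms that, for $K_{\rm L}\neq K_{\rm R}$, the renormalized gap of Eq. (33) differs between the positive and negative bias cases:

```python
import numpy as np

kB = hbar = 1.0                       # units
eps0, epsC = 4/3, 20/3                # gap and cutoff of the bath DOS (as in Fig. 3)
T, dT, Delta = 1.0, 2/3, 1.2          # bare gap chosen inside the DOS gap (illustrative)
K_L, K_R = 0.01, 0.05                 # asymmetric couplings, K_R/K_L = 5

trapz = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rate(eps, T, K):
    # gapped Ohmic dissipation rate, cf. Eq. (34)
    eps = np.asarray(eps, dtype=float)
    return np.where(eps > eps0,
                    np.pi * K * eps * np.exp(-eps / epsC) / np.tanh(eps / (2 * T)),
                    0.0)

def lamb_shift(eps, T, K, Lam=60.0, n=120001):
    """Principal-value integral of Eq. (32) via pole subtraction:
    P int f(e')/(eps-e') de' = int [f(e')-f(eps)]/(eps-e') de'
                               + f(eps) * ln|(eps-eps0)/(eps-Lam)|."""
    ep = np.linspace(eps0 + 1e-9, Lam, n)
    f = rate(ep, T, K)
    f0 = float(rate(eps, T, K))
    pv = trapz((f - f0) / (eps - ep), ep)
    pv += f0 * np.log(abs((eps - eps0) / (eps - Lam)))
    return pv / (2 * np.pi)

def renormalized_gap(dT):             # Eq. (33)
    return (Delta + lamb_shift(Delta, T + dT / 2, K_L)
                  + lamb_shift(Delta, T - dT / 2, K_R))

gap_pos, gap_neg = renormalized_gap(dT), renormalized_gap(-dT)
```

Since the coth factor in the rate grows with temperature, the left and right shifts do not compensate when $K_{\rm L}\neq K_{\rm R}$, so inverting the bias moves the renormalized gap.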
As we will show in detail in the following, even a small renormalization of the gap can have a large impact on the current. Figure 3: Upper panel: the bare gap $\Delta$, the renormalized gap $\tilde{\Delta}(\Delta T)$ in the positive bias case and the renormalized gap $\tilde{\Delta}(-\Delta T)$ in the negative bias case, as a function of the bare gap $\Delta$. The dashed gray line corresponds to the gap $\epsilon_{0}$ in the density of states of the baths, while the region highlighted in gray shows where the renormalized gaps are respectively larger and smaller than $\epsilon_{0}$. Lower panel: the heat currents $I(\Delta T)$ and $|I(-\Delta T)|$. In the highlighted region we have perfect rectification (up to higher order corrections in the coupling strength). The parameters are: $K_{\text{R}}/K_{\text{L}}=5$, $\epsilon_{\rm C}=(20/3)\,k_{\rm B}T$, $\epsilon_{0}=(4/3)\,k_{\rm B}T$ and $\Delta T/T=2/3$. Note that the current is plotted in units of $(k_{B}T)^{2}/\hbar$ and, once the ratio $K_{\text{R}}/K_{\text{L}}$ is fixed, scales with $K_{\rm L}$. We consider two bosonic baths with a cutoff frequency $\epsilon_{\rm C}$ and a gap in the density of states $\epsilon_{0}$, $\Upsilon_{\alpha}(\epsilon,T_{\alpha})=\frac{\pi}{\hbar}\,K_{\alpha}\,\theta(\epsilon-\epsilon_{0})\,\epsilon\,e^{-\epsilon/\epsilon_{\rm C}}\coth{[\epsilon/(2k_{\rm B}T_{\alpha})]},$ (34) where $\theta(\epsilon)$ is the Heaviside function and $K_{\alpha}$ is the dimensionless Ohmic coupling strength introduced in Eq. (12). In the upper panel of Fig. 3 we show the bare gap $\Delta$ (black curve), the renormalized gap $\tilde{\Delta}(\Delta T)$ for the positive bias case (blue curve), and the renormalized gap $\tilde{\Delta}(-\Delta T)$ for the negative bias case (green curve), as a function of the bare gap $\Delta$. The renormalized gaps are different in the positive and negative bias cases. In particular, in the highlighted region $\tilde{\Delta}(-\Delta T)$ is inside the gap, i.e. 
it is smaller than $\epsilon_{0}$ (dashed gray line), while $\tilde{\Delta}(\Delta T)$ is outside the gap. We therefore expect a finite heat current in the latter case, and a zero heat current in the former. This is confirmed in the lower panel of Fig. 3 where $I(\Delta T)$ and $|I(-\Delta T)|$ are plotted as a function of the bare gap. The heat currents are computed using Eq. (63) with $\Delta\to\tilde{\Delta}(\Delta T)$ to account for the Lamb shift. The highlighted region is the one in which perfect rectification is possible. This is, however, an idealized situation. The inclusion of higher order effects in the coupling strength (for example co-tunneling effects) will reduce the rectification. Indeed, the perfect rectification visible in the gray region in the lower panel of Fig. 3 is a consequence of the current $I(-\Delta T)$ being directly proportional to the density of states, therefore exactly zero for $\tilde{\Delta}(-\Delta T)<\epsilon_{0}$. On the other hand, higher order effects create small yet finite currents even in this parameter range. Nonetheless, it is useful to have identified a mechanism to enhance rectification by exploiting the Lamb shift. ## V $U=\infty$ \- Beyond weak coupling As shown in the previous section, in the weak-coupling regime there are bounds to the rectification coefficient $R$. In this section, we show that some of these bounds can be overcome by increasing the coupling between the system and the baths. We will see that quantum coherence can be beneficial for rectification. From the point of view of the transmission function, see Fig. 2, going beyond the weak coupling regime also allows us to consider the effect of a width shift, which was neglected in the sequential tunneling regime. For concreteness we focus on bosonic baths and consider three different approaches. First, we include co-tunneling contributions to the heat current in the sequential tunneling regime derived from the master equation.
The importance of considering co-tunneling resides not only in improving on the weak-coupling analysis; it also provides an important guide for interpreting the results derived with the other two methods we employ. The first is an approach based on non-equilibrium Green’s function theory (NEGF); the second is a formally exact solution for the heat current, valid for general spectral densities and coupling strengths, derived within the Feynman-Vernon path-integral approach. For Ohmic baths we consider the special strong coupling condition characterized by $K_{R}+K_{L}=1/2$ where the heat current is derived in closed form. This solution, which extends the Toulouse limit of the spin-boson model to the non-equilibrium case, also provides a benchmark for the non-perturbative non-equilibrium Green’s function results. In addition, it holds beyond the non-adiabatic regime treated in the non-interacting blip approximation (NIBA) segal2005 ; agarwalla ; boudjada2014 . We will mainly consider two different couplings: the “XX coupling”, where both left and right baths are coupled to the system through $\sigma_{x}$, i.e. $\theta_{\text{L}}=\theta_{\text{R}}=\pi/2,\;\phi_{\rm L}=0,\;\phi_{\rm R}=0$, and the “XY coupling”, i.e. $\theta_{\text{L}}=\theta_{\text{R}}=\pi/2,\;\phi_{\rm L}=0,\;\phi_{\rm R}=\pi/2$. Since the XX and XY couplings only differ by the angle $\phi_{\alpha}$ [see Eq. (8)], both cases display identical rectification within the weak coupling regime (see Section IV.3). As we will see, this property is violated when going beyond the weak coupling regime, signaling the effect of higher order coherent quantum effects. Heat transport in the $\Delta\rightarrow 0$ limit will be studied for arbitrary spin coupling as defined in Eq. (8). This limiting case exhibits vanishing heat current in the sequential tunneling limit. Hence, thermal current and thermal rectification become solely due to higher order processes.
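The role of the angles can be made concrete with a few lines of linear algebra. In the sketch below (Python/NumPy) the coupling direction is parametrized as $\sin\theta\cos\phi\,\sigma_{x}+\sin\theta\sin\phi\,\sigma_{y}+\cos\theta\,\sigma_{z}$, a standard parametrization consistent with $\theta_{\alpha}$ being the angle to $\sigma_{z}$ (the explicit form of Eq. (8) is not reproduced here, so this should be read as an assumption). The XX and XY couplings share the same $\sin^{2}\theta_{\alpha}$, hence the same weak-coupling asymmetry $\lambda$ of Eq. (30), but the two bath-coupling operators commute in the XX case and not in the XY case, which is the seed of the coherent effects discussed below:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_coupling(theta, phi):
    # assumed parametrization: unit vector (theta, phi) contracted with the Paulis
    return (np.sin(theta) * np.cos(phi) * sx
            + np.sin(theta) * np.sin(phi) * sy
            + np.cos(theta) * sz)

# XX coupling: both baths along sigma_x; XY coupling: right bath along sigma_y
S_L = spin_coupling(np.pi / 2, 0.0)
S_R_xx = spin_coupling(np.pi / 2, 0.0)
S_R_xy = spin_coupling(np.pi / 2, np.pi / 2)

def comm(A, B):
    return A @ B - B @ A

# theta_L = theta_R = pi/2 for both couplings, so Eq. (30) gives lambda = 0
sin2 = np.sin(np.pi / 2) ** 2
lam = (sin2 - sin2) / (sin2 + sin2)

c_xx = np.linalg.norm(comm(S_L, S_R_xx))   # XX operators commute
c_xy = np.linalg.norm(comm(S_L, S_R_xy))   # [sigma_x, sigma_y] = 2i sigma_z
```

At weak coupling only $\sin^{2}\theta_{\alpha}$ enters, so the two cases are indistinguishable; the non-vanishing commutator in the XY case only becomes visible at higher order in the coupling.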
We proceed by considering first co-tunneling processes (in Section V.1), and then applying the NEGF method in Section V.2. The exact results, based on the Feynman-Vernon path-integral approach, will be introduced in Section V.3, while their impact on rectification will be discussed in Section V.4. The results for arbitrary couplings in the $\Delta\rightarrow 0$ limit are presented at the end of the Section. ### V.1 Co-tunneling processes For both the XX and XY couplings, only elastic co-tunneling processes contribute. These are processes that coherently transfer an excitation from one bath to the other (via a virtual state) without changing the state of the qubit, and thus without affecting the master equation (ME) itself. Co-tunneling processes can only be elastic because in a co-tunneling process the coupling Hamiltonian (which changes the state of the qubit) is applied twice, so the two-level system is brought back to its initial state. The heat current, including both sequential and co-tunneling processes, can be expressed as (see App. G for details of the calculation) $I(\Delta T)=I^{\text{seq}}(\Delta T)+I^{\text{cot}}(\Delta T),$ (35) where $I^{\text{seq}}(\Delta T)$ is the sequential-tunneling contribution to the heat current, given by Eq. (63), and $I^{\text{cot}}(\Delta T)=\int_{0}^{\infty}\frac{d\epsilon}{2\pi\hbar}\epsilon\;\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)\\\ \times\left|\frac{1}{\Delta+\epsilon+i\eta}\pm\frac{1}{\Delta-\epsilon+i\eta}\right|^{2}\left[n_{\text{R}}(\epsilon)-n_{\text{L}}(\epsilon)\right]$ (36) is the contribution due to co-tunneling, where $\eta$ is an infinitesimal positive quantity, and $n_{\alpha}(\epsilon)$ is the Bose-Einstein distribution of bath $\alpha$. The plus sign in Eq. (36) refers to the XX coupling, while the minus sign to the XY coupling. Eq.
(36) diverges logarithmically in the limit $\eta\to 0^{+}$, but the co-tunneling rates can be regularised as discussed extensively in the literature, see Refs. turek ; Kaasbjerg2016 ; bibek ; paolo . Assuming that the two-level system is in the ground state, the first term inside the square modulus of Eq. (36) arises from virtual transitions of an excitation from one bath to the qubit, and then from the qubit to the other bath (see App. G). The second term instead arises from the (virtual) process in which excitations are created both in one bath and in the qubit, and then by destroying an excitation in the qubit and in the other bath. The choice of the XX and XY couplings produces opposite interference effects between these two processes. If we had neglected the “counter rotating” terms in $\mathcal{H}_{\alpha,\text{S}}$ [Eq. (8)], the second term inside the square modulus would have vanished and the co-tunneling rates would have become the same in the XX and XY cases. Crucially, since in Eq. (36) the temperatures only enter through the Bose-Einstein distributions, $I^{\text{cot}}(\Delta T)$ is an anti-symmetric function, i.e. $I^{\text{cot}}(-\Delta T)=-I^{\text{cot}}(\Delta T)$. Therefore the magnitude of the co-tunneling contribution to the heat current is the same in the positive and negative bias cases. The impact of co-tunneling on rectification can be easily appreciated by plugging Eq. (35) into Eq. (14): $R=\frac{I^{\text{seq}}(\Delta T)+I^{\text{seq}}(-\Delta T)}{I^{\text{seq}}(\Delta T)-I^{\text{seq}}(-\Delta T)+2I^{\text{cot}}(\Delta T)},$ (37) where we fixed $\Delta T>0$, so that $I(\Delta T)>0$ and $I(-\Delta T)<0$. Notably, co-tunneling only appears in the denominator of Eq. (37). Defining $R^{\text{seq}}=[I^{\text{seq}}(\Delta T)+I^{\text{seq}}(-\Delta T)]/[I^{\text{seq}}(\Delta T)-I^{\text{seq}}(-\Delta T)]$, we see that if $I^{\text{cot}}(\Delta T)<0$, then $|R|>|R^{\text{seq}}|,$ (38) ($|R|<|R^{\text{seq}}|$ if $I^{\text{cot}}(\Delta T)>0$).
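A toy numerical example makes the enhancement mechanism of Eqs. (37)-(38) explicit; the current values below are illustrative numbers, not derived from a specific model:

```python
# Illustrative sequential currents for +/- bias (Delta T > 0; toy numbers)
I_seq_pos, I_seq_neg = 1.0, -0.8    # I_seq(+dT) > 0, I_seq(-dT) < 0
I_cot_pos = -0.1                    # elastic co-tunneling term, here negative
I_cot_neg = -I_cot_pos              # antisymmetry: I_cot(-dT) = -I_cot(+dT)

# Total currents and rectification directly from the definition, Eq. (14)
I_pos = I_seq_pos + I_cot_pos
I_neg = I_seq_neg + I_cot_neg
R = (I_pos + I_neg) / (I_pos - I_neg)

# The same quantity via Eq. (37), and the purely sequential coefficient
R_37 = (I_seq_pos + I_seq_neg) / (I_seq_pos - I_seq_neg + 2 * I_cot_pos)
R_seq = (I_seq_pos + I_seq_neg) / (I_seq_pos - I_seq_neg)
```

Here the antisymmetric co-tunneling term cancels in the numerator and, being negative, shrinks the denominator, so $|R|>|R^{\text{seq}}|$ as stated in Eq. (38).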
Therefore, in the presence of sequential tunneling, co-tunneling can enhance rectification despite being an elastic process which would induce no rectification on its own. Interestingly, as discussed below, the co-tunneling contribution $I^{\text{cot}}(\Delta T)$ is usually negative when sequential tunneling dominates. In the weak coupling regime (i.e. when $\hbar\Upsilon$ is the smallest energy scale), sequential tunneling dominates when $\Delta$ is of the order of $k_{B}T$. In the presence of sequential tunneling processes only, the transmission function ${\cal T}(\epsilon,T,\Delta T)$ in Eq. (16) can be thought of as an infinitely narrow function in $\epsilon$ peaked at the resonant condition. The co-tunneling contributions broaden the transmission function, as qualitatively illustrated in Fig. 2(c). As the width of the transmission function increases, the weight of ${\cal T}(\epsilon,T,\Delta T)$ moves from its peak to its tails. Therefore, where sequential processes dominate, i.e. around the peak, co-tunneling contributions decrease the heat flow. On the other hand, where sequential tunneling is suppressed, i.e. in the tails of the transmission function, co-tunneling increases the heat flow. ### V.2 Non-equilibrium Green’s function method In this section we will employ the NEGF method to compute heat currents. It is convenient to first express spin operators in a Majorana representation liu ; schad ; schad2 $\sigma_{a}=-\frac{i}{2}\sum_{bc=x,y,z}\epsilon_{abc}\,\eta_{b}\eta_{c},$ (39) where $\epsilon_{abc}$ is the Levi-Civita symbol, and $\eta_{a}$ denotes three Majorana fermion operators (they satisfy the anti-commutation relation $\\{\eta_{a},\eta_{b}\\}=0$ for $a\neq b$, $\eta_{a}^{2}=1$ and $\eta_{a}=\eta_{a}^{\dagger}$). The system and coupling Hamiltonians [see Eqs.
(4) and (8)] therefore become (up to an irrelevant additive constant) $\displaystyle\mathcal{H}_{\infty}$ $\displaystyle=-i\frac{\Delta}{2}\eta_{x}\eta_{y},$ (40) $\displaystyle\mathcal{H}_{\alpha,\text{S}}$ $\displaystyle=-\frac{i}{2}\sum_{abc}u_{\alpha,a}\epsilon_{abc}\,\eta_{b}\eta_{c}\otimes\sum_{k}V_{\alpha k}(b_{\alpha k}+b^{\dagger}_{\alpha k}),$ where the indices $a$, $b$ and $c$ run over $x$, $y$ and $z$ in the sum. The advantage is that in this representation the system Hamiltonian is quadratic, while the non-linearity is transferred to the coupling term. In the Majorana representation the system-bath coupling gives us the non-linear effects that, as we discussed, are necessary to observe rectification. Assuming that the spectral densities of the two baths are proportional, the heat current is given by Eq. (16) where $\mathcal{T}(\epsilon,T,\Delta T)$, in general, must be computed numerically. However, we are able to find an analytic expression for the transmission function by solving the Dyson equation for the Green’s functions with an expression for the self energy expanded to leading order in the coupling Hamiltonian $\mathcal{H}_{\alpha,\text{S}}$ in Eq. (40) (see App. H for details). In the XX coupling case, this method leads to $\mathcal{T}_{\text{XX}}(\epsilon,T,\Delta T)=\\\ \frac{4\,\Delta^{2}\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)}{\left(\epsilon^{2}-2\epsilon\left(\delta{\Delta}_{\text{L}}(\epsilon,T_{\text{L}})+\delta{\Delta}_{\text{R}}(\epsilon,T_{\text{R}})\right)-\Delta^{2}\right)^{2}+\xi^{2}(\epsilon)}$ (41) where $\xi(\epsilon)=\epsilon\sum_{\alpha}{\Gamma}_{\alpha}(\epsilon)[1+2n_{\alpha}(\epsilon)]$, and $\delta\Delta_{\alpha}(\epsilon,T_{\alpha})$, which describes the Lamb shift induced by bath $\alpha$, is defined in Eq. (32) with $\Upsilon_{\alpha}(\epsilon^{\prime},T_{\alpha})=\Gamma_{\alpha}(\epsilon^{\prime})\coth{[\epsilon^{\prime}/(2k_{\text{B}}T_{\alpha})]}$ [Eq. (20) with Eq. (25)] footnote:majorana . 
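To make the structure of Eq. (41) concrete, the following numerical sketch (Python, $k_{\rm B}=\hbar=1$) evaluates $\mathcal{T}_{\text{XX}}$ and the resulting current of Eq. (16) for an Ohmic spectral density with exponential cutoff; the normalization of $\Gamma_{\alpha}$ and the neglect of the Lamb-shift terms $\delta\Delta_{\alpha}$ are simplifying assumptions made here for illustration (parameters as in the bottom panel of Fig. 4):

```python
import numpy as np

kB = hbar = 1.0
Delta, epsC, T, dT = 1.0, 6.0, 1.0, 0.6
K_L, K_R = 0.06, 0.03                 # as in the bottom panel of Fig. 4

trapz = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def Gamma(eps, K):
    # Ohmic spectral density with exponential cutoff (normalization assumed)
    return 2 * np.pi * K * eps * np.exp(-eps / epsC)

def n_bose(eps, T):
    return 1.0 / np.expm1(eps / T)

def T_xx(eps, T_L, T_R):
    """Transmission of Eq. (41), with the Lamb shifts delta Delta set to zero."""
    xi = eps * (Gamma(eps, K_L) * (1 + 2 * n_bose(eps, T_L))
                + Gamma(eps, K_R) * (1 + 2 * n_bose(eps, T_R)))
    return (4 * Delta**2 * Gamma(eps, K_L) * Gamma(eps, K_R)
            / ((eps**2 - Delta**2)**2 + xi**2))

eps = np.linspace(1e-4, 12.0, 120001)

def current(T_L, T_R):                # Landauer-like form, Eq. (16)
    integrand = eps * T_xx(eps, T_L, T_R) * (n_bose(eps, T_L) - n_bose(eps, T_R))
    return trapz(integrand, eps) / (2 * np.pi)

I_pos = current(T + dT / 2, T - dT / 2)          # positive bias
I_neg = current(T - dT / 2, T + dT / 2)          # negative bias
R = (I_pos + I_neg) / (I_pos - I_neg)            # Eq. (14)

peak_pos = T_xx(Delta, T + dT / 2, T - dT / 2)   # height of the resonance
peak_neg = T_xx(Delta, T - dT / 2, T + dT / 2)
```

With $K_{\rm L}>K_{\rm R}$ the damping $\xi(\epsilon)$ is larger in the positive bias case, so the transmission peak is lower and $R<0$, reproducing qualitatively the height shift visible in Fig. 4.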
Instead, in the XY coupling case we find $\mathcal{T}_{\text{XY}}(\epsilon,T,\Delta T)=\frac{4\,\epsilon^{2}\,\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)}{\left(\epsilon^{2}-\mathcal{X}(\epsilon)-\Delta^{2}\right)^{2}+\mathcal{Y}^{2}(\epsilon)},$ (42) where $\mathcal{X}(\epsilon)=2\epsilon\left[\delta{\Delta}_{\text{L}}(\epsilon,T_{\text{L}})+\delta{\Delta}_{\text{R}}(\epsilon,T_{\text{R}})\right]+\left[1+2n_{\text{L}}(\epsilon)\right]\\\ \times\left[1+2n_{\text{R}}(\epsilon)\right]\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)-4\delta{\Delta}_{\text{L}}(\epsilon,T_{\text{L}})\delta{\Delta}_{\text{R}}(\epsilon,T_{\text{R}}),$ (43) and $\mathcal{Y}(\epsilon)=\sum_{\begin{subarray}{c}\alpha,\beta=\text{L},\text{R}\\\ \alpha\neq\beta\end{subarray}}\left[2\delta{\Delta}_{\alpha}(\epsilon,T_{\alpha})-\epsilon\right]\left[1+2n_{\beta}(\epsilon)\right]\Gamma_{\beta}(\epsilon).$ (44) As shown in App. H, this approach provides results that are more accurate than those of the ME approach, even when the latter includes co-tunneling processes, since it captures higher order processes beyond sequential and co-tunneling thanks to the implicit re-summation performed by solving the Dyson equation. Indeed, as shown in Fig. 4, the transmission function (41) includes height (top panel) and position (bottom panel) shift effects, bridging the sequential and sequential-plus-co-tunneling regimes illustrated in the previous sections. Red solid curves refer to the positive bias, while black dashed curves refer to negative bias. The vertical lines in the bottom panel are guides to the eye to highlight the Lamb shift. A discussion of the results from this formulation is deferred to Section V.4 in order to allow for a comparison between the different approaches. Figure 4: Transmission function $\mathcal{T}_{XX}(\epsilon,T,\Delta T)$ as a function of energy $\epsilon$ for $K_{L}=0.006$ and $K_{R}=0.003$ (top panel); $K_{L}=0.06$ and $K_{R}=0.03$ (bottom panel).
Red solid (black dashed) curves are for positive (negative) bias, while the vertical lines are guides for the eye. The other parameters are $\Delta=k_{\rm B}T$, $\epsilon_{\rm C}=6k_{\rm B}T$ and the temperature difference is $\Delta T=0.6T$. ### V.3 Exactly solvable case - $K_{\text{R}}+K_{\text{L}}=1/2$ By applying the Feynman-Vernon path-integral approach to the spin-boson problem weiss , rectification can be computed exactly in the XX coupling case. This case will also serve as a benchmark for approximate studies in the non-perturbative regime. The exact formal expression for the heat current Eq. (15), for generic spectral densities of the two baths having the same energy dependence, i.e. $\Gamma_{\text{L}}(\epsilon)\propto\Gamma_{\text{R}}(\epsilon)$, takes the form of Eq. (16) with $\mathcal{T}(\epsilon,T,\Delta T)$ replaced by $\mathcal{T}_{\text{XX}}^{\rm(ex)}(\epsilon,T,\Delta T)=2\frac{\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)}{\Gamma_{\text{L}}(\epsilon)+\Gamma_{\text{R}}(\epsilon)}{\rm Im}\left[\chi(\epsilon)\right]$ (45) (see Appendix H.4 for details). In Eq. (45), $\chi(\epsilon)$ is the Fourier transform of the qubit dynamical susceptibility in the presence of the two baths, $\chi(t)=(i/\hbar)\Theta(t)\langle[\sigma_{x}(t),\sigma_{x}(0)]\rangle$, given in Eq. (173). Here we focus on Ohmic spectral densities, defined as in Eq. (12). The dimensionless coupling strength $K_{\alpha}$ enters the exact expression of the dynamical susceptibility in a form which allows the path summation to be carried out analytically when $K_{\text{R}}+K_{\text{L}}=1/2$, analogously to the $K=1/2$ regime of the spin-boson model, corresponding to the Toulouse limit of the anisotropic Kondo problem weiss ; sassetti1990 , see Eq. (177).
We obtain $\chi(t)=\frac{2\Delta^{2}}{\hbar^{3}\gamma}\Theta(t)e^{-\gamma t/2}\int_{0}^{\infty}d\tau P(\tau)\\\ \times\left[e^{-\gamma|t-\tau|/2}-e^{-\gamma|t+\tau|/2}\right],$ (46) where $\gamma=\pi\Delta^{2}/(2\hbar\,\epsilon_{\rm C})$ and $P(\tau)=\prod_{\alpha}\left(\frac{\epsilon_{\rm C}}{\pi k_{\rm B}T_{\alpha}}\sinh\left(\frac{\pi|\tau|k_{\rm B}T_{\alpha}}{\hbar}\right)\right)^{-2K_{\alpha}}.$ (47) We note that $\chi(t)$ takes the same form as in the spin-boson model at $K=1/2$, with the only difference that the bath-induced (dipole or intra-blip, see App. H.4) interactions involving the two baths enter $P(\tau)$ in factorized form. When $K_{\text{L}}=K_{\text{R}}=1/4$ and $\Delta T=0$ we recover the susceptibility at the Toulouse point and the heat current trivially vanishes. In order to evaluate the rectification, the heat current (16) is more conveniently written by substituting Eqs. (45) and (46), and reads $I=\frac{1}{\hbar}\frac{K_{\text{L}}K_{\text{R}}}{K_{\text{L}}+K_{\text{R}}}{\rm Im}\int_{-\infty}^{+\infty}dt\,\chi(t)F(-t),$ (48) where $F(-t)=(k_{\rm B}T_{\text{R}})^{3}\psi^{(2)}\left(1+\frac{k_{\rm B}T_{\text{R}}}{\epsilon_{\text{C}}}\left(1-i\frac{\epsilon_{\text{C}}\,t}{\hbar}\right)\right)\\\ -(k_{\rm B}T_{\text{L}})^{3}\psi^{(2)}\left(1+\frac{k_{\rm B}T_{\text{L}}}{\epsilon_{\text{C}}}\left(1-i\frac{\epsilon_{\text{C}}\,t}{\hbar}\right)\right)$ (49) and $\psi^{(2)}(z)$ denotes the second derivative of the digamma function. The resulting exact form of the heat rectification includes all possible heat transfer processes. In the following we will evaluate it explicitly by numerical integration of the current expressed as in Eq. (48). ### V.4 Rectification coefficient In this subsection we show that the general properties and bounds derived in Section IV can be overcome in the strong-coupling regime, allowing the system to enhance rectification.
Furthermore, we will also identify the effect of higher order coherent transport processes on rectification. We will consider an Ohmic spectral density, as in Eq. (12), for both baths. Figure 5: Rectification coefficient $R$, computed with the three methods described in the legend, plotted as a function of $K_{\text{L}}$ both for the XX and XY couplings [the ME(seq) case is the same for both couplings]. The parameters are $K_{\text{R}}=0.005$, $\Delta=0.8k_{\text{B}}T$, $\epsilon_{\text{C}}=10k_{\text{B}}T$ and $\Delta T/T=8/5$. We denote by “NEGF” the calculations performed with the non-equilibrium Green’s function method described in Section V.2, by “ME(cot)” those performed with the master equation including co-tunneling described in Section V.1, and by “ME(seq)” the calculations performed in the weak coupling limit as described in Section IV. The inset shows the same points plotted as a function of $\lambda$ for $\lambda\in[0.25,1]$. We neglect the Lamb shift in this plot. Figure 6: Rectification coefficient $R$, computed with the three methods described in the legend, plotted as a function of the qubit gap $\Delta$. The parameters are $K_{\text{L}}=0.006$, $K_{\text{R}}=0.03$, $\epsilon_{\text{C}}=10k_{\text{B}}T$ and $\Delta T/T=1.9$. The horizontal magenta line shows $|\lambda|=\left|(K_{\rm L}-K_{\rm R})\big{/}(K_{\rm L}+K_{\rm R})\right|=0.67$. We neglect the Lamb shift in this plot. In Fig. 5 we plot $R$ as a function of $K_{\text{L}}$ in the XX and XY cases, comparing the NEGF calculation, the ME calculation including co-tunneling effects [ME(cot)] and the ME calculation in the weak coupling regime [ME(seq)]. The coupling constant $K_{\text{R}}$ is set to 0.005 and the temperatures are fixed. First we notice that for small values of $K_{\text{L}}$, i.e. in the weak coupling limit, all curves coincide, as expected.
As $K_{\text{L}}$ increases, the NEGF and ME(cot) curves agree well up to $K_{\text{L}}\approx 0.025$, beyond which some deviations appear. Interestingly, the rectification obtained using the NEGF and ME(cot) methods differs between the XX and XY cases, contrary to what is obtained using the ME(seq) method. Indeed, in Section IV.3 we showed that, in the weak coupling regime, rectification only depends on the angle between the qubit ($\sigma_{z}$) and the coupling term. Higher order coherent processes, instead, are able to distinguish these different couplings, as they produce different interference effects [see the $\pm$ in Eq. (36)]. Rectification is enhanced in the XX coupling case thanks to higher order processes, while it is suppressed in the XY case (we explain this behavior when discussing Fig. 6). In the inset of Fig. 5 we plot the same points as a function of the asymmetry coefficient $\lambda=(K_{\text{L}}-K_{\text{R}})/(K_{\text{L}}+K_{\text{R}})$ [see Eq. (21)]. We recall that, in the weak coupling regime, we proved that $R$ is linear in $\lambda$ (see Section IV.1). Indeed, for small values of $\lambda$, the behavior is linear. Interestingly, the behavior becomes non-linear for larger values of $\lambda$, which correspond to larger values of the coupling constant $K_{\text{L}}$. This non-linearity is yet another signature of higher order coherent processes. In Fig. 6 we plot $R$, computed with the three methods described above, as a function of $\Delta$. The values of the coupling constants have been chosen in order to show that the NEGF and ME(cot) results, while qualitatively agreeing with each other, deviate quantitatively from the ME(seq) calculation. First of all we notice that for $\Delta>2k_{\rm B}T$ all methods predict similar values of $R$.
On the other hand, for $\Delta\leq 2k_{\rm B}T$, rectification is stronger in the XX case (blue curves), while it is weaker in the XY case (green curves) as compared to the sequential tunneling result, consistently with Fig. 5. This means that coherent processes can decrease (XY case) or increase (XX case) rectification. This different behavior can be understood by recalling the discussion in Sec. V.1 and the fact that the co-tunneling contributions depend on the coupling. In particular, in the XY case $I^{\rm cot}(\Delta T)$ is positive and, according to Eq. (37), $R$ is suppressed with respect to the sequential result, while in the XX case $I^{\rm cot}(\Delta T)$ is negative and $R$ is increased. We also notice that, for small $\Delta$, in the XX case the two terms inside the square modulus of Eq. (36) tend to cancel each other, resulting in a small co-tunneling contribution, while in the XY case co-tunneling remains finite. At the same time, the heat current due to only sequential processes tends to zero as $\Delta/(k_{\text{B}}T)\to 0$ [see Eq. (63)]. These observations explain the large deviation between the ME(seq) curve and the XY case for small $\Delta$. Moreover, we notice that in the XY case, thanks to higher order processes, the NEGF and ME(cot) curves are non-monotonic with respect to $\Delta$ (as discussed in Section IV.1.1, $R$ is monotonic in $\Delta$ in the weak coupling regime). Interestingly, the value of $R$ computed using the NEGF and ME(cot) methods in the XX case shows a violation of the general weak-coupling bound of Eq. (23), i.e. we find that $|R|>|\lambda|=0.67$ (denoted with a horizontal magenta line in Fig. 6). Finally, as expected, the contribution of co-tunneling to the heat current gets smaller for decreasing $\Delta$ (corresponding to increasing temperature).
This, however, does not prevent $R$ from deviating significantly from $R^{\rm seq}$, since such deviations depend on the ratio between co-tunneling and sequential contributions to the current [see Eq. (37)]. Figure 7: Heat current and rectification (inset) as functions of the temperature bias $\Delta T$ for $\Delta=0.8k_{B}T$ and all other parameters as in Fig. 6. The color code of the curves is the same as for Figs. 5 and 6. We now study, in Fig. 7, the behavior of the currents, calculated with the three methods, and rectification (inset) as a function of the temperature bias. As far as the currents are concerned, the increase of $I$ with $\Delta T$ deviates slightly from linear behavior already for small values of the temperature bias, and all the curves share the same qualitative behavior. We also notice that the NEGF and ME(cot) methods give essentially the same results in both the XX and XY cases. In particular, the current in the XY (XX) case is larger (smaller), in the whole range of $\Delta T$ considered, with respect to the result obtained accounting for sequential processes only. On the other hand, $R$ shows a nearly linear behavior up to $\Delta T/T=1$. The curves of $R$ calculated with the two methods [NEGF and ME(cot)] show an increasing relative deviation with $\Delta T$, while the value of rectification reaches around 0.35 for the XY case and almost 0.5 for the XX case. Figure 8: Heat current (upper panel) and rectification coefficient (lower panel) plotted as functions of the qubit gap $\Delta$ in the strong coupling regime (in the XX coupling case) computed using the two methods indicated in the legend. The NEGF calculation includes the Lamb shift. The parameters are: $K_{\text{L}}=0.49$, $K_{\text{R}}=0.01$, $\epsilon_{\text{C}}=100k_{\text{B}}T$ and $\Delta T/T=1.9$. In Fig. 8 we plot the heat current (upper panel) and the rectification coefficient (lower panel) as a function of $\Delta$.
We compare the analytic results obtained in the XX case using the NEGF method including the Lamb shift (see Section V.2) with the exact calculation obtained using the Feynman-Vernon path integral approach (see Section V.3). In doing so, we are constrained to fix the coupling strengths as $K_{\text{L}}+K_{\text{R}}=1/2$. The exact and NEGF calculations for the heat current, although in qualitative agreement, give quantitatively different results. The NEGF method tends to overestimate the magnitude of the heat current for values of $\Delta/(k_{\text{B}}T)\lesssim 10$, while it underestimates the heat current for larger values of $\Delta/(k_{\text{B}}T)$. A similar trend is followed by the rectification coefficient: the NEGF calculation overestimates $R$, with respect to the exact one, when $\Delta/(k_{\text{B}}T)\lesssim 3$. For large values of $\Delta$, the rectification coefficient predicted by both methods tends to $0$. Note that the qualitative agreement was not a priori expected, considering that the NEGF method is perturbative in the coupling strength (valid, strictly speaking, only for $K_{\text{L}}$ and $K_{\text{R}}\ll 1$). In Fig. 9 we plot the heat current (top panel) and the rectification (bottom panel) obtained with the exact calculation as functions of the coupling constant $K_{\text{L}}$. Because of the constraint $K_{\text{R}}=1/2-K_{\text{L}}$, here we can explore a very asymmetric coupling condition (one bath strongly coupled and the other weakly coupled), which cannot be treated by using approximate analytical approaches and is difficult to address even numerically. In the top panel the solid curve refers to positive bias, while the dashed curve refers to negative bias, for a fixed value of $\Delta=0.01k_{B}T$. Notice that the two curves are symmetric with respect to the point $K_{\text{L}}=1/4$. Both currents vanish for $K_{\text{L}}=0$ and $K_{\text{L}}=1/2$, since no current can flow when one of the two coupling strengths is zero.
The maximum occurs at around $K_{\text{L}}=0.15$ for the current at positive bias. In the bottom panel the two solid curves of $R$ correspond to different values of $\Delta$. It is worth stressing that they differ only slightly even though $\Delta$ spans two orders of magnitude. This is peculiar to the asymmetric coupling condition, with one bath strongly coupled, and significantly differs from the behavior observed for smaller couplings reported in Fig. 6. The rectification vanishes when the coupling strengths are equal, $K_{\text{L}}=1/4$, and it is maximum for $K_{\text{L}}=0$ or 1/2, as expected from the fact that this is the most asymmetric situation. We observe that the rectification fulfills the bound $|R|<|\lambda|$ [Eq. (23)] derived in the weak coupling regime, as shown by the green dashed curve which represents $(-\lambda)$ as a function of $K_{\text{L}}$. The dependence on the (average) temperature of the currents and rectification is analyzed in Fig. 10 under asymmetric coupling conditions ($K_{L}=0.49$, $K_{R}=0.01$) for fixed values of $\Delta T$ and $\Delta$. As shown in the top panel, for $k_{B}T\gtrsim 0.05\epsilon_{\text{C}}$ both currents (obtained with the exact calculation) decrease, as one can expect from the fact that the ratio $\Delta T/T$ decreases. On the other hand, for the smallest temperatures, with $k_{B}T\lesssim\Delta$, both currents show an increase (the current for positive bias being maximum when $k_{B}T\approx\Delta$). The (absolute value of) rectification, instead, decreases monotonically with $T$, taking its maximum value when the weakly coupled bath (R) is at zero temperature ($T_{R}=0$, i.e. $T=\Delta T/2$). This can be explained by the fact that the ratio $\Delta T/T$ is maximum in this situation, thus maximizing the asymmetry between the two baths. For increasing $T$, $R$ decreases, tending to zero.
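As a rough numerical sketch of how the exact results above can be reproduced (not the paper's actual code), the susceptibility $\chi(t)$ of Eqs. (46)-(47) can be evaluated by quadrature. Units $\hbar=k_{\text{B}}=1$; a log-spaced $\tau$ grid handles the integrable $\sim 1/\tau$ short-time behavior of $P(\tau)$ at $K_{\text{L}}+K_{\text{R}}=1/2$, and $P(\tau)$ is evaluated in log form to avoid overflow of $\sinh$. The parameter values in the test are illustrative assumptions modeled on Fig. 9 ($\Delta/\epsilon_{\text{C}}=0.005$, $\Delta T/T=1.9$).

```python
import numpy as np

def log_sinh(x):
    # Numerically safe log(sinh(x)) for x > 0: sinh(x) = e^x (1 - e^(-2x)) / 2
    return x + np.log1p(-np.exp(-2.0 * x)) - np.log(2.0)

def chi_exact(t, Delta, K_L, K_R, T_L, T_R, eps_c):
    # Dynamical susceptibility of Eq. (46) at K_L + K_R = 1/2 (hbar = kB = 1)
    if t <= 0:
        return 0.0                                   # Theta(t) prefactor
    gamma = np.pi * Delta**2 / (2.0 * eps_c)         # relaxation rate below Eq. (46)
    tau = np.logspace(-8, 2, 4000)                   # P(tau) decays long before tau ~ 100
    # log of the factorized intra-blip kernel P(tau), Eq. (47)
    log_P = sum(-2.0 * K * (np.log(eps_c / (np.pi * T)) + log_sinh(np.pi * tau * T))
                for K, T in ((K_L, T_L), (K_R, T_R)))
    bracket = (np.exp(-gamma * np.abs(t - tau) / 2.0)
               - np.exp(-gamma * (t + tau) / 2.0))   # square bracket in Eq. (46)
    f = np.exp(log_P) * bracket
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))  # trapezoid rule
    return 2.0 * Delta**2 / gamma * np.exp(-gamma * t / 2.0) * integral
```

The current Eq. (48) then follows by integrating $\chi(t)F(-t)$ over $t$, where $F(-t)$ of Eq. (49) requires the polygamma function $\psi^{(2)}$ at complex argument (available, e.g., via arbitrary-precision libraries).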
Figure 9: Heat current and rectification, obtained with the exact calculation, as a function of $K_{\text{L}}$ setting $K_{\text{R}}=1/2-K_{\text{L}}$. The parameters not shown on the figure are $\Delta/\epsilon_{\text{C}}=0.005$ and $\Delta T/T=1.9$. Figure 10: Plot of the heat currents (top panel) and rectification (bottom panel) as functions of $k_{B}T$, obtained with the exact calculation. The inset in the bottom panel shows a zoom of the curves in a range of small temperatures (from $k_{B}T=k_{B}\Delta T/2=0.005\epsilon_{\text{C}}$). Rectification is clearly maximized at low temperatures. The parameters are: $K_{\text{L}}=0.49$, $K_{\text{R}}=0.01$, $\Delta=0.05\epsilon_{\text{C}}$ and $k_{B}\Delta T=0.01\epsilon_{\text{C}}$. Analogously to what we did in the sequential tunneling limit (Section IV.3), it is also interesting here to consider the case of a generic coupling. We focus on the $\Delta/(k_{\text{B}}T)\to 0$ limit where heat transport is determined by higher order coherent processes (the sequential tunneling component being vanishingly small, as discussed above). We therefore consider the coupling Hamiltonian, given in Eq. (8), with an arbitrary coupling to the left bath, i.e. $\theta_{\text{L}}=\theta$ and $\phi_{\text{L}}=\phi$, but with fixed $\sigma_{x}$ coupling to the right lead, i.e. $\theta_{\text{R}}=\pi/2$ and $\phi_{\text{R}}=0$. The XX and XY cases can be recovered, respectively, by setting $\theta=\pi/2$ and $\phi=0$, or $\theta=\pi/2$ and $\phi=\pi/2$. Notice that, by considering a coupling with $\theta\neq\pi/2$, we are also including a $\sigma_{z}$ coupling to the left lead. We recall that, in the weak coupling regime, the $\sigma_{z}$ coupling does not contribute to the heat current. In order to isolate the impact on rectification of different spin couplings, we consider the case of identical spectral densities for the two baths, i.e. $\Gamma_{\text{L}}(\epsilon)=\Gamma_{\text{R}}(\epsilon)$.
Therefore, the only asymmetry in the coupling, which can give rise to rectification, is given by the different directions described by $\vec{u}_{\text{L}}$ and $\vec{u}_{\text{R}}$. Figure 11: Contour plot of $I(\Delta T)$, calculated with the NEGF approach, as a function of $\theta$ and $\phi$. The parameters are: $\epsilon_{\text{C}}=80k_{\text{B}}T$, $K_{\text{L}}=K_{\text{R}}=0.06$, $\Delta=0$ and $\Delta T/T=1.9$. In Fig. 11 we show a contour plot of the heat current $I(\Delta T)$, at fixed temperatures and for equal Ohmic spectral densities [i.e. $K_{\text{L}}=K_{\text{R}}$, see Eq. (12)], as a function of the two angles $\theta$ and $\phi$ in the small gap limit, i.e. for $\Delta/(k_{\text{B}}T)\to 0$. For simplicity, we neglected the Lamb shift. Strikingly, the heat current is maximum when the left lead is coupled through $\sigma_{z}$, i.e. for $\theta=0$ (lower part of Fig. 11). This is surprising for two reasons: first, in the weak coupling limit the heat current at $\theta=0$ would be null even for finite values of $\Delta$, since $\sigma_{z}$ does not contribute to the heat currents. Second, regardless of the coupling strength, a single bath coupled to S through $\sigma_{z}$ cannot transfer heat to the system, since the Hamiltonian of S would commute with the total Hamiltonian (and thus it would be a conserved quantity). In this case, the $\sigma_{z}$ coupling would only produce dephasing in the qubit state. We can therefore qualitatively describe transport in this regime as a direct transfer of heat from one bath to the other. As $\theta$ increases, and therefore as the $\sigma_{z}$ component decreases, the heat current decreases monotonically, to the point that it is null in the XX case ($\phi=0$), while it remains constant in the XY case (along $\phi=\pi/2$). Figure 12: Contour plot of $R$ as a function of $\theta$ and $\phi$. The parameters are the same as in Fig. 11.
Interestingly, the rectification coefficient also roughly follows a similar trend, i.e. it is maximum where the heat current is also maximum. This can be seen in Fig. 12, where $R$ is contour-plotted as a function of $\theta$ and $\phi$ for the same parameters as in Fig. 11. Indeed, $R$ is maximum for $\theta=0$, i.e. when the left lead is coupled only through $\sigma_{z}$. As $\theta$ increases, the modulus of $R$ decreases monotonically along $\phi=0$, just as the heat current itself. However, it remains constant along $\phi=\pi/2$, while for intermediate values of $\phi$ it displays a non-monotonic behavior. We can therefore conclude that the optimal operational points in the $\Delta/(k_{\text{B}}T)\to 0$ limit are the XY and the XZ coupling cases. These couplings simultaneously maximize the magnitude of the heat current and of the rectification coefficient. We emphasize that the heat current, which in this limit is solely due to coherent quantum processes, behaves in the opposite way with respect to what would be expected from weak coupling calculations (the heat currents should be zero both because $\Delta=0$ and because $\sigma_{z}$ does not contribute to the heat current). ## VI Rectification at finite $U$ How does the picture described so far change when several levels come into play in the rectification process? In this section we will study a non-linear resonator defined by the Hamiltonian (3) at finite $U$, coupled to bosonic leads by the Hamiltonian (9). For the sake of convenience, in our numerical procedures we use the Ohmic spectral density in Eq. (12) in a form with a sharp cut-off, namely $\Gamma_{\alpha}(\epsilon)=\pi K_{\alpha}\epsilon\;\theta(\epsilon)\theta(\epsilon_{\rm C}-\epsilon).$ (50) The most important effects are expected in the non-perturbative regime.
To this end we will employ the Keldysh non-equilibrium Green’s function technique, with the retarded Green’s function for the system defined as $G_{b;b}^{r}(t,t^{\prime})=-i\theta(t-t^{\prime})\left\langle\left[b(t),b^{\dagger}(t^{\prime})\right]\right\rangle$. Following Ref. meir1991, we use the equation of motion (EOM) decoupled to the second order (see App. I for details) to obtain the following analytic expression for the retarded Green’s function $G_{b;b}^{r}(\epsilon)=\frac{1+2A(\epsilon)\left\langle n\right\rangle}{\epsilon-\Delta-\Sigma^{(0)}(\epsilon)+2A(\epsilon)\left(\Sigma^{(2)}(\epsilon)+\Sigma^{(3)}(\epsilon)\right)},$ (51) where $n(t)=b^{\dagger}(t)b(t)$ is the number operator, $\left\langle n\right\rangle$ is the expectation value of the occupation, given by $\left\langle n\right\rangle=\sum_{\alpha}\int\frac{d\epsilon}{2\pi}G_{b;b}^{r}(\epsilon)n_{\alpha}(\epsilon)\Gamma_{\alpha}(\epsilon)G_{b;b}^{r^{*}}(\epsilon),$ (52) and $A(\epsilon)/U=\left(\epsilon-\Delta-U\left\langle n\right\rangle-\left(2\Sigma^{(0)}(\epsilon)+\Sigma^{(1)}(\epsilon)\right)\right)^{-1},$ (53) with $n_{\alpha}$ the Bose-Einstein distribution of bath $\alpha$. In order to calculate the Green’s function $G^{r}_{b;b}(\epsilon)$, one has to solve Eqs. (51) and (52) self-consistently. In Eq.
(51), the embedded self-energy is given by $\Sigma^{(0)}(\epsilon)=\sum_{\alpha}\int\frac{d\omega}{2\pi}\left[\frac{\Gamma_{\alpha}(\omega)}{\epsilon-\omega+i\eta}\right],$ (54) while the other self-energies are given by $\displaystyle\Sigma^{(1)}(\epsilon)$ $\displaystyle=$ $\displaystyle\sum_{\alpha}\int\frac{d\omega}{2\pi}\left[\frac{\Gamma_{\alpha}(\omega)}{\epsilon+\omega-2\Delta-2U\left\langle n\right\rangle-U+i\eta}\right],$ $\displaystyle\Sigma^{(2)}(\epsilon)$ $\displaystyle=$ $\displaystyle\sum_{\alpha}\int\frac{d\omega}{2\pi}\left[\frac{n_{\alpha}(\omega)\Gamma_{\alpha}(\omega)}{\epsilon-\omega+i\eta}\right],$ $\displaystyle\Sigma^{(3)}(\epsilon)$ $\displaystyle=$ $\displaystyle\sum_{\alpha}\int\frac{d\omega}{2\pi}\left[\frac{\Gamma_{\alpha}(\omega)n_{\alpha}(\omega)}{\epsilon+\omega-2\Delta-2U\left\langle n\right\rangle-U+i\eta}\right].$ The inclusion of the self-energies defined by the expressions above ensures that the onsite correlations are correctly captured meir1991 . The advantage of this approach is that it describes both weak and strong coupling regimes and keeps the processes involving virtual states of the system meir1993 . It is worth stressing that in deriving Eq. (51), we have neglected terms involving correlations in the baths by setting $\left\langle\left[b^{\dagger}(t)b_{\alpha k}(t)b_{\alpha k}(t),b^{\dagger}(t^{\prime})\right]\right\rangle=0$ and $\left\langle b^{\dagger}b_{\alpha k}\right\rangle=\left\langle bb_{\alpha k}^{\dagger}\right\rangle=0$. The contributions from these terms become significant for very strong coupling meir1991 . The lesser and greater Green’s functions used to calculate the heat currents [see Eq.
(15)] are obtained from the following relation $G_{b;b}^{\lessgtr}(\epsilon)=G_{b;b}^{r}(\epsilon)\Sigma^{(0)\lessgtr}(\epsilon)G_{b;b}^{r^{*}}(\epsilon).$ (55) The mean field (MF) approximation, on the other hand, is obtained by decoupling the EOM for the retarded Green’s function to the first order, so that the latter takes the simple form $G_{b;b}^{r,MF}(\epsilon)=\left[\epsilon-\Delta-U\langle n\rangle-\Sigma^{(0)}(\epsilon)\right]^{-1}.$ (56) This expression makes clear that the MF approximation renormalizes the energy of the resonator, while leaving the self-energy unchanged compared to the non-interacting case. This means that higher order onsite correlations are not taken into account. The MF approximation was employed in Ref. ruokola2009 to calculate the rectification in nonlinear quantum circuits. Considering an average thermal energy $k_{\rm B}T$ of the order of $\Delta$, we checked that the onsite correlation effects become significant when $U$ is of the order of the spectral density, i.e. $U\approx\pi K_{\alpha}\Delta$. For $U\ll\pi K_{\alpha}\Delta$, the MF approximation and the EOM give similar results ruokola2009 . In the absence of the non-linearity, Eqs. (51) and (56) reduce to the same expression for the exact Green’s function of a non-interacting harmonic resonator, which yields no rectification. Figure 13: Occupation as a function of $U$ for the following parameters $K_{\rm L}=0.06$, $K_{\rm R}=0.003$, $\epsilon_{C}=100U$, $\Delta=5k_{B}T$, and $\Delta T/T=0.4$. The blue curve is calculated using the EOM, while the red curve refers to the MF approximation. The thin magenta horizontal line is the occupation of the excited level for a qubit with the same parameters, calculated accounting for sequential tunneling processes only. In Fig. 13 we plot the occupation $\langle n\rangle$, calculated using the EOM (blue dashed curve) and in the MF approximation (red curve), as a function of the non-linearity parameter $U$.
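The self-consistent loop behind such calculations can be sketched as follows, here in its simpler mean-field form, iterating Eqs. (52) and (56) for the occupation $\langle n\rangle$ (the second-order EOM version adds the self-energies $\Sigma^{(1,2,3)}$ but has the same loop structure). This is a rough sketch, not the paper's code: units $\hbar=k_{\text{B}}=1$, the sharp-cutoff Ohmic spectral density of Eq. (50), a conventional $1/(2\pi)$ normalization assumed for $\Sigma^{(0)}$ (matching Eqs. (52) and the other self-energies), and all grid sizes and parameter values are illustrative assumptions.

```python
import numpy as np

def gamma_sharp(eps, K, eps_c):
    # Ohmic spectral density with sharp cutoff, Eq. (50)
    return np.pi * K * eps * (eps > 0) * (eps < eps_c)

def sigma0(eps_grid, K_L, K_R, eps_c, eta=0.02):
    # Embedded self-energy sum_a int dw/(2pi) Gamma_a(w)/(eps - w + i*eta),
    # by direct quadrature; eta must exceed the w-grid spacing to be resolved.
    w = np.linspace(1e-4, eps_c, 4000)
    dw = w[1] - w[0]
    G = gamma_sharp(w, K_L, eps_c) + gamma_sharp(w, K_R, eps_c)
    return np.array([np.sum(G / (e - w + 1j * eta)) * dw / (2 * np.pi)
                     for e in eps_grid])

def mf_occupation(Delta, U, K_L, K_R, T_L, T_R, eps_c, n_iter=200, mix=0.5):
    # Damped fixed-point iteration of Eqs. (52) and (56) for <n>
    eps = np.linspace(1e-3, eps_c, 4000)
    deps = eps[1] - eps[0]
    s0 = sigma0(eps, K_L, K_R, eps_c)
    weight = (gamma_sharp(eps, K_L, eps_c) / np.expm1(eps / T_L)
              + gamma_sharp(eps, K_R, eps_c) / np.expm1(eps / T_R))
    occ = 0.0
    for _ in range(n_iter):
        Gr = 1.0 / (eps - Delta - U * occ - s0)                       # Eq. (56)
        new = np.sum(np.abs(Gr) ** 2 * weight) * deps / (2 * np.pi)   # Eq. (52)
        occ = (1 - mix) * occ + mix * new                             # damped update
    return occ
```

At $U=0$ the resonance sits at the bare $\Delta$; increasing $U$ pushes the renormalized level $\Delta+U\langle n\rangle$ upward, which lowers the thermal weight there and hence the converged occupation, consistent with the monotonic decrease of $\langle n\rangle$ with $U$ discussed below.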
For small values of $U$ the two curves are close to each other, but they start to depart significantly from $U\simeq 20k_{B}T$ onward. We also checked that the MF and EOM results nearly coincide for $U<0.1k_{B}T$ and $\Delta\approx k_{B}T$. The two curves of $\langle n\rangle$ are monotonically decreasing, as expected from the fact that with increasing $U$ the number of levels available in the resonator for transport decreases, and so does the expectation value of the occupation. For very large $U$ one reaches the situation corresponding to a qubit. Indeed, the EOM curve approaches the magenta line, which represents the occupation of the excited level for a qubit, characterized by the same parameters, calculated accounting for sequential tunneling processes only. Figure 14: Heat current calculated with three different methods as a function of resonator energy $\Delta$ for $\Delta T/T=8/5$, $\epsilon_{C}=100U$, $K_{\rm L}=0.06$ and $K_{\rm R}=0.003$. The three panels refer to different values of $U$. In Fig. 14, we plot the heat current as a function of the resonator energy $\Delta$ for a few values of the non-linearity parameter $U$. We fix the coupling strength to be within the weak coupling regime, i.e. $K_{\rm L}=0.06$ and $K_{\rm R}=0.003$. The heat current is calculated using both methods: i) the MF approximation (red curve), whose Green’s function is Eq. (56), and ii) the EOM method to second order (dashed blue curve), whose Green’s function is Eq. (51). In addition, as a reference we include the heat current relative to the qubit (i.e. $U\to\infty$) calculated using the master equation formulation taking into account the co-tunneling contributions (dotted green curve). Fig. 14 shows that, for all values of $U$ considered, the three curves coincide for $\Delta\geq 5k_{\rm B}T$.
This means that for $\Delta\geq 5k_{\rm B}T$ the system behaves as a qubit, whereas for $\Delta\leq 5k_{\rm B}T$ the non-linear resonator acts as a multi-level quantum system. At $U=10k_{B}T$ (middle panel) the ME curve departs from the other two at a slightly smaller value of $\Delta$ than in the case $U=k_{B}T$. For $U=k_{B}T$, the EOM and MF curves start deviating for $\Delta\leq 4k_{\rm B}T$, and the EOM method predicts a much higher current. Both MF and EOM curves display a maximum for small values of $\Delta$. When $U$ reaches $10k_{B}T$ (middle panel) the heat current calculated with the EOM reduces by roughly a half, agreeing more closely with the MF result (which, however, does not display a maximum). Nearly no changes are observed for the MF curve upon increasing $U$ up to $20k_{B}T$, while the heat current predicted by the EOM method gets further reduced. We note that the heat current obtained for the qubit case (green curve) gets vanishingly small for $\Delta=0$, whereas the MF and EOM calculations predict a finite heat current. Interestingly, for larger values of $U$ (lower panel), the heat current computed with the EOM method approaches the ME(cot) curve even for small values of $\Delta$, as one would expect. Figure 15: Rectification as a function of the resonator energy $\Delta$ (top panel) for $U=k_{B}T$ and of the non-linear parameter $U$ (bottom panel) for $\Delta=k_{B}T$ (solid curve) and $\Delta=2k_{B}T$ (dashed curve). Other parameters are the same as in Fig. 14. In Fig. 15 we plot the rectification coefficient $R$ (calculated using both EOM and MF methods) as a function of the resonator energy $\Delta$ (top panel) for fixed thermal bias and coupling strengths, and of the non-linearity parameter $U$, for two values of $\Delta$ (bottom panel).
The top panel shows that the two methods agree for $\Delta>4.5k_{B}T$, while for smaller values of $\Delta$ the EOM always predicts larger rectification than the MF method, which displays a maximum at $\Delta\simeq k_{B}T$. The largest value of rectification found is around 40%. In the bottom panel, we first notice that the two methods do not agree over the whole range of values of $U$ considered, since we have chosen small resonator energies ($\Delta=k_{B}T$ and $\Delta=2k_{B}T$). The EOM method, for both values of $\Delta$, predicts a larger rectification with respect to the MF method. Furthermore, the EOM curve presents a maximum for $U$ of order $k_{B}T$, after which $R$ steadily decreases with increasing $U$. The MF curves, on the other hand, are also non-monotonic in $U$, but with a very broad maximum. ## VII Conclusions In this paper we have presented a systematic theoretical study of thermal rectification in a paradigmatic multi-level quantum system, namely a non-linear harmonic resonator, coupled to two thermal baths kept at different temperatures $T_{\rm H}$ and $T_{\rm C}$, corresponding to a thermal bias $\Delta T=T_{\rm H}-T_{\rm C}$. Thermal rectification, consisting of an asymmetric heat conduction under reversal of the thermal bias, is possible in the presence of an asymmetry in the coupling between system and baths and of interactions. From a general perspective, both in the fermionic and bosonic cases, our aim was to explore under which conditions thermal rectification can be induced and maximized in such a quantum system, to find fundamental bounds to the maximum rectification, and to assess the impact of coherent processes on rectification. To this end, we have first focused on the case where the strength of the non-linear term $U$ in the Hamiltonian [Eq. (3)] is so large that the system behaves as a qubit (a two-level system with energy gap $\Delta$).
In this case we have considered different transport regimes, depending on the strength and on the nature of the coupling between system and baths, and have used different theoretical approaches. To the lowest order in the coupling, we have employed a master equation to calculate the heat current and we have determined, in different conditions, the behavior and fundamental bounds of the rectification coefficient $R$ in terms of the temperatures $T_{\rm H}$ and $T_{\rm C}$ and of the asymmetry in the system-bath couplings ($|R|\leq 1$ and $R=0$ for no rectification). In particular, under the only assumption that the dissipation rates of the two baths, as a function of temperature, are equal up to a gap-dependent prefactor, we have found that: * • the rectification ratio $R$ is monotonic with respect to $\Delta T$, if the dissipation rates are monotonic in temperature (for example when the baths are bosonic); * • $R$ is a linear function of $\lambda$; * • the modulus of $R$ is upper bounded by $|\lambda|$ ($|R|\leq|\lambda|$); * • $R$ is larger when the temperature dependence of the dissipation rates is stronger. Here $\lambda$ is the asymmetry between the spectral densities of the two baths. In particular, for the case of bosonic baths we have found that $R$ is a decreasing function of the energy spacing $\Delta$ and, in the limit of small $\Delta$, * • $|R|$ is upper bounded by $\lambda(T_{\rm H}-T_{\rm C})/(T_{\rm H}+T_{\rm C})$ in the case of linear coupling; * • $|R|$ is upper bounded by $\lambda(T^{2}_{\rm H}-T^{2}_{\rm C})/(T^{2}_{\rm H}+T^{2}_{\rm C})$ in the case of non-linear coupling. On the other hand, we have found that when the dissipation rates of the two baths are arbitrary the rectification can be stronger. For example, in the case of bosonic baths and assuming the qubit to be linearly coupled to the left bath and non-linearly coupled to the right bath, we have found that $R=(T_{\rm H}-T_{\rm C})/(T_{\rm H}+T_{\rm C})$ in the limit of small gap.
This means that rectification can be made arbitrarily large simply by choosing a large temperature difference, regardless of $\lambda$. We have then considered the case of arbitrary spin operators involved in the coupling Hamiltonian Eq. (5) to the left ${\cal H}_{\rm L,S}$ and to the right ${\cal H}_{\rm R,S}$. In particular, with XX coupling we specify that both left and right baths are coupled to the system through $\sigma_{x}$, while with XY coupling we specify that the left bath is coupled through $\sigma_{x}$ and the right bath through $\sigma_{y}$. We have found that $R$ only depends on the angle between the coupling term and the qubit Hamiltonian (proportional to $\sigma_{z}$). Finally, we have assessed how the Lamb shift can give rise to an enhancement of the rectification when the baths have a gap in their density of states. Next, we have investigated what happens beyond the weak-coupling regime by making use of three different techniques which allow us to describe increasingly strong coupling between system and baths and the effect of quantum coherence. Namely, first we have included co-tunneling effects in the master equation approach, then we have used a perturbative approach based on non-equilibrium Green's function theory and, finally, we have performed an exact calculation employing the Feynman-Vernon path integral approach, which accounts for general spectral densities and coupling conditions. All these approaches allow us to conclude that the rectification can be enhanced by going beyond the weak-coupling regime, even violating the bounds found in the first-order coupling regime.
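The weak-coupling statement above, that $R$ depends on the spin-coupling directions only through the angles $\theta_\alpha$, can be illustrated with a short numerical sketch. It assumes the rates of Eq. (88) with a common factor $h(\Delta,T)$ shared by both baths, the current of Eq. (63), $k_B=1$, and the convention $R=(I(\Delta T)+I(-\Delta T))/(I(\Delta T)-I(-\Delta T))$: rescaling both couplings by a common factor leaves $R$ unchanged, so only the angular asymmetry matters.

```python
# R at weak coupling with Upsilon_a = sin^2(theta_a) * shared factor:
# invariant under a common rescaling of both couplings (illustrative sketch).
import math

def fermi(x):
    return 1.0 / (1.0 + math.exp(x))

def shared_factor(delta, temp):
    # any common g(Delta, T) works; a coth-like choice is used for definiteness
    return 1.0 / math.tanh(delta / (2.0 * temp))

def current(delta, t_l, t_r, s2_l, s2_r):
    up_l = s2_l * shared_factor(delta, t_l)
    up_r = s2_r * shared_factor(delta, t_r)
    return delta * up_l * up_r / (up_l + up_r) * (
        fermi(delta / t_l) - fermi(delta / t_r))

def rectification(delta, t_mean, dt, s2_l, s2_r):
    i_fwd = current(delta, t_mean + dt / 2, t_mean - dt / 2, s2_l, s2_r)
    i_rev = current(delta, t_mean - dt / 2, t_mean + dt / 2, s2_l, s2_r)
    return (i_fwd + i_rev) / (i_fwd - i_rev)

s2_l, s2_r = math.sin(0.6) ** 2, math.sin(0.3) ** 2
r_1 = rectification(0.5, 1.0, 1.0, s2_l, s2_r)
r_2 = rectification(0.5, 1.0, 1.0, 0.37 * s2_l, 0.37 * s2_r)  # common rescaling
assert abs(r_1 - r_2) < 1e-12 and abs(r_1) > 0.0
```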
In particular, we have found that: * • co-tunneling processes enhance rectification when heat transport is dominated by sequential tunneling; * • $R$ is in general non-monotonic with the coupling strength and depends on the spin operators involved in the coupling Hamiltonian; * • $R$ increases with increasing coupling strength in the XX coupling case; * • $R$ increases with $\lambda$ faster than linearly in the XX coupling case; * • $R$ increases with decreasing $\Delta$ in the XX coupling case, and is non-monotonic for the XY coupling; * • the heat currents calculated with the non-equilibrium Green's function method and with the exact method are qualitatively in agreement even for large system-bath couplings; * • in the limit of small $\Delta$, where the coherent (higher-order) contributions to the heat current dominate, heat currents and $R$ are maximized in the XY and the XZ coupling cases. Finally, we have considered the case in which the non-linear term $U$ is finite. Employing the Keldysh non-equilibrium Green's function technique, we have calculated the heat current using the equation of motion (EOM) decoupled to the second order and in the mean field approximation. We have discussed the behavior of the heat current and of the rectification as functions of the resonator energy and of $U$. We have found that the rectification, using the EOM approach, is non-monotonic in both $\Delta$ and $U$ and reaches a maximum of 40%, much larger than what is predicted by the mean field approximation. The paper is enriched with a number of appendices which contain the details of the calculations described in the main text. To conclude, we believe that our results can be very valuable for the design and interpretation of experiments on thermal rectifiers based on qubits and non-linear harmonic resonators. ## VIII Acknowledgements We thank Jukka Pekola for many stimulating discussions and for his comments on the draft. R.F.
research has been conducted within the framework of the Trieste Institute for Theoretical Quantum Technologies (TQT). E.P. acknowledges the hospitality of ICTP, where part of this work has been carried out. We acknowledge support from the SNS-WIS joint lab QUANTRA, and E.P. acknowledges support by the University of Catania, Piano di Incentivi per la Ricerca di Ateneo 2020/2022, proposal Q-ICT, and by the CNR-QuantERA grant SiUCs. ## Appendix A Most Generic System-Bath Coupling in the qubit case In this appendix we prove that the system-bath interaction described by Eq. (5) is indeed the most generic system-bath interaction in the qubit case. The most generic Hermitian operator acting on the tensor product space between S (a two-dimensional Hilbert space) and the baths (an arbitrary dimensional Hilbert space) can be expanded on the product basis of the two Hilbert spaces. We therefore consider a basis $\\{\mathcal{B}_{i}\\}_{i}$ of Hermitian operators acting on the space of the bath, and the specific basis $\vec{\sigma}_{j}\equiv\\{\mathbb{1},\sigma_{x},\sigma_{y},\sigma_{z}\\}$ of Hermitian operators acting on the qubit space. This yields $\mathcal{H}_{\alpha,\text{S}}=\sum_{i,j}a_{ij}\,\mathcal{B}_{i}\otimes\sigma_{j}=\sum_{j}B_{j}\otimes\sigma_{j},$ (57) where $B_{j}=\sum_{i}a_{ij}\mathcal{B}_{i}$ is a Hermitian operator acting on the bath space. Using the relations $\displaystyle\sigma_{x}$ $\displaystyle=\sigma^{+}+\sigma^{-}$ (58) $\displaystyle\sigma_{y}$ $\displaystyle=i\sigma^{+}-i\sigma^{-},$ we obtain Eq. (5), where $B_{\alpha}=B_{x}+iB_{y}$. ## Appendix B Rectification in the weak coupling regime and qubit case We now compute the heat current flowing out of the leads in the weak coupling regime, valid when $\mathcal{H}_{\alpha,\text{S}}$ is “small enough” breuer2002 . Under these conditions the evolution of the reduced density matrix $\rho_{\text{S}}$ of the qubit obeys a Lindblad master equation. Furthermore, when the qubit is not degenerate (i.e.
when $\Delta\neq 0$), the Lindblad master equation can be cast in the form of a rate equation for the occupation probabilities of the qubit, defined by $P_{1}=\Tr{\rho_{\text{S}}\sigma^{+}\sigma^{-}}$ and $P_{0}=1-P_{1}$. Only the terms in $\mathcal{H}_{\alpha,\text{S}}$ proportional to $\sigma^{+}$ and $\sigma^{-}$ contribute to the rate equation. Indeed, rewriting $\mathcal{H}_{\alpha,\text{S}}$ as in Eq. (57), the rate equation only depends on the following matrix elements of the $\sigma_{j}$ operators breuer2002 : $\matrixelement*{0}{\sigma_{j}}{1},$ (59) where $\\{\ket{0},\ket{1}\\}$ are the eigenstates of the qubit. Since $\matrixelement*{0}{\sigma_{j}}{1}=0$ for $\sigma_{j}=\mathbb{1},\sigma_{z}$, the only terms that determine the populations are the ones proportional to $\sigma_{x}$ and $\sigma_{y}$, and therefore to $\sigma^{+}$ and $\sigma^{-}$. Neglecting for the moment the Lamb shift, the probabilities satisfy breuer2002 $\frac{\partial}{\partial t}\begin{pmatrix}P_{0}\\\ P_{1}\end{pmatrix}=\begin{pmatrix}-\Upsilon^{+}(\Delta)&\Upsilon^{-}(\Delta)\\\ \Upsilon^{+}(\Delta)&-\Upsilon^{-}(\Delta)\end{pmatrix}\begin{pmatrix}P_{0}\\\ P_{1}\end{pmatrix},$ (60) where $\Upsilon^{\pm}(\Delta)=\Upsilon^{\pm}_{\text{L}}(\Delta,T_{\text{L}})+\Upsilon^{\pm}_{\text{R}}(\Delta,T_{\text{R}})$, and $\Upsilon^{\pm}_{\alpha}(\Delta,T)$, for $\alpha=\text{L},\text{R}$, are derived in App. C. Using Eq. (60) and $P_{0}+P_{1}=1$, we can find the steady-state populations $\displaystyle P_{0}$ $\displaystyle=\frac{\Upsilon^{-}(\Delta)}{\Upsilon^{-}(\Delta)+\Upsilon^{+}(\Delta)},\quad$ $\displaystyle P_{1}=\frac{\Upsilon^{+}(\Delta)}{\Upsilon^{-}(\Delta)+\Upsilon^{+}(\Delta)}.$ (61) The heat current flowing out of bath $\alpha$ at temperature $T_{\alpha}$ [see Eq.
(13)] can then be computed as $I_{\alpha}(T_{\alpha})=\Delta\left(P_{0}\Upsilon^{+}_{\alpha}(\Delta,T_{\alpha})-P_{1}\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})\right).$ (62) Obviously, the steady-state heat current and, as a consequence, the rectification ratio within the weak-coupling regime only arise from the terms proportional to $\sigma^{+}$ and $\sigma^{-}$. Since $\Upsilon^{+}_{\alpha}(\Delta,T_{\alpha})$ and $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})$ are related by the detailed balance equation, $\Upsilon^{+}_{\alpha}(\Delta,T_{\alpha})=e^{-\Delta/(k_{B}T_{\alpha})}\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})$, we can express them as $\Upsilon_{\alpha}^{\pm}(\Delta,T_{\alpha})=\Upsilon_{\alpha}(\Delta,T_{\alpha})f(\pm\Delta/(k_{\text{B}}T_{\alpha}))$, where $f(x)=(1+e^{x})^{-1}$. Using Eqs. (61) and (62), the heat current is given by $I(\Delta T)=\Delta\frac{\Upsilon_{\text{L}}(\Delta,T_{\text{L}})\Upsilon_{\text{R}}(\Delta,T_{\text{R}})}{\Upsilon_{\text{L}}(\Delta,T_{\text{L}})+\Upsilon_{\text{R}}(\Delta,T_{\text{R}})}\\\ \times\left[f(\Delta/(k_{\text{B}}T_{\text{L}}))-f(\Delta/(k_{\text{B}}T_{\text{R}}))\right],$ (63) where $T_{\text{L}}=T+\Delta T/2$ and $T_{\text{R}}=T-\Delta T/2$. In conclusion, under weak coupling and neglecting the Lamb shift, the heat current only depends on the tunneling rates $\Upsilon_{\alpha}(\Delta,T_{\alpha})$, which can be explicitly evaluated for the models considered in Section IV. By plugging Eq. (63) into Eq. (14) we find the general expression for $R$, i.e. Eq. (18). ## Appendix C Tunneling Rates In this section we prove Eq. (19) following Ref. breuer2002. To this end we consider the system-bath Hamiltonian as written in Eq. (57), such that all operators are Hermitian. As shown in App.
B, the term proportional to $\sigma_{z}$ does not contribute to the heat current, therefore we consider $\mathcal{H}_{\alpha,\text{S}}=B_{\alpha x}\otimes\sigma_{x}+B_{\alpha y}\otimes\sigma_{y},$ (64) where $\displaystyle B_{\alpha x}$ $\displaystyle=\frac{B_{\alpha}^{\dagger}+B_{\alpha}}{2},$ $\displaystyle B_{\alpha y}$ $\displaystyle=\frac{i(B_{\alpha}^{\dagger}-B_{\alpha})}{2}.$ (65) Using the results of Ref. breuer2002, with $\mathcal{H}_{\alpha,\text{S}}$ given by Eq. (64), the dissipation rate induced by bath $\alpha$ is given by $\Upsilon^{-}_{\alpha}(\Delta,T)=\sum_{i,j=\\{x,y\\}}\gamma_{ij}\matrixelement*{1}{\sigma_{i}}{0}\matrixelement*{0}{\sigma_{j}}{1}=\\\ =\gamma_{xx}+\gamma_{yy}+i\gamma_{yx}-i\gamma_{xy}=\int\limits_{-\infty}^{+\infty}dt\,e^{i\Delta t}\expectationvalue*{B_{\alpha}(t)B^{\dagger}_{\alpha}(0)},$ (66) where $\gamma_{ij}=\int_{-\infty}^{+\infty}dt\,e^{i\Delta t}\expectationvalue*{B^{\dagger}_{\alpha i}(t)B_{\alpha j}(0)}.$ (67) Here $B_{\alpha i}(t)$ is the time evolution of $B_{\alpha i}$ in the interaction picture, i.e. $B_{\alpha i}(t)$ is the Heisenberg picture operator evolved solely according to the Hamiltonian of the bath $\mathcal{H}_{\alpha}$. In the last step of Eq. (66) we used Eq. (65) to express $B_{\alpha x}$ and $B_{\alpha y}$ in terms of $B_{\alpha}$ and $B_{\alpha}^{\dagger}$. This concludes the proof. ## Appendix D Tunneling Rates in Specific Models In this section we derive the expression for $\Upsilon_{\alpha}(\Delta,T)$ in various models. ### D.1 Fermionic baths with linear (tunnel) couplings In this subsection we consider a fermionic bath $\mathcal{H}_{\alpha}^{\text{(F)}}$, as defined in Eq. (2). Since we consider the case of equal chemical potentials, we define the energies $\epsilon_{\alpha k}$ in Eq. (2) as measured with respect to the common chemical potential $\mu$. Therefore, the energies $\epsilon_{\alpha k}$ are defined in the interval $(-\infty,+\infty)$. Plugging the linear coupling Hamiltonian, given in Eq.
(6), into Eq. (19) yields $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=\sum_{k,k^{\prime}}V_{\alpha k}V_{\alpha k^{\prime}}^{*}\int\limits_{-\infty}^{+\infty}dt\,e^{i\Delta t}\expectationvalue{c_{\alpha k}(t)c_{\alpha k^{\prime}}^{\dagger}}.$ (68) In the interaction picture, time-evolved bath operators $\hat{O}$ satisfy (with $\hbar=1$) $\frac{d\hat{O}(t)}{dt}=i\left[\mathcal{H}_{\alpha},\hat{O}(t)\right].$ (69) Using the fact that $[\mathcal{H}_{\alpha},c_{\alpha k}]=-\epsilon_{\alpha k}c_{\alpha k}$, we find $c_{\alpha k}(t)=e^{-i\epsilon_{\alpha k}t}c_{\alpha k}.$ (70) Plugging Eq. (70) into Eq. (68) yields $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=2\pi\sum_{k}|V_{\alpha k}|^{2}\expectationvalue*{c_{\alpha k}c^{\dagger}_{\alpha k}}\delta(\Delta-\epsilon_{\alpha k})=\\\ 2\pi\sum_{k}|V_{\alpha k}|^{2}[1-f(\beta_{\alpha}\epsilon_{\alpha k})]\delta(\Delta-\epsilon_{\alpha k}),$ (71) where $f(x)=(\exp(x)+1)^{-1}$. Recognizing the spectral function, defined in Eq. (10), we have that $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=\Gamma_{\alpha}(\Delta)[1-f(\Delta/(k_{\text{B}}T_{\alpha}))].$ (72) Using the detailed balance condition, we find that $\Upsilon_{\alpha}(\Delta,T)=\Gamma_{\alpha}(\Delta),$ (73) which implies $g(\Delta,T)=1$. This proves Eq. (24). ### D.2 Bosonic baths with linear (tunnel-like) coupling In this section we consider a bosonic bath $\mathcal{H}_{\alpha}^{\text{(B)}}$, as defined in Eq. (2), and a linear coupling as in Eq. (6). As in the fermionic case, we have that $[\mathcal{H}_{\alpha}^{\text{(B)}},b_{\alpha k}]=-\epsilon_{\alpha k}b_{\alpha k}$, so also in this case we have that the interaction picture destruction operator is given by $b_{\alpha k}(t)=e^{-i\epsilon_{\alpha k}t}b_{\alpha k}.$ (74) Performing the same steps as in the fermionic case, we end up with Eq. 
(71) with $\expectationvalue*{b_{\alpha k}b^{\dagger}_{\alpha k}}$ instead of $\expectationvalue*{c_{\alpha k}c^{\dagger}_{\alpha k}}$, which amounts to replacing $1-f(\epsilon_{\alpha k}/(k_{\text{B}}T_{\alpha}))$ with $1+n(\epsilon_{\alpha k}/(k_{\text{B}}T_{\alpha}))$, where $n(x)\equiv(\exp(x)-1)^{-1}$. We therefore find $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=\Gamma_{\alpha}(\Delta)[1+n(\Delta/(k_{\text{B}}T_{\alpha}))],$ (75) which, using the detailed balance condition, leads to $\Upsilon_{\alpha}(\Delta,T_{\alpha})=\Gamma_{\alpha}(\Delta)\coth{(\Delta/(2k_{\text{B}}T_{\alpha}))}.$ (76) This proves Eq. (25). ### D.3 Bosonic baths with non-linear coupling In this section we consider a bosonic bath $\mathcal{H}_{\alpha}^{\text{(B)}}$, as defined in Eq. (2), and a non-linear coupling as in Eq. (7). Using Eq. (19), we have that $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=\sum_{k,k^{\prime}}V_{\alpha k}V_{\alpha k^{\prime}}^{*}\int\limits_{-\infty}^{+\infty}dt\,e^{i\Delta t}\expectationvalue{b_{\alpha k}^{2}(t)(b_{\alpha k^{\prime}}^{\dagger})^{2}}.$ (77) Using Eq. (74), we can compute the time integral, finding $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=2\pi\sum_{k}|V_{\alpha k}|^{2}\expectationvalue{b_{\alpha k}^{2}(b_{\alpha k}^{\dagger})^{2}}\,\delta(\Delta-2\epsilon_{\alpha k}).$ (78) We therefore need to compute the expectation value $\expectationvalue*{b_{\alpha k}^{2}(b_{\alpha k}^{\dagger})^{2}}$. Using the commutation relations, we have that $\expectationvalue*{b_{\alpha k}^{2}(b_{\alpha k}^{\dagger})^{2}}=\expectationvalue{n_{\alpha k}^{2}}+3n(\epsilon_{\alpha k}/(k_{\text{B}}T_{\alpha}))+2,$ (79) where $\expectationvalue*{n_{\alpha k}^{2}}=\expectationvalue*{(b_{\alpha k}^{\dagger}b_{\alpha k})^{2}}$ is the thermal expectation value of the square number. The calculation of $\expectationvalue*{n_{\alpha k}^{2}}$ is performed in App. E. Using Eq.
(94), we have that $\expectationvalue*{b_{\alpha k}^{2}(b_{\alpha k}^{\dagger})^{2}}=2\left[n(\epsilon_{\alpha k}/(k_{\text{B}}T_{\alpha}))+1\right]^{2}.$ (80) Plugging this into Eq. (78), recalling that $\delta(\Delta-2\epsilon_{k})=\delta(\Delta/2-\epsilon_{k})/2$ (which can be proven by changing variables), and recognizing the spectral function, we find $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=\Gamma_{\alpha}(\Delta/2)[1+n(\Delta/(2k_{\text{B}}T_{\alpha}))]^{2}.$ (81) Finally, using the detailed balance condition we find $\Upsilon_{\alpha}(\Delta,T_{\alpha})=\frac{1}{2}\Gamma_{\alpha}(\frac{\Delta}{2})\,[1+\coth^{2}(\Delta/(4k_{\text{B}}T_{\alpha}))].$ (82) If Eq. (20) is replaced with $\Upsilon_{\alpha}(\Delta,T)=\Gamma_{\alpha}(\Delta/2)g(\Delta,T)/2$, then $g(\Delta,T)$ is given by Eq. (27). This implies that, for the results of Section IV.1.2, $\Gamma_{\alpha}(\Delta)$ has to be replaced with $\Gamma_{\alpha}(\Delta/2)$ in the definition of $\lambda$ [Eq. (21)]. ### D.4 Arbitrary baths with different $\sigma$ couplings In this subsection we consider arbitrary baths coupled to the qubit via Eq. (8). In fact, in this appendix we consider a more general case, given by $\mathcal{H}_{\alpha,\text{S}}=\sum_{i=x,y,z}(u_{\alpha,i}\sigma_{i})\otimes(B_{\alpha}+B_{\alpha}^{\dagger}),$ (83) where $\vec{u}_{\alpha}=(\sin\theta_{\alpha}\cos\phi_{\alpha},\sin\theta_{\alpha}\sin\phi_{\alpha},\cos\theta_{\alpha})$ is a unit vector, and $B_{\alpha}$ is an arbitrary bath operator. As discussed in Section IV, the term proportional to $\sigma_{z}$ does not contribute to the heat current, so we can neglect it. The term that matters is $[\vec{u}_{\alpha}]_{x}\sigma_{x}+[\vec{u}_{\alpha}]_{y}\sigma_{y}=\sin\theta_{\alpha}\left(e^{i\phi_{\alpha}}\sigma^{+}+e^{-i\phi_{\alpha}}\sigma^{-}\right).$ (84) Assuming that $\Delta>0$, and that $B_{\alpha}^{\dagger}$ produces excitations in bath $\alpha$ with positive energy, the terms proportional to $B^{\dagger}\sigma^{+}$ and $B\sigma^{-}$ also vanish.
The relevant terms of the interacting Hamiltonian thus become $\mathcal{H}_{\alpha,\text{S}}=\sigma^{+}\otimes\tilde{B}_{\alpha}+\sigma^{-}\otimes\tilde{B}_{\alpha}^{\dagger},$ (85) where we define $\tilde{B}_{\alpha}=\sin\theta_{\alpha}e^{i\phi_{\alpha}}B_{\alpha}.$ (86) This interacting Hamiltonian is now of the form of Eq. (5). Therefore, the tunneling rates can be computed from Eq. (19), yielding $\Upsilon^{-}_{\alpha}(\Delta,T_{\alpha})=\sin^{2}\theta_{\alpha}h(\Delta,T_{\alpha}),$ (87) where $h(\Delta,T_{\alpha})=\int dt\,e^{i\Delta t}\expectationvalue*{B_{\alpha}(t)B_{\alpha}^{\dagger}(0)}_{\alpha}$ only depends on the bath through the temperature, and depends on neither $\theta_{\alpha}$ nor $\phi_{\alpha}$. Using the detailed balance condition we find $\Upsilon_{\alpha}(\Delta,T_{\alpha})=(\sin^{2}\theta_{\alpha})h(\Delta,T_{\alpha})(1+e^{-\Delta/(k_{B}T_{\alpha})}).$ (88) This situation therefore corresponds to the equal-g case [$g_{\rm L}=g_{\rm R}\equiv g$ in Eq. (20)], where $\Gamma_{\alpha}(\Delta)=\sin^{2}\theta_{\alpha}$, and $g(\Delta,T)=h(\Delta,T)(1+e^{-\Delta/(k_{B}T)})$. ## Appendix E Thermal Averages In this section we show how to compute the expectation value $\expectationvalue*{n_{\alpha k}^{2}}$ for the bosonic bath. Let us define the inverse temperature $\beta_{\alpha}=1/(k_{B}T_{\alpha})$. The partition function $Z_{\alpha}$ is given by $Z=\sum_{\\{n_{i}\\}=0}^{+\infty}p(\\{n_{i}\\}),$ (89) where the sum is over each $n_{i}$ from $0$ to $+\infty$, and where $p(\\{n_{i}\\})=e^{-\beta_{\alpha}\sum_{j}n_{j}\epsilon_{\alpha j}}$ (90) is the canonical probability of finding the bath in a Fock state with occupation numbers $\\{n_{i}\\}$.
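The thermal averages of this appendix [Eqs. (91)-(94)] can be checked by brute-force truncated summation over the Fock states of a single bosonic mode. The sketch below is our own check (with $k_B=1$), not part of the derivation.

```python
# Brute-force single-mode thermal averages versus the closed-form results
# <n> = n(beta eps) and <n^2> = 2 n^2 + n  (Bose-Einstein n).
import math

def thermal_moments(beta_eps, n_max=400):
    # Boltzmann weights e^{-beta eps n}, truncated at n_max (tail is negligible)
    weights = [math.exp(-beta_eps * n) for n in range(n_max)]
    z = sum(weights)
    mean_n = sum(n * w for n, w in enumerate(weights)) / z
    mean_n2 = sum(n * n * w for n, w in enumerate(weights)) / z
    return mean_n, mean_n2

beta_eps = 0.5
n_be = 1.0 / (math.exp(beta_eps) - 1.0)        # Bose-Einstein occupation
mean_n, mean_n2 = thermal_moments(beta_eps)
assert abs(mean_n - n_be) < 1e-10              # Eq. (91): <n> = n(beta eps)
assert abs(mean_n2 - (2 * n_be**2 + n_be)) < 1e-9    # Eq. (94)
# hence <b^2 (b^dag)^2> = <n^2> + 3<n> + 2 = 2(n + 1)^2, as used in App. D.3
assert abs(mean_n2 + 3 * mean_n + 2 - 2 * (n_be + 1)**2) < 1e-8
```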
Using these two definitions, and recalling that $\expectationvalue{n_{\alpha k}^{m}}\equiv\sum p(\\{n_{i}\\})\,n^{m}_{k}$, it is easy to prove that $\displaystyle-\frac{1}{\beta_{\alpha}}\frac{\partial\ln Z}{\partial\epsilon_{\alpha k}}$ $\displaystyle=\expectationvalue*{n_{\alpha k}},$ (91) $\displaystyle\frac{1}{\beta_{\alpha}^{2}}\frac{\partial^{2}\ln Z}{\partial\epsilon_{\alpha k}^{2}}$ $\displaystyle=\expectationvalue*{n_{\alpha k}^{2}}-\expectationvalue*{n_{\alpha k}}^{2}.$ (92) Plugging Eq. (90) into (89), and recognizing that we can perform all the sums as geometric series, we can express the logarithm of the bosonic partition function as $\ln Z=-\sum_{k}\ln{(1-e^{-\beta_{\alpha}\epsilon_{\alpha k}})}.$ (93) Plugging Eq. (93) into (91), we find the well-known result that $\expectationvalue{n_{\alpha k}}=n(\beta_{\alpha}\epsilon_{\alpha k})$. Plugging Eq. (93) into (92), we find $\expectationvalue*{n_{\alpha k}^{2}}=2n^{2}(\beta_{\alpha}\epsilon_{\alpha k})+n(\beta_{\alpha}\epsilon_{\alpha k}).$ (94) ## Appendix F Lamb shift In this appendix we compute the Lamb shift of the qubit gap induced by the bath in the weak coupling regime, and we derive Eq. (31). In order to use the results of Ref. breuer2002, we consider a coupling Hamiltonian as written in Eq. (64). As shown in Ref. breuer2002, we have that $\tilde{\mathcal{H}}=\sum_{\begin{subarray}{c}\epsilon=\\{0,\pm\Delta\\}\\\ i,j=\\{x,y\\}\end{subarray}}S_{ij}(\epsilon)\sigma_{i}^{\dagger}(\epsilon)\sigma_{j}(\epsilon),$ (95) where $S_{ij}(\epsilon)=\frac{1}{2\pi}\mathcal{P}\int_{-\infty}^{+\infty}\frac{\gamma_{ij}(\omega)}{\epsilon-\omega}d\omega\equiv\mathcal{S}_{\epsilon}[\gamma_{ij}(\omega)],$ (96) with $\gamma_{ij}(\omega)=\int_{-\infty}^{+\infty}dt\,e^{i\omega t}\expectationvalue*{B^{\dagger}_{i}(t)B_{j}(0)}$ (97) defined exactly as in Eq.
(67), where $\Delta$ is replaced with $\omega$, and where $\sigma_{i}(\epsilon)=\sum_{\epsilon^{\prime}-\epsilon^{\prime\prime}=\epsilon}\ket{\epsilon^{\prime\prime}}\bra{\epsilon^{\prime\prime}}\sigma_{i}\ket{\epsilon^{\prime}}\bra{\epsilon^{\prime}}.$ (98) Notice that the functional $\mathcal{S}_{\epsilon}[\dots]$, defined in Eq. (96), is linear, and that $\epsilon^{\prime\prime}$ and $\epsilon^{\prime}$ run over the two eigenvalues of the qubit, $-\Delta/2,\Delta/2$. For ease of notation, we identify the excited state of the qubit with $\ket{1}=\ket{\Delta/2}$, and the ground state with $\ket{0}=\ket{-\Delta/2}$. Expanding the sum in Eq. (98), we have $\displaystyle\sigma_{i}(\Delta)$ $\displaystyle=\matrixelement*{0}{\sigma_{i}}{1}\,\ket{0}\bra{1},$ (99) $\displaystyle\sigma_{i}(-\Delta)$ $\displaystyle=\matrixelement*{1}{\sigma_{i}}{0}\,\ket{1}\bra{0},$ $\displaystyle\sigma_{i}(0)$ $\displaystyle=\sum_{k=0,1}\matrixelement*{k}{\sigma_{i}}{k}\,\ket{k}\bra{k}=0,$ where, in the last equality, we used the fact that both $\sigma_{x}$ and $\sigma_{y}$ have only zeros on the diagonal. Therefore, the non-null elements are given by $\displaystyle\sigma_{x}(\Delta)$ $\displaystyle=\sigma^{-},$ $\displaystyle\sigma_{x}(-\Delta)$ $\displaystyle=\sigma^{+},$ (100) $\displaystyle\sigma_{y}(\Delta)$ $\displaystyle=-i\sigma^{-},$ $\displaystyle\sigma_{y}(-\Delta)$ $\displaystyle=i\sigma^{+}.$ Plugging these results into Eq.
(95), using the anti-commutation relation $\\{\sigma^{-},\sigma^{+}\\}=\mathbb{1}$, and neglecting the terms proportional to the identity, we find $\tilde{\mathcal{H}}=\sigma_{z}\left[S_{xx}(\Delta)+S_{yy}(\Delta)-iS_{xy}(\Delta)+iS_{yx}(\Delta)\right]\\\ -\sigma_{z}\left[S_{xx}(-\Delta)+S_{yy}(-\Delta)+iS_{xy}(-\Delta)-iS_{yx}(-\Delta)\right].$ (101) Expressing $S_{ij}(\pm\Delta)$ in terms of the functional $\mathcal{S}_{\epsilon}$ yields $\tilde{\mathcal{H}}=\sigma_{z}\mathcal{S}_{\Delta}\left[\gamma_{xx}(\omega)+\gamma_{yy}(\omega)-i\gamma_{xy}(\omega)+i\gamma_{yx}(\omega)\right]\\\ -\sigma_{z}\mathcal{S}_{-\Delta}\left[\gamma_{xx}(\omega)+\gamma_{yy}(\omega)+i\gamma_{xy}(\omega)-i\gamma_{yx}(\omega)\right].$ (102) Using the definition of $\gamma_{ij}(\omega)$ in Eq. (97), and expressing $B_{x}$ and $B_{y}$ in terms of $B$ and $B^{\dagger}$ through Eq. (65), it can be shown that $\displaystyle\gamma_{xx}(\omega)+\gamma_{yy}(\omega)-i\gamma_{xy}(\omega)+i\gamma_{yx}(\omega)=\Upsilon^{-}(\omega),$ (103) $\displaystyle\gamma_{xx}(\omega)+\gamma_{yy}(\omega)+i\gamma_{xy}(\omega)-i\gamma_{yx}(\omega)=\Upsilon^{+}(-\omega),$ where $\Upsilon^{-}(\omega)$ and $\Upsilon^{+}(\omega)$ are the rates introduced in Eq. (60). Plugging Eq. (103) into Eq. (102) yields $\tilde{\mathcal{H}}=\sigma_{z}\left(\mathcal{S}_{\Delta}\left[\Upsilon^{-}(\omega)\right]-\mathcal{S}_{-\Delta}\left[\Upsilon^{+}(-\omega)\right]\right).$ (104) Using Eq. (96), it can be shown that the functional $\mathcal{S}_{\Delta}[\dots]$ satisfies the general property $\mathcal{S}_{-\Delta}[f(-\omega)]=-\mathcal{S}_{\Delta}[f(\omega)]$.
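This antisymmetry property can also be verified numerically. The sketch below (our check, not part of the derivation) evaluates the principal value through the symmetric-pair form $\mathrm{PV}\int f(\omega)/(\epsilon-\omega)\,d\omega=\int_0^M[f(\epsilon-u)-f(\epsilon+u)]/u\,du$ on a midpoint grid, which avoids the $u=0$ point, for a non-even Gaussian test function.

```python
# Check that S_{-Delta}[f(-w)] = -S_{Delta}[f(w)], with
# S_eps[f] = (1/2pi) PV int f(w)/(eps - w) dw.
import numpy as np

def s_functional(eps, f, half_width=50.0, n_points=400_000):
    # symmetric-pair principal value on a midpoint grid (never hits u = 0);
    # half_width is large enough for a rapidly decaying test function
    h = half_width / n_points
    u = (np.arange(n_points) + 0.5) * h
    return np.sum((f(eps - u) - f(eps + u)) / u) * h / (2.0 * np.pi)

f = lambda w: np.exp(-(w - 1.0) ** 2)       # a non-even test function
f_reflected = lambda w: f(-w)
delta = 0.7
lhs = s_functional(-delta, f_reflected)
rhs = -s_functional(delta, f)
assert abs(lhs - rhs) < 1e-6 and abs(rhs) > 1e-3
```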
Therefore, we find $\tilde{\mathcal{H}}=\sigma_{z}\mathcal{S}_{\Delta}\left[\Upsilon^{-}(\omega)+\Upsilon^{+}(\omega)\right].$ (105) Finally, recalling that $\Upsilon^{\pm}(\omega)=\Upsilon^{\pm}_{\text{L}}(\omega,T_{\text{L}})+\Upsilon^{\pm}_{\text{R}}(\omega,T_{\text{R}})$, we have that $\Upsilon^{-}(\omega)+\Upsilon^{+}(\omega)=\Upsilon_{\text{L}}(\omega,T_{\text{L}})+\Upsilon_{\text{R}}(\omega,T_{\text{R}}).$ (106) Therefore, we find $\tilde{\mathcal{H}}=\sigma_{z}\left(\mathcal{S}_{\Delta}\left[\Upsilon_{\text{L}}(\omega,T_{\text{L}})\right]+\mathcal{S}_{\Delta}\left[\Upsilon_{\text{R}}(\omega,T_{\text{R}})\right]\right),$ (107) which proves Eq. (31). ## Appendix G Co-tunneling calculation In this appendix we derive Eq. (36), i.e. the contribution to the heat current of co-tunneling processes. We will focus on the XX and XY coupling cases, defined in Section V. For simplicity, in this appendix we express the system-bath Hamiltonian $\mathcal{H}_{\alpha,\text{S}}$ as $\displaystyle\mathcal{H}_{\text{L},\text{S}}$ $\displaystyle=(\sigma^{+}+\sigma^{-})\otimes\sum_{k}V_{\text{L}k}(b_{\text{L}k}+b_{\text{L}k}^{\dagger}),$ (108) $\displaystyle\mathcal{H}_{\text{R},\text{S}}$ $\displaystyle=(q\sigma^{+}+q^{*}\sigma^{-})\otimes\sum_{k}V_{\text{R}k}(b_{\text{R}k}+b_{\text{R}k}^{\dagger}),$ where $q$ is a complex coefficient given by $q=1$ in the XX case (since $\sigma_{x}=\sigma^{+}+\sigma^{-}$) and by $q=i$ in the XY case (since $\sigma_{y}=i\sigma^{+}-i\sigma^{-}$). Co-tunneling is a second-order process where a state of the uncoupled system evolves into another state of the uncoupled system passing through a “virtual state” by interacting twice with $\mathcal{H}_{\alpha,\text{S}}$. Since $\mathcal{H}_{\alpha,\text{S}}$ contains the operators $\sigma^{+}$ and $\sigma^{-}$, and since co-tunneling rates are obtained by acting twice with $\mathcal{H}_{\alpha,\text{S}}$, the state of the qubit remains unaltered during a co-tunneling process.
This property, which is denoted as “elastic co-tunneling”, implies that co-tunneling rates do not enter the master equation for the probabilities. We start by considering all processes which transfer an excitation from the left to the right bath while the qubit is in the ground state. Let us denote with $\ket{0}$ and $\ket{1}$ the ground and excited state of the qubit, and with $\ket{n_{\alpha}}_{k}$ a Fock state with $n_{\alpha}$ excitations in mode $k$ of bath $\alpha$. The initial $\ket{i}$, final $\ket{f}$, and possible intermediate states $\ket{\nu_{i}}$ involved in the co-tunneling process are respectively given by $\displaystyle\ket{i}$ $\displaystyle=\ket{0}\otimes\ket{n_{\text{L}}}_{k}\otimes\ket{n_{\text{R}}}_{k^{\prime}},$ (109) $\displaystyle\ket{f}$ $\displaystyle=\ket{0}\otimes\ket{n_{\text{L}}-1}_{k}\otimes\ket{n_{\text{R}}+1}_{k^{\prime}},$ $\displaystyle\ket{\nu_{1}}$ $\displaystyle=\ket{1}\otimes\ket{n_{\text{L}}-1}_{k}\otimes\ket{n_{\text{R}}}_{k^{\prime}},$ $\displaystyle\ket{\nu_{2}}$ $\displaystyle=\ket{1}\otimes\ket{n_{\text{L}}}_{k}\otimes\ket{n_{\text{R}}+1}_{k^{\prime}},$ for all choices of $k$ and $k^{\prime}$. Using the Fermi golden rule, the rate of transition from the initial state $\ket{i}$ to the final state $\ket{f}$ is given by $\Upsilon_{i\to f}=\frac{2\pi}{\hbar}\left|A_{if}\right|^{2}\delta(\epsilon_{i}-\epsilon_{f}),$ (110) where $\epsilon_{i/f}$ is the energy of the initial/final state in the absence of the system-bath interaction, and $A_{if}=\sum_{j}\frac{\langle f|\sum_{\alpha}\mathcal{H}_{\alpha,\text{S}}|\nu_{j}\rangle\langle\nu_{j}|\sum_{\alpha}\mathcal{H}_{\alpha,\text{S}}|i\rangle}{\epsilon_{i}-\epsilon_{\nu_{j}}+i\eta},$ (111) $\eta$ being an infinitesimal positive quantity and $\epsilon_{\nu_{j}}$ the energy of $\ket{\nu_{j}}$. Using Eqs.
(108) and (109), we have that the non-null matrix elements are $\displaystyle\matrixelement*{f}{\mathcal{H}_{\text{R},\text{S}}}{\nu_{1}}=\matrixelement*{\nu_{2}}{\mathcal{H}_{\text{R},\text{S}}}{i}=qV_{\text{R}k_{\text{R}}}\sqrt{n_{\text{R}}+1},$ (112) $\displaystyle\matrixelement*{\nu_{1}}{\mathcal{H}_{\text{L},\text{S}}}{i}=\matrixelement*{f}{\mathcal{H}_{\text{L},\text{S}}}{\nu_{2}}=\,V_{\text{L}k_{\text{L}}}\sqrt{n_{\text{L}}}.$ The co-tunneling heat current $I^{\rm cot(0)}_{\text{L}\to\text{R}}$ that accounts for the transfer of an excitation from left to right, while the qubit is in state $\ket{0}$, is obtained by summing the quantity $\Upsilon_{i\to f}$, multiplied by the transferred energy, over all initial and final states, weighted by the equilibrium probabilities. Combining Eqs. (112), (111) and (110), and using for simplicity $\epsilon_{k}=\epsilon_{\text{L}k}$ and $\epsilon_{k^{\prime}}=\epsilon_{\text{R}k^{\prime}}$, we have that $I^{\rm cot(0)}_{\text{L}\to\text{R}}=\frac{2\pi}{\hbar}\sum_{kk^{\prime}}\epsilon_{k}\,|V_{\rm Lk}|^{2}|V_{\rm Rk^{\prime}}|^{2}n_{\rm L}(\epsilon_{\rm k})[1+n_{\rm R}(\epsilon_{{\rm k^{\prime}}})]\\\ \times\left|\frac{q^{*}}{\Delta+\epsilon_{\rm k}+i\eta}+\frac{q}{\Delta-\epsilon_{\rm k}+i\eta}\right|^{2}\delta(\epsilon_{\rm k}-\epsilon_{\rm k^{\prime}}).$ (113) As usual, we assume that the energies in the leads form a continuum, so we can replace the sum with an integral. After some algebra, and recalling that $|q|^{2}=1$ both in the XX and XY case, we have that $I^{\rm cot(0)}_{\text{L}\to\text{R}}=\int_{0}^{+\infty}\frac{d\epsilon}{2\pi\hbar}\,\epsilon\,\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)n_{\rm L}(\epsilon)[1+n_{\rm R}(\epsilon)]\\\ \times\left|\frac{1}{\Delta+\epsilon+i\eta}+\frac{q}{q^{*}}\frac{1}{\Delta-\epsilon+i\eta}\right|^{2}.$ (114) Note that the term $q/q^{*}$ is respectively $1$ and $-1$ in the XX and XY cases.
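The relative sign $q/q^{*}$ is the only difference between the XX and XY kernels in Eq. (114). A minimal sketch (with $\eta\to 0$ and $\epsilon$ kept away from the pole at $\epsilon=\Delta$) shows that the XX kernel vanishes for small gap while the XY kernel stays finite, consistent with the XY case maximizing the coherent contribution at small $\Delta$.

```python
# Squared co-tunneling amplitudes of Eq. (114), eta -> 0, eps != Delta:
#   XX (sign = +1): 1/(D+e) + 1/(D-e) =  2 D / (D^2 - e^2)
#   XY (sign = -1): 1/(D+e) - 1/(D-e) = -2 e / (D^2 - e^2)
def kernel(delta, eps, sign):
    amp = 1.0 / (delta + eps) + sign / (delta - eps)
    return amp * amp

eps = 2.0
xx_small = kernel(0.01, eps, +1.0)   # vanishes as Delta -> 0
xy_small = kernel(0.01, eps, -1.0)   # stays finite as Delta -> 0
assert xy_small > 100.0 * xx_small
```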
The co-tunneling heat current $I^{\rm cot(0)}_{\text{R}\to\text{L}}$ transferring an excitation from the right to the left bath when the qubit is in $|0\rangle$ is given by Eq. (114) exchanging $\text{L}\leftrightarrow\text{R}$. We thus find that the net heat current while the qubit is in the ground state, $I^{\rm cot(0)}\equiv I^{\rm cot(0)}_{\text{L}\to\text{R}}-I^{\rm cot(0)}_{\text{R}\to\text{L}}$, is given by $I^{\rm cot(0)}=\int_{0}^{+\infty}\frac{d\epsilon}{2\pi\hbar}\,\epsilon\,\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)[n_{\rm L}(\epsilon)-n_{\rm R}(\epsilon)]\\\ \times\left|\frac{1}{\Delta+\epsilon+i\eta}+\frac{q}{q^{*}}\frac{1}{\Delta-\epsilon+i\eta}\right|^{2}.$ (115) Repeating the same derivation assuming that the qubit is in the excited state, it can be shown that the net heat current $I^{\rm cot(1)}$ is the same, i.e. $I^{\rm cot(0)}=I^{\rm cot(1)}$. Therefore, the heat current due to co-tunneling processes is $I^{\rm cot}\equiv I^{\rm cot(0)}=I^{\rm cot(1)}$, given by Eq. (115), which corresponds to Eq. (36) in the main text. ## Appendix H Non-equilibrium Green's function calculation In this appendix we will consider a qubit in contact with bosonic baths. In all our calculations, we fix the coupling on the left-hand side to have only the $\hat{\sigma}_{x}$ component, i.e. $u_{{\rm L},y}=u_{{\rm L},z}=0$ and $u_{{\rm L},x}=1$. The total Hamiltonian in terms of spin operators reads $H=\frac{\Delta}{2}\hat{\sigma}_{z}+\sum_{k,\alpha}\epsilon_{\alpha k}\hat{b}_{\alpha k}^{\dagger}\hat{b}_{\alpha k}+\sum_{j}u_{{\rm R},j}\hat{\sigma}_{j}\hat{B}_{{\rm R}}+\hat{\sigma}_{x}\hat{B}_{{\rm L}}.$ (116) Spin operators do not satisfy the usual Wick's theorem, so the standard Feynman diagram techniques used to obtain Dyson equations cannot be applied.
In order to overcome this difficulty, one can perform a Majorana fermion transformation of the spin operators using the following relations liu ; schad : $\hat{\sigma}_{x}={-i}\hat{\eta}_{y}\hat{\eta}_{z};~{}\hat{\sigma}_{y}={-i}\hat{\eta}_{z}\hat{\eta}_{x};~{}\hat{\sigma}_{z}={-i}\hat{\eta}_{x}\hat{\eta}_{y}.$ (117) The total Hamiltonian in terms of Majorana fermions reads $\displaystyle H=$ $\displaystyle-\frac{i\Delta}{2}\hat{\eta}_{x}\hat{\eta}_{y}+\sum_{k,{\alpha}}\epsilon_{{\alpha}k}b_{{\alpha}k}^{\dagger}b_{{\alpha}k}-i\Big{[}u_{{\rm R},x}\hat{\eta}_{y}\hat{\eta}_{z}\hat{B}_{\rm R}$ $\displaystyle+u_{{\rm R},y}\hat{\eta}_{z}\hat{\eta}_{x}\hat{B}_{\rm R}+u_{{\rm R},z}\hat{\eta}_{x}\hat{\eta}_{y}\hat{B}_{\rm R}\Big{]}-i\hat{\eta}_{y}\hat{\eta}_{z}\hat{B}_{\rm L}.$ (118) We write the Green's function for spin operators as: $\displaystyle\hat{G}^{<}_{l,l^{\prime}}(t,t^{\prime})$ $\displaystyle=$ $\displaystyle-i\langle\hat{\sigma}_{l^{\prime}}(t^{\prime})\hat{\sigma}_{l}(t)\rangle,$ $\displaystyle\hat{G}^{r}_{l,l^{\prime}}(t,t^{\prime})$ $\displaystyle=$ $\displaystyle-i\Theta(t-t^{\prime})\langle\left[\hat{\sigma}_{l}(t),\hat{\sigma}_{l^{\prime}}(t^{\prime})\right]\rangle.$ (119) The relations between the Green's function in the Majorana representation and the Green's function in the spin representation are given by langreth ; makhlin , $\displaystyle\hat{G}^{</>}_{l,l^{\prime}}(t,t^{\prime})=\mp\hat{\Pi}^{</>}_{l,l^{\prime}}(t,t^{\prime})$ $\displaystyle\hat{G}^{r}(t,t^{\prime})=\theta(t-t^{\prime})\left[\hat{\Pi}^{>}(t,t^{\prime})+\hat{\Pi}^{<}(t,t^{\prime})\right],$ (120) where $\hat{\Pi}^{<}_{l,l^{\prime}}(t,t^{\prime})=i\langle\hat{\eta}_{l^{\prime}}(t^{\prime})\hat{\eta}_{l}(t)\rangle$ and $\hat{\Pi}^{>}_{l,l^{\prime}}(t,t^{\prime})=-i\langle\hat{\eta}_{l}(t)\hat{\eta}_{l^{\prime}}(t^{\prime})\rangle$ are the lesser and greater Green's functions for Majorana operators, respectively.
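The algebra behind Eq. (117) can be checked on an explicit matrix representation. The sketch below realizes the Dirac operators $\hat{f},\hat{g}$ of Eq. (124) as Jordan-Wigner matrices on a four-dimensional Fock space (the specific matrix realization is our assumption, used only for this check), and verifies that the resulting $\sigma$ operators obey the Pauli algebra.

```python
# Verify the Majorana representation of the Pauli operators, Eq. (117),
# with eta_x = f + f^dag, eta_y = i(f^dag - f), eta_z = g + g^dag, Eq. (124).
import numpy as np

id2 = np.eye(2)
lower = np.array([[0.0, 0.0], [1.0, 0.0]])   # two-level lowering operator
pauli_z = np.diag([1.0, -1.0])

f = np.kron(lower, id2)                      # Dirac fermion f
g = np.kron(pauli_z, lower)                  # Dirac fermion g (JW string)

eta_x = f + f.conj().T
eta_y = 1j * (f.conj().T - f)
eta_z = g + g.conj().T

sig_x = -1j * eta_y @ eta_z                  # Eq. (117)
sig_y = -1j * eta_z @ eta_x
sig_z = -1j * eta_x @ eta_y

id4 = np.eye(4)
assert np.allclose(eta_x @ eta_x, id4)                   # eta_i^2 = 1
assert np.allclose(eta_x @ eta_y + eta_y @ eta_x, 0.0)   # {eta_x, eta_y} = 0
assert np.allclose(sig_x @ sig_x, id4)                   # sigma_x^2 = 1
assert np.allclose(sig_x @ sig_y, 1j * sig_z)            # sigma_x sigma_y = i sigma_z
assert np.allclose(sig_z, id4 - 2.0 * f.conj().T @ f)    # Eq. (126)
```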
The heat current flowing from the lead ${\rm L}$ to the system is given by $\displaystyle I(t)$ $\displaystyle=\frac{i}{\hbar}\left\langle\left[H_{\rm L}(t),H(t)\right]\right\rangle$ $\displaystyle=-\frac{2}{\hbar}\sum_{k}\epsilon_{Lk}V_{Lk}{\rm Re}\left[G_{x,Lk}^{<}(t,t)\right],$ (121) where $G_{x,Lk}^{<}(t,t^{\prime})=-i\left\langle\hat{b}_{Lk}^{\dagger}(t^{\prime})\hat{\sigma}_{x}(t)\right\rangle$. Following the standard Keldysh NEGF treatment and using the Langreth theorem, the steady-state heat current as defined in Eq. (13) can be written as: $I(\Delta T)=-\frac{2}{\hbar}\int d\epsilon\;\epsilon\;{\rm Re}\left[G_{xx}^{r}(\epsilon)\Sigma_{L}^{<}(\epsilon)+G_{xx}^{<}(\epsilon)\Sigma_{L}^{a}(\epsilon)\right],$ (122) where $\Sigma_{L}(\epsilon)=\sum_{k}|V_{Lk}|^{2}g_{Lk}(\epsilon)$ is the self-energy of bath L and $g_{Lk}(\epsilon)$ is the Green’s function for the uncoupled bath L. Applying the relations of Eq. (120), the heat current can be computed as $I(\Delta T)=-\int_{0}^{\infty}\frac{d\epsilon}{2\pi\hbar}\epsilon\left[\Pi_{xx}^{>}(\epsilon)\Sigma_{L}^{<}(\epsilon)+\Pi_{xx}^{<}(\epsilon)\Sigma_{L}^{>}(\epsilon)\right],$ (123) where the self-energies due to system-bath coupling, $\Sigma_{L}^{<}(\epsilon)$ and $\Sigma_{L}^{>}(\epsilon)$, are defined in Sec. III. In order to evaluate the heat currents, one needs to calculate the lesser and greater components of the Majorana Green’s function. ### H.1 Derivation of Green’s function In this section we derive the Green’s functions in the Majorana representation. Normal ordering is not defined for Majorana fermions.
It is useful to write the Majorana operators in terms of Dirac operators langreth $\hat{\eta}_{x}=\hat{f}+\hat{f}^{\dagger};~{}~{}\hat{\eta}_{y}=i(\hat{f}^{\dagger}-\hat{f});~{}~{}\hat{\eta}_{z}=\hat{g}+\hat{g}^{\dagger}.$ (124) The fermionic nature of $\hat{f}$ is reflected in $\hat{f}=\frac{\hat{\eta}_{x}+i\hat{\eta}_{y}}{2};~{}~{}\hat{f}^{2}=0;~{}~{}\hat{f}^{\dagger^{2}}=0;~{}~{}\left\\{\hat{f},\hat{f}^{\dagger}\right\\}=1,$ (125) and analogous relations hold for $\hat{g}$. The Majorana representation does not suffer from the vertex problem langreth and the constraints on spins are naturally imposed on the Majorana operators schad2 . The Hamiltonian for the qubit transforms to ${H}_{\infty}=\frac{\Delta}{2}(1-2\hat{f}^{\dagger}\hat{f}),$ (126) whereas the contact Hamiltonians are $H_{\rm R,S}=\sum_{k}V_{Rk}\Big{[}u_{{\rm R},x}\,(\hat{f}^{\dagger}-\hat{f})\hat{\eta}_{z}-iu_{{\rm R},y}\,\hat{\eta}_{z}(\hat{f}+\hat{f}^{\dagger})\\\ +u_{{\rm R},z}(1-2\hat{f}^{\dagger}\hat{f})\Big{]}\hat{B}_{R},$ (127) and $H_{\rm L,S}=\sum_{k}{V_{Lk}}(\hat{f}^{\dagger}-\hat{f})\hat{\eta}_{z}\hat{B}_{\rm L}.$ (128) Note that we consider a general spin coupling in the right lead, whereas a fixed $\hat{\sigma}_{x}$ coupling in the left lead. The contour-ordered Green’s function for the Majorana operators can be written as: $\hat{\Pi}_{xx}(\tau,\tau^{\prime})=\begin{bmatrix}\hat{\Pi}_{xx}^{t}(t,t^{\prime})&\hat{\Pi}_{xx}^{<}(t,t^{\prime})\vspace{0.03\columnwidth}\\\ \hat{\Pi}_{xx}^{>}(t,t^{\prime})&\hat{\Pi}_{xx}^{\bar{t}}(t,t^{\prime})\end{bmatrix}$ (129) We also define the Green’s function for the Dirac fermion $\hat{f}$ in the Bogoliubov-Nambu representation, $\hat{\psi}\equiv(\hat{f},\hat{f}^{\dagger})^{T}$ and $\hat{\psi}^{\dagger}\equiv(\hat{f}^{\dagger},\hat{f})$, such that $\hat{G}_{\psi}(\tau,\tau^{\prime})=-i\left\langle\mathcal{T}\hat{\psi}(\tau)\hat{\psi}(\tau^{\prime})\right\rangle$.
On expansion in the Keldysh contour, $\hat{G}_{\psi}(\tau,\tau^{\prime})=\\\ \begin{bmatrix}{G}_{ff^{\dagger}}^{t}(t,t^{\prime})&{G}_{ff}^{t}(t,t^{\prime})&{G}_{ff^{\dagger}}^{<}(t,t^{\prime})&{G}_{ff}^{<}(t,t^{\prime})\vspace{0.03\columnwidth}\\\ {G}_{f^{\dagger}f^{\dagger}}^{t}(t,t^{\prime})&{G}_{f^{\dagger}f}^{t}(t,t^{\prime})&{G}_{f^{\dagger}f^{\dagger}}^{<}(t,t^{\prime})&{G}_{f^{\dagger}f}^{<}(t,t^{\prime})\vspace{0.03\columnwidth}\\\ {G}_{ff^{\dagger}}^{>}(t,t^{\prime})&{G}_{ff}^{>}(t,t^{\prime})&{G}_{ff^{\dagger}}^{\bar{t}}(t,t^{\prime})&{G}_{ff}^{\bar{t}}(t,t^{\prime})\vspace{0.03\columnwidth}\\\ {G}_{f^{\dagger}f^{\dagger}}^{>}(t,t^{\prime})&{G}_{f^{\dagger}f}^{>}(t,t^{\prime})&{G}_{f^{\dagger}f^{\dagger}}^{\bar{t}}(t,t^{\prime})&{G}_{f^{\dagger}f}^{\bar{t}}(t,t^{\prime})\vspace{0.03\columnwidth}\end{bmatrix},$ (130) where for instance, $G_{ff^{\dagger}}(\tau,\tau^{\prime})=-i\left\langle\mathcal{T}\hat{f}(\tau)\hat{f}^{\dagger}(\tau^{\prime})\right\rangle$. For more clarification, see Eqs. (A2) and (A3) in Ref. agarwalla, . 
The lesser and greater Green’s functions in the Majorana representation are $\displaystyle\Pi_{xx}^{<,>}(t,t^{\prime})=\begin{bmatrix}1&1\end{bmatrix}\hat{G}_{\psi}^{<,>}(t,t^{\prime})\begin{bmatrix}1\\\ 1\end{bmatrix}$ $\displaystyle\Pi_{yy}^{<,>}(t,t^{\prime})=\begin{bmatrix}1&-1\end{bmatrix}\hat{G}_{\psi}^{<,>}(t,t^{\prime})\begin{bmatrix}1\\\ -1\end{bmatrix}$ (131) ### H.2 Calculation of Dyson equation In order to obtain a Dyson equation for $\hat{\psi}$, we need to perform a perturbation expansion in terms of the contact Hamiltonian for the Dirac fermion $f$, namely $G_{ff^{\dagger}}(\tau,\tau^{\prime})=G_{ff^{\dagger}}^{0}(\tau,\tau^{\prime})+\frac{i}{2}\sum_{\alpha}\\\ \int d\tau_{1}d\tau_{2}\left\langle\mathcal{T}\left[\hat{H}_{\alpha,\rm S}(\tau_{1})\hat{H}_{\alpha,\rm S}(\tau_{2})\hat{\tilde{f}}(\tau)\hat{\tilde{f}}^{\dagger}(\tau^{\prime})\right]\right\rangle+\cdots$ (132) After a long but straightforward calculation, we obtain $\hat{G}_{\psi}(\tau,\tau^{\prime})=\hat{G}_{\psi}^{0}(\tau,\tau^{\prime})\\\ +\int d\tau_{1}d\tau_{2}\,\hat{G}_{\psi}(\tau,\tau_{1})\hat{\Sigma}_{\psi}(\tau_{1},\tau_{2})\hat{G}_{\psi}^{0}(\tau_{2},\tau^{\prime}),$ (133) where $\hat{\Sigma}_{\psi}=\hat{\Sigma}_{\psi,{\rm L}}+\hat{\Sigma}_{\psi,{\rm R}}$, $\hat{\Sigma}_{\psi,{\rm R}}(\tau_{1},\tau_{2})=iD_{R}(\tau_{1},\tau_{2})\Big{(}u_{{\rm R},x}^{2}\Pi_{z,z}^{0}(\tau_{1},\tau_{2})\hat{\lambda}\\\ +u_{{\rm R},y}^{2}\Pi_{z,z}^{0}(\tau_{1},\tau_{2})\hat{{1}}+4u_{{\rm R},z}^{2}\begin{bmatrix}G_{ff^{\dagger}}^{0}(\tau_{1},\tau_{2})&0\\\ 0&G_{f^{\dagger}f}^{0}(\tau_{1},\tau_{2})\end{bmatrix}\Big{)},$ (134) and $\hat{\Sigma}_{\psi,{\rm L}}(\tau_{1},\tau_{2})=iD_{L}(\tau_{1},\tau_{2})\Pi_{z,z}^{0}(\tau_{1},\tau_{2})\hat{\lambda},$ (135) where $\hat{{1}}$ is the matrix of ones, the embedded self-energy is $D_{\alpha}(\tau_{1},\tau_{2})=-i\sum_{k}|{V_{\alpha k}|^{2}}\left\langle\mathcal{T}\left[B_{k\alpha}(\tau_{1})B_{k\alpha}(\tau_{2})\right]\right\rangle$, and $\hat{\lambda}=\begin{bmatrix}1&-1\\\
-1&1\end{bmatrix}.$ Writing the equation of motion for $\hat{G}_{\psi}^{0}$, we get $\displaystyle\hat{G}_{\psi}^{0}(\tau,\tau^{\prime})(-i\overleftarrow{\partial}_{\tau^{\prime}}+\Delta\hat{\sigma}_{z})=\delta(\tau-\tau^{\prime})\hat{\mathbb{1}},$ (136) where $\hat{\mathbb{1}}$ is a unit matrix. The retarded and advanced self-energies due to coupling to the bath are given by $\displaystyle D_{\alpha}^{r/a}(\epsilon)$ $\displaystyle=\sum_{k}\left|V_{\alpha k}\right|^{2}\left(\frac{1}{\epsilon-\epsilon_{\alpha k}\pm i\eta}-\frac{1}{\epsilon+\epsilon_{\alpha k}\pm i\eta}\right)$ $\displaystyle={\delta{\Delta}_{\alpha}(\epsilon)}\mp\frac{i}{2}\left(\Gamma_{\alpha}(\epsilon)-\Gamma_{\alpha}(-\epsilon)\right),$ (137) where $\delta{\Delta}_{\alpha}(\epsilon)$ is the Lamb shift, defined as $\delta{\Delta}_{\alpha}(\epsilon)=\mathcal{P}\int_{-\infty}^{\infty}\frac{d\epsilon^{\prime}}{2\pi}\left(\frac{\Gamma_{\alpha}(\epsilon^{\prime})}{\epsilon-\epsilon^{\prime}}-\frac{{\Gamma}_{\alpha}(\epsilon^{\prime})}{\epsilon+\epsilon^{\prime}}\right).$ (138) The lesser and greater components of the self-energy take the form $\displaystyle D_{\alpha}^{<}(\epsilon)$ $\displaystyle=$ $\displaystyle- in_{\alpha}(\epsilon)\left({\Gamma}_{\alpha}(\epsilon)-\Gamma_{\alpha}(-\epsilon)\right),$ $\displaystyle D_{\alpha}^{>}(\epsilon)$ $\displaystyle=$ $\displaystyle-i(1+n_{\alpha}(\epsilon))\left({\Gamma}_{\alpha}(\epsilon)-\Gamma_{\alpha}(-\epsilon)\right).$ (139) The integration for the Lamb shift can be simplified to $\delta{\Delta}_{\alpha}(\epsilon)=\frac{\Gamma_{\alpha}}{2\pi}\left(\epsilon\;e^{-\epsilon/\epsilon_{\rm C}}\mathcal{E}\left[\frac{\epsilon}{\epsilon_{\rm C}}\right]-\epsilon\;e^{\epsilon/\epsilon_{\rm C}}\mathcal{E}\left[\frac{-\epsilon}{\epsilon_{\rm C}}\right]-2\epsilon_{\rm C}\right),$ (140) where $\mathcal{E}[\epsilon]=-\mathcal{P}\int_{-\epsilon}^{\infty}e^{-t}/t\;dt$ (141) is the well-known exponential integral function.
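The closed form (140) can be cross-checked numerically. The sketch below (illustrative, not from the paper) assumes the Ohmic form $\Gamma_{\alpha}(\epsilon)=\Gamma_{\alpha}\,\epsilon\,e^{-\epsilon/\epsilon_{\rm C}}$ for $\epsilon>0$, for which the principal-value integral (138) reduces exactly to Eq. (140); the function $\mathcal{E}$ of Eq. (141) is the standard exponential integral ${\rm Ei}$, available as `scipy.special.expi`:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

Gam0, eC = 0.1, 10.0                          # illustrative coupling prefactor and cutoff
gamma = lambda e: Gam0 * e * np.exp(-e / eC)  # assumed Ohmic Gamma_alpha(eps), eps > 0

def lamb_shift_closed(e):
    """Closed form of the Lamb shift, Eq. (140), with E[x] = Ei(x) of Eq. (141)."""
    return Gam0 / (2 * np.pi) * (e * np.exp(-e / eC) * expi(e / eC)
                                 - e * np.exp(e / eC) * expi(-e / eC)
                                 - 2 * eC)

def lamb_shift_numeric(e, emax=50 * eC):
    """Direct principal-value evaluation of Eq. (138) (Gamma vanishes for eps < 0)."""
    # quad's 'cauchy' weight computes P int f(e')/(e' - e) de', hence the sign flip.
    pv, _ = quad(gamma, 0.0, emax, weight='cauchy', wvar=e)
    reg, _ = quad(lambda ep: gamma(ep) / (e + ep), 0.0, emax)
    return (-pv - reg) / (2 * np.pi)

e0 = 1.0
assert abs(lamb_shift_closed(e0) - lamb_shift_numeric(e0)) < 1e-6 * abs(lamb_shift_closed(e0))
```

The agreement holds for any $\epsilon>0$ well below the truncation of the integration range.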
Note that we used $\Gamma_{\alpha}(\epsilon)=0$ for $\epsilon<0$. Moreover, we have $\hat{\Sigma}_{\psi,{\rm R}}(\epsilon)=i\int\frac{d\epsilon^{\prime}}{2\pi}D_{R}(\epsilon-\epsilon^{\prime})\Big{(}u_{{\rm R},x}^{2}\Pi_{z,z}^{0}(\epsilon^{\prime})\hat{\lambda}\\\ +u_{{\rm R},y}^{2}\Pi_{z,z}^{0}(\epsilon^{\prime})\hat{{1}}+4u_{{\rm R},z}^{2}\begin{bmatrix}G_{ff^{\dagger}}^{0}(\epsilon^{\prime})&0\\\ 0&G_{f^{\dagger}f}^{0}(\epsilon^{\prime})\end{bmatrix}\Big{)},$ (142) $\hat{\Sigma}_{\psi,{\rm L}}(\epsilon)=i\int\frac{d\epsilon^{\prime}}{2\pi}D_{L}(\epsilon-\epsilon^{\prime})\Pi_{z,z}^{0}(\epsilon^{\prime})\hat{\lambda}.$ (143) Following Ref. jauho, for $\Sigma(\tau_{1},\tau_{2})=A(\tau_{1},\tau_{2})B(\tau_{1},\tau_{2})$ (144) the Langreth rules are given by $\displaystyle\Sigma^{<}(\tau_{1},\tau_{2})=A^{<}(\tau_{1},\tau_{2})B^{<}(\tau_{1},\tau_{2}),$ $\displaystyle\Sigma^{r}(\tau_{1},\tau_{2})=A^{<}(\tau_{1},\tau_{2})B^{r}(\tau_{1},\tau_{2})+A^{r}(\tau_{1},\tau_{2})B^{<}(\tau_{1},\tau_{2})$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+A^{r}(\tau_{1},\tau_{2})B^{r}(\tau_{1},\tau_{2}).$ (145) Since both Eqs. (134) and (135) have the form of Eq. (144), one can obtain the lesser $(\Sigma_{\psi}^{<})$, greater $(\Sigma_{\psi}^{>})$, retarded $(\Sigma_{\psi}^{r})$ and advanced $(\Sigma_{\psi}^{a})$ self-energies in terms of the different components of the embedded self-energy and the free Green’s function of the system using Eq. (145).
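The retarded rule in Eq. (145) holds pointwise in the time arguments: for $t>t^{\prime}$, where $X^{r}=X^{>}-X^{<}$, it reduces to the algebraic identity $A^{>}B^{>}-A^{<}B^{<}=A^{<}B^{r}+A^{r}B^{<}+A^{r}B^{r}$, which can be checked with arbitrary complex values (an illustrative sketch, not tied to any specific model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary complex lesser/greater components at one fixed pair of times t > t'
Al, Bl = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # A^<, B^<
Ag, Bg = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # A^>, B^>
Ar, Br = Ag - Al, Bg - Bl       # for t > t': X^r = X^> - X^<

# Retarded component of Sigma = A B computed directly ...
Sr_direct = Ag * Bg - Al * Bl
# ... and via the Langreth rule of Eq. (145)
Sr_langreth = Al * Br + Ar * Bl + Ar * Br
assert np.isclose(Sr_direct, Sr_langreth)
```

The lesser rule $\Sigma^{<}=A^{<}B^{<}$ is immediate from the definitions, so only the retarded combination needs checking.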
For instance, $\hat{\Sigma}_{\psi,{\rm R}}^{<}(\epsilon)=i\int\frac{d\epsilon^{\prime}}{2\pi}D_{R}^{<}(\epsilon-\epsilon^{\prime})\Big{(}u_{{\rm R},x}^{2}\Pi_{z,z}^{0,<}(\epsilon^{\prime})\hat{\lambda}\\\ +u_{{\rm R},y}^{2}\Pi_{z,z}^{0,<}(\epsilon^{\prime})\hat{{1}}+4u_{{\rm R},z}^{2}\begin{bmatrix}G_{ff^{\dagger}}^{0,<}(\epsilon^{\prime})&0\\\ 0&G_{f^{\dagger}f}^{0,<}(\epsilon^{\prime})\end{bmatrix}\Big{)},$ (146) $\hat{\Sigma}_{\psi,{\rm L}}^{<}(\epsilon)=i\int\frac{d\epsilon^{\prime}}{2\pi}D_{L}^{<}(\epsilon-\epsilon^{\prime})\Pi_{z,z}^{0,<}(\epsilon^{\prime})\hat{\lambda}.$ (147) Eqs. (137) and (139) give all the components of the embedded self-energies of the baths. The only unknowns are the free Green’s functions of the system, which we discuss below. The free dynamics of the system Hamiltonian can be easily computed to obtain $\displaystyle G_{ff^{\dagger}}^{0,r}(\epsilon)$ $\displaystyle=\mathcal{P}\left\\{\frac{1}{\epsilon+\Delta}\right\\}-i\pi\delta(\epsilon+\Delta),$ $\displaystyle G_{f^{\dagger}f}^{0,r}(\epsilon)$ $\displaystyle=\mathcal{P}\left\\{\frac{1}{\epsilon-\Delta}\right\\}-i\pi\delta(\epsilon-\Delta).$ (148) We can use the relation $G^{r}-G^{a}=G^{>}-G^{<}$ to write $\displaystyle G_{f^{\dagger}f}^{0,>}(\epsilon)-G_{f^{\dagger}f}^{0,<}(\epsilon)$ $\displaystyle=$ $\displaystyle-2i\pi\delta(\epsilon-\Delta),$ $\displaystyle G_{ff^{\dagger}}^{0,>}(\epsilon)-G_{ff^{\dagger}}^{0,<}(\epsilon)$ $\displaystyle=$ $\displaystyle-2i\pi\delta(\epsilon+\Delta).$ (149) Using the fluctuation-dissipation relation jauho , $G^{0,<}(\epsilon)=-f(\epsilon)\left(G^{0,>}(\epsilon)-G^{0,<}(\epsilon)\right)$ and $G^{0,>}(\epsilon)=\left(1-f(\epsilon)\right)\left(G^{0,>}(\epsilon)-G^{0,<}(\epsilon)\right)$ where $f(\epsilon)$ is the Fermi distribution of the system defined at the average temperature of the two baths, we can write $\displaystyle G_{f^{\dagger}f}^{0,</>}(\epsilon)$ $\displaystyle=\pm 2i\pi f(\pm\epsilon)\delta(\epsilon-\Delta),$ $\displaystyle
G_{ff^{\dagger}}^{0,</>}(\epsilon)$ $\displaystyle=\pm 2i\pi f(\pm\epsilon)\delta(\epsilon+\Delta).$ (150) The retarded and advanced Green’s functions for the system in the Majorana notation are ${\Pi}_{z,z}^{0,r/a}(\omega)=\frac{2}{\omega\pm i\eta},$ (151) such that ${\Pi}_{z,z}^{0,r}(\omega)-{\Pi}_{z,z}^{0,a}(\omega)={\Pi}_{z,z}^{0,>}(\omega)-{\Pi}_{z,z}^{0,<}(\omega)=-4i\pi\delta(\omega).$ (152) If we take the effective temperature of the Majorana fermions to be given by $\beta_{\rm eff}(\beta_{L},\beta_{R})$, we have from the fluctuation-dissipation theorem for the ordinary fermionic system in equilibrium jauho : ${\Pi}_{z,z}^{{0},>}(\omega)+\Pi_{z,z}^{0,<}(\omega)=\left({\Pi}_{z,z}^{0,r}(\omega)-{\Pi}_{z,z}^{0,a}(\omega)\right)\tanh\left(\frac{\beta_{\rm eff}\omega}{2}\right)$ (153) Using Eqs. (152) and (153), one can find the lesser and greater Green’s functions for the Majorana operators. Similarly, the time-ordered and anti-time-ordered self-energies are obtained from $\displaystyle\hat{\Sigma}_{\psi}^{t}(\epsilon)+\hat{\Sigma}_{\psi}^{\bar{t}}(\epsilon)$ $\displaystyle=$ $\displaystyle\hat{\Sigma}_{\psi}^{>}(\epsilon)+\hat{\Sigma}_{\psi}^{<}(\epsilon),$ $\displaystyle\hat{\Sigma}_{\psi}^{t}(\epsilon)-\hat{\Sigma}_{\psi}^{\bar{t}}(\epsilon)$ $\displaystyle=$ $\displaystyle\hat{\Sigma}_{\psi}^{a}(\epsilon)+\hat{\Sigma}_{\psi}^{r}(\epsilon).$ (154) Substituting Eq. (130) in Eq.
(133) and performing a Fourier transform, we obtain: $\hat{G}_{\psi}^{-1}(\epsilon)=\hat{G}_{\psi}^{0^{-1}}(\epsilon)-\hat{\Sigma}_{\psi}(\epsilon),$ (155) where the bare system Green’s function satisfies $\hat{G}_{\psi}^{0,t^{-1}}(\epsilon)=-\hat{G}_{\psi}^{0,\bar{t}^{-1}}(\epsilon)=\epsilon\hat{\mathbb{1}}+\Delta\hat{\sigma}_{z},$ (156) such that $\hat{\sigma}_{k}\hat{G}_{\psi}(\epsilon)=\begin{bmatrix}\epsilon+\Delta-[\Sigma_{\psi}^{t}(\epsilon)]_{11}&-[\Sigma_{\psi}^{t}(\epsilon)]_{12}&-[\Sigma_{\psi}^{<}(\epsilon)]_{11}&-[\Sigma_{\psi}^{<}(\epsilon)]_{12}\\\ -[\Sigma_{\psi}^{t}(\epsilon)]_{21}&\epsilon-\Delta-[\Sigma_{\psi}^{t}(\epsilon)]_{22}&-[\Sigma_{\psi}^{<}(\epsilon)]_{21}&-[\Sigma_{\psi}^{<}(\epsilon)]_{22}\\\ [\Sigma_{\psi}^{>}(\epsilon)]_{11}&[\Sigma_{\psi}^{>}(\epsilon)]_{12}&\epsilon+\Delta+[\Sigma_{\psi}^{\bar{t}}(\epsilon)]_{11}&[\Sigma_{\psi}^{\bar{t}}(\epsilon)]_{12}\\\ [\Sigma_{\psi}^{>}(\epsilon)]_{21}&[\Sigma_{\psi}^{>}(\epsilon)]_{22}&[\Sigma_{\psi}^{\bar{t}}(\epsilon)]_{21}&\epsilon-\Delta+[\Sigma_{\psi}^{\bar{t}}(\epsilon)]_{22}\end{bmatrix}^{-1},$ (157) where $\hat{\sigma}_{k}=\text{diag}(1,1,-1,-1)$ is introduced to keep the appropriate sign for the two different branches of the Keldysh contour agarwalla . Using Eq. (131) along with Eq. (157), one can obtain the lesser and greater Green’s functions in the Majorana representation. Substituting the Majorana Green’s functions in Eq. (123), we obtain the final expression for the current with a general spin coupling in the right lead and a fixed spin coupling $\hat{\sigma}_{x}$ in the left lead. ### H.3 Calculation of currents for simple models #### H.3.1 The XX and XY case The current for the XX and the XY system-bath coupling can be calculated from Eq. (123) after calculating the Green’s functions from Eq. (157). Note that one has to properly choose $u_{{\rm R},j}$ to obtain the XX and XY cases.
Considering the zero-dimensionality of the spin system, we arrive at the following expression for the heat current in the XY case $I(\Delta T)=\int\frac{d\epsilon}{2\pi\hbar}\epsilon\;\mathcal{T}_{XY}(\epsilon)[n_{\rm L}(\epsilon)-n_{{\rm R}}(\epsilon)],$ (158) where $\mathcal{T}_{XY}(\epsilon)=\frac{4\,\epsilon^{2}\,\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)}{\left(\epsilon^{2}-\mathcal{X}(\epsilon)-\Delta^{2}\right)^{2}+\mathcal{Y}^{2}(\epsilon)},$ (159) where $\mathcal{X}(\epsilon)$ and $\mathcal{Y}(\epsilon)$ are defined in Eqs. (43) and (44), respectively. We consider an Ohmic spectral density for both baths with a high-frequency cutoff $\epsilon_{C}$. In Eq. (159), when the Lamb shift term is neglected, the transmission probability has a Lorentzian form whose width is determined by ${\Gamma}_{\alpha}(\epsilon)$. When the coupling is very weak, i.e. ${\Gamma}_{\alpha}(\epsilon)\ll\Delta\ll k_{\rm B}T_{\alpha}$, the Lorentzian effectively shrinks to vanishing width and becomes sharply peaked around $\Delta$. In this limit, one can write $\epsilon\approx\Delta$, giving: $\mathcal{T}_{XY}(\Delta)\approx\frac{4\Delta^{2}{\Gamma}_{\rm L}(\Delta){\Gamma}_{\rm R}(\Delta)}{\xi^{2}(\Delta)}.$ (160) The above result corresponds to the one obtained using the master equation in the sequential-tunneling limit.
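The Landauer-like integral (158) is straightforward to evaluate numerically. The sketch below is illustrative only: since $\mathcal{X},\mathcal{Y}$ of Eqs. (43)-(44) are not reproduced here, it neglects the Lamb shift and approximates the broadening by the $\xi(\epsilon)=\epsilon\sum_{\alpha}\Gamma_{\alpha}(\epsilon)\big(1+2n_{\alpha}(\epsilon)\big)$ appearing in Eq. (160), with an assumed Ohmic $\Gamma_{\alpha}(\epsilon)$. It checks the basic transport properties: the current vanishes at equal temperatures and flows from hot to cold.

```python
import numpy as np

hbar = 1.0                        # units: hbar = kB = 1
Delta, eC, G0 = 1.0, 10.0, 0.02   # qubit splitting, cutoff, coupling (illustrative)
trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))  # NumPy 2.x rename

def Gam(e):                       # assumed Ohmic coupling Gamma_alpha(eps), eps > 0
    return G0 * e * np.exp(-e / eC)

def n_bose(e, T):
    return 1.0 / np.expm1(e / T)

def heat_current(TL, TR, npts=200_000):
    """Eq. (158) with the Lamb shift neglected and the width given by
    xi(e) = e * sum_alpha Gamma_alpha(e) * (1 + 2 n_alpha(e))."""
    e = np.linspace(1e-6, 20.0 * max(TL, TR, Delta), npts)
    xi = e * Gam(e) * ((1 + 2 * n_bose(e, TL)) + (1 + 2 * n_bose(e, TR)))
    T_XY = 4 * e**2 * Gam(e)**2 / ((e**2 - Delta**2)**2 + xi**2)
    integrand = e * T_XY * (n_bose(e, TL) - n_bose(e, TR)) / (2 * np.pi * hbar)
    return trapz(integrand, e)

I_fwd = heat_current(0.6, 0.4)
assert I_fwd > 0                                      # heat flows from hot to cold
assert abs(heat_current(0.4, 0.6) + I_fwd) < 1e-15    # antisymmetric in the bias
assert abs(heat_current(0.5, 0.5)) < 1e-15            # no current at equal temperatures
```

The same routine with the transmission of Eq. (162) in place of the Lorentzian gives the XX current of the next paragraph.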
Following a similar calculation, the heat current for the XX case is given by $I(\Delta T)=\int_{0}^{\infty}\frac{d\epsilon}{2\pi\hbar}\epsilon\;\mathcal{T}_{XX}(\epsilon)[n_{{\rm L}}(\epsilon)-n_{\rm R}(\epsilon)],$ (161) where the transmission probability $\mathcal{T}_{XX}(\epsilon)$ is given by $\mathcal{T}_{\text{XX}}(\epsilon)=\frac{4\,\Delta^{2}\Gamma_{\text{L}}(\epsilon)\Gamma_{\text{R}}(\epsilon)}{\left(\epsilon^{2}-2\epsilon\left(\delta{\Delta}_{\text{L}}(\epsilon)+\delta{\Delta}_{\text{R}}(\epsilon)\right)-\Delta^{2}\right)^{2}+\xi^{2}(\epsilon)},$ (162) where $\xi(\epsilon)=\epsilon\sum_{\alpha}{\Gamma}_{\alpha}(\epsilon)\Big{(}1+2n_{\alpha}(\epsilon)\Big{)}$. In Refs. agarwalla, ; boudjada2014, a similar form for the transmission function was derived within the NEGF, but without including the frequency renormalization expressed by the Lamb shift. For $\Gamma_{\alpha}(\epsilon)\ll\Delta\ll k_{B}T$, we obtain $\mathcal{T}_{XX}=\mathcal{T}_{XY}$ given by Eq. (160). In the low-temperature and weak-coupling regime ($\Gamma_{\alpha}(\epsilon)\ll k_{\rm B}T\leq\Delta$), first-order sequential processes are generally suppressed and the dominant contribution comes from second-order co-tunneling processes. For the heat current in the XX case we obtain $I(\Delta T)\approx\int_{0}^{\infty}\frac{d\epsilon}{2\pi\hbar}\frac{4\epsilon\,{\Gamma}_{\rm L}(\epsilon){\Gamma}_{\rm R}(\epsilon)}{\Delta^{2}}[n_{\rm L}(\epsilon)-n_{\rm R}(\epsilon)].$ (163) The above result corresponds to the co-tunneling contribution agarwalla and matches Eq. (36). ### H.4 Exact calculation In this section we derive the formal exact expression for the dynamical susceptibility entering the transmission function Eq. (45), $\chi(t)=\frac{i}{\hbar}\Theta(t)\langle[\sigma_{x}(t),\sigma_{x}(0)]\rangle$ (164) within the path-integral approach to the spin-boson model weiss .
To deal with a correlated initial state at time $t=0$, we assume that the system starts at a preparation time $t_{p}<0$ in a factorized state (Feynman-Vernon) $W_{tot}=\hat{\rho}_{\text{L}}(T_{\text{L}})\otimes\hat{\rho}_{\text{R}}(T_{\text{R}})\otimes\hat{\rho}(t_{p}),$ (165) where each bath is in the thermal equilibrium state described by the density matrix $\hat{\rho}_{\alpha}(T_{\alpha})$ and $\hat{\rho}(t_{p})$ is a general state of the qubit at the preparation time. Assuming that the system is ergodic, the response function will not depend on the chosen initial state when $t_{p}\to-\infty$. For the sake of simplicity, we assume that the qubit starts in a diagonal state (sojourn) $|\eta_{p}\rangle$, with $\eta_{p}=1$, of the Pauli matrix $\sigma_{x}$ that couples to the bath coordinates. It is easy to demonstrate that in the case of XX coupling the effect of the two baths on the qubit evolution is expressed by the influence functional $\mathcal{F}[\sigma,\sigma^{\prime};t_{0}]=\exp\\{-\int_{t_{0}}^{t}dt^{\prime}\int_{t_{0}}^{t^{\prime}}dt^{\prime\prime}\sum_{\alpha}\Big{[}\dot{\xi}(t^{\prime})\\\ {\rm Re}\left[Q_{\alpha}(t^{\prime}-t^{\prime\prime})\right]\dot{\xi}(t^{\prime\prime})+i\dot{\xi}(t^{\prime}){\rm Im}\left[Q_{\alpha}(t^{\prime}-t^{\prime\prime})\right]\dot{\eta}(t^{\prime\prime})\Big{]}\bigg{\\}}$ (166) where $Q_{\alpha}(t)={\rm Re}[Q_{\alpha}(t)]+i{\rm Im}[Q_{\alpha}(t)]$ is the complex bath-$\alpha$ correlation function $Q_{\alpha}(t)=\int_{0}^{\infty}\frac{d\omega}{\pi\hbar}\frac{2\Gamma_{\alpha}(\hbar\omega)}{\omega^{2}}\big{[}\coth\left(\frac{\hbar\omega}{2k_{\text{B}}T_{\alpha}}\right)\\\ (1-\cos(\omega t))+i\sin(\omega t)\big{]}\,.$ (167) The dynamical susceptibility, Eq.
(164) is expressed as follows $\chi(t)=\frac{i}{\hbar}\Theta(t)\lim_{t_{p}\to-\infty}\,\sum_{\eta=\pm 1}\sum_{\xi_{0}=\pm 1}\eta\xi_{0}J(\eta,t;\xi_{0},0;\eta_{p},t_{p})\,,$ (168) where $J(\eta,t;\xi_{0},0;\eta_{p},t_{p})$ is the conditional propagating function to find the qubit in the diagonal (sojourn) state $\eta=\pm 1$ at time $t$, conditioned on having measured the system in the off-diagonal (blip) state $\xi_{0}$ at time $t=0$ and having prepared it in the state $\eta_{p}$ at time $t_{p}$. We find $\displaystyle J(\eta,t;\xi_{0},0;\eta_{p},t_{p})$ $\displaystyle=$ $\displaystyle\eta\eta_{p}\sum_{m,n=1}^{\infty}\big{(}-\frac{\Delta^{2}}{4\hbar^{2}}\big{)}^{m+n-1}\int_{t_{p}}^{t}\mathcal{D}_{2m-1,2n-1}\\{t_{j}\\}\sum_{\\{\xi_{j}=\pm 1\\}^{\prime}}\,G_{n+m-1}^{L}G_{n+m-1}^{R}\,\sum_{\\{\eta=\pm 1\\}^{\prime}}H_{n+m-1}^{L}H_{n+m-1}^{R}$ (169) where the integration paths consist of $2n-1$ transitions for $t_{p}<t^{\prime}<0$ and $2m-1$ transitions for $0<t^{\prime}<t$, and we introduced the compact notation $\int_{t_{p}}^{t}\mathcal{D}_{k,l}\\{t_{j}\\}\times\cdots\equiv\int_{0}^{t}dt_{l+k}....\int_{0}^{t_{l+2}}dt_{l+1}\int_{t_{p}}^{0}dt_{l}..\int_{t_{p}}^{t_{2}}dt_{1}\times\cdots$. The symbol $\\{\\}^{\prime}$ indicates that the sum is over all sequences of blips and sojourns in accordance with the constraints indicated in the argument.
The blip-sojourn interactions enter the $H_{i}$'s, whereas the $G_{j}$'s include the blip-blip interactions; they are given by $\displaystyle H_{n+m-1}^{\alpha}$ $\displaystyle=$ $\displaystyle\exp{i\sum_{k=0}^{m+n-2}\sum_{j=0}^{m+n-1}\xi_{j}X_{j,k}^{\alpha}\eta_{k}}$ (170) $G_{n+m-1}^{\alpha}=\exp{-\sum_{j=1}^{n+m}{\rm Re}\left[Q^{\alpha}_{2j,2j-1}\right]}\\\ \exp{-i\sum_{j=2}^{m+n}\sum_{k=1}^{j-1}\xi_{j}\xi_{k}\Lambda_{j,k}^{\alpha}}$ (171) $\displaystyle X_{j,k}^{\alpha}$ $\displaystyle=$ $\displaystyle{\rm Im}\left[Q^{\alpha}_{2j,2k+1}+Q^{\alpha}_{2j-1,2k}-Q^{\alpha}_{2j,2k}-Q^{\alpha}_{2j-1,2k+1}\right],$ $\displaystyle\Lambda_{j,k}^{\alpha}$ $\displaystyle=$ $\displaystyle{\rm Re}\left[Q^{\alpha}_{2j,2k-1}+Q^{\alpha}_{2j-1,2k}-Q^{\alpha}_{2j,2k}-Q^{\alpha}_{2j-1,2k-1}\right].$ Inserting the conditional propagating function Eq. (169) in the susceptibility Eq. (168), it is possible to perform the sum over the sojourns, leading to $\chi(t)=\frac{2}{\hbar}\lim_{t_{p}\to-\infty}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\big{(}-\frac{\Delta^{2}}{2\hbar^{2}}\big{)}^{m+n-1}\int_{t_{p}}^{t}\mathcal{D}_{2m-1,2n-1}\\{t_{j}\\}\sum_{\\{\xi_{j}=\pm 1\\}}\xi_{n}G_{n+m-1}^{L}G_{n+m-1}^{R}\\\ \sin(\phi_{0,n+m-1}^{L}+\phi_{0,n+m-1}^{R})\Pi_{k=1}^{m+n-2}\cos(\phi_{k,n+m-1}^{L}+\phi_{k,n+m-1}^{R})$ (173) where $\phi_{k,m}^{\alpha}=\sum_{j=k+1}^{m}\xi_{j}X_{j,k}^{\alpha}\,.$ (174) Eq. (173) is the formal exact expression for the susceptibility of a qubit simultaneously coupled to two harmonic baths at different temperatures, for general spectral densities and temperatures. #### H.4.1 Ohmic baths and the case $K_{\text{L}}+K_{\text{R}}=1/2$ We now specialize to the case of two baths with Ohmic damping defined in Eq. (12), where we assume identical dependence on the energies included in $J(\epsilon)$.
The bath correlation functions take the form $\displaystyle Q_{\alpha}(t)$ $\displaystyle=$ $\displaystyle 2K_{\alpha}\ln{\Big{\\{}\Big{(}\frac{\epsilon_{\rm C}}{\pi k_{\text{B}}T_{\alpha}}\Big{)}\sinh\Big{(}\frac{\pi k_{\text{B}}T_{\alpha}|t|}{\hbar}\Big{)}\Big{\\}}}$ (175) $\displaystyle+$ $\displaystyle i\pi K_{\alpha}{\rm sgn}(t)\,.$ The blip-sojourn interactions and the phases $\phi_{k,m}^{\alpha}$, Eq. (174), simplify, taking the form $\displaystyle X_{j,k}^{\alpha}=\pi K_{\alpha}\;,\;{\rm for}\,j=k+1\qquad X_{j,k}^{\alpha}=0\;,\;{\rm for}\,j\neq k+1$ $\displaystyle\phi_{k,n+m}^{\alpha}=\xi_{k+1}\pi K_{\alpha}\,.$ (176) The susceptibility Eq. (173) becomes $\chi(t)=\frac{2}{\hbar}\lim_{t_{p}\to-\infty}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\big{(}-\frac{\Delta^{2}}{2\hbar^{2}}\big{)}^{m+n-1}\int_{t_{p}}^{t}\mathcal{D}_{2m-1,2n-1}\\{t_{j}\\}\sum_{\\{\xi_{j}=\pm 1\\}}\xi_{1}\xi_{n}G_{n+m-1}^{L}G_{n+m-1}^{R}\\\ \sin(\pi(K_{\rm L}+K_{\rm R}))\cos(\pi(K_{\rm L}+K_{\rm R}))^{n+m-2}.$ (177) We observe that the dependence on the damping strengths $K_{\alpha}$ coming from the blip-sojourn interactions $X_{i,j}$ is of the simple form $K_{\rm L}+K_{\rm R}$. Thus two Ohmic baths coupled to the qubit with strengths such that $K_{\rm L}+K_{\rm R}=1/2$ can be treated analogously to the standard spin-boson model at the Toulouse point. We remark that in Eq. (177) the coupling strengths enter the blip-blip interactions $G_{n+m-1}^{L}G_{n+m-1}^{R}$ nonlinearly; these include the temperatures of the two baths. Therefore, the two baths at $K_{\rm L}+K_{\rm R}=1/2$ are not simply equivalent to a single bath at $K=1/2$ with an effective temperature. We proceed with the evaluation of Eq. (177) for $K_{\rm L}+K_{\rm R}=1/2$. We observe that all the terms in the sum, except for the first one $m=n=1$, have $n+m-2$ zeros from $\cos(\pi(K_{\rm L}+K_{\rm R}))^{n+m-2}$.
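The two limits of the Ohmic correlation function are easy to check numerically. The sketch below (illustrative; units $\hbar=k_{\rm B}=1$) evaluates ${\rm Re}\,Q_{\alpha}(t)=2K_{\alpha}\ln[(\epsilon_{\rm C}/\pi k_{\rm B}T_{\alpha})\sinh(\pi k_{\rm B}T_{\alpha}|t|/\hbar)]$ and verifies the bare short-time logarithm and the long-time linear growth with slope $2\pi K_{\alpha}k_{\rm B}T_{\alpha}/\hbar$, which is the source of the thermal decay of the blip-blip factors $G^{\alpha}$:

```python
import numpy as np

hbar = 1.0                  # units: hbar = kB = 1
K, eC, T = 0.25, 50.0, 0.5  # coupling strength, cutoff, bath temperature (illustrative)

def ReQ(t):
    """Real part of the Ohmic bath correlation function, Eq. (175)."""
    return 2 * K * np.log((eC / (np.pi * T)) * np.sinh(np.pi * T * abs(t) / hbar))

# Short times (t << hbar / pi kB T): the bare Ohmic logarithm 2K ln(eC t / hbar)
assert abs(ReQ(1e-4) - 2 * K * np.log(eC * 1e-4 / hbar)) < 1e-6
# Long times: linear growth with slope 2 pi K kB T / hbar (thermal decoherence rate)
slope = (ReQ(21.0) - ReQ(20.0)) / 1.0
assert abs(slope - 2 * np.pi * K * T / hbar) < 1e-6
```

At $K_{\rm L}+K_{\rm R}=1/2$ the combined factor $e^{-\sum_{\alpha}{\rm Re}\,Q_{\alpha}(\tau)}$ then decays as $\tau^{-1}$ at short times, which is what makes the collapsed-dipole integral of Eq. (178) below finite.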
They give a non-vanishing contribution if a proper divergence comes from the interaction terms between the system’s transitions included in $G_{n+m-1}^{L}G_{n+m-1}^{R}$. This is the typical case of a bath at $K=1/2$. In the case of two baths with $K_{\rm L}+K_{\rm R}=1/2$ we have $\lim_{K_{\rm L}+K_{\rm R}\to 1/2}\Delta^{2}\cos(\pi(K_{\rm L}+K_{\rm R}))\\\ \times\int_{0}^{\infty}d\tau e^{-\lambda\tau}e^{-\sum_{\alpha}{\rm Re}\left[Q_{\alpha}(\tau)\right]}=\frac{\pi}{2}\frac{\Delta^{2}}{\epsilon_{\rm C}}\equiv\hbar\gamma\,.$ (178) Such an integral describes a collapsed dipole which does not interact with any other dipole, having effectively a zero dipole moment. This mechanism allows one to sum the different terms of the series in Eq. (177), leading to $\chi(t)=\frac{4}{\hbar^{3}}\frac{\Delta^{2}}{2\gamma}\Theta(t)\,e^{-\gamma t/2}\int_{0}^{\infty}d\tau\\\ e^{-\sum_{\alpha}{\rm Re}[Q_{\alpha}(\tau)]}[e^{-\gamma|t-\tau|/2}-e^{-\gamma(t+\tau)/2}]\,.$ (179) This solution extends to the non-equilibrium case the dynamical susceptibility at the Toulouse limit of the spin-boson model. Performing its Fourier transform and inserting it in the transmission function Eq. (45), the heat current between two harmonic baths under the strong-coupling condition $K_{\rm L}+K_{\rm R}=1/2$ is obtained. Equivalently, the heat current Eq. (16) with Eq. (45) and the Fourier transform of Eq. (179) can be written as $I=\frac{1}{\hbar}\frac{K_{\text{L}}K_{\text{R}}}{K_{\text{L}}+K_{\text{R}}}{\rm Im}\int_{-\infty}^{+\infty}dt\,\chi(t)F(-t),$ (180) where $F(-t)=(k_{\rm B}T_{\text{R}})^{3}\psi^{(2)}\left(1+\frac{k_{\rm B}T_{\text{R}}}{\epsilon_{\text{C}}}\left(1-i\frac{\epsilon_{\text{C}}\,t}{\hbar}\right)\right)\\\ -(k_{\rm B}T_{\text{L}})^{3}\psi^{(2)}\left(1+\frac{k_{\rm B}T_{\text{L}}}{\epsilon_{\text{C}}}\left(1-i\frac{\epsilon_{\text{C}}\,t}{\hbar}\right)\right)$ (181) and $\psi^{(2)}(z)$ denotes the second derivative of the digamma function.
## Appendix I Green’s functions for the non-linear resonator We first define a generic retarded Green’s function as $G_{A,B,C,...;b}^{r}(t,t^{\prime})=-i\theta(t-t^{\prime})\langle[A(t)B(t)C(t)...,b^{\dagger}(t^{\prime})]\rangle,$ (182) where $A$, $B$ and $C$ are operators such as $b$, $n$ or $b_{\alpha k}$, with $n(t)=b^{\dagger}(t)b(t)$ the number operator. In the subscript ($A,B,C,...;b$), the operators appearing before the semicolon are taken at time $t$, while the ones appearing after the semicolon are taken at time $t^{\prime}$. The Hamiltonian for the non-linear resonator is given by Eq. (3). The equation of motion for the retarded Green’s function of the system can be written as $i\partial_{t}G_{b;b}^{r}(t,t^{\prime})=\delta(t-t^{\prime})+\Delta G_{b;b}^{r}(t,t^{\prime})\\\ +UG_{n,b;b}^{r}(t,t^{\prime})+\sum_{k,\alpha}V_{\alpha k}G_{\alpha k;b}^{r}(t,t^{\prime})$ (183) where $G_{n,b;b}^{r}(t,t^{\prime})=-i\theta(t-t^{\prime})\langle[n(t)b(t),b^{\dagger}(t^{\prime})]\rangle,$ (184) and $G_{\alpha k;b}^{r}(t,t^{\prime})=-i\theta(t-t^{\prime})\left\langle\left[b_{\alpha k}(t),b^{\dagger}(t^{\prime})\right]\right\rangle.$ (185) Using the equation of motion for $G_{\alpha k;b}^{r}(t,t^{\prime})$ we obtain $\left(\epsilon-\Delta-\Sigma^{(0)}(\epsilon)\right)G_{b;b}^{r}(\epsilon)=1+UG_{n,b;b}^{r}(\epsilon),$ (186) where $\Sigma^{(0)}(\epsilon)$ is the usual self-energy due to system-bath coupling, defined as $\Sigma^{(0)}(\epsilon)=\sum_{\alpha}\Sigma_{\alpha}^{(0)}(\epsilon)=\sum_{k,\alpha}|V_{\alpha k}|^{2}g^{r}_{\alpha k;\alpha k}(\epsilon),$ (187) where $g^{r}_{\alpha k;\alpha k}$ is the retarded Green’s function for the free bath, i.e. $g^{r}_{\alpha k;\alpha k}(t,t^{\prime})=-i\theta(t-t^{\prime})\left\langle\left[b_{\alpha k}(t),b_{\alpha k}^{\dagger}(t^{\prime})\right]\right\rangle.$ (188) In order to evaluate Eq. (183), we need to evaluate $G^{r}_{n,b;b}(\epsilon)$ in terms of $G^{r}_{b;b}(\epsilon)$.
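Self-energies such as $\Sigma^{(0)}(\epsilon)$ are evaluated with the $i\eta$ prescription, whose $\eta\to 0^{+}$ limit is the principal-value decomposition stated in Eq. (197) at the end of this appendix. A quick numerical check of that decomposition for an illustrative Gaussian test function, using the Faddeeva function for an independent, exact evaluation of the left-hand side:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import wofz

# g(w) = exp(-w^2) is an illustrative test function.  For Im z >= 0 the Faddeeva
# function satisfies wofz(z) = (i/pi) * int dw exp(-w^2)/(z - w), so the eta -> 0+
# limit of int dw/(2pi) g(w)/(x - w + i eta) is exactly -(i/2) wofz(x).
g = lambda w: np.exp(-w**2)
x = 0.3

lhs = -0.5j * wofz(x)                                  # exact left-hand side
pv, _ = quad(g, -50.0, 50.0, weight='cauchy', wvar=x)  # P int g(w)/(w - x) dw
rhs = -pv / (2 * np.pi) - 0.5j * g(x)                  # principal value minus (i/2) g(x)
assert abs(lhs - rhs) < 1e-8
```

The imaginary part of the result is $-g(x)/2$, which is what produces the $-\frac{i}{2}\Gamma_{\alpha}(\epsilon)$-type broadening terms in the self-energies below.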
Using the equation of motion we find $\left(\epsilon-\Delta\right)G^{r}_{n,b;b}(\epsilon)=2\left\langle n\right\rangle+UG^{r}_{n,n,b;b}(\epsilon)\\\ +\sum_{\alpha k}V_{\alpha k}\Big{[}2G^{r}_{n,\alpha k;b}(\epsilon)-G^{r}_{b,b,\alpha k^{\dagger};b}(\epsilon)\Big{]}.$ (189) We decouple Eq. (189) to second order by approximating $G^{r}_{n,n,b;b}(\epsilon)=\left\langle n\right\rangle G^{r}_{n,b;b}(\epsilon)$ and we obtain $\left(\epsilon-\Delta-U\left\langle n\right\rangle\right)G^{r}_{n,b;b}(\epsilon)=2\left\langle n\right\rangle\\\ +\sum_{\alpha k}V_{\alpha k}\Big{[}2G^{r}_{n,\alpha k;b}(\epsilon)-G^{r}_{b,b,\alpha k^{\dagger};b}(\epsilon)\Big{]}$ (190) where $G_{n,n,b;b}^{r}(t,t^{\prime})=-i\theta(t-t^{\prime})\langle[n(t)n(t)b(t),b^{\dagger}(t^{\prime})]\rangle.$ (191) We can again use the equation of motion to evaluate $G^{r}_{n,\alpha k;b}(\epsilon)$ and obtain $(\epsilon-\epsilon_{\alpha k})G^{r}_{n,\alpha k;b}(\epsilon)=V_{\alpha k}\Big{(}G^{r}_{n,b;b}-n_{\alpha}(\epsilon_{\alpha k})G^{r}_{b;b}(\epsilon)\\\ +\left\langle b^{\dagger}b_{\alpha k}\right\rangle G^{r}_{\alpha k;b}(\epsilon)\Big{)}+\left\langle b^{\dagger}b_{\alpha k}\right\rangle,$ (192) and $\left(\epsilon+\epsilon_{\alpha k}-2\Delta-2U\left\langle n\right\rangle-U\right)G^{r}_{b,b,\alpha k^{\dagger};b}(\epsilon)=2\left\langle bb_{\alpha k}^{\dagger}\right\rangle\\\ -V_{\alpha k}G^{r}_{n,b;b}(\epsilon)+2V_{\alpha k}n_{\alpha}(\epsilon_{\alpha k})G^{r}_{b;b}(\epsilon).$ (193) We neglect terms involving correlations between the leads and the system, setting $\left\langle b^{\dagger}b_{\alpha k}\right\rangle=\left\langle bb_{\alpha k}^{\dagger}\right\rangle=0$ meir1991 . Substituting Eq. (192) and Eq. (193) into Eq.
(190) we obtain $G^{r}_{n,b;b}=\frac{2A(\epsilon)\left\langle n\right\rangle}{U}-\frac{2\left(\Sigma^{(2)}(\epsilon)+\Sigma^{(3)}(\epsilon)\right)A(\epsilon)}{U}G_{b;b}^{r}$ (194) where $A(\epsilon)/U=\left(\epsilon-\Delta-U\left\langle n\right\rangle-\left(2\Sigma^{(0)}(\epsilon)+\Sigma^{(1)}(\epsilon)\right)\right)^{-1},$ (195) $\Sigma^{(0)}(\epsilon)=\sum_{\alpha k}|V_{\alpha k}|^{2}\left(\epsilon-\epsilon_{\alpha k}+i\eta\right)^{-1}$, $\Sigma^{(1)}(\epsilon)=\sum_{\alpha k}|V_{\alpha k}|^{2}\left(\epsilon+\epsilon_{\alpha k}-2\Delta-2U\left\langle n\right\rangle-U+i\eta\right)^{-1}$, $\Sigma^{(2)}(\epsilon)=\sum_{\alpha k}|V_{\alpha k}|^{2}n_{\alpha}(\epsilon_{\alpha k})\left(\epsilon-\epsilon_{\alpha k}+i\eta\right)^{-1}$ and $\Sigma^{(3)}(\epsilon)=\sum_{\alpha k}|V_{\alpha k}|^{2}n_{\alpha}(\epsilon_{\alpha k})\big{(}\epsilon+\epsilon_{\alpha k}-2\Delta-2U\left\langle n\right\rangle-U\\\ +i\eta\big{)}^{-1}$. Substituting Eq. (194) in Eq. (183), we find the final expression for $G^{r}_{b;b}(\epsilon)$, i.e. 
$G^{r}_{b;b}(\epsilon)=\frac{1+2A(\epsilon)\left\langle n\right\rangle}{\epsilon-\Delta-\Sigma^{(0)}(\epsilon)+2A(\epsilon)\left(\Sigma^{(2)}(\epsilon)+\Sigma^{(3)}(\epsilon)\right)}.$ (196) The self-energies are given by $\displaystyle\Sigma^{(1)}(\epsilon)$ $\displaystyle=$ $\displaystyle\sum_{\alpha}\int\frac{d\omega}{2\pi}\left[\frac{\Gamma_{\alpha}(\omega)}{\epsilon+\omega-2\Delta-2U\left\langle n\right\rangle-U+i\eta}\right],$ $\displaystyle\Sigma^{(2)}(\epsilon)$ $\displaystyle=$ $\displaystyle\sum_{\alpha}\int\frac{d\omega}{2\pi}\left[\frac{n_{\alpha}(\omega)\Gamma_{\alpha}(\omega)}{\epsilon-\omega+i\eta}\right],$ $\displaystyle\Sigma^{(3)}(\epsilon)$ $\displaystyle=$ $\displaystyle\sum_{\alpha}\int\frac{d\omega}{2\pi}\left[\frac{\Gamma_{\alpha}(\omega)n_{\alpha}(\omega)}{\epsilon+\omega-2\Delta-2U\left\langle n\right\rangle-U+i\eta}\right].$ For any function $g$ we can write $\int\frac{d\omega}{2\pi}\frac{g(\omega)}{x-\omega+i\eta}=\mathcal{P}\int\frac{d\omega}{2\pi}\left\\{\frac{g(\omega)}{x-\omega}\right\\}-\frac{i}{2}g(x),$ (197) where the first term is the Cauchy principal value. ## References * (1) F. Giazotto, T. T. Heikkilä, A. Luukanen, A. M. Savin, and J. P. Pekola, Rev. Mod. Phys. 78, 217 (2006). * (2) F. Giazotto and M. J. Martínez-Pérez, Nature 492, 401 (2012). * (3) J. P. Pekola, Nat. Phys. 11, 118 (2015). * (4) A. Ronzani, B. Karimi, J. Senior, Y. -C. Chang, J. T. Peltonen, C. -D. Chen, and J. P. Pekola, Nat. Phys. 14, 991 (2018). * (5) O. Maillet, P. A. Erdman, V. Cavina, B. Bhandari, E. T. Mannila, J. T. Peltonen, A. Mari, F. Taddei, C. Jarzynski, V. Giovannetti, and J. P. Pekola, Phys. Rev. Lett. 122, 150604 (2019). * (6) O. Maillet, D. Subero, J. T. Peltonen, D. S. Golubev, and J. P. Pekola, Nat. Commun. 11, 4326 (2020). * (7) C. Starr, Physics 7, 15 (1936). * (8) M. Terraneo, M. Peyrard, and G. Casati, Phys. Rev. Lett. 88, 094302 (2002). * (9) B. Li, L. Wang, and G. Casati, Phys. Rev. Lett. 93, 184301 (2004). * (10) D. Segal and A.
Nitzan, Phys. Rev. Lett. 94, 034301 (2005). * (11) J. -P. Eckmann, and C. M. -Monasterio, Phys. Rev. Lett. 97, 094301 (2006). * (12) N. Zeng, and J. -S. Wang, Phys. Rev. B 78, 024305 (2008). * (13) T. Ojanen, Phys. Rev. B 80, 180301 (R) (2009). * (14) T. Ruokola, T. Ojanen, and A. -P. Jauho, Phys. Rev. B 79, 144306 (2009). * (15) A. Purkayastha, A. Dhar, and M. Kulkarni, Phys. Rev. A 94, 052134 (2016). * (16) L. -A. Wu, C. X. Yu, and D. Segal, Phys. Rev. E 80, 041103 (2009). * (17) L. -A. Wu, and D. Segal, Phys. Rev. Lett. 102, 095503 (2009). * (18) D. M. -T. Kuo, and Y. Chang, Phys. Rev. B 81, 205321 (2010). * (19) C. R. Otey, W. T. Lau, and S. Fan, Phys. Rev. Lett. 104, 154301 (2010). * (20) L. Zhang, J.-S. Wang, and B. Li, Phys. Rev. B 81, 100301(R) (2010) * (21) Y. Yang, H. Chen, H. Wang, N. Li, and L. Zhang, Phys. Rev. E 98, 042131 (2018) * (22) N. A. Roberts and D. G. Walker, Int. J. Therm. Sci. 50, 648 (2011). * (23) T. Ruokola, and T. Ojanen, Phys. Rev. B 83, 241404(R) (2011). * (24) K. G. S. H. Gunawardana, K. Mullen, J. Hu, Y. P. Chen, and X. Ruan, Phys. Rev. B 85, 245417 (2012). * (25) M. J. Martínez-Pérez, and F. Giazotto, Appl. Phys. Lett. 102, 182602 (2013). * (26) F. Giazotto, and F. S. Bergeret, Appl. Phys. Lett. 103, 242602 (2013). * (27) G. T. Landi, E. Novais, M. J. de Oliveira, and D. Karevski, Phys. Rev. E. 90, 042142 (2014). * (28) Y. -Y. Liu, W. -X. Zhou, L. -M. Tang, K. -Q. Chen, Appl. Phys. Lett. 105, 203111 (2014). * (29) J. -H. Jiang, M. Kulkarni, D. Segal, and Y. Imry, Phys. Rev. B 92, 045309 (2015). * (30) R. Sánchez, B. Sothmann, and A. N. Jordan, New J. Phys. 17, 075006 (2015). * (31) K. Joulain, J. Drevillon, Y. Ezzahari, and J. Ordonez-Miranda, Phys. Rev. Lett. 116, 200601 (2016). * (32) B. K. Agarwalla, D. Segal, New J. Phys. 19, 043030 (2017). * (33) A. Marcos-Vicioso, C. López-Jurado, M. Ruiz-Garcia, and R. Sánchez, Phys. Rev. B 98, 035414 (2018). * (34) F. Giazotto, and F. S. Bergeret, Appl. Phys. Lett. 116, 192601 (2020). 
* (35) C. W. Chang, D. Okawa, A. Majumdar, A. Zettl, Science 314, 1121 (2006). * (36) R. Scheibner, M. König, D. Reuter, A. D. Wieck, C. Gould, H. Buhmann, and L. W. Molenkamp, New J. Phys. 10, 083016 (2008). * (37) M. Schmotz, J. Maier, E. Scheer and P. Leiderer, New J. Phys. 13, 113027 (2011). * (38) M. J. Martínez-Pérez, A. Fornieri, and F. Giazotto, Nat. Nanotechnol. 10, 303 (2015). * (39) J. Senior, A. Gubaydullin, B. Karimi, J. T. Peltonen, J. Ankerhold, and J. P. Pekola, Commun. Phys. 3, 40 (2020). * (40) R. Landauer, IBM J. Res. Dev. 1, 223 (1957). * (41) M. Büttiker, Y. Imry, R. Landauer, and S. Pinhas, Phys. Rev. B 31, 6207 (1985). * (42) A. Riera-Campeny, M. Mehboudi, M. Pons, and A. Sanpera, Phys. Rev. E 99, 032126 (2019). * (43) T. Ojanen and A. -P. Jauho, Phys. Rev. Lett. 100, 155902 (2008). * (44) N. Boudjada and D. Segal, J. Phys. Chem. A 118, 11323 (2014). * (45) Jens Koch, Terri M. Yu, Jay Gambetta, A. A. Houck, D. I. Schuster, J. Majer, Alexandre Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Phys. Rev. A 76, 042319 (2007). * (46) Manucharyan, V. E., J. Koch, L. I. Glazman, and M. H. Devoret, Science 326, 113 (2009). * (47) G. Zhu, D. G. Ferguson, V. E. Manucharyan, and J. Koch, Phys. Rev. B 87, 024510 (2013). * (48) Martinis, J. M., S. Nam, J. Aumentado, and C. Urbina, 2002, Phys. Rev. Lett. 89, 117901 (2002). * (49) U. Weiss Quantum Dissipative Systems (World Scientific, 2012). * (50) M. F. Ludovico, M. Moskalets, D. Sánchez, and L. Arrachea, Phys. Rev. B 94, 035436 (2016). * (51) Y. Meir and N. S. Wingreen, Phys. Rev. Lett. 68, 2512 (1992). * (52) G. Stefanucci and R. van Leeuwen, Nonequilibrium many-body theory of quantum systems. (Cambridge University Press, 2013). * (53) K. A. Velizhanin, M. Thoss, and H. Wang, J. Chem. Phys. 133, 084503 (2010). * (54) J. S. Wang, J. Wang, and N. Zeng, Phys. Rev. B 74, 033408 (2006). * (55) K. Saito, Europhys. Lett. 83, 50006 (2008). * (56) D. Segal, Phys. Rev. E 90, 012148 (2014). * (57) H. 
Haug and A.P. Jauho, Quantum Kinetics in Transport and Optics f Semiconductors. (Springer-Verlag Berlin Heidelberg, 2008). * (58) H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems. (Oxford University Press, Oxford, 2002). * (59) D. Boese and R. Fazio, Europhys. Lett. 56, 576 (2001). * (60) In order to avoid clutter in the notation, in this section we use the shorthand symbol $\Upsilon^{\pm}_{\alpha}(T)$ instead of $\Upsilon^{\pm}_{\alpha}(\Delta,T)$. * (61) M. Turek and K. A. Matveev, Phys. Rev. B 65, 115332 (2002). * (62) K. Kaasbjerg and A.-P. Jauho, Phys Rev. Lett 116, 196801 (2016) * (63) B. Bhandari, G. Chiriacò, P. A. Erdman, R. Fazio, and F. Taddei, Phys. Rev. B 98, 035415 (2018). * (64) P. A. Erdman, J. T. Peltonen, B. Bhandari, B. Dutta, H. Courtois, R. Fazio, F. Taddei, and J. P. Pekola, Phys. Rev. B 99, 165405 (2019). * (65) P. Schad, Y. Makhlin, B. Narozhny, G. Schön, and A. Shnirman, Ann. Phys. (N. Y.) 361, 401 (2015). * (66) P. Schad, A. Shnirman, and Y. Makhlin, Phys. Rev. B 93, 174420 (2016). * (67) Junjie Liu, Hui Xu, Baowen Li, and Changqin Wu, Phys. Rev. E 96, 012135 (2017). * (68) In order to account for the counter-rotating terms in the coupling Hamiltonian, in the calculation of the Lamb-shift the spectral density $\Gamma_{\alpha}(\epsilon^{\prime})$ must be extended to negative values according to $\Gamma_{\alpha}(-\epsilon^{\prime})=-\Gamma_{\alpha}(\epsilon^{\prime})$. * (69) M. Sassetti and U. Weiss, Phys. Rev. A 41 5383 (1990). * (70) Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. 66, 3048 (1991). * (71) Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. 70, 2601 (1993). * (72) W. Mao, P. Coleman, C. Hooley, and D. Langreth, Phys. Rev. Lett., 91, 207203 (2003). * (73) Alexander Shnirman and Yuriy Makhlin, Phys. Rev. Lett. 91, 207204 (2003).
# Dynamic prediction of time to event with survival curves Jie Zhu, Blanca Gallego Centre for Big Data Research in Health (CBDRH), University of New South Wales, Kensington, 2052, NSW, Australia ###### Abstract With the ever-growing complexity of the primary health care system, proactive patient failure management is an effective way to enhance the availability of health care resources. One key enabler is the dynamic prediction of time-to-event outcomes. Conventional explanatory statistical approaches lack the capability of making precise individual-level predictions, while data-adaptive binary predictors do not provide the nominal survival curves needed for biologically plausible survival analysis. The purpose of this article is to show that the knowledge of explanatory survival analysis can significantly enhance current black-box data-adaptive prediction models. We apply our recently developed counterfactual dynamic survival model (CDSM) to static and longitudinal observational data and demonstrate that the inflection point of its estimated individual survival curves provides a reliable prediction of the patient failure time. ###### keywords: Survival Analysis, Bayesian Machine Learning, Predictive Modelling, Data Mining ## 1 Introduction Time-to-event (TTE) predictions are extensively used by medical statisticians. Traditional logistic regression methods are not suited to including both the event and time aspects as the outcome in the model. Non-parametric models such as the Kaplan-Meier [1] estimator and the semi-parametric Cox proportional hazard model and its extensions [2, 3] face the challenge of adjusting for multiple/time-varying covariates.
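For reference, the Kaplan-Meier estimator mentioned above is just the standard product-limit formula. A minimal sketch (our own illustration, with made-up data, not tied to any data set in this paper):

```python
# Kaplan-Meier product-limit estimator: S(t) = prod_{t_i <= t} (1 - d_i/n_i),
# where d_i is the number of events at time t_i and n_i the number at risk.
import numpy as np

def kaplan_meier(times, events):
    """times: event/censoring times; events: 1 = event, 0 = censored.
    Returns (distinct event times, survival estimate just after each)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_grid = np.unique(times[events == 1])
    S, surv = 1.0, []
    for t in t_grid:
        n_at_risk = np.sum(times >= t)            # still under observation
        d = np.sum((times == t) & (events == 1))  # events at this time
        S *= 1.0 - d / n_at_risk
        surv.append(S)
    return t_grid, np.array(surv)

# Example: 5 subjects, events at t = 2, 3, 5; censored at t = 4, 6.
t_grid, S = kaplan_meier([2, 3, 4, 5, 6], [1, 1, 0, 1, 0])
```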
The recent development of data-adaptive models such as deep neural networks [4] and the Super-Learner [5] enables the efficient estimation of individual survival curves with static and longitudinal data, yet relatively little has been written about the implications of these explanatory techniques in the context of event time prediction. The strengths of explanatory survival analysis have been applied in data-adaptive predictive models to improve the estimation accuracy of survival curves. In DeepHit [6], a rank loss function is designed to evaluate whether the model can order observations by their expected time to fail; in DeepSurv [7], the authors approximate the Cox proportional hazard function using a densely connected neural network; and in WTTE-RNN [8], the predicted event time is assumed to follow a Weibull distribution whose parameters are estimated using a recurrent neural network. These models are in contrast with conventional binary predictors such as the recurrent neural networks proposed in the 2019 PhysioNet Challenge [9], where the prediction of TTE was equated to a longitudinal binary classification problem. In our recently proposed counterfactual dynamic survival model (CDSM), we relaxed major limitations of the three aforementioned models. Specifically, we do not assume Cox proportional hazard ratios or any parametric form in our model. At the same time, we allow longitudinal covariates and quantify the uncertainty of the neural network estimations using Bayesian dense layers. The focus of our previous work was the model development and its application to causal inference. In this study, we focus on the predictive power of CDSM as an outcome model and explore how biologically plausible survival curve estimations can improve TTE predictions. In Section 2, we describe the methodology to estimate the survival outcomes and predict the time to event. Section 3 introduces a set of case studies and model evaluation techniques.
Results are presented in Section 4. We end our study with a discussion. ## 2 Predicting the time to event with survival curves To formalize the framework for causal inference for longitudinal survival outcomes, we follow the notation of a previous study [10]. Suppose we observe a sample $\mathcal{O}\text{ of }n$ independent observations generated from an unknown distribution $\mathcal{P}_{0}$: $\displaystyle\mathcal{O}:=\big{(}X_{i}(t),Y_{i}(t),A_{i}(t),t_{i}=\min(t_{s,i},t_{c,i})\big{)},i~{}=1,2,\ldots,n$ where $X_{i}(t)=(X_{i,1}(t),X_{i,2}(t),\ldots,X_{i,\textit{d}}(t)),d=1,2,\ldots,D$ are baseline covariates at time $t$, $\;t=1,2,\dots,\Theta,$ with $\Theta$ being the maximum follow-up time of the study; $A_{i}(t)$ is the treatment condition at time $t$, $A_{i}(t)=1$ if observation $i$ receives the treatment and $A_{i}(t)=0$ if it is under the control condition; $Y_{i}(t)$ denotes the outcome at time $t$, $Y_{i}=1$ if $i$ experienced an event and $Y_{i}=0$ otherwise; $t_{i}$ is determined by the event or censoring time, $t_{s,i}$ or $t_{c,i}$, whichever happens first. For each individual $i$, we define the hazard rate $h(t)$, the probability of experiencing an event in the interval $(t-1,t]$, as: $\displaystyle h(t)\;:=\;Pr(Y(t)=1\;|\;\overline{A}(t,u),\overline{X}(t,u)),$ (1) where $\bar{A}$ and $\bar{X}$ are the history of treatments and covariates from $t-u$ to $t-1$, with $u$ being the length of the observation history.
Thus, the probability that an uncensored individual will experience the event at time $t$ can be written as a product of terms, one per period, describing the conditional probabilities that the event did not occur from time $0$ to $t-1$ but did occur in the period $(t-1,t]$: $\displaystyle\begin{split}Pr(t_{s,i}=t)&=h(t)(1-h(t-1))(1-h(t-2))\cdots(1-h(0))\\ &=h(t)\prod_{j=0}^{t-1}(1-h(j)).\end{split}$ Similarly, the probability that a censored individual will experience an event after time $t$ can be written as a product of terms describing the conditional probabilities that the event did not occur in any observed period: $\displaystyle\begin{split}S(t)&=Pr(t_{s,i}>t)\\ &=(1-h(t))(1-h(t-1))(1-h(t-2))\cdots(1-h(0))\\ &=\prod_{j=0}^{t}(1-h(j)),\end{split}$ (2) which is the population survival function. The outcome label $Y^{M}$ for individual $i$ is defined as a matrix over each time period $t\in[1,2,\ldots,\Theta]$: $\displaystyle\begin{split}Y^{M}_{i}&=\begin{bmatrix}E_{i}\\ C_{i}\end{bmatrix},\\ \text{where }\\ E_{i}&=(e_{i}(t=1),e_{i}(t=2),\dots,e_{i}(t=t_{i}),\ldots,e_{i}(t=\Theta))\\ e_{i}(\cdot)&=1\text{ for }t<t_{i}\text{ if }i\text{ is censored or having an event at }t_{i}\\ e_{i}(\cdot)&=0\text{ for }t\geq t_{i};\\ C_{i}&=(c_{i}(t=1),c_{i}(t=2),\ldots,c_{i}(t=t_{i}),\ldots,c_{i}(t=\Theta))\\ c_{i}(\cdot)&=0\text{ if }i\text{ is censored at }t_{i};\\ c_{i}(\cdot)&=0\text{ and }c_{i}(t_{i})=1\text{ if }i\text{ is having an event at }t_{i}.\end{split}$ (3) The output of our model is a $\Theta$-dimensional vector, $\hat{H}_{i,\Theta}$, whose elements represent the predicted conditional probabilities of surviving each time interval, i.e. $\{1-\hat{h}_{i}(j)$, for $j\in[0,1,2,\ldots,\Theta]\}$. The estimated survival curve is then given by $\hat{Y}_{i}(t)=\prod_{j=0}^{t}(1-\hat{h}_{i}(j))$. Figure 1: Prediction of event time based on hazard curves.
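The product in Eq. (2) is straightforward to compute from a vector of predicted conditional hazards. A minimal sketch (our own illustration, not the paper's code; the hazard values are made up):

```python
# Turn a sequence of conditional hazards h(j) into a survival curve via
# Eq. (2): S(t) = prod_{j <= t} (1 - h(j)).
import numpy as np

def survival_curve(hazards):
    """Cumulative product of (1 - h(j)) over the time grid."""
    return np.cumprod(1.0 - np.asarray(hazards, dtype=float))

# Illustrative hazards rising over five periods.
hazards = [0.01, 0.02, 0.05, 0.20, 0.40]
S = survival_curve(hazards)
# S is non-increasing, with S[0] = 1 - h(0) = 0.99.
```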
Conventional predictive models fit the multivariate logistic outcome $\hat{Y}_{1\theta}$ in Equation (3) using binary classifiers, where researchers have to set an optimal probability threshold to classify whether an event will occur (see the hazard threshold in Figure 1). For instance, one can use the Nelder-Mead method to locate the optimal probability threshold by minimizing the distance between the actual and predicted event times, yet this in-sample threshold might not be optimal for predicting the TTE on a new cohort. This study attempts to learn from the biological survival curve and uses the inflection point, $\hat{T}$, of the survival curve in Equation (2) to signify the event time, which we define as the time point equating the second derivative of the estimated survival curve $\hat{S}$ to zero: $\displaystyle\hat{T}=t\text{ where }\frac{\partial^{2}\hat{S}}{\partial t^{2}}=0$ (4) In Figure 1, we can see that the hazard rate increases rapidly after $\hat{T}$, which implies a high probability of experiencing an event. The uncertainty of the estimated survival curve quantifies the uncertainty of the predicted event time. ## 3 Study design and databases We built and then validated a survival outcome model based on the retrospective analysis of three static databases and three dynamic longitudinal databases. A summary of these data sets is presented in Table 1.

Table 1: Summary of the clinical data sets.

| Database | Sample | Covariates | Unique Time Points | % Censored |
|---|---|---|---|---|
| SUPPORT | 8873 | 14 | 1714 | 32% |
| METABRIC | 1904 | 9 | 1686 | 42% |
| GBSG | 2232 | 7 | 1230 | 43% |
| PhysioNet | 40336 | 40 | 20*2 hours | 93% |
| MIMIC-III | 20938 | 44 | 20*2 hours | 86% |
| CPRD AF | 18102 | 53 | 20*3 months | 82% |

The static data sets were provided by the DeepSurv python package [7] and include: 1. The Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT); 2.
The Molecular Taxonomy of Breast Cancer International Consortium (METABRIC); and 3. The Rotterdam tumor bank and German Breast Cancer Study Group (GBSG). The longitudinal data sets are: 1. The Medical Information Mart for Intensive Care version III (MIMIC-III), an open-access, anonymized database of 61,532 admissions from 2001–2012 in six ICUs at a Boston teaching hospital [11]. 2. The 2019 PhysioNet Sepsis prediction challenge data set [9] (PhysioNet), containing more than 3.3 million admissions from 2003–2016 in 459 ICUs across the United States. 3. The Clinical Practice Research Datalink data set [12] (CPRD AF), comparing Vitamin K Antagonists (VKAs) and Non-Vitamin K antagonist oral anticoagulants (NOACs) in preventing three combined outcomes (ischemic attack, major bleeding and death) in patients with non-valvular atrial fibrillation (AF). In both MIMIC-III and PhysioNet, we define a Sepsis event as a suspected infection (prescription of antibiotics and sampling of bodily fluids for microbiological culture) combined with evidence of organ dysfunction, defined by a two-point deterioration of the SOFA score [13]. We follow previous papers [9, 14] for data extraction and processing. For PhysioNet, we combined data from hospitals A and B, and used the hospital location (A or B) as the synthetic treatment condition. For MIMIC-III, we define the treatment as the usage of mechanical ventilation (MV). For CPRD AF, the outcome of interest is the first occurrence of the combined outcomes of major bleeding, death and stroke. The treatment is the usage of NOACs versus the control of using VKAs. For the static data sets, we discretized the time points into windows of 50 time steps and censored all steps that do not form a complete window (i.e., $\lfloor 1714/50\rfloor=34$ windows for the SUPPORT data set, $\lfloor 1686/50\rfloor=33$ windows for METABRIC and $\lfloor 1230/50\rfloor=24$ windows for GBSG).
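The windowing step above amounts to a floor division, with the trailing partial window censored. A small sketch under our reading of the text (the function name is ours):

```python
# Number of complete 50-step windows for each static data set; the time
# steps left over after the last complete window are censored (dropped).
def n_complete_windows(n_time_points, window=50):
    return n_time_points // window

print(n_complete_windows(1714))  # SUPPORT  -> 34
print(n_complete_windows(1686))  # METABRIC -> 33
print(n_complete_windows(1230))  # GBSG     -> 24
```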
For the longitudinal data sets, we considered the first 20 time stamps for each patient (i.e., the first twenty 2-hour intervals for PhysioNet and MIMIC-III, and the first 20 months for AF). We split each database into an estimation data set (70% of the original data for training and 10% for validation) and a testing data set (20% of the original). ### 3.1 Model evaluation We evaluated the estimations of survival curves and the predictions of event time using the three metrics described below: The area under the receiver operating characteristic curve (AUROC) and C-Index: we use the AUROC and Harrell's C-index [15] to evaluate the models' discrimination performance. Both indicators are calculated using the multivariate logistic outcome $\hat{Y}_{1\theta}$ in Equation (3). Utility distance (Distance Score): we define the distance metric to evaluate the predicted event time as: $\displaystyle\text{Distance}=\frac{1}{n}\sum_{i}|\hat{T}_{i}-T_{i}|$ where $\hat{T}$ is defined in Equation (4) and $T$ is the true event/censoring time. We compared the following algorithms on the estimation of survival curves and the prediction of event time: 1. Dynamic Bayesian survival causal model (CDSM): the model targets the outcome defined in Equation (3) by training two counterfactual sub-networks for treated and controlled observations. If no treatment variable is defined, we create two copies of the original data set, with the first one marked as receiving the treatment and the second one as under control. The loss function of CDSM has three components: 1) the partial log-likelihood loss of the joint distribution of the first hitting time and the corresponding event or right-censoring; 2) the rank loss function to capture the concordance score defined in survival analysis; 3) the calibration loss function to minimize the selection bias in treatment assignment. Please refer to our previous paper for details. 2.
Plain recurrent neural network with survival outcomes (RNN): the model modifies CDSM by removing the counterfactual sub-networks and the third loss function of CDSM. No treatment variable has to be specified in this model. 3. Plain recurrent neural network with binary outcomes (RNN Binary): the model provides a direct prediction of the longitudinal outcome $\hat{Y}_{1\theta}$ in Equation (3) using the mean squared error loss function. 4. DeepHit [16]: the model uses the same loss functions as the RNN but does not capture the history of covariates and is only evaluated on the static databases. The model construction and training use Python 3.8.0 with Tensorflow 2.3.0 and Tensorflow-Probability 0.11.0 [17] (code available at https://github.com/EliotZhu/CDSM). ## 4 Results In Table 2, we confirm that CDSM, RNN and DeepHit had similar performance on the estimation of survival curves (see the concordance index) and the prediction of event time (see the distance score) in the three static testing data sets. However, in terms of the AUROC, we noticed that RNN Binary had superior performance to the others, although it had a lower C-Index. The counterfactual sub-networks and the selection-bias calibration loss function in CDSM did not affect the estimation accuracy, resulting in the equivalence among CDSM, RNN and DeepHit in the static non-causal survival estimations.

Table 2: Model performance on static datasets

| Dataset | Metric | CDSM | RNN | RNN Binary | DeepHit |
|---|---|---|---|---|---|
| METABRIC | AUROC | 0.869 | 0.877 | 0.885 | 0.874 |
| METABRIC | C-Index | 0.685 | 0.655 | 0.590 | 0.683 |
| METABRIC | Distance Score | 4.186 | 4.034 | 3.944 | 4.097 |
| GBSG | AUROC | 0.781 | 0.780 | 0.817 | 0.798 |
| GBSG | C-Index | 0.617 | 0.593 | 0.559 | 0.613 |
| GBSG | Distance Score | 4.384 | 4.974 | 4.464 | 4.668 |
| SUPPORT | AUROC | 0.792 | 0.788 | 0.802 | 0.820 |
| SUPPORT | C-Index | 0.653 | 0.650 | 0.550 | 0.633 |
| SUPPORT | Distance Score | 4.434 | 5.474 | 5.672 | 4.231 |
All metrics are averaged over estimation windows using the testing data sets. The best value in each metric is in bold. A similar trend was observed when we evaluated CDSM, RNN, and RNN Binary on the longitudinal databases (see the estimation data set evaluations in Table 3). However, on the corresponding testing data sets, CDSM significantly outperformed RNN Binary, especially on the C-Index and distance score. The imposition of the survival outcome in Equation (3) and the concordance loss functions defined in CDSM/RNN produced nominal survival curves, whereas RNN Binary only maximized the discrimination performance on the binary indicator of whether sepsis has occurred (i.e., the estimated survival probabilities for the AF testing data set were stacked at zeros and ones, as shown in Figure 2 (a)). All metrics in Table 3 are averaged over 20 estimation windows using either the estimation (the default) or testing data sets (specified in brackets). The best value in each metric is in bold.
Table 3: Model performance on dynamic datasets. For each database (PhysioNet and MIMIC-III in hours, AF in months), columns give CDSM / RNN / RNN Binary.

| Metric | PhysioNet CDSM | PhysioNet RNN | PhysioNet RNN Binary | MIMIC-III CDSM | MIMIC-III RNN | MIMIC-III RNN Binary | AF CDSM | AF RNN | AF RNN Binary |
|---|---|---|---|---|---|---|---|---|---|
| AUROC | 0.980 | 0.984 | 0.997 | 0.959 | 0.953 | 0.988 | 0.978 | 0.986 | 0.995 |
| AUROC (test) | 0.869 | 0.858 | 0.824 | 0.969 | 0.941 | 0.983 | 0.984 | 0.983 | 0.933 |
| C-Index | 0.991 | 0.985 | 0.992 | 0.749 | 0.851 | 0.823 | 0.871 | 0.880 | 0.885 |
| C-Index (test) | 0.874 | 0.837 | 0.776 | 0.751 | 0.653 | 0.682 | 0.877 | 0.863 | 0.785 |
| Distance Score | 3.388 | 4.034 | 2.017 | 2.230 | 2.410 | 2.191 | 1.331 | 1.082 | 0.751 |
| Distance Score (test) | 3.047 | 3.635 | 3.767 | 2.291 | 2.375 | 2.339 | 1.116 | 1.035 | 1.240 |
| Score Std | 11.586 | 7.584 | 0.248 | 1.793 | 0.498 | 0.041 | 3.294 | 1.753 | 0.009 |
| Score Std (test) | 11.522 | 6.813 | 0.080 | 1.743 | 0.557 | 0.038 | 3.118 | 1.469 | 0.011 |

Figure 2: Diagnostic plots for event time prediction with the AF testing data set: (a) distribution of the estimated survival probabilities across all time points by benchmark algorithm; (b) average difference between the predicted and true event/censoring time estimated using the probability threshold approach; and (c) scatter plot of predicted versus true event/censoring time estimated using the inflection point approach. The nominal survival curves of CDSM make it possible to apply Equation (4) to locate the inflection point as the event time. This is a better approach than choosing a probability threshold to construct a binary classifier. In Figure 2 (b), we see that the error of the predicted event time is sensitive to the chosen probability threshold: the average timing difference ranged from -4.3 to 9.5 months over the small threshold range 0.99 to 0.9999.
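In discrete time, the inflection point of Equation (4) can be located with finite differences. A hedged sketch (our own illustration; the function name and the synthetic sigmoid-shaped curve are ours, not the paper's implementation):

```python
# Predict the event time as the inflection point of a discrete survival
# curve: the step where the second finite difference of S is closest to
# zero, approximating d^2 S / dt^2 = 0 in Eq. (4).
import numpy as np

def inflection_time(S):
    S = np.asarray(S, dtype=float)
    d2 = np.diff(S, n=2)               # d2[i] ~ S''(t = i + 1)
    return int(np.argmin(np.abs(d2))) + 1

# Synthetic sigmoid-shaped survival curve with a steep drop around t = 10.
t = np.arange(20)
S = 1.0 / (1.0 + np.exp(t - 10.0))
T_hat = inflection_time(S)             # -> 10
```

On a noisy estimated curve one would typically smooth $\hat{S}$ (or search for the sign change of the second difference) before applying this rule.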
In contrast, after applying the inflection point to determine the event time, we observed that the predicted time accurately tracked the true time in Figure 2 (c), with most predictions occurring ahead of the true AF event time. On average, the predicted time is 1.720 months ahead of the true AF event time and 1.014 months ahead of the true AF censoring time. CDSM allows threshold-free prediction of the individual event time and early intervention on patients who might be prone to event occurrence. ## 5 Discussion This study demonstrated that injecting the knowledge of survival analysis into the design of a recurrent neural network can significantly improve the prediction of time-to-event outcomes. Our proposed outcome model, CDSM, fits the joint distribution of both failed and censored observations. Conventional machine learning algorithms for binary discrimination can maximize evaluation scores such as the AUROC, but fail to provide meaningful survival curves and reliable predictions of event time. The major drawback of these algorithms, as identified by our empirical study, is that they do not take account of censoring and show a significant drop in accuracy when evaluated on the testing database. Among these applications, few have discussed the use of statistical survival analysis to infer the event time. Rather, straightforward binary classification algorithms are used to hard-code the inference questions into binary states at each time stamp. Researchers then apply a probability threshold to determine whether the event will happen at a point in time. ## Acknowledgments This work was supported by the National Health and Medical Research Council, project grant no. 1125414. ## References * [1] E. L. Kaplan, P. Meier, Nonparametric estimation from incomplete observations, Journal of the American Statistical Association 53 (282) (1958) 457–481. * [2] D. R. Cox, Regression models and life tables (with discussion), J. Roy. Statist. Soc. Ser.
B 34 (1972) 187–220. * [3] J. C. Recknor, A. J. Gross, Fitting Survival Data to a Piecewise Linear Hazard Rate in the Presence of Covariates, Biometrical Journal. doi:10.1002/bimj.4710360613. * [4] M. F. Gensheimer, B. Narasimhan, A scalable discrete-time survival model for neural networks, PeerJ 7 (2019) e6257. doi:10.7717/peerj.6257. * [5] M. K. Golmakani, E. C. Polley, Super Learner for Survival Data Prediction, The International Journal of Biostatistics. doi:10.1515/ijb-2019-0065. * [6] C. Lee, J. Yoon, M. van der Schaar, Dynamic-DeepHit: A Deep Learning Approach for Dynamic Survival Analysis With Competing Risks Based on Longitudinal Data, IEEE Transactions on Biomedical Engineering 67 (1) (2020) 122–133. doi:10.1109/tbme.2019.2909027. * [7] J. L. Katzman, U. Shaham, A. Cloninger, J. Bates, T. Jiang, Y. Kluger, DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network, BMC Medical Research Methodology 18 (1) (2018) 24. * [8] E. Martinsson, WTTE-RNN: Weibull time to event recurrent neural network (2016). * [9] M. Reyna, C. Josef, R. Jeter, S. Shashikumar, B. Moody, M. B. Westover, A. Sharma, S. Nemati, G. Clifford, Early Prediction of Sepsis from Clinical Data – the PhysioNet Computing in Cardiology Challenge (2019). doi:10.13026/v64v-d857. * [10] K. Imai, A. Strauss, Estimation of Heterogeneous Treatment Effects from Randomized Experiments, with Application to the Optimal Planning of the Get-Out-the-Vote Campaign, Political Analysis 19 (1) (2011) 1–19. doi:10.1093/pan/mpq035. * [11] A. Johnson, T. Pollard, L. Shen, MIMIC-III, a freely accessible critical care database, Sci Data 3 (2016) 160035. * [12] E. Herrett, A. M. Gallagher, K. Bhaskaran, H. Forbes, R. Mathur, T.
van Staa, L. Smeeth, Data Resource Profile: Clinical Practice Research Datalink (CPRD), International Journal of Epidemiology 44 (3) (2015) 827–836. doi:10.1093/ije/dyv098. * [13] C. W. Seymour, Assessment of clinical criteria for sepsis: For the third international consensus definitions for sepsis and septic shock (sepsis-3), J. Am. Med. Assoc 315 (2016) 762–774. * [14] M. Komorowski, L. A. Celi, O. Badawi, A. C. Gordon, A. A. Faisal, The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care, Nature Medicine 24 (11) (2018) 1716–1720. doi:10.1038/s41591-018-0213-5. * [15] F. E. Harrell, R. M. Califf, D. B. Pryor, K. L. Lee, R. A. Rosati, Evaluating the yield of medical tests, Journal of the American Medical Association 247 (18) (1982) 2543–2546. * [16] C. Lee, W. R. Zame, J. Yoon, M. van der Schaar, DeepHit: A Deep Learning Approach to Survival Analysis With Competing Risks, in: AAAI, 2018, pp. 2314–2321. * [17] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, et al., TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems (2015).
# A Supernova-driven, Magnetically-collimated Outflow as the Origin of the Galactic Center Radio Bubbles Mengfei Zhang School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210023, China Zhiyuan Li School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210023, China Mark R. Morris Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA ###### Abstract A pair of non-thermal radio bubbles recently discovered in the inner few hundred parsecs of the Galactic center bears a close spatial association with elongated, thermal X-ray features called the X-ray chimneys. While their morphology, position, and orientation vividly point to an outflow from the Galactic center, the physical processes responsible for the outflow remain to be understood. We use three-dimensional magnetohydrodynamic simulations to test the hypothesis that the radio bubbles/X-ray chimneys are the manifestation of an energetic outflow driven by multiple core-collapse supernovae in the nuclear stellar disk, where numerous massive stars are known to be present. Our simulations are run with different combinations of two main parameters: the supernova birth rate and the strength of a global magnetic field oriented vertically with respect to the disk. The simulation results show that a hot gas outflow can naturally form and acquire a vertically elongated shape due to collimation by the magnetic pressure. In particular, the simulation with an initial magnetic field strength of 80 $\mu$G and a supernova rate of 1 kyr$^{-1}$ can well reproduce the observed morphology, internal energy and X-ray luminosity of the bubbles after an evolutionary time of 330 kyr.
On the other hand, a magnetic field strength of 200 $\mu$G gives rise to an overly elongated outflow that is inconsistent with the observed bubbles. The simulations also reveal that, inside the bubbles, mutual collisions between the shock waves of individual supernovae produce dense filaments of locally amplified magnetic field. Such filaments may account for a fraction of the synchrotron-emitting radio filaments known to exist in the Galactic center. Galactic center (565), Superbubbles (1656), Magnetic fields (994), Magnetohydrodynamical simulations (1966) ## 1 Introduction Galactic outflows driven by the energy and momentum of an active galactic nucleus (AGN) and/or supernovae (SNe) are now understood to be an indispensable component of the galactic ecosystem (Fabian, 2012; Heckman & Best, 2014; Heckman & Thompson, 2017; Zhang, 2018). Multi-wavelength observations over the past decades have established an ever-growing inventory of galactic outflows, leading to the recognition that these outflows typically involve multiple scales and multiple phases. However, our physical understanding of galactic outflows, in particular their mass budget, energetics and life cycle, is still far from complete. The Galactic center, loosely defined here as the innermost few hundred parsec region of our Galaxy, provides the closest and perhaps the best laboratory for studying the formation and early evolution of a galactic outflow. Observational evidence has accumulated over recent years for a multi-phase outflow from the Galactic center (Bland-Hawthorn & Cohen, 2003; Law, 2010; Nakashima et al., 2019), collectively known as the Galactic Center Lobe (GCL; Sofue & Handa, 1984), a loop-like feature extending vertically out to $\gtrsim$ 1 degree (at a presumed distance of 8 kpc, $1^{\circ}$ corresponds to 140 pc) north of the disk mid-plane.
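The quoted angular scale is the standard small-angle relation. A quick check (our own arithmetic, using the 8 kpc distance assumed in the text):

```python
# Physical size subtended by an angle at a given distance
# (small-angle approximation: size = distance * angle_in_radians).
import math

def projected_size_pc(distance_pc, angle_deg):
    return distance_pc * math.radians(angle_deg)

size = projected_size_pc(8000.0, 1.0)  # ~139.6 pc, i.e. ~140 pc per degree
```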
Compelling evidence also exists for outflows at still larger (kiloparsec) scales (Su et al., 2010; Carretti et al., 2013; Di Teodoro et al., 2018; di Teodoro et al., 2020; Predehl et al., 2020), but the physical relation between the outflows on different scales, e.g., whether they were produced by the same mechanism, remains an open question. More recently, our view of the Galactic center outflow has been further sharpened. Based on high-resolution radio continuum observations afforded by the MeerKAT radio telescope, Heywood et al. (2019) found evidence for a pair of radio bubbles in the Galactic center, which are roughly symmetric about the disk mid-plane with a width of 140 pc and a full length of 430 pc. The northern bubble is spatially coincident with the GCL, but it is more clearly limb-brightened. In particular, the eastern side of the radio bubbles is delineated by the famous Radio Arc (Yusef-Zadeh et al., 1984) and its northern and southern extensions toward higher latitudes; the western side is also bounded by prominent non-thermal filaments (NTFs; Yusef-Zadeh et al., 1984). Non-thermal emission is predominant in the radio bubbles at the observed frequency of 1284 MHz, although the GCL is known to show substantial thermal emission in different wavebands (Bland-Hawthorn & Cohen, 2003; Law, 2010; Nagoshi et al., 2019). Strikingly, the shells of the radio bubbles delineate the so-called “X-ray chimneys” recently discovered by X-ray observations (Ponti et al., 2019), which are a pair of diffuse, thermal X-ray features extending above and below the mid-plane. This strongly suggests a physical relation between the two features, reminiscent of a collimated hot gas outflow with an expanding shell (Ponti et al., 2021).
Proposed origins for the Galactic center outflow, as well as for the outflows on larger scales (i.e., the Fermi bubbles and the recently discovered eROSITA bubbles; Su et al., 2010; Predehl et al., 2020), fall into two categories (Heywood et al., 2019): (i) past activity of the central super-massive black hole (SMBH), commonly known as Sgr A*, which is currently in a quiescent state (Cheng et al., 2011; Zubovas et al., 2011; Zubovas & Nayakshin, 2012; Zhang & Guo, 2020; Ko et al., 2020); or (ii) episodic or continuous nuclear star formation (Genzel et al., 2010; Lacki, 2014; Crocker et al., 2015). In principle, both processes can drive an energetic outflow and produce the bubble-like structures observed at multiple wavelengths and scales. Therefore, quantitative modeling and close comparison with the observations are crucial to distinguish between the two scenarios. In the literature, there have been a number of numerical simulations of a large-scale outflow from the Galactic center, which focus on the formation of the Fermi bubbles by AGN jets or AGN winds (Guo & Mathews, 2012; Mou et al., 2014, 2015; Cheng et al., 2015; Zhang & Guo, 2020). In addition, Sarkar et al. (2015) and Sarkar et al. (2017) investigated the formation of the Fermi bubbles by simulating a nuclear starburst-driven wind. In this work, we investigate the specific scenario in which the radio bubbles/X-ray chimneys are the manifestation of an outflow driven by sequential SN explosions concentrated in the Galactic center, using three-dimensional magnetohydrodynamic (MHD) simulations; to our knowledge, this is the first attempt of its kind. Recently, Li et al. (2017) and Li & Bryan (2020) have performed advanced numerical simulations to study SN-driven outflows on a similar physical scale, but these simulations were run with physical conditions typical of galactic disks.
The Galactic center, on the other hand, is a unique environment characterized by strong gravity, a concentration of massive stars, and a strong and ordered magnetic field. In particular, the presence of the NTFs, which have a strong tendency to be vertically oriented with respect to the disk, points to a vertical magnetic field in the Galactic center (see the review by Ferrière, 2009). Theoretical studies have demonstrated that a strong external magnetic field can significantly affect the evolution of a supernova remnant (SNR; Insertis & Rees, 1991; Rozyczka & Tenorio-Tagle, 1995; Wu & Zhang, 2019), as the magnetic pressure confines the expansion of the SN ejecta in such a way that they preferentially propagate along the direction of the magnetic field. We are thus motivated to perform numerical simulations to test the scenario of an SN-driven, magnetically-collimated outflow for the radio bubbles/X-ray chimneys. In Section 2, we describe our basic model and the settings of the simulation. In Section 3, we present the simulation results and confront them with the observations. In Section 4, we discuss the implications as well as the limitations of our results. A summary is given in Section 5.

## 2 Simulation

We use the publicly available MHD code PLUTO (http://plutocode.ph.unito.it/; Mignone et al., 2007, 2012) to simulate sequential SN explosions in the Galactic center and the formation of an SN-driven bubble. The global dynamical evolution and fine structures of the bubble necessarily depend on many physical processes and physical quantities of the Galactic center, some of which are not well constrained. Rather than pursuing a full degree of realism or a thorough exploration of the parameter space, our main aim here is to test a simplified but well-motivated model for the bubble formation.
### 2.1 Basic MHD Equations and Magnetic Field Configuration

The simulation is based on a three-dimensional (3D) MHD Cartesian frame with a $512^{3}$ grid, equivalent to a physical volume of $200^{3}\rm~pc^{3}$ and a linear resolution of 0.39 pc. We set the $z$-axis to be perpendicular to the Galactic disk (north as positive), the $y$-axis to be parallel to the line-of-sight (with the observer on the negative side), and the $x$-axis to run along decreasing Galactic longitude. Because the radio bubbles are roughly symmetric about the Galactic plane, we only simulate the $z>0$ volume, sufficient to enclose the northern bubble, which exhibits a size of $\sim$120 pc (width) $\times$ 190 pc (height). We adopt an outflow boundary condition. The simulation is governed by the ideal MHD conservation equations,

$\begin{cases}\dfrac{\partial\rho}{\partial t}+\nabla\cdot(\rho\bm{v})=0,\\ \dfrac{\partial(\rho\bm{v})}{\partial t}+\nabla\cdot\left[\rho\bm{v}\bm{v}-\dfrac{\bm{B}\bm{B}}{4\pi}+\bm{1}\left(p+\dfrac{\bm{B}^{2}}{8\pi}\right)\right]^{T}=-\rho\nabla\Phi,\\ \dfrac{\partial E_{t}}{\partial t}+\nabla\cdot\left[\left(\dfrac{\rho\bm{v}^{2}}{2}+\rho\epsilon+p+\rho\Phi\right)\bm{v}-\dfrac{(\bm{v}\times\bm{B})\times\bm{B}}{4\pi}\right]=-\dfrac{\partial\left(\rho\Phi\right)}{\partial t},\\ \dfrac{\partial\bm{B}}{\partial t}-\nabla\times(\bm{v}\times\bm{B})=0,\end{cases}$ (1)

where $\rho$ is the mass density, $p$ the thermal pressure, $\bm{v}$ the velocity, $\bm{B}$ the magnetic field, $\bm{1}$ the dyadic tensor, $\Phi$ the gravitational potential, and $E_{t}$ the total energy density, defined as:

$E_{t}=\rho\epsilon+\dfrac{\rho\bm{v}^{2}}{2}+\dfrac{\bm{B}^{2}}{8\pi},$ (2)

where $\epsilon$ is the specific internal energy. We use an ideal equation of state, i.e., $\epsilon=p/[\rho(\Gamma-1)]$, with the ratio of specific heats $\Gamma$ = 5/3. As mentioned in Section 1, the orientation of the NTFs indicates that a vertical magnetic field is prevalent in the Galactic center.
We adopt a dipole magnetic field structure generated by a current loop with a diameter of 300 pc, which can be expressed analytically (Simpson et al., 2001). With this large diameter, the magnetic field lines remain approximately vertical to the disk within our simulation volume. There is ample evidence that the Galactic center has an average magnetic field strength substantially higher than that in the disk (Ferrière, 2009). Crocker et al. (2010) derived a lower limit of 50 $\mu$G for the central 400 pc, based on an upper limit on the detected diffuse $\gamma$-ray flux: given the observed radio spectral energy distribution of the Galactic center, a weaker magnetic field would require more relativistic electrons and consequently a higher $\gamma$-ray flux due to inverse Compton emission. In fact, energy equipartition between the magnetic field, the X-ray-emitting hot plasma and the turbulent gas implies a magnetic field strength of $\sim$100 $\mu$G (Crocker et al., 2010). On the other hand, Thomas et al. (2020) suggested a stronger magnetic field of 200 $\mu$G in the NTFs. In our fiducial simulation run, the initial magnetic field strength at the origin ($x=y=z=0$) is set to $B_{\rm 0}=80~{}\mu$G. Values of $50~{}\mu$G and $200~{}\mu$G are also tested to examine the effect of a weaker/stronger magnetic field (see Section 2.4). The simulation neglects viscosity and thermal conduction, but takes into account radiative cooling. We adopt the TABULATED cooling function implemented in PLUTO, which is generated with Cloudy for an optically thin plasma and solar abundances (Ferland et al., 2017). We neglect the synchrotron cooling of the relativistic electrons that are presumably produced by the SN shocks (see Section 2.4).
Table 1: Simulation Parameters for the Radio Bubbles

| Fiducial Parameters | Value |
| --- | --- |
| SN Ejecta Mass | 10 M⊙ |
| SN Kinetic Energy | $1\times 10^{51}$ erg |
| Injection Radius | 4 pc |
| Ambient Temperature | $1\times 10^{6}$ K |
| Diameter of Explosion Region | 50 pc |
| Height of Explosion Region | 10 pc |

| Simulation Runs | B80I1 | B80I2 | B50I1 | B200I1 |
| --- | --- | --- | --- | --- |
| Magnetic Field Strength | 80 $\mu$G | 80 $\mu$G | 50 $\mu$G | 200 $\mu$G |
| Explosion Interval | 1 kyr | 2 kyr | 1 kyr | 1 kyr |

### 2.2 Gravitational Potential and Initial ISM Conditions

The gravity in the Galactic center mainly originates from two components, namely, the nuclear star cluster (NSC), which dominates the innermost $\sim$20 pc, and the nuclear stellar disk (ND), which occupies the inner few hundred parsecs. We neglect larger-scale structures such as the bar and the Galactic disk. The SMBH, which has four million solar masses and a sphere of influence of a few parsecs in radius, can also be ignored given the scales of interest here. The NSC/ND will not evolve significantly on the timescales involved in our simulations, hence we adopt a fixed gravitational potential, which, following Stolte et al. (2008), can be approximated by a logarithmic form,

$\Phi=0.5v_{0}^{2}\log\left(R_{c}^{2}+\dfrac{x^{2}}{a^{2}}+\dfrac{y^{2}}{b^{2}}+\dfrac{z^{2}}{c^{2}}\right),$ (3)

where $v_{0}$ is the asymptotic velocity of a flat rotation curve, $R_{c}$ is the core radius, and $a$, $b$ and $c$ are stretching parameters. We adopt $v_{0}=98.6\rm~{}km~{}s^{-1}$, $R_{c}=2\rm~{}pc$, $a=b=c=1$ for the NSC, and $v_{0}=190\rm~{}km~{}s^{-1}$, $R_{c}=90\rm~{}pc$, $a=b=1$, $c=0.71$ for the ND, from Table 1 of Stolte et al. (2008). The combined NSC+ND potential has been found to provide a good match to the observed stellar mass distribution in the Galactic center (Launhardt et al., 2002).
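As a concrete illustration (not part of the original analysis), the adopted NSC+ND potential of Eq. (3) can be evaluated as in the following Python sketch, using the parameter values quoted above; lengths are in pc and $v_0$ in km s$^{-1}$, so $\Phi$ comes out in (km s$^{-1}$)$^2$ up to an additive constant.

```python
import math

def potential(x, y, z, v0, Rc, a=1.0, b=1.0, c=1.0):
    """Logarithmic potential of Eq. (3); lengths in pc, v0 in km/s.
    Returns Phi in (km/s)^2, defined up to an additive constant."""
    return 0.5 * v0**2 * math.log(Rc**2 + (x / a)**2 + (y / b)**2 + (z / c)**2)

def phi_total(x, y, z):
    """Combined NSC + ND potential with the Stolte et al. (2008) values."""
    nsc = potential(x, y, z, v0=98.6, Rc=2.0)               # a = b = c = 1
    nd = potential(x, y, z, v0=190.0, Rc=90.0, c=0.71)      # flattened in z
    return nsc + nd
```

As a sanity check, the potential grows monotonically outward along the $z$-axis, consistent with its use in Eq. (4).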
At the beginning of the simulation, the interstellar medium (ISM) is assumed to be isothermal and in hydrostatic equilibrium with the gravitational potential,

$\dfrac{\nabla P}{\rho}=-\nabla\Phi,$ (4)

where $P=n_{t}kT$ is the thermal pressure, and $n_{t}$ is the total number density of gas particles including protons, electrons and heavy elements. As usual, we define $\rho=\mu m_{p}n_{t}$, where $m_{p}$ is the proton mass and $\mu\approx 0.6$ is the mean molecular weight for solar abundances. The initial temperature is set to $10^{6}$ K, which is roughly the virial temperature given the enclosed gravitational mass of $1\times 10^{9}\rm~{}M_{\odot}$ within 100 pc. The prevalence of hot gas (with temperatures $\gtrsim 10^{6}$ K) in the Galactic center has been established observationally (e.g., Baganoff et al., 2003; Ponti et al., 2015). While cooler gas (with temperatures $\lesssim 10^{4}$ K) is also known to exist in the Galactic center, it tends to concentrate in dense filaments and clouds near the mid-plane and is not expected to play a significant role in the bubble formation. We discuss possible effects of a multi-phase ISM on the observed properties of the bubble in Section 4.4. The initial density distribution can then be derived by solving Eqn. 4, as shown in Figure 1 along with the initial magnetic field distribution. From the adopted initial conditions, it can be shown that the thermal pressure of the ISM ($n_{t}kT\sim 10^{-12}-10^{-10}\rm~{}dyn~{}cm^{-2}$) is everywhere significantly lower than the magnetic pressure ($B_{0}^{2}/8\pi\sim 2.5\times 10^{-10}\rm~{}dyn~{}cm^{-2}$), except perhaps in the innermost few parsecs. Meanwhile, the Alfvén speed, $V_{\rm A}=(B_{0}^{2}/4\pi\rho)^{\frac{1}{2}}\lesssim 10^{3}\rm~{}km~{}s^{-1}$, is much lower than the typical expansion velocity of an SN. Therefore, the present case of the Galactic center satisfies the moderately strong field condition defined by Insertis & Rees (1991).
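The pressure and Alfvén-speed comparisons above are simple CGS arithmetic and can be verified directly. The sketch below assumes an illustrative density of $n_t = 1\rm~cm^{-3}$ (a value within the range implied by the quoted thermal pressures, not one stated in the text) together with $B_0 = 80~\mu$G and $T = 10^6$ K.

```python
import math

# CGS sanity check of the initial conditions in Section 2.2.
k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_p = 1.6726e-24     # proton mass [g]
mu = 0.6             # mean molecular weight

B0 = 80e-6           # initial field strength [G]
n_t = 1.0            # assumed illustrative particle density [cm^-3]
T = 1.0e6            # initial temperature [K]
rho = mu * m_p * n_t

P_th = n_t * k_B * T                      # thermal pressure ~1.4e-10
P_mag = B0**2 / (8 * math.pi)             # magnetic pressure ~2.5e-10
v_A = B0 / math.sqrt(4 * math.pi * rho)   # Alfven speed [cm/s]

print(f"P_th  = {P_th:.2e} dyn cm^-2")
print(f"P_mag = {P_mag:.2e} dyn cm^-2")
print(f"v_A   = {v_A / 1e5:.0f} km/s")
```

The magnetic pressure evaluates to $\approx 2.5\times 10^{-10}\rm~dyn~cm^{-2}$, matching the value quoted in the text, and the Alfvén speed stays well below $10^{3}\rm~km~s^{-1}$ for any density of order unity or larger.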
### 2.3 Supernova Input

In the simulations, SNe are set to explode within a predefined cylindrical volume. The cylinder has a diameter of 50 pc in the $x-y$ plane and a thickness of 10 pc along the $z$-axis, to mimic the concentration of massive stars near the Galactic plane (Kruijssen et al., 2015). We have tested a wider explosion area in the $x-y$ plane (e.g., 100 pc in diameter, closer to that of the CMZ), finding that the resultant bubble becomes significantly fatter, inconsistent with the observed morphology. In reality, the CMZ may provide a horizontal confinement to the bubble. However, a self-consistent implementation of the CMZ would necessarily introduce more free parameters, and is beyond the scope of the present work. The base of the radio bubbles shows a small but appreciable offset to the west of Sgr A* (Heywood et al., 2019). We therefore place the center of the cylinder at $x=5$ pc to mimic this behavior. Given the otherwise axisymmetric setup of the simulation, this appears to be the most viable way to reproduce the observed offset. The fiducial SN birth rate is set to $1\rm~{}kyr^{-1}$ (Di Teodoro et al., 2018), which is estimated by assuming an SFR of 0.1 M⊙ yr-1, a Kroupa (2001) initial mass function (IMF) and a minimum mass of 8 M⊙ for the progenitor star of a core-collapse SN. Barnes et al. (2017) and Sormani et al. (2020) estimated a current SFR of 0.1 M⊙ yr-1 inside the CMZ, while Nogueras-Lara et al. (2020) found that star formation in the ND (which has a radial extent similar to that of the CMZ) has been relatively active over the past 30 Myr, with an SFR of $0.2-0.8\rm~{}M_{\odot}~{}yr^{-1}$. Our assumed SFR of 0.1 M⊙ yr-1 is compatible with the smaller radial extent of our adopted explosion region, which may be the case if SN events have been episodic and clustered on a $\lesssim$ Myr timescale. We also test the effect of a lower SN birth rate of $0.5\rm~{}kyr^{-1}$ (see below).
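The quoted SN rate can be checked numerically: with a Kroupa (2001) IMF, counting stars above 8 M⊙ per unit mass of stars formed and multiplying by the SFR recovers $\sim 1\rm~kyr^{-1}$. The sketch below assumes IMF mass bounds of 0.01 and 100 M⊙ (not stated in the text) and uses a simple log-spaced trapezoid integrator.

```python
def kroupa_xi(m):
    """Kroupa (2001) broken power law dN/dm (unnormalized), m in Msun."""
    if m < 0.08:
        return (m / 0.08) ** -0.3
    elif m < 0.5:
        return (m / 0.08) ** -1.3
    else:
        return (0.5 / 0.08) ** -1.3 * (m / 0.5) ** -2.3

def integrate(f, a, b, n=100000):
    """Trapezoid rule on a log-spaced grid (adequate for power laws)."""
    xs = [a * (b / a) ** (i / n) for i in range(n + 1)]
    return sum(0.5 * (f(x0) + f(x1)) * (x1 - x0)
               for x0, x1 in zip(xs[:-1], xs[1:]))

m_lo, m_hi, m_sn = 0.01, 100.0, 8.0          # assumed IMF bounds; SN cut 8 Msun
N_sn = integrate(kroupa_xi, m_sn, m_hi)       # SN progenitors per normalization
M_tot = integrate(lambda m: m * kroupa_xi(m), m_lo, m_hi)  # mass formed
sn_per_msun = N_sn / M_tot

SFR = 0.1                                     # Msun/yr, as assumed in the text
rate_per_kyr = SFR * sn_per_msun * 1e3
print(f"{sn_per_msun:.4f} SNe per Msun formed, ~{rate_per_kyr:.1f} SNe per kyr")
```

This evaluates to roughly 0.01 SNe per solar mass formed, i.e. a core-collapse SN rate of $\approx 1\rm~kyr^{-1}$ for an SFR of 0.1 M⊙ yr$^{-1}$, consistent with the fiducial value.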
We have neglected Type Ia SNe, which have a birth rate of $\lesssim 0.05\rm~{}kyr^{-1}$ according to the enclosed stellar mass in the ND/NSC (Mannucci et al., 2005), though a recent study by Zhou et al. (2021) found evidence that Sgr A East, one of the few currently known SNRs in the Galactic center, was created by a Type Iax SN. Individual SNe are thus injected at random positions inside the cylindrical volume, one after another with a fixed interval according to the assumed birth rate. Each SN has an ejecta mass of $M_{\rm ej}=10\rm~{}M_{\odot}$ and a kinetic energy of $E_{\rm ej}=1\times 10^{51}$ erg (Poznanski, 2013). This energy is deposited into a sphere with a radius of $R_{\rm SN}=4$ pc, ignoring any intrinsic anisotropy. The analytic solution within $R_{\rm SN}$ is derived from Truelove & McKee (1999), in which the newly born SNR is divided into two parts, an inner uniform-density core region and an outer power-law-density envelope region. The radius of the former is 10 times that of the latter, and the power-law index is set to zero.

Figure 1: Initial distribution of gas density, plotted in logarithmic scale and in units of cm$^{-3}$. The white arrows indicate the initial magnetic field distribution. The three panels are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.

### 2.4 Simulation Runs and Synthetic Emission Maps

In this work, we perform four simulation runs, each with a unique combination of magnetic field strength and SN explosion interval. Our fiducial simulation is represented by run B80I1, where B and I indicate the magnetic field and explosion interval, respectively. The fiducial run has $B_{0}=80\rm~{}{\mu}G$ and $I=1$ kyr. The other three runs each have one of the two parameters varied. B50I1 has $B_{0}=50\rm~{}{\mu}G$ and B200I1 has $B_{0}=200\rm~{}{\mu}G$, covering the empirical lower and upper limits inferred for the Galactic center (Section 2.1). Finally, B80I2 has an explosion interval of 2 kyr.
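The random placement of SN sites within the explosion cylinder of Section 2.3 can be sketched as follows. The areal sampling is uniform over the disk cross-section; since only the $z>0$ half-space is simulated, $z$ is drawn from $[0, 10)$ pc here, which is an assumption on our part (the text does not specify the vertical placement).

```python
import math
import random

def sample_sn_position(diameter=50.0, height=10.0, x_offset=5.0, rng=random):
    """Draw a random SN site uniformly inside the explosion cylinder:
    50 pc across in the x-y plane, 10 pc thick in z, centered at
    x = 5 pc to mimic the observed westward offset. Units: pc."""
    r = 0.5 * diameter * math.sqrt(rng.random())  # uniform over disk area
    theta = 2.0 * math.pi * rng.random()
    return (x_offset + r * math.cos(theta),
            r * math.sin(theta),
            height * rng.random())                # assumed z in [0, 10) pc

# Fiducial run B80I1: one SN per kyr, fixed interval, for 330 kyr.
events = [(t_kyr, sample_sn_position()) for t_kyr in range(330)]
```

Drawing the radius as $r \propto \sqrt{u}$ with $u$ uniform is what makes the points uniform in area rather than clustered toward the axis.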
The total elapsed time is set to 330 kyr for all four runs. In the fiducial simulation, this is about the time when the top of the bubble approaches the edge of the simulation box. The time step is adaptive and ranges between $1-40$ yr. The simulation parameters are summarized in Table 1. To facilitate comparison with the observations, we generate synthetic radio and X-ray maps for the final snapshot (i.e., $t$ = 330 kyr) of the simulation. We include synchrotron radiation and free-free emission in the radio band (by default at 1284 MHz, to be consistent with the MeerKAT observation), while for the X-ray band only thermal emission from a collisionally-ionized, optically-thin plasma is considered. First, we need to distinguish regions inside and outside the evolving bubble. This is realized by adding a tracer parameter, $Q$, evaluated at every pixel in the simulation, which obeys a simple conservation equation:

$\dfrac{\partial(\rho Q)}{\partial t}+\nabla\cdot(\rho Q\bm{v})=0.$ (5)

$Q$ has a value of 1 for pure SN ejecta, 0 for the unpolluted ISM, and a value between 0 and 1 for pixels with mixed ejecta and ISM. We further calculate the Mach number for every pixel. The synthetic maps only take into account pixels with a non-zero tracer parameter or a Mach number greater than 2. The latter condition is employed to ensure that pixels with a high Mach number but a zero tracer parameter, such as those at or immediately behind the shock, are included. The synchrotron emissivity depends on the magnetic field strength and the density of relativistic electrons. However, the latter cannot be directly obtained from our simulation and thus requires a working assumption. Here, we assume that the relativistic electron density at a given pixel of interest is proportional to the local gas density (Orlando et al., 2007; Zhang et al., 2017), normalized to have a mean energy density of $0.1\rm~{}eV~{}cm^{-3}$ across the bubble volume.
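The pixel-selection and normalization bookkeeping just described can be sketched as follows. The per-pixel values below are invented toy numbers, not PLUTO output: "bubble pixels" are those with $Q>0$ or Mach number $>2$, and the relativistic-electron energy density is taken proportional to the gas density, rescaled so its bubble-wide mean is 0.1 eV cm$^{-3}$.

```python
# Toy sample of per-pixel quantities: (n_gas [cm^-3], tracer Q, Mach number)
pixels = [
    (0.30, 0.9, 5.0),   # shocked ejecta-ISM mixture
    (0.05, 1.0, 1.5),   # low-density bubble interior (tracer-polluted)
    (1.00, 0.0, 3.0),   # freshly shocked ISM, not yet mixed with ejecta
    (2.00, 0.0, 0.5),   # undisturbed ambient ISM: excluded
]

# Selection used for the synthetic maps (Section 2.4).
bubble = [(n, q, m) for n, q, m in pixels if q > 0.0 or m > 2.0]

# Relativistic-electron energy density: proportional to local gas density,
# normalized to a bubble-mean of 0.1 eV cm^-3.
target_u = 0.1  # eV cm^-3
mean_n = sum(n for n, _, _ in bubble) / len(bubble)
u_rel = [target_u * n / mean_n for n, _, _ in bubble]
```

By construction, the mean of `u_rel` over the selected pixels equals the target energy density, while the shock-only pixel (high Mach, zero tracer) is retained and the undisturbed ISM is not.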
This is compatible with the estimated mean cosmic-ray energy density of $10\rm~{}eV~{}cm^{-3}$ in the bubble (Heywood et al., 2019) and the empirical fact that relativistic electrons account for $\sim 1\%$ of the total cosmic-ray energy density in the GeV band (Blasi, 2013). We calculate the synchrotron emissivity in each pixel and integrate along the line-of-sight (i.e., the $y$-axis) to derive the synchrotron intensity map. In this calculation, the $y$-component of the magnetic field is neglected due to the nature of synchrotron radiation. Radio free-free emission is calculated following the standard formula of Longair (2011), which, at a given pixel, scales with density squared and is a function of temperature. A temperature threshold of $10^{4}$ K is adopted when calculating the free-free emission. We find that only a tiny fraction of all pixels in any of our simulations has a temperature below $10^{5}$ K. The X-ray emissivity of an optically-thin thermal plasma in collisional ionization equilibrium (Smith et al., 2001), also scaling with density squared, is extracted from ATOMDB (http://www.atomdb.org), version 3.0.9, for which we adopt solar abundances. The free-free and X-ray intensity maps are again derived by integrating along the $y$-axis. We find that self-absorption is negligible in both the radio and X-ray bands, thanks to the relatively low column density involved.

## 3 Results

In this section we present the simulation results. We first describe the formation and subsequent evolution of the bubble in the fiducial run, showing that good agreement on the overall morphology of the bubble is achieved between the simulation and the observation (Section 3.1). We then present the other three simulation runs and examine the effect of varying the magnetic field strength or SN birth rate on the bubble formation (Section 3.2). Lastly, we confront the synthetic emission maps with the radio and X-ray observations (Section 3.3).
### 3.1 Bubble Formation and Evolution in the Fiducial Simulation

Figure 2: Density-velocity distributions after 30 (top row), 180 (middle row) and 330 (bottom row) kyr, for simulation B80I1, i.e., with an initial magnetic field strength of 80 $\mu$G and an explosion interval of 1 kyr. The gas density is plotted in logarithmic scale and in units of cm$^{-3}$. The white arrows indicate the velocity vectors. The left, middle and right columns are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.

Figure 3: Temperature-magnetic field distributions after 30 (top row), 180 (middle row) and 330 (bottom row) kyr, for simulation B80I1. The gas temperature is plotted in logarithmic scale and in units of Kelvin. The white arrows indicate the magnetic field vectors. The left, middle and right columns are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.

Figure 4: Magnetic field strength distributions after 330 kyr for simulation B80I1. The left, middle and right columns are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.

In Figures 2 and 3, we show the gas density and temperature maps of run B80I1. In each figure, the density or temperature distribution is shown for slices through the $z=0$ (left columns), $y=0$ (middle columns) and $x=0$ (right columns) planes, after a simulation time of $t$ = 30 (top rows), 180 (middle rows) and 330 (bottom rows) kyr. By design, 30 SNe have exploded by the time of 30 kyr. The forward shock fronts of the several youngest SNe are clearly revealed in the density map, as well as by the overlaid projected velocity vectors. A high-density region forms and persists around the origin ($x=y=z=0$) because of the steep gravitational potential there, even with the SMBH neglected. As the shocks propagate, they compress and heat the ambient gas and also frequently collide with each other, eventually forming an expanding complex of post-shock gas with temperatures of $10^{7-8}$ K.
By the time of 180 kyr, this hot gas complex has developed into a bubble structure with a common dense shell, most clearly seen in the $x-z$ and $y-z$ planes. Inside the bubble, the density is low as a result of expansion, while the temperature remains high due to repeated shock heating. Numerous arc-like features are evident in the temperature map, especially in the $x-z$ and $y-z$ planes, which are the relics of individual SN shocks. At this stage, the bubble looks fat, with a similar extent ($\sim 100$ pc) along the three dimensions. However, the overall expansion starts to show a preference along the vertical (positive $z$) direction, with the vertical expansion velocity of the shell now being $\sim 690\rm~{}km~{}s^{-1}$, substantially larger than the average expansion velocity of $\sim 120\rm~{}km~{}s^{-1}$ in the $x-y$ plane. This is primarily due to the collimation effect of an ordered magnetic field (Insertis & Rees, 1991; Stone & Norman, 1992; Rozyczka & Tenorio-Tagle, 1995; Wu & Zhang, 2019). Specifically, the SN shocks tend to push the semi-vertical magnetic field to the sides, greatly suppressing the magnetic field inside the bubble while amplifying the magnetic field near the bubble shell. In turn, the latter decelerates and even halts the horizontal expansion of the bubble. The vertical expansion, on the other hand, feels no such magnetic confinement, so a high velocity along this direction remains. The relatively strong gravitational potential in the $x-y$ plane also contributes to retarding the horizontal expansion and facilitates the collimation of the bubble along the $z$-axis. As a result, by the time of 330 kyr, the bubble has become much more elongated. The top of the bubble almost reaches the edge of the simulation box ($z=200$ pc), with a vertical expansion velocity still as high as $\sim 600\rm~{}km~{}s^{-1}$, whereas its horizontal extent has not grown significantly since $t$ = 180 kyr.
The width of the bubble at its base is about 120 pc, with a small but appreciable offset towards the positive $x$-axis, both in agreement with the observed bubble. Arc-like features tracing the sequential SN shocks remain prominent throughout the bubble interior. Near some of these arcs, locally enhanced magnetic fields are evident, which is the result of shock compression, as illustrated in Figure 4. The magnetic field strength reaches a maximum value of 175 $\mu$G within the bubble. Our simulation ends at this point.

### 3.2 Comparison with Other Simulation Runs

Figure 5: Simulated density-velocity images after 330 kyr. In the upper, middle and lower rows, we show the results of runs B80I2, B50I1 and B200I1, respectively. The $x-z$ and $y-z$ panels are slices through the center of the box along each axis, while the $x-y$ panel shows the slice at $z$ = 0. The background is the density distribution in logarithmic scale, and the white arrows indicate the velocity.

Figure 6: Simulated temperature-magnetic field images after 330 kyr. In the upper, middle and lower rows, we show the results of runs B80I2, B50I1 and B200I1, respectively. The $x-z$ and $y-z$ panels are slices through the center of the box along each axis, while the $x-y$ panel shows the slice at $z$ = 0. The background is the temperature distribution in logarithmic scale, and the white arrows indicate the magnetic field.

Similarly, we show the snapshots of the density and temperature maps of simulation runs B80I2, B50I1 and B200I1 in Figures 5 and 6, all at $t$ = 330 kyr. These three simulations share some common features with the fiducial simulation. In particular, a vertically-collimated, bubble-like structure is formed in all of them. The bubble is delineated by a dense outer shell with compressed magnetic field and has a low-density, high-temperature interior with vertically-oriented velocities and a generally weak magnetic field.
The bubble interior is not smooth; rather, it is filled with chaotic small-scale structures, again due to the sequential SN shocks and the mutual interactions between them. Below we describe the more distinctive features of the individual simulations. Simulation B80I2 (top row in Figures 5 and 6) adopts a lower explosion frequency than the fiducial case. This leads to a smaller energy injection rate, but one still sufficient to form a bubble. The bubble evolves more slowly, reaching a height of only 140 pc by the time of 330 kyr. The width of the bubble is also somewhat smaller than in the fiducial case (thus also narrower than the observed bubble), which remains the case even when we follow the bubble growth to a height of 200 pc. This occurs because, given a weaker SN energy injection but the same magnetic confinement, the reduction in the horizontal expansion is greater than that in the vertical expansion. We note that at a further reduced explosion frequency, much of the SN ejecta would not be able to escape from the strong gravity near the mid-plane, and a bubble would never form. In B50I1 (middle row in Figures 5 and 6), which has a weaker magnetic field than the fiducial run, the resultant vertical collimation is less effective and thus the bubble appears fatter. We note that a thinner bubble could still be achieved, should a lower explosion frequency be adopted in combination with the weaker magnetic field, for the reason explained above. However, in this case it would take a much longer time for the bubble to grow to the observed height of 190 pc. In contrast, B200I1 results in a significantly thinner structure. The magnetic field in this run is so strong that it can resist the compression of the SN shocks, and consequently there is little sweeping of the magnetic field inside the bubble.
With the strong magnetic collimation, some SN ejecta are able to rapidly propagate along the field lines, forming vertical protrusions (several of these are captured in the $x-z$ and $y-z$ slices). The overall morphology is clearly inconsistent with the observed bubble.

### 3.3 Comparison with Observations

Here we provide a more quantitative comparison with the radio and X-ray observations, with a focus on the fiducial simulation, which has the best morphological agreement with the observed bubble.

Figure 7: 1284 MHz radio intensity distribution in simulation B80I1 at 330 kyr. The red dotted line outlines the rim of the northern radio bubble. Left: synchrotron emission; right: free-free emission. In the left panel, values lower than $10^{-5}$ Jy arcsec$^{-2}$ are suppressed to enhance visualization of the faint features.

Figure 8: Synthetic $0.5-1.5$ keV (left) and $1.5-10$ keV (right) X-ray intensity distribution in simulation B80I1 at 330 kyr. The red dotted line outlines the rim of the northern radio bubble, while the black circles highlight two young SNRs. Values lower than $10^{-9}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ are suppressed to enhance visualization of the faint features.

The synthetic synchrotron and free-free intensity maps of B80I1, after an evolution time of $t$ = 330 kyr, are shown in the left and right panels of Figure 7. The overall morphology is quite similar between the synchrotron and free-free emission, which is partially owing to our assumption that the density of relativistic electrons scales with the local gas density. However, the synchrotron intensity is everywhere orders of magnitude higher than the free-free counterpart in the synthetic maps. This holds true even considering the uncertainties in the energy density of the relativistic electrons and the magnetic field strength. Consequently, synchrotron emission dominates the total flux density at 1284 MHz, consistent with the MeerKAT observation (Heywood et al., 2019).
It is noteworthy that both the hydrogen recombination line H90$\alpha$ at 8309 MHz and the 8.4 GHz continuum are found to trace the GCL (Nagoshi et al., 2019), which exhibits a loop-like structure spatially coincident with the northern radio bubble. This suggests that the thermal component may have an increasingly larger contribution toward higher frequencies, which can be due to the combined effect of substantial synchrotron cooling at higher frequencies and the presence of ambient cooler gas not taken into account in our simulation. The overall extent of the synthetic synchrotron emission highly resembles that of the northern radio bubble (delineated by the red dotted line in Figure 7), which has a width of 120 pc at its base and a height of 190 pc. Another interesting feature of the simulation is the presence of numerous filaments both at the edge of and inside the bubble, which closely resemble the NTFs (Yusef-Zadeh et al., 1984), although the ones in the simulation appear thicker and fuzzier in general, which may be partly owing to our moderate resolution. In the simulation, these filaments originate from the sequential SN shocks and their mutual interactions, and are associated with locally amplified magnetic fields (Figure 4). Their possible relation with the NTFs will be further addressed in Section 4. The 1284 MHz synchrotron flux density of the simulated bubble is found to be 5801 Jy, which is to be contrasted with our rough estimate of the observed flux density in the MeerKAT image, 970 Jy, obtained by assuming a mean flux density of 3 mJy beam$^{-1}$ across the projected area of the bubble. We caution that the MeerKAT mosaic image presented in Heywood et al. (2019) was not corrected for the primary beam attenuation and that the extended emission from the bubble suffers from potential flux loss in the interferometric image (I. Heywood, private communication), thus our estimate should be treated as a lower limit on the true flux density.
On the other hand, the simulated flux density depends heavily on the assumed energy density of relativistic electrons. Therefore, the apparently large discrepancy between the observed and simulated radio flux densities should be taken as a point for future improvement rather than a failure of the simulation. The synthetic 0.5–1.5 keV and 1.5–10 keV X-ray intensity maps are shown in Figure 8. Compared to its radio morphology, the simulated bubble appears smoother in the X-rays. The expanding shell of the bubble (Figure 2) leaves no significant sign of limb-brightening in the 1.5–10 keV map, which is roughly consistent with the X-ray observations. This might be due to the fact that the shell is on average cooler than the bubble interior (Figure 3). Indeed, in the 0.5–1.5 keV map, which is more sensitive to gas temperatures below $\sim$1 keV, limb-brightening is more evident, especially at the northwestern side of the bubble, although this energy band is not directly observable due to the large foreground absorption column density (a few $10^{22}\rm~{}cm^{-2}$; Ponti et al., 2019). The 1.5–10 keV map also exhibits far fewer small-scale structures in the bubble interior, except near the $x-y$ plane, where the gas density is high and the most recent SNe freshly deposit a fraction of their kinetic energy. In particular, the remnants of two newly exploded SNe are evident near the center (marked in the right panel of Figure 8), although they are not clearly seen in the synthetic radio map. An SNR evolving near Sgr A* will be heavily shaped by the strong gravity, with a large part of the ejecta pulled to the mid-plane, resulting in an appearance resembling the bipolar X-ray lobes detected in the innermost 15 parsecs of the Galactic center (Ponti et al., 2015, 2019). The thermal, kinetic and magnetic energies of the bubble are calculated by summing over all “bubble pixels” (Section 2.4) and are found to be 1.9, 1.2 and 0.1$\times 10^{52}$ erg, respectively.
The initial thermal and magnetic energies within the bubble volume are 0.7 and 1.1 $\times 10^{52}$ erg. A net decrease of the magnetic energy underscores the sweep-up of the magnetic field. Ponti et al. (2019) estimated a thermal energy of 4$\times 10^{52}$ erg for the X-ray chimneys (sum of the northern and southern halves), which is well matched by the simulated value of 1.9$\times 10^{52}$ erg for the northern chimney. Ponti et al. (2019) also measured density and temperature profiles along a selected Galactic longitude, $l=0\arcdeg$, and Galactic latitude, $b=0\fdg 7$. For a direct comparison, we construct density and temperature profiles at $l=0\arcdeg$ and $b=0\fdg 7$ from the simulation, as shown in Figure 9. Strictly speaking, Sgr A* is located at $l=0{\fdg}05579$, $b=-0{\fdg}04608$, but here we neglect this small difference and simply take the $x=0$ plane and $z=98$ pc plane for comparison. We calculate the density-weighted mean density along the line-of-sight as $\langle n\rangle=\sqrt{\dfrac{\int\ n_{\rm t}^{2}dV}{\int\ dV}},$ (6) where $dV=Adl$, $A$ is the projected area, and the line-of-sight integration ($dl$) runs from the farthest side to the nearest side of the bubble. The projected area varies across the profiles to approximate the rather irregular spectral extraction regions used in Ponti et al. (2019). At $l=0\arcdeg$, the width is 12.5 pc, and the lengths are 20 pc and 70 pc for $b<0\fdg 26$ and $b>0\fdg 26$, respectively. At $b=0\fdg 7$, the width is 12.5 pc, and the length is always 70 pc. The emissivity-weighted mean temperature is calculated as $\langle T\rangle=\dfrac{\int\ Tn_{\rm t}^{2}\Lambda(T,Z)dV}{\int\ n_{\rm t}^{2}\Lambda(T,Z)dV},$ (7) where $\Lambda$ is the tabulated X-ray emissivity as a function of temperature and metallicity, extracted from ATOMDB. By examining the distribution of the SN ejecta through the tracer parameter, we have verified that the assumption of a uniform metallicity is a reasonable approximation.
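On a uniform grid the volume element $dV$ is constant and cancels out of both ratios, so Eqs. (6) and (7) reduce to simple weighted averages over cells along the line of sight. The following minimal sketch illustrates this; the function names are illustrative (not from the paper's analysis code), and a flat emissivity stands in for the tabulated ATOMDB $\Lambda(T,Z)$.

```python
import math

def mean_density(n_t):
    """Density-weighted mean density, Eq. (6): sqrt of the mean of n_t^2."""
    return math.sqrt(sum(n * n for n in n_t) / len(n_t))

def mean_temperature(T, n_t, emissivity):
    """Emissivity-weighted mean temperature, Eq. (7), weights n_t^2 * Lambda(T)."""
    w = [n * n * emissivity(t) for n, t in zip(n_t, T)]
    return sum(t * wi for t, wi in zip(T, w)) / sum(w)

# Toy example: two equal-volume cells along the line of sight.
n_cells = [0.3, 0.1]          # total density, cm^-3
T_cells = [1.0, 0.6]          # temperature, keV
flat = lambda T: 1.0          # flat emissivity, for illustration only
print(mean_density(n_cells))                      # sqrt(0.05) ≈ 0.2236
print(mean_temperature(T_cells, n_cells, flat))   # ≈ 0.96 keV
```

Note how the $n^2$ weighting makes both averages heavily biased toward the densest gas along the sight line, which is why the inner data points are sensitive to the assumed line-of-sight depth.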
At $l=0\arcdeg$, the simulated density profile peaks at the midplane and decreases until $z\approx$ 40 pc, beyond which it flattens. This general trend is in reasonable agreement with the observed density profile. Notably, the observed density profile has a significantly higher peak at low $z$. This may be due partly to the smaller line-of-sight depth adopted by Ponti et al. (2019) for the two inner data points, and partly to contamination of the apparently diffuse X-ray emission near the mid-plane by unresolved stellar objects and non-thermal extended features (Zhu et al., 2018). The simulated temperature profile appears bumpy around a mean value of $\sim 1.0$ keV. The “bumps” are most likely due to consecutive SN shocks propagating upward. Near the top of the expanding shell the temperature quickly drops to $\sim$0.6 keV. The observed temperature profile, on the other hand, appears flatter and has a lower value of 0.7–0.8 keV between 20–150 pc. We note that the observed temperature was derived by fitting a single-temperature spectral model to an underlying plasma that has a range of temperatures (Ponti et al., 2019). The Galactic center hot ISM is expected to have a somewhat lower temperature than the bubble interior as a whole. Inclusion of the hot ISM in the observed spectrum could thus have led to a lower observed temperature. At $b=0\fdg 7$, the simulated density profile peaks at the eastern and western edges of the bubble shell, which is consistent with Figure 1. However, there is no clear sign of limb-brightening in the observed density profile; an enhanced density is only weakly seen near the eastern edge ($x\approx-60$ pc) and is absent near the western edge ($x\approx$ 70 pc). One possibility is that the soft X-ray emission from the denser and cooler western shell has largely dropped out of the observation band (but could have been seen in the 0.5–1.5 keV band, as shown in the left panel of Figure 8).
The simulated temperature profile shows a roughly inverted “U” shape, with values peaking at $\sim$1.25 keV at $x=20$ pc. It is noteworthy that the outermost few points in the simulated profile are actually outside the bubble volume; their values only reflect the unperturbed ISM. The observed temperature profile, again derived from a spectral fit using a single-temperature model, appears flat around a mean value of 0.8 keV. Figure 9: Density and temperature profiles of B80I1. The light blue dots and pink pluses indicate the temperature and the total density in the simulation, respectively. The blue dots and red pluses respectively indicate the temperature and the density from the observations. The observed values are manually estimated from Ponti et al. (2019). Left: The profile at Galactic longitude $l=0\arcdeg$. Right: The profile at Galactic latitude $b=0\fdg 7$. Note that the observed density/temperature profiles cover a wider range, reaching beyond the bubble volume. Ponti et al. (2019) did not provide an explicit total X-ray luminosity of the chimneys. A rough estimate of this value can be made by adopting a cylinder of 150 pc in both diameter and height, as assumed by Ponti et al. (2019), a mean density of 0.1 cm$^{-3}$ and a mean temperature of 1$\times 10^{7}$ K (0.86 keV), which are representative of the X-ray chimneys. This leads to an estimated 1.5–10 keV luminosity of $\sim$2.8$\times 10^{36}$ erg s$^{-1}$ for the northern chimney, again well matched by the simulated value of 2.0$\times 10^{36}$ erg s$^{-1}$. For completeness, the synthetic radio and X-ray maps of runs B80I2, B50I1 and B200I1 are shown in Figure 10. While these maps exhibit some interesting features, it is immediately clear that none of them matches the observed bubble morphology (again approximated by the red dotted line). Figure 10: Upper panels: Synthetic synchrotron intensity distribution at 1284 MHz. Values lower than $10^{-5}$ Jy arcsec$^{-2}$ are masked for better visualization.
Lower panels: Synthetic 1.5–10 keV X-ray intensity distribution. Values lower than $10^{-9}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ are suppressed to enhance visualization of the faint features. The red dotted line outlines the morphology of the northern radio bubble. The left, middle and right columns show the results of runs B80I2, B50I1 and B200I1, respectively. ## 4 Discussion ### 4.1 The Origin and Fate of the Galactic Center Radio Bubbles/X-ray Chimneys The simulations presented in the previous section show that an outflow driven by sequential SN explosions and collimated by a vertical magnetic field can provide a reasonable explanation for the observed radio bubbles/X-ray chimneys in the Galactic center. In particular, the simulations reproduce well the overall morphology, X-ray luminosity and thermal energy of the northern bubble. This scenario relies on two key ingredients: SN explosions clustered in the nuclear disk to provide a semi-continuous energy input, and a vertical, moderately strong magnetic field to provide the collimation. Both ingredients are very likely available in the Galactic center. Indeed, direct evidence for contemporary SN explosions in the Galactic center is provided by at least a few SNRs clearly visible in radio or X-ray images (e.g., Ponti et al., 2015). Moreover, about two hundred emission-line objects have been detected in the Galactic center, most of which are likely evolved massive stars (Dong et al., 2012). These stars may belong to the same population that gave rise to the SNe responsible for launching the bubbles. As for the magnetic field, it is widely thought to be predominantly poloidal in the Galactic center, at least in regions outside the giant molecular clouds (Ferrière, 2009). In this regard, an SN-driven, magnetically-collimated outflow should naturally develop in the Galactic center, provided our simulations are correct.
As mentioned in Section 1, a competing driver of a large-scale outflow is the kinetic power from the central SMBH, even though Sgr A* is by no means comparable with a classical AGN. While our simulations cannot automatically rule out an AGN-driven outflow, they offer useful insight into the latter case. Compared to the distributed SN explosions, energy input from the SMBH is highly concentrated. Thus an AGN-driven outflow on the hundred-parsec scale may either acquire a highly elongated shape in the case of a canonical jet-driven outflow (e.g., Zhang & Guo, 2020), or inflate a fat bubble in the case of a more isotropic wind symbiotic with the hot accretion flow onto a weakly accreting SMBH (Yuan et al., 2015). Magnetic collimation may also shape the wind-blown bubble, but one expects that the resultant structure is again highly elongated. Thus matching the morphology of the radio bubbles with an AGN wind-driven outflow may require some fine-tuning, which awaits a detailed investigation. We now turn to the fate of the radio bubbles. In the framework of our simulations, the SN-driven outflow is necessarily an evolving structure. In fact, at the end of our fiducial simulation, the top of the bubble still expands at a speed of $\sim 600\rm~{}km~{}s^{-1}$ (Section 3.1). Provided continuous energy injection from future SNe, which is quite likely given the evolved massive stars near the disk plane (Dong et al., 2012), the bubbles should continue to grow and gradually evolve into a more “chimney”-like structure, as long as a moderately strong magnetic field persists to greater heights. Conversely, if SNe were temporarily shut off, one expects that the bubble/chimney would ultimately disperse and collapse within a time not much greater than the sound-crossing time (a few hundred kyr). We have run a test simulation to examine such a case.
Specifically, we adopt the same setting as the fiducial simulation, except that SN explosions cease after a time of 200 kyr. It is found that the upper edge of the bubble can still climb to a height of $\sim$190 pc with its accumulated momentum. However, the interior of the bubble, especially its lower portion, begins to collapse soon after the shutoff of the SNe, due to the loss of energy injection against the strong central gravity. In addition, the mean gas temperature inside the bubble gradually declines. Such an effect might bring the simulated temperature profile into better agreement with the observed one (Figure 9), although we have no evidence that the Galactic center is currently experiencing a substantial drop in the SN birth rate. It is interesting to ask whether the radio bubbles/X-ray chimneys have a causal relation with the Fermi bubbles (Su et al., 2010) and eROSITA bubbles (Predehl et al., 2020) found on much larger scales. We note that the age of the radio bubbles inferred from our simulations is only a few hundred kyr, much shorter than the dynamical timescale of a few Myr originally suggested by Heywood et al. (2019). However, that estimate was based on the assumption of a constant expansion velocity of the bubbles, which is implausible, hence a shorter timescale is expected. The estimated age of the Fermi bubbles, on the other hand, ranges from 1 Myr (Yang et al., 2013) to 1 Gyr (Crocker & Aharonian, 2011). Thus, in the context of our supernova-based model for the origin of the radio bubbles/chimneys, the radio bubbles would be a dynamically younger and independent structure simply evolving in the interior of the Fermi/eROSITA bubbles, which themselves were formed by earlier activity in the Galactic center. Alternatively, as suggested by Ponti et al. (2019), the X-ray chimney may be a channel that transports energy from the Galactic center to the high-latitude region currently occupied by the Fermi bubbles.
In this case, the channel should have existed for tens of Myr, so that star formation in the Galactic center can be sufficient to supply the total energy content of the Fermi bubbles, $\sim 10^{56}$ erg (Carretti et al., 2013). However, such a picture contradicts the capped morphology of the radio bubbles (the southern bubble is not obviously capped in X-rays; Ponti et al., 2021), which, according to our simulations, is naturally explained as the expanding shell of a newly born outflow. This picture may be reconciled if star formation in the Galactic center has been episodic on a timescale of $\sim$10 Myr (Krumholz & Kruijssen, 2015). In this case, the “chimney” is (re)established by consecutive generations of mini-starbursts and collapses in between. Of course, over such a long interval, the activity of Sgr A* can also play an important role in contributing to the inflation of the chimneys, especially in view of the fact that it was likely much more active in the recent past (Ponti et al., 2010, 2013; Camilo et al., 2018). In a hybrid scenario, Sgr A*, together with supernovae and even stellar winds, can simultaneously sustain the “chimney” and transport energy to larger scales, implying X-ray emission beyond the edge of the radio bubbles, which is also suggested by Ponti et al. (2021). ### 4.2 Origin of the Non-thermal Filaments The origin of the NTFs has been extensively debated since their discovery nearly four decades ago.
Proposed models for the NTFs include expanding magnetic loops (Heyvaerts et al., 1988), induced electric fields (Benford, 1988; Morris & Yusef-Zadeh, 1989), thermal instability in relativistic gas (Rosso & Pelletier, 1993), cosmic strings (Chudnovsky et al., 1986), magnetic reconnection (Lesch & Reich, 1992; Serabyn & Morris, 1994; Morris, 1996; Banda-Barragán et al., 2016, 2018), analogs of cometary plasma tails (Shore & LaRosa, 1999), a turbulent magnetic field (Boldyrev & Yusef-Zadeh, 2006), stellar winds or SNe of the young star cluster (Yusef-Zadeh, 2003; Yusef-Zadeh & Wardle, 2019), pulsar wind nebulae (Barkov & Lyutikov, 2019), and the tidal destruction of gas clouds (Coughlin et al., 2021). A multi-SNe hypothesis has also been suggested (Sofue, 2020). In our simulations, filamentary features resembling the observed NTFs form primarily at the interfaces of colliding shocks of individual SNe (Figure 4). Magnetic fields are compressed and amplified in these filaments, where particle acceleration (e.g., due to diffusive shock acceleration) is expected to take place. The Radio Arc also finds a possible counterpart in the simulations, arising from the piling up of consecutive SN shocks at the sides of the bubble (Figure 2). Comparing Figure 7 and Figure 10, it appears that an SN-driven outflow evolving in a weaker magnetic field produces more filaments. This is because a strong magnetic field can more easily confine an SN shock and reduce its chance of encountering other shocks. We note that in the simulation many filaments are indeed one-dimensional structures, i.e., they have a distinct long axis roughly oriented vertically, but some others arise from a projection effect, i.e., a two-dimensional surface viewed edge-on. Such a surface is also the result of colliding shock fronts.
We stress that the moderate resolution of our simulation would smear the appearance of the shock fronts, so we anticipate that additional apparent filaments would show up at higher resolution. The viability of this formation mechanism for the NTFs could be assessed by direct comparison of the cross-sectional profiles of the filaments appearing in the simulations with those of observed NTFs, but a higher resolution simulation is needed for such a comparison. We note that there are NTFs found outside the radio bubbles (Heywood et al., 2019). These might have formed in a past generation of clustered SN explosions and may persist longer than the associated outflow. Of course, we cannot rule out the aforementioned alternative models for all NTFs. In reality, the NTFs may have a mixed origin, i.e., different processes, including SN shocks, stellar winds and pulsar winds, can produce seeds of NTFs that are further shaped by the compressed magnetic field or other mechanisms. ### 4.3 Strength of the Galactic Center Magnetic Field The magnetic field is a crucial component of the Galactic center environment. At present, the average field strength is still quite uncertain. The assumption of energy equipartition between the magnetic field and relativistic particles leads to estimates up to $\sim$1 mG in the brightest NTFs and as low as 10 $\mu$G in the more diffuse background. Crocker et al. (2010) derived a lower limit of $\sim 50~{}\mu$G based on the diffuse $\gamma$-ray flux and suggested a typical value of $\sim 100~{}\mu$G in the central 400 pc region. In our simulation B50I1, which adopts a field strength of 50 $\mu$G, an outflow still develops, although the resultant bubble appears fatter than in the fiducial simulation due to the reduced magnetic confinement (Section 3.2). This lends some support to the above lower limit.
On the other hand, simulation B200I1, which assumes a field strength of $200~{}\mu$G, is obviously inconsistent with the observation (Figure 10). This conclusion holds even if the other parameter, the SN birth rate, were adjusted within a reasonable range. Qualitatively, at a lower SN birth rate, the shocks and ejecta of individual SNe would be less resistant to the magnetic pressure, and thus less likely to evolve into a mutually connected network. The resultant outflow hardly takes a bubble shape; rather, it would consist of many barrel-like structures, through which individual SN ejecta propagate. Only a much higher SN birth rate could counteract the magnetic pressure, but this would be inconsistent with the currently accepted star formation rate in the Galactic center ($\sim 0.1\rm~{}M_{\odot}~{}yr^{-1}$). Therefore, our simulations provide a meaningful constraint on the average magnetic field on 100 pc scales in the Galactic center, $50\rm~{}{\mu}G\lesssim B_{0}\lesssim 200~{}{\mu}G$. Our fiducial run B80I1 demonstrates localized magnetic field amplification across the bubble, reaching a maximum field strength of 175 $\mu$G. It is expected that the global magnetic field would gradually relax back to its initial configuration after the termination of clustered SN explosions and the dispersal/collapse of the outflow. ### 4.4 Caveats Despite the satisfactory reproduction of the major observed properties of the radio bubbles/X-ray chimneys, some notable discrepancies exist between our simulation results and the observations, which warrant the following remarks. The observed edge-brightened radio bubbles have a low-surface-brightness interior, while in our simulation the edge-interior contrast is less significant. A possible cause is that we have ignored synchrotron cooling. Using a magnetic field of 20 $\mu$G, Heywood et al. (2019) derived a synchrotron cooling time of 1–2 Myr, assuming that the electron energy distribution has a power-law index of 2.
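This cooling-time estimate can be rescaled to other field strengths with a short calculation; we assume here the standard single-frequency synchrotron scaling $t_{\rm cool}\propto B^{-3/2}$ (electrons emitting at fixed $\nu$ have $E\propto(\nu/B)^{1/2}$ and $\dot{E}\propto B^{2}E^{2}$), which is our reading of "the same method" rather than a formula stated explicitly in the text.

```python
# Rescale the 1-2 Myr cooling time quoted for B = 20 uG up to B = 80 uG,
# assuming t_cool ∝ B^(-3/2) at fixed observing frequency.
t_cool_20 = (1.0e6, 2.0e6)        # yr, quoted range for B = 20 uG
scale = (80.0 / 20.0) ** -1.5     # = 1/8
t_cool_80 = tuple(t * scale for t in t_cool_20)
print([f"{t / 1e3:.0f} kyr" for t in t_cool_80])  # ['125 kyr', '250 kyr']
```

The upper end of the rescaled range is comparable to the bubble's evolution time, which is why cooling cannot be safely neglected at 80 $\mu$G.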
Based on the same method, we estimate a cooling time of 250 kyr for 80 $\mu$G, which is comparable to the evolution time of the bubble in our simulation. Hence the relativistic electrons produced at the early stage and now filling the bubble interior should be subject to radiative cooling, an effect that is not taken into account but would otherwise enhance the edge-interior contrast. An alternative and more likely cause is the absence of a cool gas shell in our simulation. The presence of cool gas (with a temperature of $\sim 10^{4}$ K) in the outer part of the GCL has been known for some time (Law, 2010; Nakashima et al., 2019). This cool gas is not found in our simulations, owing to the very modest radiative cooling even in the dense shell of post-shock gas. This is also the reason why the free-free emission predicted by our simulation is negligible compared to the synchrotron emission (Section 3.3). Hence the detected cool gas probably has an external origin that is missing in the framework of our simulation. Indeed, a substantial amount of both cool and cold gas exists in the NSD/CMZ (Ferrière et al., 2007), and part of this gas may be swept into the bubble shell and/or entrained into the bubble interior. For example, Ponti et al. (2021) argued that a gas cloud associated with the bright 25 $\mu$m source AFGL5376 has been accelerated and is now defining part of the wall of the bubble. An additional source of cool gas is the stellar winds of the massive stars distributed in the nuclear disk. In principle, the Galactic center outflow may also be driven by stellar winds (Chevalier, 1992). Stellar winds as an additional energy and momentum source have not been included in our simulation. We can give a rough estimate of the collective energy input from the massive stars in the Galactic center. The stellar winds should be dominated by the Wolf–Rayet stars, which have a typical mass loss rate of $10^{-5}\rm~{}M_{\odot}~{}yr^{-1}$ and a wind velocity of $2000\rm~{}km~{}s^{-1}$.
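The wind energetics quoted next follow directly from these parameters via $P = \frac{1}{2}\dot{M}v_{\rm w}^{2}$; a quick check in CGS units (constants rounded as below):

```python
# Collective kinetic power of the nuclear-disk Wolf-Rayet winds, using
# the parameters stated in the text (a back-of-envelope check, not the
# paper's own script).
M_SUN = 1.989e33           # g
YR = 3.156e7               # s
n_stars = 200              # evolved massive stars (Dong et al. 2012)
mdot = 1e-5 * M_SUN / YR   # mass-loss rate per star, g/s
v_wind = 2000e5            # wind velocity, cm/s

power = 0.5 * n_stars * mdot * v_wind ** 2   # erg/s
energy = power * 330e3 * YR                  # erg released in 330 kyr
print(f"{power:.1e} erg/s, {energy:.1e} erg")  # 2.5e+39 erg/s, 2.6e+52 erg
```

The same two lines with $\dot{M}=10^{-3}\rm~{}M_{\odot}~{}yr^{-1}$ and $v_{\rm w}=1000\rm~{}km~{}s^{-1}$ reproduce the $3\times10^{51}$ erg quoted for the central parsec.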
Thus the $\sim$200 evolved massive stars found by Dong et al. (2012) in the nuclear disk have a total kinetic power of $2.5\times 10^{39}\rm~{}erg~{}s^{-1}$ and would release a kinetic energy of $2.6\times 10^{52}\rm~{}erg$ in 330 kyr. The massive stars in the central parsec provide an additional kinetic energy of $3\times 10^{51}\rm~{}erg$ in 330 kyr, assuming a collective mass loss rate of $10^{-3}\rm~{}M_{\odot}~{}yr^{-1}$ and a wind velocity of $1000\rm~{}km~{}s^{-1}$ (Najarro et al., 1997; Quataert, 2004). Therefore, the energy input from the massive stars is about one order of magnitude smaller than that of the SNe in our simulation. Nevertheless, massive stars may start launching strong winds a few Myr before their core collapse, significantly shaping the ambient gas into which the bubbles expand. A self-consistent implementation of the stellar winds requires a reliable stellar evolution model and a much higher resolution, and thus awaits future work. ## 5 Summary The recently discovered radio bubbles and X-ray chimneys in the Galactic center both point to a dynamically young outflow. In this work we have used three-dimensional MHD simulations, carefully tailored to the physical conditions of the Galactic center, to explore the scenario in which an SN-driven, magnetically-collimated outflow produces the observed bubbles/chimneys. The main results and implications of our study include: 1. 1. An SN-driven, magnetically-collimated outflow naturally forms in almost all simulations performed. The morphology, X-ray luminosity and thermal energy of the radio bubbles/X-ray chimneys can be well reproduced for a reasonable choice of two parameters, namely, the SN birth rate and the strength of the vertical magnetic field. Meanwhile, we have examined the effect of changing these two parameters on the formation of the bubble. 2. 2.
Dense filamentary features are seen both at the edge and in the interior of the simulated bubble, which are the sites of colliding shocks of individual SNe. This offers a plausible explanation for at least a fraction of the observed NTFs and the Radio Arc. 3. 3. In the framework of our simulations, the magnetic field in the Galactic center is likely to have a strength between 50–200 $\mu$G, consistent with previous estimates based on independent arguments. In conclusion, we are able to provide a viable formation mechanism for the radio bubbles/X-ray chimneys. This invites future work to explore the possible physical connection between Galactic outflows on various scales. This work is supported by the National Key Research and Development Program of China (grant 2017YFA0402703) and National Natural Science Foundation of China (grant 11873028). We acknowledge the computing resources of Nanjing University, Purple Mountain observatory and National Astronomical Observatories of China. We thank Miao Li and Feng Yuan for their helpful discussions, and G. Ponti and I. Heywood for their communications on the estimation of the X-ray luminosity and radio flux density, respectively. ## References * Baganoff et al. (2003) Baganoff, F. K., Maeda, Y., Morris, M., et al. 2003, ApJ, 591, 891 * Banda-Barragán et al. (2018) Banda-Barragán, W. E., Federrath, C., Crocker, R. M., & Bicknell, G. V. 2018, MNRAS, 473, 3454 * Banda-Barragán et al. (2016) Banda-Barragán, W. E., Parkin, E. R., Federrath, C., Crocker, R. M., & Bicknell, G. V. 2016, MNRAS, 455, 1309 * Barkov & Lyutikov (2019) Barkov, M. V., & Lyutikov, M. 2019, MNRAS, 489, L28 * Barnes et al. (2017) Barnes, A. T., Longmore, S. N., Battersby, C., et al. 2017, MNRAS, 469, 2263 * Benford (1988) Benford, G. 1988, ApJ, 333, 735 * Bland-Hawthorn & Cohen (2003) Bland-Hawthorn, J., & Cohen, M. 2003, ApJ, 582, 246 * Blasi (2013) Blasi, P. 2013, A&A Rev., 21, 70 * Boldyrev & Yusef-Zadeh (2006) Boldyrev, S., & Yusef-Zadeh, F. 
2006, ApJ, 637, L101 * Camilo et al. (2018) Camilo, F., Scholz, P., Serylak, M., et al. 2018, ApJ, 856, 180 * Carretti et al. (2013) Carretti, E., Crocker, R. M., Staveley-Smith, L., et al. 2013, Nature, 493, 66 * Cheng et al. (2015) Cheng, K. S., Chernyshov, D. O., Dogiel, V. A., & Ko, C. M. 2015, ApJ, 804, 135 * Cheng et al. (2011) Cheng, K. S., Chernyshov, D. O., Dogiel, V. A., Ko, C. M., & Ip, W. H. 2011, ApJ, 731, L17 * Chevalier (1992) Chevalier, R. A. 1992, ApJ, 397, L39 * Chudnovsky et al. (1986) Chudnovsky, E. M., Field, G. B., Spergel, D. N., & Vilenkin, A. 1986, Phys. Rev. D, 34, 944 * Coughlin et al. (2021) Coughlin, E. R., Nixon, C. J., & Ginsburg, A. 2021, MNRAS, 501, 1868 * Crocker & Aharonian (2011) Crocker, R. M., & Aharonian, F. 2011, Phys. Rev. Lett., 106, 101102 * Crocker et al. (2015) Crocker, R. M., Bicknell, G. V., Taylor, A. M., & Carretti, E. 2015, ApJ, 808, 107 * Crocker et al. (2010) Crocker, R. M., Jones, D. I., Melia, F., Ott, J., & Protheroe, R. J. 2010, Nature, 463, 65 * di Teodoro et al. (2020) di Teodoro, E. M., McClure-Griffiths, N. M., Lockman, F. J., & Armillotta, L. 2020, Nature, 584, 364 * Di Teodoro et al. (2018) Di Teodoro, E. M., McClure-Griffiths, N. M., Lockman, F. J., et al. 2018, ApJ, 855, 33 * Dong et al. (2012) Dong, H., Wang, Q. D., & Morris, M. R. 2012, MNRAS, 425, 884 * Fabian (2012) Fabian, A. C. 2012, ARA&A, 50, 455 * Ferland et al. (2017) Ferland, G. J., Chatzikos, M., Guzmán, F., et al. 2017, Rev. Mexicana Astron. Astrofis., 53, 385 * Ferrière (2009) Ferrière, K. 2009, A&A, 505, 1183 * Ferrière et al. (2007) Ferrière, K., Gillard, W., & Jean, P. 2007, A&A, 467, 611 * Genzel et al. (2010) Genzel, R., Eisenhauer, F., & Gillessen, S. 2010, Reviews of Modern Physics, 82, 3121 * Guo & Mathews (2012) Guo, F., & Mathews, W. G. 2012, ApJ, 756, 181 * Heckman & Best (2014) Heckman, T. M., & Best, P. N. 2014, ARA&A, 52, 589 * Heckman & Thompson (2017) Heckman, T. M., & Thompson, T. A. 2017, Handbook of Supernovae, ed. A. 
W. Alsabti & P. Murdin (Springer International Publishing), 2431 * Heyvaerts et al. (1988) Heyvaerts, J., Norman, C., & Pudritz, R. E. 1988, ApJ, 330, 718 * Heywood et al. (2019) Heywood, I., Camilo, F., Cotton, W. D., et al. 2019, Nature, 573, 235 * Insertis & Rees (1991) Insertis, F. M., & Rees, M. J. 1991, MNRAS, 252, 82 * Ko et al. (2020) Ko, C. M., Breitschwerdt, D., Chernyshov, D. O., et al. 2020, ApJ, 904, 46 * Kroupa (2001) Kroupa, P. 2001, MNRAS, 322, 231 * Kruijssen et al. (2015) Kruijssen, J. M. D., Dale, J. E., & Longmore, S. N. 2015, MNRAS, 447, 1059 * Krumholz & Kruijssen (2015) Krumholz, M. R., & Kruijssen, J. M. D. 2015, MNRAS, 453, 739 * Lacki (2014) Lacki, B. C. 2014, MNRAS, 444, L39 * Launhardt et al. (2002) Launhardt, R., Zylka, R., & Mezger, P. G. 2002, A&A, 384, 112 * Law (2010) Law, C. J. 2010, ApJ, 708, 474 * Lesch & Reich (1992) Lesch, H., & Reich, W. 1992, A&A, 264, 493 * Li & Bryan (2020) Li, M., & Bryan, G. L. 2020, ApJ, 890, L30 * Li et al. (2017) Li, M., Bryan, G. L., & Ostriker, J. P. 2017, ApJ, 841, 101 * Longair (2011) Longair, M. S. 2011, High Energy Astrophysics (Cambridge University Press) * Mannucci et al. (2005) Mannucci, F., Della Valle, M., Panagia, N., et al. 2005, A&A, 433, 807 * Mignone et al. (2007) Mignone, A., Bodo, G., Massaglia, S., et al. 2007, ApJS, 170, 228 * Mignone et al. (2012) Mignone, A., Zanni, C., Tzeferacos, P., et al. 2012, ApJS, 198, 7 * Morris (1996) Morris, M. 1996, in IAU Symposium, Vol. 169, Unsolved Problems of the Milky Way, ed. L. Blitz & P. J. Teuben, 247 * Morris & Yusef-Zadeh (1989) Morris, M., & Yusef-Zadeh, F. 1989, ApJ, 343, 703 * Mou et al. (2014) Mou, G., Yuan, F., Bu, D., Sun, M., & Su, M. 2014, ApJ, 790, 109 * Mou et al. (2015) Mou, G., Yuan, F., Gan, Z., & Sun, M. 2015, ApJ, 811, 37 * Nagoshi et al. (2019) Nagoshi, H., Kubose, Y., Fujisawa, K., et al. 2019, PASJ, 71, 80 * Najarro et al. (1997) Najarro, F., Krabbe, A., Genzel, R., et al. 1997, A&A, 325, 700 * Nakashima et al. 
# A Tight Lower Bound for Edge-Disjoint Paths on Planar DAGs

A preliminary version of this paper appeared in CIAC 2021.

Rajesh Chitnis, School of Computer Science, University of Birmingham, UK. <EMAIL_ADDRESS>

###### Abstract

Given a graph $G$ and a set $\mathcal{T}=\big{\\{}(s_{\\_}i,t_{\\_}i):1\leq i\leq k\big{\\}}$ of $k$ pairs, the Vertex-Disjoint Paths (resp. Edge-Disjoint Paths) problem asks to determine whether there exist pairwise vertex-disjoint (resp. edge-disjoint) paths $P_{\\_}1,P_{\\_}2,\ldots,P_{\\_}k$ in $G$ such that $P_{\\_}i$ connects $s_{\\_}i$ to $t_{\\_}i$ for each $1\leq i\leq k$. Unlike their undirected counterparts, which are FPT (parameterized by $k$) by Graph Minor theory, both the edge-disjoint and vertex-disjoint versions in directed graphs were shown by Fortune et al. (TCS ’80) to be NP-hard for $k=2$. This strong hardness for Disjoint Paths on general directed graphs led to the study of parameterized complexity on special graph classes, e.g., when the underlying undirected graph is planar. For Vertex-Disjoint Paths on planar directed graphs, Schrijver (SICOMP ’94) designed an $n^{O(k)}$ time algorithm, which was later improved upon by Cygan et al. (FOCS ’13), who designed an FPT algorithm running in $2^{2^{O(k^{2})}}\cdot n^{O(1)}$ time. To the best of our knowledge, the parameterized complexity of Edge-Disjoint Paths on planar directed graphs (a directed graph is planar if its underlying undirected graph is planar) is unknown. We resolve this gap by showing that Edge-Disjoint Paths is W[1]-hard parameterized by the number $k$ of terminal pairs, even when the input graph is a planar directed acyclic graph (DAG). This answers a question of Slivkins (ESA ’03, SIDMA ’10).
Moreover, under the Exponential Time Hypothesis (ETH), we show that there is no $f(k)\cdot n^{o(k)}$ algorithm for Edge-Disjoint Paths on planar DAGs, where $k$ is the number of terminal pairs, $n$ is the number of vertices and $f$ is any computable function. Our hardness holds even if both the maximum in-degree and maximum out-degree of the graph are at most $2$. We now place our result in the context of previously known algorithms and hardness for Edge-Disjoint Paths on special classes of directed graphs: * • Implications for Edge-Disjoint Paths on DAGs: Our result shows that the $n^{O(k)}$ algorithm of Fortune et al. (TCS ’80) for Edge-Disjoint Paths on DAGs is asymptotically tight, even if we add an extra restriction of planarity. The previous best lower bound (also under ETH) for Edge-Disjoint Paths on DAGs was $f(k)\cdot n^{o(k/\log k)}$ by Amiri et al. (MFCS ’16, IPL ’19), which improved upon the $f(k)\cdot n^{o(\sqrt{k})}$ lower bound implicit in Slivkins (ESA ’03, SIDMA ’10). * • Implications for Edge-Disjoint Paths on planar directed graphs: As a special case of our result, we obtain that Edge-Disjoint Paths on planar directed graphs is W[1]-hard parameterized by the number $k$ of terminal pairs. This answers a question of Cygan et al. (FOCS ’13) and Schrijver (pp. 417-444, Building Bridges II, ’19), and completes the landscape (see Table 2) of the parameterized complexity status of the edge and vertex versions of the Disjoint Paths problem on planar directed and planar undirected graphs.

## 1 Introduction

The Disjoint Paths problem is one of the most fundamental problems in graph theory: given a graph and a set of $k$ terminal pairs, the question is to determine whether there exists a collection of $k$ pairwise disjoint paths where each path connects one of the given terminal pairs. There are four natural variants of this problem depending on whether we consider undirected or directed graphs and the edge-disjoint or vertex-disjoint requirement. In undirected graphs, the edge-disjoint version is reducible to the vertex-disjoint version in polynomial time by considering the line graph.
In directed graphs, the edge-disjoint version and vertex-disjoint version are known to be equivalent in terms of designing exact algorithms. Besides its theoretical importance, the Disjoint Paths problem has found applications in VLSI design, routing, etc. The interested reader is referred to the surveys [20] and [42, Chapter 9] for more details. The case when the number of terminal pairs $k$ is bounded is of special interest: given a graph with $n$ vertices and $k$ terminal pairs, the goal is to design either FPT algorithms, i.e., algorithms whose running time is $f(k)\cdot n^{O(1)}$ for some computable function $f$, or XP algorithms, i.e., algorithms whose running time is $n^{g(k)}$ for some computable function $g$. We now discuss some of the known results on exact algorithms for different variants of the Disjoint Paths problem before stating our result. (This paper focuses on exact algorithms for the Disjoint Paths problem, so we do not discuss the results regarding (in)approximability.)

##### Prior work on exact algorithms for Disjoint Paths on undirected graphs:

The NP-hardness of Edge-Disjoint Paths and Vertex-Disjoint Paths on undirected graphs was shown by Even et al. [16]. Solving the Vertex-Disjoint Paths problem on undirected graphs is an important subroutine in checking whether a fixed graph $H$ is a minor of a graph $G$. Hence, a core algorithmic result of the seminal work of Robertson and Seymour was their FPT algorithm [40] for Vertex-Disjoint Paths (and hence also Edge-Disjoint Paths) on general undirected graphs, which runs in $O(g(k)\cdot n^{3})$ time for some function $g$. The cubic dependence on the input size was improved to quadratic by Kawarabayashi et al. [28], who designed an algorithm running in $O(h(k)\cdot n^{2})$ time for some function $h$. Both the functions $g$ and $h$ are quite large (at least quintuple-exponential as per [2]). This naturally led to the search for faster FPT algorithms on planar graphs: Adler et al.
[2] designed an algorithm for Vertex-Disjoint Paths on planar graphs which runs in $2^{2^{O(k^{2})}}\cdot n^{O(1)}$ time. Very recently, this was improved to a single-exponential-time FPT algorithm which runs in $2^{O(k^{2})}\cdot n^{O(1)}$ time by Lokshtanov et al. [32]. There are two more variants of the Disjoint Paths problem: the _half-integral_ version, where each vertex/edge can belong to at most two paths, and the _parity_ version, where the length of each path is required to respect a given parity (even or odd) condition. FPT algorithms are known for each of the following versions of Vertex-Disjoint Paths on general undirected graphs: the half-integral version [31, 24], the half-integral version with parity [25], and the parity version alone (without half-integrality) [27].

##### Prior work on exact algorithms for Disjoint Paths on directed graphs:

Unlike undirected graphs, where both Edge-Disjoint Paths and Vertex-Disjoint Paths are FPT parameterized by $k$, the Disjoint Paths problem becomes significantly harder for directed graphs: Fortune et al. [19] showed that both Edge-Disjoint Paths and Vertex-Disjoint Paths on general directed graphs are NP-hard even for $k=2$. For general directed graphs, Giannopoulou et al. [21] recently designed an XP algorithm for the half-integral version of Disjoint Paths: here the goal is to either find a set of $k$ paths $P_{\\_}{1},P_{\\_}{2},\ldots,P_{\\_}{k}$ such that $P_{\\_}{i}$ is an $s_{\\_}i\leadsto t_{\\_}i$ path for each $i\in[k]$ and each vertex in the graph appears in at most two of the paths, or conclude that the given instance has no solution with pairwise disjoint paths. This algorithm improves upon an older XP algorithm of Kawarabayashi et al. [26] for the quarter-integral case in general digraphs. The Disjoint Paths problem has also been extensively studied on special subclasses of digraphs: * • Disjoint Paths on DAGs: It is easy to show that Vertex-Disjoint Paths and Edge-Disjoint Paths are equivalent on the class of directed acyclic graphs (DAGs). Fortune et al.
[19] designed an $n^{O(k)}$ algorithm for Edge-Disjoint Paths on DAGs. Slivkins [44] showed W[1]-hardness of Edge-Disjoint Paths on DAGs, and an $f(k)\cdot n^{o(\sqrt{k})}$ lower bound (for any computable function $f$) under the Exponential Time Hypothesis [22, 23] (ETH) follows from that reduction. Amiri et al. [3] (we note that [3] considers a more general version of Disjoint Paths which allows congestion) improved the lower bound to $f(k)\cdot n^{o(k/\log k)}$, thus showing that the algorithm of Fortune et al. [19] is almost tight. * • Disjoint Paths on directed planar graphs: Schrijver [41] designed an $n^{O(k)}$ algorithm for Vertex-Disjoint Paths on directed planar graphs. This was improved upon by Cygan et al. [12], who designed an FPT algorithm running in $2^{2^{O(k^{2})}}\cdot n^{O(1)}$ time. As pointed out by Cygan et al. [12], their FPT algorithm for Vertex-Disjoint Paths on directed planar graphs does not work for the Edge-Disjoint Paths problem. The status of the parameterized complexity (parameterized by $k$) of Edge-Disjoint Paths on directed planar graphs remained an open question. Table 1 gives a summary of known results for exact algorithms for Disjoint Paths on (subclasses of) directed graphs.

Graph class | Problem type | Algorithm | Lower bound
---|---|---|---
General graphs | Vertex-disjoint = edge-disjoint | ???? | NP-hard for $k=2$
DAGs | Vertex-disjoint = edge-disjoint | $n^{O(k)}$ [19] | $f(k)\cdot n^{o(\sqrt{k})}$ [44]; $f(k)\cdot n^{o(k/\log k)}$ [3]; $f(k)\cdot n^{o(k)}$ [this paper]
Planar graphs | Vertex-disjoint | $n^{O(k)}$ [41]; $2^{2^{O(k^{2})}}\cdot n^{O(1)}$ [12] | ????
Planar graphs | Edge-disjoint | ???? | $f(k)\cdot n^{o(k)}$ [this paper]
Planar DAGs | Vertex-disjoint | $2^{2^{O(k^{2})}}\cdot n^{O(1)}$ [12] | ????
Planar DAGs | Edge-disjoint | $n^{O(k)}$ [19] | $f(k)\cdot n^{o(k)}$ [this paper]

Table 1: The landscape of parameterized complexity results for Disjoint Paths on directed graphs. All lower bounds are under the Exponential Time Hypothesis (ETH). To the best of our knowledge, the entries marked with ????
have no known non-trivial results.

##### Our result:

We resolve this open question by showing a slightly stronger result: the Edge-Disjoint Paths problem is W[1]-hard parameterized by $k$ when the input graph is a planar DAG whose maximum in-degree and maximum out-degree are both at most $2$. First we define the problem formally below, and then state our result:

Edge-Disjoint Paths
_Input_ : A directed graph $G=(V,E)$, and a set $\mathcal{T}\subseteq V\times V$ of $k$ terminal pairs given by $\big{\\{}(s_{\\_}i,t_{\\_}i):1\leq i\leq k\big{\\}}$.
_Question_ : Do there exist $k$ pairwise edge-disjoint paths $P_{\\_}1,P_{\\_}2,\ldots,P_{\\_}k$ such that $P_{\\_}i$ is an $s_{\\_}i\leadsto t_{\\_}i$ path for each $1\leq i\leq k$?
_Parameter_ : $k$

###### Theorem 1.1. The Edge-Disjoint Paths problem on planar DAGs is W[1]-hard parameterized by the number $k$ of terminal pairs. Moreover, under ETH, the Edge-Disjoint Paths problem on planar DAGs cannot be solved in $f(k)\cdot n^{o(k)}$ time, where $f$ is any computable function, $n$ is the number of vertices and $k$ is the number of terminal pairs. The hardness holds even if both the maximum in-degree and maximum out-degree of the graph are at most $2$.

Recall that the Exponential Time Hypothesis (ETH) states that $n$-variable $m$-clause 3-SAT cannot be solved in $2^{o(n)}\cdot(n+m)^{O(1)}$ time [22, 23]. Prior to our result, only the NP-completeness of Edge-Disjoint Paths on planar DAGs was known [45]. The reduction used in Theorem 1.1 is heavily inspired by some known reductions: in particular, the planar DAG structure (Figure 2) is from [6, 7] and the splitting operation (Figure 3 and Section 2.1.2) is from [4, 5]. We view the simplicity of our reduction as evidence of the success of the (now) established methodology of showing W[1]-hardness (and ETH-based hardness) for planar graph problems using Grid-Tiling and its variants.

##### Placing Theorem 1.1 in the context of prior work:

Theorem 1.1 answers a question of Slivkins [44] regarding the parameterized complexity of Edge-Disjoint Paths on planar DAGs.
As a special case of Theorem 1.1, one obtains that Edge-Disjoint Paths on planar directed graphs is W[1]-hard parameterized by the number $k$ of terminal pairs: this answers a question of Cygan et al. [12] and Schrijver [43]. The W[1]-hardness result of Theorem 1.1 completes the landscape (see Table 2) of the parameterized complexity of the edge-disjoint and vertex-disjoint versions of the Disjoint Paths problem on planar directed and planar undirected graphs. Theorem 1.1 also shows that the $n^{O(k)}$ algorithm of Fortune et al. [19] for Edge-Disjoint Paths on DAGs is asymptotically optimal, even if we add an extra restriction of planarity to the mix. Theorem 1.1 adds another problem (on DAGs) to the relatively small list of problems for which it is provably known that the planar version has the same asymptotic complexity as the problem on general graphs: the only other such problems we are aware of are those of [5, 7, 38]. This is in contrast to the fact that for several problems [34, 29, 38, 30, 18, 14, 39, 36, 33, 1, 17], the planar version is easier by (roughly) a square-root factor in the exponent as compared to general graphs, and there are lower bounds indicating that this improvement is essentially the best possible [35].

Graph class | Problem type | Parameterized complexity (parameterized by $k$)
---|---|---
Planar undirected | Vertex-disjoint | FPT [2, 32, 28, 40]
Planar undirected | Edge-disjoint | FPT [2, 32, 28, 40]
Planar directed | Vertex-disjoint | FPT [12]
Planar directed | Edge-disjoint | W[1]-hard [this paper]

Table 2: The landscape of parameterized complexity results for the four different versions (edge-disjoint vs. vertex-disjoint & directed vs. undirected) of Disjoint Paths on planar graphs.

##### Organization of the paper:

In Section 2.1 we describe the construction of the instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths. The two directions of the reduction are shown in Section 2.2 and Section 2.3 respectively. Finally, Section 2.4 contains the proof of Theorem 1.1. We conclude with some open questions in Section 3.
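To make the object of study concrete: a certificate for an Edge-Disjoint Paths instance is one path per terminal pair, with no directed edge used by two paths, and it can be verified in linear time. The following is a minimal illustrative sketch of ours (not from the paper; all names are hypothetical), which also accepts the zero-edge path $s\leadsto s$, matching the "just staying put" convention adopted in the Notation paragraph below:

```python
def check_edge_disjoint_paths(edges, paths, terminal_pairs):
    """Verify a candidate solution to an Edge-Disjoint Paths instance.

    edges: set of directed edges (u, v) of the graph G.
    paths[i]: a vertex sequence claimed to be an s_i ~> t_i path.
    terminal_pairs[i]: the pair (s_i, t_i).
    A one-vertex path [s] with s == t is accepted ("just staying put")."""
    if len(paths) != len(terminal_pairs):
        return False
    used = set()  # edges already consumed by earlier paths
    for path, (s, t) in zip(paths, terminal_pairs):
        if not path or path[0] != s or path[-1] != t:
            return False
        for edge in zip(path, path[1:]):
            if edge not in edges or edge in used:
                return False  # not an edge of G, or an edge reused
            used.add(edge)
    return True
```

Theorem 1.1 says that on planar DAGs *finding* such a certificate is W[1]-hard in $k$, even though checking one is trivial.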
##### Notation:

All graphs considered in this paper are directed and do not have self-loops or multiple edges. We use (mostly) standard graph theory notation [15]. The set $\\{1,2,3,\ldots,M\\}$ is denoted by $[M]$ for each $M\in\mathbb{N}$. A directed edge (resp. path) from $s$ to $t$ is denoted by $s\to t$ (resp. $s\leadsto t$). We use the following non-standard notation (to avoid having to consider different cases in our proofs): $s\leadsto s$ does not represent a self-loop but rather is to be viewed as _“just staying put”_ at the vertex $s$. If $A,B\subseteq V(G)$ then we say that there is an $A\leadsto B$ path if and only if there exist two vertices $a\in A,b\in B$ such that there is an $a\leadsto b$ path. For $A\subseteq V(G)$ we define $N_{\\_}{G}^{+}(A)=\big{\\{}x\notin A\ :\exists\ y\in A\ \text{such that }(y,x)\in E(G)\big{\\}}$ and $N_{\\_}{G}^{-}(A)=\big{\\{}x\notin A\ :\exists\ y\in A\ \text{such that }(x,y)\in E(G)\big{\\}}$. For $A\subseteq V(G)$ we define $G[A]$ to be the graph induced on the vertex set $A$, i.e., $G[A]:=(A,E_{\\_}A)$ where $E_{\\_}A:=E(G)\cap(A\times A)$.

## 2 W[1]-hardness of Edge-Disjoint Paths on Planar DAGs

To obtain W[1]-hardness for Edge-Disjoint Paths on planar DAGs, we reduce from the Grid-Tiling-$\leq$ problem [37], which is defined below:

Grid-Tiling-$\leq$
_Input_ : Integers $k,N$, and a collection $\mathcal{S}$ of $k^{2}$ sets given by $\big{\\{}S_{\\_}{x,y}\subseteq[N]\times[N]\ :1\leq x,y\leq k\big{\\}}$.
_Question_ : For each $1\leq x,y\leq k$ does there exist a pair $\gamma_{\\_}{x,y}\in S_{\\_}{x,y}$ such that
• if $\gamma_{\\_}{x,y}=(a,b)$ and $\gamma_{\\_}{x+1,y}=(a^{\prime},b^{\prime})$ then $b\leq b^{\prime}$, and
• if $\gamma_{\\_}{x,y}=(a,b)$ and $\gamma_{\\_}{x,y+1}=(a^{\prime},b^{\prime})$ then $a\leq a^{\prime}$?

Figure 1: An instance of Grid-Tiling-$\leq$ with $k=3,N=5$ and a solution highlighted in red.
Note that in a solution, the second coordinates in a row are non-decreasing as we go from left to right, and the first coordinates in a column are non-decreasing as we go from bottom to top. Figure 1 gives an illustration of an instance of Grid-Tiling-$\leq$ along with a solution. It is known [13, Theorem 14.30] that Grid-Tiling-$\leq$ is W[1]-hard parameterized by $k$, and under the Exponential Time Hypothesis (ETH) has no $f(k)\cdot N^{o(k)}$ algorithm for any computable function $f$. We will exploit this result by reducing an instance $(k,N,\mathcal{S})$ of Grid-Tiling-$\leq$ in $\text{poly}(N,k)$ time to an instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths such that $G_{\\_}2$ is a planar DAG, the number of vertices in $G_{\\_}2$ is $|V(G_{\\_}2)|=O(N^{2}k^{2})$, and the number of terminal pairs is $|\mathcal{T}|=2k$. ###### Remark . Our definition of Grid-Tiling-$\leq$ above is slightly different from the one given in [13, Theorem 14.30]: there the constraints are that the first coordinate of $\gamma_{\\_}{x,y}$ is $\leq$ the first coordinate of $\gamma_{\\_}{x+1,y}$, and the second coordinate of $\gamma_{\\_}{x,y}$ is $\leq$ the second coordinate of $\gamma_{\\_}{x,y+1}$. By rotating the axes by $90^{\circ}$, i.e., swapping the indices, our version of Grid-Tiling-$\leq$ is equivalent to that from [13, Theorem 14.30].

### 2.1 Construction of the instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths

Consider an instance $(N,k,\mathcal{S})$ of Grid-Tiling-$\leq$. We now build an instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths as follows: first, in Section 2.1.1, we describe the construction of an intermediate graph $G_{\\_}1$ (Figure 2). The splitting operation is defined in Section 2.1.2, and the graph $G_{\\_}2$ is obtained from $G_{\\_}1$ by splitting each (black) grid vertex.
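Before turning to the construction, it may help to see the Grid-Tiling-$\leq$ constraints stated operationally. The following is a small illustrative sketch of ours (not part of the reduction; names are hypothetical) that checks a candidate solution, with 1-indexed cells exactly as in the definition above:

```python
def is_valid_tiling(k, gamma, S):
    """Check a candidate solution for Grid-Tiling-<=.

    gamma and S are dicts keyed by (x, y) with 1 <= x, y <= k.
    gamma[x, y] must lie in S[x, y]; second coordinates must be
    non-decreasing along each row (increasing x) and first coordinates
    non-decreasing along each column (increasing y)."""
    for x in range(1, k + 1):
        for y in range(1, k + 1):
            if gamma[x, y] not in S[x, y]:
                return False
            a, b = gamma[x, y]
            if x < k:  # horizontal constraint on second coordinates
                _, b2 = gamma[x + 1, y]
                if not b <= b2:
                    return False
            if y < k:  # vertical constraint on first coordinates
                a2, _ = gamma[x, y + 1]
                if not a <= a2:
                    return False
    return True
```

Checking a candidate is easy; the W[1]-hardness of [13, Theorem 14.30] concerns finding one.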
#### 2.1.1 Construction of the graph $G_{\\_}1$

Figure 2: The graph $G_{\\_}1$ constructed for the input $k=3$ and $N=5$ via the construction described in Section 2.1.1 (the green terminal vertices $a_{\\_}{1},a_{\\_}{2},a_{\\_}{3}$, $b_{\\_}{1},b_{\\_}{2},b_{\\_}{3}$, $c_{\\_}{1},c_{\\_}{2},c_{\\_}{3}$, $d_{\\_}{1},d_{\\_}{2},d_{\\_}{3}$ and the origin are labeled in the figure). The final graph $G_{\\_}2$ for the instance is obtained from $G_{\\_}1$ by the splitting operation described in Section 2.1.2.

Given integers $k$ and $N$, we build a directed graph $G_{\\_}1$ as follows (refer to Figure 2): 1. Origin: The origin is marked at the bottom left corner of Figure 2. This is defined just so that we can view the naming of the vertices as per the usual $X-Y$ coordinate system: increasing horizontally towards the right, and vertically towards the top. 2. Grid (black) vertices and edges: For each $1\leq i,j\leq k$ we introduce a (directed) $N\times N$ grid $G_{\\_}{i,j}$ where the column numbers increase from $1$ to $N$ as we go from left to right, and the row numbers increase from $1$ to $N$ as we go from bottom to top. For each $1\leq q,\ell\leq N$ the unique vertex which is the intersection of the $q^{\text{th}}$ column and $\ell^{\text{th}}$ row of $G_{\\_}{i,j}$ is denoted by $\textbf{w}_{\\_}{i,j}^{q,\ell}$. The vertex set and edge set of $G_{\\_}{i,j}$ are defined formally as: * • $V(G_{\\_}{i,j})=\big{\\{}\textbf{w}_{\\_}{i,j}^{q,\ell}:1\leq q,\ell\leq N\big{\\}}$ * • $E(G_{\\_}{i,j})=\Big{(}\bigcup_{\\_}{(q,\ell)\in[N]\times[N-1]}\textbf{w}_{\\_}{i,j}^{q,\ell}\to\textbf{w}_{\\_}{i,j}^{q,\ell+1}\Big{)}\cup\Big{(}\bigcup_{\\_}{(q,\ell)\in[N-1]\times[N]}\textbf{w}_{\\_}{i,j}^{q,\ell}\to\textbf{w}_{\\_}{i,j}^{q+1,\ell}\Big{)}$ All vertices and edges of $G_{\\_}{i,j}$ are shown in Figure 2 using black color. Note that each horizontal edge of the grid $G_{\\_}{i,j}$ is oriented to the right, and each vertical edge is oriented towards the top.
We will later (Section 2.1.2) modify the grid $G_{\\_}{i,j}$ to _represent_ the set $S_{\\_}{i,j}$. For each $1\leq i,j\leq k$ we define the set of _boundary_ vertices of the grid $G_{\\_}{i,j}$ as follows: $\displaystyle\texttt{Left}(G_{\\_}{i,j}):=\big{\\{}\textbf{w}_{\\_}{i,j}^{1,\ell}\ :\ \ell\in[N]\big{\\}}\ ;\ \texttt{Right}(G_{\\_}{i,j}):=\big{\\{}\textbf{w}_{\\_}{i,j}^{N,\ell}\ :\ \ell\in[N]\big{\\}}$ (1) $\displaystyle\texttt{Top}(G_{\\_}{i,j}):=\big{\\{}\textbf{w}_{\\_}{i,j}^{\ell,N}\ :\ \ell\in[N]\big{\\}}\ ;\ \texttt{Bottom}(G_{\\_}{i,j}):=\big{\\{}\textbf{w}_{\\_}{i,j}^{\ell,1}\ :\ \ell\in[N]\big{\\}}$ 3. Arranging the $k^{2}$ different $N\times N$ grids $\\{G_{\\_}{i,j}\\}_{\\_}{1\leq i,j\leq k}$ into a large $k\times k$ grid: We place the grids $G_{\\_}{i,j}$ into a big $k\times k$ grid of grids, from left to right according to increasing $i$ and from bottom to top according to increasing $j$ (see the naming of the sets in Figure 1 in blue color). In particular, the grid $G_{\\_}{1,1}$ is at the bottom left corner of the construction, the grid $G_{\\_}{k,k}$ is at the top right corner, and so on. 4. Blue vertices and red edges for horizontal connections: For each $(i,j)\in[k-1]\times[k]$ we add a set of vertices $H_{\\_}{i,j}^{i+1,j}:=\big{\\{}\textbf{h}_{\\_}{i,j}^{i+1,j}(\ell)\ :\ \ell\in[N]\big{\\}}$ shown in Figure 2 using blue color.
We also add the following three sets of edges (shown in Figure 2 using red color): * • a directed path of $N-1$ edges given by $\texttt{Path}(H_{\\_}{i,j}^{i+1,j}):=\big{\\{}\textbf{h}_{\\_}{i,j}^{i+1,j}(\ell)\to\textbf{h}_{\\_}{i,j}^{i+1,j}(\ell+1)\ :\ \ell\in[N-1]\big{\\}}$ * • a directed perfect matching from $\texttt{Right}(G_{\\_}{i,j})$ to $H_{\\_}{i,j}^{i+1,j}$ given by $\texttt{Matching}\big{(}G_{\\_}{i,j},H_{\\_}{i,j}^{i+1,j}\big{)}:=\big{\\{}\textbf{w}_{\\_}{i,j}^{N,\ell}\to\textbf{h}_{\\_}{i,j}^{i+1,j}(\ell)\ :\ \ell\in[N]\big{\\}}$ * • a directed perfect matching from $H_{\\_}{i,j}^{i+1,j}$ to $\texttt{Left}(G_{\\_}{i+1,j})$ given by $\texttt{Matching}\big{(}H_{\\_}{i,j}^{i+1,j},G_{\\_}{i+1,j}\big{)}:=\big{\\{}\textbf{h}_{\\_}{i,j}^{i+1,j}(\ell)\to\textbf{w}_{\\_}{i+1,j}^{1,\ell}\ :\ \ell\in[N]\big{\\}}$ 5. Blue vertices and red edges for vertical connections: For each $(i,j)\in[k]\times[k-1]$ we add a set of vertices $V_{\\_}{i,j}^{i,j+1}:=\big{\\{}\textbf{v}_{\\_}{i,j}^{i,j+1}(\ell)\ :\ \ell\in[N]\big{\\}}$ shown in Figure 2 using blue color. We also add the following three sets of edges (shown in Figure 2 using red color): * • a directed path of $N-1$ edges given by $\texttt{Path}(V_{\\_}{i,j}^{i,j+1}):=\big{\\{}\textbf{v}_{\\_}{i,j}^{i,j+1}(\ell)\to\textbf{v}_{\\_}{i,j}^{i,j+1}(\ell+1)\ :\ \ell\in[N-1]\big{\\}}$ * • a directed perfect matching from $\texttt{Top}(G_{\\_}{i,j})$ to $V_{\\_}{i,j}^{i,j+1}$ given by $\texttt{Matching}\big{(}G_{\\_}{i,j},V_{\\_}{i,j}^{i,j+1}\big{)}:=\big{\\{}\textbf{w}_{\\_}{i,j}^{\ell,N}\to\textbf{v}_{\\_}{i,j}^{i,j+1}(\ell)\ :\ \ell\in[N]\big{\\}}$ * • a directed perfect matching from $V_{\\_}{i,j}^{i,j+1}$ to $\texttt{Bottom}(G_{\\_}{i,j+1})$ given by $\texttt{Matching}\big{(}V_{\\_}{i,j}^{i,j+1},G_{\\_}{i,j+1}\big{)}:=\big{\\{}\textbf{v}_{\\_}{i,j}^{i,j+1}(\ell)\to\textbf{w}_{\\_}{i,j+1}^{\ell,1}\ :\ \ell\in[N]\big{\\}}$ 6.
Green (terminal) vertices and magenta edges: For each $i\in[k]$ we add the following four sets of (terminal) vertices (shown in Figure 2 using green color) $\displaystyle A:=\big{\\{}a_{\\_}i\ :\ i\in[k]\big{\\}}\quad;\quad B:=\big{\\{}b_{\\_}i\ :\ i\in[k]\big{\\}}$ (2) $\displaystyle C:=\big{\\{}c_{\\_}i\ :\ i\in[k]\big{\\}}\quad;\quad D:=\big{\\{}d_{\\_}i\ :\ i\in[k]\big{\\}}$ For each $i\in[k]$ we add the edges (shown in Figure 2 using magenta color) $\displaystyle\texttt{Source}(A):=\big{\\{}a_{\\_}i\to\textbf{w}_{\\_}{i,1}^{\ell,1}\ :\ \ell\in[N]\big{\\}}\ ;\ \texttt{Sink}(B):=\big{\\{}\textbf{w}_{\\_}{i,k}^{\ell,N}\to b_{\\_}{i}\ :\ \ell\in[N]\big{\\}}$ (3) For each $j\in[k]$ we add the edges (shown in Figure 2 using magenta color) $\displaystyle\texttt{Source}(C):=\big{\\{}c_{\\_}j\to\textbf{w}_{\\_}{1,j}^{1,\ell}\ :\ \ell\in[N]\big{\\}}\ ;\ \texttt{Sink}(D):=\big{\\{}\textbf{w}_{\\_}{k,j}^{N,\ell}\to d_{\\_}{j}\ :\ \ell\in[N]\big{\\}}$ (4) This completes the construction of the graph $G_{\\_}1$ (see Figure 2). ###### Claim . $G_{\\_}1$ is a planar DAG. ###### Proof. Figure 2 gives a planar embedding of $G_{\\_}1$. It is easy to verify from the construction of $G_{\\_}1$ described at the start of Section 2.1.1 (see also Figure 2) that $G_{\\_}1$ is a DAG. ∎ #### 2.1.2 Obtaining the graph $G_{\\_}2$ from $G_{\\_}1$ via the splitting operation Observe (see Figure 2) that every (black) grid vertex in $G_{\\_}1$ has in-degree two and out-degree two. Moreover, the two in-neighbors (west and south) and the two out-neighbors (east and north) do not appear alternately around the vertex. For each (black) grid vertex $z\in G_{\\_}1$ we set up the notation:
(four neighbors of each grid vertex in $G_{\\_}1$) For each (black) grid vertex $\textbf{z}\in G_{\\_}1$ we define the following four vertices * • $\texttt{west}(\textbf{z})$ is the vertex to the left of z (as seen by the reader) which has an edge incoming into z * • $\texttt{south}(\textbf{z})$ is the vertex below z (as seen by the reader) which has an edge incoming into z * • $\texttt{east}(\textbf{z})$ is the vertex to the right of z (as seen by the reader) which has an edge outgoing from z * • $\texttt{north}(\textbf{z})$ is the vertex above z (as seen by the reader) which has an edge outgoing from z We now define the splitting operation which allows us to obtain the graph $G_{\\_}2$ from the graph $G_{\\_}1$ constructed in Section 2.1.1. ###### Definition . (splitting operation) For each $i,j\in[k]$ and each $q,\ell\in[N]$ * • If $(q,\ell)\notin S_{\\_}{i,j}$, then we split the vertex $\textbf{w}_{\\_}{i,j}^{q,\ell}$ into two distinct vertices $\textbf{w}_{\\_}{i,j,\text{LB}}^{q,\ell}$ and $\textbf{w}_{\\_}{i,j,\text{TR}}^{q,\ell}$ and add the edge $\textbf{w}_{\\_}{i,j,\text{LB}}^{q,\ell}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{q,\ell}$ (denoted by the dotted edge in Figure 3). 
The 4 edges (see Section 2.1.2) incident on $\textbf{w}_{\\_}{i,j}^{q,\ell}$ are now changed as follows (see Figure 3): * – Replace the edge $\texttt{west}(\textbf{w}_{\\_}{i,j}^{q,\ell})\to\textbf{w}_{\\_}{i,j}^{q,\ell}$ by the edge $\texttt{west}(\textbf{w}_{\\_}{i,j}^{q,\ell})\to\textbf{w}_{\\_}{i,j,\text{LB}}^{q,\ell}$ * – Replace the edge $\texttt{south}(\textbf{w}_{\\_}{i,j}^{q,\ell})\to\textbf{w}_{\\_}{i,j}^{q,\ell}$ by the edge $\texttt{south}(\textbf{w}_{\\_}{i,j}^{q,\ell})\to\textbf{w}_{\\_}{i,j,\text{LB}}^{q,\ell}$ * – Replace the edge $\textbf{w}_{\\_}{i,j}^{q,\ell}\to\texttt{east}(\textbf{w}_{\\_}{i,j}^{q,\ell})$ by the edge $\textbf{w}_{\\_}{i,j,\text{TR}}^{q,\ell}\to\texttt{east}(\textbf{w}_{\\_}{i,j}^{q,\ell})$ * – Replace the edge $\textbf{w}_{\\_}{i,j}^{q,\ell}\to\texttt{north}(\textbf{w}_{\\_}{i,j}^{q,\ell})$ by the edge $\textbf{w}_{\\_}{i,j,\text{TR}}^{q,\ell}\to\texttt{north}(\textbf{w}_{\\_}{i,j}^{q,\ell})$ * • Otherwise, if $(q,\ell)\in S_{\\_}{i,j}$ then the vertex $\textbf{w}_{\\_}{i,j}^{q,\ell}$ is not split, and we define $\textbf{w}_{\\_}{i,j,\text{LB}}^{q,\ell}=\textbf{w}_{\\_}{i,j}^{q,\ell}=\textbf{w}_{\\_}{i,j,\text{TR}}^{q,\ell}$. Note that the four edges (Section 2.1.2) incident on $\textbf{w}_{\\_}{i,j}^{q,\ell}$ are unchanged. ###### Remark . To avoid case distinctions in the forthcoming proof of correctness of the reduction, we will use the following non-standard notation: the edge $s\leadsto s$ does not represent a self-loop but rather is to be viewed as _“just staying put”_ at the vertex $s$. Note that this does not affect edge- disjointness. 
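Sections 2.1.1 and 2.1.2 together describe a mechanical construction, which can be sketched in code. The following illustrative sketch is ours, not the paper's: vertex labels are plain tuples, e.g. ('w', i, j, q, l) stands for $\textbf{w}_{\\_}{i,j}^{q,\ell}$, and applying split_vertex to every grid vertex whose coordinates are not in $S_{\\_}{i,j}$ yields $G_{\\_}2$:

```python
def build_G1(k, N):
    """Return the directed edge set of G_1 (Section 2.1.1)."""
    w = lambda i, j, q, l: ('w', i, j, q, l)      # grid vertex w_{i,j}^{q,l}
    h = lambda i, j, l: ('h', i, j, l)            # h_{i,j}^{i+1,j}(l)
    v = lambda i, j, l: ('v', i, j, l)            # v_{i,j}^{i,j+1}(l)
    E = set()
    for i in range(1, k + 1):                     # 2. grid vertices and edges
        for j in range(1, k + 1):
            for q in range(1, N + 1):
                for l in range(1, N + 1):
                    if l < N: E.add((w(i, j, q, l), w(i, j, q, l + 1)))  # up
                    if q < N: E.add((w(i, j, q, l), w(i, j, q + 1, l)))  # right
    for i in range(1, k):                         # 4. horizontal connections
        for j in range(1, k + 1):
            for l in range(1, N + 1):
                E.add((w(i, j, N, l), h(i, j, l)))          # Right -> H
                E.add((h(i, j, l), w(i + 1, j, 1, l)))      # H -> Left
                if l < N: E.add((h(i, j, l), h(i, j, l + 1)))
    for i in range(1, k + 1):                     # 5. vertical connections
        for j in range(1, k):
            for l in range(1, N + 1):
                E.add((w(i, j, l, N), v(i, j, l)))          # Top -> V
                E.add((v(i, j, l), w(i, j + 1, l, 1)))      # V -> Bottom
                if l < N: E.add((v(i, j, l), v(i, j, l + 1)))
    for i in range(1, k + 1):                     # 6. terminal (magenta) edges
        for l in range(1, N + 1):
            E.add((('a', i), w(i, 1, l, 1)))                # Source(A)
            E.add((w(i, k, l, N), ('b', i)))                # Sink(B)
            E.add((('c', i), w(1, i, 1, l)))                # Source(C)
            E.add((w(k, i, N, l), ('d', i)))                # Sink(D)
    return E

def split_vertex(E, z):
    """The splitting operation (Section 2.1.2) on one grid vertex z:
    in-edges (west/south) are redirected to z_LB, out-edges (east/north)
    leave from z_TR, and the dotted edge z_LB -> z_TR is added."""
    lb, tr = z + ('LB',), z + ('TR',)
    E2 = {(u if u != z else tr, x if x != z else lb) for (u, x) in E}
    E2.add((lb, tr))  # a single dotted edge: paths can cross z only once
    return E2
```

Since all four former edges of a split vertex must now funnel through the single dotted edge, edge-disjoint paths can cross the vertex left-to-right or bottom-to-top, but not both, which is exactly the gadget's purpose.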
Figure 3: The splitting operation for the vertex $\textbf{w}_{\\_}{i,j}^{q,\ell}$ when $(q,\ell)\notin S_{\\_}{i,j}$. The idea behind this splitting is that if we want edge-disjoint paths, then we can cross the vertex either left-to-right or bottom-to-top, but not in both directions. On the other hand, if $(q,\ell)\in S_{\\_}{i,j}$ then the picture on the right-hand side (after the splitting operation) would look exactly like that on the left-hand side. We are now ready to define the graph $G_{\\_}2$ and the set $\mathcal{T}$ of terminal pairs: ###### Definition . The graph $G_{\\_}2$ is obtained by applying the splitting operation (Section 2.1.2) to each (black) grid vertex of $G_{\\_}1$, i.e., the set of vertices given by $\bigcup_{\\_}{1\leq i,j\leq k}V(G_{\\_}{i,j})$. The set of terminal pairs is $\mathcal{T}:=\big{\\{}(a_{\\_}i,b_{\\_}i):i\in[k]\big{\\}}\cup\big{\\{}(c_{\\_}j,d_{\\_}j):j\in[k]\big{\\}}$ Note that in $G_{\\_}2$ we have * • All vertices in $G_{\\_}{2}$ except $A\cup C$ have out-degree at most $2$ * • All vertices in $G_{\\_}{2}$ except $B\cup D$ have in-degree at most $2$ We will later show (see the last paragraph in the proof of Theorem 1.1) how to edit $G_{\\_}2$ such that each vertex has both in-degree and out-degree at most $2$. The next claim shows that $G_{\\_}2$ is also both planar and acyclic (like $G_{\\_}1$). ###### Claim . $G_{\\_}2$ is a planar DAG. ###### Proof. In Section 2.1.1, we have shown that $G_{\\_}1$ is a planar DAG.
By Section 2.1.2, $G_{\\_}2$ is obtained from $G_{\\_}1$ by applying the splitting operation (Section 2.1.2) on every (black) grid vertex, i.e., every vertex from the set $\bigcup_{\\_}{1\leq i,j\leq k}V(G_{\\_}{i,j})$. By Section 2.1.2, every vertex of $G_{\\_}1$ that is split has exactly two in-neighbors and two out-neighbors in $G_{\\_}1$. Hence, it is easy to see (Figure 3) that the splitting operation (Section 2.1.2) does not destroy planarity when we construct $G_{\\_}2$ from $G_{\\_}1$. Since $G_{\\_}1$ is a DAG, replacing each split (black) grid vertex w in $G_{\\_}1$ by $\textbf{w}_{\\_}{\text{LB}}$ followed by $\textbf{w}_{\\_}{\text{TR}}$ in the topological order of $G_{\\_}1$ gives a topological order for $G_{\\_}2$. Hence, $G_{\\_}2$ is a planar DAG. ∎ We now set up notation for the grids in $G_{\\_}2$: ###### Definition . For each $i,j\in[k]$, we define $G_{\\_}{i,j}^{\texttt{split}}$ to be the graph obtained by applying the splitting operation (Section 2.1.2) to each vertex of $G_{\\_}{i,j}$. For each $i,j\in[k]$ and each $q,\ell\in[N]$ we define $\texttt{split}(\textbf{w}_{\\_}{i,j}^{q,\ell}):=\big{\\{}\textbf{w}_{\\_}{i,j,\text{LB}}^{q,\ell},\textbf{w}_{\\_}{i,j,\text{TR}}^{q,\ell}\big{\\}}$.

### 2.2 Solution for Edge-Disjoint Paths $\Rightarrow$ Solution for Grid-Tiling-$\leq$

In this section, we show that if the instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths has a solution, then the instance $(k,N,\mathcal{S})$ of Grid-Tiling-$\leq$ also has a solution.
Suppose that the instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths has a solution, i.e., there is a collection of $2k$ pairwise edge-disjoint paths $\big{\\{}P_{\\_}1,P_{\\_}2,\ldots,P_{\\_}k,$ $Q_{\\_}1,Q_{\\_}2,\ldots,Q_{\\_}k\big{\\}}$ in $G_{\\_}2$ such that $\displaystyle P_{\\_}{i}\ \text{is an}\ a_{\\_}i\leadsto b_{\\_}i\ \text{path}\ \forall\ i\in[k]$ (5) $\displaystyle Q_{\\_}{j}\ \text{is a}\ c_{\\_}j\leadsto d_{\\_}j\ \text{path}\ \forall\ j\in[k]$ To streamline the arguments of this section, we define the following subsets of vertices of $G_{\\_}2$: ###### Definition . (horizontal & vertical levels) For each $j\in[k]$, we define the following set of vertices: $\textsc{Horizontal}(j)=\\{c_{\\_}j,d_{\\_}j\\}\cup\Big{(}\bigcup_{\\_}{i=1}^{k}V(G_{\\_}{i,j}^{\texttt{split}})\Big{)}\cup\Big{(}\bigcup_{\\_}{i=1}^{k-1}H_{\\_}{i,j}^{i+1,j}\Big{)}$ For each $i\in[k]$, we define the following set of vertices: $\textsc{Vertical}(i)=\\{a_{\\_}i,b_{\\_}i\\}\cup\Big{(}\bigcup_{\\_}{j=1}^{k}V(G_{\\_}{i,j}^{\texttt{split}})\Big{)}\cup\Big{(}\bigcup_{\\_}{j=1}^{k-1}V_{\\_}{i,j}^{i,j+1}\Big{)}$ From Section 2.2, it is easy to verify that $\textsc{Vertical}(i)\cap\textsc{Vertical}(i^{\prime})=\emptyset=\textsc{Horizontal}(i)\cap\textsc{Horizontal}(i^{\prime})$ for every $1\leq i\neq i^{\prime}\leq k$. ###### Definition .
(boundary vertices in $G_{\\_}2$) For each $1\leq i,j\leq k$ we define the set of boundary vertices of the grid $G_{\\_}{i,j}^{\texttt{split}}$ in the graph $G_{\\_}2$ as follows: $\displaystyle\texttt{Left}(G_{\\_}{i,j}^{\texttt{split}}):=\big{\\{}\textbf{w}_{\\_}{i,j,\text{LB}}^{1,\ell}\ :\ \ell\in[N]\big{\\}}\ ;\ \texttt{Right}(G_{\\_}{i,j}^{\texttt{split}}):=\big{\\{}\textbf{w}_{\\_}{i,j,\text{TR}}^{N,\ell}\ :\ \ell\in[N]\big{\\}}$ (6) $\displaystyle\texttt{Top}(G_{\\_}{i,j}^{\texttt{split}}):=\big{\\{}\textbf{w}_{\\_}{i,j,\text{TR}}^{\ell,N}\ :\ \ell\in[N]\big{\\}}\ ;\ \texttt{Bottom}(G_{\\_}{i,j}^{\texttt{split}}):=\big{\\{}\textbf{w}_{\\_}{i,j,\text{LB}}^{\ell,1}\ :\ \ell\in[N]\big{\\}}$ ###### Lemma . For each $i\in[k]$ the path $P_{\\_}i$ satisfies the following two structural properties: * • every edge of the path $P_{\\_}i$ has both end-points in $\textsc{Vertical}(i)$ * • $P_{\\_}i$ contains a $\texttt{Bottom}(G_{\\_}{i,j}^{\texttt{split}})\leadsto\texttt{Top}(G_{\\_}{i,j}^{\texttt{split}})$ path for each $j\in[k]$. ###### Proof. For this proof, define $H_{\\_}{0,j}^{1,j}:=\\{c_{\\_}j\\}$ and $H_{\\_}{k,j}^{k+1,j}:=\\{d_{\\_}j\\}$ for each $j\in[k]$. Fix any $i^{*}\in[k]$. Note that $P_{\\_}{i^{*}}$ is an $a_{\\_}{i^{*}}\leadsto b_{\\_}{i^{*}}$ path and hence starts and ends at a vertex in $\textsc{Vertical}(i^{*})$. We now prove the first part of the lemma by showing two claims which state that $P_{\\_}{i^{*}}$ cannot contain any vertex of $N_{\\_}{G_{\\_}2}^{+}\big{(}\textsc{Vertical}(i^{*})\big{)}$ and $N_{\\_}{G_{\\_}2}^{-}\big{(}\textsc{Vertical}(i^{*})\big{)}$ respectively. ###### Claim . $P_{\\_}{i^{*}}$ does not contain any vertex of $N_{\\_}{G_{\\_}2}^{+}\big{(}\textsc{Vertical}(i^{*})\big{)}$. ###### Proof.
The structure of $G_{\\_}2$ implies that * • $N_{\\_}{G_{\\_}2}^{+}\big{(}\textsc{Vertical}(i)\big{)}=\bigcup_{\\_}{j=1}^{k}H_{\\_}{i,j}^{i+1,j}$ for each $i\in[k]$ * • $N_{\\_}{G_{\\_}2}^{+}\big{(}\bigcup_{\\_}{j=1}^{k}H_{\\_}{i,j}^{i+1,j}\big{)}\subseteq\textsc{Vertical}(i+1)$ for each $0\leq i\leq k-1$ * • $N_{\\_}{G_{\\_}2}^{+}\big{(}\bigcup_{\\_}{j=1}^{k}H_{\\_}{k,j}^{k+1,j}\big{)}=\emptyset$ since each vertex of $D$ is a sink in $G_{\\_}2$ Hence, if $P_{\\_}{i^{*}}$ contains a vertex from $N_{\\_}{G_{\\_}2}^{+}\big{(}\textsc{Vertical}(i^{*})\big{)}$ then it can never return to $\textsc{Vertical}(i^{*})$, which contradicts the fact that the last vertex of $P_{\\_}{i^{*}}$ is $b_{\\_}{i^{*}}\in\textsc{Vertical}(i^{*})$. ∎ ###### Claim . $P_{\\_}{i^{*}}$ does not contain any vertex of $N_{\\_}{G_{\\_}2}^{-}\big{(}\textsc{Vertical}(i^{*})\big{)}$. ###### Proof. The structure of $G_{\\_}2$ implies that * • $N_{\\_}{G_{\\_}2}^{-}\big{(}\textsc{Vertical}(i)\big{)}=\bigcup_{\\_}{j=1}^{k}H_{\\_}{i-1,j}^{i,j}$ for each $i\in[k]$ * • $N_{\\_}{G_{\\_}2}^{-}\big{(}\bigcup_{\\_}{j=1}^{k}H_{\\_}{i,j}^{i+1,j}\big{)}\subseteq\textsc{Vertical}(i)$ for each $1\leq i\leq k$ * • $N_{\\_}{G_{\\_}2}^{-}\big{(}\bigcup_{\\_}{j=1}^{k}H_{\\_}{0,j}^{1,j}\big{)}=\emptyset$ since each vertex of $C$ is a source in $G_{\\_}2$ Hence, if $P_{\\_}{i^{*}}$ contains a vertex from $N_{\\_}{G_{\\_}2}^{-}\big{(}\textsc{Vertical}(i^{*})\big{)}$ then $P_{\\_}{i^{*}}$ cannot have started at a vertex of $\textsc{Vertical}(i^{*})$, which contradicts the fact that the first vertex of $P_{\\_}{i^{*}}$ is $a_{\\_}{i^{*}}\in\textsc{Vertical}(i^{*})$. ∎ This concludes the proof of the first part of the lemma. We now show the second part of the lemma. We define $V_{\\_}{i^{*},0}^{i^{*},1}:=\\{a_{\\_}{i^{*}}\\}$ and $V_{\\_}{i^{*},k}^{i^{*},k+1}:=\\{b_{\\_}{i^{*}}\\}$.
The structure of $G_{\\_}2$ implies that * • $N^{+}_{\\_}{G_{\\_}{2}[\textsc{Vertical}(i^{*})]}\big{(}G_{\\_}{i^{*},j}^{\texttt{split}}\big{)}=V_{\\_}{i^{*},j}^{i^{*},j+1}$ and $N^{-}_{\\_}{G_{\\_}{2}[\textsc{Vertical}(i^{*})]}\big{(}G_{\\_}{i^{*},j}^{\texttt{split}}\big{)}=V_{\\_}{i^{*},j-1}^{i^{*},j}$ for each $j\in[k]$ * • $N^{+}_{\\_}{G_{\\_}{2}[\textsc{Vertical}(i^{*})]}\big{(}V_{\\_}{i^{*},j}^{i^{*},j+1}\big{)}=\texttt{Bottom}\big{(}G_{\\_}{i^{*},j+1}^{\texttt{split}}\big{)}$ for each $0\leq j\leq k-1$ * • $N^{-}_{\\_}{G_{\\_}{2}[\textsc{Vertical}(i^{*})]}\big{(}V_{\\_}{i^{*},j}^{i^{*},j+1}\big{)}=\texttt{Top}\big{(}G_{\\_}{i^{*},j}^{\texttt{split}}\big{)}$ for each $1\leq j\leq k$ These three relations, combined with the first part of the lemma which states that $P_{\\_}{i^{*}}$ lies within $G_{\\_}{2}[\textsc{Vertical}(i^{*})]$, imply that $P_{\\_}{i^{*}}$ contains a $\texttt{Bottom}(G_{\\_}{i^{*},j}^{\texttt{split}})\leadsto\texttt{Top}(G_{\\_}{i^{*},j}^{\texttt{split}})$ path for each $j\in[k]$. This concludes the proof of Section 2.2. ∎ The proof of the next lemma is very similar to that of Section 2.2, and we do not repeat the details. ###### Lemma . For each $j\in[k]$ the path $Q_{\\_}j$ satisfies the following two structural properties: * • every edge of the path $Q_{\\_}j$ has both end-points in $\textsc{Horizontal}(j)$ * • $Q_{\\_}j$ contains a $\texttt{Left}(G_{\\_}{i,j}^{\texttt{split}})\leadsto\texttt{Right}(G_{\\_}{i,j}^{\texttt{split}})$ path for each $i\in[k]$ ###### Lemma . For any $(i,j)\in[k]\times[k]$, let $P^{\prime},Q^{\prime}$ be any $\texttt{Bottom}(G_{\\_}{i,j}^{\texttt{split}})\leadsto\texttt{Top}(G_{\\_}{i,j}^{\texttt{split}})$, $\texttt{Left}(G_{\\_}{i,j}^{\texttt{split}})\leadsto\texttt{Right}(G_{\\_}{i,j}^{\texttt{split}})$ paths in $G_{\\_}2$ respectively.
If $P^{\prime}$ and $Q^{\prime}$ are edge-disjoint then there exists $(\mu,\delta)\in S_{\\_}{i,j}$ such that the vertex $\textbf{w}_{\\_}{i,j,\text{LB}}^{\mu,\delta}=\textbf{w}_{\\_}{i,j}^{\mu,\delta}=\textbf{w}_{\\_}{i,j,\text{TR}}^{\mu,\delta}$ belongs to both $P^{\prime}$ and $Q^{\prime}$. ###### Proof. Let $P^{\prime\prime},Q^{\prime\prime}$ be the paths obtained from $P^{\prime},Q^{\prime}$ by contracting all the dotted edges on $P^{\prime},Q^{\prime}$ respectively. By the construction of $G_{\\_}2$ (Section 2.1.2) and the splitting operation (Section 2.1.2), it follows that $P^{\prime\prime},Q^{\prime\prime}$ are $\texttt{Bottom}(G_{\\_}{i,j})\leadsto\texttt{Top}(G_{\\_}{i,j}),\texttt{Left}(G_{\\_}{i,j})\leadsto\texttt{Right}(G_{\\_}{i,j})$ paths in $G_{\\_}1$ respectively. Hence, there exist $x_{\\_}1,x_{\\_}2\in[N]$ such that $P^{\prime\prime}$ is a $\textbf{w}_{\\_}{i,j}^{x_{\\_}1,1}\to\textbf{w}_{\\_}{i,j}^{x_{\\_}2,N}$ path and $y_{\\_}1,y_{\\_}2\in[N]$ such that $Q^{\prime\prime}$ is a $\textbf{w}_{\\_}{i,j}^{1,y_{\\_}1}\to\textbf{w}_{\\_}{i,j}^{N,y_{\\_}2}$ path. We now show that $P^{\prime\prime}$ and $Q^{\prime\prime}$ must intersect in $G_{\\_}1$. ###### Claim . $P^{\prime\prime}$ and $Q^{\prime\prime}$ have a common vertex in $G_{\\_}1$. ###### Proof. For each $x\in[N]$ such that $x_{\\_}1\leq x\leq x_{\\_}2$ define $P^{\prime\prime}(x)=\big{\\{}y\in[N]:\textbf{w}_{\\_}{i,j}^{x,y}\in P^{\prime\prime}\big{\\}}$. For each $x\in[N]$ such that $x_{\\_}1\leq x\leq x_{\\_}2$ define $Q^{\prime\prime}(x)=\big{\\{}y\in[N]:\textbf{w}_{\\_}{i,j}^{x,y}\in Q^{\prime\prime}\big{\\}}$. We will prove the claim by showing that there exist $x^{*},y^{*}\in[N]$ such that $y^{*}\in\big{(}P^{\prime\prime}(x^{*})\cap Q^{\prime\prime}(x^{*})\big{)}$.
By the orientation of the edges in $G_{\\_}{i,j}$, it follows that $\displaystyle\max\ P^{\prime\prime}(z)=\min\ P^{\prime\prime}(z+1)\ \text{and }\max\ Q^{\prime\prime}(z)=\min\ Q^{\prime\prime}(z+1)\quad\forall\ x_{\\_}1\leq z<x_{\\_}2$ (7) $\displaystyle\text{If }1\leq u\leq z\leq N\ \text{then }\max P^{\prime\prime}(u)\leq\min\ P^{\prime\prime}(z)\ \text{and }\max\ Q^{\prime\prime}(u)\leq\min Q^{\prime\prime}(z)$ By definition of $Q^{\prime\prime}$, we have $y_{\\_}1\in Q^{\prime\prime}(1)$ and hence $y\geq y_{\\_}1\geq 1$ for each $y\in Q^{\prime\prime}(x_{\\_}1)$. If $\big{(}P^{\prime\prime}(x_{\\_}1)\cap Q^{\prime\prime}(x_{\\_}1)\big{)}\neq\emptyset$ then we are done. Otherwise, we have that $\min\ Q^{\prime\prime}(x_{\\_}1)>\max\ P^{\prime\prime}(x_{\\_}1)$ since $1\in P^{\prime\prime}(x_{\\_}1)$. Now if $\big{(}P^{\prime\prime}(x_{\\_}1+1)\cap Q^{\prime\prime}(x_{\\_}1+1)\big{)}\neq\emptyset$ then we are done. Otherwise, we have $\min Q^{\prime\prime}(x_{\\_}1+1)>\max P^{\prime\prime}(x_{\\_}1+1)$ since $\min Q^{\prime\prime}(x_{\\_}1+1)=\max Q^{\prime\prime}(x_{\\_}1)$. Continuing this way, we must find an $x^{*}\in\mathbb{N}$ such that $x_{\\_}1\leq x^{*}\leq x_{\\_}2$ and $\big{(}P^{\prime\prime}(x^{*})\cap Q^{\prime\prime}(x^{*})\big{)}\neq\emptyset$: this is because $N\in P^{\prime\prime}(x_{\\_}2)$ and hence $\min Q^{\prime\prime}(x_{\\_}2)\leq N=\max P^{\prime\prime}(x_{\\_}2)$. Since $\big{(}P^{\prime\prime}(x^{*})\cap Q^{\prime\prime}(x^{*})\big{)}\neq\emptyset$ let $y^{*}\in\big{(}P^{\prime\prime}(x^{*})\cap Q^{\prime\prime}(x^{*})\big{)}$, i.e., the vertex $\textbf{w}_{\\_}{i,j}^{x^{*},y^{*}}$ belongs to both $P^{\prime\prime}$ and $Q^{\prime\prime}$. ∎ By Section 2.2, the paths $P^{\prime\prime},Q^{\prime\prime}$ have a common vertex in $G_{\\_}1$. Let this vertex be $\textbf{w}_{\\_}{i,j}^{\mu,\delta}$. 
Viewing the paths $P^{\prime\prime},Q^{\prime\prime}$ in $G_{\\_}2$, i.e., “un-contracting” the dotted edges (Section 2.1.2), it follows that both $P^{\prime}$ and $Q^{\prime}$ share the dotted edge $\textbf{w}_{\\_}{i,j,\text{LB}}^{\mu,\delta}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{\mu,\delta}$. Since $P^{\prime}$ and $Q^{\prime}$ are given to be edge-disjoint, this implies that the edge $\textbf{w}_{\\_}{i,j,\text{LB}}^{\mu,\delta}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{\mu,\delta}$ cannot exist in $G_{\\_}2$, i.e., $(\mu,\delta)\in S_{\\_}{i,j}$ and the vertex $\textbf{w}_{\\_}{i,j,\text{LB}}^{\mu,\delta}=\textbf{w}_{\\_}{i,j}^{\mu,\delta}=\textbf{w}_{\\_}{i,j,\text{TR}}^{\mu,\delta}$ belongs to both $P^{\prime}$ and $Q^{\prime}$ (recall Section 2.1.2). This concludes the proof of Section 2.2. ∎ ###### Lemma . The instance $(k,N,\mathcal{S})$ of Grid-Tiling-$\leq$ has a solution. ###### Proof. Fix any $(i,j)\in[k]\times[k]$. By Section 2.2, $P_{\\_}i$ contains a $\texttt{Bottom}(G_{\\_}{i,j}^{\texttt{split}})\leadsto\texttt{Top}(G_{\\_}{i,j}^{\texttt{split}})$ path, say $P_{\\_}{i,j}$. By Section 2.2, $Q_{\\_}j$ contains a $\texttt{Left}(G_{\\_}{i,j}^{\texttt{split}})\leadsto\texttt{Right}(G_{\\_}{i,j}^{\texttt{split}})$ path, say $Q_{\\_}{i,j}$. Since $P_{\\_}i$ and $Q_{\\_}j$ are edge-disjoint (Equation 5), it follows that the paths $P_{\\_}{i,j}$ and $Q_{\\_}{i,j}$ are also edge-disjoint. Applying Section 2.2 to the paths $P_{\\_}{i,j}$ and $Q_{\\_}{i,j}$ we get that there exists $(\mu_{\\_}{i,j},\delta_{\\_}{i,j})\in[N]\times[N]$ such that $(\mu_{\\_}{i,j},\delta_{\\_}{i,j})\in S_{\\_}{i,j}$ and the vertex $\textbf{w}_{\\_}{i,j,\text{LB}}^{\mu_{\\_}{i,j},\delta_{\\_}{i,j}}=\textbf{w}_{\\_}{i,j}^{\mu_{\\_}{i,j},\delta_{\\_}{i,j}}=\textbf{w}_{\\_}{i,j,\text{TR}}^{\mu_{\\_}{i,j},\delta_{\\_}{i,j}}$ belongs to $P_{\\_}{i,j}$ (and hence also to $P_{\\_}i$) and $Q_{\\_}{i,j}$ (and hence also to $Q_{\\_}j$).
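The discrete intermediate-value argument behind the intersection claim can be sanity-checked computationally: any monotone bottom-to-top path and any monotone left-to-right path in an $N\times N$ grid must share a vertex. The sketch below uses our own encoding of such paths as unit-step lattice walks (this is an illustration, not the paper's notation):

```python
import random

def random_monotone_path(start, end):
    # Unit-step lattice path from start to end, moving only +1 in x or y;
    # requires start <= end coordinate-wise.
    (x, y), (ex, ey) = start, end
    path = [(x, y)]
    while (x, y) != (ex, ey):
        moves = []
        if x < ex:
            moves.append((1, 0))
        if y < ey:
            moves.append((0, 1))
        dx, dy = random.choice(moves)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def paths_intersect(N, trials=200):
    # Empirically check: a Bottom->Top and a Left->Right monotone path
    # in the N x N grid always share at least one vertex.
    for _ in range(trials):
        x1 = random.randint(1, N); x2 = random.randint(x1, N)
        y1 = random.randint(1, N); y2 = random.randint(y1, N)
        P = random_monotone_path((x1, 1), (x2, N))   # Bottom -> Top
        Q = random_monotone_path((1, y1), (N, y2))   # Left -> Right
        if not set(P) & set(Q):
            return False
    return True
```

The unit-step structure is what makes condition (7) hold: when either path moves from column $z$ to $z+1$, the turning height belongs to both columns' vertex sets.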
We now claim that the values $\big{\\{}(\mu_{\\_}{i,j},\delta_{\\_}{i,j}):(i,j)\in[k]\times[k]\big{\\}}$ form a solution for the instance $(k,N,\mathcal{S})$ of Grid-Tiling-$\leq$. In the previous paragraph, we have already shown that $(\mu_{\\_}{i,j},\delta_{\\_}{i,j})\in S_{\\_}{i,j}$ for each $(i,j)\in[k]\times[k]$. For each $(i,j)\in[k-1]\times[k]$ both the vertices $\textbf{w}_{\\_}{i,j,\text{LB}}^{\mu_{\\_}{i,j},\delta_{\\_}{i,j}}=\textbf{w}_{\\_}{i,j,\text{TR}}^{\mu_{\\_}{i,j},\delta_{\\_}{i,j}}$ and $\textbf{w}_{\\_}{i+1,j,\text{LB}}^{\mu_{\\_}{i+1,j},\delta_{\\_}{i+1,j}}=\textbf{w}_{\\_}{i+1,j,\text{TR}}^{\mu_{\\_}{i+1,j},\delta_{\\_}{i+1,j}}$ belong to the path $Q_{\\_}{j}$ which is contained in $G_{\\_}{2}[\textsc{Horizontal}(j)]$ (Section 2.2). Hence, by the orientation of the edges in $G_{\\_}2$, it follows that $\delta_{\\_}{i,j}\leq\delta_{\\_}{i+1,j}$. Similarly, it can be shown that $\mu_{\\_}{i,j}\leq\mu_{\\_}{i,j+1}$ for each $(i,j)\in[k]\times[k-1]$. ∎ ### 2.3 Solution for Grid-Tiling-$\leq$ $\Rightarrow$ Solution for Edge-Disjoint Paths In this section, we show that if the instance $(k,N,\mathcal{S})$ of Grid-Tiling-$\leq$ has a solution then the instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths also has a solution. Suppose that the instance $(k,N,\mathcal{S})$ of Grid-Tiling-$\leq$ has a solution given by the pairs $\big{\\{}(\alpha_{\\_}{i,j},\beta_{\\_}{i,j})\ :i,j\in[k]\big{\\}}$. Hence, we have $\displaystyle\big{(}\alpha_{\\_}{i,j},\beta_{\\_}{i,j}\big{)}\in S_{\\_}{i,j}$ $\displaystyle\quad\text{for each }(i,j)\in[k]\times[k]$ (8) $\displaystyle\alpha_{\\_}{i,j}\leq\alpha_{\\_}{i,j+1}$ $\displaystyle\quad\text{for each }(i,j)\in[k]\times[k-1]$ $\displaystyle\beta_{\\_}{i,j}\leq\beta_{\\_}{i+1,j}$ $\displaystyle\quad\text{for each }(i,j)\in[k-1]\times[k]$ ###### Definition .
(row-paths and column-paths in $G_{\\_}2$) For each $(i,j)\in[k]\times[k]$ and $\ell\in[N]$ we define * • $\texttt{RowPath}_{\\_}{\ell}(G_{\\_}{i,j}^{\texttt{split}})$ to be the $\textbf{w}_{\\_}{i,j,\text{LB}}^{1,\ell}\leadsto\textbf{w}_{\\_}{i,j,\text{TR}}^{N,\ell}$ path in $G_{\\_}2[G_{\\_}{i,j}^{\texttt{split}}]$ consisting of the following edges (in order): for each $r\in[N-1]$ * – $\textbf{w}_{\\_}{i,j,\text{LB}}^{r,\ell}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{r,\ell}$ and $\textbf{w}_{\\_}{i,j,\text{TR}}^{r,\ell}\to\textbf{w}_{\\_}{i,j,\text{LB}}^{r+1,\ell}$ followed finally by the edge $\textbf{w}_{\\_}{i,j,\text{LB}}^{N,\ell}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{N,\ell}$ * • $\texttt{ColumnPath}_{\\_}{\ell}(G_{\\_}{i,j}^{\texttt{split}})$ to be the $\textbf{w}_{\\_}{i,j,\text{LB}}^{\ell,1}\leadsto\textbf{w}_{\\_}{i,j,\text{TR}}^{\ell,N}$ path in $G_{\\_}2[G_{\\_}{i,j}^{\texttt{split}}]$ consisting of the following edges (in order): for each $r\in[N-1]$ * – $\textbf{w}_{\\_}{i,j,\text{LB}}^{\ell,r}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{\ell,r}$ and $\textbf{w}_{\\_}{i,j,\text{TR}}^{\ell,r}\to\textbf{w}_{\\_}{i,j,\text{LB}}^{\ell,r+1}$ followed finally by the edge $\textbf{w}_{\\_}{i,j,\text{LB}}^{\ell,N}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{\ell,N}$ Using the special types of paths from Section 2.3, we can now show the following lemma: ###### Lemma . The instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths has a solution. ###### Proof. We build a collection of $2k$ paths $\mathcal{P}:=\big{\\{}R_{\\_}1,R_{\\_}2,\ldots,R_{\\_}k,T_{\\_}1,T_{\\_}2,\ldots,T_{\\_}k\big{\\}}$ and show that it forms a solution for the instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths.
First, we describe this collection of paths below. Description of the set of paths $\\{R_{\\_}1,R_{\\_}2,\ldots,R_{\\_}k\\}$: For each $i\in[k]$, we build the path $R_{\\_}i$ as follows: * • Start with the edge $a_{\\_}i\to\textbf{w}_{\\_}{i,1,\text{LB}}^{\alpha_{\\_}{i,1},1}$ * • For each $j\in[k-1]$ use the $\textbf{w}_{\\_}{i,j,\text{LB}}^{\alpha_{\\_}{i,j},1}\leadsto\textbf{w}_{\\_}{i,j+1,\text{LB}}^{\alpha_{\\_}{i,j+1},1}$ path obtained by concatenating * – the $\textbf{w}_{\\_}{i,j,\text{LB}}^{\alpha_{\\_}{i,j},1}\leadsto\textbf{w}_{\\_}{i,j,\text{TR}}^{\alpha_{\\_}{i,j},N}$ path $\texttt{ColumnPath}_{\\_}{\alpha_{\\_}{i,j}}(G_{\\_}{i,j}^{\texttt{split}})$ from Section 2.3 * – the $\textbf{w}_{\\_}{i,j,\text{TR}}^{\alpha_{\\_}{i,j},N}\leadsto\textbf{w}_{\\_}{i,j+1,\text{LB}}^{\alpha_{\\_}{i,j+1},1}$ path $\textbf{w}_{\\_}{i,j,\text{TR}}^{\alpha_{\\_}{i,j},N}\to\textbf{v}_{\\_}{i,j}^{i,j+1}(\alpha_{\\_}{i,j})\to\cdots\to\textbf{v}_{\\_}{i,j}^{i,j+1}(\alpha_{\\_}{i,j+1})\to\textbf{w}_{\\_}{i,j+1,\text{LB}}^{\alpha_{\\_}{i,j+1},1}$ which exists since Equation 8 implies $\alpha_{\\_}{i,j}\leq\alpha_{\\_}{i,j+1}$. * • Now, we have reached the vertex $\textbf{w}_{\\_}{i,k,\text{LB}}^{\alpha_{\\_}{i,k},1}$. Use the $\textbf{w}_{\\_}{i,k,\text{LB}}^{\alpha_{\\_}{i,k},1}\leadsto\textbf{w}_{\\_}{i,k,\text{TR}}^{\alpha_{\\_}{i,k},N}$ path $\texttt{ColumnPath}_{\\_}{\alpha_{\\_}{i,k}}(G_{\\_}{i,k}^{\texttt{split}})$ from Section 2.3 to reach the vertex $\textbf{w}_{\\_}{i,k,\text{TR}}^{\alpha_{\\_}{i,k},N}$. * • Finally, use the edge $\textbf{w}_{\\_}{i,k,\text{TR}}^{\alpha_{\\_}{i,k},N}\to b_{\\_}i$ to reach $b_{\\_}i$.
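The ColumnPath (and, symmetrically, RowPath) edge sequences used above are entirely mechanical. As an illustrative sketch (vertex names are tuples of our own devising, tagged "LB"/"TR" for the two halves of a split vertex), the edges of $\texttt{ColumnPath}_{\ell}(G_{i,j}^{\texttt{split}})$ can be generated as:

```python
def column_path_edges(i, j, ell, N):
    """Edges of ColumnPath_ell(G_{i,j}^split), in order: within column ell,
    alternate the intra-vertex edge LB -> TR with the upward edge TR -> LB."""
    LB = lambda col, row: (i, j, "LB", col, row)
    TR = lambda col, row: (i, j, "TR", col, row)
    edges = []
    for r in range(1, N):                      # r = 1, ..., N-1
        edges.append((LB(ell, r), TR(ell, r)))
        edges.append((TR(ell, r), LB(ell, r + 1)))
    edges.append((LB(ell, N), TR(ell, N)))     # final intra-vertex edge
    return edges
```

In $G_{\\_}2$ itself the LB$\to$TR edge degenerates (the two halves are one vertex) exactly for the pairs in $S_{\\_}{i,j}$; the sketch ignores this and lists the path in the fully split grid.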
Description of the set of paths $\\{T_{\\_}1,T_{\\_}2,\ldots,T_{\\_}k\\}$: For each $j\in[k]$, we build the path $T_{\\_}j$ as follows: * • Start with the edge $c_{\\_}j\to\textbf{w}_{\\_}{1,j,\text{LB}}^{1,\beta_{\\_}{1,j}}$ * • For each $i\in[k-1]$ use the $\textbf{w}_{\\_}{i,j,\text{LB}}^{1,\beta_{\\_}{i,j}}\leadsto\textbf{w}_{\\_}{i+1,j,\text{LB}}^{1,\beta_{\\_}{i+1,j}}$ path obtained by concatenating * – the $\textbf{w}_{\\_}{i,j,\text{LB}}^{1,\beta_{\\_}{i,j}}\leadsto\textbf{w}_{\\_}{i,j,\text{TR}}^{N,\beta_{\\_}{i,j}}$ path $\texttt{RowPath}_{\\_}{\beta_{\\_}{i,j}}(G_{\\_}{i,j}^{\texttt{split}})$ from Section 2.3 * – the $\textbf{w}_{\\_}{i,j,\text{TR}}^{N,\beta_{\\_}{i,j}}\leadsto\textbf{w}_{\\_}{i+1,j,\text{LB}}^{1,\beta_{\\_}{i+1,j}}$ path $\textbf{w}_{\\_}{i,j,\text{TR}}^{N,\beta_{\\_}{i,j}}\to\textbf{h}_{\\_}{i,j}^{i+1,j}(\beta_{\\_}{i,j})\to\cdots\to\textbf{h}_{\\_}{i,j}^{i+1,j}(\beta_{\\_}{i+1,j})\to\textbf{w}_{\\_}{i+1,j,\text{LB}}^{1,\beta_{\\_}{i+1,j}}$ which exists since Equation 8 implies $\beta_{\\_}{i,j}\leq\beta_{\\_}{i+1,j}$. * • Now, we have reached the vertex $\textbf{w}_{\\_}{k,j,\text{LB}}^{1,\beta_{\\_}{k,j}}$. Use the $\textbf{w}_{\\_}{k,j,\text{LB}}^{1,\beta_{\\_}{k,j}}\leadsto\textbf{w}_{\\_}{k,j,\text{TR}}^{N,\beta_{\\_}{k,j}}$ path $\texttt{RowPath}_{\\_}{\beta_{\\_}{k,j}}(G_{\\_}{k,j}^{\texttt{split}})$ from Section 2.3 to reach the vertex $\textbf{w}_{\\_}{k,j,\text{TR}}^{N,\beta_{\\_}{k,j}}$. * • Finally, use the edge $\textbf{w}_{\\_}{k,j,\text{TR}}^{N,\beta_{\\_}{k,j}}\to d_{\\_}j$ to reach $d_{\\_}j$. By Section 2.2, it follows that every edge of the path $R_{\\_}i$ has both endpoints in $\textsc{Vertical}(i)$ for every $i\in[k]$. Since $\textsc{Vertical}(i)\cap\textsc{Vertical}(i^{\prime})=\emptyset$ for every $1\leq i\neq i^{\prime}\leq k$, it follows that the paths $\\{R_{\\_}1,R_{\\_}2,\ldots,R_{\\_}k\\}$ are pairwise edge-disjoint.
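Since the whole argument turns on edge-disjointness (paths may freely share vertices, only not edges), the condition is worth stating operationally. A minimal checker of our own devising, with paths given as vertex lists:

```python
def pairwise_edge_disjoint(paths):
    """paths: list of paths, each a list of hashable vertices.
    Returns True iff no directed edge occurs in two different paths
    (or twice in one path)."""
    seen = set()
    for p in paths:
        for e in zip(p, p[1:]):   # consecutive vertex pairs = directed edges
            if e in seen:
                return False
            seen.add(e)
    return True
```

Note that two paths crossing at an unsplit grid vertex share a vertex but no edge, which is precisely the situation the splitting operation is designed to permit.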
By Section 2.2, it follows that every edge of the path $T_{\\_}j$ has both endpoints in $\textsc{Horizontal}(j)$ for every $j\in[k]$. Since $\textsc{Horizontal}(j)\cap\textsc{Horizontal}(j^{\prime})=\emptyset$ for every $1\leq j\neq j^{\prime}\leq k$, it follows that the paths $\\{T_{\\_}1,T_{\\_}2,\ldots,T_{\\_}k\\}$ are pairwise edge-disjoint. Fix any $(i,j)\in[k]\times[k]$. We now conclude the proof of this lemma by showing that $R_{\\_}i$ and $T_{\\_}j$ are edge-disjoint. By the construction of $G_{\\_}2$ (Figure 2 and Figure 3) and the definitions of the paths $R_{\\_}i$ and $T_{\\_}j$, it follows that the only common edge between $R_{\\_}i$ and $T_{\\_}j$ could be $\textbf{w}_{\\_}{i,j,\text{LB}}^{\alpha_{\\_}{i,j},\beta_{\\_}{i,j}}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{\alpha_{\\_}{i,j},\beta_{\\_}{i,j}}$. By Equation 8, we have that $(\alpha_{\\_}{i,j},\beta_{\\_}{i,j})\in S_{\\_}{i,j}$. Hence, by the splitting operation (Section 2.1.2), we have that $\textbf{w}_{\\_}{i,j,\text{LB}}^{\alpha_{\\_}{i,j},\beta_{\\_}{i,j}}=\textbf{w}_{\\_}{i,j}^{\alpha_{\\_}{i,j},\beta_{\\_}{i,j}}=\textbf{w}_{\\_}{i,j,\text{TR}}^{\alpha_{\\_}{i,j},\beta_{\\_}{i,j}}$, i.e., the only possible common edge $\textbf{w}_{\\_}{i,j,\text{LB}}^{\alpha_{\\_}{i,j},\beta_{\\_}{i,j}}\to\textbf{w}_{\\_}{i,j,\text{TR}}^{\alpha_{\\_}{i,j},\beta_{\\_}{i,j}}$ between $R_{\\_}i$ and $T_{\\_}j$ is not an edge in $G_{\\_}2$. Hence, $R_{\\_}i$ and $T_{\\_}j$ are edge-disjoint. ∎ ### 2.4 Proof of Theorem 1.1 Finally we are ready to prove our main theorem (Theorem 1.1), which is restated below: See 1.1 ###### Proof. Given an instance $(k,N,\mathcal{S})$ of Grid-Tiling-$\leq$, we use the construction from Section 2.1 to build an instance $(G_{\\_}2,\mathcal{T})$ of Edge-Disjoint Paths such that $G_{\\_}2$ is a planar DAG (Section 2.1.2). It is easy to see that $n=|V(G_{\\_}2)|=O(N^{2}k^{2})$ and $G_{\\_}2$ can be constructed in $\text{poly}(N,k)$ time.
It is known [13, Theorem 14.30] that Grid-Tiling-$\leq$ is W[1]-hard parameterized by $k$, and under ETH cannot be solved in $f(k)\cdot N^{o(k)}$ time for any computable function $f$. Combining the two directions from Section 2.2 and Section 2.3, we get a parameterized reduction from Grid-Tiling-$\leq$ to an instance of Edge-Disjoint Paths which is a planar DAG and has $|\mathcal{T}|=2k$ terminal pairs. Hence, it follows that Edge-Disjoint Paths on planar DAGs is W[1]-hard parameterized by the number $k$ of terminal pairs, and under ETH cannot be solved in $f(k)\cdot n^{o(k)}$ time for any computable function $f$. Finally we show how to edit $G_{\\_}2$, without affecting the correctness of the reduction, so that both the max out-degree and max in-degree are at most $2$. We present the argument for reducing the out-degree: the argument for reducing the in-degree is analogous. Note that the only vertices in $G_{\\_}2$ with out-degree $>2$ are $A\cup C$. For each $c_{\\_}j\in C$ we replace the directed star whose edges are from $c_{\\_}j$ to each vertex of $\texttt{Left}(G_{\\_}{1,j})$ with a directed binary tree whose root is $c_{\\_}j$, whose leaves are the set of vertices $\texttt{Left}(G_{\\_}{1,j})$, and each of whose edges is directed away from the root. It is easy to see that in this directed binary tree the set of paths from $c_{\\_}j$ to the different leaves (i.e., vertices of $\texttt{Left}(G_{\\_}{1,j})$) are pairwise edge-disjoint, and we have only increased the number of vertices by $O(N)$ while maintaining both planarity and (directed) acyclicity. We do a similar transformation for each $a_{\\_}i\in A$. It is easy to see that this editing adds $O(Nk)$ new vertices and takes $\text{poly}(N,k)$ time, and therefore it is still true that $n=|V(G_{\\_}2)|=O(N^{2}k^{2})$ and $G_{\\_}2$ can be constructed in $\text{poly}(N,k)$ time.
∎ ## 3 Conclusion & Open Questions In this paper we have shown that Edge-Disjoint Paths on planar DAGs is W[1]-hard parameterized by $k$, and has no $f(k)\cdot n^{o(k)}$ algorithm under the Exponential Time Hypothesis (ETH) for any computable function $f$. The hardness holds even if both the maximum in-degree and maximum out-degree of the graph are at most $2$. Our result answers a question of Slivkins [44] regarding the parameterized complexity of Edge-Disjoint Paths on planar DAGs, and a question of Cygan et al. [12] and Schrijver [43] regarding the parameterized complexity of Edge-Disjoint Paths on planar directed graphs. We now propose some open questions related to the complexity of the Disjoint Paths problem: * • What is the _correct_ parameterized complexity of Edge-Disjoint Paths on planar directed graphs parameterized by $k$? Can we design an XP algorithm, or is the problem NP-hard even for $k=O(1)$ like the general version? Note that to prove the latter result, one would need to have directed cycles involved in the reduction since there is an $n^{O(k)}$ algorithm of Fortune et al. [19] for Edge-Disjoint Paths on DAGs. * • Is the half-integral version (where each edge can belong to at most two of the paths) FPT on directed planar graphs or DAGs? It is easy to see that our W[1]-hardness reduction does not work for this problem. * • Given our W[1]-hardness result, can we obtain FPT (in)approximability results for the problem on planar DAGs? To the best of our knowledge, there are no known (non-trivial) FPT (in)approximability results for any variants of the Disjoint Paths problem. This question might be worth considering even for those versions of the Disjoint Paths problem which are known to be FPT, since the running times are astronomical (except maybe [32]). Some of the recent work [8, 9, 10, 11] on polynomial time (in)approximability of the Disjoint Paths problem might be relevant. ### Acknowledgements We thank the anonymous reviewers of CIAC 2021 for their helpful comments.
In particular, one of the reviewers suggested the strengthening of Theorem 1.1 for the case when the input graph has both in-degree and out-degree at most $2$. ## References * Aboulker et al. [2017] Pierre Aboulker, Nick Brettell, Frédéric Havet, Dániel Marx, and Nicolas Trotignon. Coloring graphs with constraints on connectivity. _Journal of Graph Theory_ , 85(4):814–838, 2017\. doi: 10.1002/jgt.22109. URL https://doi.org/10.1002/jgt.22109. * Adler et al. [2017] Isolde Adler, Stavros G. Kolliopoulos, Philipp Klaus Krause, Daniel Lokshtanov, Saket Saurabh, and Dimitrios M. Thilikos. Irrelevant vertices for the planar Disjoint Paths Problem. _J. Comb. Theory, Ser. B_ , 122:815–843, 2017. URL https://doi.org/10.1016/j.jctb.2016.10.001. * Amiri et al. [2019] Saeed Akhoondian Amiri, Stephan Kreutzer, Dániel Marx, and Roman Rabinovich. Routing with congestion in acyclic digraphs. _Inf. Process. Lett._ , 151, 2019. URL https://doi.org/10.1016/j.ipl.2019.105836. * [4] Rajesh Chitnis and Andreas Emil Feldmann. A Tight Lower Bound for Steiner Orientation. In _CSR 2018_ , pages 65–77. URL https://doi.org/10.1007/978-3-319-90530-3_7. * Chitnis et al. [2019] Rajesh Chitnis, Andreas Emil Feldmann, and Ondrej Suchý. A Tight Lower Bound for Planar Steiner Orientation. _Algorithmica_ , 81(8):3200–3216, 2019. URL https://doi.org/10.1007/s00453-019-00580-x. * Chitnis et al. [2014] Rajesh Hemant Chitnis, MohammadTaghi Hajiaghayi, and Dániel Marx. Tight Bounds for Planar Strongly Connected Steiner Subgraph with Fixed Number of Terminals (and Extensions). In _SODA 2014_ , pages 1782–1801, 2014. URL https://doi.org/10.1137/1.9781611973402.129. * Chitnis et al. [2020] Rajesh Hemant Chitnis, Andreas Emil Feldmann, Mohammad Taghi Hajiaghayi, and Dániel Marx. Tight Bounds for Planar Strongly Connected Steiner Subgraph with Fixed Number of Terminals (and Extensions). _SIAM J. Comput._ , 49(2):318–364, 2020. URL https://doi.org/10.1137/18M122371X. * Chuzhoy et al. [a] Julia Chuzhoy, David H. 
K. Kim, and Shi Li. Improved approximation for node-disjoint paths in planar graphs. In _STOC 2016_ , pages 556–569, a. URL https://doi.org/10.1145/2897518.2897538. * Chuzhoy et al. [b] Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat. New hardness results for routing on disjoint paths. In _STOC 2017_ , pages 86–99, b. URL https://doi.org/10.1145/3055399.3055411. * Chuzhoy et al. [c] Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat. Almost polynomial hardness of node-disjoint paths in grids. In _STOC 2018_ , pages 1220–1233, c. URL https://doi.org/10.1145/3188745.3188772. * Chuzhoy et al. [2018] Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat. Improved Approximation for Node-Disjoint Paths in Grids with Sources on the Boundary. In _ICALP 2018_ , volume 107, pages 38:1–38:14, 2018. URL https://doi.org/10.4230/LIPIcs.ICALP.2018.38. * [12] Marek Cygan, Dániel Marx, Marcin Pilipczuk, and Michal Pilipczuk. The Planar Directed $k$-Vertex-Disjoint Paths Problem Is Fixed-Parameter Tractable. In _FOCS 2013_ , pages 197–206. URL https://doi.org/10.1109/FOCS.2013.29. * Cygan et al. [2015] Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh. _Parameterized Algorithms_. Springer, 2015. ISBN 978-3-319-21274-6. URL https://doi.org/10.1007/978-3-319-21275-3. * Demaine et al. [2005] Erik D. Demaine, Fedor V. Fomin, Mohammad Taghi Hajiaghayi, and Dimitrios M. Thilikos. Subexponential parameterized algorithms on bounded-genus graphs and _H_ -minor-free graphs. _J. ACM_ , 52(6):866–893, 2005. URL https://doi.org/10.1145/1101821.1101823. * Diestel [2012] Reinhard Diestel. _Graph Theory, 4th Edition_. Volume 173 of Graduate Texts in Mathematics. Springer, 2012. ISBN 978-3-642-14278-9. URL https://doi.org/10.1007/978-3-662-53622-3. * [16] Shimon Even, Alon Itai, and Adi Shamir. On the Complexity of Timetable and Multi-Commodity Flow Problems. In _FOCS 1975_ , pages 184–193. 
URL https://doi.org/10.1109/SFCS.1975.21. * Fomin et al. [a] Fedor V. Fomin, Sudeshna Kolay, Daniel Lokshtanov, Fahad Panolan, and Saket Saurabh. Subexponential Algorithms for Rectilinear Steiner Tree and Arborescence Problems. In _SoCG 2016_ , pages 39:1–39:15, a. URL https://doi.org/10.4230/LIPIcs.SoCG.2016.39. * Fomin et al. [b] Fedor V. Fomin, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh. Subexponential Parameterized Algorithms for Planar and Apex-Minor-Free Graphs via Low Treewidth Pattern Covering. In _FOCS 2016_ , pages 515–524, b. URL https://doi.org/10.1109/FOCS.2016.62. * Fortune et al. [1980] Steven Fortune, John E. Hopcroft, and James Wyllie. The Directed Subgraph Homeomorphism Problem. _Theor. Comput. Sci._ , 10:111–121, 1980. URL https://doi.org/10.1016/0304-3975(80)90009-2. * Frank [1990] András Frank. Packing paths, circuits, and cuts - a survey,. In Alexander Schrijver, Laszlo Lovasz, Bernhard Korte, Hans Jurgen Promel, and R. L. Graham, editors, _Paths, Flows and VLSI-Layouts_ , volume 148 of _LIPIcs_ , pages 49–100. Springer-Verlag, 1990. ISBN 0387526854. URL https://dl.acm.org/doi/book/10.5555/574821. * Giannopoulou et al. [2020] Archontia C. Giannopoulou, Ken-ichi Kawarabayashi, Stephan Kreutzer, and O-joung Kwon. The canonical directed tree decomposition and its applications to the directed disjoint paths problem. _CoRR_ , abs/2009.13184, 2020. URL https://arxiv.org/abs/2009.13184. * Impagliazzo and Paturi [2001] Russell Impagliazzo and Ramamohan Paturi. On the Complexity of $k$-SAT. _J. Comput. Syst. Sci._ , 62(2):367–375, 2001\. URL https://doi.org/10.1006/jcss.2000.1727. * Impagliazzo et al. [2001] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which Problems Have Strongly Exponential Complexity? _J. Comput. Syst. Sci._ , 63(4):512–530, 2001\. URL https://doi.org/10.1006/jcss.2001.1774. * Kawarabayashi and Reed [a] Ken-ichi Kawarabayashi and Bruce A. Reed. 
# Quasi Static Atmospheric Model for Aircraft Trajectory Prediction and Flight Simulation

Eduardo Gallo111The author holds an MSc in Aerospace Engineering from the Polytechnic University of Madrid and has twenty-two years of experience working in aircraft trajectory prediction, modeling, and flight simulation. He is currently a Senior Trajectory Prediction and Aircraft Performance Engineer at Boeing Research & Technology Europe (BR&TE), although he is publishing this article in his individual capacity and time as part of his PhD thesis titled “Autonomous Unmanned Air Vehicle GNSS-Denied Navigation”, advised by Dr. Antonio Barrientos within the Centre for Automation and Robotics of the Polytechnic University of Madrid. 222Contact<EMAIL_ADDRESS>https://orcid.org/0000-0002-7397-0425 (January 2021)

## Abstract

Aircraft trajectory prediction requires the determination of the atmospheric properties (pressure, temperature, and density) encountered by the aircraft during its flight. This is accomplished by employing a tabulated prediction published by a meteorological service, a static atmosphere model that does not consider the variation of the atmosphere with time or horizontal position, such as the International Civil Aviation Organization (ICAO) Standard Atmosphere (ISA), or a variation of the latter that better conforms with the expected flight conditions. This article proposes an easy-to-use quasi static model that introduces temperature and pressure variations while respecting all the hypotheses of the ISA model, resulting in more realistic trajectory predictions, as the obtained atmospheric properties bear a higher resemblance to the local conditions encountered during the flight. The proposed model relies on two parameters, the temperature and pressure offsets, and converges to the ISA model when both are zero. Expressions to obtain the atmospheric properties are established, and their dependencies on both parameters explained.
The author calls this model INSA (ICAO Non Standard Atmosphere) and releases its C++ implementation as open-source software [1].

## 1 Introduction - Influence of Atmosphere in Trajectory Prediction

A six degrees of freedom flight simulation is composed of different modules, such as the aircraft performances, the navigation and control systems, the onboard sensors, the mission or flight plan, the flight deck inputs, and models for the wind and atmosphere. The accuracy of the resulting trajectory depends on the resemblance of the different modules to the real entities that they represent. In particular, the atmospheric conditions (pressure, temperature, and density) are a key factor in the resulting trajectory, as they heavily influence the aircraft performances, both aerodynamic (lift, drag, moments of the control surfaces) and propulsive (power plant thrust and torque, fuel consumption). Different performances induce different climb and descent angles, together with different optimum air speeds and altitudes, all of which result in different time and altitude profiles for the resulting trajectory. A three degrees of freedom simulation is sufficient when it is not necessary to obtain a detailed description of the dynamics of the different flight maneuvers, as is generally the case for Air Traffic Management (ATM) applications. In this case the simulation modules are only the aircraft performances (lift, drag, thrust, and fuel consumption), the mission or flight plan, and models for the wind and atmosphere. A more efficient ATM system needs to be more predictable, allowing higher automation, which requires precise trajectory computation and hence an accurate aircraft performance model such as BADA [2], together with an atmosphere model capable of closely representing the real atmospheric conditions encountered during the flight [3, 4].
The elevated uncertainty of the current ATM system is related to the aircraft intentions (mission or flight plan) and its performances, together with the wind and atmospheric predictions. ATM systems are evolving towards an operational concept known as Trajectory Based Operations (TBO) that aims to enable a more coordinated decision making process between the different stakeholders. Such systems will rely on human centered automation schemes with widespread use of Decision Support Tools (DSTs), each containing an accurate trajectory predictor [5, 6]. An in-depth description of the components of trajectory prediction, including the atmosphere model, together with their importance in the reduction of the ATM system uncertainty, is presented in [7]. One of the main challenges for the success of the emerging TBO concepts is the coordination between the onboard automation systems, such as the Flight Management System (FMS), and the different ground based DSTs; these systems each rely on their own trajectory predictors with their different components, some of which may be proprietary. The importance of synchronization between the trajectory predictions generated onboard the aircraft and on the ground for successful TBO operations is described in [8]. Research suggests that the most important factor in the accuracy of the resulting trajectory is the meteorological data (wind and atmosphere), and in that regard ground based systems usually have access to more accurate predictions [9]. However, nowadays the FMS is in command, which disconnects both the pilot and the ground controller from direct control over the flight, even though ground based trajectory prediction is more accurate because it has access to more detailed and up to date wind and atmosphere predictions than those stored in the FMS [10, 11].
Communication and synchronization of wind and atmospheric predictions between the air and the ground is hence necessary for the successful interaction among the different DSTs of a TBO based ATM system. This article focuses on how to effectively provide a flight simulation or trajectory predictor with atmospheric properties that, while capturing the intrinsic variations caused by altitude, also adapt to the temperature and pressure differences that exist between different Earth locations as well as those at the same location at different times. The proposed easy-to-use model also facilitates determining the influence of different atmospheric scenarios on the resulting trajectory, as it is possible to model the presence anywhere along the trajectory of warmer (colder) conditions, or that of a high (low) pressure front. Meteorological services publish weather predictions [12, 13] covering a given area for the next few days as tables providing the atmospheric properties based on time, longitude, latitude, and altitude; they also provide past data in the same format. This is adequate when simulating a given flight not long before its departure, or when replicating the exact conditions of a prior flight, but not too practical if the objective is to determine the influence of varying atmospheric conditions on the proposed mission. The International Civil Aviation Organization (ICAO) Standard Atmosphere (ISA) [14] is an atmosphere model created for the purposes of aircraft instrument calibration and aircraft performance rating. It is based on a series of hypotheses that capture the relationships among the different atmospheric variables, but it is by construction a static model intended for standardization and only captures the atmosphere variations with altitude, but not those with time and horizontal position. The ISA model is however widely employed for flight simulations and trajectory prediction. There also exist models based on similar hypotheses, such as the U.S. 
Standard Atmosphere [15] or the Jet Standard Atmosphere, but they share the same shortcomings when applied to trajectory prediction. This article describes an atmospheric model specifically designed for the requirements of trajectory prediction and flight simulation. It enables the introduction of user defined variations of temperature and pressure based on time and horizontal position, but respects all the hypotheses included in ISA, hence ensuring a realistic variation of the atmospheric properties with altitude. As it is based on the same hypotheses as ISA, the author has called it the ICAO Non Standard Atmosphere, or INSA. After introducing a few basic concepts in section 2, section 3 provides a high level description of what comprises an atmospheric model and introduces the architecture of the proposed INSA model. The differences between the ISA and INSA models are described in section 4, followed by section 5, which lists the hypotheses on which ISA relies, shared by INSA. Section 6 comprises the bulk of this article and contains the derivation of the INSA model equations. Section 7 adds important suggestions on how to best employ the proposed model and adjust it to ground observations. Finally, the conclusions are provided in section 8.

## 2 Preliminary Concepts

Before focusing on the INSA atmosphere model itself, this section introduces several concepts that are required to properly understand the rest of the article:

* • The World Geodetic System 1984 (WGS84) [16] is the de facto standard for aircraft geopositioning and navigation, defining the WGS84 ellipsoid as the reference Earth surface.
* • In a static atmosphere, the potential energy of an air particle is given by its gravity potential, i.e. the sum of the potential caused by the Earth gravitation plus that created by the Earth rotation around its axis [14].
For aircraft positioning and navigation, the WGS84 ellipsoid can be considered as a geopotential surface333An isopotential or geopotential surface is one in which all points have the same potential value. and adopted as mean sea level (MSL).

Figure 1: The NED frame and geodetic coordinates

* • Vectors representing different kinematic or dynamic aspects of the aircraft motion with respect to the Earth can be viewed in the Earth Centered Earth Fixed (ECEF) frame, a Cartesian reference system $\mathrm{F_E = \left\{O_E,\, \mathbf{i}_1^E,\, \mathbf{i}_2^E,\, \mathbf{i}_3^E\right\}}$ shown in figure 1, where $\mathrm{O_E}$ is located at the Earth center of mass (WGS84 ellipsoid center), $\mathbf{i}_3^E$ points towards the geodetic North pole along the Earth rotation axis (ellipsoid symmetry axis), $\mathbf{i}_1^E$ is contained in both the Equator and Greenwich meridian planes pointing towards zero longitude, and $\mathbf{i}_2^E$ is orthogonal to $\mathbf{i}_1^E$ and $\mathbf{i}_3^E$, forming a right hand system.
* • The ECEF frame is also useful to define the geodetic coordinates $\mathrm{\mathbf{x}_{GDT} = \left[\lambda,\, \varphi,\, h\right]^T}$, also shown in figure 1, which enable the positioning of any point with respect to the Earth. _Longitude_ $\lambda$ $\left(0 \leq \lambda < 2\pi\right)$ is the angle between the Greenwich meridian plane (formed by $\mathbf{i}_1^E$ and $\mathbf{i}_3^E$) and the point's meridian plane; _latitude_ $\varphi$ $\left(-\pi/2 \leq \varphi \leq \pi/2\right)$ is the angle between the Equator plane and the line through the point that is orthogonal to the ellipsoid surface at the point where it intersects it; and _geodetic altitude_ $h$ is the distance between the ellipsoid surface and the point, measured along a line orthogonal to the ellipsoid surface (mean sea level, MSL) at the point where it intersects it.
$\mathrm{h_{MSL} = 0}$ (1)

* • _Geopotential altitude_ $H$ is defined so that, when moving from a geopotential surface in a direction normal to it, the same differential work is performed by the gravity acceleration $g_c$ when displacing the unit of mass a geodetic distance $dh$ as that performed by the standard acceleration of free fall $g_0$ (table 1) when displacing the unit of mass a distance $dH$ [14]. The relationship between geopotential and geodetic altitudes is hence the following:

$\mathrm{-g_c\ dh = -g_0\ dH}$ (2)

Being a geopotential surface, mean sea level is taken as the reference not only for geodetic altitudes but also for geopotential ones [14]:

$\mathrm{H_{MSL} = 0}$ (3)

The relationship between the geodetic and geopotential altitudes is obtained by integrating (2) between mean sea level ($\mathrm{h_{MSL} = H_{MSL} = 0}$) and a generic point.
$\mathrm{\int_{h_{MSL}=0}^{h} g_c\ dh = g_0 \int_{H_{MSL}=0}^{H} dH}$ (4)

As the gravity acceleration $\mathbf{g}_c$ on an ellipsoidal Earth depends on both latitude and geodetic altitude, (4) lacks an explicit solution, although the results can easily be tabulated:

$\mathrm{H = f\left(h,\, \varphi\right)}$ (5)
$\mathrm{h = f\left(H,\, \varphi\right)}$ (6)

The influence of latitude is very small and the penalties for implementing a tabulated solution are significant, so it is common practice to employ a simplified solution obtained by solving (4) based on a spherical Earth surface of radius $R_E$ (table 1) [14], a spherical gravitation obtained by removing all terms except the first from the Earth Gravitational Model 1996 (EGM96) [17], and an average centrifugal effect. This results in:

$\mathrm{H = \dfrac{R_E \cdot h}{R_E + h}}$ (7)
$\mathrm{h = \dfrac{R_E \cdot H}{R_E - H}}$ (8)

## 3 Static and Quasi Static Atmosphere Models

When simulating the flight of an aircraft or predicting its trajectory, it is generally necessary to estimate the atmospheric properties (pressure $p$, temperature $T$, and density $\rho$) of the air through which the aircraft flies, and sometimes also their time derivatives.
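The simplified conversion (7)-(8) between geodetic and geopotential altitude is straightforward to implement. A minimal sketch follows; the function names are illustrative and not taken from the author's released library:

```cpp
#include <cassert>
#include <cmath>

// Earth nominal radius R_E [m], from the ISA constants (table 1).
constexpr double R_E = 6356766.0;

// Geopotential altitude H from geodetic altitude h, per (7).
double geopotential_altitude(double h) {
    return R_E * h / (R_E + h);
}

// Geodetic altitude h from geopotential altitude H, per (8).
double geodetic_altitude(double H) {
    return R_E * H / (R_E - H);
}
```

Note that (7) and (8) are exact inverses of each other, and that $H < h$ always holds for positive altitudes; at a geodetic altitude of 11000 m the geopotential altitude is about 19 m lower.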
The atmospheric properties can vary with both time and geodetic position:

$\mathrm{\left[p,\, T,\, \rho\right]^T = \mathbf{f}\left(t,\, \mathbf{x}_{GDT}\right) = \mathbf{f}\left(t,\, \lambda,\, \varphi,\, h\right)}$ (9)

Based on (8), the atmosphere model can also be expressed as:

$\mathrm{\left[p,\, T,\, \rho\right]^T = \mathbf{f}\left(t,\, \lambda,\, \varphi,\, H\right)}$ (10)

A _static atmosphere model_ neglects the influence of time and horizontal position, and only provides the atmosphere dependency on geopotential altitude. ISA [14] is such a static model.

$\mathrm{\left[p,\, T,\, \rho\right]^T = \mathbf{f}\left(H\right)}$ (11)

Static atmosphere models are too restrictive for flight simulation and trajectory prediction, as they do not enable a proper representation of the variation of the atmospheric properties with time at a fixed location, or between two different locations at the same time.
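For reference, a static ISA model (11) below the tropopause can be sketched from the troposphere temperature law together with the standard closed-form pressure solution that follows from the ISA hypotheses listed in section 5 (the hydrostatic equation combined with the perfect gas law). This is an illustrative sketch, not the author's released implementation:

```cpp
#include <cassert>
#include <cmath>

// ISA constants (table 1).
constexpr double g0   = 9.80665;    // standard acceleration of free fall [m/s^2]
constexpr double R    = 287.05287;  // specific air constant [m^2 / (K s^2)]
constexpr double T0   = 288.15;     // standard temperature at mean sea level [K]
constexpr double p0   = 101325.0;   // standard pressure at mean sea level [Pa]
constexpr double beta = -6.5e-3;    // temperature gradient below the tropopause [K/m]

// Standard temperature, valid in the troposphere (H <= 11000 m).
double isa_temperature(double H) { return T0 + beta * H; }

// Standard pressure in the troposphere, obtained by integrating
// dp = -rho g0 dH with p = rho R T and T = T0 + beta H.
double isa_pressure(double H) {
    return p0 * std::pow(isa_temperature(H) / T0, -g0 / (beta * R));
}

// Density from the perfect gas law.
double isa_density(double H) { return isa_pressure(H) / (R * isa_temperature(H)); }
```

The sketch reproduces the well-known checks: 216.65 K and roughly 22632 Pa at the tropopause, and 1.225 kg/m³ at sea level.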
A _quasi static atmosphere model_ acknowledges that the variations of the atmospheric properties with time and horizontal position are much smaller than those with altitude, and hence their time derivatives can be neglected:

$\mathrm{\left[p,\, T,\, \rho\right]^T = \mathbf{f}\left(t,\, \lambda,\, \varphi,\, H\right)}$ (12)
$\mathrm{\dfrac{d\left[p,\, T,\, \rho\right]^T}{dt} = \dfrac{d\,\mathbf{f}\left(t,\, \lambda,\, \varphi,\, H\right)}{dt} \approx \dfrac{\partial\,\mathbf{f}\left(t,\, \lambda,\, \varphi,\, H\right)}{\partial H}\ \dfrac{dH}{dt}}$ (13)

This article proposes a quasi static atmosphere model called the ICAO Non Standard Atmosphere (INSA), represented in figure 2, that separates the computation of the atmospheric properties (12) into two steps:

1. The atmosphere variation with time and horizontal position is modeled through two parameters named the temperature and pressure offsets ($\Delta T$, $\Delta p$):

$\mathrm{\left[\Delta T,\, \Delta p\right]^T = \mathbf{f}_1\left(t,\, \lambda,\, \varphi\right)}$ (14)

2.
At each time and horizontal position, the atmospheric properties are given by a static atmospheric model that complies with the hypotheses of the ISA model [14]:

$\mathrm{\left[p,\, T,\, \rho\right]^T = \mathbf{f}_2\left(H,\, \Delta T,\, \Delta p\right)}$ (15)

Figure 2: Quasi static non standard atmosphere model flow diagram

This model presents several advantages for flight simulation and trajectory prediction, as it separates the weather (14) from the local equilibrium of the atmosphere (15). The former is not defined, but allows the external user to employ any two three-dimensional functions $\mathbf{f}_1$ to represent the weather conditions encountered throughout the specific trajectory being computed. The _temperature offset_ $\Delta T$ represents the atmospheric temperature variations that occur daily444The temperature at a given location generally increases during the morning and decreases during the evening. or seasonally555The temperature at a given location in the Northern hemisphere is generally higher in summer and lower in winter. at a single location, as well as those that occur at the same time at different latitudes666Locations near the poles are generally colder than those near the Equator. or simply at different Earth locations, while the _pressure offset_ $\Delta p$ represents the atmospheric pressure changes due to the presence of high or low pressure weather systems. Note that neither of these parameters models the decrease of atmospheric temperature and pressure with altitude, which is captured by (15).
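The two-step structure (14)-(15) maps naturally onto a simple interface. The sketch below assumes a trivially constant-offset weather function $\mathbf{f}_1$ (a placeholder, since $\mathbf{f}_1$ is deliberately left to the user) and, for step 2, uses only the sea-level relations stated in section 4 ($p_{MSL} = p_0 + \Delta p$ and $T_{H_P=0} = T_0 + \Delta T$) combined with the perfect gas law; all function names are illustrative:

```cpp
#include <cassert>
#include <cmath>

// ISA mean sea level constants (table 1).
constexpr double T0 = 288.15;     // [K]
constexpr double p0 = 101325.0;   // [Pa]
constexpr double R  = 287.05287;  // [m^2 / (K s^2)]

// Step 1 output (14): the two offsets encoding the weather at (t, lon, lat).
struct Offsets {
    double dT;  // temperature offset [K]
    double dp;  // pressure offset [Pa]
};

// Placeholder weather model f1: constant offsets everywhere (a warm,
// low-pressure scenario). A real f1 would interpolate forecast tables.
Offsets f1(double /*t*/, double /*lon*/, double /*lat*/) {
    return {15.0, -500.0};
}

// Step 2 samples at the reference surfaces described in section 4:
double temperature_at_Hp0(const Offsets& o) { return T0 + o.dT; }  // T at H_P = 0
double pressure_at_msl(const Offsets& o)    { return p0 + o.dp; }  // p at H = 0

// Density at the H_P = 0 surface via the perfect gas law, where the
// pressure equals p0 by definition of pressure altitude.
double density_at_Hp0(const Offsets& o) {
    return p0 / (R * temperature_at_Hp0(o));
}
```

With both offsets zero the sketch degenerates to ISA sea-level conditions (density 1.225 kg/m³); a positive $\Delta T$ lowers the density at the same pressure.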
The static atmosphere model represented by (15) provides the variation of the atmospheric properties with geopotential altitude along an infinitesimally narrow column of air normal to the Earth geopotential surfaces, which is uniquely defined by its temperature and pressure offsets. As the aircraft position changes during its flight, so do the offsets, and hence the relationship between the atmospheric properties and geopotential altitude.

## 4 Standard and Non Standard Static Atmosphere Models

The ICAO International Standard Atmosphere model, also known as the _standard atmosphere_ or ISA, is a static model (11) defined by [14] that provides expressions for the atmospheric pressure, standard temperature, and density as functions of the pressure altitude $H_P$:

$\mathrm{p,\, T_{ISA},\, \rho = f\,(H_P)}$ (16)

Note that _pressure altitude_ $H_P$ is defined as the geopotential altitude $H$ that occurs in standard conditions, and _standard temperature_ $T_{ISA}$ is the atmospheric temperature that would occur at a given pressure altitude if the conditions were those of the standard atmosphere. These two variables are also widely employed in non standard conditions below, but it is important to remark that in general $\mathrm{T_{ISA} \neq T}$ and $\mathrm{H_P \neq H}$. _Standard conditions_ are those provided by ISA (16), while _standard mean sea level conditions_, identified by the subindex “$H_P = 0$”, correspond to their values where the pressure altitude is zero ($\mathrm{H_{P,H_P=0} = 0}$).
They are $\mathrm{p_{H_P=0} = p_0}$, $\mathrm{T_{ISA,H_P=0} = T_0}$, and $\mathrm{\rho_{ISA,H_P=0} = \rho_0}$. The values of these constants are shown in table 1, which lists all the constants defined in [14] that are employed throughout this article. Additionally, the ISA atmosphere is divided into two layers called the _troposphere_ below and the _stratosphere_ above; their separation is known as the _tropopause_.

Constant | Definition | Value | Units
---|---|---|---
$\mathrm{g_0}$ | standard acceleration of free fall | 9.80665 | $\mathrm{\left[m/sec^2\right]}$
$\mathrm{R_E}$ | Earth nominal radius | 6356766.0 | $\mathrm{\left[m\right]}$
$\mathrm{p_0}$ | standard pressure at mean sea level | 101325 | $\mathrm{\left[kg/m\,sec^2\right]}$
$\mathrm{T_0}$ | standard temperature at mean sea level | 288.15 | $\mathrm{\left[{}^{\circ}K\right]}$
$\mathrm{\rho_0}$ | standard density at mean sea level | 1.225 | $\mathrm{\left[kg/m^3\right]}$
$\mathrm{R}$ | specific air constant | 287.05287 | $\mathrm{\left[m^2/^{\circ}K\;sec^2\right]}$
$\mathrm{H_{P,TROP}}$ | tropopause pressure altitude | 11000 | $\mathrm{\left[m\right]}$
$\mathrm{\beta_{T,<}}$ | temperature gradient below tropopause | $\mathrm{-6.5\cdot 10^{-3}}$ | $\mathrm{\left[{}^{\circ}K/m\right]}$
$\mathrm{\beta_{T,>}}$ | temperature gradient above tropopause | 0 | $\mathrm{\left[{}^{\circ}K/m\right]}$

Table 1: Constants defined by ISA [14]

This article defines a _non standard atmosphere_ or INSA as a static model (15) based on the same hypotheses as ISA but in which $\Delta T$ or $\Delta p$ (or both) is not zero. Accordingly, _non standard conditions_ are those provided by an INSA model. The _temperature offset_ $\Delta T$ and the _pressure offset_ $\Delta p$ are the differences in mean sea level conditions between a given INSA and ISA. _Mean sea level_ conditions, identified by the subindex “MSL”, are those that occur where the geopotential altitude is zero ($\mathrm{H_{MSL} = 0}$), and differ from $\mathrm{\left(p_{H_P=0},\, T_{ISA,H_P=0},\, \rho_{ISA,H_P=0}\right)}$ in non standard conditions. The limitations of using a static model such as ISA for trajectory prediction are obvious, and it is common to use a parameter similar to $\Delta T$ to correct the temperature values provided by ISA, in what are known as “OFF ISA” conditions. Correcting for local pressure variations is far less common in ground based trajectory prediction, although the Query Nautical Height (QNH) and the Query Field Elevation (QFE) are employed to adjust the aircraft altimeters (and hence their FMS trajectory predictions) in the vicinity of airports777In this context, QNE refers to ISA conditions, and is only employed above a certain transition altitude.. QNH represents the pressure altitude at mean sea level conditions ($\mathrm{h = H = 0}$), and QFE the pressure altitude at the airport.
All these ISA modifications are valid as long as the aircraft does not fly too far from the location where they apply, because their values ($\Delta T$, QNH, and QFE) are constant; as such, they can only be employed for short trajectory segments (such as descents). The quasi static INSA model introduced in this article generalizes these modifications to the static ISA model into a comprehensive and easy-to-use scheme that enables continuous variations with time and horizontal position of the temperature and pressure differences with respect to ISA, while respecting all ISA hypotheses at the local level.

Variable | Standard Mean Sea Level | Mean Sea Level | Tropopause
---|---|---|---
$\mathrm{H_P}$ | $\mathrm{H_{P,H_P=0} = 0}$ | $\mathrm{H_{P,MSL}}$ (41) | $\mathrm{H_{P,TROP} = 11000\left[m\right]}$
$\mathrm{H}$ | $\mathrm{H_{H_P=0}}$ (49) | $\mathrm{H_{MSL} = 0}$ | $\mathrm{H_{TROP}}$ (47)
$\mathrm{p}$ | $\mathrm{p_{H_P=0} = p_0}$ | $\mathrm{p_{MSL} = p_0 + \Delta p}$ | $\mathrm{p_{TROP}}$ (37)
$\mathrm{T_{ISA}}$ | $\mathrm{T_{ISA,H_P=0} = T_0}$ | $\mathrm{T_{ISA,MSL}}$ (28) | $\mathrm{T_{ISA,TROP}}$ (26)
$\mathrm{T}$ | $\mathrm{T_{H_P=0} = T_0 + \Delta T}$ | $\mathrm{T_{MSL}}$ (32) | $\mathrm{T_{TROP}}$ (30)

Table 2: Standard mean sea level, mean sea level, and tropopause atmospheric conditions

Table 2 shows the values of the atmospheric variables at standard mean sea level conditions ($\mathrm{H_P = 0}$), mean sea level conditions (MSL), and at the tropopause (TROP), identifying whether the value corresponds to a definition obtained from ISA [14] or the previous paragraphs, or whether it is obtained from an expression derived in this article. A non standard atmosphere INSA is uniquely identified by its $\Delta T$ and $\Delta p$ values, resulting in (15). If both are zero, the non standard atmosphere converges to ISA. An INSA hence provides expressions for the atmospheric pressure, temperature, and density as functions of the geopotential altitude $H$ and its two offset parameters (15).

## 5 Hypotheses of the ICAO Standard Atmosphere - ISA

The non standard atmospheric model INSA proposed in this article is based on the same hypotheses as the ISA model, which are the following [14]:

* • The law of perfect gases reflects the relationship between the atmospheric properties, where $R$ (table 1) is the specific air constant:

$\mathrm{p = \rho\, R\, T}$ (17)

* • Every air column corresponding to a given time and horizontal location is static relative to the Earth surface, so the fluid equilibrium of forces in the direction normal to the Earth geopotential surfaces is the following:

$\mathrm{dp = -\rho\, g_c\, dh = -\rho\, g_0\, dH}$ (18)

* • The tropopause altitude $\mathrm{H_{P,TROP}}$ (table 1) is constant when expressed in terms of pressure altitude.
* • Each atmosphere layer is characterized by a constant temperature gradient with pressure altitude $\beta_T$ (table 1), as shown in figure 3:

Figure 3: $\mathrm{dT/dH_P}$ versus $\mathrm{H_P}$

$\mathrm{\dfrac{dT}{dH_P} = \left\{\begin{array}{lcl} \beta_{T,<} & \longrightarrow & H_P \leq H_{P,TROP} \\ \beta_{T,>} & \longrightarrow & H_P > H_{P,TROP} \end{array}\right.}$ (19)

## 6 Relationships among Atmospheric Variables

As indicated by (15), a non standard atmosphere INSA identified by its temperature and pressure offsets ($\Delta T$ and $\Delta p$) defines the atmospheric pressure, temperature, and density as functions of geopotential altitude and the two offsets. These dependencies are repeated here for clarity:

$\mathrm{\left[p,\, T,\, \rho\right]^T = \mathbf{f}_2\left(H,\, \Delta T,\, \Delta p\right)}$ (20)

As shown in figure 4, the least complex way of obtaining these variables is to first obtain the pressure altitude $H_P$ per (21).
Atmospheric pressure can then be obtained as a sole function of $\mathrm{H_P}$ (22), while temperature also depends on $\mathrm{\Delta T}$ but not $\mathrm{\Delta p}$ (23). The standard temperature $\mathrm{T_{ISA}}$, useful to simplify the previous expressions, is also a sole function of pressure altitude (24). Density $\mathrm{\rho}$ is finally obtained per (17) as a function of pressure and temperature. The following sections derive these expressions from the hypotheses listed above.

$\mathrm{H_P=f\left(H,\,\Delta T,\,\Delta p\right)}$ (21)

$\mathrm{p=f\left(H_P\right)}$ (22)

$\mathrm{T=f\left(H_P,\,\Delta T\right)}$ (23)

$\mathrm{T_{ISA}=f\left(H_P\right)}$ (24)

Figure 4: Non standard atmosphere model flow diagram (static part)

### Standard Temperature

The standard temperature $\mathrm{T_{ISA}}$ is that occurring at a given pressure altitude $\mathrm{H_P}$ in standard conditions ($\mathrm{\Delta T=\Delta p=0}$). As indicated by (24), it is a function of pressure altitude $\mathrm{H_P}$ exclusively.
Its troposphere expression is obtained by integrating (19) in standard conditions between standard mean sea level ($\mathrm{H_P=H_{P,H_P=0}=0}$, $\mathrm{T_{ISA}=T_{ISA,H_P=0}=T_0}$) and any point below the tropopause.

$\mathrm{T_{ISA,<}=T_0+\beta_{T,<}\;H_{P,<}}$ (25)

Figure 5: $\mathrm{T_{ISA}}$ versus $\mathrm{H_P}$

The stratosphere expression is obtained by integrating (19) between the tropopause ($\mathrm{H_P=H_{P,TROP}}$, $\mathrm{T_{ISA}=T_{ISA,TROP}}$) and any point above it, where $\mathrm{T_{ISA,TROP}}$ is obtained from (25):

$\mathrm{T_{ISA,TROP}=T_0+\beta_{T,<}\;H_{P,TROP}}$ (26)

$\mathrm{T_{ISA,>}=T_{ISA,TROP}+\beta_{T,>}\;\left(H_{P,>}-H_{P,TROP}\right)=T_{ISA,TROP}}$ (27)

Note that (24), which is a combination of (25) and (27), can not be reversed because of the constant standard temperature in the stratosphere. The relationship between $\mathrm{T_{ISA}}$ and $\mathrm{H_P}$, graphically represented in figure 5, is valid for all INSA non standard atmospheres as it does not depend on $\mathrm{\Delta T}$ or $\mathrm{\Delta p}$. The standard temperature at mean sea level $\mathrm{T_{ISA,MSL}}$ shown in table 4 is obtained by inserting the mean sea level pressure altitude $\mathrm{H_{P,MSL}}$ obtained in (41) into (25), resulting in:

$\mathrm{T_{ISA,MSL}=T_0+\beta_{T,<}\;H_{P,MSL}}$ (28)

### Temperature

Temperature T is a function of both pressure altitude $\mathrm{H_P}$ and temperature offset $\mathrm{\Delta T}$ (23).
The expression for the troposphere is also obtained by integrating (19), represented in figure 3, between standard mean sea level ($\mathrm{H_P=H_{P,H_P=0}=0}$, $\mathrm{T=T_{H_P=0}=T_0+\Delta T}$), where the temperature is taken from the temperature offset definition, and any point below the tropopause:

$\mathrm{T_{<}=T_0+\Delta T+\beta_{T,<}\;H_{P,<}=T_{ISA,<}+\Delta T}$ (29)

The stratosphere expression is obtained by integrating (19) between the tropopause ($\mathrm{H_P=H_{P,TROP}}$, $\mathrm{T=T_{TROP}}$) and any point above it, where $\mathrm{T_{TROP}}$ is obtained from (29):

$\mathrm{T_{TROP}=T_0+\Delta T+\beta_{T,<}\;H_{P,TROP}=T_{ISA,TROP}+\Delta T}$ (30)

$\mathrm{T_{>}=T_{TROP}+\beta_{T,>}\;\left(H_{P,>}-H_{P,TROP}\right)=T_{TROP}}$ (31)

Note that (23), which is a combination of (29) and (31), can not be reversed because of the constant temperature in the stratosphere. The relationship between $\mathrm{T}$ and $\mathrm{H_P}$ is graphically represented in figure 6 for various temperature offsets $\mathrm{\Delta T}$, as it does not depend on the pressure offset $\mathrm{\Delta p}$.
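Since the offset shifts the whole profile, expressions (25) through (31) collapse into $\mathrm{T=T_{ISA}(H_P)+\Delta T}$ in both layers. A minimal C++ sketch, assuming the standard ISA constants ($\mathrm{T_0=288.15\,K}$, $\mathrm{\beta_{T,<}=-6.5\cdot 10^{-3}\,K/m}$, $\mathrm{H_{P,TROP}=11\,000\,m}$) and using illustrative function names:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Standard ISA constants (table 4 values)
constexpr double T0 = 288.15;        // [K]   standard MSL temperature
constexpr double BETA = -6.5e-3;     // [K/m] troposphere gradient beta_{T,<}
constexpr double HP_TROP = 11000.0;  // [m]   tropopause pressure altitude

// Standard temperature T_ISA(H_P), eqs. (25)-(27): linear below the
// tropopause, constant above it (beta_{T,>} = 0).
double isa_temperature(double hp) {
    return T0 + BETA * std::min(hp, HP_TROP);
}

// Temperature T(H_P, dT), eqs. (29)-(31): the offset shifts the whole
// standard profile, so T = T_ISA + dT in both layers.
double temperature(double hp, double dT) {
    return isa_temperature(hp) + dT;
}
```

At the tropopause, `isa_temperature(11000.0)` evaluates to 216.65 K, the familiar ISA value.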
Figure 6: T versus $\mathrm{H_P}$ for various $\mathrm{\Delta T}$

The temperature at mean sea level $\mathrm{T_{MSL}}$ shown in table 4 is obtained by inserting the mean sea level pressure altitude $\mathrm{H_{P,MSL}}$ obtained in (41) into (29), resulting in:

$\mathrm{T_{MSL}=T_0+\Delta T+\beta_{T,<}\;H_{P,MSL}}$ (32)

### Pressure

The relationship between pressure p and pressure altitude $\mathrm{H_P}$ does not depend on either the temperature or pressure offsets (22).
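The closed-form results derived below in this section, (35) and (38) for $\mathrm{p(H_P)}$ and their inverses (39) and (40), can be sketched in C++ as follows; a hedged sketch assuming the standard ISA constants, with illustrative function names:

```cpp
#include <cassert>
#include <cmath>

// Standard ISA constants
constexpr double T0 = 288.15;                  // [K]
constexpr double P0 = 101325.0;                // [Pa]
constexpr double BETA = -6.5e-3;               // [K/m] beta_{T,<}
constexpr double HP_TROP = 11000.0;            // [m]
constexpr double G0 = 9.80665;                 // [m/s^2]
constexpr double R_AIR = 287.05287;            // [J/(kg K)]

constexpr double EXPN = -G0 / (BETA * R_AIR);  // exponent in (35)
const double T_ISA_TROP = T0 + BETA * HP_TROP;                        // (26)
const double P_TROP = P0 * std::pow(1.0 + BETA / T0 * HP_TROP, EXPN); // (37)

// p(H_P): power law (35) below the tropopause, exponential (38) above it.
double pressure(double hp) {
    if (hp <= HP_TROP)
        return P0 * std::pow(1.0 + BETA / T0 * hp, EXPN);
    return P_TROP * std::exp(-G0 / (R_AIR * T_ISA_TROP) * (hp - HP_TROP));
}

// H_P(p): the inverses (39) and (40).
double pressure_altitude(double p) {
    if (p >= P_TROP)
        return T0 / BETA * (std::pow(p / P0, 1.0 / EXPN) - 1.0);
    return HP_TROP - R_AIR * T_ISA_TROP / G0 * std::log(p / P_TROP);
}
```

Evaluating `pressure(11000.0)` gives the familiar tropopause value of roughly 22 632 Pa, and `pressure_altitude` recovers the altitude in both layers.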
The ratio between differential increments of pressure and pressure altitude is obtained by combining the law of perfect gases (17) with the atmosphere fluid equilibrium (18) in standard conditions ($\mathrm{H_P=H}$ and $\mathrm{T_{ISA}=T}$), resulting in:

$\mathrm{\frac{dp}{p}=-\frac{g_0}{R\;T_{ISA}}\;dH_P}$ (33)

Figure 7: $\mathrm{dp/dH_P}$ versus $\mathrm{H_P}$

Below the tropopause, expression (25) can be inserted into (33), resulting in:

$\mathrm{\frac{dp_{<}}{p_{<}}=-\frac{g_0}{R}\;\frac{dH_{P,<}}{T_0+\beta_{T,<}\;H_{P,<}}}$ (34)

Its integration between standard mean sea level conditions ($\mathrm{H_P=H_{P,H_P=0}=0}$, $\mathrm{p=p_{H_P=0}=p_0}$) and any point below the tropopause results in:

$\mathrm{p_{<}=p_0\;\left(1+\frac{\beta_{T,<}}{T_0}\;H_{P,<}\right)^{-g_0/(\beta_{T,<}\,R)}}$ (35)

In the case of the stratosphere, inserting (27) into (33) results in:

$\mathrm{\frac{dp_{>}}{p_{>}}=-\frac{g_0}{R}\;\frac{dH_{P,>}}{T_{ISA,TROP}}}$ (36)

This expression can then be integrated between the tropopause ($\mathrm{H_P=H_{P,TROP}}$, $\mathrm{p=p_{TROP}}$) and any point above it, where $\mathrm{p_{TROP}}$ is obtained from (35):

$\mathrm{p_{TROP}=p_0\;\left(1+\frac{\beta_{T,<}}{T_0}\;H_{P,TROP}\right)^{-g_0/(\beta_{T,<}\,R)}}$ (37)

$\mathrm{p_{>}=p_{TROP}\;\exp\left[\frac{-g_0}{R\;T_{ISA,TROP}}\left(H_{P,>}-H_{P,TROP}\right)\right]}$ (38)

Figure 8: p versus $\mathrm{H_P}$

The relationship between $\mathrm{p}$ and $\mathrm{H_P}$,
graphically represented in figure 8, is valid for all INSA non standard atmospheres as it does not depend on $\mathrm{\Delta T}$ or $\mathrm{\Delta p}$. Note that atmospheric pressure decreases more slowly with pressure altitude as the pressure altitude increases. Expressions (35) and (38) are easily reversed, resulting in:

$\mathrm{H_{P,<}=\frac{T_0}{\beta_{T,<}}\;\left[\left(\frac{p_{<}}{p_0}\right)^{-\beta_{T,<}\,R/g_0}-1\right]}$ (39)

$\mathrm{H_{P,>}=H_{P,TROP}-\frac{R\;T_{ISA,TROP}}{g_0}\;\log_n\left(\frac{p_{>}}{p_{TROP}}\right)}$ (40)

The mean sea level pressure altitude $\mathrm{H_{P,MSL}}$ is obtained by replacing $\mathrm{p_{<}}$ with $\mathrm{p_{MSL}=p_0+\Delta p}$ within (39), where $\mathrm{p_{MSL}}$ is taken from its table 4 definition:

$\mathrm{H_{P,MSL}=\frac{T_0}{\beta_{T,<}}\;\left[\left(\frac{p_{MSL}}{p_0}\right)^{-\beta_{T,<}\,R/g_0}-1\right]}$ (41)

### Geopotential Altitude

The combination of the law of perfect gases (17) and the fluid static atmosphere vertical equilibrium (18) results in:

$\mathrm{\frac{dp}{p}=-\frac{g_0}{R\;T}\;dH}$ (42)

Dividing (42) by (33) results in the ratio between differential changes in geopotential
and pressure altitudes, which is equal to the ratio between the atmospheric temperature and the temperature that would occur at the same pressure altitude if the atmospheric conditions were standard. This ratio is represented in figure 9. Note that geopotential altitude H grows faster than pressure altitude $\mathrm{H_P}$ in warm atmospheres ($\mathrm{\Delta T>0}$), and more slowly in cold ones.

$\mathrm{\frac{dH}{dH_P}=\frac{T}{T_{ISA}}}$ (43)

Figure 9: $\mathrm{dH/dH_P}$ versus $\mathrm{H_P}$ for various $\mathrm{\Delta T}$

In the troposphere, the introduction of (25) and (29) into (43) results in:

$\mathrm{\dfrac{dH_{<}}{dH_{P,<}}=\frac{T_0+\Delta T+\beta_{T,<}\;H_{P,<}}{T_0+\beta_{T,<}\;H_{P,<}}=1+\frac{\Delta T}{T_0+\beta_{T,<}\;H_{P,<}}}$ (44)

Its integration between mean sea level conditions
($\mathrm{H_P=H_{P,MSL}}$, $\mathrm{H=H_{MSL}=0}$) and any point below the tropopause results in:

$\mathrm{H_{<}=H_{P,<}-H_{P,MSL}+\frac{\Delta T}{\beta_{T,<}}\;\log_n\left(\frac{T_0+\beta_{T,<}\;H_{P,<}}{T_{ISA,MSL}}\right)}$ (45)

where $\mathrm{H_{P,MSL}}$ is given by (41) and $\mathrm{T_{ISA,MSL}}$ by (28). In the stratosphere, the introduction of (27) and (31) into (43) results in:

$\mathrm{\dfrac{dH_{>}}{dH_{P,>}}=\frac{T_0+\Delta T+\beta_{T,<}\;H_{P,TROP}}{T_0+\beta_{T,<}\;H_{P,TROP}}=\frac{T_{TROP}}{T_{ISA,TROP}}=1+\frac{\Delta T}{T_0+\beta_{T,<}\;H_{P,TROP}}}$ (46)

Expression (46) can then be integrated between the tropopause ($\mathrm{H_P=H_{P,TROP}}$, $\mathrm{H=H_{TROP}}$) and any point above it, where $\mathrm{H_{TROP}}$ is obtained from (45):

$\mathrm{H_{TROP}=H_{P,TROP}-H_{P,MSL}+\frac{\Delta T}{\beta_{T,<}}\;\log_n\left(\frac{T_0+\beta_{T,<}\;H_{P,TROP}}{T_{ISA,MSL}}\right)}$ (47)

$\mathrm{H_{>}=H_{TROP}+\frac{T_{TROP}}{T_{ISA,TROP}}\;\left(H_{P,>}-H_{P,TROP}\right)}$ (48)

The standard mean sea level geopotential altitude $\mathrm{H_{H_P=0}}$, shown in table 4, is obtained by replacing $\mathrm{H_{P,<}}$ with $\mathrm{H_{P,H_P=0}=0}$ in (45), resulting in:

$\mathrm{H_{H_P=0}=-H_{P,MSL}+\frac{\Delta T}{\beta_{T,<}}\;\log_n\left(\frac{T_0}{T_{ISA,MSL}}\right)}$ (49)

Figure 10: H versus $\mathrm{H_P}$ at $\mathrm{\Delta p=0}$ for various $\mathrm{\Delta T}$

Note that while both $\mathrm{\Delta T}$ and $\mathrm{\Delta p}$ influence the relationship between H and $\mathrm{H_P}$ (21), they do so in different ways.
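Expressions (41), (45), (47), and (48) chain together into a single map from pressure altitude and the two offsets to geopotential altitude. A minimal C++ sketch with the standard ISA constants and an illustrative function name:

```cpp
#include <cassert>
#include <cmath>

// Standard ISA constants
constexpr double T0 = 288.15, P0 = 101325.0;   // [K], [Pa]
constexpr double BETA = -6.5e-3;               // [K/m] beta_{T,<}
constexpr double HP_TROP = 11000.0;            // [m]
constexpr double G0 = 9.80665, R_AIR = 287.05287;

// Geopotential altitude H(H_P, dT, dp): eq. (45) below the tropopause,
// (47)-(48) above it, with H_{P,MSL} from (41) and T_{ISA,MSL} from (28).
double geopotential_altitude(double hp, double dT, double dp) {
    double hp_msl = T0 / BETA
        * (std::pow((P0 + dp) / P0, -BETA * R_AIR / G0) - 1.0);      // (41)
    double t_isa_msl = T0 + BETA * hp_msl;                           // (28)
    auto tropo = [&](double x) {                                     // (45)
        return x - hp_msl + dT / BETA * std::log((T0 + BETA * x) / t_isa_msl);
    };
    if (hp <= HP_TROP) return tropo(hp);
    double t_isa_trop = T0 + BETA * HP_TROP;                         // (26)
    return tropo(HP_TROP)                                            // (47)
         + (t_isa_trop + dT) / t_isa_trop * (hp - HP_TROP);          // (48)
}
```

With both offsets zero the map is the identity, matching (50); warm atmospheres yield H above H_P and cold ones the opposite, matching figure 10.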
While $\mathrm{\Delta T}$ sets the ratio between increments of both types of altitude, $\mathrm{\Delta p}$ acts by means of the mean sea level pressure $\mathrm{p_{MSL}}$. In other words, when representing H versus $\mathrm{H_P}$, the temperature offset $\mathrm{\Delta T}$ sets the slope and the pressure offset $\mathrm{\Delta p}$ marks the zero point.

Figure 11: H versus $\mathrm{H_P}$ at $\mathrm{\Delta T=0}$ for various $\mathrm{\Delta p}$

The dependency of the relationship between the geopotential and pressure altitudes on the temperature offset is graphically represented in figure 10 for the case without pressure offset ($\mathrm{\Delta p=0}$).
In this case, the geopotential and pressure altitudes at both mean sea level and standard mean sea level coincide:

$\mathrm{\Delta p=0\;\longrightarrow\;H_{H_P=0}=H_{P,H_P=0}=H_{MSL}=H_{P,MSL}=0}$ (50)

Both altitudes diverge as altitude increases, in accordance with figure 9. Geopotential altitude H is higher than pressure altitude $\mathrm{H_P}$ when $\mathrm{\Delta T>0}$, while the opposite is true if $\mathrm{\Delta T<0}$. As the tropopause pressure altitude $\mathrm{H_{P,TROP}}$ is constant (table 4), the corresponding $\mathrm{H_{TROP}}$ varies with $\mathrm{\Delta T}$. Figure 11 graphically represents the dependency on the pressure offset $\mathrm{\Delta p}$ for the case without temperature offset ($\mathrm{\Delta T=0}$). As the ratio between increments of both altitudes is unity (figure 9), all lines in the figure are parallel to each other and separated by the difference in their respective $\mathrm{H_{P,MSL}}$ or $\mathrm{H_{H_P=0}}$ (horizontal or vertical separation, respectively), which depend on $\mathrm{\Delta p}$.
$\mathrm{\Delta T=0\;\longrightarrow\;H=H_P-H_{P,MSL}(\Delta p)}$ (51)

Figure 12: H versus $\mathrm{H_P}$ for various $\mathrm{\Delta T}$ and $\mathrm{\Delta p}$

Figure 12 graphically shows the results when neither offset is zero. Atmospheres with the same temperature offset $\mathrm{\Delta T}$ are represented by parallel lines, while those with the same pressure offset $\mathrm{\Delta p}$ intersect the horizontal axis ($\mathrm{H=H_{MSL}=0}$) at the same point, as their $\mathrm{H_{P,MSL}}$ is the same. Note that the relationship between geopotential and pressure altitudes can only be explicitly reversed above the tropopause, resulting in (52).
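Both branches of this inversion, the explicit stratosphere expression (52) and a fixed-point iteration on (45) for the troposphere, can be sketched in C++ as follows (standard ISA constants, illustrative names; the iteration converges quickly because the logarithmic term varies slowly with $\mathrm{H_P}$):

```cpp
#include <cassert>
#include <cmath>

// Standard ISA constants
constexpr double T0 = 288.15, P0 = 101325.0;   // [K], [Pa]
constexpr double BETA = -6.5e-3;               // [K/m] beta_{T,<}
constexpr double HP_TROP = 11000.0;            // [m]
constexpr double G0 = 9.80665, R_AIR = 287.05287;

// Pressure altitude H_P(H, dT, dp): explicit inversion (52) above the
// tropopause, fixed-point iteration on (45) below it.
double pressure_altitude_from_h(double h, double dT, double dp) {
    double hp_msl = T0 / BETA
        * (std::pow((P0 + dp) / P0, -BETA * R_AIR / G0) - 1.0);      // (41)
    double t_isa_msl = T0 + BETA * hp_msl;
    double t_isa_trop = T0 + BETA * HP_TROP;
    double h_trop = HP_TROP - hp_msl
        + dT / BETA * std::log(t_isa_trop / t_isa_msl);              // (47)
    if (h > h_trop)                                                  // (52)
        return HP_TROP + t_isa_trop / (t_isa_trop + dT) * (h - h_trop);
    double hp = h;  // initial guess
    for (int i = 0; i < 100; ++i) {
        // (45) rearranged as hp = h + hp_msl - (dT/beta) log(...)
        double next = h + hp_msl
            - dT / BETA * std::log((T0 + BETA * hp) / t_isa_msl);
        if (std::fabs(next - hp) < 1e-10) return next;
        hp = next;
    }
    return hp;
}
```

With both offsets zero the inversion is again the identity, consistent with (50).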
In the troposphere it is solved by iteration, which converges quickly.

$\mathrm{H_{P,>}=H_{P,TROP}+\frac{T_{ISA,TROP}}{T_{TROP}}\;\left(H_{>}-H_{TROP}\right)}$ (52)

## 7 Suggested Use for the Non Standard Atmosphere Model

The ICAO Non Standard Atmosphere or INSA is a quasi static model based on expressions (14) and (15). The previous sections of this article have focused on the static part of the model (15), that is, how to obtain the atmospheric properties at a given geopotential altitude once the temperature and pressure offsets at the current aircraft horizontal position and time have already been determined through (14), which is repeated here for clarity.

$\mathrm{\left[\Delta T,\,\Delta p\right]^{T}=\mathbf{f}_{1}\left(t,\,\lambda,\,\varphi\right)}$ (53)

The INSA model does not define the implementation of expression (53), instead letting the user customize the functions providing the temperature and pressure offsets so that they resemble the conditions encountered by the aircraft as accurately as possible. In this section the author suggests a possible implementation of (53) for illustration purposes only; it is up to the user to implement (53) with the required level of realism. Let us assume, without any loss of generality, that the trajectory being analyzed corresponds to a flight between two given airports, whose geodetic coordinates are $\mathrm{\mathbf{x}_{GDT,1}=\left[\lambda_1,\,\varphi_1,\,h_1\right]^{T}}$ and $\mathrm{\mathbf{x}_{GDT,2}=\left[\lambda_2,\,\varphi_2,\,h_2\right]^{T}}$.
Let us also consider that the aircraft is expected to depart from $\mathrm{\mathbf{x}_{GDT,1}}$ at time $\mathrm{t_1}$ and arrive at $\mathrm{\mathbf{x}_{GDT,2}}$ at $\mathrm{t_2}$. The temperature and pressure offsets at both airports can be manually set by the user according to the specific conditions to be tested in the simulation. For example, the departure airport $\mathrm{\mathbf{x}_{GDT,1}}$ and time $\mathrm{t_1}$ may correspond to a high-latitude Northern hemisphere location on a winter night, which would imply a significantly negative $\mathrm{\Delta T_1}$, while the landing airport $\mathrm{\mathbf{x}_{GDT,2}}$ and time $\mathrm{t_2}$ may be those of daytime in the Southern hemisphere close to the tropics, where, it being summer, $\mathrm{\Delta T_2}$ should be significantly positive. Similarly, a low pressure weather front may be active at the departure airport, implying a negative $\mathrm{\Delta p_1}$, while high pressures may be prevalent at the landing location, resulting in a positive $\mathrm{\Delta p_2}$. Note that the altitudes of both airports ($\mathrm{h_1}$ and $\mathrm{h_2}$) do not play any role in the resulting offsets. As explained below, the offsets at both locations may also be determined based on observations taken at both airports at times when the conditions were similar to those expected during the flight.
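With the offsets fixed at the two airports, a linear variation with time between the departure and arrival values can serve as a simple realization of (53). A sketch with illustrative names (`OffsetPoint` and `offsets_at` are not part of the INSA definition):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative type: offsets pinned at a given time (e.g. departure t1
// and arrival t2).
struct OffsetPoint { double t, dT, dp; };

// Linear variation of both offsets with time between points a and b,
// clamped outside [a.t, b.t]; a minimal realization of (53).
OffsetPoint offsets_at(double t, const OffsetPoint& a, const OffsetPoint& b) {
    double s = std::clamp((t - a.t) / (b.t - a.t), 0.0, 1.0);
    return { t, a.dT + s * (b.dT - a.dT), a.dp + s * (b.dp - a.dp) };
}
```

Extra intermediate points or a longitude-latitude-time grid, as discussed in this section, would replace the two endpoints with the pair (or cell) nearest the aircraft.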
Once $\mathrm{\Delta T}$ and $\mathrm{\Delta p}$ at both airports have been determined, a simple model for (53) is to have both parameters vary linearly as the flight progresses, from their values at the departure airport ($\mathrm{\Delta T_1}$ and $\mathrm{\Delta p_1}$) to those expected when landing ($\mathrm{\Delta T_2}$ and $\mathrm{\Delta p_2}$). Further realism may be achieved by adding extra points throughout the flight, so that the linear interpolation occurs between the two points closest to the aircraft position. The highest possible accuracy is obtained by employing a grid of temperature and pressure offsets based on longitude, latitude, and time, and using three-dimensional interpolation to obtain the value of both offsets at each position and time during the flight.

### Offsets Identification from Ground Observations

This section shows how to determine the values of the temperature and pressure offsets based on atmospheric pressure and temperature measurements ($\mathrm{p_D,\,T_D}$) taken at a given position and time ($\mathrm{\lambda_D,\,\varphi_D,\,h_D,\,t_D}$), usually corresponding to those of an airport or meteorological station, which is assumed to be in the troposphere. A similar approach can be used if the data is taken from a meteorological service such as [12, 13]. The process is the following:

1. The location geodetic altitude $\mathrm{h_D}$ is converted into geopotential altitude $\mathrm{H_D}$ based on (7):

$\mathrm{H_D=\dfrac{R_E\cdot h_D}{R_E+h_D}}$ (54)

2.
Pressure $\mathrm{p_D}$ is converted into pressure altitude $\mathrm{H_{P,D}}$ based on (39) and then into standard temperature $\mathrm{T_{ISA,D}}$ based on (25). Neither conversion depends on the temperature or pressure offsets:

$\mathrm{H_{P,D}=\frac{T_0}{\beta_{T,<}}\;\left[\left(\frac{p_D}{p_0}\right)^{-\beta_{T,<}\,R/g_0}-1\right]}$ (55)

$\mathrm{T_{ISA,D}=T_0+\beta_{T,<}\;H_{P,D}}$ (56)

3. The temperature offset $\mathrm{\Delta T_D}$ is obtained from the difference between the real temperature measurement $\mathrm{T_D}$ and the standard temperature $\mathrm{T_{ISA,D}}$ based on (29):

$\mathrm{\Delta T_D=T_D-T_{ISA,D}}$ (57)

4. The combination of expressions (25), (28), and (45) results in the following relationship between troposphere standard temperature $\mathrm{T_{ISA,<}}$, mean sea level standard temperature $\mathrm{T_{ISA,MSL}}$, troposphere geopotential altitude $\mathrm{H_{<}}$, and temperature offset $\mathrm{\Delta T}$.
It can be solved to obtain $\displaystyle\mathrm{T_{\scriptscriptstyle ISA,MSL,D}}$ from $\displaystyle\mathrm{T_{\scriptscriptstyle ISA,D}}$, $\displaystyle\mathrm{H_{\scriptscriptstyle D}}$, and $\displaystyle\mathrm{\Delta T_{\scriptscriptstyle D}}$. $\mathrm{H_{\scriptscriptstyle<}=\frac{1}{\beta_{\scriptscriptstyle T,<}}\,\left[T_{\scriptscriptstyle ISA,<}-T_{\scriptscriptstyle ISA,MSL}+\Delta T\;\ln\,\left(\frac{T_{\scriptscriptstyle ISA,<}}{T_{\scriptscriptstyle ISA,MSL}}\right)\right]}$ (58) 5. The mean sea level pressure altitude $\displaystyle\mathrm{H_{\scriptscriptstyle P,MSL,D}}$ is obtained from the mean sea level standard temperature $\displaystyle\mathrm{T_{\scriptscriptstyle ISA,MSL,D}}$ based on (28): $\mathrm{H_{\scriptscriptstyle P,MSL,D}=\frac{T_{\scriptscriptstyle ISA,MSL,D}-T_{\scriptscriptstyle 0}}{\beta_{\scriptscriptstyle T,<}}}$ (59) 6. The mean sea level pressure altitude $\displaystyle\mathrm{H_{\scriptscriptstyle P,MSL,D}}$ is converted into mean sea level pressure $\displaystyle\mathrm{p_{\scriptscriptstyle MSL,D}}$ based on (35): $\mathrm{p_{\scriptscriptstyle MSL,D}=p_{\scriptscriptstyle 0}\ \left(1+\frac{\beta_{\scriptscriptstyle T,<}}{T_{\scriptscriptstyle 0}}\ H_{\scriptscriptstyle P,MSL,D}\right)^{\scriptscriptstyle\dfrac{-g_{\scriptscriptstyle 0}}{\beta_{\scriptscriptstyle T,<}\;R}}}$ (60) 7.
The pressure offset $\displaystyle\mathrm{\Delta p_{\scriptscriptstyle D}}$ is obtained according to its definition as the difference between the mean sea level pressure $\displaystyle\mathrm{p_{\scriptscriptstyle MSL,D}}$ and the standard pressure at mean sea level $\displaystyle\mathrm{p_{\scriptscriptstyle 0}}$: $\mathrm{\Delta p_{\scriptscriptstyle D}=p_{\scriptscriptstyle MSL,D}-p_{\scriptscriptstyle 0}}$ (61) Once both offsets $\displaystyle\mathrm{\Delta T_{\scriptscriptstyle D}}$ and $\displaystyle\mathrm{\Delta p_{\scriptscriptstyle D}}$ have been identified, the atmospheric properties at any altitude can be computed using the expressions derived in this article. ## 8 Conclusions This article describes an easy-to-use quasi-static atmospheric model suited to the requirements of aircraft trajectory prediction and flight simulation. Named the ICAO Non Standard Atmosphere or INSA, it models the possible variations of temperature and pressure with time and horizontal position by means of two parameters, the temperature and pressure offsets. Once these two parameters have been identified for a given location and time, the model provides the variation of the atmospheric properties (temperature, pressure, and density) with altitude while complying with all the hypotheses of the ICAO Standard Atmosphere or ISA model [14]. The INSA model can be customized so that the temperature and pressure offsets match the expected conditions during flight, resulting in more accurate predictions of the atmospheric properties than those provided by ISA. The author has implemented an open-source C++ version of the INSA model, available in [1].
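As an illustration, the seven-step offset-identification procedure above, equations (54) through (61), can be condensed into a short Python sketch. This is a non-authoritative sketch: the constants are the standard ICAO ISA values, the function name is invented, and a simple bisection is used to solve the transcendental equation (58) for the mean sea level standard temperature, since the text only states that the equation "can be solved".

```python
import math

# Standard ISA constants (ICAO Doc 7488): sea-level temperature and pressure,
# tropospheric lapse rate, gravity, air gas constant, nominal Earth radius.
T0, P0 = 288.15, 101325.0        # K, Pa
BETA = -0.0065                   # K/m (troposphere temperature gradient)
G0, R_AIR = 9.80665, 287.05287   # m/s^2, J/(kg K)
R_E = 6356766.0                  # m

def offsets_from_observation(h_d, p_d, t_d):
    """Return (dT, dp) from a tropospheric observation at geodetic
    altitude h_d [m], pressure p_d [Pa], and temperature t_d [K]."""
    # 1. geodetic -> geopotential altitude, eq. (54)
    H_d = R_E * h_d / (R_E + h_d)
    # 2. pressure -> pressure altitude -> standard temperature, eqs. (55)-(56)
    Hp_d = (T0 / BETA) * ((p_d / P0) ** (-BETA * R_AIR / G0) - 1.0)
    T_isa_d = T0 + BETA * Hp_d
    # 3. temperature offset, eq. (57)
    dT = t_d - T_isa_d
    # 4. solve eq. (58) for T_ISA,MSL,D; the residual is monotone in the
    #    unknown, so a bisection suffices (solver choice is an assumption).
    def resid(t_msl):
        return (T_isa_d - t_msl + dT * math.log(T_isa_d / t_msl)) / BETA - H_d
    lo, hi = 180.0, 360.0        # generous bracket in Kelvin
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if resid(lo) * resid(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    T_isa_msl = 0.5 * (lo + hi)
    # 5.-6. mean sea level pressure altitude and pressure, eqs. (59)-(60)
    Hp_msl = (T_isa_msl - T0) / BETA
    p_msl = P0 * (1.0 + BETA / T0 * Hp_msl) ** (-G0 / (BETA * R_AIR))
    # 7. pressure offset, eq. (61)
    return dT, p_msl - P0
```

With both offsets identified at the departure and arrival airports, the per-flight model of the previous section then amounts to blending $\mathrm{(\Delta T_{1},\,\Delta p_{1})}$ and $\mathrm{(\Delta T_{2},\,\Delta p_{2})}$ linearly by the fraction of flight completed.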
## Acknowledgments The content of this article is mostly taken from work that the author performed under contract for the European Organization for the Safety of Air Navigation (EUROCONTROL), which holds all relevant intellectual property rights (©2021 All rights reserved). The outcome of that work is described in [18], which has restricted distribution. The author would like to thank EUROCONTROL for their permission to publish a section of that work as part of his PhD thesis. ## References * [1] E. Gallo, “Quasi Static Atmosphere Model for Aircraft Trajectory Prediction and Flight Simulation.” https://github.com/edugallogithub/nonstandard-atmosphere, 2020. C++ open source code. * [2] A. Nuic, D. Poles, and V. Mouillet, “BADA: An Advanced Aircraft Performance Model for Present and Future ATM Systems,” International Journal of Adaptive Control and Signal Processing, 2010. * [3] E. Gallo, F. Navarro, A. Nuic, and M. Iagaru, “Advanced Aircraft Performance Modeling for ATM: BADA 4.0 Results,” in 2006 IEEE/AIAA 25th Digital Avionics Systems Conference, 2006. * [4] E. Gallo, J. Lopez-Leones, M. Vilaplana, F. Navarro, and A. Nuic, “Trajectory computation Infrastructure based on BADA Aircraft Performance Model,” in 2007 IEEE/AIAA 26th Digital Avionics Systems Conference, 2007. * [5] J. Lopez-Leones, M. Vilaplana, E. Gallo, F. Navarro, and C. Querejeta, “The Aircraft Intent Description Language: A Key Enabler for Air-Ground Synchronization in Trajectory-Based Operations,” in 2007 IEEE/AIAA 26th Digital Avionics Systems Conference, 2007. * [6] S. Ruiz, J. Lopez-Leones, and A. Ranieri, “A Novel Performance Framework and Methodology to Analyze the Impact of 4D Trajectory Based Operations in the Future Air Traffic Management System,” Journal of Advanced Transportation, 2018. * [7] J. Bronsvoort and E.
Gallo, “Model for a Combined Air-Ground Approach to Closed-Loop Trajectory Prediction in Support of Trajectory Management,” in AIAA Modeling and Simulation Technologies (MST) Conference, 2013. * [8] J. Bronsvoort, G. McDonald, M. Paglione, C. Young, J. Boucquey, J. Hochwarth, and E. Gallo, “Real-Time Trajectory Predictor Calibration through Extended Projected Profile DownLink,” in Eleventh USA/Europe Air Traffic Management Research and Development Seminar (ATM2015), 2015. * [9] S. Swierstra, C. Garcia-Avello, J. Bronsvoort, G. McDonald, and I. Bayraktutar, “The Dawn of the Trajectory: Air-Ground Trajectory Synchronization for Improved ATM Performance,” in Ninth USA/Europe Air Traffic Management Research and Development Seminar, 2011. * [10] J. Bronsvoort, Contributions to Trajectory Prediction Theory and its Application to Arrival Management for Air Traffic Control. PhD thesis, Polytechnic University of Madrid, 2014. * [11] S. Swierstra and C. Garcia-Avello, “Emperor SESAR has No Clothes.” 2019. * [12] National Centers for Environmental Information, National Oceanic and Atmospheric Administration. https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs. * [13] European Centre for Medium Range Weather Forecasts. https://www.ecmwf.int/en/forecasts. * [14] “Manual of the ICAO International Standard Atmosphere,” tech. rep., International Civil Aviation Organization, 2000. ICAO DOC-7488/3, $\displaystyle\mathrm{3^{\scriptscriptstyle rd}}$ edition. * [15] “U.S. Standard Atmosphere,” tech. rep., National Aeronautics and Space Administration, 1976. NASA-TM-X-74335, N77-16482. * [16] “Department of Defense World Geodetic System 1984,” tech. rep., National Imagery and Mapping Agency, January 2000. NIMA TR8350.2 Amendment 1, $\displaystyle\mathrm{3^{\scriptscriptstyle rd}}$ edition. * [17] “The Development of the Joint NASA GSFC and NIMA Geopotential Model EGM96,” tech. rep., National Aeronautics and Space Administration, July 1998.
$\displaystyle\mathrm{3^{\scriptscriptstyle rd}}$ edition. * [18] E. Gallo and A. Nuic, “Concept Document for the Base of Aircraft Data (BADA) Family 4,” tech. rep., Eurocontrol Experimental Centre, November 2012. EEC Technical Scientific Report No. 12/11/22-57, restricted distribution.
# Interference Alignment Using Reaction in Molecular Interference Channels Maryam Farahnak-Ghazani, Mahtab Mirmohseni, and Masoumeh Nasiri-Kenari Sharif University of Technology ###### Abstract Co-channel interference (CCI) is a performance limiting factor in molecular communication (MC) systems with a shared medium. Interference alignment (IA) is a promising scheme to mitigate CCI in traditional communication systems. Due to the signal-dependent noise in MC systems, however, traditional IA schemes are less useful in this setting. In this paper, we propose a novel IA scheme in molecular interference channels (IFCs), based on the choice of releasing/sampling times. To cancel the aligned interference signals and reduce the signal-dependent noise, we use molecular reaction in the proposed IA scheme. We obtain the feasible region for the releasing/sampling times in the proposed scheme. Further, we investigate the error performance of the proposed scheme. Our results show that the proposed IA scheme using reaction improves the performance significantly.††This work was supported in part by the Iran National Science Foundation (INSF) Research Grant on Nano-Network Communications and in part by the Research Center of Sharif University of Technology. ††The authors are with the Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran (email: <EMAIL_ADDRESS><EMAIL_ADDRESS>mnasiri@sharif.edu). ## I Introduction In MC networks, where molecules are used as information carriers [1], multiple transmitter-receiver pairs may operate in a shared medium [2]. In this case, if the transmitters use the same molecule type for communication and operate at the same time, there is co-channel interference (CCI) and we have a molecular interference channel (IFC). Inter-symbol interference (ISI) and CCI are two performance limiting factors in MC systems [3]. ISI mitigating techniques in MC systems are presented in [4, 5, 6]. The effect of CCI is considered in [3, 7].
To avoid CCI in MC systems, the common approach is to use a different molecule type for each transmitter-receiver pair, which is analogous to using different frequencies in wireless communication systems. As shown in [3], the effect of CCI is negligible beyond a certain distance called the molecular re-use distance, after which the same molecule type can be used; this is similar to the concept of frequency re-use distance in wireless communication systems. However, when the distance is less than the molecular re-use distance, CCI is not negligible, and we should use different molecule types to avoid CCI. Using different molecule types, we achieve the degrees of freedom (DoF) of $\frac{1}{2}$. However, as shown in [8], a $K$-user IFC with a single antenna at each transmitter and receiver has a DoF of $\frac{K}{2}$, i.e., each user can reach half of the capacity it would reach in the absence of interference. Interference alignment (IA) is a technique to increase the DoF of a network when the distances between the transmitter-receiver pairs are less than the re-use distance and the interference signal is comparable to the desired signal. The idea of IA is to align the signal vectors at the receivers such that the interference signals occupy the same space, while the desired signal lies in a separate space [9]. With the help of IA techniques, we can use the resources in the network more efficiently. In molecular communication (MC) systems, due to the low data rate and the limited number of available molecule types, IA techniques can be very beneficial. In this paper, we propose an IA scheme to mitigate the effect of CCI when the distance is less than the re-use distance. In classic communication systems, for the $K$-user IFC with time/frequency-varying channel coefficients, an asymptotic IA scheme is proposed to achieve the DoF of $\frac{K}{2}$ as the number of time slots or frequencies used for a super-symbol increases [8].
This scheme can be applied to MC systems using multiple time slots or multiple molecule types to form a super-symbol. However, this scheme is not practical, since achieving the DoF of $\frac{K}{2}$ requires a very large number of time slots or molecule types. Another scheme in classic IFCs is to align interference signals by changing the channel coefficients or the propagation delays [8]. To adapt this idea to MC systems, in this paper we change the channel coefficients by choosing different releasing times at the transmitters and/or sampling times at the receivers. Based on this idea, we propose an IA scheme by the choice of releasing/sampling times in molecular IFCs. The transmitters and receivers should be synchronized for the proposed IA scheme. Synchronization methods in MC systems are investigated in [10, 11, 12]. In this paper, to avoid complexity, we assume perfect synchronization among the transmitters and receivers. We also face another challenge in applying IA schemes to MC systems. Most IA schemes in classic communications, e.g., the asymptotic IA scheme, require a high signal-to-noise ratio (SNR) [9]. However, in MC systems, there is usually a signal-dependent receiver noise [1], which results in high noise levels for high signal levels, and hence the classic IA schemes may not be very useful in MC systems. In this paper, we propose using molecular reaction [5] to cancel the aligned interference signals in the medium and reduce the signal-dependent noise. We investigate the error performance and show that the proposed IA scheme using reaction improves the performance significantly. Our main contributions in this paper are as follows: • We apply the asymptotic IA scheme of classic communications to a $K$-user molecular IFC.
• We propose an IA scheme based on the choice of releasing/sampling times in molecular systems for a 3-user IFC using two molecule types, and use reaction in the proposed IA scheme to cancel the aligned interference signals in the medium and to reduce the signal-dependent noise. • We obtain the feasible region for the releasing/sampling times in the proposed IA scheme with and without reaction. For a special case of the releasing/sampling times, we simplify the equations. • We investigate the error performance of the interference channel using the proposed IA scheme with and without reaction. It is seen that the IA scheme with reaction improves the performance of the system significantly. The structure of the paper is as follows: In Section II, we describe the system model. In Section III, we describe the asymptotic IA and the proposed IA schemes. In Section IV, we investigate the optimum and sub-optimum sampling times, and in Section V, we obtain the error probabilities of the IA schemes. The numerical results are given in Section VI. Finally, in Section VII we conclude the paper. Notation: Throughout the paper, vectors and matrices are shown in bold letters, $\bm{X}^{\textrm{T}}$ denotes the transpose of a vector or matrix $\bm{X}$, $\textrm{diag}(\bm{A})$ is a diagonal matrix whose entries are the elements of the vector $\bm{A}$, and $\bm{A}\equiv\bm{B}$ is equivalent to $\textrm{column-span}(\bm{A})=\textrm{column-span}(\bm{B})$. ## II System Model We consider an IFC with $K$ molecular transmitter-receiver pairs (Fig. 1). The locations of the $j$-th transmitter ($\textrm{Tx}_{j}$) and the $i$-th receiver ($\textrm{Rx}_{i}$), for $i,j\in\\{1,...,K\\}$, are denoted by $\bm{r}_{j}^{\textrm{Tx}}$ and $\bm{r}_{i}^{\textrm{Rx}}$, respectively, and the message of $\textrm{Tx}_{j}$ is denoted by $M_{j}\in\\{0,...,M-1\\}$.
The time is slotted with duration $T_{\textrm{s}}$. Figure 1: $K$-user molecular IFC. Transmitter: Each $\textrm{Tx}_{j},j=1,...,K$ uses the same $L$ molecule types, and releases $X_{j}^{[l]}=N_{j}V_{j}^{[l]},l=1,...,L$ molecules of type $l$ at time $\tilde{t}_{j}^{[l]}$, where $0<\tilde{t}_{j}^{[l]}<T_{\textrm{s}}$ and $N_{j}\in\\{\zeta_{0},...,\zeta_{M-1}\\}$ is the total number of molecules of all types released from $\textrm{Tx}_{j}$, i.e., $\sum_{l=1}^{L}X_{j}^{[l]}=N_{j}$, which requires $\sum_{l=1}^{L}V_{j}^{[l]}=1$. We use concentration shift keying (CSK) modulation at the transmitters, i.e., for the message $M_{j}=m$, $m=0,...,M-1$, the total number of molecules released at $\textrm{Tx}_{j}$ is $N_{j}=\zeta_{m}$. We denote the numbers of released molecules of all types from $\textrm{Tx}_{j}$ with the vector $\bm{X}_{j}=[X_{j}^{[1]},...,X_{j}^{[L]}]^{\textrm{T}}=N_{j}\bm{V}_{j}$, where $\bm{V}_{j}=[V_{j}^{[1]},...,V_{j}^{[L]}]^{\textrm{T}}$ is called the beamforming vector. Channel: The channel impulse response from $\textrm{Tx}_{j}$ to $\textrm{Rx}_{i}$ for the $l$-th molecule type, denoted by $h_{ij}^{[l]}(t)$, is defined as the concentration of the $l$-th molecule type at $\textrm{Rx}_{i}$ at time $t$ when one molecule is released from $\textrm{Tx}_{j}$ at time $\tilde{t}_{j}^{[l]}$. If different molecule types do not interfere with each other, $h_{ij}^{[l]}(t)$ can be obtained from the advection-diffusion equation for 3-D space as [1] $\displaystyle h_{ij}^{[l]}(t)=\frac{1[t>\tilde{t}_{j}^{[l]}]}{(4\pi D_{l}(t-\tilde{t}_{j}^{[l]}))^{\frac{3}{2}}}e^{-\frac{||\bm{r}_{i}^{\textrm{Rx}}-\bm{r}_{j}^{\textrm{Tx}}-\bm{\nu}(t-\tilde{t}_{j}^{[l]})||^{2}}{4D_{l}(t-\tilde{t}_{j}^{[l]})}},$ (1) for $i,j\in\\{1,...,K\\}$ and $l\in\\{1,...,L\\}$, where $D_{l}$ is the diffusion coefficient of the $l$-th molecule type and $\bm{\nu}$ is the flow velocity of the medium.
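The impulse response (1) is straightforward to evaluate numerically. The sketch below is a direct transcription of the formula, assuming SI units and a constant flow velocity; the function name and argument conventions are illustrative assumptions, not part of the paper.

```python
import math

def channel_impulse_response(t, t_release, r_rx, r_tx, D, flow):
    """Concentration [molecules/m^3] at position r_rx and time t when one
    molecule is released at r_tx at time t_release, per eq. (1): the 3-D
    advection-diffusion Green's function with diffusion coefficient D
    [m^2/s] and constant flow velocity `flow` [m/s]."""
    dt = t - t_release
    if dt <= 0.0:                      # indicator 1[t > t_release]
        return 0.0
    # squared effective displacement: receiver minus transmitter position,
    # shifted by the drift of the medium over the elapsed time
    d2 = sum((ri - si - vi * dt) ** 2 for ri, si, vi in zip(r_rx, r_tx, flow))
    return math.exp(-d2 / (4.0 * D * dt)) / (4.0 * math.pi * D * dt) ** 1.5
```

Without flow, the response at distance $r$ peaks at $t-\tilde{t}_{j}^{[l]}=r^{2}/(6D_{l})$, which provides a convenient sanity check for the implementation.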
For simplicity of exposition, for the analytical derivations, we assume that the ISI is negligible in the channel, which can be ensured by using enzymes that remove the molecules remaining from previous transmissions in the medium [6]. In Section VI, we investigate the error probability of the proposed IA scheme in the presence of ISI using simulation. In the absence of ISI, the concentration of the $l$-th molecule type at $\textrm{Rx}_{i}$ from $\textrm{Tx}_{j}$ can be obtained as $C_{ij}^{[l]}(t)=X_{j}^{[l]}h_{ij}^{[l]}(t)$. Receiver: Each receiver $\textrm{Rx}_{i},i=1,...,K$ is assumed to be transparent with volume $V_{\textrm{R}}$ (radius $r_{\textrm{R}}$), and counts the number of molecules that enter its volume. We assume that $\textrm{Rx}_{i}$ uses a one-sample decoder with sampling time $t_{i}^{[l]}$ for the $l$-th molecule type, where ${\tilde{t}_{j}^{[l]}<t_{i}^{[l]}<T_{\textrm{s}}}$, $i,j\in\\{1,...,K\\},l\in\\{1,...,L\\}$. (The one-sample decoder is commonly used at the receiver in MC systems, as in [6, 13].) The releasing and sampling times at each transmitter and receiver are shown in Fig. 1 for two molecule types, i.e., $L=2$. Assuming uniform concentration in the receiver volume, which is valid if the distance between the transmitter and the receiver is sufficiently large compared to the receiver radius [14], the mean number of counted molecules at $\textrm{Rx}_{i}$ from $\textrm{Tx}_{j}$ is obtained as $\mu_{ij}^{[l]}=C_{ij}^{[l]}(t_{i}^{[l]})V_{\textrm{R}}=X_{j}^{[l]}h_{ij}^{[l]}(t_{i}^{[l]})V_{\textrm{R}}$. We denote $H_{ij}^{[l]}=h_{ij}^{[l]}(t_{i}^{[l]})V_{\textrm{R}}$. Hence, the mean number of counted molecules of type $l$ at $\textrm{Rx}_{i}$ transmitted from all transmitters is $\mu_{i}^{[l]}=\sum_{j=1}^{K}\mu_{ij}^{[l]}=\sum_{j=1}^{K}X_{j}^{[l]}H_{ij}^{[l]}$. We denote the vector of mean numbers of counted molecules of all types at $\textrm{Rx}_{i}$ by $\bm{\mu}_{i}=[\mu_{i}^{[1]},...,\mu_{i}^{[L]}]^{\textrm{T}}$.
Hence, $\bm{\mu}_{i}=\sum_{j=1}^{K}\bm{H}_{ij}\bm{X}_{j}$, where $\displaystyle\bm{H}_{ij}=\textrm{diag}(H_{ij}^{[1]},H_{ij}^{[2]},...,H_{ij}^{[L]}),\qquad i,j\in\\{1,...,K\\}.$ (2) We assume that the channel state information (CSI) is known at the transmitters and receivers. Further, we assume that the transmitters and receivers are synchronized. Noise: Let $Y_{i}^{[l]}$ be the number of counted molecules of type $l$ at $\textrm{Rx}_{i}$ transmitted from all transmitters. Assuming counting noise at the receivers and an environment noise with mean $\mu_{\textrm{n}}^{[l]}$ for the $l$-th molecule type, $Y_{i}^{[l]}$ has a Poisson distribution conditioned on the transmitter messages, with mean $\mu_{i}^{[l]}+\mu_{\textrm{n}}^{[l]}$ [1]. So, we have a signal-dependent noise at the receivers. For simplicity of the analysis, we assume $\mu_{\textrm{n}}^{[l]}=0$. However, in Section VI, we investigate the effect of the environment noise. ## III Interference Alignment In this section, we first apply the asymptotic IA scheme of [8] to the $K$-user molecular IFC, where we use different molecule types or time slots for sending a super-symbol and increase their number to approach the DoF of $\frac{K}{2}$ asymptotically. However, we should note that increasing the number of molecule types or the number of time slots is not practical. A practical solution is to partition the $K$ user pairs into clusters of $3$ user pairs and use this scheme with $3$ molecule types in each cluster, as in [15], where we reach the DoF of $\frac{4}{3}$. Further, this scheme is only useful in systems with high SNRs. Since in MC systems we face a signal-dependent receiver noise, which makes the SNR low, the traditional IA schemes are less useful. To overcome this issue, in the second part, we propose a non-asymptotic IA scheme by the choice of releasing/sampling times for a 3-user IFC using 2 molecule types, which has a DoF of $\frac{3}{2}$.
We use reaction in the proposed IA scheme to reduce the signal-dependent noise. We should note that using the asymptotic IA scheme for 3 user pairs, we cannot reach the DoF of $\frac{3}{2}$ with a limited number of molecule types (for 3 molecule types, we can reach the DoF of $\frac{4}{3}$), and we should increase the number of molecule types to reach the DoF of $\frac{3}{2}$ asymptotically. Asymptotic IA scheme: We consider a super-symbol at each transmitter $\textrm{Tx}_{i}$ consisting of $n_{\textrm{s},i}$ symbols and send it using different variations of the channel to reach the DoF of $\frac{K}{2}$, asymptotically. To obtain the varying channel gains, we can use either different molecule types in one time slot, or one molecule type in multiple time slots with different releasing/sampling times in each time slot. Here, we assume $L$ different molecule types in one time slot. Hence, according to the system model, the number of released molecules of all types from $\textrm{Tx}_{j}$ is obtained from $\bm{X}_{j}=N_{j}\bm{V}_{j}$. Using the method in [8], we assume $n_{\textrm{s},1}=(n+1)^{N}$, $n_{\textrm{s},i}=n^{N},i=2,...,K$, and $L=(n+1)^{N}+n^{N}$, in which $N=(K-1)(K-2)-1$ and $n$ is an auxiliary variable, which is increased in order to achieve the DoF of $\frac{K}{2}$.
The beamforming vectors at the transmitters are chosen as $\displaystyle\bm{V}_{1}$ $\displaystyle=\bigg{\\{}\prod_{\begin{subarray}{c}g,q\in\\{2,...,K\\},\\\ g\neq q,(g,q)\neq(2,3)\end{subarray}}(\bm{T}_{gq})^{\alpha_{gq}}\bm{w}:\forall\alpha_{gq}\in\\{0,...,n\\}\bigg{\\}},$ $\displaystyle\bm{V}_{j}$ $\displaystyle=\bm{S}_{j}\bm{B},\qquad j=2,...,K,$ (3) where $\bm{w}$ is an $L\times 1$ vector given by $\bm{w}=[1,...,1]^{\textrm{T}}$, and $\displaystyle\bm{S}_{j}=(\bm{H}_{1j})^{-1}\bm{H}_{13}(\bm{H}_{23})^{-1}\bm{H}_{21},\qquad j=2,...,K,$ $\displaystyle\bm{T}_{ij}=(\bm{H}_{i1})^{-1}\bm{H}_{ij}\bm{S}_{j},\qquad i,j=2,...,K,\quad i\neq j,$ $\displaystyle\bm{B}=\bigg{\\{}\prod_{\begin{subarray}{c}g,q\in\\{2,...,K\\},\\\ g\neq q,(g,q)\neq(2,3)\end{subarray}}(\bm{T}_{gq})^{\alpha_{gq}}\bm{w}:\forall\alpha_{gq}\in\\{0,...,n-1\\}\bigg{\\}}.$ The DoF is equal to $\frac{(n+1)^{N}+(K-1)n^{N}}{(n+1)^{N}+n^{N}}$ and by increasing $n$, we can reach the DoF of $\frac{K}{2}$. Proposed non-asymptotic IA scheme by choice of releasing/sampling times: We assume $K=3$ and $L=2$.
For the 3-user IFC, the vector of mean number of counted molecules at $\textrm{Rx}_{i}$, $i\in\\{1,2,3\\}$, is $\displaystyle\bm{\mu}_{i}$ $\displaystyle=\sum_{j=1}^{3}\bm{H}_{ij}X_{j}=\bm{H}_{i1}\bm{X}_{1}+\bm{H}_{i2}\bm{X}_{2}+\bm{H}_{i3}\bm{X}_{3}$ $\displaystyle=\bm{H}_{i1}\bm{V}_{1}N_{1}+\bm{H}_{i2}\bm{V}_{2}N_{2}+\bm{H}_{i3}\bm{V}_{3}N_{3}.$ (4) For IA, we should have $\displaystyle\textrm{Rx}_{1}:\quad\bm{H}_{12}\bm{V}_{2}\equiv\bm{H}_{13}\bm{V}_{3}\not\equiv\bm{H}_{11}\bm{V}_{1},$ (5) $\displaystyle\textrm{Rx}_{2}:\quad\bm{H}_{21}\bm{V}_{1}\equiv\bm{H}_{23}\bm{V}_{3}\not\equiv\bm{H}_{22}\bm{V}_{2},$ $\displaystyle\textrm{Rx}_{3}:\quad\bm{H}_{31}\bm{V}_{1}\equiv\bm{H}_{32}\bm{V}_{2}\not\equiv\bm{H}_{33}\bm{V}_{3}.$ From (5), if the channel matrices are such that $T=\bm{H}_{13}^{-1}\bm{H}_{12}\bm{H}_{21}^{-1}\bm{H}_{23}\bm{H}_{32}^{-1}\bm{H}_{31}\equiv I$, where $I$ is an identity matrix of order 2, or equivalently, $\displaystyle\frac{H_{13}^{[2]}}{H_{13}^{[1]}}.\frac{H_{12}^{[1]}}{H_{12}^{[2]}}.\frac{H_{21}^{[2]}}{H_{21}^{[1]}}.\frac{H_{23}^{[1]}}{H_{23}^{[2]}}.\frac{H_{32}^{[2]}}{H_{32}^{[1]}}.\frac{H_{31}^{[1]}}{H_{31}^{[2]}}=1,$ (6) we can choose the following beamforming vectors that satisfy (5): $\displaystyle\bm{V}_{1}$ $\displaystyle\equiv[1,1]^{\textrm{T}},\quad\bm{V}_{2}\equiv\bm{H}_{32}^{-1}\bm{H}_{31}\bm{V}_{1}=\left[\frac{H_{31}^{[1]}}{H_{32}^{[1]}},\frac{H_{31}^{[2]}}{H_{32}^{[2]}}\right]^{\textrm{T}},$ $\displaystyle\bm{V}_{3}$ $\displaystyle\equiv\bm{H}_{23}^{-1}\bm{H}_{21}\bm{V}_{1}=\left[\frac{H_{21}^{[1]}}{H_{23}^{[1]}},\frac{H_{21}^{[2]}}{H_{23}^{[2]}}\right]^{\textrm{T}}.$ (7) We should normalize the beamforming vectors such that we have a unit norm, i.e., $|\bm{V}_{i}|_{1}=1$. 
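The alignment conditions (5)-(7) can be verified numerically for diagonal $2\times 2$ channels. In the sketch below the channel gains are arbitrary positive numbers, except that $H_{13}^{[2]}$ is set so that the IA condition (6) holds exactly; how such channels arise from the releasing/sampling times is the subject of the following discussion, so this is only an algebraic sanity check, not the paper's procedure.

```python
import random

random.seed(0)
# H[(i, j)] = [H_ij^[1], H_ij^[2]]: the diagonal 2x2 channel from Tx_j to Rx_i.
H = {(i, j): [random.uniform(0.5, 2.0) for _ in range(2)]
     for i in (1, 2, 3) for j in (1, 2, 3)}

# Enforce the IA condition (6) by solving it for H_13^[2].
H[(1, 3)][1] = (H[(1, 3)][0]
                * H[(1, 2)][1] / H[(1, 2)][0]
                * H[(2, 1)][0] / H[(2, 1)][1]
                * H[(2, 3)][1] / H[(2, 3)][0]
                * H[(3, 2)][0] / H[(3, 2)][1]
                * H[(3, 1)][1] / H[(3, 1)][0])

def normalize(v):
    s = v[0] + v[1]                       # unit 1-norm, as required in the text
    return [v[0] / s, v[1] / s]

# Beamforming vectors from (7).
V1 = normalize([1.0, 1.0])
V2 = normalize([H[(3, 1)][l] / H[(3, 2)][l] for l in range(2)])
V3 = normalize([H[(2, 1)][l] / H[(2, 3)][l] for l in range(2)])

def apply_channel(Hij, V):                # action of a diagonal channel
    return [Hij[0] * V[0], Hij[1] * V[1]]

def parallel(a, b):                       # do two 2-D vectors span one line?
    cross = a[0] * b[1] - a[1] * b[0]
    scale = abs(a[0] * b[1]) + abs(a[1] * b[0])
    return abs(cross) < 1e-9 * scale

# Alignment (5): the two interference vectors coincide at every receiver...
assert parallel(apply_channel(H[(1, 2)], V2), apply_channel(H[(1, 3)], V3))
assert parallel(apply_channel(H[(2, 1)], V1), apply_channel(H[(2, 3)], V3))
assert parallel(apply_channel(H[(3, 1)], V1), apply_channel(H[(3, 2)], V2))
# ...while the desired signal generically stays off the interference line.
assert not parallel(apply_channel(H[(1, 1)], V1), apply_channel(H[(1, 2)], V2))
```

Note that the interference at Rx2 and Rx3 aligns by the construction of V2 and V3 alone, whereas alignment at Rx1 relies on the enforced condition (6), consistent with the discussion in the text.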
Further, (5) requires the following conditions: $\displaystyle\bm{H}_{12}\bm{V}_{2}\not\equiv\bm{H}_{11}\bm{V}_{1},~{}\bm{H}_{21}\bm{V}_{1}\not\equiv\bm{H}_{22}\bm{V}_{2},~{}\bm{H}_{31}\bm{V}_{1}\not\equiv\bm{H}_{33}\bm{V}_{3}.$ (8) We call the condition in (6) the IA condition and the conditions in (8) the independency conditions. Our goal is to change the channel coefficients such that the IA and independency conditions hold, so that we can align the interference signals at the receivers by choosing the beamforming vectors at the transmitters as in (7). Using the beamforming vectors in (7), the mean signal vectors at the receivers are obtained from (4) as $\displaystyle\textrm{Rx}_{1}:\bm{\mu}_{1}$ $\displaystyle=\frac{N_{1}}{2}\left[\begin{matrix}H_{11}^{[1]}\\\ H_{11}^{[2]}\end{matrix}\right]+\frac{N_{2}}{A}\left[\begin{matrix}H_{12}^{[1]}\frac{H_{31}^{[1]}}{H_{32}^{[1]}}\\\ H_{12}^{[2]}\frac{H_{31}^{[2]}}{H_{32}^{[2]}}\end{matrix}\right]+\frac{N_{3}}{B}\left[\begin{matrix}H_{13}^{[1]}\frac{H_{21}^{[1]}}{H_{23}^{[1]}}\\\ H_{13}^{[2]}\frac{H_{21}^{[2]}}{H_{23}^{[2]}}\end{matrix}\right],$ (9) $\displaystyle\textrm{Rx}_{2}:\bm{\mu}_{2}$ $\displaystyle=\frac{N_{1}}{2}\left[\begin{matrix}H_{21}^{[1]}\\\ H_{21}^{[2]}\end{matrix}\right]+\frac{N_{2}}{A}\left[\begin{matrix}H_{22}^{[1]}\frac{H_{31}^{[1]}}{H_{32}^{[1]}}\\\ H_{22}^{[2]}\frac{H_{31}^{[2]}}{H_{32}^{[2]}}\end{matrix}\right]+\frac{N_{3}}{B}\left[\begin{matrix}H_{21}^{[1]}\\\ H_{21}^{[2]}\end{matrix}\right],$ $\displaystyle\textrm{Rx}_{3}:\bm{\mu}_{3}$ $\displaystyle=\frac{N_{1}}{2}\left[\begin{matrix}H_{31}^{[1]}\\\ H_{31}^{[2]}\end{matrix}\right]+\frac{N_{2}}{A}\left[\begin{matrix}H_{31}^{[1]}\\\ H_{31}^{[2]}\end{matrix}\right]+\frac{N_{3}}{B}\left[\begin{matrix}H_{33}^{[1]}\frac{H_{21}^{[1]}}{H_{23}^{[1]}}\\\ H_{33}^{[2]}\frac{H_{21}^{[2]}}{H_{23}^{[2]}}\end{matrix}\right],$ where $A=\frac{H_{31}^{[1]}}{H_{32}^{[1]}}+\frac{H_{31}^{[2]}}{H_{32}^{[2]}}$ and
$B=\frac{H_{21}^{[1]}}{H_{23}^{[1]}}+\frac{H_{21}^{[2]}}{H_{23}^{[2]}}$. From (9), it can be seen that the interference signals at Rx2 and Rx3 are aligned without resorting to (6). However, to align the interference signals at Rx1, (6) needs to hold. For this purpose, we change the channel coefficients by the choice of the releasing times at the transmitters, $\tilde{t}_{j}^{[l]}$, and/or the sampling times at the receivers, $t_{i}^{[l]}$, to meet the conditions in (6) and (8). (In MC systems, the releasing time is usually chosen as zero, i.e., the molecules are released at the beginning of the time slot, and the sampling time is usually chosen such that the concentration of the molecules is maximized at that time [6]. However, it can be shown that if we use these times for each transmitter-receiver pair, the desired signal and the interference signals will all be aligned and the independency condition cannot be met.) In the following, we simplify the alignment and independency equations in (6) and (8) to obtain a feasible region for the $t_{i}^{[l]}$s and $\tilde{t}_{j}^{[l]}$s.
$\displaystyle r_{13}^{2}.f(\Delta t_{13}^{[1]},\Delta t_{13}^{[2]})-r_{12}^{2}.f(\Delta t_{12}^{[1]},\Delta t_{12}^{[2]})+r_{21}^{2}.f(\Delta t_{21}^{[1]},\Delta t_{21}^{[2]})-r_{23}^{2}.f(\Delta t_{23}^{[1]},\Delta t_{23}^{[2]})$ (10) $\displaystyle\quad+r_{32}^{2}.f(\Delta t_{32}^{[1]},\Delta t_{32}^{[2]})-r_{31}^{2}.f(\Delta t_{31}^{[1]},\Delta t_{31}^{[2]})+6\ln(\frac{\Delta t_{13}^{[1]}}{\Delta t_{13}^{[2]}}.\frac{\Delta t_{12}^{[2]}}{\Delta t_{12}^{[1]}}.\frac{\Delta t_{21}^{[1]}}{\Delta t_{21}^{[2]}}.\frac{\Delta t_{23}^{[2]}}{\Delta t_{23}^{[1]}}.\frac{\Delta t_{32}^{[1]}}{\Delta t_{32}^{[2]}}.\frac{\Delta t_{31}^{[2]}}{\Delta t_{31}^{[1]}})=0.$ $\displaystyle\textrm{Rx}_{1}:$ $\displaystyle r_{11}^{2}.f(\Delta t_{11}^{[1]},\Delta t_{11}^{[2]})-r_{12}^{2}.f(\Delta t_{12}^{[1]},\Delta t_{12}^{[2]})+r_{32}^{2}.f(\Delta t_{32}^{[1]},\Delta t_{32}^{[2]})-r_{31}^{2}.f(\Delta t_{31}^{[1]},\Delta t_{31}^{[2]})+6\ln(\frac{\Delta t_{11}^{[1]}}{\Delta t_{11}^{[2]}}.\frac{\Delta t_{12}^{[2]}}{\Delta t_{12}^{[1]}}.\frac{\Delta t_{32}^{[1]}}{\Delta t_{32}^{[2]}}.\frac{\Delta t_{31}^{[2]}}{\Delta t_{31}^{[1]}})\neq 0,$ $\displaystyle\textrm{Rx}_{2}:$ $\displaystyle r_{22}^{2}.f(\Delta t_{22}^{[1]},\Delta t_{22}^{[2]})-r_{21}^{2}.f(\Delta t_{21}^{[1]},\Delta t_{21}^{[2]})+r_{31}^{2}.f(\Delta t_{31}^{[1]},\Delta t_{31}^{[2]})-r_{32}^{2}.f(\Delta t_{32}^{[1]},\Delta t_{32}^{[2]})+6\ln(\frac{\Delta t_{22}^{[1]}}{\Delta t_{22}^{[2]}}.\frac{\Delta t_{21}^{[2]}}{\Delta t_{21}^{[1]}}.\frac{\Delta t_{31}^{[1]}}{\Delta t_{31}^{[2]}}.\frac{\Delta t_{32}^{[2]}}{\Delta t_{32}^{[1]}})\neq 0,$ $\displaystyle\textrm{Rx}_{3}:$ $\displaystyle r_{33}^{2}.f(\Delta t_{33}^{[1]},\Delta t_{33}^{[2]})-r_{31}^{2}.f(\Delta t_{31}^{[1]},\Delta t_{31}^{[2]})+r_{21}^{2}.f(\Delta t_{21}^{[1]},\Delta t_{21}^{[2]})-r_{23}^{2}.f(\Delta t_{23}^{[1]},\Delta t_{23}^{[2]})+6\ln(\frac{\Delta t_{33}^{[1]}}{\Delta t_{33}^{[2]}}.\frac{\Delta t_{31}^{[2]}}{\Delta t_{31}^{[1]}}.\frac{\Delta t_{21}^{[1]}}{\Delta 
t_{21}^{[2]}}.\frac{\Delta t_{23}^{[2]}}{\Delta t_{23}^{[1]}})\neq 0.$ (11) Using (1), the IA condition in (6) simplifies to (10) and the independency conditions in (8) simplify to (11), for $\tilde{t}_{j}^{[l]}<t_{i}^{[l]}<T_{\textrm{s}},i,j\in\\{1,2,3\\}$, $l\in\\{1,2\\}$, where $f(\Delta t_{ij}^{[1]},\Delta t_{ij}^{[2]})=\frac{1}{D_{1}\Delta t_{ij}^{[1]}}-\frac{1}{D_{2}\Delta t_{ij}^{[2]}}$, $\Delta t_{ij}^{[l]}=t_{i}^{[l]}-\tilde{t}_{j}^{[l]}$, and $r_{ij}^{2}=||\bm{r}_{i}^{\textrm{Rx}}-\bm{r}_{j}^{\textrm{Tx}}||^{2},i,j\in\\{1,2,3\\}$, $l\in\\{1,2\\}$. The conditions in (10) and (11) form a feasible region for the $t_{i}^{[l]}$s and $\tilde{t}_{j}^{[l]}$s, denoted by the set $\mathcal{T}_{\textrm{nr}}$, where “nr” stands for “no reaction”. In Lemma 1, we simplify these conditions for a special case of the releasing and sampling times. ###### Remark 1. It can be seen that the conditions do not depend on the flow velocity of the medium. ###### Lemma 1. For $t_{i}^{[1]}=t_{i}^{[2]}=t_{i},\tilde{t}_{j}^{[1]}=\tilde{t}_{j}^{[2]}=\tilde{t}_{j}$, $i,j\in\\{1,2,3\\}$, the IA condition reduces to: $\displaystyle\frac{r_{13}^{2}}{t_{1}-\tilde{t}_{3}}-\frac{r_{12}^{2}}{t_{1}-\tilde{t}_{2}}+\frac{r_{21}^{2}}{t_{2}-\tilde{t}_{1}}$ $\displaystyle\quad-\frac{r_{23}^{2}}{t_{2}-\tilde{t}_{3}}+\frac{r_{32}^{2}}{t_{3}-\tilde{t}_{2}}-\frac{r_{31}^{2}}{t_{3}-\tilde{t}_{1}}=0,$ (12) and the independency conditions reduce to $\displaystyle\textrm{Rx}_{1}:\frac{r_{11}^{2}}{t_{1}-\tilde{t}_{1}}-\frac{r_{12}^{2}}{t_{1}-\tilde{t}_{2}}+\frac{r_{32}^{2}}{t_{3}-\tilde{t}_{2}}-\frac{r_{31}^{2}}{t_{3}-\tilde{t}_{1}}\neq 0,$ (13) $\displaystyle\textrm{Rx}_{2}:\frac{r_{22}^{2}}{t_{2}-\tilde{t}_{2}}-\frac{r_{21}^{2}}{t_{2}-\tilde{t}_{1}}+\frac{r_{31}^{2}}{t_{3}-\tilde{t}_{1}}-\frac{r_{32}^{2}}{t_{3}-\tilde{t}_{2}}\neq 0,$
$\displaystyle\textrm{Rx}_{3}:\frac{r_{33}^{2}}{t_{3}-\tilde{t}_{3}}-\frac{r_{31}^{2}}{t_{3}-\tilde{t}_{1}}+\frac{r_{21}^{2}}{t_{2}-\tilde{t}_{1}}-\frac{r_{23}^{2}}{t_{2}-\tilde{t}_{3}}\neq 0.$ The conditions in (12) and (13) form a feasible region for the $t_{i}$s and $\tilde{t}_{j}$s, denoted by the set $\mathcal{T}_{\textrm{nr,s}}$, where “s” stands for “special case”. ###### Proof. The proof is straightforward from (10) and (11). ∎ Using Reaction in the IA scheme: To cancel the aligned interference in the proposed IA scheme and reduce the signal-dependent noise, we exploit molecular reaction. Consider the following reactions among the molecules of types 1 and 2 ($\textrm{M}_{1}$ and $\textrm{M}_{2}$) at $\textrm{Rx}_{i}$: $\displaystyle\textrm{Rx}_{i}:c_{i}\textrm{M}_{1}+\textrm{M}_{2}+\textrm{E}_{i}\rightarrow\textrm{products},$ (14) for $i\in\\{1,2,3\\}$, where $\textrm{E}_{i}$ is an enzyme released from $\textrm{Rx}_{i}$, which enables $\textrm{M}_{1}$ and $\textrm{M}_{2}$ to react with coefficient $c_{i}$ at a high reaction rate. The products are not detectable by the receivers. Further, we assume that the sampling times of different molecule types at each receiver are the same, i.e., $t_{i}^{[1]}=t_{i}^{[2]}=t_{i}$, and enzymes are released at the sampling time $t_{i}$ from each receiver. According to (14), to cancel the interference signals at the receivers, we require that the ratio of the interference signal of $\textrm{M}_{1}$ to the interference signal of $\textrm{M}_{2}$ at $\textrm{Rx}_{i}$ be equal to $c_{i}$. Hence, using the signal vectors in (9), we should have $\displaystyle\textrm{Rx}_{1}:\frac{H_{12}^{[1]}\frac{H_{31}^{[1]}}{H_{32}^{[1]}}}{H_{12}^{[2]}\frac{H_{31}^{[2]}}{H_{32}^{[2]}}}=c_{1},~{}\textrm{Rx}_{2}:\frac{H_{21}^{[1]}}{H_{21}^{[2]}}=c_{2},~{}\textrm{Rx}_{3}:\frac{H_{31}^{[1]}}{H_{31}^{[2]}}=c_{3}.$ (15) We call the conditions in (15) the reaction conditions.
Hence, if it is feasible to choose the releasing/sampling times such that the reaction conditions hold along with the IA and independency conditions in (6) and (8), the aligned interference signals at the receivers can be canceled using reaction. Using (1), the reaction conditions in (15) simplify to (16), where $f(.,.)$ is defined in (10). Note that in the IA, independency, and reaction conditions in (10), (11), and (16), we should further have $t_{i}^{[1]}=t_{i}^{[2]}=t_{i}$. $\displaystyle\textrm{Rx}_{1}:$ $\displaystyle r_{12}^{2}.f(\Delta t_{12}^{[1]},\Delta t_{12}^{[2]})+r_{31}^{2}.f(\Delta t_{31}^{[1]},\Delta t_{31}^{[2]})-r_{32}^{2}.f(\Delta t_{32}^{[1]},\Delta t_{32}^{[2]})+6\ln(\frac{D_{1}}{D_{2}}\frac{\Delta t_{12}^{[1]}}{\Delta t_{12}^{[2]}}\frac{\Delta t_{31}^{[1]}}{\Delta t_{31}^{[2]}}\frac{\Delta t_{32}^{[2]}}{\Delta t_{32}^{[1]}})$ $\displaystyle+||\bm{\nu}||^{2}(\frac{\Delta t_{12}^{[1]}+\Delta t_{31}^{[1]}-\Delta t_{32}^{[1]}}{D_{1}}-\frac{\Delta t_{12}^{[2]}+\Delta t_{31}^{[2]}-\Delta t_{32}^{[2]}}{D_{2}})-2(\bm{r}_{1}^{\textrm{Rx}}-\bm{r}_{1}^{\textrm{Tx}}).\bm{\nu}(\frac{1}{D_{1}}-\frac{1}{D_{2}})=-4\ln{c_{1}},$ $\displaystyle\textrm{Rx}_{2}:$ $\displaystyle r_{21}^{2}.f(\Delta t_{21}^{[1]},\Delta t_{21}^{[2]})+6\ln(\frac{D_{1}}{D_{2}}\frac{\Delta t_{21}^{[1]}}{\Delta t_{21}^{[2]}})+||\bm{\nu}||^{2}(\frac{\Delta t_{21}^{[1]}}{D_{1}}-\frac{\Delta t_{21}^{[2]}}{D_{2}})-2(\bm{r}_{2}^{\textrm{Rx}}-\bm{r}_{1}^{\textrm{Tx}}).\bm{\nu}(\frac{1}{D_{1}}-\frac{1}{D_{2}})=-4\ln{c_{2}},$ $\displaystyle\textrm{Rx}_{3}:$ $\displaystyle r_{31}^{2}.f(\Delta t_{31}^{[1]},\Delta t_{31}^{[2]})+6\ln(\frac{D_{1}}{D_{2}}\frac{\Delta t_{31}^{[1]}}{\Delta t_{31}^{[2]}})+||\bm{\nu}||^{2}(\frac{\Delta t_{31}^{[1]}}{D_{1}}-\frac{\Delta t_{31}^{[2]}}{D_{2}})-2(\bm{r}_{3}^{\textrm{Rx}}-\bm{r}_{1}^{\textrm{Tx}}).\bm{\nu}(\frac{1}{D_{1}}-\frac{1}{D_{2}})=-4\ln{c_{3}}.$ (16) Two approaches are possible: (i) We may choose the releasing and/or sampling times such that
they satisfy the IA and independency conditions in (10) and (11), and then obtain $c_{1},c_{2},$ and $c_{3}$ from (16). However, we may not find reactions among $\textrm{M}_{1}$ and $\textrm{M}_{2}$ with these coefficients. (ii) We may choose $c_{1},c_{2}$, and $c_{3}$ arbitrarily and then choose the releasing and/or sampling times such that they satisfy the IA, independency, and reaction conditions in (10), (11), and (16). These conditions form a feasible region for the $t_{i}^{[l]}$s and $\tilde{t}_{j}^{[l]}$s, denoted by the set $\mathcal{T}_{\textrm{r}}$, where “r” stands for “reaction”. We take the second approach and, for a special case, simplify the IA, independency, and reaction conditions in the following lemma. ###### Lemma 2. For $t_{i}^{[1]}=t_{i}^{[2]}=t_{i},\tilde{t}_{j}^{[1]}=\tilde{t}_{j}^{[2]}=\tilde{t}_{j}$, $i,j\in\\{1,2,3\\}$, $\bm{\nu}=0$, and $c_{1}=c_{2}=c_{3}=c$, the IA and reaction conditions reduce to $\displaystyle\Delta\tilde{t}_{12}(1-\frac{r_{12}^{2}}{r_{32}^{2}})-\Delta\tilde{t}_{13}(1-\frac{r_{13}^{2}}{r_{23}^{2}})=s.(\frac{r_{12}^{2}}{r_{32}^{2}}.r_{31}^{2}-\frac{r_{13}^{2}}{r_{23}^{2}}.r_{21}^{2}),$ (17) and the independency conditions reduce to $\displaystyle\Delta\tilde{t}_{12}\neq s.\frac{r_{22}^{2}.r_{31}^{2}-r_{21}^{2}.r_{32}^{2}}{r_{32}^{2}-r_{22}^{2}},$ $\displaystyle\Delta\tilde{t}_{13}\notin\big{\\{}s.\frac{r_{13}^{2}.r_{21}^{2}-r_{11}^{2}.r_{23}^{2}}{r_{23}^{2}-r_{13}^{2}},s.\frac{r_{33}^{2}.r_{21}^{2}-r_{31}^{2}.r_{23}^{2}}{r_{23}^{2}-r_{33}^{2}}\big{\\}},$ (18) where $\Delta\tilde{t}_{1j}=\tilde{t}_{1}-\tilde{t}_{j},j=2,3$, and $s=-\frac{\frac{1}{D_{1}}-\frac{1}{D_{2}}}{4\ln{c}-6\ln{\frac{D_{2}}{D_{1}}}}$.
Further, we require that $\Delta t_{ij}>0,i,j\in\\{1,2,3\\}$, which results in $s>0$ and $\displaystyle\begin{cases}\Delta\tilde{t}_{13}>-s\min\\{r_{21}^{2},r_{31}^{2}\\},\qquad&\textrm{if}~{}r_{13}>r_{23}\\\ -s\min\\{r_{21}^{2},r_{31}^{2}\\}<\Delta\tilde{t}_{13}<-\frac{s.r_{21}^{2}}{1-\frac{r_{23}^{2}}{r_{13}^{2}}},~{}&\textrm{if}~{}r_{13}<r_{23}\end{cases}.$ $\displaystyle\Delta\tilde{t}_{12}>-s\min\\{r_{21}^{2},r_{31}^{2}\\}.$ (19) The conditions in (17)-(19) form a feasible region for $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$, denoted by $\mathcal{T}_{\textrm{r,s}}$. After choosing $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$ in the feasible region, we choose one of the releasing times arbitrarily and obtain the other two releasing times from the values of $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$. Then, $t_{1}$, $t_{2}$, and $t_{3}$ can be obtained from $\tilde{t}_{1}$ and $\tilde{t}_{3}$ as follows: $\displaystyle t_{1}=\frac{r_{13}^{2}}{r_{23}^{2}}\tilde{t}_{1}+(1-\frac{r_{13}^{2}}{r_{23}^{2}})\tilde{t}_{3}+s.\frac{r_{13}^{2}}{r_{23}^{2}}.r_{21}^{2},$ $\displaystyle t_{2}=\tilde{t}_{1}+s.r_{21}^{2},\quad t_{3}=\tilde{t}_{1}+s.r_{31}^{2}.$ (20) ###### Proof. The proof is provided in Appendix A. ∎ ###### Remark 2. Without loss of generality, we assume $D_{1}>D_{2}$. Hence, for the special case of the times in Lemma 2, we should choose molecules that react with $c>(\frac{D_{2}}{D_{1}})^{\frac{3}{2}}$ to guarantee $s>0$. ## IV Optimum Sampling and Releasing Times To obtain the optimum sampling and releasing times, we minimize the total error probability of the system over the feasible region of the times.
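To make the closed-form relations concrete: under the assumptions of Lemma 2, once $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$ are fixed, the remaining times follow directly from the definition of $s$ and from (20). Below is a minimal numeric sketch; the diffusion coefficients and squared distances used are purely illustrative assumptions, not the paper's simulation values.

```python
import math

def compute_times(D1, D2, c, r21_sq, r31_sq, r13_sq, r23_sq, dt12, dt13, tt1=0.0):
    """Given dt12 = t~_1 - t~_2 and dt13 = t~_1 - t~_3 (chosen in the feasible
    region of Lemma 2), return s, the releasing times, and the sampling times
    via (20).  The r*_sq arguments are squared Tx-Rx distances."""
    # s as defined after (18); requires 4 ln c - 6 ln(D2/D1) != 0
    s = -(1.0 / D1 - 1.0 / D2) / (4.0 * math.log(c) - 6.0 * math.log(D2 / D1))
    tt2 = tt1 - dt12
    tt3 = tt1 - dt13
    t2 = tt1 + s * r21_sq  # (20): t_2 = t~_1 + s r_21^2
    t3 = tt1 + s * r31_sq  # (20): t_3 = t~_1 + s r_31^2
    ratio = r13_sq / r23_sq
    t1 = ratio * tt1 + (1.0 - ratio) * tt3 + s * ratio * r21_sq
    return s, (tt1, tt2, tt3), (t1, t2, t3)
```

With $D_{1}>D_{2}$ and $c>(D_{2}/D_{1})^{3/2}$, as required by Remark 2, the returned $s$ is positive, so every sampling time lies after the corresponding releasing time.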
Using the error probability at $\textrm{Rx}_{i}$, denoted by $P_{\textrm{e},i}$, $i\in\\{1,2,3\\}$, the total error probability of the system is $P_{\textrm{e}}=\frac{1}{3}(P_{\textrm{e},1}+P_{\textrm{e},2}+P_{\textrm{e},3}).$ IA without reaction: For the general case of the sampling and releasing times, the times are obtained from the following optimization problem: $\displaystyle\bm{t}_{\textrm{nr}}=\arg\min_{\bm{t}\in\mathcal{T}_{\textrm{nr}}}P_{\textrm{e,nr}},$ (21) where $\bm{t}=[\tilde{t}_{1}^{[1]},\tilde{t}_{1}^{[2]},\tilde{t}_{2}^{[1]},\tilde{t}_{2}^{[2]},\tilde{t}_{3}^{[1]},\tilde{t}_{3}^{[2]},t_{1}^{[1]},t_{1}^{[2]},t_{2}^{[1]},t_{2}^{[2]},t_{3}^{[1]}$, $t_{3}^{[2]}]$, $P_{\textrm{e,nr}}$ is the total error probability in the IA scheme without reaction, and, as defined in Section III, $\mathcal{T}_{\textrm{nr}}$ is the feasible region of the times in the IA with no reaction. For the special case of the times when $t_{i}^{[l]}=t_{i}$ and $\tilde{t}_{j}^{[l]}=\tilde{t}_{j}$, $i,j\in\\{1,2,3\\}$, we should solve $\displaystyle\bm{t}_{\textrm{nr,s}}=\arg\min_{\bm{t}_{\textrm{s}}\in\mathcal{T}_{\textrm{nr,s}}}P_{\textrm{e,nr}},$ (22) where $\bm{t}_{\textrm{s}}=[\tilde{t}_{1},\tilde{t}_{2},\tilde{t}_{3},t_{1}$, $t_{2},t_{3}]$ and $\mathcal{T}_{\textrm{nr,s}}$ is characterized in Lemma 1. IA with reaction: Similar to the no-reaction case, for the general case of the times, we have $\displaystyle\bm{t}_{\textrm{r}}=\arg\min_{\bm{t}\in\mathcal{T}_{\textrm{r}}}P_{\textrm{e,r}},$ (23) where $P_{\textrm{e,r}}$ is the total error probability in the IA scheme with reaction, and, as defined in Section III, $\mathcal{T}_{\textrm{r}}$ is the feasible region of the times in the IA with reaction. For the special case of the times, according to Lemma 2, the feasible region is obtained for $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$.
Hence, we should solve the following optimization problem $\displaystyle[\Delta\tilde{t}_{12,\textrm{r,s}},\Delta\tilde{t}_{13,\textrm{r,s}}]=\arg\min_{[\Delta\tilde{t}_{12},\Delta\tilde{t}_{13}]\in\mathcal{T}_{\textrm{r,s}}}P_{\textrm{e,r}},$ (24) where $\mathcal{T}_{\textrm{r,s}}$ is characterized in Lemma 2. The values of the $t_{i}$s and $\tilde{t}_{j}$s can be obtained from $\Delta\tilde{t}_{12,\textrm{r,s}},\Delta\tilde{t}_{13,\textrm{r,s}}$, and are denoted by $\tilde{t}_{1,\textrm{r,s}}$, $\tilde{t}_{2,\textrm{r,s}}$, $\tilde{t}_{3,\textrm{r,s}}$, $t_{1,\textrm{r,s}}$, $t_{2,\textrm{r,s}}$, and $t_{3,\textrm{r,s}}$. The optimum sampling times in each case are obtained numerically in Section VI. We note that the values of the channel coefficients, and hence the error probability, depend on the $\Delta t_{ij}^{[l]}$s. Hence, if we shift the releasing and sampling times by the same amount, the error probability does not change, and thus the optimum times are not unique. We choose the times such that all are positive and the smallest is zero. ## V Error Performance Analysis of the Proposed IA Scheme In the proposed IA scheme, according to (9), the mean signal vector at $\textrm{Rx}_{i}$, assuming no reaction, is $\displaystyle\bm{\mu}_{i}=\tilde{\bm{H}}_{i}N_{i}+\tilde{\bm{H}}_{\textrm{I},i}N_{\textrm{I},i},\qquad i\in\\{1,2,3\\},$ (25) where $N_{\textrm{I},1}=\frac{N_{2}}{A}+\frac{N_{3}}{B}$, $N_{\textrm{I},2}=\frac{N_{1}}{2}+\frac{N_{3}}{B}$, and $N_{\textrm{I},3}=\frac{N_{1}}{2}+\frac{N_{2}}{A}$; $A$ and $B$ are defined in (9). Further, $\tilde{\bm{H}}_{i}=[\tilde{H}_{i}^{[1]},\tilde{H}_{i}^{[2]}]^{\textrm{T}}$ and $\tilde{\bm{H}}_{\textrm{I},i}=[\tilde{H}_{\textrm{I},i}^{[1]},\tilde{H}_{\textrm{I},i}^{[2]}]^{\textrm{T}}$ can be obtained from the channel coefficients using (9). IA without reaction: The message of $\textrm{Tx}_{i}$ ($M_{i}$) can be obtained from the number of released molecules, $N_{i}$.
Using (25), $N_{i}$ can be obtained at $\textrm{Rx}_{i}$ from the mean number of received molecules by zero-forcing as $N_{i}=\frac{\tilde{H}_{\textrm{I},i}^{[2]}\mu_{i}^{[1]}-\tilde{H}_{\textrm{I},i}^{[1]}\mu_{i}^{[2]}}{\tilde{H}_{i}^{[1]}\tilde{H}_{\textrm{I},i}^{[2]}-\tilde{H}_{i}^{[2]}\tilde{H}_{\textrm{I},i}^{[1]}}$. Due to the counting noise, the number of received molecules of type $l$, $Y_{i}^{[l]}$, has a Poisson distribution with mean $\mu_{i}^{[l]}$. Using the noisy observations, we obtain a random variable $\tilde{N}_{i}=\frac{\tilde{H}_{\textrm{I},i}^{[2]}Y_{i}^{[1]}-\tilde{H}_{\textrm{I},i}^{[1]}Y_{i}^{[2]}}{\tilde{H}_{i}^{[1]}\tilde{H}_{\textrm{I},i}^{[2]}-\tilde{H}_{i}^{[2]}\tilde{H}_{\textrm{I},i}^{[1]}}$, with mean $N_{i}$, whose distribution can be obtained from the distribution of $Y_{i}^{[1]}$ and $Y_{i}^{[2]}$. We use the maximum a posteriori (MAP) decision rule on $\tilde{N}_{i}$ to obtain $M_{i}$: $\displaystyle\mathrm{P}(\tilde{N}_{i}=\tilde{n}_{i}|M_{i}=1)\overset{M_{i}=1}{\underset{M_{i}=0}{\gtrless}}\mathrm{P}(\tilde{N}_{i}=\tilde{n}_{i}|M_{i}=0).$ (26) For $\textrm{Rx}_{1}$, the above equation results in $\displaystyle\sum_{m_{2},m_{3}\in\\{0,1\\}}\mathrm{P}(\tilde{n}_{1}|M_{1}=1,M_{2}=m_{2},M_{3}=m_{3})\overset{M_{1}=1}{\underset{M_{1}=0}{\gtrless}}$ $\displaystyle\sum_{m_{2},m_{3}\in\\{0,1\\}}\mathrm{P}(\tilde{n}_{1}|M_{1}=0,M_{2}=m_{2},M_{3}=m_{3}).$ (27) To obtain the MAP decision rule, we need to obtain the distribution of $\tilde{N}_{i}$ conditioned on the messages of the transmitters. $\tilde{N}_{i}$ is a linear combination of two Poisson random variables $Y_{i}^{[1]}$ and $Y_{i}^{[2]}$, i.e., $\tilde{N}_{i}=a_{i}Y_{i}^{[1]}+b_{i}Y_{i}^{[2]}$, where $a_{i}=\frac{\tilde{H}_{\textrm{I},i}^{[2]}}{\tilde{H}_{i}^{[1]}\tilde{H}_{\textrm{I},i}^{[2]}-\tilde{H}_{i}^{[2]}\tilde{H}_{\textrm{I},i}^{[1]}}$ and $b_{i}=\frac{-\tilde{H}_{\textrm{I},i}^{[1]}}{\tilde{H}_{i}^{[1]}\tilde{H}_{\textrm{I},i}^{[2]}-\tilde{H}_{i}^{[2]}\tilde{H}_{\textrm{I},i}^{[1]}}$.
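The zero-forcing combination above is a short computation; a minimal sketch forming $\tilde{N}_{i}=a_{i}Y_{i}^{[1]}+b_{i}Y_{i}^{[2]}$ is given below. The channel gains in the usage are illustrative assumptions, not values from the paper's channel model.

```python
def zero_force(H1, H2, HI1, HI2, y1, y2):
    """Zero-forcing estimate of N_i from the two molecule-type observations:
    the coefficients a = HI2/det and b = -HI1/det are chosen so that the
    interference term N_I,i cancels exactly in a*y1 + b*y2."""
    det = H1 * HI2 - H2 * HI1  # denominator; must be nonzero (independency)
    a = HI2 / det
    b = -HI1 / det
    return a * y1 + b * y2
```

For example, with desired gain $(\tilde{H}^{[1]},\tilde{H}^{[2]})=(2,1)$, interference gain $(3,4)$, $N_{i}=10$ and $N_{\textrm{I},i}=5$, the noiseless observations are $(35,30)$ and the estimate recovers $10$ exactly.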
This yields a very complex problem. For simplicity, we apply a Gaussian approximation to the Poisson distribution of $Y_{i}^{[l]}$, i.e., ${Y_{i}^{[l]}}_{|m_{1},m_{2},m_{3}}\sim\mathcal{N}(\mu_{i}^{[l]},\sqrt{\mu_{i}^{[l]}})$, which is a good approximation if the number of released molecules is large. Since different molecule types do not interfere with each other, $Y_{i}^{[1]}$ and $Y_{i}^{[2]}$ are independent conditioned on the messages of the transmitters. Hence, $\tilde{N}_{i}$ has a Gaussian distribution conditioned on $M_{1}=m_{1}$, $M_{2}=m_{2}$, and $M_{3}=m_{3}$ with mean $\tilde{\mu}_{i}(m_{1},m_{2},m_{3})=a_{i}\mu_{i}^{[1]}+b_{i}\mu_{i}^{[2]}$ and variance $\tilde{\sigma}_{i}^{2}(m_{1},m_{2},m_{3})=a_{i}^{2}\mu_{i}^{[1]}+b_{i}^{2}\mu_{i}^{[2]}$. Therefore, the MAP decision rule in (27) becomes $\displaystyle\sum_{m_{2},m_{3}\in\\{0,1\\}}\big{[}\frac{1}{\sqrt{2\pi\tilde{\sigma}_{i}^{2}(1,m_{2},m_{3})}}e^{-\frac{(\tilde{n}_{1}-\tilde{\mu}_{i}(1,m_{2},m_{3}))^{2}}{2\tilde{\sigma}_{i}^{2}(1,m_{2},m_{3})}}$ (28) $\displaystyle\qquad\qquad-\frac{1}{\sqrt{2\pi\tilde{\sigma}_{i}^{2}(0,m_{2},m_{3})}}e^{-\frac{(\tilde{n}_{1}-\tilde{\mu}_{i}(0,m_{2},m_{3}))^{2}}{2\tilde{\sigma}_{i}^{2}(0,m_{2},m_{3})}}\big{]}\overset{M_{1}=1}{\underset{M_{1}=0}{\gtrless}}0.$ However, a closed form of this decision rule cannot be obtained, and we evaluate it numerically in Section VI. Since the MAP decision rule does not reduce to a simple form, we cannot obtain a closed-form expression for the error probability of this scheme either, and we obtain the error probability using simulation in Section VI.
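Because (28) admits no closed form, the decision can be evaluated numerically by comparing the two Gaussian-mixture likelihoods at the observed value. The sketch below assumes the conditional means and variances are supplied as dictionaries keyed by the interferers' messages; all numeric values in the usage are illustrative assumptions.

```python
import math

def gauss_pdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-((x - mu) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def map_decide(n_tilde, mu, var):
    """Numeric form of (28): mu[m1][(m2, m3)] and var[m1][(m2, m3)] are the
    conditional mean and variance of the zero-forced observation for each
    message combination; decide M1 by comparing the two mixture likelihoods."""
    lik = [sum(gauss_pdf(n_tilde, mu[m1][k], var[m1][k]) for k in mu[m1])
           for m1 in (0, 1)]
    return 1 if lik[1] > lik[0] else 0
```

With well-separated conditional means, the rule behaves like a threshold detector; the mixture form matters when the eight conditional distributions overlap.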
IA with reaction: For the proposed IA scheme with perfect reaction, the average signal vector at $\textrm{Rx}_{i}$ after reaction is given by [5] as $\displaystyle\bm{\mu}_{\textrm{r},i}=[\mu_{\textrm{r},i}^{[1]},\mu_{\textrm{r},i}^{[2]}]^{\textrm{T}}=\begin{cases}\big{[}(\tilde{H}_{i}^{[1]}-c_{i}\tilde{H}_{i}^{[2]})N_{i},0\big{]}^{\textrm{T}},&\textrm{if}~{}\frac{\tilde{H}_{i}^{[1]}}{\tilde{H}_{i}^{[2]}}>c_{i}\\\ \big{[}0,(\tilde{H}_{i}^{[2]}-\frac{1}{c_{i}}\tilde{H}_{i}^{[1]})N_{i}\big{]}^{\textrm{T}},&\textrm{if}~{}\frac{\tilde{H}_{i}^{[1]}}{\tilde{H}_{i}^{[2]}}<c_{i}\end{cases}.$ Again, the number of received molecules of type $l$ after reaction, $Y_{\textrm{r},i}^{[l]}$, has a Poisson distribution with mean $\mu_{\textrm{r},i}^{[l]}$ [5]. If $\frac{\tilde{H}_{i}^{[1]}}{\tilde{H}_{i}^{[2]}}>c_{i}$, we have $y_{\textrm{r},i}^{[2]}=0$, and hence we use $y_{\textrm{r},i}^{[1]}$ to detect $M_{i}$ using the MAP decision rule, $\mathrm{P}(y_{\textrm{r},i}^{[1]}|M_{i}=1)\overset{M_{i}=1}{\underset{M_{i}=0}{\gtrless}}\mathrm{P}(y_{\textrm{r},i}^{[1]}|M_{i}=0),$ which results in a simple threshold rule as $y_{\textrm{r},i}^{[1]}\overset{M_{i}=1}{\underset{M_{i}=0}{\gtrless}}\gamma_{i}^{[1]}$, where $\gamma_{i}^{[1]}=\frac{(\tilde{H}_{i}^{[1]}-c_{i}\tilde{H}_{i}^{[2]})(\zeta_{1}-\zeta_{0})}{\ln(\frac{\zeta_{1}}{\zeta_{0}})}$.
Hence, for $\frac{\tilde{H}_{i}^{[1]}}{\tilde{H}_{i}^{[2]}}>c_{i}$, the error probability at $\textrm{Rx}_{i}$, $i\in\\{1,2,3\\}$, is obtained as $\displaystyle P_{\textrm{e},i}$ $\displaystyle=\frac{1}{2}(P\\{Y_{\textrm{r},i}^{[1]}>\gamma_{i}^{[1]}|M_{i}=0\\}+P\\{Y_{\textrm{r},i}^{[1]}<\gamma_{i}^{[1]}|M_{i}=1\\})$ $\displaystyle=\frac{1}{2}\big{[}1-\sum_{y_{\textrm{r},i}^{[1]}=0}^{\lfloor\gamma_{i}^{[1]}\rfloor}\big{(}\frac{((\tilde{H}_{i}^{[1]}-c_{i}\tilde{H}_{i}^{[2]})\zeta_{0})^{y_{\textrm{r},i}^{[1]}}e^{-(\tilde{H}_{i}^{[1]}-c_{i}\tilde{H}_{i}^{[2]})\zeta_{0}}}{y_{\textrm{r},i}^{[1]}!}$ $\displaystyle\quad-\frac{((\tilde{H}_{i}^{[1]}-c_{i}\tilde{H}_{i}^{[2]})\zeta_{1})^{y_{\textrm{r},i}^{[1]}}e^{-(\tilde{H}_{i}^{[1]}-c_{i}\tilde{H}_{i}^{[2]})\zeta_{1}}}{y_{\textrm{r},i}^{[1]}!}\big{)}\big{]}.$ (29) The error probability for the case of $\frac{\tilde{H}_{i}^{[1]}}{\tilde{H}_{i}^{[2]}}<c_{i}$ can be obtained similarly. ## VI Simulation and Numerical Results We evaluate the performance of the proposed IA scheme in a 3-user IFC. We consider the parameters in Table I.
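The error probability in (29) involves only Poisson sums and can be evaluated directly. The sketch below implements (29) for the case $\tilde{H}_{i}^{[1]}/\tilde{H}_{i}^{[2]}>c_{i}$; the channel gains and molecule counts used are illustrative assumptions chosen so the sums stay small, not the paper's parameters.

```python
import math

def poisson_cdf(k, lam):
    # sum_{y=0}^{k} lam^y e^{-lam} / y!, computed in log space for stability
    return sum(math.exp(-lam + y * math.log(lam) - math.lgamma(y + 1))
               for y in range(k + 1))

def error_prob_reaction(H1, H2, c, zeta0, zeta1):
    """Error probability (29) when H1/H2 > c: after reaction only molecule
    type 1 survives, with effective gain Heff = H1 - c*H2."""
    Heff = H1 - c * H2
    gamma = Heff * (zeta1 - zeta0) / math.log(zeta1 / zeta0)  # MAP threshold
    k = math.floor(gamma)
    # P_e = 1/2 [ P(Y > gamma | M=0) + P(Y <= gamma | M=1) ]
    return 0.5 * (1.0 - poisson_cdf(k, Heff * zeta0) + poisson_cdf(k, Heff * zeta1))
```

The two CDF terms correspond to the false-alarm and miss events at the threshold $\gamma_{i}^{[1]}$.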
TABLE I: Simulation and numerical analysis parameters

Parameter | Symbol | Value
---|---|---
Location of Tx1 | $r_{1}^{\textrm{Tx}}$ | $(0,0,0)~{}\mu\textrm{m}$
Location of Tx2 | $r_{2}^{\textrm{Tx}}$ | $(0,20,10)~{}\mu\textrm{m}$
Location of Tx3 | $r_{3}^{\textrm{Tx}}$ | $(0,0,30)~{}\mu\textrm{m}$
Location of Rx1 | $r_{1}^{\textrm{Rx}}$ | $(0,150,0)~{}\mu\textrm{m}$
Location of Rx2 | $r_{2}^{\textrm{Rx}}$ | $(0,200,10)~{}\mu\textrm{m}$
Location of Rx3 | $r_{3}^{\textrm{Rx}}$ | $(0,300,20)~{}\mu\textrm{m}$
Diffusion coefficient of M1 | $D_{1}$ | $10^{-8}~{}\textrm{m}^{2}/\textrm{s}$
Diffusion coefficient of M2 | $D_{2}$ | $5\times 10^{-8}~{}\textrm{m}^{2}/\textrm{s}$
Reaction coefficients at Rx1, Rx2, and Rx3 | $c_{1}$, $c_{2}$, $c_{3}$ | $2$
Receiver radius | $r_{\textrm{R}}$ | $15~{}\mu\textrm{m}$
Medium flow velocity | $\bm{\nu}$ | $\bm{0}$

Figure 2: Feasible region of $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$ in the proposed IA scheme with reaction.

Figure 3: The error probability versus $\Delta t_{12}$ for the proposed IA scheme with reaction.

Figure 4: The total error probability versus $\zeta_{0}$ for the proposed IA scheme with and without reaction.

The feasible region in the proposed IA scheme with reaction for $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$ for the special case of the releasing and sampling times ($t_{i}^{[1]}=t_{i}^{[2]}=t_{i},\tilde{t}_{j}^{[1]}=\tilde{t}_{j}^{[2]}=\tilde{t}_{j}$, $i,j\in\\{1,2,3\\}$), i.e., $\mathcal{T}_{\textrm{r,s}}$, based on Lemma 2, is shown in Fig. 2. It is a line segment between the points $(-0.303,-0.466)$ and $(0.303,0.6414)$, from which the three points $(-0.0538,0.0005)$, $(-0.0492,0.0092)$, and $(-0.0434,0.0201)$ are excluded. The error probability at the receivers and the total error probability of the system versus $\Delta\tilde{t}_{12}$ in the feasible region are shown in Fig.
3. (Note that since the sampling times are not necessarily chosen at the peak times of the received concentrations, $P_{\textrm{e}}$ does not go below $0.0012$, while $P_{\textrm{e},1}$ is near $10^{-14}$ for some sampling and releasing times. However, if we do not use IA, for 3 users with 2 molecule types, two transmitters must share one molecule type; their error probability becomes near $0.5$ because of the high co-channel interference, and hence the total error probability of the system will be $P_{\textrm{e}}\approx 0.5$, which is not acceptable.) Here, we assume $\zeta_{0}=\frac{\zeta_{1}}{2}=2\times 10^{6}$. It can be seen that the optimum value of $\Delta\tilde{t}_{12}$ which minimizes the total error probability is $\Delta\tilde{t}_{12,\textrm{r,s}}=-0.2323$, and using (17), we have $\Delta\tilde{t}_{13,\textrm{r,s}}=-0.3317$. Now, assuming $\tilde{t}_{1,\textrm{r,s}}=0$, we obtain the other releasing times ($\tilde{t}_{2,\textrm{r,s}}$ and $\tilde{t}_{3,\textrm{r,s}}$), and then from (20), we obtain the $t_{i,\textrm{r,s}}$s, which are provided in Table II. (Since $\Delta\tilde{t}_{12,\textrm{r,s}}$ and $\Delta\tilde{t}_{13,\textrm{r,s}}$ are both negative, assuming $\tilde{t}_{1,\textrm{r,s}}=0$ ensures that all times are positive with the smallest equal to zero.) For the general case of the times in the IA scheme with reaction, the optimum times are obtained numerically from (23) and provided in Table II. The optimum sampling and releasing times in the IA scheme without reaction are also provided in Table II.
The optimum times are obtained numerically from (22) for the special case, and from (21) for the general case.

TABLE II: Optimum releasing and sampling times

| $\tilde{t}_{1}^{[1]}$ | $\tilde{t}_{1}^{[2]}$ | $\tilde{t}_{2}^{[1]}$ | $\tilde{t}_{2}^{[2]}$ | $\tilde{t}_{3}^{[1]}$ | $\tilde{t}_{3}^{[2]}$ | $t_{1}^{[1]}$ | $t_{1}^{[2]}$ | $t_{2}^{[1]}$ | $t_{2}^{[2]}$ | $t_{3}^{[1]}$ | $t_{3}^{[2]}$
---|---|---|---|---|---|---|---|---|---|---|---|---
IA with reaction (Special case) | $0$ | $0$ | $0.232$ | $0.232$ | $0.332$ | $0.332$ | $0.410$ | $0.410$ | $0.466$ | $0.466$ | $1.051$ | $1.051$
IA with reaction (General case) | $0.012$ | $0$ | $0.335$ | $0.396$ | $0.329$ | $0.323$ | $0.411$ | $0.411$ | $0.471$ | $0.471$ | $1.055$ | $1.055$
IA without reaction (Special case) | $0.592$ | $0.592$ | $0$ | $0$ | $0.623$ | $0.623$ | $0.674$ | $0.674$ | $0.666$ | $0.666$ | $1.758$ | $1.758$
IA without reaction (General case) | $1.262$ | $0.004$ | $0$ | $1.079$ | $1.102$ | $1.555$ | $1.636$ | $1.575$ | $1.392$ | $2.385$ | $1.877$ | $1.814$

The total error probabilities of the system using the IA scheme with and without reaction for the optimum times in the general and special cases are shown in Fig. 4 versus the number of released molecules for bit 0 (i.e., $\zeta_{0}$). Here, we assume $\zeta_{1}=2\zeta_{0}$. For the IA with reaction, we obtain the error probability using the analytical result in (29) and simulation, and for the IA without reaction, we obtain the decision rule numerically from (28) and obtain the error probability using simulation. It is seen that using reaction in the IA scheme improves the total error probability significantly. Further, for the IA scheme with reaction, the simulations and the analysis give the same result. Figure 5: The total error probability versus $\zeta_{0}$ for the proposed IA scheme with reaction in the presence of ISI.
Figure 6: The total error probability versus $\zeta_{0}$ for the proposed IA scheme with reaction in the presence of the environment noise. Fig. 5 investigates the ISI effect in the system. In this figure, the total error probability of the system using IA with reaction, assuming one-slot memory in the channels, is shown versus $\zeta_{0}$, along with the error probability in the no-ISI case. We obtain the error probabilities for the optimum decision rule, the sub-optimum decision rule treating ISI as noise, and the adaptive decision rule using the previously detected bits introduced in [16]. Here, we assume $T_{\textrm{s}}=10~{}\textrm{s}$. It can be seen that the error performance degrades in the presence of ISI. However, we achieve acceptable performance using the adaptive and optimum decision rules. The effect of the environment noise on the proposed IA scheme with reaction is shown in Fig. 6 for $\mu_{\textrm{n},1}=14,17,21$ and $\mu_{\textrm{n},2}=5$, which shows that the proposed scheme performs well in the presence of noise. It should be noted that, due to the perfect reaction between the molecules of type 1 and 2 with coefficient $c$ at the receivers, the average noise concentration for molecule type 1 is $\textrm{max}\\{0,\mu_{\textrm{n},1}-c\mu_{\textrm{n},2}\\}$, and the average noise concentration for molecule type 2 is $\textrm{max}\\{0,\mu_{\textrm{n},2}-\mu_{\textrm{n},1}/c\\}$. Therefore, the error probability depends on the value of $\mu_{\textrm{n},1}-c\mu_{\textrm{n},2}$ and increases as $\mu_{\textrm{n},1}-c\mu_{\textrm{n},2}$ moves away from zero. ## VII Conclusion In this paper, we proposed an interference alignment (IA) scheme for a 3-user molecular interference channel (IFC) based on the choice of the releasing times of the molecules at the transmitters and the sampling times at the receivers, using two molecule types. We proposed using reaction in the IA scheme to cancel the aligned interference signal in the medium and reduce the signal-dependent noise.
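The residual noise means after perfect reaction described above are a direct transcription of the two max expressions; a minimal sketch:

```python
def effective_noise(mu_n1, mu_n2, c):
    """Mean residual noise after the perfect reaction c*M1 + M2 -> products:
    only the excess of one molecule type survives at the receiver."""
    return max(0.0, mu_n1 - c * mu_n2), max(0.0, mu_n2 - mu_n1 / c)
```

For the noise levels quoted above ($\mu_{\textrm{n},1}=14$, $\mu_{\textrm{n},2}=5$, $c=2$), only $14-2\cdot 5=4$ molecules of type 1 survive on average, and the type-2 noise is fully consumed.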
Further, we applied the asymptotic IA scheme from classic communications to a $K$-user molecular IFC. For the proposed IA scheme, we obtained the feasible region for the releasing/sampling times and simplified it for a special case. We investigated the error performance of the IFC using the proposed IA scheme with and without reaction and showed that the proposed IA scheme using reaction improves the performance significantly. ## References * [1] A. Gohari, M. Mirmohseni, and M. Nasiri-Kenari, “Information theory of molecular communication: Directions and challenges,” IEEE Trans. on Molecular, Biological and Multi-Scale Comm., vol. 2, no. 2, pp. 120–142, 2016. * [2] E. Dinc and O. B. Akan, “Theoretical limits on multiuser molecular communication in internet of nano-bio things,” IEEE Trans. on NanoBiosci., vol. 16, no. 4, pp. 266–270, 2017. * [3] M. Ş. Kuran and T. Tugcu, “Co-channel interference for communication via diffusion system in molecular communication,” in Int. Conf. on Bio-Inspired Models of Netw., Inf., and Computing Sys., pp. 199–212, Springer, 2011. * [4] R. Mosayebi, A. Gohari, M. Mirmohseni, and M. Nasiri-Kenari, “Type based sign modulation for molecular communication,” in Iran Workshop on Comm. and Inf. Theory (IWCIT), May 2016. * [5] M. Farahnak-Ghazani, G. Aminian, M. Mirmohseni, A. Gohari, and M. Nasiri-Kenari, “On medium chemical reaction in diffusion-based molecular communication: A two-way relaying example,” IEEE Trans. on Comm., vol. 67, pp. 1117–1132, 2018. * [6] A. Noel, K. C. Cheung, and R. Schober, “Improving receiver performance of diffusive molecular communication with enzymes,” IEEE Trans. on NanoBiosci., vol. 13, no. 1, pp. 31–43, 2014. * [7] M. Pierobon and I. F. Akyildiz, “Intersymbol and co-channel interference in diffusion-based molecular communication,” in IEEE Int. Conf. on Comm. (ICC), pp. 6126–6131, IEEE, 2012. * [8] V. R. Cadambe and S. A.
Jafar, “Interference alignment and degrees of freedom of the $k$-user interference channel,” IEEE Trans. on Inf. Theory, vol. 54, no. 8, pp. 3425–3441, 2008. * [9] S. A. Jafar, “Interference alignment: a new look at signal dimensions in a communication network,” Found. and Trends in Comm. and Inf. Theory, vol. 7, no. 1, pp. 1–134, 2011. * [10] L. Huang, L. Lin, F. Liu, and H. Yan, “Clock synchronization for mobile molecular communication systems,” IEEE Trans. on NanoBiosci., vol. 20, no. 4, pp. 406–415, 2020. * [11] V. Jamali, A. Ahmadzadeh, and R. Schober, “Symbol synchronization for diffusive molecular communication systems,” in IEEE Int. Conf. on Comm. (ICC), pp. 1–7, IEEE, 2017. * [12] Q. Li, “The clock-free asynchronous receiver design for molecular timing channels in diffusion-based molecular communications,” IEEE Trans. on NanoBiosci., vol. 18, no. 4, pp. 585–596, 2019. * [13] B. C. Akdeniz, M. Egan, and B. Q. Tang, “Equilibrium signaling: molecular communication robust to geometry uncertainties,” IEEE Trans. on Comm., 2020. * [14] A. Noel, K. C. Cheung, and R. Schober, “Diffusive molecular communication with disruptive flows,” in IEEE Int. Conf. on Comm. (ICC), pp. 3600–3606, IEEE, 2014. * [15] A. Loch, T. Nitsche, A. Kuehne, M. Hollick, J. Widmer, and A. Klein, “Practical interference alignment in the frequency domain for ofdm-based wireless access networks,” in Proc. of IEEE Int. Symp. on a World of Wireless, Mobile and Multimedia Netw., pp. 1–9, IEEE, 2014. * [16] R. Mosayebi, H. Arjmandi, A. Gohari, M. Nasiri-Kenari, and U. Mitra, “Receivers for diffusion-based molecular communication: Exploiting memory and sampling rate,” IEEE J. on Selected Areas in Comm., vol. 32, no. 12, pp. 2368–2380, 2014.
## Appendix A Proof of Lemma 2 For $t_{i}^{[1]}=t_{i}^{[2]}=t_{i},\tilde{t}_{j}^{[1]}=\tilde{t}_{j}^{[2]}=\tilde{t}_{j}$, $i,j\in\\{1,2,3\\}$, $\bm{\nu}=0$, and $c_{1}=c_{2}=c_{3}=c$, the reaction conditions in (16) simplify to $\displaystyle\textrm{Rx}_{1}:$ $\displaystyle\frac{r_{12}^{2}}{t_{1}-\tilde{t}_{2}}+\frac{r_{31}^{2}}{t_{3}-\tilde{t}_{1}}-\frac{r_{32}^{2}}{t_{3}-\tilde{t}_{2}}=\frac{1}{s},$ (30) $\displaystyle\textrm{Rx}_{2}:$ $\displaystyle\frac{r_{21}^{2}}{t_{2}-\tilde{t}_{1}}=\frac{1}{s},\quad\textrm{Rx}_{3}:\frac{r_{31}^{2}}{t_{3}-\tilde{t}_{1}}=\frac{1}{s},$ where $s=-\frac{\frac{1}{D_{1}}-\frac{1}{D_{2}}}{4\ln{c}+6\ln(\frac{D_{1}}{D_{2}})}$. Combining the IA condition in (12) and the reaction conditions in (30) results in $\displaystyle r_{12}^{2}.(t_{3}-\tilde{t}_{2})=r_{32}^{2}.(t_{1}-\tilde{t}_{2}),$ (31a) $\displaystyle r_{13}^{2}.(t_{2}-\tilde{t}_{3})=r_{23}^{2}.(t_{1}-\tilde{t}_{3}),$ (31b) $\displaystyle t_{2}-\tilde{t}_{1}=s.r_{21}^{2},\qquad t_{3}-\tilde{t}_{1}=s.r_{31}^{2}.$ (31c) From (31b)-(31c), $t_{1}$, $t_{2}$, and $t_{3}$ can be obtained from $\tilde{t}_{1}$ and $\tilde{t}_{3}$ as in (20), and substituting them into (31a) yields (17). Combining the independency conditions in (13) and the reaction conditions in (30) results in $\displaystyle\textrm{Rx}_{1}:t_{1}-\tilde{t}_{1}\neq s.r_{11}^{2},\textrm{Rx}_{2}:r_{32}^{2}.(t_{2}-\tilde{t}_{2})\neq r_{22}^{2}.(t_{3}-\tilde{t}_{2}),$ $\displaystyle\textrm{Rx}_{3}:r_{33}^{2}.(t_{2}-\tilde{t}_{3})\neq r_{23}^{2}.(t_{3}-\tilde{t}_{3}).$ (32) Now, substituting the $t_{i}$s from (20), we obtain (18). Further, we should have $\Delta t_{ij}=t_{i}-\tilde{t}_{j}>0$, $i,j\in\\{1,2,3\\}$.
From (20), the values of $\Delta t_{ij}$ are related to $\Delta\tilde{t}_{12}$ and $\Delta\tilde{t}_{13}$ as follows: $\displaystyle\Delta t_{11}=t_{1}-\tilde{t}_{1}=(\frac{r_{13}^{2}}{r_{23}^{2}}-1)\Delta\tilde{t}_{13}+s.\frac{r_{13}^{2}}{r_{23}^{2}}.r_{21}^{2},$ $\displaystyle\Delta t_{12}=t_{1}-\tilde{t}_{2}=\Delta\tilde{t}_{12}+(\frac{r_{13}^{2}}{r_{23}^{2}}-1)\Delta\tilde{t}_{13}+s.\frac{r_{13}^{2}}{r_{23}^{2}}.r_{21}^{2},$ $\displaystyle\Delta t_{13}=t_{1}-\tilde{t}_{3}=\frac{r_{13}^{2}}{r_{23}^{2}}\Delta\tilde{t}_{13}+s.\frac{r_{13}^{2}}{r_{23}^{2}}.r_{21}^{2},$ $\displaystyle\Delta t_{21}=t_{2}-\tilde{t}_{1}=s.r_{21}^{2},\Delta t_{22}=t_{2}-\tilde{t}_{2}=\Delta\tilde{t}_{12}+s.r_{21}^{2},$ $\displaystyle\Delta t_{23}=t_{2}-\tilde{t}_{3}=\Delta\tilde{t}_{13}+s.r_{21}^{2},$ $\displaystyle\Delta t_{31}=t_{3}-\tilde{t}_{1}=s.r_{31}^{2},\Delta t_{32}=t_{3}-\tilde{t}_{2}=\Delta\tilde{t}_{12}+s.r_{31}^{2},$ $\displaystyle\Delta t_{33}=t_{3}-\tilde{t}_{3}=\Delta\tilde{t}_{13}+s.r_{31}^{2}.$ (33) Hence, $\Delta t_{ij}>0$ and (33) result in (19).
# The hybrid number of a ploidy profile Katharina T. Huber and Liam J. Maher School of Computing Sciences, University of East Anglia, Norwich, UK ###### Abstract. Polyploidization, whereby an organism inherits multiple copies of the genome of its parents, is an important evolutionary event that has been observed in plants and animals. One way to study such events is in terms of the ploidy number of the species that make up a dataset of interest. It is therefore natural to ask: How much information about the evolutionary past of the set of species that form a dataset can be gleaned from the ploidy numbers of the species? To help answer this question, we introduce and study the novel concept of a ploidy profile, which allows us to formalize it in terms of a multiplicity vector indexed by the species that make up the dataset. Using the framework of a phylogenetic network, we present a closed formula for computing the hybrid number (i.e. the minimal number of polyploidization events required to explain a ploidy profile) of a large class of ploidy profiles. This formula relies on the construction of a certain phylogenetic network from the simplification sequence of a ploidy profile and the hybrid number of the ploidy profile with which this construction is initialized. Both of them can be computed easily provided the ploidy numbers that make up the ploidy profile are not too large. To help illustrate the applicability of our approach, we apply it to a simplified version of a publicly available Viola dataset. ###### 1991 Mathematics Subject Classification. 05C05; 92D15 ## 1. Introduction Datasets such as the Viola dataset considered in [17] arise when species inherit multiple sets of chromosomes from their parents.
Generally referred to as polyploidization, this can be due to whole genome duplication (also called autopolyploidization), as in the case of e.g. watermelons and bananas [26], or to obtaining an additional complete set of chromosomes via hybridization (also called allopolyploidization), as in the case of the frog genus Xenopus [20]. This poses the following intriguing question at the center of this paper: How much information about the evolutionary past of a set of species can be gleaned from the ploidy number (i.e. the number of complete chromosome sets in a genome) of the species? Invoking parsimony to capture the idea that polyploidization is a relatively rare evolutionary event, we rephrase this question as follows: What is the minimum number of polyploidization events necessary to explain a dataset’s observed ploidy profile? For a set $X$ of species that make up a dataset, we define such a profile to be the multiplicity vector $(m_{1},\ldots,m_{n})$ for $n=|X|$, indexed by the species in $X$, where, for each $1\leq i\leq n$, the ploidy number of species $i\in X$ is $m_{i}\geq 1$. As it turns out, an answer to this question is well known if the ploidy profile in question is presented in terms of a multi-labelled tree (see e.g. [8, 12, 16, 17]). Since it is, however, not always clear how to derive a biologically meaningful multi-labelled tree from the dataset in the first place [10], we focus here on ploidy profiles for which such a tree is not necessarily available. Due to the reticulate nature of the signal left behind by polyploidization [15, 19, 22], phylogenetic networks offer themselves as a natural framework to formalize and answer our question.
Although we present a definition of such structures (and all other concepts used in this section) below, from an intuition development point of view, it suffices to observe at this stage that a phylogenetic network can sometimes be thought of as a rooted directed bifurcating tree $T$ with a pre-given set $X$ as leaves to which additional arcs have been added via joining subdivision vertices of arcs of $T$ so that the following property holds. The resulting graph is a rooted directed acyclic graph with leaf set $X$ such that a subdivision vertex $v$ of $T$ either only has additional arcs starting at it or only additional arcs ending at it. For our purposes we only allow the case that $v$ has one additional outgoing arc. Subdivision vertices that have at least one additional incoming arc are called hybrid vertices and are assumed to represent reticulate evolutionary events such as polyploidization. If a hybrid vertex in a phylogenetic network $N$ also has overall degree three then $N$ is generally called a binary phylogenetic network. We refer the interested reader to Figure 1(i) for an example of a binary phylogenetic network on $X=\\{x_{1},x_{2},x_{3},x_{4}\\}$ that is obtained from the tree depicted in Figure 1(ii) and to [5, 9, 14, 23] for methodology and construction algorithms surrounding phylogenetic networks. Note that to be able to account for autopolyploidization, we deviate from the usual notion of a phylogenetic network by allowing our phylogenetic networks to have parallel arcs (but no loops) – see e.g. [6, 24] and the references therein for further results concerning such networks. By taking for every leaf $x$ of a binary phylogenetic network $N$ on some finite set $X$ the number of directed paths from the root of $N$ to $x$, every phylogenetic network induces a multiplicity vector $\vec{m}$ indexed by the elements in $X$. 
Saying that $N$ realizes $\vec{m}$ in this case (see Section 3 for an extension of this concept to phylogenetic networks) allows us to formalize our question as follows. Suppose $\vec{m}$ is a ploidy profile indexed by the elements of some finite set $X$. What can be said about the minimum number of hybrid vertices required by a binary phylogenetic network on $X$ to realize $\vec{m}$? We call this number, which is central to the paper, the hybrid number of $\vec{m}$ and denote it by $h(\vec{m})$. If a binary phylogenetic network $N$ has $h(\vec{m})$ hybrid vertices then we also say that $N$ attains $\vec{m}$ (see again Section 3 for an extension of this concept to phylogenetic networks). The interested reader is referred to [23] for an overview of the related concept of the hybrid number of a set of phylogenetic trees (i.e. leaf-labelled rooted trees without any vertices of indegree and outdegree one whose leaf set is a pre-given set). Before proceeding with presenting an example to help illustrate this question, we remark that multiplicity vectors realized by binary phylogenetic networks have been used in [4] to define a metric for a certain class of binary phylogenetic networks. Furthermore, the stronger assumption that the number of directed paths from every vertex of a binary phylogenetic network $N$ to every leaf of $N$ is known has led to the introduction of the concept of an ancestral profile for $N$ [21]. Returning to our question, consider the ploidy profile $\vec{m}=(12,6,6,5)$ indexed by $X=\\{x_{1},x_{2},x_{3},x_{4}\\}$ where the multiplicity of $x_{1}$ is $12$, that of $x_{2}$ and $x_{3}$ is 6, and that of $x_{4}$ is $5$.
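Before working through this example, it is worth isolating the counting argument that drives it: a binary phylogenetic network with $k$ hybrid vertices has at most $2^{k}$ directed paths from its root to any fixed leaf, so realizing a ploidy number $m$ requires at least $\lceil\log_{2}m\rceil$ hybrid vertices. A minimal sketch of this lower bound (the function name is ours, not the paper's):

```python
import math

def path_lower_bound(m):
    """A binary network with k hybrid vertices has at most 2**k
    directed paths from the root to any fixed leaf, so realizing
    ploidy number m needs at least ceil(log2(m)) hybrid vertices."""
    return math.ceil(math.log2(m))

print(path_lower_bound(5))   # 3, since 2**2 = 4 < 5 <= 2**3
print(path_lower_bound(12))  # 4
```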
Since no binary phylogenetic network on one leaf with two hybrid vertices can realize the ploidy profile $\vec{m}^{\prime}=(5)$ because it has at most $2^{2}=4$ directed paths from the root to the leaf, it follows that a binary phylogenetic network that realizes $\vec{m}^{\prime}$ and therefore also $\vec{m}$ must have at least three hybrid vertices. In fact, the subnetwork $N^{\prime}$ in bold of the phylogenetic network depicted in Figure 1(i) is the unique (subject to letting the arc $a$ finish at a subdivision vertex of an outgoing or incoming arc of the hybrid vertex $h$, or letting $a$ start at a subdivision vertex of an outgoing or incoming arc of the vertex $t$) binary phylogenetic network that realizes $\vec{m}^{\prime}$ and uses a minimum number of hybrid vertices. To be able to realize the ploidy profile $(6,5)$, and therefore also the ploidy profile $\vec{m}^{\prime\prime}=(6,6,5)$, at least four hybrid vertices are therefore needed. By counting directed paths from the root to each leaf, it is easy to see that the phylogenetic network obtained from the one depicted in Figure 1(i) by removing $x_{1}$, the hybrid vertex $h^{\prime}$ above $x_{1}$, the two incoming arcs of $h^{\prime}$ and the arc $(h^{\prime},x_{1})$, and suppressing any resulting vertices of indegree and outdegree one, realizes $\vec{m}^{\prime\prime}$. Calling that phylogenetic network $N^{\prime\prime}$ then, in a similar sense as $N^{\prime}$, we also have that $N^{\prime\prime}$ is unique. To obtain a binary phylogenetic network from $N^{\prime\prime}$ that realizes $\vec{m}$, at least one further hybrid vertex is needed. Again by counting directed paths from the root to each leaf, it is easy to check that the binary phylogenetic network $N(\vec{m})$ depicted in Figure 1(i) realizes $\vec{m}$ and postulates five hybrid vertices. As we shall see as a direct consequence of Theorem 6.1, $h(\vec{m})=5$.

Figure 1. (i) One of potentially many phylogenetic networks that realize the ploidy profile $\vec{m}=(12,6,6,5)$ on $X=\\{x_{1},x_{2},x_{3},x_{4}\\}$. To improve clarity of exposition, we always assume that arcs are directed downward, away from the root. (ii) A (phylogenetic) tree to which subdivision vertices and arcs have been added to obtain the phylogenetic network in (i) – see the text for details.

As a further consequence of that theorem, we obtain a closed formula for the hybrid number of a ploidy profile (Corollary 6.2). The outline of the paper is as follows. In the next section, we present some relevant basic terminology and notation concerning phylogenetic networks. This also includes an unfold operation for phylogenetic networks and a fold-up operation that generates phylogenetic networks, both of which were originally introduced in [8]. In Section 3, we extend the concept of attainment from binary phylogenetic networks to phylogenetic networks and study structural properties of phylogenetic networks that attain a ploidy profile. As part of this, we introduce the two main concepts of the paper: a simple ploidy profile and an attainment of a ploidy profile. In Section 4, we associate two binary phylogenetic networks to a simple ploidy profile $\vec{m}$, which we denote by $D(\vec{m})$ and $B(\vec{m})$, respectively. As we shall see, the former is based on the prime factor decomposition of a positive integer $m$ and the latter on a binary representation of $m$. In Section 5, we associate a sequence $\sigma(\vec{m})$ to a ploidy profile $\vec{m}$ which we call the simplification sequence of $\vec{m}$ (Algorithm 1). As part of this, we also present some basic results concerning such sequences. This includes an infinite family of ploidy profiles showing that such a sequence can grow exponentially large. Denoting the last element of the simplification sequence for $\vec{m}$ by $\vec{m}_{t}$, we then employ a traceback through $\sigma(\vec{m})$ to obtain the aforementioned binary phylogenetic network $N(\vec{m})$ from a binary phylogenetic network that attains $\vec{m}_{t}$ (Algorithm 2).
Motivated by our partial results for binary phylogenetic networks that realize a simple ploidy profile, summarized in Theorem 4.3, we provide an upper bound on the hybrid number $h(\vec{m})$ of a ploidy profile $\vec{m}$ for special cases of $\vec{m}$ (Proposition 5.2). After collecting some preliminary results for $N(\vec{m})$ in Section 5, we establish in Section 6 that $N(\vec{m})$ attains $\vec{m}$ for a large class of ploidy profiles $\vec{m}$ (Theorem 6.1). In Section 7, we turn our attention to computing the hybrid number of the ploidy profile of a simplified version of the aforementioned Viola dataset from [17]. We conclude with Section 8 where we outline potential directions of further research.

## 2. Preliminaries

We start by introducing basic concepts surrounding phylogenetic networks. Subsequent to this, we briefly describe two basic operations concerning phylogenetic networks that are central for establishing a key result (Proposition 3.2). For the convenience of the reader, we illustrate both operations in Figures 2 and 3 by means of an example. Throughout the paper we assume that $X$ is a non-empty finite set. We denote the size of $X$ by $n$.

### 2.1. Basic concepts

Suppose for the following that $G$ is a rooted directed connected acyclic graph which might contain parallel arcs but no loops. Then we denote the vertex set of $G$ by $V(G)$ and its set of arcs by $A(G)$. We denote an arc $a\in A(G)$ starting at a vertex $u$ and ending in a vertex $v$ by $(u,v)$ and refer to $u$ as the tail of $a$ and to $v$ as the head of $a$. We call an arc $a\in A(G)$ a cut-arc if the deletion of $a$ disconnects $G$. We call a cut-arc $a$ of $G$ trivial if the head of $a$ is a leaf. Following [24], we call an induced subgraph of $G$ with two vertices $u$ and $v$ and two parallel arcs from $u$ to $v$ a bead of $G$. Suppose $v\in V(G)$.
Then we refer to the number of arcs coming into $v$ as the indegree of $v$, denoted by $indeg_{G}(v)$, and the number of outgoing arcs of $v$ as the outdegree of $v$, denoted by $outdeg_{G}(v)$. If $G$ is clear from the context then we will omit the subscript in $indeg_{G}(v)$ and $outdeg_{G}(v)$, respectively. We call $v$ the root of $G$, denoted by $\rho_{G}$, if $indeg(v)=0$, and we call $v$ a leaf of $G$ if $indeg(v)=1$ and $outdeg(v)=0$. We denote the set of leaves of $G$ by $L(G)$. We call $v$ a tree vertex if $outdeg(v)=2$ and $indeg(v)=1$, and we call $v$ a hybrid vertex if $indeg(v)\geq 2$ and $outdeg(v)=1$. We denote the set of hybrid vertices of $G$ by $H(G)$. We call any two leaves $x$ and $y$ of $G$ a cherry, denoted by $\\{x,y\\}$, if $x$ and $y$ share a parent. We say that $G$ is binary if $outdeg(\rho_{G})=2$ and, for all $v\in V(G)-L(G)$ other than $\rho_{G}$, the sum of the indegree and the outdegree of $v$ is three. We say that a vertex $w\in V(G)$ is above $v$ if there exists a directed path $P$ from $w$ to $v$. In that case, we also say that $v$ is below $w$. If, in addition, $v\not=w$ then we say that $w$ is strictly above $v$ and that $v$ is strictly below $w$. We call $G$ a (phylogenetic) network (on $X$) if $L(G)=X$, every vertex $v\in V(G)-L(G)$ other than $\rho_{G}$ is a tree vertex or a hybrid vertex, and $outdeg(\rho_{G})=2$. Note that phylogenetic networks in our sense were called semi-resolved phylogenetic networks in [8]. Also note that our definition of a phylogenetic network differs from the standard definition of such an object (see e.g. [23]) by allowing beads. To emphasise that a phylogenetic network has no beads, we will sometimes refer to it as a beadless phylogenetic network. Suppose $G$ is a phylogenetic network on $X$. Then following [3], we define the hybrid number $h(G)$ of $G$ to be $h(G)=\sum_{h\in H(G)}(indeg(h)-1).$ We refer to a phylogenetic network $G$ (on $X$) as a phylogenetic tree (on $X$) if $h(G)=0$.
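The quantities just introduced, together with the path counts $m_{N}(x)$ defined formally below, are straightforward to compute from an arc list. The following sketch uses a small hypothetical network (our own, not one from the figures) with a single hybrid vertex:

```python
from collections import defaultdict
from functools import lru_cache

# Hypothetical network on X = {x, y, z}: root r, tree vertices u and v,
# one hybrid vertex h with indegree two. A parallel arc would simply be
# listed twice, so it contributes 2 to the degree and path counts.
arcs = [("r", "u"), ("r", "v"), ("u", "h"), ("v", "h"),
        ("u", "y"), ("v", "z"), ("h", "x")]

indeg, outdeg = defaultdict(int), defaultdict(int)
for tail, head in arcs:
    outdeg[tail] += 1
    indeg[head] += 1

vertices = set(indeg) | set(outdeg)
hybrids = {w for w in vertices if indeg[w] >= 2 and outdeg[w] == 1}
h = sum(indeg[w] - 1 for w in hybrids)  # h(G) = sum over H(G) of (indeg - 1)

@lru_cache(maxsize=None)
def num_paths(w):
    """Number of directed paths from the root to w (m_N(w) for a leaf w)."""
    if indeg[w] == 0:  # w is the root
        return 1
    return sum(num_paths(t) for t, head in arcs if head == w)

leaves = sorted(w for w in vertices if outdeg[w] == 0)
print(h, [(w, num_paths(w)) for w in leaves])
# 1 [('x', 2), ('y', 1), ('z', 1)] -- the network realizes (2, 1, 1)
```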
For a phylogenetic tree $T$ on $X$ and a non-root vertex $v\in V(T)$ we denote by $T(v)$ the subtree of $T$ obtained by deleting the incoming arc of $v$ and the subsequently generated connected component that does not contain $v$. Suppose that $N$ is a phylogenetic network on $X$. Then we denote the number of directed paths from the root $\rho_{N}$ of $N$ to a leaf $x$ of $N$ by $m_{N}(x)$. In case $N$ is clear from the context, we will write $m(x)$ rather than $m_{N}(x)$. For $N^{\prime}$ a further phylogenetic network on $X$, we say that $N$ and $N^{\prime}$ are equivalent if there exists a graph isomorphism between $N$ and $N^{\prime}$ that is the identity on $X$. Furthermore, we say that $N^{\prime}$ is a (binary) resolution of $N$ if $N^{\prime}$ is obtained from $N$ by resolving all vertices in $H(N)$ so that every vertex in $H(N^{\prime})$ has indegree two. Note that for any resolution $N^{\prime}$ of $N$, we have $h(N)=|H(N^{\prime})|=h(N^{\prime})$. ### 2.2. The fold-up $F(U(N))$ of the unfold $U(N)$ of a phylogenetic network $N$ Phylogenetic trees on $X$ were generalized in [8] to so called multi-labelled trees (on $X$) or MUL-trees (on $X$), for short, by replacing the leaf set of a phylogenetic tree by a multiset $Y$ on $X$. Put differently, $X$ is the set obtained from $Y$ by ignoring the multiplicities of the elements in $Y$. As was pointed out in the same paper, every phylogenetic network $N$ gives rise to a MUL-tree $U(N)$ on $X$ by recording, for every vertex $v$ of $N$, every directed path from the root $\rho_{N}$ of $N$ to $v$. More precisely, the vertex set of $U(N)$ is, for all vertices $v\in V(N)$, the set of all directed paths $P$ from $\rho_{N}$ to $v$ where we identify $P$ with its end vertex $v$. Two vertices $P$ and $P^{\prime}$ in $U(N)$ are joined by an arc $(P^{\prime},P)$ if there exists an arc $a\in A(N)$ such that $P$ is obtained from $P^{\prime}$ by extending $P^{\prime}$ by the arc $a$. 
Figure 2. (i) The MUL-tree $M$ obtained by unfolding the phylogenetic network on $X=\\{x,y\\}$ in (iv). The trees $T(u)$ and $T(v)$ rooted at $u$ and $v$, respectively, and indicated with a double arrow, are equivalent. In fact, they are maximal inextendible. (ii) Subdivision of the incoming arcs of $u$ and $v$ by $h_{u}$ and $h_{v}$, respectively. (iii) Identifying the vertices $h_{u}$ and $h_{v}$. (iv) Deleting the subtree $T(v)$ and the incoming arc of $v$ (indicated by dotted lines in (iii)).

For example, the vertex $u$ in Figure 2(i) is the directed path $\rho$, $s$, $u$ in the phylogenetic network in Figure 2(iv) which crosses the arc $a$. The vertex $v$ in Figure 2(i) is the directed path $\rho$, $s$, $u$ in Figure 2(iv) which crosses the arc $a^{\prime}$. Reading Figure 2 from left to right suggests that the unfolding operation can also be reversed. We next briefly outline this reversal operation, which may be thought of as the fold-up of a MUL-tree $M$ into a phylogenetic network $F(M)$ (see [8] for details, [11, 13] for more on both constructions, and Figure 3 for an example). To make this more precise, we require further terminology. Suppose that $M$ is a MUL-tree on $X$. Then we denote for a non-root vertex $v$ of $M$ the parent of $v$ by $\overline{v}$. Extending the relevant notions from phylogenetic trees to MUL-trees, we say that a subMUL-tree $T$ with root $u$ of $M$ is inextendible if there exists a subMUL-tree $T^{\prime}$ of $M$ with root vertex $w\not=u$ such that $T$ and $T^{\prime}$ are equivalent and either $\overline{u}=\overline{w}$ or $\overline{u}\not=\overline{w}$ and $T(\overline{u})$ and $T(\overline{w})$ are not equivalent. By definition, every subMUL-tree of $M$ that is equivalent with an inextendible subMUL-tree of $M$ is necessarily also inextendible.
In view of this, we refer to an inextendible subMUL-tree $T$ of $M$ as maximal inextendible if no subMUL-tree of $M$ that is equivalent with $T$ is a subMUL-tree of an inextendible subMUL-tree of $M$. So, for example, the subMUL-tree $T(u)$ of the MUL-tree $M$ depicted in Figure 3(i) is inextendible but the subMUL-tree $T(u^{\prime})$ is not. In fact, $T(u)$ is maximal inextendible because the only equivalent copy of $T(u)$ in $M$ that is not $T(u)$ is $T(v)$, and neither $T(u)$ nor $T(v)$ is a subMUL-tree of an inextendible subMUL-tree in $M$. To construct $F(M)$, we first construct a sequence $\gamma_{M}$ of subMUL-trees of $M$ which we call a guide sequence for $F(M)$ and which we initialize with the empty sequence. Let $T$ denote a maximal inextendible subMUL-tree of $M$. Let $u$ denote the root of $T$, and let $U=U_{u}\subseteq V(M)$ denote the set of vertices $v\in V(M)$ such that the subMUL-tree rooted at $v$ is equivalent with $T(u)$. Note that, by definition, $|U|\geq 2$. Then, for all $v\in U$, we first subdivide the incoming arc of $v$ by a vertex $h_{v}$ (cf Figure 2(ii)) and then identify all vertices $h_{v}$, $v\in U$, with the vertex $h_{u}$ (cf Figure 2(iii)). By construction, $h_{u}$ clearly has $|U|$ incoming arcs and also $|U|$ outgoing arcs. From these $|U|$ outgoing arcs of $h_{u}$, we delete all but one arc and, for each deleted arc $a$, we remove the subMUL-tree $T(v)$ rooted at the head $v$ of $a$ (Figure 2(iv)). We then grow $\gamma_{M}$ by adding an equivalent copy of $T(u)$ at the end of $\gamma_{M}$ in case $\gamma_{M}$ is not the empty sequence. Otherwise we add $T(u)$ as the first element to $\gamma_{M}$. Replacing $M$ with the resulting graph $N_{U}$, we then find a new maximal inextendible subMUL-tree in $N_{U}$ and proceed as before (where we canonically extend the notions of a maximal inextendible subMUL-tree and of a subMUL-tree rooted at a vertex to $N_{U}$).
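The computational core of this construction is the repeated search for equivalent subMUL-trees. Equivalence of rooted MUL-trees can be tested by comparing canonical encodings, for instance in the classical Aho–Hopcroft–Ullman style sketched below; the tuple representation of MUL-trees is our own and not taken from [8]:

```python
def canon(tree):
    """Canonical encoding of a rooted MUL-tree given as (label, children).
    Children of a vertex are unordered, so they are sorted after being
    encoded recursively; two subMUL-trees are equivalent iff their
    encodings coincide. Internal vertices carry the empty label ""."""
    label, children = tree
    return (label, tuple(sorted(canon(child) for child in children)))

# Two equivalent subMUL-trees on the leaf multiset {x1, x1, x2},
# written with their children in different orders:
t1 = ("", [("x1", []), ("", [("x1", []), ("x2", [])])])
t2 = ("", [("", [("x2", []), ("x1", [])]), ("x1", [])])
assert canon(t1) == canon(t2)
```

Hashing these encodings makes it cheap to group all roots of pairwise equivalent subMUL-trees, which is exactly the set $U$ used in each fold-up step.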
In the case of the example in Figure 3, the next maximal inextendible subMUL-tree in Figure 3(ii) is one of the leaves labelled $x_{1}$. By construction, the process of subdividing (cf Figure 2(ii)), identifying (cf Figure 2(iii)), and deleting (cf Figure 2(iv)) terminates in a phylogenetic network on $X$. That network is $F(M)$. We depict $F(M)$ in Figure 3(iv) for the MUL-tree $M$ pictured in Figure 3(i). As was pointed out in [8, Section 6], $F(M)$ is independent of the order in which ties are resolved when processing maximal inextendible subMUL-trees. Also, all tree vertices of $F(M)$ have outdegree two because $M$ is a binary MUL-tree. However, $F(M)$ might contain hybrid vertices whose indegree is two or more since, when processing a maximal inextendible subMUL-tree $T$, there might be more than two subMUL-trees in the graph generated thus far that are equivalent with $T$. Finally, $F(M)$ cannot contain arcs whose tail and head are both hybrid vertices because the hybrid vertices of $F(M)$ are in bijective correspondence with the elements in the guide sequence for $F(M)$.

Figure 3. (i) The MUL-tree $M$ obtained by unfolding the phylogenetic network on $\\{x_{1},x_{2}\\}$ pictured in (iv). The vertices $u$ and $v$ as indicated in (i) are the roots of the maximal inextendible subtrees of $M$ to which the subdivision, identification and deletion process described in Figure 2 is applied to obtain the rooted directed acyclic graph $G$ presented in (iii). The two leaves labelled $x_{1}$ in $G$ are the roots of two equivalent maximal inextendible subtrees of $G$, and applying the subdivision, identification, and deletion process to them results in $F(M)$. In each case, the equivalent subMUL-trees are indicated by a double arrow.

We conclude the outline of both constructions with the following remark. Suppose $N$ is a phylogenetic network on $X$.
Then we call two distinct tree vertices $u$ and $v$ in $V(N)$ an identifiable pair if the subMUL-tree of $U(N)$ rooted at the vertex that is a directed path in $N$ from the root $\rho_{N}$ of $N$ to $u$ is equivalent with the subMUL-tree of $U(N)$ rooted at the vertex that is a directed path in $N$ from $\rho_{N}$ to $v$. Let $C(N)$ denote the compressed phylogenetic network obtained from $N$, i.e. the phylogenetic network obtained from $N$ by contracting all arcs $(u,v)$ for which both $u$ and $v$ are hybrid vertices. Bearing in mind that the phylogenetic network $F(M)$ associated to a MUL-tree $M$ was denoted $\mathcal{D}(M)$ in [8], the following hold:

* (R1) $F(U(N))$ does not contain an identifiable pair of vertices [8, Theorem 3].

* (R2) If $N$ and $N^{\prime}$ are phylogenetic networks such that the MUL-trees $U(N)$ and $U(N^{\prime})$ are equivalent then $h(F(U(N)))\leq h(N^{\prime})$ [8, Corollary 2(ii)].

* (R3) If $N$ is a phylogenetic network that does not contain an identifiable pair of vertices then the compressed phylogenetic networks $C(F(U(N)))=F(U(N))$ and $C(N)$ are equivalent (a consequence of (R1) and [8, Theorem 2]).

## 3. Properties of phylogenetic networks that attain the hybrid number of a ploidy profile

In this section, we collect structural properties of phylogenetic networks that attain the hybrid number of a ploidy profile. For ease of readability, we will assume from now on that for a ploidy profile $\vec{m}=(m_{1},\ldots,m_{n})$ on $X$ the elements in $X$ are always ordered in such a way that $m(x_{i})=m_{i}$ holds for all $1\leq i\leq n$ and that $\vec{m}$ is in descending order, that is, $m_{i}\geq m_{i+1}$ holds for all $1\leq i\leq n-1$. We start with some notations and definitions. Suppose that $N$ is a phylogenetic network on $X=\\{x_{1},\ldots,x_{n}\\}$ and that $\vec{m}=(m_{1},\ldots,m_{n})$ is a ploidy profile on $X$. Then we call $\vec{m}$ simple if $m_{i}=1$ for all $2\leq i\leq n$ (i.e.
$m_{1}$ is the only component of $\vec{m}$ that can be greater than one). Moreover, we call $\vec{m}$ strictly simple if $\vec{m}$ is simple and $|X|=1$. We say that $N$ realizes a ploidy profile $\vec{m}$ if the elements in $X$ can be ordered in such a way that $m_{i}=m(x_{i})$ holds for all $1\leq i\leq n$. In this case, we also call $N$ a realization of $\vec{m}$. Furthermore, we say that $N$ is a binary realization of $\vec{m}$ if $N$ is binary. We say that $N$ attains $\vec{m}$ if $N$ realizes $\vec{m}$ and $h(\vec{m})=h(N)=\sum_{h\in H(N)}(indeg(h)-1)$. In this case, we refer to $N$ as an attainment of $\vec{m}$. If $N$ is an attainment and also binary then we call $N$ a binary attainment of $\vec{m}$. As is straightforward to verify using the construction of the phylogenetic network indicated in Figure 4 and the definition of $m(x)$, $x\in X$, every ploidy profile $\vec{m}=(m_{1},\ldots,m_{n})$ on $X=\\{x_{1},\ldots,x_{n}\\}$ with $n\geq 1$ is realized by a phylogenetic network that contains at most $\sum_{i=1}^{n}(m_{i}-1)$ hybrid vertices. Thus, the hybrid number of a ploidy profile always exists.

Figure 4. A phylogenetic network on $X=\\{x_{1},\ldots,x_{n}\\}$ that realizes the ploidy profile $\vec{m}=(m_{1},\ldots,m_{n})$ on $X$. For all $1\leq i\leq n$, the number of curved lines is $m_{i}-1$.

As we shall see in Proposition 5.2, this bound can be improved for many ploidy profiles. To be able to collect some simple properties of attainments, which we will do next, we require further terminology and notation. Suppose $N$ is a binary phylogenetic network on $X$. Then we say that $N$ is semi-stable if $N$ is equivalent to a resolution of $F(U(N))$. Motivated by the fact that a beadless phylogenetic network $N$ that is equivalent to $F(U(N))$ was called stable in [11], we canonically extend this concept to our types of phylogenetic networks by saying that a phylogenetic network $N$ is stable if $N$ is equivalent with $F(U(N))$.
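Before turning to examples of (semi-)stable networks, note that the construction of Figure 4, together with the path-counting argument from the introduction, brackets the hybrid number: $\lceil\log_{2}\max_{i}m_{i}\rceil\leq h(\vec{m})\leq\sum_{i=1}^{n}(m_{i}-1)$. For the introductory profile $(12,6,6,5)$ these generic bounds are far apart; the helper names below are ours:

```python
import math

def naive_upper(profile):
    """Hybrid vertices used by the generic realization of Figure 4."""
    return sum(m - 1 for m in profile)

def path_lower(profile):
    """At most 2**k paths to any fixed leaf with k hybrid vertices."""
    return math.ceil(math.log2(max(profile)))

m = (12, 6, 6, 5)
print(path_lower(m), naive_upper(m))  # 4 and 25; h(m) is in fact 5
```

The gap between 4 and 25 is what the refined bound of Proposition 5.2 and the closed formula of Corollary 6.2 close.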
For example, the binary phylogenetic network $N$ depicted in Figure 5(i) is semi-stable but not stable since $U(N)$ is the MUL-tree depicted in Figure 5(ii) and $F(U(N))$ is the phylogenetic network depicted in Figure 5(iii). The phylogenetic network $N^{\prime}$ pictured in Figure 5(iv) is not semi-stable. In fact, for a binary phylogenetic network $N$ to be stable it cannot contain the phylogenetic network $N^{\prime}$ pictured in Figure 5(iv) as an induced subgraph (where $x_{1}$ and $x_{2}$ need not be leaves in $N^{\prime}$) since $F(U(N^{\prime}))$ is the phylogenetic network depicted in Figure 5(v).

Figure 5. The phylogenetic network $N$ depicted in (i) is semi-stable but not stable since it is not equivalent with $F(U(N))$, i.e. the phylogenetic network depicted in (iii). The MUL-tree $U(N)$ is pictured in (ii). The phylogenetic network pictured in (iv) is not semi-stable. For a phylogenetic network to be stable it cannot contain the phylogenetic network $N^{\prime}$ pictured in (iv) as an induced subgraph since $F(U(N^{\prime}))$ is the phylogenetic network pictured in (v).

As we shall see below, certain types of binary phylogenetic networks called beaded trees are examples of stable phylogenetic networks. Although introduced in [24] in the context of a study of binary phylogenetic networks whose root has indegree one and not zero as in our case, the main feature of beaded trees is that a hybrid vertex must be contained in a bead. In view of this, we call a binary phylogenetic network $N$ on $X$ a beaded tree if $N$ is either a phylogenetic tree on $X$ or every hybrid vertex is contained in a bead (see e.g. [6] for more on such graphs). Then since a beaded tree $N$ cannot contain an identifiable pair of vertices, it follows by (R3) that the compressed phylogenetic networks $C(N)$ and $F(U(N))$ are equivalent. Since $N$ is a beaded tree and so does not contain arcs whose tail and head are hybrid vertices, it follows that $C(N)$ is in fact $N$. Thus, $N$ must be stable.
Suppose $N$ is an attainment of a ploidy profile $\vec{m}$ on $X$ that contains a cut-arc $a$. Then deleting $a$ results in two connected components $N_{1}$ and $N_{2}$, one of which contains the root of $N$, say $N_{1}$, while the other is a phylogenetic network on $X-L(N_{1})$. For $x\not\in L(N_{1})$ we let $N_{1}^{x}$ denote the phylogenetic network on $L(N_{1})\cup\\{x\\}$ obtained from $N_{1}$ by adding a pendant arc $a^{\prime}$ to $tail(a)$ and labelling the head of $a^{\prime}$ by $x$. For any phylogenetic network $N$ on $X$, we denote by $\vec{m}(N)$ the ploidy profile on $X$ realized by $N$.

###### Lemma 3.1.

Suppose that $N$ is an attainment of a ploidy profile $\vec{m}$ on $X$. Then the following hold.

1. (i) $F(U(N))$ and any resolution of $F(U(N))$ is an attainment of $\vec{m}$.

2. (ii) $N$ is semi-stable.

3. (iii) Suppose $N$ contains a cut-arc $a$ and $N_{1}$ and $N_{2}$ are the connected components of $N$ obtained by deleting $a$. If $\rho_{N}\in V(N_{1})$ and $x\not\in L(N_{1})$ then $N_{1}^{x}$ is an attainment of $\vec{m}(N_{1}^{x})$ and $N_{2}$ is an attainment of $\vec{m}(N_{2})$.

###### Proof.

(i): Clearly, $U(N)$ is the unfold of $N$ and also of $F(U(N))$. In view of (R2), we obtain $h(F(U(N)))\leq h(N)$. Since $N$ is an attainment of $\vec{m}$ and $F(U(N))$ realizes $\vec{m}$ it follows that $h(N)\leq h(F(U(N)))$ must hold too. Thus, $h(F(U(N)))=h(N)$. Consequently, $F(U(N))$ is an attainment of $\vec{m}$. To see the remainder, suppose for contradiction that $F(U(N))$ has a resolution $D$ that is not an attainment of $\vec{m}$. Since $D$ realizes $\vec{m}$ but is not an attainment of it, we have $h(D)>h(\vec{m})=h(F(U(N)))=h(D)$; a contradiction.

(ii): Since $N$ is an attainment of $\vec{m}$ it cannot contain a pair of identifiable vertices as otherwise $h(F(U(N)))<h(N)$ would hold, which is impossible in view of Assertion (i). By (R3) it follows that the compressed networks $C(N)$ and $C(F(U(N)))$ are equivalent. Hence $N$ must be a resolution of $F(U(N))$, that is, $N$ is semi-stable.
(iii): Since $a$ is a cut-arc of $N$ and therefore cannot have a head that is a hybrid vertex, we have $h(\vec{m})=h(\vec{m}(N_{1}^{x}))+h(\vec{m}(N_{2}))$. Since every directed path from the root of $N$ to a leaf of $N_{2}$ must cross $a$ because $a$ is a cut-arc of $N$, it follows that $m_{N}(y)=m_{N_{1}^{x}}(x)\times m_{N_{2}}(y)$ holds for all $y\in L(N_{2})$. This implies the statement. ∎

The unfold and fold-up operations described in Section 2.2 lie at the heart of the proof of Proposition 3.2.

###### Proposition 3.2.

Suppose $\vec{m}$ is a ploidy profile on $X=\\{x_{1},\ldots,x_{n}\\}$ and that $N$ is an attainment of $\vec{m}$. Then there must exist a directed path $P$ from the root of $F(U(N))$ to $x_{1}$ in $F(U(N))$ such that every hybrid vertex in $F(U(N))$ lies on $P$. If, in addition, $N$ is stable then $P$ must be a directed path in $N$.

###### Proof.

Put $\vec{m}=(m_{1},\ldots,m_{n})$. Suppose for contradiction that there exists no directed path from the root $\rho$ of $F(U(N))$ to $x_{1}$ in $F(U(N))$ that contains all hybrid vertices of $F(U(N))$. Then since $N$ is an attainment of $\vec{m}$, Lemma 3.1 implies that $F(U(N))$ is also an attainment of $\vec{m}$. Consequently, $h(N)=h(F(U(N)))$. Let $\gamma_{U(N)}:T_{1},T_{2},\ldots,T_{l}$, for some $l\geq 1$, denote a guide sequence for $F(U(N))$. Without loss of generality we may assume that $l\geq 2$ since otherwise $F(U(N))$ contains only one hybrid vertex and, so, the proposition holds. Then there must exist some $i\in\\{2,\ldots,l\\}$ such that $T_{i}$ is not a subMUL-tree of $T_{i-1}$ as otherwise all hybrid vertices of $F(U(N))$ would lie on a directed path from $\rho$ to $x_{1}$. Without loss of generality, we may assume that $i$ is as small as possible with this property, i.e. $T_{j+1}$ is a subMUL-tree of $T_{j}$, for all $1\leq j\leq i-2$. Let $M$ denote the MUL-tree obtained from $U(N)$ as follows. For $j\in\\{1,i\\}$ let $t_{j}$ denote the number of equivalent copies of $T_{j}$ in $U(N)$.
Let $t=\min\\{t_{1},t_{i}\\}$. Then $t\geq 2$. Choose $t$ equivalent copies $R_{1},\ldots,R_{t}$ of $T_{i}$ in $U(N)$. For all $1\leq j\leq t$, delete the incoming arc of the root $r_{j}$ of $R_{j}$. Next choose $t$ equivalent copies of $T_{1}$ in $U(N)$ and, for all $1\leq j\leq t$, subdivide the incoming arc of the root of the $j$-th copy by a vertex $s_{j}$. Note that this is possible since $T_{1}$ is the first element in $\gamma_{U(N)}$ and so cannot be $U(N)$. Last but not least, add the arcs $(s_{j},r_{j})$, for all $1\leq j\leq t$. Since this might have resulted in arcs whose head is not contained in $X$ and also vertices that have indegree one and outdegree one, we clean the resulting MUL-tree by removing the former and repeatedly suppressing the latter. Also we repeatedly identify the root with its unique child if this has rendered it a vertex with outdegree one. By construction, $F(M)$ is a phylogenetic network that realizes $\vec{m}$. Furthermore, $h(F(M))=h(F(U(N)))-(t-1)=h(N)-(t-1)<h(N)$ must hold since $t\geq 2$; a contradiction as $N$ is an attainment of $\vec{m}$. The remainder of the proposition is an immediate consequence because $N$ and $F(U(N))$ are equivalent in this case. ∎

Since, as mentioned above, beaded trees are stable phylogenetic networks, the corresponding result for beaded trees in [24, Lemma 13] is a consequence of Proposition 3.2 (once an incoming arc has been added to the root).

###### Lemma 3.3.

Suppose $\vec{m}=(m_{1},\ldots,m_{n})$ is a simple ploidy profile on $X$ such that $m_{1}$ is a prime number. Then any cut-arc in an attainment of $\vec{m}$ must be trivial.

###### Proof.

Suppose $N$ is an attainment of $\vec{m}$.
Then the phylogenetic network $N^{\prime}$ obtained from $N$ by removing, for all $2\leq i\leq n$, the cut-arcs ending in a leaf $x_{i}$ of $N$ as well as the leaves $x_{i}$ (suppressing the resulting vertices of indegree one and outdegree one and also the root in case this has rendered it an outdegree one vertex) is a phylogenetic network on $X^{\prime}=\\{x_{1}\\}$. Note that since none of the elements $x_{i}$ indexing $m_{i}$, $2\leq i\leq n$, contributes to $h(N)$, we have $h(N)=h(N^{\prime})$. Thus, $N^{\prime}$ is an attainment of the ploidy profile $\vec{m_{1}}=(m_{1})$. Put $m=m_{1}$ and $x=x_{1}$. If $m\in\\{2,3\\}$ then the lemma clearly holds since the only cut-arc of $N^{\prime}$ is the incoming arc of $x_{1}$ and therefore is trivial. So assume that $m\geq 4$. Assume for contradiction that $N^{\prime}$ has a non-trivial cut-arc $a$. Let $N_{1}$ and $N_{2}$ denote the connected components of $N^{\prime}$ obtained by deleting $a$. Assume without loss of generality that the root of $N^{\prime}$ is contained in $V(N_{1})$. Let $y\not\in L(N_{1})$. Then since for all leaves $z$ in a phylogenetic network $M$ the number of directed paths from the root of $M$ to $z$ is $m_{M}(z)$, it follows that $m=m_{N^{\prime}}(x)=m_{N_{1}^{y}}(y)\times m_{N_{2}}(x)$. Since $1\not\in\\{m_{N_{1}^{y}}(y),m_{N_{2}}(x)\\}$ and $m$ is prime, this is impossible. ∎

## 4. Realizing simple ploidy profiles

We start this section by associating with a simple ploidy profile $\vec{m}$ a binary phylogenetic network $D(\vec{m})$ that is based on the prime factor decomposition of $m_{1}$, and also a binary phylogenetic network $B(\vec{m})$ that is based on the unique bitwise representation of $m_{1}$. As we shall see, other ways to define binary realizations of $\vec{m}$ that are based on the prime factor decomposition of $m_{1}$ or on the bitwise representation of $m_{1}$, and that are similar in spirit to the definitions of $D(\vec{m})$ and $B(\vec{m})$, are conceivable.
Furthermore, the ploidy profiles considered in Figure 6 suggest that the relationship between the number of hybrid vertices in $D(\vec{m})$ and in $B(\vec{m})$ is not straightforward. Suppose that $\vec{m}=(m_{1},\ldots,m_{n})$, $n\geq 1$, is a ploidy profile on $X=\\{x_{1},\ldots,x_{n}\\}$. Figure 6. For a strictly simple ploidy profile $\vec{m}$ we depict in (i), (iii), (v) and (viii) the phylogenetic network $B=B(\vec{m})$ and in (ii), (iv), and (vi) the phylogenetic network $D=D(\vec{m})$. (i) and (ii): $\vec{m}=(15)$ and $h(B)=6>5=h(D)$; (iii) and (iv): $\vec{m}=(9)$ and $h(B)=4=h(D)$; (v) and (vi): $\vec{m}=(265)$ and $h(B)=10<11=h(D)$. (vii) A realization of the ploidy profile $\vec{m}=(47)$ that uses eight hybrid vertices. (viii) The realization of the ploidy profile in (vii) in terms of $B(\vec{m})$. ### 4.1. The phylogenetic network $D(\vec{m})$ We begin by introducing further terminology. Suppose that $m$ is a positive integer and that, for all $1\leq i\leq k$, $p_{i}$ is a prime and $\alpha_{i}\geq 1$ is an integer such that $p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdot\ldots\cdot p_{k}^{\alpha_{k}}$ is a prime factor decomposition of $m$. Without loss of generality, we may assume throughout the remainder of the paper that the primes $p_{1},\ldots,p_{k}$ are indexed in such a way that $p_{i}>p_{i+1}$ holds for all $1\leq i\leq k-1$. For all $1\leq i\leq k$, let $\vec{p}_{i}=(p_{i})$ denote the strictly simple ploidy profile on $Y=\\{x_{1}\\}$. Also let $\mathcal{A}(\vec{p}_{i})$ denote a binary phylogenetic network on $Y$ that attains $\vec{p}_{i}$. Note that $\mathcal{A}(\vec{p}_{i})$ need not be unique. For all $1\leq i\leq k$, we then define a binary phylogenetic network $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$ on $Y$ as follows: #### 4.1.1. The phylogenetic network $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$ We take the root $\rho_{i}$ of $\mathcal{A}(\vec{p}_{i})$ to be the root of $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$.
If $\alpha_{i}=1$ then we take $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$ to be $\mathcal{A}(\vec{p}_{i})$. If $\alpha_{i}\geq 2$ then we make $\alpha_{i}$ equivalent copies of $\mathcal{A}(\vec{p}_{i})$ and order them in some way. Next, we identify the unique leaf of the first of the $\alpha_{i}$ copies of $\mathcal{A}(\vec{p}_{i})$ under that ordering with the root of the second copy of $\mathcal{A}(\vec{p}_{i})$ and so on until we have processed all $\alpha_{i}$ copies of $\mathcal{A}(\vec{p}_{i})$ this way. The resulting directed acyclic graph is $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$ in this case. To illustrate this construction, assume that $m=4$. Then $k=1$, $p_{1}=2=\alpha_{1}$, and $Y=\\{x_{1}\\}$. Furthermore, the phylogenetic network depicted in Figure 3(iv) with the leaf $x_{2}$ and its incoming arc removed, and the resulting vertex of indegree and outdegree one suppressed, is $\mathcal{A}(\vec{p}_{1})^{\alpha_{1}}$. #### 4.1.2. From $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$ to $D(\vec{m})$ in case $\vec{m}$ is strictly simple Suppose $\vec{m}$ is strictly simple. Then we obtain $D(\vec{m})$ by ‘stacking’ the networks $\mathcal{A}(\vec{p}_{1})^{\alpha_{1}},\ldots,\mathcal{A}(\vec{p}_{k})^{\alpha_{k}}$ obtained as described above for a prime factor decomposition $p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdot\ldots\cdot p_{k}^{\alpha_{k}}$ of $m=m_{1}$ and a choice of attainment $\mathcal{A}(\vec{p}_{i})$ of $\vec{p}_{i}=(p_{i})$, for all $1\leq i\leq k$. If $k=1$ then $D(\vec{m})$ is $\mathcal{A}(\vec{p}_{1})^{\alpha_{1}}$. So assume $k\geq 2$. Then we define $D(\vec{m})$ to be the phylogenetic network on $\\{x_{1}\\}$ obtained by identifying, for all $1\leq i\leq k-1$, the unique leaf of $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$ with the root of $\mathcal{A}(\vec{p}_{i+1})^{\alpha_{i+1}}$. For the convenience of the reader, we depict $D(\vec{m})$ for the strictly simple ploidy profile $\vec{m}=(9)$ on $\\{x_{1}\\}$ in Figure 6(iv). #### 4.1.3.
From $\mathcal{A}(\vec{p}_{i})^{\alpha_{i}}$ to $D(\vec{m})$ in case $\vec{m}$ is not strictly simple For all primes $p$ in the prime factor decomposition of $m_{1}$, choose a binary attainment $\mathcal{A}(\vec{p})$ of the strictly simple ploidy profile $\vec{p}=(p)$ and construct the network $D(\vec{m^{\prime}})$ for the strictly simple ploidy profile $\vec{m^{\prime}}=(m_{1})$ as described above. We then process that network further as follows. First, we choose an outgoing arc $a$ of the root of $D(\vec{m^{\prime}})$ and subdivide it with $n-1$ subdivision vertices $s_{2},\ldots,s_{n}$ where, starting at the tail of $a$, the first subdivision vertex is $s_{2}$, the next is $s_{3}$, and so on. To the vertices $s_{i}$, $2\leq i\leq n$, we then add the arcs $(s_{i},x_{i})$ to obtain $D(\vec{m})$. As an immediate consequence of the construction of $D(\vec{m})$, we have that $D(\vec{m})$ does not contain an identifiable pair of vertices. In view of (R1) it follows that $D(\vec{m})$ is semi-stable. In summary, we therefore have the following result. ###### Lemma 4.1. Suppose $\vec{m}$ is a simple ploidy profile on $X$. Then $D(\vec{m})$ is a binary, semi-stable phylogenetic network on $X$ that realizes $\vec{m}$. Note that as the strictly simple ploidy profile $\vec{m}=(m)$ with $m=265$ shows, the phylogenetic network depicted in Figure 6(v) uses fewer hybrid vertices to attain $\vec{m}$ than the phylogenetic network $D(\vec{m})$ depicted in Figure 6(vi). Thus, an attainment of a simple ploidy profile $\vec{m}$ need not be obtained from a prime factor decomposition of the first component of $\vec{m}$. For the remainder of this section, assume again that $\vec{m}=(m_{1},\ldots,m_{n})$, $n\geq 1$, is a simple ploidy profile on $X=\\{x_{1},\ldots,x_{n}\\}$. ### 4.2. The phylogenetic network $B(\vec{m})$ We start by associating two vectors to a positive integer $m$ which we call the bitwise representation (of $m$) and the binary representation (of $m$), respectively.
For $m$ a positive integer, the first is the 0-1 vector $\vec{v}_{m}=(v_{m}^{f},\ldots,v_{m}^{1},v_{m}^{0})$ such that $m=\sum_{i=0}^{f}2^{i}v_{m}^{i}$. For ease of presentation, and unless stated otherwise, we denote by $v_{m}^{f}$ the most significant bit that is one. The second is the vector $(i_{1},\ldots,i_{q})$, $q\geq 1$ and $i_{j}\not=0$, for all $1\leq j\leq q-1$, such that $m=\sum_{j=1}^{q}2^{i_{j}}$ holds. Informally speaking, the $j$-th entry of that vector is the exponent of the term $2^{i_{j}}$ in the bitwise representation of $m$. Note that $i_{1}=f$; that is, $2^{i_{1}}$ corresponds to the component $v_{m}^{f}$ of $\vec{v}_{m}$. For example, the bitwise representation of $m=11$ is $(1,0,1,1)$ and the binary representation of $m$ is $(3,1,0)$. #### 4.2.1. The phylogenetic network $B(\vec{m})$ in case $\vec{m}$ is strictly simple Then $\vec{m}=(m_{1})$ and $X=\\{x_{1}\\}$. Let $B(q)$ denote the beaded tree with unique leaf $x_{1}$ and $q\geq 0$ hybrid vertices. Let $(i_{1},\ldots,i_{q})$ denote the binary representation of $m_{1}$. Then $B(\vec{m})$ is obtained from the beaded tree $B(i_{1})$ as follows. Choose one of the two outgoing arcs of the root of $B(i_{1})$ and subdivide it with $q-1$ vertices $s_{2},\ldots,s_{q}$ not contained in $B(i_{1})$ so that $s_{2}$ is the child of the root of $B(i_{1})$, $s_{3}$ is the child of $s_{2}$, and so on. For all $2\leq j\leq q$, we then add an arc $a_{j}$ to $s_{j}$ whose head is a subdivision vertex of the outgoing arc of the hybrid vertex of $B(i_{1})$ that has precisely $i_{j}$ hybrid vertices of $B(i_{1})$ strictly below it. We refer the interested reader to Figure 6(iii) for an illustration of $B(\vec{m})$ for the strictly simple ploidy profile $\vec{m}=(9)$. #### 4.2.2. The phylogenetic network $B(\vec{m})$ in case $\vec{m}$ is not strictly simple We first construct the phylogenetic network $B(\vec{m^{\prime}})$ for the strictly simple ploidy profile $\vec{m^{\prime}}=(m_{1})$ on $\\{x_{1}\\}$.
Next, we choose one of the two outgoing arcs of the root of $B(\vec{m^{\prime}})$ and subdivide that arc with $n-1$ subdivision vertices $t_{2},\ldots,t_{n}$ such that $t_{2}$ is the child of the root of $B(\vec{m^{\prime}})$, $t_{3}$ is the child of $t_{2}$ and so on. Finally, we attach to each $t_{i}$ the arc $(t_{i},x_{i})$, $2\leq i\leq n$. To illustrate this construction, consider the simple ploidy profile $\vec{m}=(5,1)$ on $X^{\prime}=\\{x_{1},x_{2}\\}$. Then $\vec{m^{\prime}}=(5)$ and the phylogenetic network $D$ depicted in Figure 8 is $B(\vec{m})$. In fact, $B(\vec{m})$ is a binary attainment of $\vec{m}$. As indicated in Figure 6, the relationship between $D(\vec{m})$, $B(\vec{m})$, and a binary attainment of a simple ploidy profile $\vec{m}$ is far from clear in general. This holds even if $\vec{m}=(m)$ is strictly simple and $m$ is a prime. Indeed, for $m=47$ the hybrid number of $\vec{m}$ is at most eight since the phylogenetic network depicted in Figure 6(vii) realizes $\vec{m}$. However, $h(B(\vec{m}))=9$. This implies that, in general, $B(\vec{m})$ with $\vec{m}=(p)$ and $p$ a prime cannot be used as an attainment with which to initialize the construction of $D(\vec{m})$. As an immediate consequence of the construction of $B(\vec{m})$, we have the following companion result of Lemma 4.1 since similar arguments as in the case of $D(\vec{m})$ imply that $B(\vec{m})$ is semi-stable. ###### Lemma 4.2. Suppose $\vec{m}$ is a simple ploidy profile on $X$. Then $B(\vec{m})$ is a binary, semi-stable phylogenetic network on $X$ that realizes $\vec{m}$. To gain insight into the structure of $B(\vec{m})$, we next present formulae for counting, for a simple ploidy profile $\vec{m}$, the number $b(\vec{m})$ of vertices in $B(\vec{m})$ and also the number of hybrid vertices of $B(\vec{m})$. Note that such formulae are known for certain types of phylogenetic networks without beads (see e.g. [18, 25] and [23] for more).
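Before stating the formulae formally, the two representations of $m$ and the resulting hybrid-vertex count $i_{1}+\dim(\vec{i}_{m})-1$ of $B((m))$ from Theorem 4.3 below can be checked mechanically. The following sketch (the helper names are ours, not the paper's) reproduces the values reported in Figure 6:

```python
def binary_representation(m):
    """Exponents of the powers of two summing to m, largest first.

    For example, 11 = 2**3 + 2**1 + 2**0 yields (3, 1, 0).
    """
    return tuple(i for i in range(m.bit_length() - 1, -1, -1) if (m >> i) & 1)

def bitwise_representation(m):
    """0-1 vector (v^f, ..., v^1, v^0), with f the most significant set bit."""
    return tuple(int(b) for b in bin(m)[2:])

def hybrids_in_B(m):
    """Hybrid-vertex count i_1 + dim - 1 of B((m)) for a strictly simple (m)."""
    rep = binary_representation(m)
    return rep[0] + len(rep) - 1
```

For $m=15$, $9$, and $265$ this gives 6, 4, and 10 hybrid vertices, matching $h(B)$ in Figure 6, and for $m=47$ it gives 9, matching $h(B(\vec{m}))=9$ discussed above.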
To state them, we require further terminology. Suppose $m\geq 1$ is an integer and $\vec{v}_{m}$ is the bitwise representation of $m$. Then we denote by $p(m)$ the number of non-zero bits in $\vec{v}_{m}$ bar the first one. For example, if $m=6$ then $p(m)=1$. Furthermore, we denote the dimension of a vector $\vec{v}$ by $\dim(\vec{v})$. Armed with this, the construction of $B(\vec{m})$ from a simple ploidy profile $\vec{m}$ implies our first main result. ###### Theorem 4.3. Suppose that $\vec{m}=(m_{1},m_{2},\ldots,m_{n})$, $n\geq 1$, is a simple ploidy profile. Let $\vec{i}_{m_{1}}=(i_{1},i_{2},\ldots,i_{l})$, some $l\geq 1$, denote the binary representation of $m_{1}$. Then $b(\vec{m})=2(i_{1}+\dim(\vec{i}_{m_{1}})-1+n-1)+1=2(\dim(\vec{v}_{m_{1}})-1+p(m_{1})+n-1)+1$ Furthermore, $B(\vec{m})$ has $i_{1}+\dim(\vec{i}_{m_{1}})-1$ hybrid vertices. We remark in passing that in case $\vec{m}=(m)$ is strictly simple then any binary phylogenetic network $N$ that realizes $\vec{m}$ has $2h(N)+1$ vertices since $N$ has only one leaf and, so, the number of tree vertices of $N$ plus the root must equal its number of hybrid vertices. Note that in case $N$ is $B(\vec{m})$ then this also follows from Theorem 4.3 since $n=1$ and $i_{1}+\dim(\vec{i}_{m_{1}})-1$ is the number of hybrid vertices of $N$ and therefore also the number of tree vertices of $N$ plus the root. ## 5\. Realizing general ploidy profiles To help establish a formula for computing the hybrid number of a ploidy profile, we start by associating to a ploidy profile $\vec{m}$ on $X$ a binary phylogenetic network $N(\vec{m})$ on $X$ that realizes $\vec{m}$. This network is recursively obtained via a two-phase process which we present in the form of pseudo-code in Algorithms 1 (Phase I) and 2 (Phase II).
We next outline both phases and refer the reader to Figure 7 for an illustration of the three cases considered in Algorithm 2 and to Figure 8 for an illustration of the construction of $N(\vec{m})$ from the ploidy profile $\vec{m}=(12,6,6,5)$. The phylogenetic network $D$ in that figure is the phylogenetic network with which the construction of $N(\vec{m})$ is initialized. Suppose $\vec{m}=(m_{1},\ldots,m_{n})$ is a ploidy profile on $X$. Then, in Phase I, we iteratively generate a simple ploidy profile $\vec{m}_{t}$ from $\vec{m}$. This process is captured via a sequence $\sigma(\vec{m})$ of ploidy profiles which we call the simplification sequence for $\vec{m}$ and formally define as the output of Algorithm 1 when given $\vec{m}$ as input. The first element of $\sigma(\vec{m})$ is $\vec{m}$ and the last element is a simple ploidy profile which we call the terminal element of $\sigma(\vec{m})$ and denote by $\vec{m}_{t}$. We denote the number of elements of $\sigma(\vec{m})$ other than $\vec{m}$ by $s(\vec{m})$. Note that if $\vec{m}$ is a simple ploidy profile then $s(\vec{m})=0$ as $\vec{m}=\vec{m}_{t}$ holds in this case. Informally speaking, the purpose of $\sigma(\vec{m}):\vec{m}_{0}=\vec{m},\vec{m}_{1},\ldots,\vec{m}_{s(\vec{m})}=\vec{m}_{t}$ is to allow us to construct, for all $0\leq i\leq s(\vec{m})-1$, the network $N(\vec{m}_{i})$ from $N(\vec{m}_{i+1})$ by reusing $N(\vec{m}_{i+1})$ (or parts of it) as much as possible (see [7] for more on such sequences). To formally state Algorithm 1, we require further notation. Suppose $\vec{m}=(m_{1},\ldots,m_{n})$ is a ploidy profile on $X$. Then we denote for all $1\leq i\leq n$ the element of $X$ that indexes $m_{i}$ by $x(m_{i})$. Furthermore, for any non-empty sequence $\sigma$ and any $z$, we denote by $\sigma\cup\\{z\\}$ the sequence obtained by adding $z$ to the end of $\sigma$. Algorithm 1 The simplification sequence of a ploidy profile.
1:A ploidy profile $\vec{m}=(m_{1},m_{2},\ldots,m_{n})$ on $X=\\{x_{1},x_{2},\ldots,x_{n}\\}$, $n\geq 1$. 2:The simplification sequence $\sigma(\vec{m})$ for $\vec{m}$ and a set $X(\vec{m})$ that contains, for all ploidy profiles $\vec{m}^{\prime}$ in $\sigma(\vec{m})$, the set $X^{\prime}$ that indexes $\vec{m}^{\prime}$. 3:Put $\vec{m}_{0}\leftarrow\vec{m}$, $\sigma(\vec{m_{0}})\leftarrow\vec{m_{0}}$, $X_{0}\leftarrow X$, $X(\vec{m_{0}})\leftarrow\\{X_{0}\\}$, and $k\leftarrow n$. 4:if $\vec{m}_{0}$ is simple then 5: Return $\sigma(\vec{m}_{0})$ and $X(\vec{m}_{0})$. 6: while $\vec{m}=(m_{1},\ldots,m_{k})$ is not simple do 7: Put $\alpha=m_{1}-m_{2}$ and compute a ploidy profile $\vec{m^{\prime}}$ on a set $X^{\prime}$ as follows: 8: if $\alpha=0$ then 9: $\vec{m^{\prime}}=(m_{2},m_{3},\ldots,m_{k})$ and $X^{\prime}=\\{x(m_{2}),x(m_{3}),\ldots,x(m_{k})\\}$. 10: if $\alpha>m_{2}$ then 11: $\vec{m^{\prime}}=(\alpha,m_{2},m_{3},\ldots,m_{k})$ and $X^{\prime}=\\{x(\alpha),x(m_{2}),x(m_{3}),\ldots,x(m_{k})\\}$. 12: if $\alpha\leq m_{2}$ then 13: if there exists some $j\in\\{1,\ldots,k-1\\}$ so that $m_{j+1}<\alpha\leq m_{j}$ then 14: $\vec{m^{\prime}}=(m_{2},m_{3},\ldots,m_{j},\alpha,m_{j+1},\ldots,m_{k})$ and $X^{\prime}=\\{x(m_{2}),x(m_{3}),\ldots$, $x(m_{j}),x(\alpha),x(m_{j+1}),\ldots,x(m_{k})\\}$. 15: if $\alpha\leq m_{k}$ then 16: $\vec{m^{\prime}}=(m_{2},m_{3},\ldots,m_{k},\alpha)$ and $X^{\prime}=\\{x(m_{2}),x(m_{3}),\ldots,x(m_{k}),x(\alpha)\\}$. 17: Put $\sigma(\vec{m_{0}})\leftarrow\sigma(\vec{m_{0}})\cup\\{\vec{m^{\prime}}\\}$, $X(\vec{m_{0}})\leftarrow X(\vec{m_{0}})\cup\\{X^{\prime}\\}$, $k\leftarrow|X^{\prime}|$, and $\vec{m}\leftarrow\vec{m^{\prime}}$ and return to Line 6. 18: Return $\sigma(\vec{m}_{0})$ and $X(\vec{m}_{0})$. Phase II is concerned with generating the phylogenetic network $N(\vec{m})$ from the simplification sequence of $\vec{m}$ and the set $X(\vec{m})$ (for both see Phase I), and an attainment $\mathcal{A}(\vec{m}_{t})$ of $\vec{m}_{t}$.
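Restricted to the numerical components (the bookkeeping of the index sets $X^{\prime}$ is omitted), Phase I can be sketched as follows. This is our reading of the pseudo-code, with the final case of the loop taken as $\alpha\leq m_{k}$ so that, for instance, $(6,5)$ simplifies to $(5,1)$:

```python
def is_simple(profile):
    """A ploidy profile is simple if every component after the first is 1."""
    return all(m == 1 for m in profile[1:])

def simplify_step(profile):
    """One pass through the while-loop of Algorithm 1, components only."""
    m1, m2 = profile[0], profile[1]
    alpha = m1 - m2
    tail = profile[1:]  # (m2, ..., mk)
    if alpha == 0:
        return tail
    if alpha > m2:
        return (alpha,) + tail
    # alpha <= m2: insert alpha so the profile stays non-increasing,
    # appending it at the end when alpha <= m_k.
    for j in range(len(tail)):
        if j + 1 == len(tail) or tail[j + 1] < alpha <= tail[j]:
            return tail[: j + 1] + (alpha,) + tail[j + 1:]

def simplification_sequence(profile):
    """sigma(m): the profile itself followed by each simplification."""
    seq = [profile]
    while not is_simple(seq[-1]):
        seq.append(simplify_step(seq[-1]))
    return seq
```

On $\vec{m}=(12,6,6,5)$ this reproduces the simplification sequence of the worked example discussed later in the text, with $s(\vec{m})=4$, and it also matches the counts $s(\vec{m})=q-1$ and $s(\vec{m})=l+q-3$ of Proposition 5.3 on small instances.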
Note that in case an attainment for $\vec{m}_{t}$ is not known, we can always initialize the construction of $N(\vec{m})$ with $D(\vec{m}_{t})$ or $B(\vec{m}_{t})$. The number of hybrid vertices of the generated network in this case is an upper bound on $h(N(\vec{m}))$ and therefore also on the hybrid number of $\vec{m}$. To obtain $N(\vec{m})$, we use a trace-back through $\sigma(\vec{m})$ starting with $\vec{m}_{t}$. More precisely, assume that $\vec{m}_{i}=(m_{1},\ldots,m_{k})$, some $k\geq 2$, and $\vec{m}_{i+1}$ are two ploidy profiles in $\sigma(\vec{m})$, some $0\leq i\leq s(\vec{m})-1$. Then to obtain $N(\vec{m}_{i})$ from $N(\vec{m}_{i+1})$ we distinguish again between the cases that $\alpha:=m_{1}-m_{2}=0$, that $\alpha>m_{2}$, and that $\alpha\leq m_{2}$ (see Figure 7). Note that there might be non-equivalent attainments of $\vec{m}_{t}$ with which to initialize the construction of $N(\vec{m})$. Algorithm 2 The construction of the phylogenetic network $N(\vec{m})$ from a ploidy profile $\vec{m}$ and an attainment for $\vec{m}_{t}$. 1:A ploidy profile $\vec{m}$ on $X$, an attainment $\mathcal{A}(\vec{m}_{t})$ of $\vec{m}_{t}$, and the output of Algorithm 1. 2:The phylogenetic network $N(\vec{m})$ constructed from $\mathcal{A}(\vec{m}_{t})$. 3:Put $\vec{m}_{0}\leftarrow\vec{m}$, $\vec{m^{\prime}}\leftarrow\vec{m}_{t}$, and $N(\vec{m^{\prime}})\leftarrow\mathcal{A}(\vec{m}_{t})$. 4:if $\vec{m^{\prime}}=\vec{m}_{0}$ then 5: return $N(\vec{m^{\prime}})$. 6: while $\vec{m^{\prime}}\not=\vec{m}_{0}$ do 7: let $\vec{m}=(m_{1},\ldots,m_{l})$ denote the predecessor of $\vec{m^{\prime}}=(m_{1}^{\prime},\ldots,m_{k}^{\prime})$ in $\sigma(\vec{m_{0}})$, some $k$ and some $l$. Put $\alpha=m_{1}-m_{2}$ and construct the phylogenetic network $N(\vec{m})$ from $N(\vec{m^{\prime}})$ as follows. 8: if $\alpha=0$ then 9: for all $2\leq i\leq k$, relabel the leaf $x(m_{i}^{\prime})$ of $N(\vec{m^{\prime}})$ by $x(m_{i+1})$.
Replace the leaf $x(m_{1}^{\prime})$ of $N(\vec{m^{\prime}})$ by the cherry $\\{x(m_{1}),x(m_{2})\\}$. 10: if $\alpha>m_{2}$ then 11: for all $1\leq i\leq k$, relabel the leaf $x(m_{i}^{\prime})$ of $N(\vec{m^{\prime}})$ by $x(m_{i})$. Subdivide the incoming arcs of leaves $x(m_{1})$ and $x(m_{2})$ by vertices $u$ and $v$, respectively, and add the arc $(v,u)$. 12: if $\alpha\leq m_{2}$ then 13: let $j$ be such that $m_{j+1}<\alpha\leq m_{j}$. Subdivide the incoming arc of $x(m_{j}^{\prime})$ by a new vertex $v$ and replace $x(m_{1}^{\prime})$ by the cherry $\\{x(m_{1}),x(m_{2})\\}$. Subdivide the incoming arc of $x(m_{1})$ by a new vertex $u$. Add the arc $(v,u)$ and delete $x(m_{j}^{\prime})$ as well as its incoming arc $(v,x(m_{j}^{\prime}))$ (suppressing $v$ as $indeg(v)=1=outdeg(v)$ now holds). For all $2\leq i\leq j-1$, put $x(m_{i+1})\leftarrow x(m_{i}^{\prime})$ and, for all remaining $i$, put $x(m_{i})\leftarrow x(m_{i}^{\prime})$. 14: Put $\vec{m^{\prime}}\leftarrow\vec{m}$ and return to line 6. 15: Return $N(\vec{m})$. Figure 7. The three cases in the construction of the network $N(\vec{m})$ from a ploidy profile $\vec{m}=(m_{1},m_{2},\ldots,m_{n})$ considered in Algorithm 2. For $\alpha=m_{1}-m_{2}$, the case $\alpha=0$ is depicted in (i), the case $\alpha>m_{2}$ in (ii), and the case $\alpha\leq m_{2}$ in (iii). In (iii), the dashed arc and the vertex $x(m_{j}^{\prime})$ are deleted and the vertex $v$ is suppressed. In each case, the grey disk indicates the part of the phylogenetic network of no relevance to the discussion. To illustrate the construction of $N(\vec{m})$, consider the ploidy profile $\vec{m}=(12,6,6,5)$ on $X=\\{x_{1},\ldots,x_{4}\\}$. Then $\vec{m}$, $(6,6,6,5)$, $(6,6,5)$, $(6,5)$, $(5,1)$ is the simplification sequence $\sigma(\vec{m})$ associated to $\vec{m}$ because, by definition, the first element of $\sigma(\vec{m})$ is always $\vec{m}$. The ploidy profile $(5,1)$ is $\vec{m}_{t}$.
The phylogenetic network $D$ on $X^{\prime}=\\{x_{1},x_{2}\\}$ on the left of Figure 8 is an attainment of $\vec{m}_{t}$ in the form of $B(\vec{m}_{t})$. Initializing Algorithm 2 with $B(\vec{m}_{t})$ yields the phylogenetic network $N(\vec{m})$ on the right of that figure. Apart from the second arrow which is labelled $(6,5)\to(6,6,6,5)$ as it combines the steps $(6,5)\to(6,6,5)$ and $(6,6,5)\to(6,6,6,5)$, each arrow is labelled with the corresponding traceback step in $\sigma(\vec{m})$. Figure 8. The construction of $N(\vec{m})$ for the ploidy profile $\vec{m}=(12,6,6,5)$ on $X=\\{x_{1},x_{2},x_{3},x_{4}\\}$ where we have combined the steps $(6,5)\rightarrow(6,6,5)$ and $(6,6,5)\rightarrow(6,6,6,5)$ into the step $(6,5)\rightarrow(6,6,6,5)$. The leftmost network $D$ on $X^{\prime}=\\{x_{1},x_{2}\\}$ is an attainment of $\vec{m}_{t}=(5,1)$ in the form of $B(\vec{m}_{t})$ and initializes the construction of $N(\vec{m})$. The network $N(\vec{m}_{2})$ on $X^{\prime}$ realizes the ploidy profile $\vec{m}_{2}=(6,5)$ and the network $N(\vec{m}_{1})$ on $X$ realizes the ploidy profile $\vec{m}_{1}=(6,6,6,5)$. The rightmost network is $N(\vec{m})$. The arrow labels indicate how a ploidy profile in $\sigma(\vec{m})$ was obtained. For any attainment $\mathcal{A}(\vec{m}_{t})$ of the terminal element $\vec{m}_{t}$ of the simplification sequence $\sigma(\vec{m})$ of a ploidy profile $\vec{m}$ on $X$, the graph $N(\vec{m})$ is a phylogenetic network on $X$ that realizes $\vec{m}$. Also, at each step in the traceback through $\sigma(\vec{m})$ the number of vertices is increased by exactly two. Denoting the number of vertices of $N(\vec{m})$ by $n(\vec{m})$ and the number of vertices in a binary attainment $\mathcal{A}(\vec{m}_{t})$ of $\vec{m}_{t}$ by $a(\vec{m}_{t})$, we obtain our next result. ###### Lemma 5.1. Suppose $\vec{m}$ is a ploidy profile on $X$.
Then for any binary attainment of $\vec{m}_{t}$ used in the initialization of the construction of $N(\vec{m})$, we have that $N(\vec{m})$ is a binary phylogenetic network on $X$ that realizes $\vec{m}$. Furthermore, $n(\vec{m})=a(\vec{m}_{t})+2s(\vec{m})$. In combination with Theorem 4.3, it follows that $N(\vec{m})$ has at most $b(\vec{m}_{t})+2s(\vec{m})=2(i_{1}+\dim(\vec{i}_{m_{1}})+s(\vec{m})+l)-3$ vertices and also at most $i_{1}+\dim(\vec{i}_{m_{1}})-1+s(\vec{m})$ hybrid vertices where $\vec{m}_{t}=(m_{1},\ldots,m_{l})$, some $l\geq 1$, and $i_{1}$ is the first component in the binary representation $\vec{i}_{m_{1}}$ of $m_{1}$. Furthermore, we have ###### Proposition 5.2. Suppose $\vec{m}=(m_{1},\ldots,m_{n})$ is a ploidy profile on $X$ such that $B(\vec{m}_{t})$ is a binary attainment of $\vec{m}_{t}$. For all $1\leq k\leq n$, let $(i_{k,1},\ldots,i_{k,l_{k}})$ denote the binary representation of $m_{k}$, some $l_{k}\geq 1$. Then the following holds. 1. (i) $h(\vec{m})\leq\sum_{k=1}^{n}(i_{k,1}+l_{k}-1)$. In case $\vec{m}$ is simple, $h(\vec{m})=i_{1,1}+l_{1}-1$, which is sharp. 2. (ii) If $m_{k}=2^{i_{k,1}}$ holds for all $1\leq k\leq n$ then $h(\vec{m})=i_{1,1}$. ###### Proof. (i) To see the stated inequality, we construct a binary phylogenetic network $B$ on $X=\\{x_{1},\ldots,x_{n}\\}$ from $\vec{m}$ as follows. For all $1\leq k\leq n$, we first construct $B_{k}=B(\vec{m}_{k})$ where $\vec{m}_{k}$ is the strictly simple ploidy profile $(m_{k})$. Next, we add a new vertex $\rho$ and, for all $1\leq k\leq n$, an arc from $\rho$ to the root of $B_{k}$. If the resulting phylogenetic network on $X$ is binary then that network is $B$. Otherwise, $B$ is a phylogenetic network obtained by resolving $\rho$ so that $\rho$ has outdegree two.
By construction, $B$ realizes $\vec{m}$ because $B_{k}$ realizes $\vec{m}_{k}$, for all $1\leq k\leq n$. By Theorem 4.3, it follows that $h(B_{k})=i_{k,1}+l_{k}-1$. Thus, $h(\vec{m})\leq h(B)=\sum_{k=1}^{n}(i_{k,1}+l_{k}-1)$, as required. If $\vec{m}$ is simple then $m_{k}=1$, and so $B_{k}$ contains no hybrid vertex, for all $2\leq k\leq n$. Hence, $h(B)=h(B_{1})=i_{1,1}+l_{1}-1$. (ii) This is a straightforward consequence of (i) and the fact that in this case $B_{k}$ is the beaded tree $B(i_{k,1})$. ∎ Note that as the example of the ploidy profile $(k^{l},k)$ for some $l,k\geq 2$ shows, there exists an infinite family of ploidy profiles $\vec{m}$ for which the length of the simplification sequence for $\vec{m}$ is at least $k^{l-1}+1$ and therefore grows exponentially in $l$. As a consequence of this, we also have, for any attainment of $\vec{m}_{t}$, that the number of hybrid vertices in $N(\vec{m})$ can grow exponentially in $l$. In view of this, we next study simplification sequences for special types of ploidy profiles. To this end, we call an element $j\in\\{1,\ldots,n\\}$ the maximum index of a ploidy profile $\vec{m}=(m_{1},\ldots,m_{n})$, $n\geq 1$, if $m_{j}$ is the last component of $\vec{m}$ that is not one. ###### Proposition 5.3. Suppose $\vec{m}=(m_{1},\ldots,m_{n})$ is a ploidy profile on $X$. Let $q$ denote the maximum index of $\vec{m}$. Then the following holds. 1. (i) If $k\geq 2$ is an integer such that $m_{i}=k$ holds for all $1\leq i\leq q$ then $s(\vec{m})=q-1$. 2. (ii) If $k\geq 2$ and $l\geq q+2$ are integers such that $m_{i}=k(l-i)$ holds for all $1\leq i\leq q$ then $s(\vec{m})=l+q-3$. ###### Proof. Note first that for both statements, we may assume without loss of generality that $q=n$ since elements in $X$ with ploidy number one do not contribute to $s(\vec{m})$. (i): Since $m_{i}=m_{i+1}$ holds for all $1\leq i\leq n-1$, the difference in dimension between any two consecutive ploidy profiles in $\sigma(\vec{m})$ is one.
Hence, $q-1$ operations are needed to transform $\vec{m}$ into $\vec{m}_{t}$. Consequently, $s(\vec{m})=q-1$. (ii): Since $m_{i-1}-m_{i}=k$ holds for all $2\leq i\leq q$, it follows that $q-1$ operations are needed to transform $\vec{m}$ into a ploidy profile $\vec{m^{\prime}}$ of the form $(k(l-q),k,\ldots,k,1,\ldots,1)$ where the components after the last $k$ may or may not exist. To transform $\vec{m^{\prime}}$ into a ploidy profile $\vec{m^{\prime\prime}}$ of the form $(k,k,\ldots,k,1,\ldots,1)$ a further $l-q-1$ operations are needed. By Assertion (i), a further $q-1$ operations are needed to transform $\vec{m^{\prime\prime}}$ into a simple ploidy profile. Since $\sigma(\vec{m})$ is the concatenation of the underlying simplification sequences, it follows that $s(\vec{m})=q-1+l-q-1+q-1=q+l-3$. ∎ Together with Lemma 5.1, the next result may be viewed as the companion result of Lemmas 4.1 and 4.2 for general ploidy profiles. ###### Proposition 5.4. For any ploidy profile $\vec{m}$ on $X$ and any binary attainment of the terminal element in $\sigma(\vec{m})$, the graph $N(\vec{m})$ is a binary, semi-stable phylogenetic network on $X$ that realizes $\vec{m}$. ###### Proof. In view of Lemma 5.1, it suffices to show that $N(\vec{m})$ is semi-stable. Assume for contradiction that there exists a ploidy profile $\vec{m}=(m_{1},\ldots,m_{n})$ on $X$ such that $N(\vec{m})$ is not semi-stable. Since the construction of $N(\vec{m})$ is initialized with an attainment of the terminal element $\vec{m}_{t}$ of $\sigma(\vec{m}):\vec{m}_{0}=\vec{m},\vec{m}_{1},\ldots,\vec{m}_{l}=\vec{m}_{t}$, some $l\geq 0$, and since, by Lemma 3.1(ii), an attainment is semi-stable, there must exist some $0\leq i\leq l$ such that the network $N(\vec{m}_{i})$ is not semi-stable but all networks $N(\vec{m}_{j})$, $i+1\leq j\leq l$, are semi-stable. Without loss of generality, we may assume that $i=0$. Put $\vec{m^{\prime}}=\vec{m}_{1}$. We claim first that $m_{1}\not=m_{2}$.
Indeed, if $m_{1}=m_{2}$ then $\alpha=0$. Hence, Line 8 in Algorithm 2 is executed to obtain $N(\vec{m})$ from $N(\vec{m^{\prime}})$. Since, by assumption, $N(\vec{m})$ is not semi-stable, it follows that $N(\vec{m^{\prime}})$ is not semi-stable; a contradiction. Thus, $m_{1}\not=m_{2}$, as claimed. We next claim that $m_{1}>m_{2}$ cannot hold either. Assume for contradiction that $m_{1}>m_{2}$. Put $\alpha=m_{1}-m_{2}$. Assume first that $\alpha>m_{2}$. Then Line 10 in Algorithm 2 is executed to obtain $N(\vec{m})$ from $N(\vec{m^{\prime}})$. Since $N(\vec{m^{\prime}})$ is semi-stable, and this does not introduce an identifiable pair of vertices in $N(\vec{m})$, it follows that $N(\vec{m})$ is also semi-stable, which is impossible. So assume that $\alpha\leq m_{2}$. Then Line 12 in Algorithm 2 is executed to obtain $N(\vec{m})$ from $N(\vec{m^{\prime}})$. Similar arguments as in the previous two cases imply again a contradiction. This completes the proof of the claim. Thus, $m_{1}<m_{2}$ must hold. Consequently, $\vec{m}$ is not a ploidy profile; a contradiction. Thus, $N(\vec{m})$ must be semi-stable. ∎ ## 6\. The hybrid number of a ploidy profile In this section, we prove Theorem 6.1, which implies a closed formula for the hybrid number of a ploidy profile (Corollary 6.2). To help illustrate our theorem, we remark that for Line 10 in Algorithm 2 not to be executed we must have, for every element $\vec{m^{\prime}}=(m^{\prime}_{1},\ldots,m^{\prime}_{n^{\prime}})$, some $n^{\prime}\geq 2$, in the simplification sequence of $\vec{m}$, that $m_{1}^{\prime}>2m_{2}^{\prime}$ does not hold. ###### Theorem 6.1. Suppose $\vec{m}$ is a ploidy profile on $X$ such that, for every ploidy profile in $\sigma(\vec{m})$, Line 10 in Algorithm 2 is not executed. If $\mathcal{A}(\vec{m}_{t})$ is an attainment for $\vec{m}_{t}$ with which the construction of $N(\vec{m})$ is initialized then $N(\vec{m})$ is an attainment for $\vec{m}$. ###### Proof.
Put $\vec{m}=(m_{1},\ldots,m_{n})$ and assume that $\vec{m}$ is such that $\mathcal{A}(\vec{m}_{t})$ is an attainment of $\vec{m}_{t}$. Suppose $X=\\{x_{1},\ldots,x_{n}\\}$, $n\geq 1$. Note that we may assume that $n\geq 2$ as otherwise $\vec{m}$ is simple. Hence, $\vec{m}=\vec{m}_{t}$ and, so, the theorem follows by assumption on $\vec{m}_{t}$. Similar arguments as before imply that we may also assume that $\vec{m}$ is not simple. Assume for contradiction that $N(\vec{m})$ is not an attainment of $\vec{m}$. Let $Q$ denote an attainment of $\vec{m}$. Then $h(Q)<h(N(\vec{m}))$. In view of Proposition 3.2, there must exist a directed path $R$ in $F(U(Q))$ from the root $\rho$ of $F(U(Q))$ to $x_{1}$ that contains all hybrid vertices of $F(U(Q))$. Since $h(Q)=h(F(U(Q)))$ as $C(Q)$ and $F(U(Q))$ are equivalent by (R3), it follows that we may also assume that $Q$ is binary and that $R$ gives rise to a path $P$ from $\rho$ to $x_{1}$ that contains all hybrid vertices of $Q$. Since the construction of $N(\vec{m})$ is initialized with an attainment of $\vec{m}_{t}$, there must exist a ploidy profile $\overline{\vec{m}}$ in $\sigma(\vec{m})$ such that there exists a binary phylogenetic network $\overline{Q}$ that realizes $\overline{\vec{m}}$ and for which $h(\overline{Q})<h(N(\overline{\vec{m}}))$ holds. Without loss of generality, we may assume that $\overline{\vec{m}}$ is such that for all ploidy profiles $\vec{m^{\prime\prime}}$ succeeding $\overline{\vec{m}}$ in $\sigma(\vec{m})$ we have $h(N(\vec{m^{\prime\prime}}))\leq h(Q^{\prime\prime})$ for all binary phylogenetic networks $Q^{\prime\prime}$ that realize $\vec{m^{\prime\prime}}$. For ease of presentation, we may assume that $\overline{\vec{m}}=\vec{m}$. Put $\vec{m^{\prime}}=\vec{m}_{1}=(m_{1}^{\prime},\ldots,m_{l^{\prime}}^{\prime})$, some $l^{\prime}\geq 1$. Also, put $\alpha=m_{1}-m_{2}$, $N=N(\vec{m})$, and $N^{\prime}=N(\vec{m}^{\prime})$.
Since Line 10 in Algorithm 2 is not executed for any element in $\sigma(\vec{m})$, it follows that either $\alpha=0$ or that $\alpha\leq m_{2}$ since either Line 8 or Line 12 of that algorithm must be executed in a pass through the algorithm’s while loop. Case (a): Assume that $\alpha=0$. Let $x_{1}=x(m_{1})$ and $x_{2}=x(m_{2})$ as in Line 9 in Algorithm 2. Let $2\leq r\leq n$ be such that $m_{1}=m_{r}$ holds. By the minimality of $h(Q)$, it follows that the induced subgraph $T$ of $Q$ connecting the elements in $X_{1}=\\{x_{1},\ldots,x_{r}\\}$ must be a phylogenetic tree on $X_{1}$ where, for all $3\leq j\leq r$, we put $x_{j}=x(m_{j})$. Subject to potentially having to relabel the leaves of $T$, we may assume that $\\{x_{1},x_{2}\\}$ is a cherry in $T$. Since $\alpha=0$, the directed acyclic graph $Q^{\prime}$ obtained from $Q$ by deleting $x_{1}$ and its incoming arc (suppressing resulting vertices of indegree and outdegree one) and renaming $x_{i+1}$ by $x(m_{i}^{\prime})$, for all $1\leq i\leq n-1$, is a phylogenetic network on $\\{x(m_{1}^{\prime}),\ldots,x(m^{\prime}_{n-1})\\}$. Clearly, $Q^{\prime}$ realizes $\vec{m^{\prime}}$ since $Q$ realizes $\vec{m}$. By assumption on $\vec{m}$, it follows that $N^{\prime}$ is an attainment of $\vec{m^{\prime}}$. Hence, $h(N^{\prime})\leq h(Q^{\prime})$. Since $N$ is obtained from $N^{\prime}$ by executing Line 8 in Algorithm 2, it follows that $h(Q)<h(N)=h(N^{\prime})\leq h(Q^{\prime})=h(Q)$ because $T$ is a tree; a contradiction. Consequently, $N$ must attain $\vec{m}$ in this case. Case (b): Assume that $\alpha\leq m_{2}$. Let $j$, $x_{1}$, and $x_{2}$ be as in Line 13 in Algorithm 2. We start by analyzing the structure of $Q$ with regards to $x_{1}$ and $x_{2}$. To this end, note first that $m_{2}\geq 2$ must hold since otherwise $\vec{m}$ is simple and the theorem follows in view of our observation at the beginning of the proof.
By assumption on $Q$, there must exist a hybrid vertex $h$ on $P$ such that there is a directed path $P_{h}$ from $h$ to $x_{2}$ because $m_{2}\geq 2$. Without loss of generality, we may assume that $h$ is such that every vertex on $P_{h}$ other than $h$ is either a tree vertex or a leaf of $Q$. Let $t$ be the last vertex on $P$ that is also contained in $P_{h}$. We next transform $Q$ into a new phylogenetic network $Q^{\prime\prime}$ that is an attainment of $\vec{m^{\prime}}$ (see Figure 9 for an illustration).

Figure 9. The transformation of $Q$ (i) into the phylogenetic networks $Q^{\prime}$ (ii) and $Q^{\prime\prime}$ (iii) as described in Case (b) of Theorem 6.1 for $p_{1}\not=p_{2}$. In each case, the dashed lines indicate paths. Note that in (iii) the dashed line could also start at $\rho_{Q}$.

To do this, note first that since $m_{2}\not=m_{1}$ there must exist a hybrid vertex on $P$ below $t$. We modify $Q$ as follows to obtain a further attainment $Q^{\prime}$ of $\vec{m}$. If $t$ is the parent of $x_{2}$ then $Q^{\prime}$ is $Q$. So assume that $t$ is not the parent of $x_{2}$. Then we delete the subtree $T$ of $Q$ that is rooted at the child of $t$ not contained in $P$. Note that $T$ must have at least two leaves. Next, we subdivide the incoming arc of $t$ by $|L(T)|-1$ subdivision vertices. To each created subdivision vertex we add an arc and bijectively label the heads of these arcs by the elements in $L(T)-\\{x_{2}\\}$. Next, we add an arc to $t$ and label its head by $x_{2}$ so that $t$ is now the parent of $x_{2}$. By construction, $Q^{\prime}$ is a phylogenetic network on $X$ that attains $\vec{m}$ because $h(Q)=h(Q^{\prime})$. Let $h^{*}$ be a hybrid vertex on the subpath $P^{*}$ of $P$ from $t$ to $x_{1}$ so that no vertex strictly below $h^{*}$ is a hybrid vertex of $Q^{\prime}$. Let $a_{1}^{*}$ denote the incoming arc of $h^{*}$ that lies on $P^{*}$. Furthermore, let $a_{2}^{*}$ denote the incoming arc of $h^{*}$ that does not lie on $P^{*}$. 
For $i=1,2$, let $p_{i}$ denote the tail of $a_{i}^{*}$. Note that $p_{1}=p_{2}$ might hold. Also note that the assumptions on $Q$ imply that $p_{1}$ must be below $t$. Finally, note that $p_{1}$ must be a hybrid vertex unless $p_{1}=p_{2}$. We claim that if $p_{1}\not=p_{2}$ then any vertex $v$ on $P^{*}$ other than $t$ and $x_{1}$ must be a hybrid vertex. Assume for contradiction that there exists a vertex $v\not\in\\{t,x_{1}\\}$ on $P^{*}$ that is a tree vertex. We show first that $p_{2}$ must also be below $t$. Since all hybrid vertices of $Q$ lie on $P$, it follows that $v$ contributes at least $2m_{2}$ to the number of directed paths from $\rho$ to $x_{1}$, as $m_{2}$ is the number of directed paths from $\rho$ to $x_{2}$ and therefore also from $\rho$ to $t$. Since $h^{*}$ contributes at least one further directed path from $\rho$ to $x_{1}$ in case $p_{2}$ is not below $t$, it follows that $m_{1}\geq\beta+2m_{2}$ for some $\beta\geq 1$. Hence, $m_{2}\geq\alpha=m_{1}-m_{2}\geq\beta+2m_{2}-m_{2}\geq m_{2}$ because $\beta\geq 1$. Thus, $m_{2}=\beta+m_{2}$; a contradiction as $\beta\geq 1$. Hence, $p_{2}$ must also be below $t$, as required. We next show that $p_{2}$ must be a vertex on $P^{*}$. Indeed, if $p_{2}$ were not a vertex of $P^{*}$ then it cannot be a hybrid vertex in view of our assumptions on $Q$. Thus, $p_{2}$ must be a tree vertex in this case. Since $p_{1}\not=p_{2}$, we obtain a contradiction as the choice of $h^{*}$ implies that $h^{*}$ is the parent of $x_{1}$. Thus, $p_{2}$ must be a vertex of $P^{*}$, as required. Since $p_{2}$ is a tree vertex, it contributes at least $2m_{2}$ directed paths from $\rho$ to $x_{1}$. Since $p_{1}$ contributes at least a further $m_{2}$ directed paths from $\rho$ to $x_{1}$, we obtain a contradiction using similar arguments as before. Thus, any vertex on $P^{*}$ other than $t$ and $x_{1}$ must be a hybrid vertex in case $p_{1}\not=p_{2}$, as claimed. 
We claim that if $p_{1}=p_{2}$ then $P^{*}$ has precisely 4 vertices and there exist two arcs from $p_{1}$ to $h^{*}$. To see this claim, note that $p_{1}$ contributes at least $2m_{2}$ directed paths from $\rho$ to $x_{1}$ because it is a tree vertex. If there existed a vertex $v$ on $P^{*}$ distinct from $x_{1}$, $h^{*}$, $p_{1}$, and $t$, then $v$ would contribute at least $m_{2}$ further directed paths from $\rho$ to $x_{1}$. Thus, we have again at least $3m_{2}$ directed paths from $\rho$ to $x_{1}$. Similar arguments as in the previous claim yield again a contradiction. By the choice of $h^{*}$, it follows that $t$, $p_{1}$, $h^{*}$, and $x_{1}$ are the only vertices on $P^{*}$. Since $p_{1}$ and $p_{2}$ are the parents of $h^{*}$ and $p_{1}=p_{2}$, it follows that there are two parallel arcs from $p_{1}$ to $h^{*}$. This concludes the proof of our second claim. Bearing in mind the previous two claims, we next transform $Q^{\prime}$ into a new phylogenetic network $Q^{\prime\prime}$ on $X$ as follows. If $p_{1}\not=p_{2}$ then we first delete $a_{2}^{*}$ from $Q^{\prime}$ and add an arc from $p_{2}$ to the child $t_{1}$ of $t$ on $P^{*}$. Next, we remove the arc $(t,t_{1})$ and suppress $h^{*}$ and $t$ as they are now vertices with indegree one and outdegree one. The resulting directed acyclic graph is $Q^{\prime\prime}$. By construction, $Q^{\prime\prime}$ is clearly a phylogenetic network on $X$. Furthermore, the construction combined with our two claims implies that $Q^{\prime\prime}$ realizes $\vec{m}^{\prime}$ because the arc $(t,t_{1})$ contributes $m_{2}$ directed paths from $\rho$ to $x_{1}$ in $Q$ and therefore also in $Q^{\prime}$. By construction, $h(Q^{\prime\prime})=h(Q^{\prime})-1=h(Q)-1$. Furthermore, $h(N)=h(N^{\prime})+1$ by the construction of $N$ from $N^{\prime}$. 
By the minimality of $h(Q)$ and the choice of $\vec{m}$, it follows that $h(Q)<h(N)=h(N^{\prime})+1\leq h(Q^{\prime\prime})+1=h(Q)$; a contradiction. This concludes the proof of the theorem in case $p_{1}\not=p_{2}$. If $p_{1}=p_{2}$ then we delete one of the two parallel arcs from $p_{1}$ to $h^{*}$ and suppress $p_{1}$ and $h^{*}$ as this has rendered them vertices of indegree one and outdegree one. The resulting directed acyclic graph is $Q^{\prime\prime}$ in this case. As before, $Q^{\prime\prime}$ is a phylogenetic network that, in view of our second claim, realizes $\vec{m^{\prime}}$. Similar arguments as in the case that $p_{1}\not=p_{2}$ yield again a contradiction. This concludes the proof of the theorem in this case, and therefore the proof of the theorem. ∎ To illustrate Theorem 6.1, note that the ploidy profile $\vec{m}=(12,6,6,5)$ in Figure 1 satisfies the assumptions of Theorem 6.1. Consequently, the phylogenetic network $N(\vec{m})$ depicted in that figure is an attainment of $\vec{m}$. As the example depicted in Figure 10 indicates, the assumption that Line 10 in Algorithm 2 is not executed is necessary for Theorem 6.1 to hold. In fact, if $\vec{m}$ is a ploidy profile such that $N(\vec{m})$ contains the subgraph highlighted by the dashed rectangle in the network in Figure 10, then $N(\vec{m})$ cannot in general be an attainment of $\vec{m}$.

Figure 10. (i) The phylogenetic network $N(\vec{m})$ for the ploidy profile $\vec{m}=(8,2)$ on $X=\\{a,b\\}$ obtained via Algorithms 1 and 2. (ii) A phylogenetic network on $X$ that attains $\vec{m}$ and has fewer hybrid vertices than $N(\vec{m})$.

Theorem 6.1 and Case (b) in its proof, combined with Theorem 4.3 and Proposition 5.2, imply our next result, since $l-1$ additional hybrid vertices are inserted into $B(i_{1})$ to obtain $B(\vec{m})$, where $\vec{m}$ is a simple ploidy profile and $(i_{1},\ldots,i_{l})$, $l\geq 1$, is the binary representation of the first component of $\vec{m}$. 
To state it we require a further definition. Let $\vec{m},\vec{m}_{1},\ldots,\vec{m}_{i}=(m_{1,i},\ldots,m_{p_{i},i}),\ldots,\vec{m}_{t}$ denote the simplification sequence of a ploidy profile $\vec{m}$. Then we denote by $c(s(\vec{m}))$ the number of steps in $\sigma(\vec{m})$ for which $m_{1,i}>m_{2,i}$ holds, where $0\leq i\leq s(\vec{m})$ and $p_{i}\geq 2$.

###### Corollary 6.2.

Suppose $\vec{m}$ is a ploidy profile such that Line 10 in Algorithm 1 is not executed when constructing $\sigma(\vec{m})$. Then $h(\vec{m})=h(\vec{m}_{t})+c(s(\vec{m}))$. If $B(\vec{m}_{t})$ is an attainment of $\vec{m}_{t}$ and $(i_{1},\ldots,i_{l})$ is the binary representation of the first component of $\vec{m}_{t}$, some $l\geq 1$, then $h(\vec{m})=i_{1}+l-1+c(s(\vec{m}))$.

## 7\. A Viola dataset

In this section, we turn our attention to computing the hybrid number of the ploidy profile of a Viola dataset that appeared in more general form in [17]. Denoting that dataset by $X$, the authors of [17] constructed a MUL-tree $M$ on $X$ and then used the PADRE software [12] to derive a phylogenetic network $N$ to help them shed light on the evolutionary past of their Viola species [17, Figure 4]. We depict a simplified network $N^{\prime}$ representing that past in Figure 11(i), the only difference being that we have removed species that are not below a hybrid vertex of $N$ as they do not contribute to the number of hybrid vertices of $N$. If more than one species were below a hybrid vertex of $N$, then we have also randomly removed all but one of them, thereby ensuring that the hybrid vertex is still present in $N^{\prime}$. 
The resulting simplified dataset comprises the taxa $x_{1}=$ V.langsdorffii, $x_{2}=$ V.tracheliifolia, $x_{3}=$ V.grahamii, $x_{4}=$ V.721palustris, $x_{5}=$ V.blanda, $x_{6}=$ V.933palustris, $x_{7}=$ V.glabella, $x_{8}=$ V.macloskeyi, $x_{9}=$ V.repens, $x_{10}=$ V.verecunda, $x_{11}=$ Viola, and $x_{12}=$ Rubellium (see [7] for more details on the simplified dataset). The labels of the internal vertices of $N^{\prime}$ represent the ploidy number of the ancestral species represented by that vertex, where we canonically extend the concept of a ploidy profile to the interior vertices of a phylogenetic network. It is easy to check that $h(N^{\prime})=9$. By counting directed paths from the root to the leaves of $N^{\prime}$, we obtain the ploidy profile $\vec{m}=(9,7,7,4,4,4,2,2,2,2,2,1)$ on $X$. Note that, since the root is diploid (labelled $2\times$), multiplying each component of $\vec{m}$ by two results in the ploidy numbers induced by the hybrid vertices in the network. The simplification sequence for $\vec{m}$ contains twelve elements and $\vec{m}_{t}=(2,1,1,1)$. Since an attainment of $\vec{m}_{t}$ must have one hybrid vertex, and $D(\vec{m}_{t})$ and $B(\vec{m}_{t})$ are equal and have one hybrid vertex each, it follows that $B(\vec{m}_{t})$ is an attainment of $\vec{m}_{t}$. The phylogenetic network $N(\vec{m})$ obtained by initializing Algorithm 2 with $B(\vec{m}_{t})$ is depicted in Figure 11(ii). Since Line 10 of that algorithm is not executed at any stage in the construction of $N(\vec{m})$, it follows by Theorem 6.1 that $N(\vec{m})$ is an attainment of $\vec{m}$. Counting again directed paths from the root to each leaf, it is easy to check that $N(\vec{m})$ realizes $\vec{m}$; since $N(\vec{m})$ has five hybrid vertices, it follows that $h(\vec{m})=5$. 
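The path-counting checks used above (reading off the ploidy profile realized by a network, and counting its hybrid vertices) are easy to mechanize. A minimal Python sketch, with the network encoded as a hypothetical arc list, might look as follows:

```python
def path_multiplicities(arcs, root, leaves):
    # Number of directed paths from the root to each leaf: the number of
    # paths reaching a vertex is the sum over its parents.
    parents = {}
    for u, v in arcs:
        parents.setdefault(v, []).append(u)
    memo = {root: 1}
    def npaths(v):
        if v not in memo:
            memo[v] = sum(npaths(u) for u in parents.get(v, []))
        return memo[v]
    # Return the multiplicities in non-increasing order, as in a ploidy profile.
    return sorted((npaths(x) for x in leaves), reverse=True)

def num_hybrid_vertices(arcs):
    # Hybrid vertices are exactly the vertices with indegree at least two.
    indeg = {}
    for _, v in arcs:
        indeg[v] = indeg.get(v, 0) + 1
    return sum(1 for d in indeg.values() if d >= 2)
```

For example, for a small network with root `r`, tree vertices `u` and `v`, a hybrid vertex `h` below both, and leaves `x` (below `h`) and `y` (below `u`), `path_multiplicities` returns the ploidy profile $(2,1)$ and `num_hybrid_vertices` returns 1.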
To compute the hybrid number of a ploidy profile whose components are small enough that an attainment of its terminal element can be found, we refer the interested reader to our R function ‘ploidy profile hybrid number bound’ (PPHNB), which is obtainable from [1].

Figure 11. A phylogenetic network on leaf set $X=\\{$V.langsdorffii, V.tracheliifolia, V.grahamii, V.721palustris, V.blanda, V.933palustris, V.glabella, V.macloskeyi, V.repens, V.verecunda, Viola, Rubellium$\\}$ adapted from a more general phylogenetic network that appeared as Figure 4 in [17]. Hybrid vertices are indicated with a filled circle and labelled by their corresponding ploidy number, i.e., the number of directed paths from the root to the vertex times two, because the root is assumed to be diploid. Leaves are labelled by the first two characters of their names (omitting ‘V.’, where applicable).

## 8\. Discussion

Motivated by the signal left behind by polyploidization, we have introduced and studied the problem of computing the hybrid number $h(\vec{m})$ of a ploidy profile $\vec{m}$. Our arguments apply, however, to any type of dataset that induces a multiplicity vector. Although stated within a phylogenetics context, the underlying optimization problem is, at its heart, a natural mathematical problem: “Given a multiplicity vector $\vec{m}$, find a rooted, leaf-labelled, directed acyclic graph $G$ so that $\vec{m}$ is the path-multiplicity vector of $G$ and the cyclomatic number of $G$ is minimum”. Our results might therefore also be of relevance beyond phylogenetics. Using the framework of a phylogenetic network, we provide a construction of a phylogenetic network $N(\vec{m})$ that is guaranteed to attain a ploidy profile $\vec{m}$ for a large class of ploidy profiles, provided the construction of $N(\vec{m})$ is initialized with an attainment $\mathcal{A}(\vec{m}_{t})$ of the terminal element $\vec{m}_{t}$ of the simplification sequence $\sigma(\vec{m})$ associated to $\vec{m}$. 
Members of that class include the ploidy profiles described in Proposition 5.3(ii). As a consequence, we obtain an exact formula for the hybrid number of $\vec{m}$ and also for the size of the vertex set of $N(\vec{m})$ in terms of the length $s(\vec{m})$ of $\sigma(\vec{m})$ and the number $a(\vec{m}_{t})$ of vertices of $\mathcal{A}(\vec{m}_{t})$ for the members of our class. In case the ploidy numbers that make up $\vec{m}$ are not too large, both $c(s(\vec{m}))$ and $a(\vec{m}_{t})$ can be computed easily, by computing $\sigma(\vec{m})$ to obtain $c(s(\vec{m}))$ and by using, for example, an exhaustive search for $a(\vec{m}_{t})$. Having said this, we also present an infinite family of ploidy profiles $\vec{m}$ for which $\sigma(\vec{m})$ grows exponentially. Motivated by this, we provide a bound for $h(\vec{m})$ and show that that bound is sharp for certain types of ploidy profiles. To help demonstrate the applicability of our approach, we compute the hybrid number of a simplified version of a Viola dataset that appeared in more general form in [17]. Our result suggests that the authors of [17] may have overestimated the number of polyploidization events that gave rise to their dataset. Despite these encouraging results, numerous questions that might merit further research remain. These include “What can be said about $h(\vec{m})$ if the ploidy profile $\vec{m}$ is not a member of our class?”, and “Can we shed more light on the length of $\sigma(\vec{m})$ and also on attainments of the terminal element of $\sigma(\vec{m})$?”. Looking a little further afield, it might also be of interest to explore the relationship between so-called accumulation phylogenies introduced in [2] and ploidy profiles, and also the relationship between ploidy profiles and ancestral profiles introduced in [21].

Acknowledgment. We thank the anonymous referees for their constructive comments to improve earlier versions of the paper. 
## References

* [1] https://github.com/lmaher1/ploidy-profile-hybrid-number.
* [2] M. Baroni and M. Steel. Accumulation phylogenies. Annals of Combinatorics, 10:19–30, 2006.
* [3] M. Bordewich and C. Semple. Computing the minimum number of hybridization events for a consistent evolutionary history. Discrete Applied Mathematics, 155(8):914–928, 2007.
* [4] G. Cardona, M. Llabres, F. Rossello, and G. Valiente. A distance metric for a class of tree-sibling phylogenetic networks. Bioinformatics, 24(13):1481–1488, 2008.
* [5] D. Gusfield. ReCombinatorics: The Algorithmics of Ancestral Recombination Graphs and Explicit Phylogenetic Networks. MIT Press, 2014.
* [6] K.T. Huber, S. Linz, and V. Moulton. The rigid hybrid number for two phylogenetic trees. Journal of Mathematical Biology, 82(40), 2021.
* [7] K.T. Huber and L.J. Maher. Autopolyploidy, allopolyploidy, and phylogenetic networks with horizontal arcs. Submitted, 2022.
* [8] K.T. Huber and V. Moulton. Phylogenetic networks from multi-labelled trees. Journal of Mathematical Biology, 52:613–632, 2006.
* [9] K.T. Huber and V. Moulton. Encoding and constructing 1-nested phylogenetic networks with trinets. Algorithmica, 66:714–738, 2013.
* [10] K.T. Huber, V. Moulton, A. Spillner, S. Storandt, and R. Suchecki. Computing a consensus of multilabeled trees. Proceedings of the Workshop on Algorithm Engineering and Experiments, pages 84–92, 2012.
* [11] K.T. Huber, V. Moulton, M. Steel, and T. Wu. Folding and unfolding phylogenetic trees and networks. Journal of Mathematical Biology, 73(6-7):1761–1780, 2016.
* [12] K.T. Huber, B. Oxelman, M. Lott, and V. Moulton. Reconstructing the evolutionary history of polyploids from multilabeled trees. Molecular Biology and Evolution, 23:1784–1791, 2006.
* [13] K.T. Huber and G.E. Scholz. Phylogenetic networks that are their own fold-ups. Advances in Applied Mathematics, 113:101959, 2020.
* [14] D. Huson, R. Rupp, and C. Scornavacca. Phylogenetic Networks. Cambridge University Press, 2010.
* [15] G. Jones, S. Sagitov, and B. Oxelman. Statistical inference of allopolyploid species networks in the presence of incomplete lineage sorting. Systematic Biology, 62:467–478, 2013.
* [16] T. Marcussen, L. Heier, A.K. Brysting, B. Oxelman, and K.S. Jakobsen. From gene trees to a dated allopolyploid network: Insights from the angiosperm genus Viola (Violaceae). Systematic Biology, 64:84–101, 2015.
* [17] T. Marcussen, K.S. Jakobsen, J. Danihelka, H.E. Ballard, K. Blaxland, A.K. Brysting, and B. Oxelman. Inferring species networks from gene trees in high-polyploid North American and Hawaiian violets (Viola, Violaceae). Systematic Biology, 61:107–126, 2012.
* [18] C. McDiarmid, C. Semple, and D. Welsh. Counting phylogenetic networks. Annals of Combinatorics, 19:205–224, 2015.
* [19] C. Oberprieler, F. Wagner, S. Tomasello, and K. Konowalik. A permutation approach for inferring species networks from gene trees in polyploid complexes by minimizing deep coalescences. Methods in Ecology and Evolution, 8:835–849, 2017.
* [20] M. Ownbey. Natural hybridization and amphiploidy in the genus Tragopogon. American Journal of Botany, 37:487–499, 1950.
* [21] P.L. Erdos, C. Semple, and M. Steel. A class of phylogenetic networks reconstructable from ancestral profiles. Mathematical Biosciences, 313:33–40, 2019.
* [22] P.D. Blischak, C.E.P. Thompson, E.M. Waight, L. Kubatko, and A. Wolfe. Inferring patterns of hybridization and polyploidy in the plant genus Penstemon (Plantaginaceae). BioRxiv, 2020.
* [23] M. Steel. Phylogeny: Discrete and Random Processes in Evolution. Society for Industrial and Applied Mathematics, 2016.
* [24] L. van Iersel, R. Janssen, M. Jones, Y. Murakami, and N. Zeh. Polynomial-time algorithms for phylogenetic inference problems involving duplication and reticulation. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 17:14–26, 2020.
* [25] L. van Iersel and S. Kelk. Counting the simplest phylogenetic networks from triplets. Algorithmica, 60:207–235, 2011.
* [26] F. Varoquaux, R. Blanvillain, M. Delseny, and P. Gallois. Less is better: new approaches for seedless fruit production. Trends in Biotechnology, 18:233–242, 2000.
# TOWARDS UNIVERSAL PHYSICAL ATTACKS ON CASCADED CAMERA-LIDAR 3D OBJECT DETECTION MODELS

###### Abstract

We propose a universal and physically realizable adversarial attack on a cascaded multi-modal deep neural network (DNN), in the context of self-driving cars. DNNs have achieved high performance in 3D object detection, but they are known to be vulnerable to adversarial attacks. These attacks have been heavily investigated in the RGB image domain and more recently in the point cloud domain, but rarely in both domains simultaneously - a gap to be filled in this paper. We use a single 3D mesh and differentiable rendering to explore how perturbing the mesh’s geometry and texture can reduce the robustness of DNNs to adversarial attacks. We attack a prominent cascaded multi-modal DNN, the Frustum-Pointnet model. Using the popular KITTI benchmark, we show that the proposed universal multi-modal attack was successful in reducing the model’s ability to detect a car by nearly 73%. This work can aid in the understanding of what the cascaded RGB-point cloud DNN learns and its vulnerability to adversarial attacks.

Index Terms— Adversarial attacks, cascaded multi-modal, 3D object detection, point cloud, deep learning.

## 1 Introduction

Most modern autonomous vehicles (AVs) employ cameras and LiDAR sensors to generate a complementary representation of the scene in the form of dense 2D RGB images and sparse 3D point clouds. Using these data, deep neural networks (DNNs) have achieved state-of-the-art performance in 3D object detection. Such DNNs are however vulnerable to adversarial attacks (mainly in the image domain) that slightly alter the input but greatly affect the output. As safety is critical for self-driving cars, this paper investigates the vulnerabilities of cascaded multi-modal DNNs in 3D car detection.

Fig. 1: Illustrative example: Left: A car is detected with accurate bounding boxes. 
Right: The same car is not detected after adding an adversarial object with learned geometry and texture. The adversarial texture fools the 2D RGB detection pipeline, and the green points are the rendered LiDAR points added to the point cloud to fool the 3D point cloud pipeline.

Previous works demonstrated the vulnerability of DNNs when the input is either an image [1, 2] or a point cloud [3, 4]. Most point cloud attacks simply alter or add individual points to the point cloud, and are not physically realizable due to properties of LiDAR sensors. More recently, a physically realizable adversarial attack on point clouds was made using a perturbed mesh that is placed in the 3D scene and rendered differentiably by simulating a LiDAR [5, 6]. Existing attacks either target a single modality or are not physically realizable, limiting their applicability to real-world scenarios. Camera-LiDAR multi-modal 3D object detectors can be divided into cascaded or fusion models. Cascaded models usually use an off-the-shelf 2D image detector to generate proposals that limit the 3D search space [7, 8, 9]. The points within the proposed regions are then fed into a 3D object detector for classification and bounding box regression. Fusion models extract and combine image and point cloud features in parallel using DNNs, and then a combined representation is sent to a detector for bounding box regression [10, 11].

Fig. 2: The proposed attack pipeline on the cascaded F-PN model [7]. Backpropagation is represented by a dashed red line.

This paper focuses on cascaded models because they are less computationally expensive and more interpretable than fusion models. They are therefore easier to train and deploy, especially as they also utilize mature 2D detection DNNs. While fusion models are currently less studied, cascaded models are closer to industry deployment. 
Moreover, cascaded models have an implicit bias that an object must reside in the limited post-proposal 3D point cloud, so previous point cloud attacks could face difficulty in attacking these models. Nonetheless, our proposed attack can extend easily to fusion models and other RGB-point cloud multi-modal DNNs. We propose a novel, universal, and physically realizable adversarial attack against a cascaded 3D car detection DNN. It attacks both the image and the point cloud. An adversarial 3D object is placed on top of a car in a 3D scene and then rendered to both the point cloud and the corresponding RGB image by differentiable renderers. The shape and texture of the object are trainable parameters that are manipulated adversarially. To the best of our knowledge, this is the first work that explores physically realizable multi-modal adversarial attacks on cascaded 3D object detection DNNs. This topic is critical because most self-driving vehicles now have LiDAR and camera sensors, and many advanced 3D detection models are multi-modal. Since the study of adversarial attacks on multi-modal DNNs has so far been rare, this paper takes a step towards addressing this research gap. Based on evaluations on the KITTI dataset, our multi-modal attack reduced the average precision of car detection from 86% to 28% under easy conditions.

## 2 Related Work

Adversarial attacks are heavily explored in the image domain [12, 1, 2], starting with digital pixel-wise adversarial perturbations that were not realistic in the physical world [1]. Then constraints were established to ensure the physical feasibility of an attack, and so patches [12] and sunglasses [2] were proposed to attack a model. Universal attacks, where an attack is not trained on just a single image, were also studied [13]. 
Recently, differentiable renderers were used to make adversarial attacks by altering the 3D geometry of an object or its lighting conditions and then rendering it to images for an adversarial attack on 2D detection DNNs [14, 15]. Our attack is very different from the aforementioned work: we aim to learn not a patch or a pixel value, but an adversarial texture on a 3D object that can attack a model even when viewed from many angles. Moreover, we also target the point cloud. The dense representation of RGB images made image attacks easier compared to point cloud attacks, where the representation is very sparse. In the context of AVs, even a physically realistic attack on RGB images does nothing with regard to a LiDAR sensor, which is mainly concerned with 3D geometry. Early work on adversarial attacks on point cloud DNNs focused on perturbing point clouds by slightly moving, adding, or removing a few points [3, 4]. These perturbations are largely theoretical, i.e., there is no guarantee that they can be realized physically. Also, these attacks were largely made on relatively dense point clouds that represent shapes and small objects, unlike the very sparse nature of AVs’ point clouds. To tackle the physical feasibility of a point cloud attack, [5] used an adversarial 3D mesh that is placed somewhere in a 3D scene around an AV. However, it was trained only on a single example from an in-house dataset, so it was not universal, and the attack was not evaluated on a publicly available benchmark. To remedy these shortcomings, [6] proposed a universal attack on LiDAR using a mesh that is placed on top of a car, with the objective of minimizing the detection score. The attack was evaluated on different point cloud DNNs using the KITTI benchmark. All these approaches focus only on point cloud detection, while an AV also has a camera sensor. 
## 3 Methods

We plan to learn a single adversarial object that is placed on top of a car in a 3D scene, with the goal that this adversarial object can significantly reduce the accuracy of a cascaded multi-modal DNN for 3D object detection. This object is rendered to point cloud and RGB image as if it were present in the original scene (see Fig. 2). The consequences of such an attack are especially dangerous, because if those two main modalities are attacked, cars have very little recourse. Here we describe how this object is learned, the rendering methods, and the victim multi-modal DNN.

### 3.1 Attack method

To make a realistic adversarial attack on multi-modal DNNs that can target both camera and LiDAR modalities, we require a representation that can maintain realistic 3D physical geometry and can also be differentiably rendered to RGB images and LiDAR. We choose a mesh for our adversarial object, since it provides an efficient representation of a 3D object and can be rendered to images or point clouds. This adversarial mesh is placed on top of a vehicle to avoid occlusion, and rotated around the vertical axis to have the same orientation as the vehicle. To train the shape of the object, following previous work [5, 6], we start with an initial mesh $S$ with $V$ vertices, where each initial vertex is defined as $v_{i}^{0}\in\mathbb{R}^{3}$, $i\in\\{1,2,\ldots,V\\}$. We deform the shape of the mesh by displacing each vertex with a displacement vector $\hat{v_{i}}\in\mathbb{R}^{3}$, a learnable parameter, to produce an adversarial mesh $S^{adv}$ with vertices $v_{i}$, as in Eqn. (1). We use a $4\times 4$ transformation matrix $T$ to put the mesh on top of a car and give it the same orientation: $v_{i}=T\cdot(v_{i}^{0}+\hat{v_{i}})$ (1) For the texture of the adversarial mesh, we give each vertex a corresponding RGB color $c_{i}\in\mathbb{R}^{3}$, which is a learnable parameter. 
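The deformation and placement step of Eqn. (1) can be sketched in NumPy as follows. This is a minimal illustration (function name and homogeneous-coordinate convention are ours); the actual attack presumably operates on batched, differentiable tensors so that gradients can flow into the displacements.

```python
import numpy as np

def deform_and_place(v0, v_hat, T):
    # v0:    (V, 3) initial mesh vertices v_i^0
    # v_hat: (V, 3) learnable per-vertex displacements
    # T:     (4, 4) transform placing the mesh on the car roof with
    #        matching orientation. Eqn. (1): v_i = T (v_i^0 + v_hat_i).
    v = v0 + v_hat
    v_h = np.hstack([v, np.ones((v.shape[0], 1))])  # homogeneous coordinates
    return (v_h @ T.T)[:, :3]
```

With `T` the identity and zero displacements the vertices are unchanged, while a translation in the last column of `T` shifts the whole deformed mesh onto the roof.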
We then render the mesh to 2D and add it to the original RGB scene using a given projection matrix. To produce the texture of the mesh, each face is colored by interpolating the colors of the vertices surrounding it. To ensure our object is realistic, we put constraints on the size and smoothness of the mesh geometry. Also, since we interpolate between vertex colors to produce the adversarial texture, the result is a realistic smooth texture without abrupt changes in color (a typical observation in RGB adversarial attacks using pixel-wise perturbations).

### 3.2 Differentiable Rendering

To realistically render a mesh to point cloud, we need to find which LiDAR rays would intersect with the mesh if it were placed in the original scene. We simulate the LiDAR used in capturing the dataset’s point cloud by producing rays at the same angular frequencies and incorporating the small amount of noise that is present in this specific LiDAR. We then calculate the intersection points between these rays and our mesh’s triangles in the 3D scene using the Möller–Trumbore intersection algorithm [16]. Finally, for each ray with intersection points we take the nearest point, and add all the resulting points to the LiDAR point cloud scene. After placing our mesh on top of all cars in a 3D scene, we render the meshes to 2D using a given projection matrix. To train our adversarial texture, the rendering process needs to be differentiable. We therefore use the fast, differentiable rendering tools developed in [17] to render our adversarial mesh from the 3D scene to RGB images, as shown in Fig. 2.

### 3.3 Victim Model

As mentioned in Section 1, we are attacking cascaded models. We chose the Frustum-PointNet (F-PN) model [7] for several reasons. First, it is a pioneering work in the area of multi-modal DNNs for AVs, and many works were developed based on its methodology and architecture [8, 9]. 
Moreover, it has a PointNet backbone which is commonly used in many single and multi-modal 3D detection DNNs, and so this attack could be a threat to other 3D detection DNNs that rely on PointNet [18]. Finally, it showed competitive results on the KITTI benchmark. As shown in Fig. 2, F-PN first takes an RGB image through a 2D detection DNN, which proposes 2D bounding boxes. These bounding boxes are then projected to 3D space, thus producing frustum-shaped 3D search spaces that surround each object. Points within each frustum are extracted and sent through two PointNet based DNNs for 3D instance segmentation, and 3D box estimation. F-PN directly outputs one 3D bounding box estimation from a given point cloud frustum, since the assumption is that there is only a single object in a frustum. For image detection, we use YOLOv3 [19]. It is a standard fast 2D object detection DNN that outputs region proposals, classifications, and confidence scores. The F-PN and YOLOv3 models were pre-trained on the KITTI dataset. ### 3.4 Objective Functions We tried to attack the point cloud network in several ways and found that the most successful attack is achieved by minimizing the accuracy of the segmentation network. Assuming $N$ points in $M$ point clouds, the F-PN segmentation network outputs 2 logits for each point, estimating whether it belongs to an object or not, $logit_{i}\in\mathbb{R}^{2},i\in\\{1,2,...,N\\}$. We softmax these logits and extract the highest probability given to a “car” point, $p_{k}=\max\limits_{j}\\{softmax(logit_{j})\\}$ in a point cloud $k$, where $j$ is a point classified as car. To attack the 3D segmentation network, we train the shape of the mesh to minimize this probability. 
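Concretely, the per-frustum confidence $p_{k}$ described above can be sketched as follows. This is a NumPy illustration; the assumption that column 1 of the logits holds the "car/object" score is ours, as is the function name.

```python
import numpy as np

def car_confidence(logits, car_mask):
    # logits:   (N, 2) per-point segmentation logits from the F-PN
    #           segmentation network (column 1 assumed to be "car").
    # car_mask: (N,) bool flags for points classified as "car".
    # Returns p_k, the highest softmax "car" probability among those points.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(shifted)
    probs = e / e.sum(axis=1, keepdims=True)
    return float(probs[car_mask, 1].max())
```

Driving this value towards zero removes the network's confidence that any point in the frustum belongs to a car, which is what the shape training aims for.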
Finally, following previous work [6], we weight the objective function by the IoU score between the ground truth $y_{gt}$ and predicted $\hat{y}$ bounding boxes:

$\mathcal{L}_{mesh}=-\sum_{k\in M}\log(1-p_{k})\cdot\mathrm{IoU}(y_{gt},\hat{y}).$ (2)

As for the image loss function, we simply minimize the “objectness” score given to the resulting car classifications. We also want to ensure that the geometry of the mesh is smooth and realistic, so we minimize a Laplacian loss [20]: $\mathcal{L}_{lap}=\sum_{i}\left\lVert\delta_{i}\right\rVert^{2}_{2}$, where $\delta_{i}$ is the distance between a vertex $v_{i}$ and the centroid of its neighbors $N(i)$: $\delta_{i}=v_{i}-\frac{1}{|N(i)|}\sum_{j\in N(i)}v_{j}$. The overall point cloud attack objective function is:

$\mathcal{L}^{pc}_{adv}=\mathcal{L}_{mesh}+\lambda\mathcal{L}_{lap}$ (3)

Fig. 3: An example of the RGB + point cloud attack. Green boxes are ground truth, yellow boxes are detections without the adversarial attack, and red boxes are detections after the adversarial attack.

## 4 Experiments

### 4.1 Experiment Setup

We use the KITTI dataset [21], a popular benchmark for 3D detection in autonomous driving. Roughly half of the samples are used for training and the other half for validation. All results reported in the following section are from the validation set. We use ground truth 2D bounding boxes to generate the frustums used in training and evaluating the attack on the point cloud pipeline. We use YOLOv3-generated 2D bounding boxes to measure the effect of an image-only attack and an image + point cloud attack on detection. This is a universal attack, i.e., the shape and texture of the object are trained on the entire training set. We first perturb the geometry of the object to attack the 3D part. We start with a 40 cm radius isotropic sphere mesh with 162 vertices and 320 faces. We apply box constraints to keep the mesh's three dimensions below their initial values.
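The objective in (3) can be sketched as follows. This is a minimal NumPy illustration with hypothetical helper names; in practice the losses would be computed in an autodiff framework (e.g., PyTorch with PyTorch3D, as used in this work) so that gradients can flow back to the mesh vertices.

```python
import numpy as np

def laplacian_loss(vertices, neighbors):
    """L_lap = sum_i ||v_i - centroid of neighbors of v_i||^2.
    `vertices`: (V, 3) array; `neighbors`: list of neighbor-index lists."""
    loss = 0.0
    for i, nbrs in enumerate(neighbors):
        delta = vertices[i] - vertices[list(nbrs)].mean(axis=0)
        loss += float(delta @ delta)
    return loss

def mesh_loss(p, iou):
    """L_mesh (Eq. 2): sum over frustums k of -log(1 - p_k) * IoU_k."""
    p = np.asarray(p, dtype=float)
    iou = np.asarray(iou, dtype=float)
    return float(np.sum(-np.log(1.0 - p) * iou))

def total_pc_loss(p, iou, vertices, neighbors, lam=1.0):
    """Overall point cloud attack objective (Eq. 3)."""
    return mesh_loss(p, iou) + lam * laplacian_loss(vertices, neighbors)
```

Note that $\mathcal{L}_{mesh}$ is small when every $p_k$ is near zero (cars suppressed), while the Laplacian term penalizes vertices that stray from the centroid of their neighbors, keeping the deformed mesh smooth.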
An Adam optimizer is then used to iteratively deform the mesh to minimize (3). Once we have an adversarial shape, we render it to 2D images and train a universal adversarial texture: we initialize each vertex with a color and then use an Adam optimizer to learn per-vertex colors that produce an adversarial texture attacking the RGB pipeline.

### 4.2 Results & Discussion

We measure the success of an attack by the reduction in the average precision (AP) of car detection caused by introducing an adversarial mesh into the scene. Table 1 shows the bird's eye view (BEV) AP results of our trained F-PN after different types of attacks. We show the results for 3 difficulty levels (based on occlusion) with an IoU threshold of $0.7$.

| Attack Type* | Easy | Moderate | Hard |
|---|---|---|---|
| No Attack | 85.66 | 83.62 | 76.23 |
| PC: Adv Shape | 37.36 | 37.79 | 39.10 |
| Img: Adv Texture | 51.85 | 35.98 | 31.49 |
| PC + Img: Adv Object | 27.50 | 22.67 | 19.21 |

*Attack on the point cloud (PC) or image (Img) pipeline.

Table 1: BEV 3D car detection AP results.

It can be challenging to attack the point cloud part of a cascaded model for several reasons. First, these models are built with an implicit bias that an object must reside in a proposed region, so they can be resistant to simple perturbations. Also, the post-proposal point cloud is very sparse and small, leaving little room for an adversarial attack to take place. Nonetheless, our adversarial mesh was able to reduce the localization BEV AP by nearly 45% by targeting only the point cloud pipeline. This can be attributed to cascaded models' limited exposure during training to examples where objects sit on top of cars. The frustum limits the 3D search space by focusing only on the body of a car. This is a weakness of cascaded models, since the limited 3D search space means that the model may not learn at all about the surrounding 3D scene.
This poses a grave danger for autonomous driving, since the dimensions and locations of surrounding vehicles and objects contribute heavily to decision making. As shown in Fig. 3, our adversarial object not only led to some cars going entirely undetected, but also introduced other detection errors.

The RGB pipeline is the bottleneck of detection performance in camera-LiDAR cascaded models. The 2D proposals decide exactly where to search for objects, so if they fail, the entire model fails. Relying on 2D region proposals also defeats one of the main purposes of LiDAR, which is to avoid cases where lighting and occlusion degrade localization and detection. RGB attacks are much simpler and less computationally expensive than point cloud attacks, and they are more easily reproduced in real life, which can make them more dangerous.

Much previous work has shown that data augmentation can make a DNN more robust to adversarial attacks. In training our network, we included the data augmentation step from the original work [7], randomly scaling and shifting the proposed 2D bounding boxes. This gave the network a certain robustness, as shown by the relatively high AP of car detection even after the RGB attack in the Easy case. Nonetheless, as occlusion increased, detection accuracy rapidly decreased. Under the Moderate setting, BEV AP decreased from 83.62% to 22.67%, a nearly 73% reduction.

## 5 Conclusion

We proposed a novel, universal, and physically realistic adversarial attack on a cascaded camera-LiDAR DNN for 3D object detection. We manipulated mesh geometry and texture and used differentiable rendering to study the vulnerability of cascaded DNNs. We found that cascaded models are vulnerable to adversarial attacks and that detection accuracy can decrease by nearly 73% when we attack both camera and LiDAR modalities simultaneously.
While we have applied the proposed attack here to a multi-modal cascaded model, future work can extend it to investigate the vulnerability of other multi-modal DNN models (e.g., fusion models) for different tasks.

## References

* [1] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
* [2] Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter, “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 1528–1540.
* [3] Jiancheng Yang, Qiang Zhang, Rongyao Fang, Bingbing Ni, Jinxian Liu, and Qi Tian, “Adversarial attack and defense on point sets,” arXiv preprint arXiv:1902.10899, 2019.
* [4] Chong Xiang, Charles R. Qi, and Bo Li, “Generating 3d adversarial point clouds,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [5] Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, and Bo Li, “Adversarial objects against lidar-based autonomous driving systems,” 2019.
* [6] James Tu, Mengye Ren, Sivabalan Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, and Raquel Urtasun, “Physically realizable adversarial examples for lidar object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [7] Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J. Guibas, “Frustum pointnets for 3d object detection from rgb-d data,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
* [8] Zhixin Wang and Kui Jia, “Frustum convnet: Sliding frustums to aggregate local point-wise features for amodal 3d object detection,” arXiv preprint arXiv:1903.01864, 2019.
* [9] K. Shin, Y. P. Kwon, and M.
Tomizuka, “Roarnet: A robust 3d object detection based on region approximation refinement,” in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 2510–2515. * [10] Tengteng Huang, Zhe Liu, Xiwu Chen, and Xiang Bai, “Epnet: Enhancing point features with image semantics for 3d object detection,” in European Conference on Computer Vision. Springer, 2020, pp. 35–52. * [11] Vishwanath A Sindagi, Yin Zhou, and Oncel Tuzel, “Mvx-net: Multimodal voxelnet for 3d object detection,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 7276–7282. * [12] Simen Thys, Wiebe Van Ranst, and Toon Goedemé, “Fooling automated surveillance cameras: adversarial patches to attack person detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019, pp. 0–0. * [13] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard, “Universal adversarial perturbations,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1765–1773. * [14] Xiaohui Zeng, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, and Alan L Yuille, “Adversarial attacks beyond the image space,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4302–4311. * [15] Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, and Alec Jacobson, “Beyond pixel norm-balls: Parametric adversaries using an analytically differentiable renderer,” arXiv preprint arXiv:1808.02651, 2018. * [16] Tomas Möller and Ben Trumbore, “Fast, minimum storage ray-triangle intersection,” Journal of graphics tools, vol. 2, no. 1, pp. 21–28, 1997. * [17] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari, “Accelerating 3d deep learning with pytorch3d,” arXiv:2007.08501, 2020. 
* [18] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 652–660. * [19] Joseph Redmon and Ali Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018. * [20] Shichen Liu, Weikai Chen, Tianye Li, and Hao Li, “Soft rasterizer: Differentiable rendering for unsupervised single-view mesh reconstruction,” arXiv preprint arXiv:1901.05567, 2019. * [21] Andreas Geiger, Philip Lenz, and Raquel Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 3354–3361.